Re: Yet another browser extension for capturing notes - LinkRemark


From: Ihor Radchenko
Subject: Re: Yet another browser extension for capturing notes - LinkRemark
Date: Sat, 26 Dec 2020 21:49:41 +0800

Maxim Nikulin <manikulin@gmail.com> writes:

> I just inspected pages on several sites using developer tools and added
> code that handles noticed elements.

I see. I basically did the same, apart from adding some minimal support
for OpenGraph (though I stopped when I saw that even YouTube does not
follow the standard beyond the most basic fields).

> The only force to add some formal data is "share" buttons. Maybe some
> guides for web developers from social networks or search engines could
> be more useful than formal references, but I have not had a closer
> look.

That is also consistent with what I saw. <meta ... twitter:...> fields
seem to be very common.
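
For what it is worth, collecting those fields on the Elisp side is not
much code either. A rough, untested sketch using only the built-in
libxml support and dom.el (the function name is made up, it is not from
either of our packages):

(require 'dom)

(defun my/collect-meta-fields (html)
  "Return an alist of og:* and twitter:* <meta> fields found in HTML."
  (let ((dom (with-temp-buffer
               (insert html)
               ;; Needs an Emacs built with libxml2.
               (libxml-parse-html-region (point-min) (point-max))))
        (result nil))
    (dolist (meta (dom-by-tag dom 'meta))
      (let ((prop (or (dom-attr meta 'property) (dom-attr meta 'name)))
            (content (dom-attr meta 'content)))
        (when (and prop content
                   (string-match-p "\\`\\(og\\|twitter\\):" prop))
          (push (cons prop content) result))))
    (nreverse result)))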

>> Also, org-capture-ref does not really force the user to put BiBTeX into
>> the capture. Individual metadata fields are available using
>> org-capture-ref-get-bibtex-field (which extracts data from internal
>> alist structure). It's just that I mostly had BiBTeX in mind (with
>> distant goal of supporting export to LaTeX) for my use-cases.
>
> I do not have a clear vision of how to use the collected data for queries.
> Certainly I want a more human-friendly representation than BibTeX
> entries (maybe in addition to machine-parsable data) adjacent to my notes.

So far, I found author, website name, publication year, title, and
resource type useful. My standard capture template for links is:

* <Author> [<Website>] (<Year>) <Title>

Example:

* dash-docs-el [Github] Dash-Docs-El Helm-Dash: Browse Dash Docsets Inside Emacs

Such headlines can be easily searched later, especially when I also add
some #keywords manually.
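
Building such a headline from already extracted metadata is only a few
lines of Elisp. A sketch (the alist keys and the function name are
invented for illustration; this is not the actual org-capture-ref API):

(defun my/format-capture-headline (meta)
  "Build \"<Author> [<Website>] (<Year>) <Title>\" from the alist META."
  (let ((author  (cdr (assq 'author meta)))
        (website (cdr (assq 'website meta)))
        (year    (cdr (assq 'year meta)))
        (title   (cdr (assq 'title meta))))
    ;; Missing fields are simply dropped.
    (mapconcat #'identity
               (delq nil
                     (list author
                           (and website (format "[%s]" website))
                           (and year (format "(%s)" year))
                           title))
               " ")))

;; (my/format-capture-headline
;;  '((author . "dash-docs-el") (website . "Github")
;;    (title . "Helm-Dash: Browse Dash Docsets Inside Emacs")))
;; => "dash-docs-el [Github] Helm-Dash: Browse Dash Docsets Inside Emacs"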

> Personally, I would prefer to avoid http queries from Emacs. Sometimes
> it is better to have the current DOM state, not the page source; that is
> why I decided to gather data inside the browser, despite security fences
> that are placed quite strangely in some cases.

Completely agree here. That's why I directly reuse the current DOM state
from qutebrowser in my own setup. However, an extension for qutebrowser
was easy for me to write, as it can simply be a bash script. I know
nothing about Firefox/Chrome extensions, and I do not know JavaScript.

On the other hand, having the ability to get HTML is still useful in my
case (an Emacs package) when the capture is not done from a browser. For
example, I often capture links from elfeed, and an HTTP query from Emacs
is useful then.
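
For that case the built-in url library is enough to fetch and parse the
page. A rough sketch (no error handling or encoding detection):

(require 'url)

(defun my/fetch-page-dom (url)
  "Fetch URL synchronously and return its parsed DOM, or nil on failure."
  (let ((buf (url-retrieve-synchronously url t t 10)))
    (when buf
      (with-current-buffer buf
        (unwind-protect
            (progn
              (goto-char (point-min))
              ;; Skip the HTTP response headers.
              (when (re-search-forward "\n\n" nil t)
                (libxml-parse-html-region (point) (point-max))))
          (kill-buffer buf))))))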

> From my point of view, you should be happy with any of the projects you
> mentioned below. Do all of them have problems that are critical for you?

They are all JavaScript, except one (unicontent), which can easily be
replaced with the built-in Elisp libraries (dom.el).

>> Finally, would you be interested to join efforts on metadata parsing?
>
> Could you please share a bit more detail about your ideas?

> Technically it should be possible to push e.g. the raw
> document.head.innerHtml to any external metadata parser using native
> messaging (to deal with sites requiring authorization). However, it
> could raise an alarm during review before publication of the extension
> in the browser catalogues.

That's unfortunate. Pushing raw html/dom is what I had in mind when
talking about joining efforts.

Another idea would be to provide a callback from Elisp to the browser (I
am not sure whether that is possible). org-capture-ref has a mechanism to
check whether the link was captured in the past. If the link is already
captured, information about the link location and todo state can be
messaged back to the browser.

Example message (only qutebrowser is supported now):

Bookmark not saved!
Already captured into org-capture-ref:
TODO maxnikulin [Github] linkremark: LinkRemark - page or link notes with context
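
On the Elisp side the check does not have to be anything fancy. The real
org-capture-ref logic is more elaborate, but it roughly boils down to
something like this (function name made up):

(require 'org)

(defun my/find-captured-link (url)
  "Return \"TODO-STATE HEADLINE\" of the entry containing URL, or nil."
  (catch 'found
    (dolist (file (org-agenda-files))
      (with-current-buffer (find-file-noselect file)
        (org-with-wide-buffer
         (goto-char (point-min))
         (when (and (search-forward url nil t)
                    (not (org-before-first-heading-p)))
           (throw 'found
                  (format "%s %s"
                          (or (org-get-todo-state) "")
                          (org-get-heading t t t t)))))))
    nil))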

> There is some room for improvement, but I do not think that the quality
> of metadata for ordinary sites could be dramatically better. The case
> that is not handled at all is scientific publications; unfortunately,
> I currently have quite little interest in it. Definitely the results
> should be stored in some structured format such as BibTeX. I have seen
> huge <head> elements describing even all the references. Certainly such
> lists are not for general-purpose notes (at least without an explicit
> request from the user); they should be handled by some bibliography
> software to display citation graphs in the local library. On the other
> hand, it is not a problem to feed such data to some tool using the
> native messaging protocol. I have no idea if various publishers provide
> such data in a uniform way; I just hope that pressure from citation
> indices and bibliography management software has a positive influence
> on standardization.

I think https://github.com/microlinkhq/metascraper#core-rules can be a
source of ideas. It has generic parsing rules in addition to
site-specific ones.

For scientific publications, the key point is usually getting the
DOI/ISBN. Then most of the metadata can be obtained using the standard
API of doi.org or various ISBN databases. In addition, reference data is
generally available from OpenCitations.net (they also have all kinds of
web APIs).
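
For example, doi.org supports content negotiation, so getting BibTeX for
a known DOI is a single request. A sketch (no error handling; the Accept
header is the one documented for DOI content negotiation):

(require 'url)

(defun my/doi-to-bibtex (doi)
  "Fetch a BibTeX entry for DOI via doi.org content negotiation."
  (let ((url-request-extra-headers '(("Accept" . "application/x-bibtex"))))
    (with-current-buffer
        (url-retrieve-synchronously (concat "https://doi.org/" doi) t t 10)
      (goto-char (point-min))
      (when (re-search-forward "\n\n" nil t)
        (buffer-substring-no-properties (point) (point-max))))))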

Also, do you pass any of the parsed metadata to org-protocol? If you do,
it would be trivial to get it into capture templates on the Elisp (and
org-capture-ref) side.
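
To be concrete about what I mean: a new-style org-protocol capture call
is just a URL with template, url, title and body query parameters,
percent-encoded, e.g. (made-up example):

org-protocol://capture?template=L&url=https%3A%2F%2Fexample.com&title=Example%20page&body=Some%20selected%20text

Extra metadata could either be folded into body or passed as additional
keys, if org-protocol is taught to accept them on the Elisp side.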

> I am not going to blow up the code with recipes for particular sites.
> However, I realize that some special cases still need to be handled. I
> am not ready to adopt the user script model used by
> Greasemonkey/Violentmonkey/Tampermonkey. I believe it is better to
> create dedicated extension(s) that either add and overwrite existing
> meta elements or allow querying the gathered data through the
> sendMessage WebExtensions interface. By the way, scripts for the
> above-mentioned extensions could be used as well. That should alleviate
> cases where some site with insane metadata is important for a
> particular user.

I see. This is another point where I thought it could be worth
collaborating. The parser rules just need to be written once (probably in
some common format, like JSON) and can then be reused.
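
Just to illustrate what I mean by a common format, a made-up rule shape
(all field names invented here):

{
  "match": "^https://example\\.com/",
  "fields": {
    "title":  { "selector": "meta[property='og:title']", "attr": "content" },
    "author": { "selector": "meta[name='author']",       "attr": "content" }
  }
}

The browser side can evaluate the selectors with document.querySelector;
on the Elisp side they would need a small translation to dom.el lookups,
but the same rule file could serve both.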

> For some reason I did not even try to find existing projects for
> metadata extraction. Maybe I still hope that a fairly simple
> implementation can handle most of the cases.

Actually, simple parsing does a fairly good job on most websites. It's
just that it is not ideal. For example, I tweaked the titles of captured
GitHub issues to include "issue#", which helps to distinguish such pages
from individual repo bookmarks. I believe that such adjustments should
be available to users, which is where the org-capture-ref code started.
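
The tweak itself is tiny, roughly (not the actual code, just the idea):

(defun my/adjust-github-issue-title (url title)
  "Prepend \"issue#N\" to TITLE when URL points to a GitHub issue."
  (if (string-match "github\\.com/[^/]+/[^/]+/issues/\\([0-9]+\\)" url)
      (format "issue#%s %s" (match-string 1 url) title)
    title))

;; (my/adjust-github-issue-title
;;  "https://github.com/org/repo/issues/42" "Some bug report")
;; => "issue#42 Some bug report"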

Best,
Ihor



