
Re: Tree-sitter maturity


From: Lynn Winebarger
Subject: Re: Tree-sitter maturity
Date: Sat, 4 Jan 2025 16:25:51 -0500

On Sat, Jan 4, 2025 at 12:39 PM Daniel Colascione <dancol@dancol.org> wrote:
> Lynn Winebarger <owinebar@gmail.com> writes:
>
> > On Wed, Jan 1, 2025 at 3:23 PM Björn Bidar <bjorn.bidar@thaodan.de> wrote:
> >> Lynn Winebarger <owinebar@gmail.com> writes:
> >> >> Tree sitter, as wonderful as it is, strikes me as a bit of a Rube
> >> >> Goldberg machine architecturally: JS *and* Rust *and* C? Really? :-)
> >> >
> >> > They evidently decided to use JSON and a simple schema to specify the
> >> > concrete grammar, instead of creating a DSL for the purpose.
> >> > JavaScript is just a convenient way of embedding code into JSON, the
> >> > same way Lisp programmers use Lisp to generate S-expressions.  Once
> >> > the JSON has been generated, JavaScript is no longer used.
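
(To make that concrete: here is the same trick in Lisp terms, a
sketch that hand-builds a fragment of tree-sitter's grammar.json
schema from s-expressions, using Emacs 27+'s native JSON support.)

  ;; Sketch: an s-expression standing in for a grammar.js rule set,
  ;; serialized into the grammar.json shapes the grammar compiler
  ;; consumes.
  (json-serialize
   '(:name "mini"
     :rules (:expr (:type "CHOICE"
                    :members [(:type "SYMBOL" :name "number")
                              (:type "STRING" :value "nil")])
             :number (:type "PATTERN" :value "[0-9]+"))))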
> >> >
> >> > The rest of the project is really composed of orthogonal components,
> >> > the GLR grammar compiler (written in Rust) and the run-time GLR
> >> > parsing engine, written in C.  The grammar compiler produces the
> >> > parsing tables in the form of C source code that is compiled together
> >> > with the library for a single library per grammar, but the C library
> >> > does not actually require the parsing tables to be statically known at
> >> > compile-time, at least as of the last time I looked, unless there is
> >> > some really obscure dependence I missed.  The procedural interface to
> >> > the parser just takes a pointer to the parser table data structure at
> >> > run-time.
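
(You can see that run-time nature from the Emacs side, too:
treesit.el resolves the grammar DLL on demand and only then hands the
parser its table structure.  A minimal sketch against the Emacs 29
treesit.el API, assuming the json grammar library is installed:)

  ;; Nothing about the parse tables is known when Emacs is built;
  ;; this loads libtree-sitter-json at run time and passes its
  ;; TSLanguage (the tables) to a fresh parser.
  (when (and (treesit-available-p)
             (treesit-language-available-p 'json))
    (treesit-node-type
     (treesit-parse-string "{\"a\": [1, 2]}" 'json)))
  ;; => "document", the json grammar's root node type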
> >> >
> >> > Since GLR grammars are basically arbitrary (ambiguous) LR(1) grammars,
> >> > the parser run-time has to implement a fairly sophisticated algorithm
> >> > (graph-structured stacks) to be efficient.  Having implemented an LALR
> >> > parser generator at least 3 times in the last couple of decades (just
> >> > for my own use), generating the parse tables looks like a much simpler
> >> > (and better-understood) problem to solve than the GLR run-time.  More
> >> > importantly, the efficiency of the grammar compiler is not all that
> >> > critical compared to the run-time.
> >> >
> >>
> >> Additional alternatives to Node would already be a good step.
> >> Using WASM as the output format also does not sound bad, assuming
> >> there is some abstraction on the tree-sitter library side.
> >
> > I'm not sure why WASM would be interesting.  AFAICT, it's just another
> > set of bindings to the C library, maybe with the tables compiled into a
> > WASM binary module (or whatever the correct term is - I'm not a
> > WASM expert).  In any case, AFAIK Emacs has no particular capability
> > for using WASM files as dynamic libraries in general.  Maybe if Emacs
> > itself were compiled to WASM, in which case I suppose the function for
> > dynamically loading libraries would implicitly load such modules.
> >
> > OTOH, the generated WASM bindings might provide an example of using
> > the tree-sitter DLL with the in-memory parse table structure not
> > embedded in the tree-sitter DLL.  Is that what you meant?
>
> I think people get too excited about WASM.  It's just a 1) portable, 2)
> sandboxed mechanism for running the same programs you could compile to
> native code.  What's in it for us?  We don't need a security sandbox for
> parsers.  If we want to sandbox, we should do it at a higher level.
> The portability aspect seems like only a minor benefit: sure, it's less
> of a logistical headache to ship one prebuilt binary than to ship N for
> N different architectures, but either way, you're opting into the
> headache of prebuilt binaries.  I'd rather dynamically build from
> source, TBH.
>
> >> > I agree, a generic grammar capturing the structures of most
> >> > programming languages would be useful.  It is definitely possible to
> >> > extract the syntactic/semantic concepts from C++ and Python to create
> >> > such a grammar, if you are willing to allow nested grammars
> >> > appropriately delimited.  For example, a constructor context would
> >> > delimit an expression in a data language, and that data language may
> >> > itself have delimited value contexts where the functional/procedural
> >> > grammar may appear, ad infinitum.  The
> >> > procedural and data grammars are distinct but mutually recursive.
> >> > That would be if the form appeared in an rvalue-context.  For l-value
> >> > expressions, the same constructor delimiting syntax can become a
> >> > binding form, at least, with subexpressions of binding forms also
> >> > being binding forms.  As long as the scanner is dynamically set
> >> > according to the grammar context (and recognizes/signals the closing
> >> > delimiter), the grammar can be made non-ambiguous because a given
> >> > character will produce context-appropriate terminal symbols.
> >>
> >> What kind of scanner are you referring to? Something that works like a
> >> binding generator but for AST?
> >
> > A few years ago, I wanted a template system for this terrible
> > proprietary language I was working with, so I wrote this grammar that
> > could encompass that language (which, AFAICT, was only defined by
> > company programmers hacking additional patterns directly into their
> > hand-written parser, for which I reverse-engineered a LALR(1)
> > grammar), a shell-type interpolation sublanguage, and other languages
> > that stuck to the syntactic constructs allowed by Python and C++.  It
> > was a bear to work out, and I ended up throwing it away, anyway.  But
> > the point is, at the start of an interpolation context, the parser
> > would switch scanner and parser tables to the language assigned to the
> > scope of that interpolation context (associated with a particular
> > terminal introducing that context in the "current" parser table).  So
> > while parsing language A, "${" might introduce an interpolation
> > context for language B, "$!{" for language C, "$[" for language D,
> > etc.  As long as the new scanner or parser could discriminate the
> > closing terminal as ending the sublanguage program and returning to
> > the language A context, it should work.
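
(treesit.el in Emacs 29 already has a primitive pointing in this
direction: a second parser can be confined to the delimited regions
of a host buffer.  A sketch, where find-interpolation-ranges is a
hypothetical function returning the (BEG . END) pairs of the
islands:)

  ;; Sketch: host language A with "${...}" islands of language B,
  ;; parsed by two parsers over the same buffer.
  (let ((host  (treesit-parser-create 'bash))
        (guest (treesit-parser-create 'python)))
    ;; Confine the embedded parser to the interpolation regions;
    ;; the rest of the buffer is invisible to it.
    (treesit-parser-set-included-ranges
     guest (find-interpolation-ranges))   ; hypothetical helper
    (list (treesit-parser-root-node host)
          (treesit-parser-root-node guest)))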
> >
> > Anyway, for that purpose, I wanted a grammar that would be flexible
> > enough that I could just switch the bindings for the actions and
> > mapping of terminals, not change the whole grammar, so I would only
> > need to do the grammar analysis once.  That being said, I never
> > actually showed it could be done with multiple real terminals for a
> > single meta-terminal.  That is, in the previous paragraph there might
> > have been a "meta-terminal" "START_INTERPOLATION_CONTEXT" that would
> > expand to 3 concrete terminals (in the grammar for language A)
> > "START_INTERPOLATION_B", "START_INTERPOLATION_C",
> > "START_INTERPOLATION_D", so the parser would have to know which of
> > those concrete terminals was being reduced to choose the right action.
> > I've been waiting for the details to rot from my memory so I can start
> > from scratch on a concrete grammar.
>
> ANTLR's lexer modes give you a similarly powerful capability, FWIW.
>
> > Aside from being useful for generic templating purposes, such a
> > generic grammar would be of use for the purpose Daniel described, i.e.
> > a layer of abstraction usable for almost any modern language, even in
> > polyglot texts.
>
> Arbitrary language composition has been the holy grail for a while, yes?

I'm not sure what you mean, but I was just answering Björn's question
about the context.

> GLR grammars are closed under composition too. Making it easier to
> define tree-sitter grammars and lexers that refer to each other would be
> nice.  At this point, though, I think it's more important to finish the
> task of making tree-sitter-based modes as usable and Emacs-y as
> traditional ones than to imagine new meta-parser
> description abstractions.

I'm probably not being explicit enough.  I only brought this up in
response to your comment a few messages ago:

> > > > > Some Emacs modes could ship with .js grammars sourced from upstream
> > > > > editor-neutral projects.  Other modes might just build tree-sitter
> > > > > parse tables in elisp using something vaguely like SMIE syntax.  Both
> > > > > styles of mode would be customizable by end users, and (because, I'm
> > > > > a broken record, vendor vendor vendor) we'd maintain compatibility
> > > > > without mysterious AST-change-related breakages.

The relevant part of what I wrote above is identifying a grammar of
symbols for syntactic-semantic concepts that can stand in relation to
the concrete AST nodes produced by tree-sitter (or whatever) the same
way the symbols (categories) of syntax-tables relate to characters.
So elisp authors could parameterize their code over those abstract
syntactic categories rather than the concrete AST nodes from
tree-sitter.  Isn't that what you were talking about here?

The grammar I wrote was quite large, and could capture things like a
constructor expression being in the middle of two assignments so it
was both a destructuring bind (lvalue) and an rvalue.  Fortunately I
really can't recall the details of the one I derived before, but
overall the categories would encompass:
  - C++ syntactic constructs [ covers a lot ]
  - generators and data comprehensions (Python and its functional
forebears for these)
  - interpolation (shell and query languages)
That covers most of the ground I'm familiar with.  It won't cover
unrestricted TeX, but that language's syntax is dynamic, so what can
be done?
Unlike syntax-table categories, I think each concrete language would
have to index the generic symbols to cover the specific types in that
language, e.g. a QUOTED_LITERAL might have entries for raw strings
and strings with an escape syntax, as well as other types that might
be available.
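
In treesit.el terms, the indexing might look like the sketch below.
All the names are hypothetical; the point is the indirection from
generic category symbols to the node types a grammar actually
produces.

  (require 'treesit)
  (require 'seq)

  ;; Hypothetical per-language index: generic category -> the
  ;; concrete node types this language's grammar emits for it.
  (defvar python-generic-syntax-alist
    '((quoted-literal . ("string" "concatenated_string"))
      (comprehension  . ("list_comprehension" "generator_expression"))
      (interpolation  . ("interpolation"))))

  (defun generic-category-of-node (node alist)
    "Return the generic category symbol for NODE, or nil."
    (let ((type (treesit-node-type node)))
      (car (seq-find (lambda (entry) (member type (cdr entry)))
                     alist))))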

>
> The point I keep trying to make is that you can't safely update a
> foo-ts-mode tree sitter grammar without updating the corresponding
> foo-ts-mode Lisp.  They're tightly coupled.  They're not separate
> programs.  Same goes for nvim or whatever using TS grammars.
> Even distribution packagers understand the futility of consolidating
> dependencies with unstable interfaces.

Currently that is the case.  Good luck negotiating that.  I already
stated my preference, but it requires developing a binary data
descriptor type for dealing with arbitrary in-memory data structures.
Maybe now that pure space is close to removal, I'll take a stab at it.
It's the kind of thing that should really be used in implementing
extensible redumping for pdumper.  And redumping was never going to
get accepted with pure-space involved, AFAICT.
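
To fix ideas, such a descriptor might be nothing more than a
Lisp-visible layout specification the dumper can walk.  This is
purely hypothetical; no such facility exists today:

  ;; Hypothetical: a layout description that would let a redumping
  ;; pdumper traverse and relocate an in-memory parse-table blob
  ;; without compiled-in knowledge of its C type.
  (defconst treesit--language-layout
    '((version      uint32)
      (symbol-count uint32)
      (parse-table  (pointer (array uint16 (field symbol-count))))
      (symbol-names (pointer (array c-string (field symbol-count)))))
    "Sketch of a binary data descriptor for one grammar's tables.")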

Lynn


