From: Eli Zaretskii
Subject: Re: I created a faster JSON parser
Date: Sat, 09 Mar 2024 08:52:36 +0200

> From: Herman, Géza <geza.herman@gmail.com>
> Cc: Herman Géza <geza.herman@gmail.com>,
>  emacs-devel@gnu.org
> Date: Fri, 08 Mar 2024 21:22:13 +0100
> 
> > Is there a reason for you to want it to be a 64-bit type on a 64-bit
> > machine?  If the only bother is efficiency, then you can use 'int'
> > without fear.  But if a 64-bit machine will need the range of
> > values beyond INT_MAX (does it?), then I suggest using ptrdiff_t.
>
> The only reason is that, if I use a 64-bit number on a 64-bit
> platform, the fast path will be chosen more frequently.  So it makes
> sense to use a register-sized integer here.

Then either ptrdiff_t or EMACS_INT should do what you want.

> Yes, it seems that EMACS_UINT is good for my purpose, thanks for 
> the suggestion.

Are you sure you need the unsigned variety?  If EMACS_INT fits the
bill, then it is a better candidate, since unsigned arithmetic has
its quirks.
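
For illustration, below is the kind of word-at-a-time scan I imagine
the fast path to be (a hypothetical sketch, not Géza's actual code;
uint64_t stands in for EMACS_UINT to keep it self-contained).  Bit
tricks of this sort are the main case where the unsigned variety
really is needed, because they rely on well-defined unsigned
wraparound:

  #include <stdbool.h>
  #include <stdint.h>
  #include <string.h>

  #define ONES  0x0101010101010101ULL
  #define HIGHS 0x8080808080808080ULL

  /* Nonzero if some byte of V is zero.  The subtraction relies on
     unsigned wraparound, which is why a signed word type would not
     do here.  */
  static inline uint64_t
  has_zero_byte (uint64_t v)
  {
    return (v - ONES) & ~v & HIGHS;
  }

  /* Nonzero if some byte of V equals B.  */
  static inline uint64_t
  has_byte (uint64_t v, unsigned char b)
  {
    return has_zero_byte (v ^ (ONES * b));
  }

  /* Nonzero if some byte of V is a control character (below 0x20).  */
  static inline uint64_t
  has_control_byte (uint64_t v)
  {
    return (v - ONES * 0x20) & ~v & HIGHS;
  }

  /* True if the word starting at P contains no quote, backslash or
     control character, so the whole word can be copied verbatim;
     otherwise the byte-by-byte slow path must take over.  */
  static bool
  json_word_is_plain (const unsigned char *p)
  {
    uint64_t w;
    memcpy (&w, p, sizeof w);   /* unaligned-safe load */
    return !(has_byte (w, '"') || has_byte (w, '\\')
             || has_control_byte (w));
  }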

> > The jansson code required encoding/decoding strings to make sure
> > we submit to jansson text that is always valid UTF-8.
> 
> I tried to use the jansson parser with a Unicode 0x333333
> character in a string, and it didn't work: it fails with a
> (json-parse-error "unable to decode byte... message.

Well, I didn't say trying an arbitrary codepoint would demonstrate the
issue.  Some codepoints above 0x10FFFF indeed cannot be passed to
jansson.

It's okay if the initial version of this parser only handles the
Unicode range and errors out otherwise; we could extend it later if
needed.  But the error message should talk specifically about an
invalid character or something similar, not just a generic "parse
error".

> Also, I see that json-parse-string calls some UTF-8 encoding-related
> function before parsing, but json-parse-buffer doesn't (and it
> doesn't do anything encoding-related in the callback, it just
> calls memcpy).

This is a part I was never happy about.  But, as I say above, we can
get to handling these rare cases later.

> So, based on these, is there any benefit in supporting them?

Yes, definitely.  But it isn't urgent.

> Out of curiosity, what are these extra characters used for?

Raw bytes and characters from charsets that are not (yet) unified with
Unicode.

> What is the purpose of the odd special 2-byte encoding of 8-bit
> characters (I mean where the 1st byte is C0/C1)?  Why not just use
> the regular UTF-8 encoding for these values?

I think it's for efficiency: the 2-byte encoding takes much less space
than the 6-byte encoding (using a superset of UTF-8) would take.
Imagine the case where a large byte stream is inserted into a
multibyte buffer before it is decoded, something that happens a lot
when visiting non-ASCII files or reading from a network sub-process.
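
From memory (so treat this as an illustration of the idea rather than
a quotation of character.h), a raw byte is stored like this:

  #include <stdio.h>

  /* Raw byte B (0x80..0xFF) is stored in a multibyte buffer as two
     bytes whose lead byte is C0 or C1 -- an "overlong" sequence that
     valid UTF-8 never produces, so it cannot be mistaken for a real
     character.  */
  static void
  encode_raw_byte (unsigned char b, unsigned char out[2])
  {
    out[0] = 0xC0 | ((b >> 6) & 1);
    out[1] = 0x80 | (b & 0x3F);
  }

  int
  main (void)
  {
    unsigned char s[2];
    encode_raw_byte (0xA0, s);
    printf ("raw byte 0xA0 -> %02X %02X\n", s[0], s[1]);  /* C0 A0 */
    return 0;
  }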

The regular UTF-8 encoding cannot be used for the raw bytes, because
then we would be unable to distinguish between them and the Unicode
codepoints of the same value.  For example, a raw byte whose value is
160 decimal (A0 hex) would be indistinguishable from the U+00A0
NO-BREAK SPACE character.  This is why the "codepoint" corresponding
to raw byte 160 is 0x3FFFA0; see BYTE8_TO_CHAR.
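
Spelled out (this is just the arithmetic implied by that example; the
authoritative definition is BYTE8_TO_CHAR in src/character.h):

  /* My reconstruction of the mapping: raw byte B in 0x80..0xFF gets
     the out-of-Unicode "codepoint" 0x3FFF00 + B, so byte 160 (0xA0)
     becomes 0x3FFFA0 and can never collide with U+00A0.  */
  #define RAW_BYTE_TO_CHAR(b) ((b) + 0x3FFF00)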

Once again, we can extend the parser for codepoints outside of the
Unicode range later.  For now, it's okay to reject them with a
suitable error.


