From: Mark Cave-Ayland
Subject: Re: [PATCH v3 05/11] esp: introduce esp_fifo_pop_buf() and use it instead of fifo8_pop_buf()
Date: Thu, 1 Apr 2021 11:51:44 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.9.0
On 01/04/2021 10:34, Philippe Mathieu-Daudé wrote:
> On 4/1/21 9:49 AM, Mark Cave-Ayland wrote:
>> The const pointer returned by fifo8_pop_buf() lies directly within
>> the array used to model the FIFO. Building with address sanitisers
>> enabled shows that if the
>
> Typo "sanitizers"
Ha. It's definitely "sanitiser" here in the UK (UK English) as opposed to "sanitizer" (US English). I don't really mind either way, but I can fix this if it needs a v4 following Paolo's comments.
>> caller expects a minimum number of bytes present then if the FIFO is
>> nearly full, the caller may unexpectedly access past the end of the
>> array.
>
> Why isn't it a problem with the other models? Because the pointed
> buffer is consumed directly?
Yes that's correct, which is why Fifo8 currently doesn't support wraparound. I haven't analysed how other devices have used it but I would imagine there would be an ASan hit if it were being misused this way.
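To make the failure mode concrete, here is a minimal sketch of the misuse pattern, assuming the fifo8_pop_buf() signature of the time (it returns a const pointer into the FIFO's backing array and reports the contiguous byte count via the pointer argument); the caller function here is purely hypothetical:

#include "qemu/osdep.h"
#include "qemu/fifo8.h"

/* Hypothetical caller illustrating the out-of-bounds read ASan flags:
 * fifo8_pop_buf() returns a pointer directly into fifo->data[] and sets
 * n to the number of *contiguous* bytes popped, which can be less than
 * len when the read position is near the end of the array. */
static void hypothetical_consume(Fifo8 *fifo, uint8_t *dest, uint32_t len)
{
    uint32_t n;
    const uint8_t *buf = fifo8_pop_buf(fifo, len, &n);

    /* BUG: ignores n; if n < len this reads past the end of fifo->data[] */
    memcpy(dest, buf, len);
}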
>> Introduce esp_fifo_pop_buf() which takes a destination buffer and
>> performs a memcpy() in it to guarantee that the caller cannot
>> overwrite the FIFO array and update all callers to use it. Similarly
>> add underflow protection similar to esp_fifo_push() and
>> esp_fifo_pop() so that instead of triggering an assert() the
>> operation becomes a no-op.
>
> This is OK for your ESP model. Now thinking loudly about the Fifo8
> API, shouldn't this be part of it? Something prototype like:
>
> /**
>  * fifo8_pop_buf:
>  * @do_copy: If %true, also copy data to @bufptr.
>  */
> size_t fifo8_pop_buf(Fifo8 *fifo, void **bufptr, size_t buflen,
>                      bool do_copy);
That could work, and may even allow support for wraparound in future. I suspect things would become clearer after looking at the other Fifo8 users to see if this is worth an API change/alternative API.
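For reference, a minimal sketch of the copy-based helper shape the commit message describes (illustrative only, not the exact patch; the _sketch suffix and the exact clamping choices are mine):

#include "qemu/osdep.h"
#include "qemu/fifo8.h"

/* Clamp the request, pop what the FIFO can supply and memcpy it into a
 * caller-provided buffer, so the caller can never read past the end of
 * the FIFO's backing array.  An empty FIFO becomes a no-op (short read)
 * rather than an assert(). */
static uint32_t esp_fifo_pop_buf_sketch(Fifo8 *fifo, uint8_t *dest,
                                        uint32_t maxlen)
{
    const uint8_t *buf;
    uint32_t n;

    if (maxlen == 0 || fifo8_is_empty(fifo)) {
        return 0;
    }

    maxlen = MIN(maxlen, fifo8_num_used(fifo));
    buf = fifo8_pop_buf(fifo, maxlen, &n);
    memcpy(dest, buf, n);   /* only n bytes are guaranteed contiguous */

    return n;
}

Callers then read at most the returned count from their own buffer, so a nearly-full (or wrapped) FIFO can no longer lead to an out-of-bounds access.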
ATB, Mark.