lwip-devel
From: Ambroz Bizjak
Subject: Re: [lwip-devel] [bug #49218] pbuf_clen() overflow as a result of tcp_write concatenation
Date: Sat, 22 Oct 2016 17:29:53 +0200

Hi David,

I'm happy you could base this on my old patch, good work! I hope it
gets accepted.
That said, I myself am no longer very interested in lwIP, because I've
started development of a new TCP/IP stack [1].

Regards,
Ambroz

[1] https://github.com/ambrop72/aprinter/tree/ipstack/aprinter/ipstack

On Sat, Oct 22, 2016 at 5:16 PM, D.C. van Moolenbroek <address@hidden> wrote:
> Alright, thanks - and, done, as follow-up to bug #46290. I for one would be
> very happy if this were merged after all. As for complexity considerations:
> I'd say the patch is best viewed with whitespace changes ignored, because it
> looks worse than it is.
>
> Ambroz, I hope this new shape is acceptable to you, too!
>
> David
>
> On 10/19/2016 20:03, Dirk Ziegelmeier wrote:
>>
>> David,
>>
>> If you make the patch apply to git head, please file it at Savannah.
>> Depending on the complexity, Simon might reconsider applying it.
>>
>> Ciao
>> Dirk
>>
>>
>> On Wed, Oct 19, 2016 at 7:08 PM, D.C. van Moolenbroek <address@hidden> wrote:
>>
>>     Hello Ambroz,
>>
>>     Thank you, much appreciated! Functionally speaking, that is exactly
>>     what I intended to make. The form of the patch could probably use a
>>     bit of cleanup, e.g. it does not apply cleanly to lwip-current and
>>     possibly never has. Also, I would agree with Simon that this is not
>>     actually resolving a bug, and I fear that filing it as a bugfix may
>>     have started off the discussion on the wrong foot. It is still a
>>     very worthy feature in my opinion, and I do hope that a slightly
>>     amended version of the patch, which I'd be happy to file later on,
>>     will be reconsidered for inclusion. However, that is probably best
>>     left to a post-2.0.0 discussion. Either way, you've just saved me
>>     quite some time and effort here, so again, thank you. :)
>>
>>     Regards,
>>     David
>>
>>     On 10/2/2016 10:33, Ambroz Bizjak wrote:
>>
>>         Hi all,
>>
>>         My patch in bug https://savannah.nongnu.org/bugs/?46290 (rejected
>>         by Simon) should solve this issue. It makes tcp_write detect when
>>         newly passed non-copy (ref) data is contiguous with the previous
>>         data; in that case the previous pbuf is extended instead of a new
>>         pbuf being created.
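>>
>>         For illustration, the rough shape of that check in C (the helper
>>         name and the surrounding bookkeeping here are made up, not taken
>>         from the actual patch in bug #46290):
>>
>>         #include "lwip/pbuf.h"
>>
>>         /* Sketch: try to extend the tail ROM/REF pbuf of the last unsent
>>          * segment when the newly written data starts exactly where the
>>          * previously referenced data ends. A real implementation must also
>>          * check that 'last' really is a non-copied (ROM/REF) pbuf, guard
>>          * against u16_t overflow, and update the tot_len of any preceding
>>          * pbufs in the chain as well as the segment length. */
>>         static int
>>         try_extend_last_ref_pbuf(struct pbuf *last, const void *data, u16_t len)
>>         {
>>           if ((const u8_t *)last->payload + last->len == (const u8_t *)data) {
>>             last->len     = (u16_t)(last->len + len);
>>             last->tot_len = (u16_t)(last->tot_len + len);
>>             return 1; /* extended in place, no new pbuf chained */
>>           }
>>           return 0;   /* not contiguous: allocate a new REF pbuf as before */
>>         }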
>>
>>         Best regards,
>>         Ambroz
>>
>>         On Fri, Sep 30, 2016 at 1:28 PM, D.C. van Moolenbroek
>>         <address@hidden> wrote:
>>
>>             To be honest I'm not entirely sure what scenario you are
>>             sketching there, or what you are comparing it to? To make sure
>>             we're talking about the same thing, allow me to elaborate what
>>             I have in mind and then please feel free to point out flaws in
>>             my logic :)
>>
>>             Take the scenario of a 4-byte chunk and then a 256-byte chunk
>>             of data being sent by an application:
>>
>>             application:
>>             - write(fd, "...", 4) // chunk #1
>>
>>             my side:
>>             - allocate a 512-byte RAM pbuf: my_pbuf
>>             - remote-copy-in requested 4 bytes to &my_pbuf->payload[0]
>>             - call tcp_write(&my_pbuf->payload[0], 4, noflags)
>>
>>             lwip side, in tcp_write():
>>             - allocate a REF pbuf
>>             - point REF pbuf to (&my_pbuf->payload[0], 4)
>>             - enqueue REF pbuf as part of unsent segment
>>
>>             application:
>>             - write(fd, "...", 256) // chunk #2
>>
>>             my side:
>>             - remote-copy-in requested 256 bytes to &my_pbuf->payload[4]
>>             - call tcp_write(&my_pbuf->payload[4], 256, noflags)
>>
>>             lwip side, in tcp_write(ptr, len):
>>             - allocate a REF pbuf
>>             - point REF pbuf to (&my_pbuf->payload[4], 256)
>>             - enqueue REF pbuf as part of unsent segment
>>
>>             At this point there are two reference pbufs attached to the
>>             same unsent segment, and those two reference pbufs will end up
>>             at the NIC as well. Now, with the kind of merging I am
>>             proposing, sending the second chunk of data would proceed like
>>             this:
>>
>>             application:
>>             - write(fd, "...", 256) // chunk #2
>>
>>             my side:
>>             - remote-copy-in requested 256 bytes to &my_pbuf->payload[4]
>>             - call tcp_write(&my_pbuf->payload[4], 256, noflags)
>>
>>             lwip side, in tcp_write(ptr, len):
>>             - extend the last REF pbuf on the last unsent segment from
>>               (&my_pbuf->payload[0], 4) to (&my_pbuf->payload[0], 260)
>>
>>             And ultimately only that one reference pbuf will end up at
>>             the NIC.
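>>
>>             To make the caller side of this concrete, a minimal sketch
>>             (remote_copy_in() is just a placeholder for however the data
>>             reaches my_pbuf; error handling and freeing my_pbuf once the
>>             data has been ACKed are omitted; only pbuf_alloc() and
>>             tcp_write() are real lwIP calls):
>>
>>             #include "lwip/tcp.h"
>>
>>             void remote_copy_in(void *dst, u16_t len); /* placeholder, not lwIP */
>>
>>             static void
>>             send_two_chunks(struct tcp_pcb *pcb)
>>             {
>>               struct pbuf *my_pbuf = pbuf_alloc(PBUF_RAW, 512, PBUF_RAM);
>>               u8_t *base = (u8_t *)my_pbuf->payload;
>>
>>               remote_copy_in(&base[0], 4);       /* chunk #1 */
>>               tcp_write(pcb, &base[0], 4, 0);    /* lwIP queues a 4-byte REF pbuf */
>>
>>               remote_copy_in(&base[4], 256);     /* chunk #2, contiguous in memory */
>>               tcp_write(pcb, &base[4], 256, 0);  /* with the proposed merging, that
>>                                                     REF pbuf simply grows to 260 bytes */
>>             }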
>>
>>             If I were to use WRITE_COPY, I would first have to do a
>>             remote-copy-in of the data to.. somewhere temporary, after
>>             which I would hand the data to tcp_write(), which would
>>             allocate an oversized RAM pbuf once and fill that with the two
>>             chunks of data. Without WRITE_COPY, I have thus saved copying
>>             260 bytes at the cost of having lwIP allocate a single REF
>>             pbuf. Given that my expectation is that most writes are
>>             actually large, this would generally be a tradeoff between
>>             copying chunks of 512 bytes of data and allocating REF pbufs.
>>             I'd expect/hope the latter is faster..
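>>
>>             Continuing the sketch above, the WRITE_COPY variant of chunk #2
>>             would look roughly like this (tmp being a hypothetical staging
>>             buffer on my side):
>>
>>             /* TCP_WRITE_FLAG_COPY asks lwIP to copy the data into its own
>>              * RAM pbuf, so tmp can be reused right away - at the price of
>>              * the extra 256-byte copy discussed above. */
>>             remote_copy_in(tmp, 256);
>>             tcp_write(pcb, tmp, 256, TCP_WRITE_FLAG_COPY);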
>>
>>             David
>>
>>             On 9/30/2016 12:54, Dirk Ziegelmeier wrote:
>>
>>
>>                 I still don't understand why you do that. You need to get
>>                 a pbuf_ref from somewhere, initialize it, and append it to
>>                 the pbuf chain; when sending, you iterate through that
>>                 chain to create a consecutive buffer for the MAC (= copy
>>                 the data), then dechain the pbuf and put it back into some
>>                 pbuf pool. Is this really more effective/faster than just
>>                 copying around a few bytes and increasing
>>                 pbuf->len/tot_len?
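>>
>>                 (The "consecutive buffer for the MAC" step is typically a
>>                 pbuf_copy_partial() call in the driver, roughly like the
>>                 line below; tx_buf is a hypothetical linear TX buffer of at
>>                 least p->tot_len bytes:)
>>
>>                 /* Flatten the possibly chained pbuf p into tx_buf before
>>                  * handing the frame to the MAC. */
>>                 u16_t copied = pbuf_copy_partial(p, tx_buf, p->tot_len, 0);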
>>
>>                 Dirk
>>
>>                 On Fri, Sep 30, 2016 at 12:45 PM, David van Moolenbroek
>>                 <address@hidden
>>                 <mailto:address@hidden>
>>                 <mailto:address@hidden
>>
>>                 <mailto:address@hidden>>> wrote:
>>
>>                     Follow-up Comment #6, bug #49218 (project lwip):
>>
>>                     > And that was my point: I don't expect you have a
>>                     > scatter-gather-MAC that supports TX frames with that
>>                     > many chunks. Being like that, you probably end up
>>                     > copying all bytes per CPU anyway. Thus, I don't see
>>                     > the advantage of doing it with ref pbufs. On the
>>                     > contrary, it's probably less performant.
>>
>>                     That is correct, and that is the other side of why I
>>                     think the ideal solution is to merge multiple
>>                     consecutive buffer-local references. In that case we do
>>                     eliminate an extra memory copy for all data in such
>>                     circumstances, while keeping the number of chunks very
>>                     low. As such I'll most probably be adding support for
>>                     that locally in any case. If you think that is (in
>>                     principle) something worthwhile that might also be
>>                     merged into lwIP itself, I'd be happy to make a proper
>>                     patch out of it and submit it..
>>
>>
>>
>>
>
>


