Re: [lwip-users] Setting the IP_DF flag on my UDP socket not working
From: address@hidden
Subject: Re: [lwip-users] Setting the IP_DF flag on my UDP socket not working
Date: Mon, 4 Nov 2019 21:36:00 +0100
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:60.0) Gecko/20100101 Thunderbird/60.9.0
On 29.10.2019 at 01:10, address@hidden wrote:
Hello,
I am working on an application where I am streaming data over UDP using
lwIP's sendto(). I am trying to set the don't-fragment (DF) flag on my
UDP socket as follows:
int enable = 1;
setsockopt(sock, SOL_SOCKET, IP_DF,
           reinterpret_cast<const void *>(&enable),
           static_cast<socklen_t>(sizeof enable));
Where did you get the idea to do it like that?
We try to follow the Open Group standard for socket functions. In other
words, try what you want on Linux/BSD (etc.), and if it works, use that
code on lwIP.
However, I can tell you right now that this is not currently supported.
Adding support should not be that hard, however, and is probably a good
idea.
Care to open a task on our Savannah bug tracker once you have found out
the correct parameters for setsockopt?
Regards,
Simon