From: Bill Auerbach
Subject: RE: [lwip-users] Recommendations needed for API design
Date: Fri, 19 Feb 2010 09:46:20 -0500
Marcus,

Three of our products use the RAW API with an RTOS, and I did something like you suggest. One task does nothing but process lwIP, polling for incoming packets and checking a timer to call the lwIP timing functions. The task is a loop that runs forever. Any call this task makes into lwIP is enclosed in a semaphore. All callbacks from these calls happen in this lwIP thread, so making calls back into lwIP from a callback is OK at that point, and the semaphore is still held. Any other thread calling into lwIP takes the same lwIP semaphore, so those tasks wait until the callback or lwIP timer functions are complete. This has worked flawlessly and has proven to be very efficient.

Bill
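A minimal sketch of the locking scheme Bill describes, assuming FreeRTOS-style primitives; lwip_mutex, my_netif and my_driver_poll() are illustrative placeholders rather than lwIP or FreeRTOS APIs, and lwip_mutex must be created with xSemaphoreCreateMutex() before the task starts.

#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

#include "lwip/err.h"
#include "lwip/netif.h"
#include "lwip/pbuf.h"
#include "lwip/tcp.h"

/* One global mutex protects every call into lwIP (create it with
 * xSemaphoreCreateMutex() before starting the task). */
static SemaphoreHandle_t lwip_mutex;

extern struct netif my_netif;              /* initialized elsewhere (placeholder) */
extern struct pbuf *my_driver_poll(void);  /* next received frame, or NULL        */

#define LWIP_LOCK()   xSemaphoreTake(lwip_mutex, portMAX_DELAY)
#define LWIP_UNLOCK() xSemaphoreGive(lwip_mutex)

/* Dedicated lwIP task: polls for incoming packets and runs the TCP timer.
 * Callbacks fire inside my_netif.input() / tcp_tmr(), i.e. in this task with
 * the mutex already held, so they may call back into lwIP freely. */
static void lwip_task(void *arg)
{
    TickType_t last_tcp_tmr = xTaskGetTickCount();
    (void)arg;

    for (;;) {
        struct pbuf *p = my_driver_poll();
        if (p != NULL) {
            LWIP_LOCK();
            if (my_netif.input(p, &my_netif) != ERR_OK) {
                pbuf_free(p);              /* input() did not take ownership */
            }
            LWIP_UNLOCK();
        }

        /* TCP timer interval is 250 ms (TCP_TMR_INTERVAL). */
        if ((TickType_t)(xTaskGetTickCount() - last_tcp_tmr) >= pdMS_TO_TICKS(250)) {
            last_tcp_tmr = xTaskGetTickCount();
            LWIP_LOCK();
            tcp_tmr();
            LWIP_UNLOCK();
        }

        vTaskDelay(1);                     /* yield; tune for your system */
    }
}

/* Every other task wraps its lwIP calls in the same mutex, e.g.:
 *     LWIP_LOCK();
 *     err = tcp_write(pcb, data, len, TCP_WRITE_FLAG_COPY);
 *     LWIP_UNLOCK();
 */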
From: address@hidden [mailto:address@hidden] On Behalf Of Marcus Bäckman

Hello,

I am currently porting lwIP to our PPC hardware and have so far successfully been using the raw TCP API. The requirement for the application interface is a socket-like interface without the blocking functionality, and I have some questions regarding its design.

The stack will be running in a multithreaded environment with certain restrictions, for example: only one thread at a time will use a given socket. I would prefer not to have any dependency on the operating system; the approach is to place further restrictions on the application to guarantee safe multithreaded use.

Here are my thoughts on the general approach:
- Each socket has a dedicated area for buffering incoming pbufs.
- Transmission data will be queued and handled in a separate thread, which transmits pending data and processes incoming data.

What are your thoughts on this approach? Would it be easier to just abandon it in favor of the current socket/netconn API?

Is it thread-safe to call pbuf_free() on a pbuf (PBUF_RAW from PBUF_POOL) in one context and pbuf_alloc() (PBUF_RAW) in another?

Regards,
Marcus
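Below is a minimal sketch of the per-socket buffering Marcus outlines: a small RX queue of pbufs filled by the lwIP thread's tcp_recv callback and drained, without blocking, by the single application thread that owns the socket (the transmission queue drained by a worker thread would follow the same pattern). The nb_socket structure, RX_QUEUE_LEN, and the nb_* names are invented for this illustration and are not lwIP APIs; per Bill's reply, any lwIP call the owner thread makes, including pbuf_free() on a dequeued pbuf, should still be wrapped in the shared lwIP semaphore.

#include "lwip/err.h"
#include "lwip/pbuf.h"
#include "lwip/tcp.h"

#define RX_QUEUE_LEN 8   /* illustrative; size the RX area for your traffic */

/* One non-blocking "socket": a raw-API PCB plus a dedicated RX area.  With
 * the stated restriction that only one thread at a time uses a given socket,
 * this is a single-producer (lwIP thread) / single-consumer (owner thread)
 * ring and needs no OS primitives of its own. */
struct nb_socket {
    struct tcp_pcb *pcb;
    struct pbuf    *rx[RX_QUEUE_LEN];
    volatile u8_t   rx_head;   /* written only by the lwIP thread  */
    volatile u8_t   rx_tail;   /* written only by the owner thread */
};

/* tcp_recv callback, registered with tcp_recv(pcb, nb_recv_cb) and
 * tcp_arg(pcb, s); runs in the lwIP thread. */
static err_t nb_recv_cb(void *arg, struct tcp_pcb *pcb, struct pbuf *p, err_t err)
{
    struct nb_socket *s = (struct nb_socket *)arg;
    (void)err;

    if (p == NULL) {                       /* remote end closed the connection */
        tcp_close(pcb);
        return ERR_OK;
    }

    u8_t next = (u8_t)((s->rx_head + 1) % RX_QUEUE_LEN);
    if (next == s->rx_tail) {
        return ERR_MEM;                    /* queue full: lwIP re-offers the pbuf later */
    }
    s->rx[s->rx_head] = p;
    s->rx_head = next;
    tcp_recved(pcb, p->tot_len);           /* reopen the receive window */
    return ERR_OK;
}

/* Non-blocking receive for the owner thread.  Returns a queued pbuf (the
 * caller frees it -- under the lwIP semaphore, per Bill's reply) or NULL. */
struct pbuf *nb_recv(struct nb_socket *s)
{
    if (s->rx_tail == s->rx_head) {
        return NULL;
    }
    struct pbuf *p = s->rx[s->rx_tail];
    s->rx_tail = (u8_t)((s->rx_tail + 1) % RX_QUEUE_LEN);
    return p;
}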