From: Jonathan Larmour
Subject: [lwip-devel] [patch #5960] Enable multithread send/recv operations on same socket on TCP netconns
Date: Thu, 24 May 2007 00:17:51 +0000
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.10) Gecko/20070223 Fedora/1.5.0.10-1.fc5 Firefox/1.5.0.10

Follow-up Comment #22, patch #5960 (project lwip):

Re comment #15: 

In general, condition variables and mutexes are better, as they allow
priority inversion to be prevented (if the OS implements priority
inheritance). That's impossible with semaphores.
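
As an aside, here's roughly what that looks like with plain POSIX threads
(just an illustration, nothing to do with lwIP's sys_arch API): the mutex
is created with the priority-inheritance protocol, which is exactly what a
semaphore can't offer.

/* Illustrative sketch only (POSIX, not lwIP's sys layer): a mutex
 * configured for priority inheritance. A counting semaphore has no
 * notion of an owner, so the kernel has nobody to boost on behalf of
 * a blocked high-priority thread. */
#include <pthread.h>

static pthread_mutex_t conn_mutex;

static int conn_mutex_init(void)
{
  pthread_mutexattr_t attr;
  int ret;

  pthread_mutexattr_init(&attr);
  /* Raise the owner's priority while a higher-priority thread waits. */
  ret = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
  if (ret == 0) {
    ret = pthread_mutex_init(&conn_mutex, &attr);
  }
  pthread_mutexattr_destroy(&attr);
  return ret;
}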

Anyway, what I was thinking about (and to be honest I was only intending
to mention it in passing) is that you could remove much of the mbox-based
message passing with the TCP/IP thread if there were per-connection
mutexes and condition variables. The mutex protects the connection,
including from the TCP/IP thread itself. An extra requirement would be
that the api_lib functions never block while holding the mutex, except on
the condition variable.
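
Something along these lines is what I have in mind (a sketch only, with
made-up names and POSIX types rather than anything in the sys layer):

/* Hypothetical layout: each netconn carries its own mutex and condition
 * variable. The rule above applies: api_lib code may only block while
 * holding the mutex by waiting on the condition variable. */
#include <pthread.h>

struct pbuf;               /* lwIP's buffer type, from lwip/pbuf.h */

struct netconn_sync {
  pthread_mutex_t mutex;   /* protects every field of the connection */
  pthread_cond_t  cond;    /* signalled on any state or data change  */
  struct pbuf    *recv_q;  /* would replace recvmbox (single entry
                              here just to keep the sketch short)    */
  int             err;     /* last error, read/written under mutex   */
};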

Then you do away with much of the message-passing machinery. Threads
operate directly on shared data rather than on messages in a mailbox. To
get data, you lock the mutex and simply take data out of a queue (no need
for an mbox). The TCP/IP thread also takes the mutex when it changes the
connection's state, and the condition variable is used to wake up any
waiting threads. You can't do this sort of thing easily with a semaphore,
because with a semaphore you can't choose which thread gets woken. With a
condition variable you can "broadcast": every waiting thread wakes up and
re-checks the condition it was waiting for. Most of the time there will
only be one waiter anyway, of course.
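
The two sides would then look roughly like this, reusing the hypothetical
struct from above (a real version would obviously keep a proper queue,
handle errors and close, etc.):

/* Application side: take data straight from the shared queue. */
struct pbuf *conn_recv(struct netconn_sync *c)
{
  struct pbuf *p;

  pthread_mutex_lock(&c->mutex);
  /* Loop: a broadcast wakes every waiter, and another thread may have
   * emptied the queue before this one runs again. */
  while (c->recv_q == NULL && c->err == 0) {
    pthread_cond_wait(&c->cond, &c->mutex);
  }
  p = c->recv_q;
  c->recv_q = NULL;
  pthread_mutex_unlock(&c->mutex);
  return p;
}

/* TCP/IP thread side: publish new data, then wake any waiters.
 * Broadcast rather than signal, since we can't pick which thread
 * should run next. */
void conn_deliver(struct netconn_sync *c, struct pbuf *p)
{
  pthread_mutex_lock(&c->mutex);
  c->recv_q = p;
  pthread_cond_broadcast(&c->cond);
  pthread_mutex_unlock(&c->mutex);
}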

But I don't think I can seriously propose this now really - it's pretty much
a complete rewrite. It would be nice though!

> I see the semaphore as a global event -> tcpip_thread has 
> finished processing my request. And for that, you only need one
> semaphore per thread.

That's not how I see it. I see it as "protecting" the connection. More
happens than just the tcpip_apimsg call and its NULL response. Most of
the netconn functions are like this: to protect them from multiple
threads, you need more. For example, netconn_recv also has race
conditions on changes to conn->err, conn->pcb.tcp->state,
conn->recv_avail, and conn->recvmbox (if the connection gets closed). All
of this needs protecting from other threads. It's not just a question of
which thread was intended to get the NULL posted to conn->mbox.
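
For example, the close/error handling in such a scheme could look like
this (again just a sketch with made-up names, not the current code):

/* Close/error path from the TCP/IP thread: the error is recorded and
 * the waiters woken under the same mutex, so a thread blocked in recv
 * wakes up and sees the error rather than racing with the change. */
void conn_set_error(struct netconn_sync *c, int err)
{
  pthread_mutex_lock(&c->mutex);
  c->err = err;                     /* e.g. connection closed or reset */
  pthread_cond_broadcast(&c->cond); /* wake every thread stuck in recv */
  pthread_mutex_unlock(&c->mutex);
}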

    _______________________________________________________

Reply to this item at:

  <http://savannah.nongnu.org/patch/?5960>
