
Re: [GNUnet-developers] DBus in GNUnet?


From: Christian Grothoff
Subject: Re: [GNUnet-developers] DBus in GNUnet?
Date: Thu, 07 Nov 2013 11:53:16 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20131005 Icedove/17.0.9

On 11/07/2013 10:22 AM, Andrew Cann wrote:
>> My main issue with it at the time was its awkward (and not very stable) C
>> APIs, and the fact that I did not want to drag in GLib as a requirement for
>> GNUnet (which is required for the high-level C binding).
> 
> The page for the dbus C API says "If you use this low-level API directly,
> you're signing up for some pain". I'm not sure why this is; I've worked with
> the low-level API before (but never the GLib API) and not had any problems.
> The C API is also fairly stable nowadays.

Yes, my argument was about _historical_ reasons; I suspect the current
state is better.

>> DBus also raises questions of security (see #2887), as it is not clear to me
>> how file access controls can be mapped nicely to DBus (but I've simply not
>> looked into this).
> 
> User/group level access controls can be set with dbus.

See my comment on IRC.  User/Group ACL != User/Group FS-style, as I had
to learn with #2887.

>> Also, using DBus would not simplify as much as you suggest; parsing the
>> configuration is important, as it is the only way we can run multiple peers
>> on one machine for testing. Also, as there is more in the configuration than
>> just the connection information, you'd hardly get around parsing it in the
>> long term anyway.
> 
> This is true, any language bindings you write would still have to have the
> ability to parse the config file. My point was though that for simply
> interacting with the services you wouldn't even have to write language
> bindings. A python script could, for example, just `import dbus` then start
> talking directly to the services.

Well, I'd not want to encourage that kind of behavior anyway ;-).
But still, simpler client code is always good.
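
For concreteness, I take it you mean something like this with dbus-python
(bus name, object path, interface and method are all hypothetical here,
since no GNUnet service exports a DBus API today):

import dbus

bus = dbus.SessionBus()
# Hypothetical names, purely to illustrate what a binding-free client
# could look like if the DHT service were ever exposed over DBus.
dht = bus.get_object("org.gnunet.DHT", "/org/gnunet/DHT")
dht_iface = dbus.Interface(dht, dbus_interface="org.gnunet.DHT")
print(dht_iface.Get("test-key"))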

>> For example, do you know if there is a performance impact (latency,
>> throughput or number of concurrent connections that can be open system-wide)
>> when using DBus vs. UNIX domain sockets?
> 
> DBus uses UNIX domain sockets internally. Any data that gets sent along dbus
> passes through two sockets: one on the way to the daemon, then another back
> out to the client. So there is a slight performance impact.

Really? From the docs I had the impression that DBus can also do direct
client-to-client communication (they write: "Also, the message bus is
built on top of a general one-to-one message passing framework, which can
be used by any two apps to communicate directly (without going through
the message bus daemon).").  So would this feature then simply not be
used, for some reason?
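
For reference, what I mean is the kind of direct connection dbus-python
exposes as dbus.connection.Connection; roughly like this (the address is
hypothetical, the point is only that no bus daemon sits in the middle):

import dbus
import dbus.connection

# Hypothetical listening address for a service; on such a peer-to-peer
# connection, messages do not pass through the bus daemon at all.
conn = dbus.connection.Connection("unix:path=/tmp/gnunet-dht-dbus.sock")
dht = conn.get_object(object_path="/org/gnunet/DHT")
dht_iface = dbus.Interface(dht, dbus_interface="org.gnunet.DHT")

If the bindings and services only ever go through the bus daemon, then
the two-socket hop you describe would indeed always apply.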

> Other than that, it's a binary protocol so there's almost no impact on
> throughput or extra time to construct and parse messages.

Right, I'm more worried about the time to establish the connection.
When we run 4,000 peers with 60,000 processes on one GNU/Linux box,
every system call and every buffer counts.  For example, we used
to start with 64k IPC buffers (so we would never have to grow them).
But that was causing massive memory overheads (128k for one process's
IPC translates to 1 MB for a GNUnet peer with just 10 processes, which
in turn translates to 4-8 GB for an experiment; that is WAY too much given
that we run 4000 peers in 12 GB RAM total today).  So memory overheads,
additional system calls and limits to the number of system-wide parallel
connections are my main concerns.
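
Just to spell out the arithmetic behind those numbers (rough estimates,
not measurements, assuming the 128k is simply a 64k buffer in each
direction and roughly 10 processes per peer):

# rough estimate, not a measurement
buffer_per_process = 2 * 64 * 1024      # 64k each way -> 128k per process
per_peer = 10 * buffer_per_process      # ~1.25 MB per peer
experiment = 4000 * per_peer            # ~4.9 GB, i.e. in the 4-8 GB range
print(experiment / 2.0**30, "GiB")

So whatever per-connection buffering DBus and its daemon keep around gets
multiplied by those same factors.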

> Also, on linux at least, there are plans to move dbus into the kernel so
> performance shouldn't be a problem after that. On Windows there is currently
> a bug open (https://bugs.freedesktop.org/show_bug.cgi?id=71297) because dbus
> can only support up to 64 simultaneous connections due to a limitation in
> older versions of Windows (pre-Vista). That bug also suggests that there is
> no such limitation on linux.

Well, it suggests it is not that low.  We've had to bump kernel
parameters like the number of concurrent processes on Debian for
our experiments, and a limit of 32k might not seem worth mentioning
to some users...

>> I don't see an easy path for a gradual transition either...
> 
> You could add dbus support to services individually one-by-one. Take a
> service, give it a dbus interface, then migrate all the other services to
> use its dbus interface instead of its socket interface, then remove the
> socket interface altogether. Then do that for each service. It would
> definitely take a while but it wouldn't involve breaking everything for a
> long period of time.

We'd at least need to have support for both mechanisms in UTIL for a
while.  But I guess that could be OK.

> If I started submitting patches to add dbus support to some service would
> you accept them?

I don't like having two ways to do things in the code, so this would
have to be done with the goal of eventually replacing the UNIX domain
socket IPC with DBus entirely.  Because of that, I'd really like to see

* better understanding of the group access control issue
* performance evaluation (memory overhead, CPU impact);
  e.g. we could compare creating 100,000 IPC channels
  between 4,000 processes and sending 10,000,000 messages
  (of different sizes, say 4-32k bytes, in both directions
  query-response style); a rough sketch of the UNIX-socket
  baseline for such a comparison is below
* some sample modifications for client and service code (e.g. for
  just the DHT service)
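
For the performance point, here is roughly the baseline I have in mind
for the existing UNIX domain socket path: one query-response channel
ping-ponging fixed-size messages between two processes.  It is just a
sketch with scaled-down counts; a real evaluation would use the numbers
above, many channels in parallel, RSS tracking, and an equivalent DBus
harness for comparison.

#!/usr/bin/env python3
import os
import socket
import time

MESSAGES = 100_000   # scale towards 10,000,000 for a real measurement
SIZE = 4096          # vary between 4k and 32k bytes as suggested above

def serve(sock):
    """Child process: echo every query back as a response."""
    buf = bytearray(SIZE)
    for _ in range(MESSAGES):
        view = memoryview(buf)
        got = 0
        while got < SIZE:
            got += sock.recv_into(view[got:])
        sock.sendall(buf)
    os._exit(0)

def main():
    parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    if os.fork() == 0:           # child plays the "service"
        parent.close()
        serve(child)
    child.close()

    payload = b"x" * SIZE
    start = time.monotonic()
    for _ in range(MESSAGES):    # parent plays the "client"
        parent.sendall(payload)
        got = 0
        while got < SIZE:
            got += len(parent.recv(SIZE - got))
    elapsed = time.monotonic() - start
    print("%d round trips of %d bytes in %.2fs (%.0f msg/s)"
          % (MESSAGES, SIZE, elapsed, MESSAGES / elapsed))
    os.wait()

if __name__ == "__main__":
    main()

The DBus variant would push the same traffic through the bus daemon (or a
direct connection, if that is what the bindings end up using), so the
extra copy and the extra descriptors should show up in the comparison.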

If that all looks good, I think you might have a convincing
argument for me (and the other developers!).  But the performance
has to hold up: on the CPU, I might be OK with 2x if the
code is significantly nicer, but on memory, 2x would totally
not be acceptable.

I hope you understand why I'm not just giving you a green light
on this --- this would really be a major change and I'd really
want to be sure that this is the right direction.

Happy hacking!

Christian


