lwip-devel

Re: [lwip-devel] mem(p) allocator change request


From: Kaos
Subject: Re: [lwip-devel] mem(p) allocator change request
Date: Tue, 30 Mar 2004 10:10:23 +0200
User-agent: Mozilla Thunderbird 0.5 (Windows/20040207)

address@hidden wrote:


but my question is what you mean with "externally
allocated pools"?


To be more precise, pools defined outside the memp module.

Dynamically or statically?
Statically wouldn't change the current situation much. Dynamically,
on the other hand, would let the application use the space when lwIP
doesn't need it, but then the mem allocator would have to be able to
tell the system when it needs a new pool.


What I would like is to let the stack rely on another (external)
pool allocator that takes care of allocating and freeing memory
in arbitrary sizes (I use a number of pools with different block
sizes to minimize the wasted, unused space per block).


This is a different approach. I would like to use the memp
allocator if there isn't another one available.

True.


I think we should provide access to the external
allocator in the port/architecture file cc.h.

E.g. #define lwip_platform_pool_alloc(), etc.,
and let memp export lwip_pool_alloc() either as the
default implementation or as the platform-specific one.

This would solve the whole issue (that I'm arguing for).


I don't get this part. Why would each module need to provide pools?


I want to remove the pool from the memp module,
and let memp (or another module) only do the (de)allocations.

Sounds reasonable.



Sub-allocate? That feels like reimplementing yet another allocator, doesn't it?


Yes, I might want to be able to chop up things even further,
though it's probably silly.

I don't quite see the point of doing this, but then I'm not fluent in
the TCP/IP implementation either.


Hope this clarifies things,

Yup :o) It keeps the discussion alive, too.

Regards,
Andreas





