From: Mathias Zenger
Subject: AW: [lwip-users] LWIP size
Date: Thu, 1 Oct 2009 07:09:31 +0200
-----Original Message-----
From: address@hidden [mailto:address@hidden] On Behalf Of Dany Thiffeault
Sent: Wednesday, September 30, 2009 17:42
To: Mailing list for lwIP users
Subject: Re: [lwip-users] LWIP size

Thanks Bill, good idea.
I'm still trying to figure out how to generate the MAP file with AVRStudio 2.1...
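For what it's worth, if AVRStudio 2.1 drives the GNU avr32-gcc toolchain underneath (an assumption here), getting a map file usually only takes an extra switch on the final link step, e.g. -Wl,-Map=yourapp.map (optionally plus -Wl,--cref for a symbol cross-reference); the file name is just an example. The resulting map lists every section and symbol with its size, which is exactly what the suggestion below relies on.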
On Tue, Sep 29, 2009 at 5:30 PM, Bill Auerbach <address@hidden> wrote:
Check your MAP file and find out where RAM has been allocated. It’s easy to allocate too many PBUF_POOLs and run out of memory that way.
Bill
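To put a rough number on that: with the default pool-based pbufs, PBUF_POOL is a statically allocated array, so its RAM cost is essentially the product of two lwipopts.h settings. A back-of-the-envelope sketch with example values only (a full-Ethernet-frame buffer size is assumed; PBUF_POOL_RAM is just an illustrative helper macro, not an lwIP option):

    /* Example values only -- not the settings of this particular port. */
    #define PBUF_POOL_SIZE     16      /* number of pool pbufs              */
    #define PBUF_POOL_BUFSIZE  1536    /* bytes per pool pbuf (full frame)  */

    /* Approximate RAM consumed by the pool, ignoring the small per-pbuf
     * header: 16 * 1536 = 24576 bytes -- over a third of 64 kbytes SRAM.  */
    #define PBUF_POOL_RAM      (PBUF_POOL_SIZE * PBUF_POOL_BUFSIZE)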
From: lwip-users-bounces+bauerbach=arrayonline.com@nongnu.org [mailto:lwip-users-bounces+bauerbach=arrayonline.com@nongnu.org] On Behalf Of Dany Thiffeault
Sent: Tuesday, September 29, 2009 3:56 PM
To: Mailing list for lwIP users
Subject: [lwip-users] LWIP size
Hi,
I would like to know the expected (approximate) size of the lwIP stack. I am using the sequential configuration. My problem is that on my AVR32 I only have 64 kbytes of SRAM, and for some reason it is full. I'm trying to run my application using FreeRTOS and lwIP. When I create my tasks, the last one always fails on the malloc.
So I'm assuming that FreeRTOS and lwIP take a significant amount of SRAM on my AVR32, because my tasks only have the following stack sizes (listed in order of creation):
1- lwip Main task: 512
2- Startup task (my own): 512
3- FreeRTOS Scheduler: 256
4- Ethernet task (my own): 512
5- Ethif (lwip): 256
6- Diags task (my own): 512
The last one is the one that fails. So, out of 64 kbytes, my tasks take much less memory than that. In another project that did not use lwIP, I was able to create three tasks of 1024, 4096 and 4096 in size. So I presume lwIP takes a significant amount of space.
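One thing worth double-checking before blaming lwIP alone (a general FreeRTOS point, sketched here with the numbers from the list above; the function name is illustrative): the stack depths passed to xTaskCreate() are counted in words, not bytes, and every stack plus its TCB is carved out of the single FreeRTOS heap, whose size is fixed by configTOTAL_HEAP_SIZE in FreeRTOSConfig.h. On a 32-bit AVR32 that heap in turn has to fit into whatever SRAM is left after lwIP's statically allocated pools and the rest of .data/.bss.

    #include <stddef.h>
    #include "FreeRTOS.h"

    /* Stack depths handed to xTaskCreate(), copied from the list above.
     * They are in words (portSTACK_TYPE), so 512 means 2048 bytes here. */
    static const size_t stack_words[] = { 512, 512, 256, 512, 256, 512 };

    /* Approximate heap bytes the task stacks alone take out of
     * configTOTAL_HEAP_SIZE (TCBs, queues and mailboxes come on top). */
    size_t approx_stack_bytes(void)
    {
        size_t total = 0;
        for (size_t i = 0; i < sizeof(stack_words) / sizeof(stack_words[0]); i++) {
            total += stack_words[i];
        }
        return total * sizeof(portSTACK_TYPE);   /* 2560 words -> ~10 kbytes */
    }

If that total plus everything else pvPortMalloc() hands out comes close to configTOTAL_HEAP_SIZE, the last xTaskCreate() fails exactly as described even though the raw stack numbers look small; if the heap implementation in use provides it, xPortGetFreeHeapSize() can be printed after each creation to watch the heap drain.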
If lwIP is indeed the culprit, how could I tweak the config to reduce the size it takes?
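Most of lwIP's RAM footprint is fixed at compile time in lwipopts.h, so that is where the squeezing happens. A minimal sketch of the options that usually matter on a RAM-starved part follows; every number below is a placeholder to show which knob does what, not a tested recommendation for this board:

    /* lwipopts.h excerpt -- placeholder values, tune and test per application. */

    /* Heap used by mem_malloc() for PBUF_RAM pbufs and the like. */
    #define MEM_SIZE                 (4 * 1024)

    /* Pool pbufs used for incoming packets: count times size is the cost. */
    #define PBUF_POOL_SIZE           6
    #define PBUF_POOL_BUFSIZE        512

    /* Smaller TCP segments and windows shrink the TCP buffers a lot. */
    #define TCP_MSS                  536
    #define TCP_SND_BUF              (2 * TCP_MSS)
    #define TCP_WND                  (2 * TCP_MSS)

    /* Cap the number of simultaneously active protocol objects. */
    #define MEMP_NUM_TCP_PCB         4
    #define MEMP_NUM_UDP_PCB         4
    #define MEMP_NUM_NETCONN         4
    #define MEMP_NUM_NETBUF          4

    /* Sequential API: the tcpip thread's stack is RAM too. */
    #define TCPIP_THREAD_STACKSIZE   512
    #define DEFAULT_THREAD_STACKSIZE 256

    /* Statistics cost RAM and code; disable them when squeezing. */
    #define LWIP_STATS               0

Because most of these pools are allocated statically, any savings show up directly in the .bss section of the map file, which ties back to the MAP-file suggestion earlier in the thread.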
Best regards,
DownyTif.
_______________________________________________
lwip-users mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/lwip-users