Re: [avr-libc-dev] 1.8.1?
From: Georg-Johann Lay
Subject: Re: [avr-libc-dev] 1.8.1?
Date: Fri, 30 Nov 2012 10:04:53 +0100
User-agent: Thunderbird 2.0.0.24 (Windows/20100228)
Weddington, Eric wrote:
> Hi Johann,
> I understand your rant, but from my experience, I can't share in
> it.
> More comments inline below.
>> Sorry to say that, but supporting each device in the compiler /
>> binutils is a bit of --censored--.
> Y'know it's been this way for over 10 years. And other AVR toolchain
> developers before you have given a lot of thought about how to make
> it easier, with no good solution yet. If you have a better solution
> that will be accepted by the GCC project, where the cost of
> implementing the solution doesn't outweigh the benefits, then by all
> means, let us know.
Fact is that the compiler actually does nothing with -mmcu=device except
- mapping it to -mmcu=core
- define a macro
- call the linker with special options or files like crtdevice.o
Conclusion is that you can do all this by hand provided
- you know the right options
- you have crtdevice.o
This boils down to
- reading some documentation, provided it's there and comprehensible
- you know how to build crtdevice.o from a generic gcrt.S
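As a sketch of doing it by hand for a hypothetical new device "atxmegafoo" whose core avrxmega2 is already supported (the device name, macro and file names are made up; only -nostartfiles and the generic gcrt1.S startup source are real):

```
# Compile against the supported core instead of the unknown device:
avr-gcc -c -mmcu=avrxmega2 -D__AVR_ATxmegaFoo__ main.c -o main.o

# Build crtxmegafoo.o from the generic startup source gcrt1.S:
avr-gcc -c -mmcu=avrxmega2 -D__AVR_ATxmegaFoo__ gcrt1.S -o crtxmegafoo.o

# Link with the hand-built startup code instead of a shipped crtdevice.o:
avr-gcc -mmcu=avrxmega2 -nostartfiles crtxmegafoo.o main.o -o main.elf
```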
The most work in switching to a new device will be
- adjusting the application code to the new device
- getting the right avr/io.h header and subheaders
- setting the right options
which means
- you have to adjust your code anyway
- avr/io.h is just text
Waiting years for a distribution if all boils down to a text file is
surely frustrating and delays time-to-market for no good reason.
On the other side, and from my experience, I can tell that device
headers are a very critical point in the tools because
- it can be a lot of work to write them
- sometimes you must auto-generate them. I had to deal with
devices with more than 20,000 (twenty thousand) SFRs and
provide all the bitfield stuff (more than 10^6 bitfields) and
addresses and function units (several hundred timers, hundreds
of CAN message objects, etc.) from device descriptions > 50MB.
- device descriptions from the vendors are bogus; sometimes the bugs
even coexist in the data sheet because the sheet and the device
XML-or-whatever are generated from the same bogus base.
- you cannot run full tests of these headers
- even the hardware vendors don't know what a correct header
must look like, in particular if bitfields are involved, which
are terribly underspecified once volatile comes into play.
But after all, it's just text, and waiting for a binary distribution when
text files are all you need is a delay that can be avoided by smart
header management:
- hardware vendor provides the header
- hardware vendor supports header distributor with
-- professional support if there are problems with the descriptions
-- hardware vendor can test the headers to improve quality and find
problems in the descriptions
>> Will GCC ever support all 1000 or 2000 or how many thousands of ARM
>> devices you find on the market? From different vendors, with
>> different instruction sets, different packages, different voltages,
>> different memory, different internal peripherals, different
>> footprints -- you name it?
>> No.
> First off, are you sure that it would be in the thousands of
> combinations? Or is that just exaggeration? Would the true number of
> combinations actually only reach the hundreds?
> And even then, why not support it?
Because it is slow.
If a new device is there and even if you add the one-liner to the tools
immediately, it will take you around 1 1/2 or 2 years until you can use
the new, official release
- the compiler release cycle is slow
- extensions are only accepted in stage I (currently we are in stage III)
- the compiler will have to wait for binutils and must be rolled
with the right binutils release
If you have a patch now, you can add it in stage I, which starts in spring
2013 and will be released in spring 2014 -- under the assumption that we
strictly stick to the GCC policy, that binutils support is available, and
that the maintainers agree the new dependency on binutils is appropriate.
And someone must be willing to change the tools and approve a change. I
can imagine that some port maintainer rejects a patch that adds 3 new
devices if the only difference is a military, automotive or industrial
variant.
>> Are professional developers that use ARM + GCC blocked therefore?
>> No.
>> They just add startup code, perhaps a linker script to describe
>> memory on their boards and compile for their architecture.
>> That's it.
> Sure. And you're speaking about those with a Linux background, who
> are capable and willing to build the toolchain from scratch.

But that's the point: There is *no need* to build the tools!
Neither on Windows, nor on Linux, nor on Mac OS or wherever.

> You have
> to remember that the vast majority of our users (and this is backed
> up by actual surveys) are from a Windows background, which from my 10
> years of experience in this area:
> 1. Don't know how to build the GCC toolchain from scratch (even
> though instructions do exist)
Not mandatory, as I already explained, under the assumption that core
support is there.
XMEGA is one example where core support is there in the distributions, but
people are just waiting for headers and a set of options that do nothing
except trigger other options they could just as well set by hand.
> 2. Don't want to build the GCC toolchain from scratch
Ditto.
> 3. Don't really want to mess with understanding the source code of
> the toolchain
Not needed.
> 4. Don't want to even look at the toolchain source code much less
> change it
Not needed.
> 5. And they want one-stop-shopping: They don't want to have to get
> tools from all over the place. They want a working, full system, all
> in one place.
This is comfortable, yes. Customers always want more than everything
before yesterday ;-) If the boss is fine with that solution, then use
it. But if you can have a much faster approach -- it's the choice of
the customer.
> So even though you can rant about "professional developers", in
> reality, it just doesn't work like that.
I still wonder why. These developers are smart and diligent and aren't
afraid of learning new things.
Fact is: If you want to get the best product, you'll have to use superior
approaches in every part of the production / development process.
If your analysis reveals that the critical part is the tools support, you
can switch to a different tool vendor or think about other solutions.
Having no alternative, or rejecting alternatives because they need more
knowledge, is always dangerous, because if your way does not work you
are blocked and stuck.
This is a trivial insight for any project management. If they ignore
that, nobody can help them get to their solutions faster.
>> Of course, you can wait until the device pops up in the toolchain.
>> It's comfortable but slow; as a professional you might consider
>> the bit-more-work, bit-less-comfortable, but much faster approach
>> explained in
>> http://gcc.gnu.org/wiki/avr-gcc#Supporting_.22unsupported.22_Devices
>> 99.9% of what you need is the device header, maybe readily
>> available in AVR Libc, from the hardware vendor, or by adjusting a
>> sister device's header to your needs.
>> The remaining 0.1% is setting the right command line options.
>> And if you do not like that either, hey, it's all free software!
>> Contribute to the tools!
> Joerg Wunsch and I have been trying to get people to help contribute
> to the open source toolchain for 10 years (or more, in the case of
> Joerg). Our experience has been that there are extremely few people
> who:
> - Have the desire
> - Have the skills
> - Have the time
> - And are willing to do it.
> Those are the constraints, and not many people fit within them. You're
> one of those extremely few.
Changing the compiler or binutils is a second approach besides using the
right options and headers. I don't think the general user wants to go
that way.
However, I wonder why Atmel does not make the best of it.
I know you know what line to add to Binutils so that the assembler
accepts -mmcu=atxmegafoo.
I know you know what line to add to GCC so that the compiler accepts
-mmcu=atxmegafoo.
All that follows from that is auto-generated these days: the accepted
options and the help screen, the multilib matches, the stack and flash
size, the instruction set, the name of the startup code and the
architecture to call the linker with, and even the user-level PDF and
HTML documentation -- all from one magic line.
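Schematically, such a magic line lives in gcc/config/avr/avr-mcus.def and looks roughly like this (the exact field list differs between GCC versions; the device atxmegafoo and all its parameter values are invented):

```
AVR_MCU ("atxmegafoo", ARCH_AVRXMEGA2, AVR_ISA_NONE, "__AVR_ATxmegaFoo__",
         0x2000, 0x0, 1, "xmegafoo")
```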
The situation with Binutils is similar, and I still don't understand why
Binutils needs device support at all. What it needs is ISA support. We
could map each device to its ISA and pass that information explicitly
instead of compressing it into -mmcu=compressed-data.
The problem is that not even the Atmel support could tell me what device
supports what feature:
me: what device supports xyz? (instruction, core bug, RAM size, ...)
ticket: read the data sheet of abc.
me: what I need is xyz -> all abc with xyz, not abc -> xyz
ticket: sorry
If that information was available, there would be no need to add or use
-mmcu=device in binutils. We could just as well pass -mmcu=core -mdes if
that device has DES, and omit -mdes otherwise. It would work all the
same as -mno-skip-bug does today. 10 - 20 lines of specs magic (a specs
function instead of the dreaded specs string) and a true / false DES entry
for each device. Similar for load-modify-store instructions like LAS or
whatever.
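As a sketch, the specs mapping could read like this (the device names are invented, and -mdes is the hypothetical feature option proposed above, not an existing flag):

```
%{mmcu=atxmegafoo: -mmcu=avrxmega2 -mdes}
%{mmcu=atxmegabar: -mmcu=avrxmega2}
```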
But if not even the hardware vendor can tell, well...
> What's even more frustrating for me, is to see how successful the
> Arduino project is in attracting volunteers that actually do good
> work. I don't blame them really. They're building a system on top of
> the toolchain: a translator, many different libraries, etc. But, the
> skill set needed for Arduino is, arguably, not as difficult as it is
> to work on the toolchain itself. The toolchain can be very
> intimidating to a lot of people in the open source community. As a
> consequence many people shun the projects, wanting to leave it up to
> the "professionals".
>> For GCC, a new device is a one-liner -- provided core support is
>> available, which is the case for XMEGA. For Binutils, a new device
>> is a one-liner. I am not sure what's more expensive: Hanging
>> around for 3 years or adding 2 lines to the tools?
> The fact that many people do it (i.e. wait around) should tell you
> something. Honestly, I'm not surprised. Still saddened, of course,
> that there aren't more volunteers to help with the toolchain.
>> Björn Haase from Bosch contributed the complete relaxing framework
>> so that they can use devices with more than 64 KiWords of flash.
>> And other professionals really do wait 3 years or more for a
>> one-liner?
> Y'know, sometimes, yes. It's not the one-liner that is intimidating
> (though, honestly, all the crud you have to learn sometimes just to
> add one line is annoying), it's rebuilding the tools *for a Windows
Yes, that's true.
Adding a trivial change to the compiler, for example fixing a typo in a
comment, will take you quite some effort if you go through all that's
needed for the first time.
However, that's no different from any other field. Just imagine you want
to compile and run a trivial, do-nothing C program for the first time:
You will have to learn the syntax of C; download a compiler or even
gigabytes of IDE and install it; learn how to manage projects, how to
compile that stuff, and how to cope with the toolchain / IDE to select
device and programmer and CPU clock and optimization and debug info;
learn how to read error messages and warnings and how to fix them; find
out how to wire and use the programmer, the debugger, the simulator; how
to wire and supply the AVR hardware; maybe convince the OS to give access
to the USB; and many more areas where you can stumble, just to run a
trivial program that does *nothing*.
Yet people do it, not few of them even as a hobby and without a
professional background.
Thus, the complexity of the stuff cannot be the only reason.
> host* and making it all work. And even then, you have to deal with a
> dependency tree for building the tools. It's not just building GCC,
> it's building binutils, MPFR, MPC. It's setting up a build
It's a common mistake and a source of trouble to build GMP, MPFR and MPC
by hand instead of using in-tree builds and the recommended way as
described on the GCC pages.
> environment: learning MinGW/MSYS (which has gotten better over the
> years, but I still find it confusing sometimes). If you've never used
> Unix/Linux, learning bash and make. Learning a little about running
> configure scripts, and God forbid, learning some things about
> autotools. If you're dealing with patches, learning the diff and
> patch utilities and patch formats. Oh, and getting a build
> environment set up to build the documentation, if you really want to
> do it right. That can take a bit of doing too. I'm sure there are
> other things which I've forgotten...
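The in-tree route for the math libraries could look like this (version numbers, paths and configure options are illustrative; contrib/download_prerequisites ships with the GCC sources):

```
# Fetch GMP/MPFR/MPC into the GCC source tree; the build then
# compiles them in-tree automatically:
cd gcc-4.7.2
./contrib/download_prerequisites
cd ..

mkdir build-avr && cd build-avr
../gcc-4.7.2/configure --target=avr --enable-languages=c,c++ \
    --prefix=/usr/local/avr --disable-nls
make && make install
```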
> You *have* to understand it from the perspective of an engineer who
> is mostly familiar with Windows, and not Unix/Linux. The learning
> curve is just too steep. Most people have other priorities for their
> own projects and work. They will happily use the open source tools,
> but getting involved any deeper is a completely different beast. And
> most people just won't do it. That's why we can probably count the
> number of volunteers that have contributed significantly to the open
> source AVR toolchain on barely two hands, and that's over 10 years.
This means it is essential to use the resources in a smart and
non-abrasive way. I have noted that in the past already; you may call it
a rant...
It's easy for Atmel to make any change to their tools repository. Easy
from the administration perspective, w.r.t. changing the code base.
Now suppose we have full feature support in 4.8 and Atmel wants to
close the 4.7 gap that opens between 4.8 and their 4.6.
They will observe that they can throw all their patches away. Some
because they are no longer needed. Some more because patch will barf and
the work must be redone from scratch.
Rework from scratch is costly and frustrating.
Johann