tinycc-devel
From: rempas
Subject: [Tinycc-devel] TCC could reach the same level of GCC and Clang (LLVM) when you write good code?
Date: Tue, 18 Jul 2023 15:19:07 +0200 (CEST)

From what I know, TCC doesn't do any optimizations (or does it?), yet after running some simple, small algorithms, it seems that TCC has the same runtime performance as GCC and Clang. From what I have heard, both GCC and Clang analyze the code and rewrite the result into a more optimized version. I'm pretty sure that TCC doesn't do that, but I would expect TCC to at least do things like register allocation, because otherwise the code would be hundreds of times slower than with the other two. Recursion is another thing that I know TCC doesn't apply optimization to.
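
To make the recursion point concrete, here is a small sketch (my own toy example, nothing from the TCC sources). GCC and Clang at -O2 will usually turn the tail call into a plain loop; TCC, as far as I can tell, emits a real call for every step:

    #include <stdio.h>

    /* Tail-recursive sum of 1..n. Optimizing compilers typically
     * apply tail-call elimination and compile this into a loop with
     * constant stack usage; TCC translates it literally, so every
     * step pays full call overhead and grows the stack. */
    static unsigned long long sum_to(unsigned long long n,
                                     unsigned long long acc)
    {
        if (n == 0)
            return acc;
        return sum_to(n - 1, acc + n); /* tail call */
    }

    int main(void)
    {
        printf("%llu\n", sum_to(100000, 0)); /* 5000050000 */
        return 0;
    }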

To add to that, I have tried to compile complete programs with TCC vs GCC, and the runtime performance loss was something like 30%. So nothing like TCC being 3 times slower, as it is in some specific smaller code tests.
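
For anyone who wants to reproduce this kind of comparison, below is the sort of trivial harness I mean (the workload is a hypothetical stand-in, not one of the programs I actually tested). Build it once with tcc and once with gcc -O2 and compare the reported times:

    #include <stdio.h>
    #include <time.h>

    /* Compile with:  tcc -o bench bench.c
     *          and:  gcc -O2 -o bench bench.c */
    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        volatile unsigned long long sink = 0; /* defeat dead-code removal */
        double t0 = now_sec();
        for (unsigned long long i = 0; i < 100000000ULL; i++)
            sink += i * i;
        printf("%.3f s (sink = %llu)\n", now_sec() - t0, sink);
        return 0;
    }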

So, with that being said, my question would be the following: if we were to write good code, would TCC be able to consistently reach more than 80% of the runtime performance of the other compilers that do analyze and rewrite the code?
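
By "good code" I mean doing by hand the transformations an optimizer would otherwise do for us. A small illustration of the idea (hypothetical functions; plain loop-invariant hoisting):

    #include <stddef.h>
    #include <string.h>

    /* Relies on the optimizer: strlen(s) sits in the loop condition,
     * and only an optimizing compiler will prove it invariant and
     * hoist it out; TCC re-evaluates it on every iteration. */
    size_t count_x_naive(const char *s)
    {
        size_t n = 0;
        for (size_t i = 0; i < strlen(s); i++)
            if (s[i] == 'x')
                n++;
        return n;
    }

    /* Hand-optimized: the length is computed once, so even TCC's
     * straight translation of the loop is already tight. */
    size_t count_x_hoisted(const char *s)
    {
        size_t n = 0, len = strlen(s);
        for (size_t i = 0; i < len; i++)
            if (s[i] == 'x')
                n++;
        return n;
    }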

If that's the case, then I would prefer to use TCC as a backend and build a better ecosystem on top of it. I understand how, in the early era of C, without the spread of the internet and the communication we have now, we needed a common effort of people who knew optimization tricks. But now we can create and share libraries better and more easily than ever! We can create more advanced programming languages with better meta-programming features, and we can share and improve (contribute to) code like never before! So it makes sense to have someone dedicate themselves to one thing and optimize it as much as possible. Tbh, isn't that what people do anyway? Wasn't the UNIX philosophy to "do one thing and do it well"? So why have the compiler fix your bad code rather than learn how to write quality code yourself?

And you're going to want to write inline assembly if you want the best performance anyway, so why massively sacrifice compile times for a 30% improvement in software that mostly isn't low-level? I suppose you wouldn't write a physics engine with it, but it could be a great way to create software.
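
And TCC does accept GCC-style inline assembly on x86, so the escape hatch is there. A minimal sketch (assuming x86-64; my own toy example, so treat the constraint details with a grain of salt):

    #include <stdio.h>

    /* Adds two ints with a single ADD, using GCC-style extended asm,
     * which TCC also understands on x86 targets. */
    static int add_asm(int a, int b)
    {
        __asm__("addl %1, %0" : "+r"(a) : "r"(b));
        return a;
    }

    int main(void)
    {
        printf("%d\n", add_asm(40, 2)); /* prints 42 */
        return 0;
    }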

What do you guys say? Am I just being too optimistic and making big claims, or would this be practically possible? Of course, I haven't run lots of tests, and I don't know a lot about compiler-specific optimizations (the ones that cannot be done at the source level), so some of you may have already thought about this. But if my theory can be applied in practice, think about the possibilities! We could even have a tool that scans the code, checks its quality, and suggests modifications (or applies them itself) to make it run faster. Rather than having the compiler do that on every build, we could choose to do it every 2-3 versions or after a specific period of time (like every 3 months). It's the best of both worlds.
