Lionel Debroux, 20/04/2010 at 08:09
e.g. the couple dozen name collisions between examples
A complete non-problem. [...]

You have a narrow-minded view of the examples, as if they served documentation purposes only (they do, for a significant number of areas of the headers / library). While being examples is their primary purpose, they can also be used for testing: manual testing first, potentially automated testing later (remember the topic where we brainstormed about special instruction sequences, à la WTI, for programs sending messages to the emulator? A rough sketch of that idea follows below).
Using the examples for testing has already yielded results, namely the four sets of problems I wrote about above: do what you want in your software, but we are going to keep using them that way.
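To make that idea concrete, here is a minimal sketch in TIGCC-style GNU C. Everything in it is hypothetical (the EMU_REPORT name, the register convention, the choice of opcode): it is not an existing TIGCC/GCC4TI facility, just an illustration of a marker that an instrumented emulator could trap, log and skip.

    /* Hypothetical sketch: a test program loads a pointer to a message, then
       executes a word that a cooperating emulator is patched to recognize.
       On a real calculator, 0x4AFC (the official ILLEGAL opcode) would trap,
       so a real scheme would first check that it is actually running inside
       such an emulator. */
    #define EMU_REPORT(msg) \
        asm volatile("move.l %0,%%a0\n\t"  /* pass the message pointer in a0 */ \
                     ".word 0x4AFC"        /* emulator logs the string at (a0), then skips this word */ \
                     : : "g" (msg) : "a0")

A test example could then end with something like EMU_REPORT("sprite test passed"), which an automated harness driving the emulator would collect.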
As for you, Lionel, you at least did contribute stuff, a long time ago, but you did it in a form which is extremely hard to integrate into the existing project and never bothered trying to improve this situation.

On the part I emphasised, you're outright lying. It's a fact that my efforts remained unfinished (and therefore of no use to you) until after the end of my CS studies, and until a rewrite in a programming language much better suited to the task, which I had learnt outside of school in the meantime. But I did try.

On your side, you refuse to use the results produced by the 100-ish SLOC of Perl code I wrote in 2008, waiting instead for some hypothetical perfect rewrite of the documentation tools (which, albeit TIGCC/GCC4TI-specific, have worked well enough), a rewrite that would fix several shortcomings and add a new output format. Despite the spec being informal, I did participate in a nascent rewrite (ask konrad and MathStuf if you don't believe me) of tools matching the current Update* workflow, though it didn't get far; and the "documentation-to-headers" approach could itself be questioned, since most projects use the opposite "code-to-documentation" approach.
On my side, I have used the results produced by that small Perl code, and this has made it possible to integrate a number of documentation snippets. This is a much more pragmatic approach.
Optimizing away 2 or 4 bytes from some library function is of almost no practical use

Remember: in AMS native programs, startup code such as SAVE_SCREEN, the overly used __set_in_use_bit code, and other program support code such as the internal F-Line emulator gets duplicated into the vast majority of programs (that's what some people call "the stub of _nostub programs"; see the example below). And we didn't just save "2 or 4 bytes": -16 bytes on SAVE_SCREEN (the code now uses fewer registers, reducing side effects on the other startup code bits), and -10 bytes on the internal F-Line emulator.
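For readers who haven't written _nostub programs: the duplication happens because each program requests those startup sections itself, through control constants defined before including the headers, so every byte shaved off a section is saved once per program that uses it. A minimal TIGCC/GCC4TI-style example:

    #define USE_TI89        /* target calculator; the TI-92+/V200 defines work the same way */
    #define SAVE_SCREEN     /* control constant: links the save/restore-LCD startup section into THIS program */
    #include <tigcclib.h>

    void _main(void)
    {
        ClrScr();
        DrawStr(0, 0, "Hello from a _nostub program", A_NORMAL);
        ngetchx();          /* wait for a key; on exit, the startup code restores the saved screen */
    }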
The same goes for library functions, though each one is used by a smaller proportion of programs: -22 bytes on the OSV*Timer functions, -2 bytes on _rowread, -2 bytes on the VTI detection. Not to mention the sprite routines, into which you never bothered to integrate even the changes I contributed to TIGCC at the same time they were being made in ExtGraph, in 2002-2003, let alone Joey Adams' 2005 modifications and feature extensions.
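_rowread is a good example of the kind of tiny, widely linked-in primitive involved here: the low-level keyboard row reader from kbd.h, used by nearly every program that does direct key scanning. A usage sketch, written from memory of the TIGCC/GCC4TI documentation (the 0x0000 "read all rows" idiom is worth double-checking there):

    #define USE_TI89
    #include <tigcclib.h>

    void _main(void)
    {
        /* _rowread takes a row mask in which cleared bits select rows, so
           0x0000 reads every row at once: nonzero means some key is down. */
        while (!_rowread(0x0000));   /* wait for any key press... */
        while (_rowread(0x0000));    /* ...then for its release */
    }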
On the one hand you push people to optimize their programs for size; on the other hand you won't work on even the lowest-hanging fruit yourself. Do as I say, not as I do.
It would be much more useful to work on e.g. the compiler

But it's much harder, and:
where you can save hundreds if not thousands of bytes on many programs.

That assumes optimization for the m68k target improves as the GCC version number increases... and it doesn't. Definitely not in terms of speed (e.g. http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40454), and our own experience with GCC 4.0 and 4.1 was anything but problem-free. Patrick can write more about it if he wants.
There are more pressing issues than spending a lot of time upgrading to a new compiler version (which is likely to yield worse results) and then spending as much time again on testing.

One of the primary goals of that testing would be to do a better job than you did yourself with GCC 4.0 and 4.1 before offering them to users... which shouldn't be too hard, given how low your standards were. You troll against "Micro$oft" all the time, yet you used their "methods".

It's also quite sad that my LZMA launcher work got little to no interest, when it would save kilobytes of archive memory for many programs.

Due to its large footprint, the only place where the LZMA decompression code could really make sense is embedded into a generic launcher (ttstart or SuperStart). And its massive slowness (15-20 seconds to decompress a ~60 KB program, i.e. more than an order of magnitude slower than ttunpack and nearly an order of magnitude slower than ttunpack-super-duper-small) makes it significantly less desirable.
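Back-of-the-envelope, using the figures above: ~60 KB in 15-20 s is roughly 3-4 KB/s, so merely matching ttunpack-super-duper-small would take close to a tenfold speedup, and matching plain ttunpack more than that.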
Sure, none of the ASM wizards has seriously tried to optimize that code, but the raw size of the ASM decompression routine, 2 to 3 times that of the initial ttunpack code, certainly acts as a deterrent.