Lionel Debroux, on 20/04/2010 at 20:42
> Your Perl script 1. is unsafe, as it doesn't fully understand the file format

Bullshit: the output of UpdateInclude before and after the change is identical (and yes, I renamed the output folder containing the files generated before the change, created and initialized a new output folder, launched UpdateInclude, and executed diff after that).
> and 2. does entirely the wrong thing, adding hardcoded header references instead of removing them, i.e. the exact opposite of what we want.

As I already wrote to you in a previous occurrence of this discussion, modifying the script (which I publicly posted on the corresponding GCC4TI ticket, http://trac.godzil.net/gcc4ti/ticket/3) so that it removes header references (when we have - if ever - a tool capable of working correctly without them) instead of adding them is trivial.
IOW, your "it will create more work in the long-term" argument is totally irrelevant for that change ;)
> That just shows that you guys either are not competent enough or didn't spend enough time on it.

Tell that to MathStuf and konrad.
More than a year ago, I sent them a small patchset that greatly improved the parsing of .hs* files (far fewer files were left unparseable after that). AFAICS, it never got integrated, and no other commits have been made to the tree since then.
The only convincing use case you gave for a rewrite, switching from CHM to some Qt Doc format (thereby reducing the number of computers that can read the documentation out of the box, BTW), was motivated by the fact that you are unable to generate CHM files yourself, because you refuse to use Windows.
Note that while I don't have that problem myself (I have multiple legal Windows licenses, for the infrequent task of building the Delphi tools and producing a CHM file), and while it hasn't been proven to us that the "doc-to-headers" approach is the soundest one (when rewriting a tool completely, even fundamentals of this kind can be re-evaluated), I nevertheless participated in writing a new documentation tool ;)
> What also made it possible is that you have just added your own snippets because you think they are perfect. They're not. There are plenty of things needing fixing in them. So they need to be split into small reviewable pieces (see e.g. the LKML patch submission guidelines to get an idea of how I expect them to look) which can be proofread, tested where they include code (e.g. address hacks), fixed and merged individually.

Get a clue about 1) the way I merged a subset of the snippets and 2) the fate of most Address/Value Hacks (as you're aware, we already went through that discussion months ago, see topics/115787-gcc4ti-manifesto#15).
> In fact, you definitely didn't test the sprite routines, as those didn't work at all in your release. My GCC updates at least worked on the programs I tested them on!

You're re-posting the same crap that I already showed to be false in a previous occurrence of the discussion on that particular point...
I already wrote that I definitely did test them (and they did work for me, obviously), using, as you're aware, Joey Adams' exhaustive tester program. The problem was, as you're aware, an interface mismatch between the sprite routines' prototypes embedded in the program (I kept those of Joey's routines) and the prototypes in the headers (which have no explicit register names...). As you're aware, I caught the bug myself before anyone else reported it, and as you're aware, I posted the erratum on the GCC4TI site.
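To make that failure mode concrete, here is a minimal sketch in TIGCC's dialect; the routine name and register assignments are invented for illustration, not the actual GCC4TI declarations:

```c
/* Hypothetical sketch (TIGCC/GCC4TI dialect), NOT the real headers. */

/* Prototype embedded in the tester program: explicit register names,
   so arguments are expected in d0/d1/a0. */
void DrawSprite(short x asm("d0"), short y asm("d1"),
                const unsigned char *data asm("a0"));

/* Prototype from the headers: no register names, so arguments are
   passed according to the default calling convention instead:

   void DrawSprite(short x, short y, const unsigned char *data);
*/

/* A caller compiled against one declaration but linked against a routine
   expecting the other puts arguments in the wrong places: everything
   compiles cleanly, and the calls only break at run time. */
```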
> For example, I did some tuning of my own: [...], additional 68k peepholes which improve both size and speed.

Also known as the root cause of a newer build of TIGCC-GCC turning a valid program (ebook) into a crasher (while, IIRC, not even offsetting the size increase caused by other compiler changes that reduced optimization). What's more, that particular peephole is less powerful than the corresponding one in GTC.
But peepholes in general are definitely useful, it's a fact.
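For readers unfamiliar with the term, here is the kind of rewrite a 68k peephole performs; these are classic textbook examples, not the specific peephole under discussion:

```c
/* Illustration only: a peephole pass rewrites short instruction
   sequences in place, trading the generic form for a shorter and
   faster equivalent, e.g. on the 68k:

       move.w  #0,d0        ; 4 bytes
   =>  clr.w   d0           ; 2 bytes, and fewer cycles

       move.l  #1,d0        ; 6 bytes
   =>  moveq   #1,d0        ; 2 bytes (immediate must fit in -128..127)

   Applied across a whole program, such rewrites improve both size
   and speed, which is why peepholes are useful in general. */
```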
> And again, if you're planning to stay on GCC 4.1 forever, that's quite short-term thinking.

As shown in the corresponding ticket (http://trac.godzil.net/gcc4ti/ticket/39), nothing is set in stone one way or the other.

ACK for lzma saving more space than I remembered (though the sample of programs you applied it to was pretty small).
That does not, however, change the fact that multiple specific launchers (pstarters) stink in at least two aspects:
* space savings: ttstart is less than 15% larger than a pstarter containing the same ppg decompression routine, and 50% or so larger if ttstart contains the faster ppg decompression routine. It follows that ppg pstarters take up more space than a ppg ttstart as soon as there's more than one of them on a given calculator (see the quick arithmetic sketch after this list);
* compatibility with newer models (the joy of pstarters that don't work on the 89T while the compressed program that they launch could run - Zeljko/TICT's advint is one of them)
while providing no ease-of-use advantage wrt. SuperStart's home screen line integration.
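A quick back-of-the-envelope check of the break-even claim, using a hypothetical pstarter size of 1000 bytes (the real sizes aren't quoted above):

```c
/* Break-even sketch for one shared ttstart vs. per-program pstarters.
   The 1000-byte pstarter size is an assumption for illustration. */
#include <stdio.h>

int main(void)
{
    const int pstarter = 1000;                 /* one ppg pstarter (assumed) */
    const int ttstart  = pstarter * 115 / 100; /* "less than 15% larger"     */

    for (int n = 1; n <= 3; n++)
        printf("%d program(s): pstarters use %d bytes, one shared ttstart uses %d bytes\n",
               n, n * pstarter, ttstart);
    /* n * pstarter exceeds ttstart as soon as n >= 2: a single shared
       ttstart wins whenever there is more than one compressed program. */
    return 0;
}
```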
And besides home screen line integration, SuperStart itself has two technical advantages over ttstart and pstarters: near-zero RAM consumption in operation (as opposed to 1 KB), and the fact that, being a (small) FlashApp, it usually takes no space in archive memory.
But we've already gone through that discussion many times.
And LZMA's massive slowness (15-20 seconds to decompress a ~60 KB program, i.e. more than an order of magnitude slower than ttunpack and nearly an order of magnitude slower than ttunpack-super-duper-small) makes it significantly less desirable.
> That's a startup-only cost and has no effects whatsoever on runtime speed, i.e. gameplay (for games) or usability (for utilities).

You're stating an extremely obvious fact here. In French: "tu enfonces une porte ouverte" (~you're breaking down an open door).
And so what? That it's a startup-only cost does not erase another fact, also relevant to users: it is massively slow.
Remember, once upon a time, we were criticizing the slowness of the ttunpack-super-duper-small routine unconditionally used in the specific launchers generated by TIGCC. And that routine was much smaller and much faster than the LZMA routine is, so guess what happens with LZMA...
Also, remember that the criticism about the slowness of ttunpack-super-duper-small stopped when Samuel Stearley made a breakthrough that doubled its speed without changing its size. After that, I clearly remember writing something along the lines of "it's much more acceptable now": indeed, ttunpack-super-duper-small had become nearly as fast as the routine used before Greg Dietsche, myself, and then Samuel Stearley worked on it, while being less than half its size. And I simultaneously stopped requesting the feature of letting users choose between the small and the fast (more than twice as large, more than twice as fast) decompression routines in TIGCC.
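To put the quoted figures in perspective, a rough throughput computation (midpoint values assumed for illustration):

```c
/* Throughput sketch from the figures quoted above: ~60 KB in 15-20 s
   for LZMA; the 17.5 s midpoint and the ~10x factors are assumptions
   read off the "order of magnitude" claims. */
#include <stdio.h>

int main(void)
{
    const double size_kb = 60.0;   /* ~60 KB program                */
    const double lzma_s  = 17.5;   /* midpoint of the 15-20 s range */

    double lzma_rate = size_kb / lzma_s;  /* ~3.4 KB/s */
    printf("LZMA:                       ~%.1f KB/s\n", lzma_rate);
    printf("ttunpack (>10x faster):     >%.0f KB/s\n", lzma_rate * 10.0);
    printf("ttunpack-super-duper-small: ~%.0f KB/s\n", lzma_rate * 9.0);
    /* Even the "slow" small routine would finish the same ~60 KB
       program in roughly 2 seconds, versus 15-20 for LZMA. */
    return 0;
}
```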