Kevin Kofler, on 21/04/2010 at 04:30
Lionel Debroux (./23) :
Bullshit: the output of UpdateInclude before and after the change is identical (and yes, I renamed the output folder containing the files generated before the change, created and initialized a new output folder, launched UpdateInclude, and executed diff after that).

That only proves that it doesn't break on the current contents of the .hs? files, it could still choke on future valid files. (But the fact that you verified this does make me more confident about possibly using your scripts as a temporary stopgap solution.)
As I already wrote to you in a previous round of this discussion, modifying the script (which I posted publicly on the corresponding GCC4TI ticket, http://trac.godzil.net/gcc4ti/ticket/3 ) so that it removes header references instead of adding them (should we ever have a tool capable of working correctly without header references) is trivial.

I guess we could also make a version which only strips out the header where it is the current header? (So we could run your script, move the docs we want to move, then run the modified one.)
IOW, your "it will create more work in the long-term" argument is totally irrelevant for that change wink

The very work of creating the script was "more work", if you consider the long run where the rewrite needs to happen anyway.
The only convincing use case you gave for a rewrite, switching from CHM to some Qt Doc format (thereby reducing the number of computers that can read the documentation out of the box, BTW), was motivated by the fact that you are unable to generate CHM files yourself, because you refuse to use Windows.

Also because I need to generate the Free format anyway for KTIGCC on *nix, so we might just as well also use it with KTIGCC/W32, which is going to come at some point. I really don't want to have to fight with the platform-specific HTML Help API and #ifdefs! Right now, I can only generate the CHM sources, and then I need to run a converter I wrote as a quick hack to turn them into the Qt Assistant ADP format (which isn't even the latest one; I'll need to update the code to support the new QCH format). That converter is extremely slow. It would be much nicer to be able to generate the intended target format(s) directly (the tools should be flexible and allow more than one).

In addition, the existing tools are written in Delphi, which forces me to use WINE to run them and makes it extremely hard to make any changes to them (I haven't made a single change to those tools because of this). It also means it is impossible to regenerate the documentation with only built-from-source Free Software; you need prebuilt Delphi binaries.
Get a clue about 1) the way I merged a subset of the snippets

You merged them using your Perl script to handle the cross-references, which I consider to be a bad solution because it leaves the docs with more redundant crap after running it.
and 2) the fate of most Address/Value Hacks

Just dropping those is a poor solution. In fact, those hacks are the most useful part of your contributions: most of the prototypes are already there in unknown.h, and the documentation is just documentation; it can be consulted directly from contrib.zip (and some of the functions are also documented in TI's PDF).
I already wrote that I definitely did test them (and they did work for me, obviously), using, as you're aware, Joey Adams' exhaustive tester program. The problem was, as you're aware, an interface mismatch between the definitions of the sprite routines' prototypes embedded in the program (I kept those of Joey's routines) and those in the headers (without explicit register names...).

So you didn't test them in the obvious way (which is how I'd have tested them): create a new project, add #include <tigcclib.h> and draw a sprite!
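A smoke test of that kind might look like the sketch below. This is not the actual tester program discussed above, just a minimal TIGCC-style project illustrating "include the headers and draw a sprite"; the sprite data and coordinates are arbitrary, and it exercises the prototypes from the headers (sprites.h via tigcclib.h) rather than prototypes embedded in the program.

```c
/* Minimal header-based sprite test for TIGCC/GCC4TI (runs on-calc only). */
#define USE_TI89
#define USE_TI92PLUS
#define USE_V200
#include <tigcclib.h>

void _main(void)
{
    /* Arbitrary 8x8 test sprite (a filled diamond). */
    static const unsigned char sprite[8] =
        {0x18, 0x3C, 0x7E, 0xFF, 0xFF, 0x7E, 0x3C, 0x18};

    ClrScr();
    /* Drawn through the header prototype, so a calling-convention
       mismatch in sprites.h would show up immediately as garbage. */
    Sprite8(10, 10, 8, sprite, LCD_MEM, SPRT_XOR);
    ngetchx();  /* wait for a key so the result can be inspected */
}
```

Any mismatch between the header prototypes and the actual routines (e.g. missing explicit register names) would corrupt the drawing right away, which is the point of testing through the public headers.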
As you're aware, I caught the bug myself before someone else reported it,

But too late, you already released it.

And you didn't catch the 2 broken optimizations you submitted to TIGCC in the past, my users did.
For example, I did some tuning of my own: [...], additional 68k peepholes which improve both size and speed.
Also known as the root cause of a newer build of TIGCC-GCC turning a valid program (ebook) into a crasher

A bug which was fixed eons ago (within a day of your reporting it!) and which was only ever in GCC builds explicitly labeled as prereleases for testing purposes (and was present for only about 10 days!). It was never in any official (beta or otherwise) release of TIGCC. You seem to have completely misunderstood the purpose of a pre-beta testing prerelease. Unlike you, I believe in public development and public testing; that is what those prereleases were for. The bug also didn't end up in any released eBook Reader. Yet you keep repeating it.

You (and the community as a whole) were expected to download those testing builds, try them out on your own programs and report any issues with them. You did. I was supposed to fix those issues and make fixed testing builds available ASAP, and that's exactly what I did (within hours of the bug report!). So where's your problem? This is how public testing works! It should be obvious that I can't test the whole TI-89/89Ti/92+/V200 software base all by myself! More testing would just have delayed getting the fixes out.

In addition, do I really have to remind you that you were the one repeatedly pestering me to write those peepholes until I did it?
What's more, that particular peephole is less powerful than the corresponding one in GTC.

You're free to write a better one. Now that you are the most active developer of GCC4TI, it's time to put your coding hands where your mouth is!
And again, if you're planning to stay on GCC 4.1 forever, that's quite short-term thinking.

As shown in the corresponding ticket ( http://trac.godzil.net/gcc4ti/ticket/39 ), nothing is set in stone one way or another.

But the facts are that nobody is working on this in your supposedly "active" project.
ACK for lzma saving more space than I remembered (though the sample of programs you applied it to was pretty small).
That does not, however, change the fact that multiple specific launchers (pstarters) stink in at least two aspects:
* space savings: ttstart is less than 15% larger than the pstarter containing the same ppg decompression routine, and 50% or so larger if ttstart contains the faster ppg decompression routine. It follows that ppg pstarters take up more space than a single ppg ttstart as soon as there is more than one of them on a given calculator;
* compatibility with newer models (the joy of pstarters that don't work on 89T, while the compressed program that they launch could - Zeljko/TICT's advint is one of them)
while providing no ease-of-use advantage wrt. SuperStart's home screen line integration.
And besides home screen line integration, SuperStart itself has two technical advantages over ttstart and pstarters, namely near-zero RAM consumption in operation (as opposed to 1 KB), and being a (small) FlashApp enables it, in most cases, not to take up space in archive memory. But we've already gone through that discussion many times.

But custom launchers are the only way to have a compressed program which "works out of the box" without having to install a kernel-like app such as SuperStart.
And what? The fact that it's a startup-only cost does not shelve another fact, also relevant to users: it's massively slow.

The program is just as fast as always. Its startup is slow, but that's not a practical problem at all.
Remember, once upon a time, we were criticizing the slowness of the ttunpack-super-duper-small routine unconditionally used in the specific launchers generated by TIGCC. And that routine was much smaller and
much faster than the LZMA routine is, so guess what happens with LZMA...

Thankfully, my threatening to switch to LZMA stopped the complaints about ttunpack-small. tongue

That threat also helped get the pucrunch licensing issues straightened out. smile