1

Hi guys,

I recently started using GCC4TI for developing Punix (I previously used TIGCC), and I was wondering what changes were made from TIGCC. I couldn't find any documentation specific to GCC4TI, or anything else that lists the changes. I've only read that there are linker changes/fixes in GCC4TI that haven't been committed to TIGCC yet, but I don't know what has changed. Maybe I just haven't looked hard enough...

Edit: I'm primarily interested in changes made to the toolchain itself (compiler, linker, etc), rather than the libraries that accompany the compiler, since I'm not using the libraries in Punix anyway.

2

There's basically nothing new. I recommend switching back to TIGCC, which is still actively maintained; there just hasn't been anything important enough to do a new release for.
My TI calculator news: Ti-Gen
My PC projects for TI calculators: TIGCC, CalcForge (CalcForgeLP, Emu-TIGCC)
My IRC channels: #tigcc and #inspired on irc.freequest.net (UTF-8)

Liberté, Égalité, Fraternité

3

Hi Christopher :)

A number of changelogs have been updated as part of the GCC4TI 0.96 Beta 10 release (r1346) and after that release (r1363).

So far, in GCC4TI, we have spent more time working on:
* the things that need most work, e.g. better-featured and more portable build/packaging scripts, the examples, and the library;
* features which are, er, used by a higher number of developers than the Flash OS support is.
I hope you understand that :)

There are more toolchain-related commits than those, but only a subset of them can be of interest to you as an OS programmer. In the main branch (these are the commits that happened in GCC4TI; you already know about the toolchain-related fixes that happened in TIGCC in the more than two years between TIGCC 0.96 Beta 8 and GCC4TI 0.96 Beta 9):
* r1265 "Add Flash OS library." (so that people don't have to fetch it from an obscure place on Kevin's website - this kind of thing, even if small, belongs in the development environment);
* r1272 "Add implementation and documentation of special __ld_bss_even_end symbol, resolved to the first even address after the end of the BSS section.";
* maybe r1325 "GCC 4.1.2-tigcc-4: Fix a 64-bit compatibility problem in the AMS float support, which triggered harmless warnings in the generated assembly code.".
AFAICS, none of those has been backported to TIGCC. But then, Kevin has done rather little on TIGCC in the 15 months since GCC4TI was publicly announced.

There are two development branches for ld-tigcc that are useful to you (and others), but need more work:
* the ld-tigcc-optimization branch, which aims at making the linker faster, noticeably so for programs larger than several dozen kilobytes. The only data structures used in ld-tigcc are arrays and linked lists ( topics/108648-ld-tigcc-flash-os-bss-special/4#108 ), and as a result, ld-tigcc spends 10 minutes building PedroM, 90% of which is spent traversing linked lists ( topics/108648-ld-tigcc-flash-os-bss-special/4#111 )...
* the ld-tigcc-flashos-improvements branch (and its doc-related branch), which contain Patrick's modifications to improve the Flash OS support. That's what he's been using for PedroM for a while.
Member of the TI-Chess Team.
Co-maintainer of GCC4TI (GCC4TI online documentation), TIEmu and TILP.
Co-admin of TI-Planet.

4

!call PpHd
--- Call: PpHd has been called to this topic ...
, BTW.

5

Lionel Debroux (./3) :
* the ld-tigcc-flashos-improvements branch (and its doc-related branch), which contain Patrick's modifications to improve the Flash OS support. That's what he's been using for PedroM for a while.


You'll need this for Punix. It adds one option:

--flash-os-bss-start=$(FLASH_OS_BSS_START)
(usually FLASH_OS_BSS_START=0x5B00)
which defines the start address in RAM of the BSS section, allowing you to define global variables in .c files as usual.

It also adds new symbols:
__ld_archive_start: just after the end of the code, rounded up to a 64 KB boundary
__ld_bss_even_end: just after the end of the RAM BSS section, rounded up to an even address
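As a sketch, wiring that option into an OS Makefile might look like this (a hypothetical fragment: the variable name LDFLAGS and how it reaches ld-tigcc depend on your own build setup):

```make
# Usual RAM start of the Flash OS BSS section, as noted above;
# adjust to your memory layout.
FLASH_OS_BSS_START = 0x5B00

# Pass the new switch to ld-tigcc when linking the OS image.
LDFLAGS += --flash-os-bss-start=$(FLASH_OS_BSS_START)
```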

6

Lionel Debroux (./3) :
* the things that need most work, e.g. better-featured and more portable build/packaging scripts

That's completely useless for the user. And I don't see those changes as being needed or useful; my scripts work fine.
the examples,

The changes you did to the examples are only of extremely marginal use, especially to an experienced developer.
and the library;

That part is arguably useful, but how much have you really changed? AFAIK, not much.
* r1265 "Add Flash OS library." (so that people don't have to fetch it from an obscure place on Kevin's website - this kind of things, even if small, belongs to the development environment);

The reason it is not shipped with TIGCC is that it is experimental and incomplete. Nobody is interested in making it more complete. GCC4TI also hasn't done any such work.

IMHO, things like the initial vector table (even if it's only composed of undefined references the OS has to fill in), common hardware initialization which any OS has to do, and FlashROM unprotection/writing routines which pretty much any OS will need as well really belong in the Flash OS library, not in every single OS. Arguably even the whole contents of the 24 KB startup sector belong in the common library. But as PpHd actively refused to let me use any code from PedroM under the TIGCCLIB license, all such code will need to be rewritten.

Another part of Flash OS support in TIGCC which is also incomplete, and where GCC4TI also didn't do anything to make it more complete, is linker support for file formats: there's no support for exporting the current ??u format, only the obsolete/"outputbin" tib format which TI last used for AMS 1.05 in 1999. But as the file output code is currently duplicated in the Delphi IDE (and the Delphi tigcc.exe, which uses the same code as the IDE), fixing this would also involve changing stuff in the Delphi code, or replacing the Delphi IDE entirely. (Note: ExtendeD gave me permission to reuse, under the GPL, the code from the tib2xxu executable which ships with FreeFlash, so if you're interested in doing that work, you don't have to start from scratch: you can start from this code. This permission does not, of course, extend to any other part of FreeFlash.)

So there's a reason, or even two reasons, why flashos.a is an optional, experimental addon to TIGCC. You just added it to GCC4TI without fixing any of the actual issues.
* r1272 "Add implementation and documentation of special __ld_bss_even_end symbol, resolved to the first even address after the end of the BSS section.";

This can be done without in several ways (computing it at run time; adding a char padding variable, or using __ld_bss_end+1, if the length turns out odd; etc.), so I really don't see how this is so urgent, or even a needed feature at all (though I don't object to it in principle).
* maybe r1325 "GCC 4.1.2-tigcc-4: Fix a 64-bit compatibility problem in the AMS float support, which triggered harmless warnings in the generated assembly code.".

"Harmless warnings" says it all. The resulting generated object code is perfectly fine. This is a very harmless bug. Not a showstopper at all.
AFAICS, none of those has been backported to TIGCC. But then, Kevin has done pretty little on TIGCC in the past 15 months since GCC4TI was publicly announced.

I'm sorry, but I consider CalcForgeLP to be more important for end users, so I'm spending most of my "TI calculator" time on getting the CalcForge software into a releasable state (which implies doing the renames properly, unlike you, who still illegally use names such as "TIGCCLIB" or "tigcc.a").

In addition, the GCC4TI SVN has also not been touched for 9 months!
There are two development branches for ld-tigcc that are useful to you (and others), but need more work:

"Need more work" says it all: those work branches are not ready for user consumption.
* the ld-tigcc-optimization branch, which aims at making the linker faster, noticeably so for programs larger than several dozen kilobytes. The only data structures used in ld-tigcc are arrays and linked lists ( topics/108648-ld-tigcc-flash-os-bss-special/4#108 ), and as a result, ld-tigcc spends 10 minutes building PedroM, 90% of which are spent traversing linked lists ( topics/108648-ld-tigcc-flash-os-bss-special/4#111 )...

10 minutes on what hardware? Certainly not the current 32 nm Core i7 CPUs which didn't even exist in 2008 when this was measured.

And are your optimizations anywhere near done? Are they even started yet? I see nothing at all in your SVN repository! The branch hasn't been touched for 16 months (!), and these are your "tree" operations: http://trac.godzil.net/gcc4ti/browser/branches/ld-tigcc-optimization/tree.h?rev=1288. You can hardly get more fake. You haven't optimized anything at all in that branch so far. I can easily create development branches and promise the moon like you do, but it won't make this any more real for our users. At least the KTIGCC 2 HEAD which you call vaporware can actually be compiled (against KDE 4, as it promises) and run (it even compiles with cross-MinGW, though it certainly has more bugs there, as there are still some unixisms in the code); it just has a couple of showstopper feature regressions I didn't get around to working on because my time was spent on higher-priority projects (university projects, CalcForge renames, etc.). And you didn't do any work on KTIGCC either (which, BTW, you would have to rename to KGCC4TI if you want to do releases of that codebase).
* the ld-tigcc-flashos-improvements branch (and its doc-related branch), which contain Patrick's modifications to improve the Flash OS support. That's what he's been using for PedroM for a while.

This work is also extremely incomplete (and also hasn't been touched in your SVN for 16 months):
PpHd (./5) :
You'll need this for Punix. It adds one option:

--flash-os-bss-start=$(FLASH_OS_BSS_START)
(usually FLASH_OS_BSS_START=0x5B00)
which defines the start address in RAM of the BSS section, allowing you to define global variables in .c files as usual.

While this is nice in theory, in practice it works quite poorly because the required work on optimization in the assemblers and the linker was not done: destination operands cannot be optimized by the linker (because the assemblers don't emit sufficient information for that into the object file), so whenever an absolute address is used as a destination operand, it will be coded in 4 bytes instead of 2 (which also slows down the code, in addition to making it larger).

The good old technique of using things like:
.equ variable1,0x5B00
.equ variable2,0x5B02

or:
#define variable1 *(short*)0x5B00
#define variable2 *(long*)0x5B02

(which PedroM had been using just fine for years) will still generate much more efficient code than your incompletely implemented syntactic sugar.

In addition, your patch also lacks support for the added linker switch in the IDE, in tprbuilder, and in the Delphi tigcc.exe.
It also adds new symbols:
__ld_archive_start: just after the end of the code, rounded up to a 64 KB boundary

This can simply be hardcoded with an equate or #define.
__ld_bss_even_end: just after the end of the RAM BSS section, rounded up to an even address

And I already elaborated on the "usefulness" of this one above, not to mention that it's only useful at all for Flash OSes in the presence of your incomplete --flash-os-bss-start feature.

7

That [improving the build scripts] is completely useless for the user.

Sure, stopping the build process at the first error (to prevent incomplete builds going unnoticed by the user, in the middle of the pile of terminal output created by the build process, and creating trouble later) is completely useless for the user.
So is automatic handling (application, protection against reapplication) of the TIGCC patch to GCC and binutils.
And so is the support for creating an SFX archive ready for use on the same OS flavour.
Yeah, sure ;)
And I don't see those changes as being needed or useful

Such a broad generalization would mean that, in your opinion, easing the process of making a release (by adding cross-compilation support) is unneeded or useless?
That line could also mean that you're dismissing the GCC4TI patches without having much of a clue...


Kevin, everybody already knows that you don't like the idea of GCC4TI (which wouldn't exist in the first place if you weren't such a roadblock to advancing a number of areas of the state of the art of a TI-68k development environment), the GCC4TI contributors (especially me), or the fact that people are using GCC4TI instead of TIGCC. You don't need to emphasize that at every opportunity, and you certainly don't need to blurt out stupidities...

multiple occurrences of "<FlashOS-related code> has not been touched for ... months in the GCC4TI SVN"
multiple occurrences of "<FlashOS-related code> is incomplete"

As I wrote:
So far, in GCC4TI, we have spent more time working on:
[...]
* features which are, er, used by a higher number of developers than the Flash OS support is.

It means what it reads ;)
A faster linker only matters a lot to OS programmers; it would not be that much of a life-changer for programmers of 32-64 KB programs, who have always lived with a 2-5 second linking stage, depending on their hardware.

8

And this is exactly why I haven't worked on this either, yet you're quick to accuse me of that… And it doesn't make sense to advertise a development branch which changes absolutely nothing for a user.

9

Lionel Debroux (./7) :
Sure, stopping the build process at the first error (to prevent incomplete builds going unnoticed by the user, in the middle of the pile of terminal output created by the build process, and creating trouble later) is completely useless for the user.
So is automatic handling (application, protection against reapplication) of the TIGCC patch to GCC and binutils.

All this is irrelevant once you have the stuff built, which just takes reading and following the instructions. And besides, end users should be using binary packages anyway (and blame Romain for discontinuing his debs; there were already no TIGCC packages in his repo by the time I finally removed it as unmaintained after he left. The full story: 1. those packages had been broken for months and he didn't care to fix them despite multiple nags from me, who received complaints about this from Debian users, 2. he blamed all the problems on the fact that my build system is not autotools, and 3. he refused to accept that TIGCC is not a Debian-native package and that he needs to do the packaging separately from the upstream source code, so he just removed the packages :-/).
And so is the support for creating a SFX archive ready for use on the same OS flavour.

SFX archives are a very poor substitute for proper packages. In fact, I even plan to discontinue my binary tarballs as of the next release. They tend to be very distro-specific anyway, at least for the first 6 months or so after a release, as no other distro ships as new a glibc as Fedora. (The glibc release schedule is explicitly synchronized with Fedora's, given that the glibc maintainer works for Red Hat.) And Fedora users are better off using the RPMs.
Such a broad generalization would mean that in your opinion, easing the process of making a release (by adding cross-compilation support) is unneeded or useless ?

Yes. Your changes don't actually make it easier for me to do a release. They don't solve the actual problems I have, like having to get the Delphi parts built somehow. In fact, even you have to boot into that inferior OS to build those parts; they cannot be cross-built. Other reasons I didn't do a Beta 9 release include unsolved release blockers, which aren't solved in your tree either, such as issues with the __attribute__((may_alias)) usage in ld-tigcc with current GCC, and the mere fact that there are no fixes important enough to rush out a release. The "problems" you're solving haven't been an obstacle to me making a release at all.

10

All this is irrelevant once you have the stuff built

The irrelevant thing is your comment: we're talking precisely about these scripts that build stuff, not about what happens "once you have the stuff built".
And besides, end users should be using binary packages anyway. SFX archives are a very poor substitute for proper packages. [...] [Binary tarballs] tend to be very distro-specific anyway

In the real world, many users cannot use proper binary packages, because those don't exist, due to extreme fragmentation of packaging schemes across distros, and because few people invest time in making a package for a specialist tool.
That state of affairs is precisely why build scripts with error handling (and even, as a second step, a proper unified build system for *nix, at the very least Makefile-based) are useful.
3. he refused to accept that TIGCC is not a Debian-native package and that he needs to do the packaging separately from the upstream source code, so he just removed the packages :-/).

I'd rather say 3. he made a mistake once or twice in the usage of that highly sucky CVS tool, committed the Debian files where he shouldn't have, and you used this as a pretext to remove his commit access (never mind that he was already working on TIGCC running on *nix before you came in) => no wonder he didn't keep packaging your stuff after that.
Yes. Your changes don't actually make it easier for me to do a release.

Don't be so centered on yourself. Even if they were useless to you (I can't see how having cross-compilation support built into the scripts, as opposed to having to run it on a tool-by-tool basis, would not help, but whatever), they're obviously useful to us, and that's what matters.
They don't solve the actual problems I have, like having to get the Delphi parts built somehow.

Well, you're the one to blame if this is a problem for you ;)
If you weren't such a pain to deal with, more people - starting with myself - would still be working with you. It takes years, but people get tired of someone refusing many ideas (technical matters, and life in general, are not black-and-white; they're shades of gray, see http://tichessteamhq.yuku.com/reply/32850/t/ttunpack-decompress-gray.html#reply-32850 ), disregarding user input (removing VTI support...), and otherwise mishandling a project that used to be cooperative.

11

Lionel Debroux (./10) :
The irrelevant thing is your comment: we're talking precisely about these scripts that build stuff, not about what happens "once you have the stuff built".

But that's precisely why those changes are useless in common usage.
I'd rather say 3. he made a mistake once or twice in the usage of that highly sucky CVS tool, committed the Debian files where he shouldn't have,

Committing the debian directories was not a mistake, it was intentional as per his commit messages. The other unrelated (and even more unwanted) changes he committed at the same time were the mistake. And he managed to do this not once, but twice! (If you screw up once, you need to fix your processes so you don't do it again. He still kept his unsafe practices such as building directly in his checkout rather than in an export.) And this after I told him right from the start that he shouldn't be committing debian directories to the upstream package in the first place (those directories were unwelcome themselves too), and reiterated this after the first accident. He just didn't care.
and you used this as a pretext to remove his commit access (no matter he was already working on TIGCC running on *nix before you came in)

It was not a pretext. He abused his commit privileges to commit unapproved changes, even to KTIGCC which has always been my project and not his. And he hadn't used them for anything other than those unwanted changes for years!
=> no wonder he didn't keep packaging your stuff after that.

I don't see this as "no wonder" at all, but as "my way or the highway" control-freakiness. Packaging metadata does not belong in the upstream SCM, period. But he only cared about what was convenient for him; he didn't give a darn about best practices or about the trouble caused to me (pollution of my cvs2cl-generated KTIGCC changelogs, stale debian directories at release time, etc.).
Don't be so centered on yourself. Even in case they'd be useless to you (I can't see how having cross-compilation support built-in the scripts, as opposed to having to run it on a tool-by-tool basis, would not help, but whatever), they're obviously useful to us, and that's what matters.

You can't blame me for not doing changes which are not useful to me to a process only I run (cross-compiling EXE binaries of TIGCC).
If you weren't such a pain to deal with, more people - starting with myself - would still be working with you. It takes years, but people get tired of someone refusing many ideas (technical matters, and life in general, are not black-and-white, they're shades of gray, see http://tichessteamhq.yuku.com/reply/32850/t/ttunpack-decompress-gray.html#reply-32850 ), disregarding user input (removing VTI support...), and otherwise mishandling a project that used to be cooperative.

Yawn, always the same bogus accusations. A maintainer can't say yes to everything; there are often decisions some people do not like, often for irrational reasons (e.g. there's no rational reason to prefer VTI to TiEmu or Emu-TIGCC, as those have all the features VTI had and more, including a C debugger!), but which are necessary for maintaining the project's quality and/or long-term maintainability.

12

Yawn, always the same bogus accusations.

And the same bogus reply of yours to our points...
there's no rational reason to prefer VTI to TiEmu

You're lying: multiple people have given you one rational reason for using VTI instead of TiEmu: old computers.
It's not like it would have been hard to keep VTI support alongside TiEmu support in the Delphi IDE...
maintaining the project's quality

Well, the facts show that you have rather low standards when it comes to the quality of TIGCC.
Merely compiling the examples showed no fewer than four sets of problems, one of which was due to an untested modification made by you 4 years ago (by a "set" of problems, I mean that e.g. the couple dozen name collisions between examples are counted as a single set, and so are the compilation warnings).
The buggy bsearch and the stupid shellsort implementation, which didn't have an example, didn't get caught until recently either.
and/or long-term maintainability.

Discouraging former contributors and potential contributors alike is not an appropriate way to achieve this goal ;)
The project's quality doesn't worsen immediately (in the long term, it will, due to bit-rotting) - but it won't improve either. Hardly anything gets done in TIGCC - not even things that you're always nagging other people about, such as optimizing for size (here, the library functions).

13

christop, if you need a more complete answer, don't hesitate to ask.

14

Don't hesitate indeed, you know my e-mail address. :)

15

Lionel Debroux (./12) :
e.g. the couple dozen name collisions between examples

A complete non-problem. When have I ever promised that the examples can be sent to the calculator at the same time? That's not at all what the examples are for! Those examples are documentation, more precisely code snippets, not programs which are useful by themselves. At most you'll want to run them once on an emulator to see what the code does and then discard the savestate. I don't see why they need unique on-calc names at all.
Discouraging former contributors and potential contributors alike is not an appropriate way to achieve this goal ;)

It's not my fault that some people (and this includes the whole GCC4TI team) can't get over their egos for the long-term benefit of the project!

Some of those people never tried to actually contribute, just to impose their design choices. As for you, Lionel, you at least did contribute stuff, a long time ago, but you did it in a form which is extremely hard to integrate into the existing project and never bothered trying to improve this situation.
The project's quality doesn't worsen immediately (in the long-term, it will due to bit-rotting) - but it won't improve either. Hardly anything gets done in TIGCC - not even things that you're always nagging other people for, such as optimizing for size (here, the library functions).

Optimizing away 2 or 4 bytes from some library function is of almost no practical use. It would be much more useful to work on e.g. the compiler, where you can save hundreds if not thousands of bytes on many programs.

It's also quite sad that my LZMA launcher work got little to no interest, when it would save kilobytes of archive memory for many programs.

16

e.g. the couple dozen name collisions between examples
A complete non-problem. [...]

You have a narrow-minded view of the examples, as serving purely documentation purposes (for a significant number of areas of the headers / library). While being examples is their primary purpose, they can also be used for testing purposes - manual testing first, potentially automated testing later (remember the topic where we brainstormed about special instruction sequences à la WTI for programs sending messages to the emulator?).
Using the examples for testing purposes yielded great results - the four sets of problems I wrote about above: do what you want in your software, but we are going to keep using them that way.
As for you, Lionel, you at least did contribute stuff, a long time ago, but you did it in a form which is extremely hard to integrate into the existing project and never bothered trying to improve this situation.

On the part I emphasized, you're outright lying. While it's a fact that my efforts remained unfinished (and therefore not useful to you) until after the end of my CS studies (and a rewrite in a programming language much more appropriate for the task, which I learned outside of school in between), I did try.

On your side, you don't want to use the results produced by the 100-ish SLOC of Perl code I wrote in 2008, waiting instead for some hypothetical perfect rewrite of the documentation tools (which, albeit TIGCC/GCC4TI-specific, have worked well enough) that would solve several shortcomings and add a new output format. Despite the informal spec, I have participated in a nascent rewrite (ask konrad and MathStuf if you don't believe me) of tools matching the current Update* workflow (though the "documentation-to-headers" approach could be questioned, as most projects use the opposite "code-to-documentation" approach), which didn't get far.
On my side, I have used the results produced by that small Perl program, and this has made it possible to integrate a number of documentation snippets. This is a much more pragmatic approach.
Optimizing away 2 or 4 bytes from some library function is of almost no practical use

Remember, in AMS native programs, startup code such as SAVE_SCREEN, the overly used __set_in_use_bit code, or other program support code such as the internal F-Line emulator gets duplicated across the vast majority of programs (that's what some people term "the stub of _nostub programs"). And we didn't just save "2 or 4 bytes": -16 bytes on SAVE_SCREEN (the code uses fewer registers, reducing side effects on other startup code bits), -10 bytes on the internal F-Line emulator.
So do library functions, though they are used by a lower proportion of programs: -22 bytes for OSV*Timer, -2 bytes for _rowread, -2 bytes for the VTI detection. Not to mention the sprite routines, into which you never bothered integrating even the changes that I contributed to TIGCC at the same time they were being made in ExtGraph, in 2002-2003, let alone Joey Adams' 2005 modifications and feature extensions.
On the one hand, you're pushing people to optimize their programs for size, but on the other hand, you aren't working on even the extremely low-hanging fruit. Do what I say, which is not what I do.
It would be much more useful to work on e.g. the compiler

But it's much harder, and:
where you can save hundreds if not thousands of bytes on many programs.

That's assuming that optimization for the m68k target improves as the GCC version number increases... and it doesn't. Definitely not in terms of speed (e.g. http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40454 ), and our own experience with GCC 4.0 and 4.1 hasn't been problem-free at all. Patrick can write more if he wants.
There are more pressing issues than spending a lot of time upgrading to a new compiler version (which is likely to yield worse results), and then spending another lot of time testing.

One of the primary goals wrt. testing would be to do a better job than you did yourself with GCC 4.0 and 4.1 before offering them to users... which shouldn't be too hard, given how low your standards were. You're always trolling against "Micro$oft", but you used their "methods".

It's also quite sad that my LZMA launcher work got little to no interest, when it would save kilobytes of archive memory for many programs.

Due to its large footprint, the only place where the LZMA decompression code could make real sense would be embedded in a generic launcher (ttstart or SuperStart). And its massive slowness (15-20 seconds to decompress a ~60 KB program, i.e. more than an order of magnitude slower than ttunpack and nearly an order of magnitude slower than ttunpack-super-duper-small) makes it significantly less desirable.
Sure, none of the ASM wizards seriously tried to optimize that code - but the raw size of the ASM decompression code, between 2 and 3 times larger than that of the initial ttunpack code, certainly acts as a deterrent.

17

PpHd (./5) :
Lionel Debroux (./3) :
* the ld-tigcc-flashos-improvements branch (and its doc-related branch), which contains a number of improvements to the Flash OS support, made by Patrick. That's what he's been using for PedroM for a while.


You'll need this for Punix. It adds one option:

--flash-os-bss-start=$(FLASH_OS_BSS_START)
(usually FLASH_OS_BSS_START=0x5B00)
which defines the start, in RAM, of the BSS section, allowing you to define global variables in .c files as usual.

It also adds new symbols:
__ld_archive_start: just after the end of the code, rounded up to a 64 KB boundary.
__ld_bss_even_end: just after the end of the RAM BSS section, rounded up to an even address.
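As a rough sketch, the option slots into a Flash OS Makefile like this (only the --flash-os-bss-start option itself comes from the patch; the target name, object list and other invocation details are illustrative):

```make
# Illustrative fragment; only --flash-os-bss-start is the new option.
FLASH_OS_BSS_START = 0x5B00

os.tib: $(OBJS)
	ld-tigcc --flash-os --flash-os-bss-start=$(FLASH_OS_BSS_START) -o $@ $(OBJS)
```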


Yes, this is the kind of change I'm interested in. Thanks.

Are static variables (both local to a function and local to a source file (or "compilation unit")) also stored in the BSS section? From what I understand about C, they should be.

It'll take me some time to completely convert my code to use real global variables from the pseudo-global variables (members inside a single "globals" struct named "G") that TIGCC forced upon my OS. It will allow me to interact with C more easily from assembly, and I can easily rewrite some time-critical code in asm (most notably the audio driver's interrupt handler). Actually, I can probably convert a module at a time, and declare "G" as a regular global variable. I would also have to change the start of the heap, but those are the only changes that I see as necessary.
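That conversion can be sketched in C like this (a minimal sketch; the member names are made up, not Punix's actual globals):

```c
/* Before: TIGCC-era pseudo-globals, all packed into one struct. */
struct globals { unsigned long ticks; int volume; };
static struct globals G;

/* After: real globals (placed in BSS by --flash-os-bss-start), each
 * directly addressable by name from assembly. */
unsigned long ticks;
int volume;

/* During the module-at-a-time conversion, code can update either form. */
static void timer_tick(void)
{
    G.ticks++;  /* old style, via the "G" struct */
    ticks++;    /* new style, a plain global */
}
```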

18

Also (this is off-topic) can anyone really edit someone else's post? I see an "editer" link on every post, and clicking on one gives me a page to edit the post. I haven't actually submitted any changes to anyone's post (I'm a nice guy tongue) so I don't know if it would let me submit changes.

19

Are static variables (both local to a function and local to a source file (or "compilation unit")) also stored in the BSS section? From what I understand about C, they should be.

If they're not explicitly initialized, they will be in the BSS section.
If they're explicitly initialized and not const, they will be in the .data section (-> RAM).
If they're explicitly initialized and const, they will be in the .rodata section (-> Flash).
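In C terms, those three rules look like this (a minimal sketch; the variable names are made up, and the section comments describe the Flash OS placement stated above):

```c
int counter;                  /* not explicitly initialized -> .bss (RAM, zeroed) */
int threshold = 42;           /* initialized, non-const     -> .data             */
const int limits[2] = {1, 2}; /* initialized, const         -> .rodata (Flash)   */

/* File-local (and function-local) statics follow the same rules: */
static int file_hits;         /* -> .bss */
```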
Also (this is off-topic) can anyone really edit someone else's post? I see an "editer" link on every post, and clicking on one gives me a page to edit the post. I haven't actually submitted any changes to anyone's post (I'm a nice guy tongue ) so I don't know if it would let me submit changes.

Nope, it wouldn't let you submit changes, since you don't have moderator power smile
(On this section of yAronet, Folco, PpHd and myself have a green "+" signaling local moderator status, and Godzil has a green "@" signaling local admin status. The global admins have a red "@"; yAro has the blue "@" signaling "owner" privileges. Boo also has a blue "@", but it's a bot signaling the number of connected people, alerting people by private messages upon a ! call, starting forked topics, etc.)


Since you have an ancient computer, adding an actual tree implementation to the ld-tigcc-optimization branch would speed up linking quite a lot for you.
We were considering red-black trees, since they stay reasonably well balanced (their height is at most twice the optimal height, vs. about 1.44 times optimal for an AVL tree) at a limited computational cost. There's a GPL'ed implementation of rbtrees in Linux, and I found some auxiliary functions building on the Linux implementation through Koders.com.
We'll help you with testing, if not more.
Member of the TI-Chess Team.
Co-maintainer of GCC4TI (GCC4TI online documentation), TIEmu and TILP.
Co-admin of TI-Planet.

20

I'd suggest using AVL trees with doubly-linked nodes. That would allow using the same list operations for read-only access; only insertions and removals would have to use tree operations, and of course the few portions which do lookups by position would benefit from switching to tree operations. All the remaining code could stay as is. Basically, I'd use the trees as an additional data structure for fast positional lookup, not as a replacement for our linked lists. It's faster to traverse a list than to traverse a tree linearly.
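The hybrid node could look like this (a sketch; field names are illustrative, not ld-tigcc's actual structures):

```c
/* Tree links give O(log n) positional lookup/insert/remove; the list
 * links keep linear traversal as cheap as it is today. */
struct node {
    struct node *left, *right;  /* AVL tree links */
    int balance;                /* AVL balance factor */
    struct node *prev, *next;   /* doubly-linked list links */
    /* ... payload ... */
};

/* List maintenance stays O(1) alongside the tree operations: */
static void list_insert_after(struct node *pos, struct node *n)
{
    n->prev = pos;
    n->next = pos->next;
    if (pos->next)
        pos->next->prev = n;
    pos->next = n;
}
```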

(I'll reply to the remaining stuff later, I have an IRC meeting to attend now.)
My TI calculator news: Ti-Gen
My PC projects for TI calculators: TIGCC, CalcForge (CalcForgeLP, Emu-TIGCC)
My IRC channels: #tigcc and #inspired on irc.freequest.net (UTF-8)

Liberté, Égalité, Fraternité

21

Lionel Debroux (./16) :
You have a narrow-minded view of the examples, as serving purely documentation purposes (for a significant number of areas of the headers / library). While being examples is their primary purpose, they can also be used for testing purposes - manual testing first, potentially automated testing later (remember the topic where we brainstormed about special instruction sequences ala WTI for programs sending messages to the emulator?).

Still, it doesn't make sense to complain about a "set of problems" (those on-calc name conflicts) which is purely due to trying to use the examples for something they were not designed for.
As for you, Lionel, you at least did contribute stuff, a long time ago, but you did it in a form which is extremely hard to integrate into the existing project and never bothered trying to improve this situation.

On the part I emphasised, you're outright lying. While it's a fact that my efforts remained unfinished (and therefore not useful to you) until after the end of my studies in CS (and a rewrite in a programming language much more appropriate for the task, which I learnt outside of school in-between), I did try.
On your side, you don't want to use the results produced by the 100-ish SLOC of Perl code I wrote in 2008

Sorry, that's not what I thought of as "trying to improve this situation". I was thinking about improving the actual patchset, splitting it into pieces which can be individually reviewed and merged.
waiting for some hypothetical perfect rewrite of the documentation tools (which, albeit TIGCC/GCC4TI-specific, have worked well enough) which would solve several shortcomings and add a new output format.

Indeed, that would be the proper solution. Your Perl script 1. is unsafe, as it doesn't fully understand the file format and 2. does entirely the wrong thing, adding hardcoded header references instead of removing them, i.e. the exact opposite of what we want.
Despite the informal spec, I have
participated in a nascent rewrite (ask konrad and MathStuf if you don't believe me) of tools matching the current Update* workflow (though the "documentation-to-headers" approach could be questioned, most projects are usually using the opposite "code-to-documentation" approach), which didn't get far.

That just shows that you guys either are not competent enough or didn't spend enough time on it. I'll probably end up having to write it all myself as usual. sad And then people ask why I tend to work alone so much… roll
On my side, I have used the results produced by that small Perl code, and this has made it possible to integrate a number of documentation snippets.

What also made it possible is that you have just added your own snippets because you think they are perfect. They're not. There are plenty of things needing fixing in them. So they need to be split into small reviewable pieces (see e.g. the LKML patch submission guidelines to get an idea of how I expect them to look) which can be proofread, tested where they include code (e.g. address hacks), fixed and merged individually.
This is a much more pragmatic approach.

s/pragmatic/short-term/g roll
You never care about the long term benefits, you're always happy with quick hacks which are less work in the short term, but will lead to much more work in the long run. If you believe there will be no "long run", as you have given to understand on several occasions when I pointed out your lack of long-term vision, then why are you maintaining that project in the first place? In the short term, the existing TIGCC 0.96 Beta 8 is just fine.
Remember, in AMS native programs, startup code such as SAVE_SCREEN, the overly used __set_in_use_bit code, or other program support code such as e.g. the internal F-Line emulator, get duplicated
across the vast majority of programs (that's what some people term "the stub of _nostub programs"). And we didn't just save "2 or 4 bytes": -16 bytes on SAVE_SCREEN (the code uses fewer registers, reducing side effects on other startup code bits), -10 bytes on the internal F-Line emulator. So do library functions, but they are used by a lower proportion of programs: -22 bytes for OSV*Timer, -2 bytes for _rowread, -2 bytes for the VTI detection. Not to mention sprite routines, on which you never bothered to integrate even changes that I contributed to TIGCC at the same time they were being made in ExtGraph, in 2002-2003, let alone Joey Adams' 2005 modifications + feature extensions.

That code which ends up everywhere is also code which risks breaking a huge number of programs if it's incorrect and so necessitates a lot of QA. It requires checking the entire startup code to see if the changes in register usage don't cause trouble, as sometimes startup sections depend on each other's results to save space. It should also be tested in multiple configurations. So when I ask "what testing have you done?" and I get a "none, it's obviously correct", also showing that the patch author is unfamiliar with the workings of the TIGCCLIB startup code (which is extremely sensitive to register assignments), that doesn't sound confidence-inspiring at all.

Now if the patch fixes some actual bug, I might turn a blind eye to the testing requirements (you have already noticed that I don't believe testing to be the solution to all problems), but for a minor optimization, the benefits are extremely low, so I don't want to take any risks. I have already been burned twice by an "optimization" which broke things (in both cases, it was submitted by you), and GCC4TI has been burned by one too (which was again yours, putting your "broken optimization" count at 3). It's better to have negligibly larger, but perfectly working code than to have "optimized" code which doesn't work.
On the one hand, you're pushing people to optimize their programs for size, but on the other hand, you aren't even working on the extremely low-hanging fruit. Do as I say, not as I do.

Because those "extremely low-hanging fruit" are the ones with the highest risk of regressions, and they save so little space that they have little to no practical benefits. And I'm not convinced you tested those changes adequately before merging them. In fact, you definitely didn't test the sprite routines, as those didn't work at all in your release. Yet you're always quick to accuse me of not testing stuff, even when I cannot test it myself due to not having the required hardware.
It would be much more useful to work on e.g. the compiler

But it's much harder, and:
where you can save hundreds if not thousands of bytes on many programs.
That's assuming that the optimization on the m68k target is improving as the GCC version number is increasing... and it isn't.

Work on the compiler is not necessarily limited to upgrading to a new version (but of course that's part of it). For example, I did some tuning of my own: tuning 68k instruction costs under -Os so GCC uses multiplication and division instructions instead of longer shift and add sequences, using linear test and jump sequences for sparse switches instead of balanced trees under -Os (because trees require more jumps and thus more space), additional 68k peepholes which improve both size and speed.
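For illustration, a sparse switch of the kind that -Os tuning affects (a made-up example, not from the TIGCC sources):

```c
/* With sparse, widely-spread case values, a jump table is impractical;
 * the tuning described above makes -Os emit a short linear chain of
 * compare-and-branch instructions instead of a balanced tree of
 * compares, trading worst-case speed for code size. */
int classify(int key)
{
    switch (key) {
    case 3:    return 1;
    case 97:   return 2;
    case 450:  return 3;
    case 9001: return 4;
    default:   return 0;
    }
}
```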
Definitely not in terms of speed ( e.g. http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40454 ), and our own experience with GCC 4.0 and 4.1 hasn't been problem-free at all. Patrick can write more if he wants.

My size measurements have shown program sizes going down on the vast majority of programs.
There are more pressing issues than spending a lot of time upgrading to a new compiler version (which is known to be likely to yield worse results), and then spending another lot of time testing.

Well, I don't consider the same issues "pressing" as you do at all, sorry. Saving 2 bytes in some library function sure isn't "pressing". Upgrading GCC would also automatically bring us features users (including you!) have been asking for, such as being able to set optimization switches per file, without even having to touch the IDE. Instead of some custom UI significantly complicating the IDE's code and the TPR project format, we'd get #pragmas for this purpose which work across all platforms supported by GCC - a much better solution.
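For instance, with a GCC recent enough (the optimize pragma appeared in GCC 4.4, so this is hypothetical for the 4.1-based toolchain discussed here), a per-file optimization level needs no IDE support at all:

```c
/* Hypothetical for the 4.1-based toolchain; GCC >= 4.4 honors
 * file-level optimization pragmas like this one. */
#pragma GCC optimize ("Os")

int square(int x)
{
    return x * x;
}
```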

And again, if you're planning to stay on GCC 4.1 forever, that's quite short-term thinking.
One of the primary goals wrt. testing would be to do a better job testing than you did yourself with GCC 4.0 and 4.1 before proposing them to users... which shouldn't be too hard, given how low your standards were.

… says the author and committer of the sprite routine "optimizations" which didn't work at all! rotfl

My GCC updates at least worked on the programs I tested them on!
You're trolling all the time against "Micro$oft", but you used their "methods".

Unlike them, I didn't label my testing versions "releases". smile And anyway, not everything they do is necessarily bad.
Due to its large footprint, the only place where the LZMA decompression code could make real sense would be if embedded into a generic launcher (ttstart or SuperStart).

This is nonsense. The compression is so much stronger that for any program of sufficient size, where "sufficient" is around 20 KB uncompressed, the space saved in the PPG already compensates for the larger launcher. E.g., after adding the launcher size, my Backgammon game is already smaller when compressed with LZMA than with pucrunch. (Let's call pucrunch by its name, "ttpack" is just pucrunch.)
And its massive slowness (15-20 seconds to decompress a ~60 KB program, i.e. more than an order of magnitude slower than ttunpack and nearly an order of magnitude slower than ttunpack-super-duper-small) makes it significantly less desirable.

That's a startup-only cost and has no effects whatsoever on runtime speed, i.e. gameplay (for games) or usability (for utilities).
christop (./17) :
It'll take me some time to completely convert my code to use real global variables from the pseudo-global variables (members inside a single "globals" struct named "G") that TIGCC forced upon my OS. It will allow me to interact with C more easily from assembly, and I can easily rewrite some time-critical code in asm (most notably the audio driver's interrupt handler). Actually, I can probably convert a module at a time, and declare "G" as a regular global variable. I would also have to change the start of the heap, but those are the only changes that I see as necessary.

A better solution than your structure, and one which works with the current TIGCC, is to have an assembly header of equates:
.equ var1,0x5B00
.equ var2,0x5B02
.equ var3,0x5B06
…

and to .include that into all .s files, and via a top-level asm(".include ...") into all .c files. Then you can just declare your variables as extern in a C header. Alternatively, you could have a C header of the form:
#define var1 (*(short*)0x5B00)
#define var2 (*(long*)0x5B02)
#define var3 (*(unsigned short*)0x5B06)
…

and you could generate the assembly header (or even both a GNU as and an A68k header) from that C header by a simple sed invocation.
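Such a sed invocation could look like this (a sketch; the file names are illustrative, and it assumes every line follows the exact (*(type*)0xADDR) pattern shown above):

```shell
# Create a tiny example header (illustrative names), then convert it.
cat > globals.h <<'EOF'
#define var1 (*(short*)0x5B00)
#define var2 (*(long*)0x5B02)
#define var3 (*(unsigned short*)0x5B06)
EOF

# Turn '#define name (*(type*)0xADDR)' into '.equ name,0xADDR'.
# Lines not matching the exact pattern are silently skipped (-n + p).
sed -n 's/^#define \([A-Za-z_][A-Za-z0-9_]*\) (\*(.*)\(0x[0-9A-Fa-f][0-9A-Fa-f]*\))$/.equ \1,\2/p' globals.h > globals.inc
```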

That way, you don't need the syntactic sugar in the linker and you'll automatically get the smaller and faster short references when you use those variables as destination operands, which doesn't work with PpHd's current linker patch.
My TI calculator news: Ti-Gen
My PC projects for TI calculators: TIGCC, CalcForge (CalcForgeLP, Emu-TIGCC)
My IRC channels: #tigcc and #inspired on irc.freequest.net (UTF-8)

Liberté, Égalité, Fraternité

22

christop (./17) :
Yes, this is the kind of change I'm interested in. Thanks.
Are static variables (both local to a function and local to a source file (or "compilation unit")) also stored in the BSS section? From what I understand about C, they should be.


Provided that they are not initialised to a value, yes.

The only problem is that if you write:
int x = 2;
as a global, x will be mapped to the ROM code, making it impossible to change.

23

Your Perl script 1. is unsafe, as it doesn't fully understand the file format

Bullshit: the output of UpdateInclude before and after the change is identical (and yes, I renamed the output folder containing the files generated before the change, created and initialized a new output folder, launched UpdateInclude, and executed diff after that).
and 2. does entirely the wrong thing, adding hardcoded header references instead of removing them, i.e. the exact opposite of what we want.

As I've already written you in a previous occurrence of this discussion, modifying the script (which I publicly posted on the corresponding GCC4TI ticket, http://trac.godzil.net/gcc4ti/ticket/3 ) to remove header references (when we have - if ever - a tool capable of working correctly without header references) instead of adding them is trivial.
IOW, your "it will create more work in the long-term" argument is totally irrelevant for that change wink
That just shows that you guys either are not competent enough or didn't spend enough time on it.

Tell that to MathStuf and konrad.
More than a year ago, I sent them a small patchset that greatly improved parsing of .hs* files (much fewer files were unparseable after that). AFAICS, it didn't get integrated, and no other commits have been made to the tree since then.
The only convincing use case you gave for a rewrite, switching from CHM to some Qt Doc format (thereby reducing the number of computers that can read the documentation out of the box, BTW), was motivated by the fact that you are unable to generate CHM files yourself, because you refuse to use Windows.
Notice that while I don't have that problem myself (I have multiple legal Windows licenses, for the infrequent process of building the Delphi tools and making a CHM file), and while it hasn't been proved to us that the "doc-to-headers" approach is the soundest one (when rewriting a tool completely, even this kind of fundamental thing can be re-evaluated), I nevertheless participated in the rewrite of a new documentation tool wink
What also made it possible is that you have just added your own snippets because you think they are perfect. They're not. There are plenty of things needing fixing in them. So they need to be split into small reviewable pieces (see e.g. the LKML patch submission guidelines to get an idea of how I expect them to look) which can be proofread, tested where they include code (e.g. address hacks), fixed and merged individually.

Get a clue about 1) the way I merged a subset of the snippets and 2) the fate of most Address/Value Hacks (as you're aware, we already went through that discussion months ago, see topics/115787-gcc4ti-manifesto#15 ).
In fact, you definitely didn't test the sprite routines, as those didn't work at all in your release.
My GCC updates at least worked on the programs I tested them on!

You're re-posting the same crap that I showed to be false in a previous occurrence of a discussion on that particular point...
I already wrote that I definitely did test them (and they did work for me, obviously), using, as you're aware, Joey Adams' exhaustive tester program. The problem was, as you're aware, an interface mismatch between the definitions of the sprite routines' prototypes embedded in the program (I kept that of Joey's routines), and those (without explicit register names...) of the headers. As you're aware, I caught the bug myself before someone else reported it, and as you're aware, I made the erratum on the GCC4TI site.
For example, I did some tuning of my own: [...], additional 68k peepholes which improve both size and speed.

Also known as the root cause of a newer build of TIGCC-GCC turning a valid program (ebook) into a crasher (while, IIRC, not being enough to counter the size increase due to reduced optimization yielded by other changes to the compiler). What's more, that particular peephole is less powerful than the corresponding one in GTC.
But peepholes in general are definitely useful, it's a fact.
And again, if you're planning to stay on GCC 4.1 forever, that's quite short-term thinking.

As shown in the corresponding ticket ( http://trac.godzil.net/gcc4ti/ticket/39 ), nothing is set in stone in a way or another.



ACK for LZMA saving more space than I remembered (though the sample of programs you applied it to was pretty small).
That does not, however, change the fact that multiple specific launchers (pstarters) stink in at least two aspects:
* space savings: ttstart is less than 15% larger than the pstarter containing the same ppg decompression routine, and 50% or so larger if ttstart contains the faster ppg decompression routine. It follows that ppg pstarters take up more space than a ppg ttstart as soon as there's more than one on a given calculator;
* compatibility with newer models (the joy of pstarters that don't work on 89T, while the compressed program that they launch could - Zeljko/TICT's advint is one of them)
while providing no ease-of-use advantage wrt. SuperStart's home screen line integration.
And besides home screen line integration, SuperStart itself has two technical advantages over ttstart and pstarters, namely near-zero RAM consumption in operation (as opposed to 1 KB), and being a (small) FlashApp enables it, in most cases, not to take space in archive memory.
But we've already gone through that discussion many times.
And its massive slowness (15-20 seconds to decompress a ~60 KB program, i.e. more than an order of magnitude slower than ttunpack and nearly an order of magnitude slower than ttunpack-super-duper-small) makes it significantly less desirable.
That's a startup-only cost and has no effects whatsoever on runtime speed, i.e. gameplay (for games) or usability (for utilities).

You're stating an extremely obvious fact here. En français, "tu enfonces une porte ouverte" (literally "you're kicking in an open door", i.e. stating the obvious).
So what? The fact that it's a startup-only cost does not erase another fact, also relevant to users: it's massively slow.
Remember, once upon a time, we were criticizing the slowness of the ttunpack-super-duper-small routine unconditionally used in the specific launchers generated by TIGCC. And that routine was much smaller and much faster than the LZMA routine is, so guess what happens with LZMA...
Also, remember that criticism about the slowness of ttunpack-super-duper-small stopped when Samuel Stearley made a breakthrough that doubled its speed without changing its size. After that, I clearly remember writing something along the lines of "it's much more acceptable now": indeed, the fact was, ttunpack-super-duper-small had become nearly as fast as the routine used before Greg Dietsche, myself, and then Samuel Stearley worked on it, while being less than half the size. And I simultaneously stopped requesting the feature of providing users with a choice between the small and the fast (more than twice as large, more than twice as fast) decompression routines in TIGCC.
Member of the TI-Chess Team.
Co-maintainer of GCC4TI (GCC4TI online documentation), TIEmu and TILP.
Co-admin of TI-Planet.

24

Lionel Debroux (./19) :
Are static variables (both local to a function and local to a source file (or "compilation unit")) also stored in the BSS section? From what I understand about C, they should be.

If they're not explicitly initialized, they will be in the BSS section.
If they're explicitly initialized and not const, they will be in the .data section (-> RAM).
If they're explicitly initialized and const, they will be in the .rodata section (-> Flash).

PpHd (./22) :

Provided that they are not initialised to a value, yes.

The only problem is that if you write:
int x = 2;
as a global, x will be mapped to the ROM code, making it impossible to change.


Which is it? Are initialized global variables put in the .data section, and is the .data section in RAM or FlashROM?

If an initialized global variable is in RAM, where is its initialized value stored? Would I have to write startup code to copy the initial values from FlashROM to the appropriate locations in RAM?

On the other hand, if the .data section is stored in FlashROM, are there plans to add an option to the linker to put it in RAM, like with BSS?

25

christop (./24) :

Which is it? Are initialized global variables put in the .data section, and is the .data section in RAM or FlashROM?

If an initialized global variable is in RAM, where is its initialized value stored? Would I have to write startup code to copy the initial values from FlashROM to the appropriate locations in RAM?
On the other hand, if the .data section is stored in FlashROM, are there plans to add an option to the linker to put it in RAM, like with BSS?


They are put in the .data section, which is stored in Flash for a Flash OS target (mainly for historical reasons: the .text and .data sections were more or less both used to store code).
Because we are talking about a Flash OS, which should assume that the RAM is not initialized when it boots and whose job it is to initialize everything, I don't think it is possible to initialize a value stored in .data for a Flash OS in a simple way; I'm not even sure it has a meaning.
Even if we added an option to put the .data section in RAM, it wouldn't work without the help of the written OS, so what can we do? Maybe duplicate the data in RAM and in Flash, and let the OS initialize it with the requested values at startup, with the linker exporting additional symbols?
For me, the simpler thing is to write an initialization function which sets the variables to their values at startup, which is a more elegant design too.
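PpHd's suggested approach, sketched in C (the variable names and startup values are made up):

```c
/* Leave the globals without initializers so they land in .bss (RAM),
 * and give them their startup values from an explicit init function
 * that the OS calls early at boot. */
int contrast;
unsigned long timer_rate;

void globals_init(void)
{
    contrast = 12;
    timer_rate = 8192;
}
```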

26

PpHd (./22) :
Provided that they are not initialised to a value, yes.

The only problem is that if you write:
int x = 2;
as a global, x will be mapped to the ROM code, making it impossible to change.

That's another missing feature in your patch. (But this one can be added incrementally after the BSS support.) We really need to support separate .data and .rodata, where .data is treated similarly to .bss, but initialized by copying a block stored next to .text and .rodata, for FlashROM code (and probably also as an option for RAM code, to have a fully ISO C compatible mode, without the "globals retain value" glitch).
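The startup-side half of that scheme is just a copy loop run once at boot: the linker would store an initialization image of .data next to .text/.rodata in Flash and export its bounds. The symbol names mentioned in the comment are hypothetical, not actual ld-tigcc exports:

```c
#include <stddef.h>

/* Hypothetical linker-exported bounds (NOT actual ld-tigcc symbols):
 * extern char __ld_data_load_start[], __ld_data_start[], __ld_data_end[];
 * Startup code would call:
 * copy_data_image(__ld_data_load_start, __ld_data_start,
 *                 __ld_data_end - __ld_data_start); */
static void copy_data_image(const unsigned char *flash_src,
                            unsigned char *ram_dst, size_t len)
{
    while (len--)
        *ram_dst++ = *flash_src++;
}
```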
My TI calculator news: Ti-Gen
My PC projects for TI calculators: TIGCC, CalcForge (CalcForgeLP, Emu-TIGCC)
My IRC channels: #tigcc and #inspired on irc.freequest.net (UTF-8)

Liberté, Égalité, Fraternité

27

[I'll reply to ./23 later because it's long.]
PpHd (./25) :
Because we are talking about a Flash OS, which should assume that the RAM is not initialized when it boots and whose job it is to initialize everything, I don't think it is possible to initialize a value stored in .data for a Flash OS in a simple way; I'm not even sure it has a meaning.
Even if we added an option to put the .data section in RAM, it wouldn't work without the help of the written OS, so what can we do? Maybe duplicate the data in RAM and in Flash, and let the OS initialize it with the requested values at startup, with the linker exporting additional symbols?

That's yet another reason why flashos.a should contain some common initialization code (which would also initialize the hardware), an idea which you are vehemently against for no particular reason.
My TI calculator news: Ti-Gen
My PC projects for TI calculators: TIGCC, CalcForge (CalcForgeLP, Emu-TIGCC)
My IRC channels: #tigcc and #inspired on irc.freequest.net (UTF-8)

Liberté, Égalité, Fraternité

28

Lionel Debroux (./23) :
Bullshit: the output of UpdateInclude before and after the change is identical (and yes, I renamed the output folder containing the files generated before the change, created and initialized a new output folder, launched UpdateInclude, and executed diff after that).

That only proves that it doesn't break on the current contents of the .hs? files, it could still choke on future valid files. (But the fact that you verified this does make me more confident about possibly using your scripts as a temporary stopgap solution.)
As I've already written you in a previous occurrence of this discussion, modifying the script (which I publicly posted on the corresponding GCC4TI ticket, http://trac.godzil.net/gcc4ti/ticket/3 ) to remove header references (when we have - if ever - a tool capable of working correctly without header references) instead of adding them is trivial.

I guess we could also make a version which only strips out the header where it is the current header? (So we could run your script, move the docs we want to move, then run the modified one.)
IOW, your "it will create more work in the long-term" argument is totally irrelevant for that change wink

The very work of creating the script was "more work", if you consider the long run where the rewrite needs to happen anyway.
The only convincing use case you gave for a rewrite, switching from CHM to some Qt Doc format (thereby reducing the number of computers that can read the documentation out of the box, BTW), was motivated by the fact that you are unable to generate CHM files yourself, because you refuse using Windows.

Also because I need to generate the Free format anyway for KTIGCC for *nix, so we might just as well also use it with KTIGCC/W32 which is going to come at some point. I really don't want to have to fight with the platform-specific HTML Help API and #ifdefs! Right now, I can only generate the CHM sources and then I need to run a converter I wrote as a quick hack to convert this to the Qt Assistant ADP format (which isn't even the latest one, I'll need to update the stuff to support the new QCH format). That converter is extremely slow. It would be much nicer to be able to generate the intended target format(s) (the tools should be flexible and allow more than one) directly.

In addition, the existing tools are in Delphi, which forces me to use WINE to run them and which makes it extremely hard to do any changes to them (I haven't made a single change to those tools because of this). It also means it is impossible to regenerate the documentation with only built-from-source Free Software, you need prebuilt Delphi binaries.
Get a clue about 1) the way I merged a subset of the snippets

You merged them using your Perl script to handle the cross-references, which I consider to be a bad solution because it leaves the docs with more redundant crap after running it.
and 2) the fate of most Address/Value Hacks

Just dropping those is a poor solution. In fact, it's the most useful part of your contributions: most of the prototypes are already there in unknown.h, and the documentation is just documentation; it can be consulted directly from contrib.zip (and some of the functions are also documented in TI's PDF).
I already wrote that I definitely did
test them (and they did work for me, obviously), using, as you're aware, Joey Adams' exhaustive tester program. The problem was, as you're aware, an interface mismatch between the definitions of the sprite routines' prototypes embedded in the program (I kept that of Joey's routines), and those (without explicit register names...) of the headers.

So you didn't test them in the obvious way (which is how I'd have tested them): create a new project, add #include <tigcclib.h> and draw a sprite!
As you're aware, I caught the bug myself before someone else reported it,

But too late: you had already released it.

And you didn't catch the 2 broken optimizations you submitted to TIGCC in the past, my users did.
For example, I did some tuning of my own: [...], additional 68k peepholes which improve both size and speed.
Also known as the root cause of a newer build of TIGCC-GCC turning a valid program (ebook) into a crasher

A bug which was fixed eons ago (within a day of your reporting it!) and which was only in GCC builds explicitly labeled as prereleases for testing purposes (and which was present for only about 10 days!). It was never in any official (beta or otherwise) release of TIGCC. You seem to have completely misunderstood the purpose of a pre-beta testing prerelease. Unlike you, I believe in public development and public testing. That is what those prereleases were for. The bug also didn't end up in any released eBook Reader. Yet you keep repeating it.

You (and the community as a whole) were expected to download those testing builds, try them out on your own programs and report any issues with them. You did. I was supposed to fix those issues and make fixed testing builds available ASAP. And that's exactly what I did (within hours of the bug report!). So where's your problem? This is how public testing works! It should be obvious that I can't test the whole TI-89/89Ti/92+/V200 software base all by myself! More testing would just have delayed getting the fixes out.

In addition, do I really have to remind you that you were the one repeatedly pestering me to write those peepholes until I did it?
What's more, that particular peephole is less powerful than the corresponding one in GTC.

You're free to write a better one. Now that you are the most active developer of GCC4TI, it's time to put your coding hands where your mouth is!
And again, if you're planning to stay on GCC 4.1 forever, that's quite short-term thinking.

As shown in the corresponding ticket ( http://trac.godzil.net/gcc4ti/ticket/39 ), nothing is set in stone one way or another.

But the facts are that nobody is working on this in your supposedly "active" project.
ACK for lzma saving more space than I remembered (though the sample of programs you applied it to was pretty small).
That does not, however, change the fact that multiple specific launchers (pstarters) stink in at least two aspects:
* space savings: ttstart is less than 15% larger than the pstarter containing the same ppg decompression routine, and 50% or so larger if ttstart contains the faster ppg decompression routine. It follows that ppg pstarters take up more space than a ppg ttstart as soon as there's more than one on a given calculator;
* compatibility with newer models (the joy of pstarters that don't work on 89T, while the compressed program that they launch could - Zeljko/TICT's advint is one of them)
while providing no ease-of-use advantage with respect to SuperStart's home screen line integration.
And besides home screen line integration, SuperStart itself has two technical advantages over ttstart and pstarters, namely near-zero RAM consumption in operation (as opposed to 1 KB), and the fact that being a (small) FlashApp enables it, in most cases, not to take up space in archive memory. But we've already gone through that discussion many times.
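The space arithmetic in that bullet can be sketched numerically. This is a minimal illustration with placeholder byte counts: PSTARTER_SIZE and the derived sizes are assumptions, only the quoted percentages ("less than 15%" and "50% or so" larger) are taken from the post.

```python
# Break-even sketch for "N per-program launchers vs. one shared launcher".
# All byte counts are made-up placeholders, NOT measured sizes; only the
# percentages come from the argument above.

PSTARTER_SIZE = 1000                        # hypothetical size of one pstarter
TTSTART_SMALL = PSTARTER_SIZE * 115 // 100  # "less than 15% larger" variant
TTSTART_FAST = PSTARTER_SIZE * 150 // 100   # "50% or so larger" variant

def launcher_bytes(n_programs: int, shared_launcher: int) -> tuple[int, int]:
    """Total launcher bytes: N per-program pstarters vs. one shared launcher."""
    return n_programs * PSTARTER_SIZE, shared_launcher

# With a single program, the per-program launcher uses less space; from the
# second program on, the single shared launcher is already smaller.
for n in (1, 2, 3):
    per_program, shared = launcher_bytes(n, TTSTART_SMALL)
    print(f"{n} program(s): pstarters={per_program}B, ttstart={shared}B")
```

Whatever the real sizes are, the break-even point follows from the percentages alone: as soon as two or more compressed programs sit on the same calculator, N pstarters outweigh one shared launcher.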

But custom launchers are the only way to have a compressed program which "works out of the box" without having to install a kernel-like app such as SuperStart.
And what? That it's a startup-only cost does not negate another fact, also relevant to users: that it's massively slow.

The program is just as fast as always. Its startup is slow, but that's not a practical problem at all.
Remember, once upon a time, we were criticizing the slowness of the ttunpack-super-duper-small routine unconditionally used in the specific launchers generated by TIGCC. And that routine was much smaller and much faster than the LZMA routine is, so guess what happens with LZMA...

Thankfully, my threatening to switch to LZMA stopped the complaints about ttunpack-small. tongue

That threat also helped getting the pucrunch licensing issues straightened out. smile
Mes news pour calculatrices TI: Ti-Gen
Mes projets PC pour calculatrices TI: TIGCC, CalcForge (CalcForgeLP, Emu-TIGCC)
Mes chans IRC: #tigcc et #inspired sur irc.freequest.net (UTF-8)

Liberté, Égalité, Fraternité

29

That only proves that it doesn't break on the current contents of the .hs? files, it could still choke on future valid files.

That remains to be seen. And the likelihood of this happening is reduced by the fact that I'm careful not to introduce any file that does not have explicit header references. Just for completeness, I run the script once in a while, and so far, it hasn't changed a single line.
BTW, I had already posted, in http://trac.godzil.net/gcc4ti/ticket/3 and IIRC somewhere else, how I had checked the operation of the script...
I guess we could also make a version which only strips out the header where it is the current header?

The script does not contain any licensing or even any copyright information. Do What The Fuck You Want To with it.
I wrote as a quick hack to convert this to the Qt Assistant ADP format (which isn't even the latest one, I'll need to update the stuff to support the new QCH format). That converter is extremely slow.

A full run of the Update* programs (the longest one being by far UpdateInclude) plus your converter takes about three minutes on my computer. And it's all single-threaded stuff executed sequentially. Not sure that qualifies as "extremely slow". Remember, that's less time (on the same computer) than the multi-threaded compilation + test suite of the infrastructure I've intermittently been working on for two years at my previous job.
It also means it is impossible to regenerate the documentation with only built-from-source Free Software, you need prebuilt Delphi binaries.

We agree to disagree on the priority of the task of killing every non-free bit from a program tree.
In fact it [the address/value hacks documentation] is the most useful part of your contributions

These hacks used to be useful... in 2002-2003, when they were made, i.e. 2-3 years after the last AMS 1.xx versions (most hacks are aimed at AMS 1.xx). We're now in 2009-2010, i.e. ten years (!!) after the last AMS 1.xx versions, so it's pretty obvious that they're less useful nowadays. Thus, only a subset of them gets re-tested and added.
most of the prototypes are already there in unknown.h

Again, get a clue. Previously known prototypes are a minority in the updates I committed to GCC4TI SVN.
and the documentation is just documentation, it can be consulted directly from contrib.zip (and some of the functions are also documented in TI's PDF).

They can consult that file, but in practice, they won't, because it is very little known. And known to be incomplete, at that.
So you didn't test them in the obvious way (which is how I'd have tested them): create a new project, add #include <tigcclib.h> and draw a sprite!

Testing required a new project indeed, since the examples on sprite routines are so scarce in the documentation.
You seem to have completely misunderstood the purpose of a pre-beta testing prerelease

Quit being stupid: I misunderstand the concept of testing prereleases so much that I've been making two of them for TILP II 1.14, the first one being compiled with MSVC and the second one being compiled with cross-MinGW.
But the facts are that nobody is working on this in your supposedly "active" project.

And nobody is working on it in your supposedly "actively maintained" project, which is, as a matter of fact, far less worked on than GCC4TI.
Its startup is slow, but that's not a practical problem at all.

Users do disagree about that: I've already mentioned the complaints about old versions of ttunpack-super-duper-small, despite it being much faster than the lzma decompressor wink
Membre de la TI-Chess Team.
Co-mainteneur de GCC4TI (documentation en ligne de GCC4TI), TIEmu et TILP.
Co-admin de TI-Planet.

30

It takes me 4 seconds even just to type the program name to run it, and I'm sure many users are even slower than me at typing. So I don't see why waiting a few seconds is such a big issue.
Mes news pour calculatrices TI: Ti-Gen
Mes projets PC pour calculatrices TI: TIGCC, CalcForge (CalcForgeLP, Emu-TIGCC)
Mes chans IRC: #tigcc et #inspired sur irc.freequest.net (UTF-8)

Liberté, Égalité, Fraternité