Everything posted by DigitalMonk

  1. (was a duplicate, 'coz it wasn't clear that the Submit had worked the first time)
  2. Just a comment from an ancient CompSci memory: have you considered "worst fit" allocation? It sounds stupid on the face of it, but some allocators use it because it causes less heap fragmentation than "best fit". "Best fit" tends to leave tiny unallocated pieces all over the heap. Finding the block is also easy/fast, 'coz it's at the top of the heap. And there's always "first fit", which might be the fastest, with fragmentation performance somewhere in between, but that tends to be a compromise solution that no one likes (it isn't that much faster, at best).

I've always wondered about a "perfect or worst fit" allocator that would allocate a perfectly-sized block if one was available, and allocate from the worst-fit block otherwise. That should provide truly minimal fragmentation for any given workload. I just haven't sat down and worked out the actual space and performance cost of maintaining a block-size hash table for the perfect-fit step... And on small computers, this is a fairly serious tradeoff decision, because both space and time are tightly constrained.

On a side note, I strongly suspect that any text-heavy application would benefit more from a "rope" variable type (structure, whatever -- don't get too hung up on the word "type"; I'm just an old C++ programmer and think that way) that could be used instead of strings. Cutting apart and reassembling strings is one of the fastest ways to fill and fragment your heap. A rope is made of multiple strings (get it?), and either each piece contains a pointer to the next piece (which precludes re-using any piece in multiple ropes, so it's less useful) or a separate list holds pointers to all of the strings. Concatenation then only requires allocating the pointer list and doesn't involve copying text at all. This also means string literals inside your code are used directly in place, even when combined into larger ropes, saving even more memory.
Of course, you can't use any of the underlying OS I/O routines directly with these (although that would be a cool Kernal extension, to my mind), but it should be a relatively small wrapper to walk through the pointer list, calling the OS routine for each string. (I wish I could take any credit whatsoever for the rope idea, but it's been an extended C++ "type" for a very long time. Hmmm... Just looked it up, and real ropes are a bit fancier than I described, with the benefit of being faster for more operations because of binary-tree optimizations: https://www.geeksforgeeks.org/stl-ropes-in-c/ )
  3. Purely for clarification: COMAL is public domain software (at least for the PET and C64). What follows is an early-morning rant/lament, and I wouldn't blame anyone at all for simply ignoring it. But I had to get it out of my head.

As for better-than-BASIC, there are so many reasons beyond even the obvious structured loops and named subroutines. Code entry is spectacular: as soon as you hit Enter, it does the syntax check, and if it finds an error it puts the cursor at the site of the error and gives a USEFUL error message (astounding!). It is also able to take in (at least in some cases) BASIC code and convert it to its own syntax. When you list, you get back formatted and semantically indented code. Pure printing-to-screen speed is about the same as BASIC 2.0. POKEing around is roughly 3x faster than BASIC 2.0. Internal processing logic is noticeably faster, but I haven't timed it for specifics. Any looping construct is faster because the jump destinations are located once in a prepass immediately before running, so the interpreter isn't searching through the code to find a jump target on every pass through the loop.

This is sort of cross-thread to the "if you were CEO of Commodore" topic, but I really wish the VIC could have had COMAL instead of BASIC. The reason for using BASIC 2.0 (as opposed to 4.0 or later) was, to my understanding, that M$ was still young in the 70s and had licensed 2.0 to Commodore for use on any number of products, whereas 4.0 would have required new licensing fees for the new machines. So Tramiel went the cheap route, which was absolutely a winning strategy, as history has shown. As I write this, it's driving me insane that I can't find a release date for the first PET COMAL-80 versions, so I can't be certain it existed in a 6502+PET Kernal compatible form prior to the VIC's release, or even the 64's release. So maybe it's a non-starter.
But damn, if it was available, even in the PET style (no graphics, sound, or sprite commands), it would have been vastly better than BASIC 2.0...

Overall, this is probably just one of those annoying what-ifs for me. The language was available early enough (1974 or '75 for bigger computers), but I don't know if the 6502+Kernal form was ready in time. And, of course, it would have had to be ready early enough to get integrated into the design, so that pushes back at least into '80, making it almost a certainty that COMAL-80 couldn't have made it into the VIC. Just one more year! But "one more year" is enough to go from the VIC to the 64, or to miss your market entirely if you hold off, so you have to go with what you can use. I know that. I just wish that they (the Danish creators of the language) had moved COMAL from the larger computers onto microcomputers as soon as they (microcomputers) appeared, even if it meant just implementing the language as it was in 1978 (instead of defining a new version, which pushed everything back by 2 years -- a lifetime, as mentioned above). If it could have gotten a foothold on the PET and been "good enough" for the VIC (possibly with later "patch" cartridges (or ROM chips for the brave-hearted)), then maybe COMAL-80 (without graphics/sound/sprites) could have been built into the C64. I mean, the C64 0.14 version was available on disk in 1983... So close...

I hate BASIC 2.0. Really, I hate pretty much all BASICs that lack structured loops and named subroutines. But I especially hate BASIC 2.0. And that comes from it being the only language I knew how to use from '81 until I went to college in '89. So I used it a lot, not knowing how much it sucked, and not knowing how much it was needlessly destroying my brain and ingraining horrible programming style and habits. I remember seeing the COMAL ads back then, but I didn't realize how badly I was being abused and didn't understand how much better life would be with COMAL.
I did get and try to use ProMAL for a while, but as a compiled language with its own shell on a floppy-based C64, it was just too painful.

So, I guess the point of all that rambling is that if I had played with COMAL early enough and known about the Commander X16 project early enough, I would have pushed for it as the built-in language. To correct one more flaw of history, and also potentially to avoid having to license any ROMs from anyone. At least, it would have avoided licensing the BASIC ROMs, and I'm kind of assuming the KERNAL ROMs had to be written more or less from scratch anyway. But I hadn't, and I didn't, so I couldn't, and thus BASIC survives to ruin even more lives...
  4. My apologies if I'm repeating others' statements... I read several posts, but not all three pages. Summary of my thoughts:
  - Keep the C64 as your holy grail machine / biome until the Amiga comes out
  - Keep development and expansions active
  - Learn why it is succeeding so well and use those lessons going forward
  - Do not pointlessly divide and dilute your message (C16 and Plus-4 both? Why?)
  - Make _one_ super cheap computer (break-even or slight loss pricing) to capture the home market with something that has a clear upgrade path TO the C64 biome
  - Do something like @BruceMcF's ideas for the polished C128 to give people a real reason to upgrade FROM the C64 when they need more power (without losing their investment, and while getting an improvement even for their older software)
  - Put out the A500/A1000/A2000 more or less as they did. This seemed to work well
  - Don't try to be PC compatible (see reasons below). Be BETTER than the PC. It was still early enough to pull that off, if you were dedicated...
  - Don't sit back and pat yourself on the back for so damn long. Keep pushing forward and design some new hardware expansions to provide meaningful forward paths
  - Specifically, DO NOT LET the IBM PC-compatible market surpass you in audio/visual capabilities, when that has been your one indisputable knockout capability
  - If you can't come out with AGA until it's so late that you could've bought a random $25 video card to do the same job, then just admit that you're incompetent and sell all the rights to somebody who actually cares about the Amiga while there's still at least a slim chance of turning things around. Don't wait and drive the name 6 feet underground and then sell it when the whole line is already dead...

(Sorry if my anger at Commodore in the later years gets too hot. I loved my Amigas, but I spent years furious at Commodore for just letting things slip away...)

Extended discussion / explanation: I don't see any way to have improved the success of the C64 itself.
Its productive lifetime was insanely long in a period of mass incompatibility (between vendors, between models, between upgrades -- basically anything you bought was a lock-in). I do really like @BruceMcF's suggestions of giving the C128's C64 mode access to the other 64kB of RAM, making it look like a GeoRAM or REU or other "standard" C64 RAM expansion. Building in a fastloader would have been wonderful as well. Both would need a way to turn them off for troublesome programs, just like you would sometimes not be able to use their true C64 equivalents with some software, but that could be as easy as GO64 vs GO64+, or GO64 vs SAFE64.

I don't think that Commodore could have maintained any meaningful market share by adding DOS or Windows compatibility. Even IBM couldn't do that. And especially not on non-Intel CPUs -- Microsoft themselves tried that with NT, and they couldn't swing it either.

If I think of the C128 as having been properly polished, I would see it as a bridge towards the Amiga and moving forward into more powerful machines. This still being early enough that a lot of people didn't understand the true value of a computer, I can also see the wisdom of making _one_ model of super cheap entry computer -- possibly even sold at break-even or slightly loss-leader prices, with the intent to saturate the market and get as many people interested as possible. But only _one_. Not three. Or even two. More than one only serves to dilute the market, confuse your customer, and complicate your manufacturing/distribution chain.

And compatibility should have been seriously considered. Not hard-core compatibility -- there's no way that a C16 could reasonably be expected to run C64 software, but for the love of all that's holy, why would you change the joystick port connector?
There was a healthy 3rd party market for joysticks, and everybody had their own favorites, so it was fundamentally stupid to cut off that entire market and try to lock people into Commodore-only joysticks (and then to release such a horribly painful one at that...). OK, so they wanted to emphasize business use -- again, kind of blind. Yes, some people (like the video rental store in my home town) used the C64 for business, but if you're looking to saturate the mass home market, that market is going to be game-centric, and that should have been obvious by the 90s. So make a little gaming machine that could also be used by the curious to program their own little games, and make it clear what the path forward to the C64 or C128 would be. Let them keep their investment of external hardware and BASIC programs (so BASIC has to be compatible, and as much AV IO as possible), even if they can't migrate assembly programs (or maybe strongly encourage all C16 software to be BASIC software to make that transition possible for the majority of software -- you can't block out assembly, obviously, but put the argument forward to software creators).

Once into the Amiga world, Commodore held its own for a while. TV signal compatibility made it a shoo-in for video production work, and it was a good game machine as well. The primary failing I saw as a user was that Commodore seemed to just be resting on its designs. The A500/A1000/A2000 were OK -- starter system with floppy and little RAM, medium system with more RAM, and professional system with RAM expansions, hard drives, and the possibility of DOS through the Bridgeboard (though, honestly, designing the Bridgeboard around an 8088 at that point in time seemed really stupid -- my friends had 286s minimum, and I think I had a 386 sitting on the side). Sound and video as good as, and generally better than, any competing system. It's all good and a great start. But then it took seemingly forever to improve any of those things.
Video cards came from 3rd party manufacturers who had to provide their own APIs because there was no standard to implement, so even if you wanted a 24-bit video card, each one could only support a couple of programs. 7MHz CPUs across the board, and an OS that would crash or lock up if you put a 14MHz CPU in it (I had an accelerator, and I had to remember to downclock it before doing any disk access). Eventually the A3000 jumped to a 32-bit core and 25MHz (IIRC), but still on the old audio/video hardware.

I am aware that the original Amiga design was done out-of-house, originally pitched to Atari and rejected, then sold to Commodore. This makes me suspect that Commodore did not have the design talent to build hardware that would expand on the Amiga's capabilities, and by the time they could, everybody had passed them by. Amigas didn't get significant visual upgrades from Commodore until after everybody on a PC was already above and beyond what AGA could do. The official Amiga hardware (and thus the software/OS support) was just stagnant for too many years. I'm not sure how they could have fixed this, other than to get better designers in-house.
  5. Whoops! Sorry for mis-using the term "pseudo-registers" when I was talking about "imaginary registers"... I haven't started X16 programming yet, so I was just thinking about imaginary registers LLVM-MOS uses for PET/VIC/C64/C128/Atari/etc.
  6. First, their focus has been on clang, not clang++, so I'm not sure how much C++ support is present (I would expect all the language features to be there because that's a front-end common thing, but I know that the runtime library doesn't exist because that's a backend supplied library and they haven't worked on it yet). I do want to start poking around with C++ language features, just to see how they go, but I want to get all my platforms working again first. Second, I'd swear that I've seen somewhere (thought it was this thread, but can't find it) that interrupt handlers couldn't be written yet because of an implementation detail about how they handle function calls... __BUT__ I've been trying to compare and contrast 5 different C compilers, so I could very easily be thinking of one of the others...
  7. Yeah... I was really hoping I could just slip by on those... A lot of them are BASIC workspaces that shouldn't matter much, but there are also KERNAL workspaces that would be very bad to stomp on. I'll have to break out all my ZP memory maps and compare them. I'm really glad that the number of pseudo-registers and their locations is completely configurable through just text files. Once I can get all my stuff running, I will make a cleanup pass to make sure I don't have "hackery" left sitting around, and then I'll definitely send a pull-request... Hmmm, gotta fork the repo inside GitHub first, probably, instead of just messing with it on my local machine
  8. *SMH* D'oh! Thank you for that... Just got jammed into my mental rut... OK, all four platforms at least build and link now. They load and (except for the VIC) have the correct BASIC SYS command waiting. Now I just have to be more careful about linker files and where I'm placing my fonts and graphics and where the stack goes, and so forth ('coz the CPU JAMs "immediately" if I run them )
  9. I would be very interested in the details of your tweaks. Did you make an X16 target alongside the existing 64 target, or did you just modify the 64 files into X16 files? I'm trying to make my little game for the 128, 64, VIC, and PET, and they all put BASIC in different places... 64 works, of course. I'm trying to get 128 working next.

My first attempt modified files directly in the 'build' directories. I copied the 64 source directory and renamed it to 128. I modified the ldscripts/link.ld to use the 1c01/1c0d addresses needed on the 128. I renamed 64.cfg to 128.cfg and tweaked the comments (the actual commands didn't appear to need modification). Got a valid PRG. Tried to autostart it in VICE and it exploded. Automounted it instead so that I could list it, and it was "7773SYS2061", so the BASIC header didn't auto-adjust to the linker start point (I got lazy with KickC, because it generates the BASIC header on the fly).

Realizing that I'd been hacking on output files instead of editing inputs, I moved out to the actual source code directories. Did the equivalent edits from above to the source. Then I adjusted various CMakeLists.txt files to include the new directory. I modified the lib/basic_header.s to use 7181 (1c0d) in the SYS command. Ran ninja to rebuild and I get:

```
[0/1] /usr/bin/cmake -S/home/mac/games/llvm-mos-sdk -B/home/mac/games/llvm-mos-sdk/build
CMake Error at cmake/modules/AddObjectFile.cmake:10 (add_library):
  add_library cannot create target "basic_header" because another target
  with the same name already exists.  The existing target is created in
  source directory "/home/mac/games/llvm-mos-sdk/commodore/64/lib".
  See documentation for policy CMP0002 for more details.
Call Stack (most recent call first):
  commodore/128/lib/CMakeLists.txt:6 (add_object_file)
```

I'm not much of a CMake or ninja user, just following steps and extrapolating what I can. I don't quite see why the 128's basic_header is conflicting with the 64's basic_header. They should be in separate directories. But they only have one target machine under each "brand" of computer, so there may be some assumption buried somewhere that I'm just missing. I think I looked at all the CMakeLists.txt from the root down and I can't see it, but that doesn't surprise me, really...
  10. They seem to be _VERY_ strict about their unit and integration testing. No Pull Requests are allowed unless they are covered by an existing test or include new ones. All code has to follow the LLVM coding and quality guidelines. None of that stops errors getting in, of course, but it should severely limit the "quick hack" kind of coding that leads to fix/re-fix/fix-again/no-this-time-really/argh commits...

I am incredibly stoked that there are so many C compiler efforts out there now for the 6502:
  - cc65, of course, which is pretty rock solid but unfortunately generates (by far) the slowest/largest code. But it always works.
  - gcc-6502, which has the GCC front end goodness, but still some backend issues, and is pretty much dead, unfortunately...
  - KickC, which is quite active, and the lead developer is responsive and helpful. Very cool if you want to mix and match with KickAssembler.
  - NutStudio, which has been mentioned in another thread here. I had good luck in my initial forays with it. He's not ready to release, but is open to beta testers.
  - LLVM-MOS, which appears to be very serious about the whole effort.
  11. Awwwww... Did you have to ruin my fantasy of hundreds of retro-enthusiasts frantically hacking towards getting this completed? Still, 2-3 a day is pretty good!
  12. I mentioned it was in active development, but just for a sense of scale:

llvm-mos-linux-main
github-actions released this 21 hours ago · 8911 commits to fd5a4cc2c8cb064afe6df5ccb436831ef8743bda since this release

Almost 9000 commits in less than a day. Basically, if it's doing what you want, just use what you have. But if you have any issues, grab a new build 'coz they may have already fixed your problem...
  13. C is possible; there were at least three commercial C compilers back in the 80s. C++, well, maaaaaaaybe C++98-ish. But just as a meaningless point of information, clang++ (the C++ compiler for LLVM-MOS) is 84.5MB. That's not its memory footprint, just the executable size. Now, granted, clang, clang-13, and clang++ are all the same size, so I suspect that is one mega compiler/librarian/linker application for multiple similar languages, but it's waaaaaay beyond the 2MB for the big X16... But I love to see people tackling impossible odds. Frequently they find out the odds are merely ludicrously difficult
  14. DO IT!!! Upon reflection: Oh lord... I mean, I suppose you could always cram the LLVM source code through LLVM-MOS. I don't know how huge the resulting PRG would be, since there is a LOT of logic in LLVM. I've looked into using a C compiler on the C64, and that was insane. You had to have either two or three floppy drives to even start, and all the steps were separate, and just argh... I would also like to mention, for those who might not be old enough to know, that back in the day a whole lot of commercial programming was cross-development as well. Programmers worked on minicomputers that crunched out binaries to test on the little home computers. Home programmers programmed on their computer 'coz it was the only thing they had and they were having fun, but once time and efficiency got into it, compilation moved off to bigger machines. So using LLVM's giant brain on a 32-core Ryzen to develop X16 code isn't as ridiculous as it might otherwise sound. It's just the modern version of what they used to do, and saves you tearing out (as much of) your hair.
  15. None of what I've been saying is meant as flame, though I'm sure it reads like that. I do get heated because of misunderstandings about what C++ is now, and because of how frequently those misunderstandings are repeated in public forums where people who are coming to learn just pick it up as "truth" and continue the problem.

"All this complexity and abstraction"... C++ is only complex if you need it to be. Abstraction is a very useful tool to increase programmer efficiency. And neither needs to water down anything. All the heavy lifting of expanding out the abstractions/complexities happens at compile time. Then it gets optimized back down to just the parts you were using. Which you were going to be using no matter what language you used. And then that minimal pseudo-code is converted to 6502 opcodes.

With new compilers and libraries (and LLVM is the newest, pretty much), C++ has repeatedly beaten C in performance tests. And not because the runtime has some huge library component that wouldn't fit on a 6502, but because modern C++ compilers write better C than C programmers do. And they do it because they simultaneously get the benefit (from all that complexity and abstraction) of better understanding what the programmer was actually trying to do (i.e., if I use the std::nth_element algorithm, the compiler knows much more about what I'm trying to do than if it was just looking at some for loops and conditionals) AND of being a tireless worker with nearly limitless concentration and memory who can see opportunities for code reuse, simplification, etc.

Oh, and all that cool pre-computation that lets games and demos run so fast? In modern C++, the compiler automatically figures out if a chain of execution -- even if it spans multiple function calls -- is actually a constant and can be performed at compile time, so that the final result is just stomped directly into the opcode. Yes, an expert C programmer can outperform an average C++ programmer.
But I suspect an expert C++ programmer could outperform an expert C programmer. And it's really about the averages anyway, if this is a learning computer, and in the average case, C++ gives an average programmer the benefits of an expert programmer under the hood. Nothing about the C++ experience would be "watered down." You don't write the same kinds of programs on an X16 that you write on a generic PC, but that doesn't mean that tools that have been constantly improving for decades aren't a good fit. Anything that would require a heap or other "bloat" in C++ would require the exact same capability in C, but would be much more likely to leak in C, because C only has dumb pointers, while C++ provides dumb pointers, reference-counted pointers, weak pointers, and unique pointers.

"C++ is unwieldy" is an old trope that has been repeated for so long that many people don't even question it. But it simply isn't true. It comes from the time when C++ was basically just a hairy preprocessor in front of C code. Anything after C++11 is a completely different beast, and things are accelerating.

One last thing I'm going to throw out there, and then I swear I'm going to try to stop... C++ isn't really about "Object Oriented Programming" any more. Sure, it's still got classes. But the originators of OOP figured out (after 20 years or so of people trying to work out the issues) that OOP doesn't deliver on its promises. OOP is also where all of the heap flail and bloat came from. So, when you look at what gets C++ programmers excited now, it's mostly template metaprogramming -- making the compiler write the tedious dreck for you (which the optimizer then pares down to only the bits you actually used). If you think that's only for wizards or academia, look at KickAssembler, whose primary claim to fame is its extensive metaprogramming capabilities.
Now, personally, while I am super stoked by the things you can do with metaprogramming, I will be the first to admit that C++'s syntax is ugly, and there are other languages out there that do it more easily and cleanly. But you REALLY can't get those compilers for specialized processors and systems, and most of them do require a hefty runtime. C++ remains one of the few languages that can give you every tool you could hope for and yet still run on a tiny constrained system (note that the LLVM-MOS guys made code for a VIC-20, so...). Rust is another, which also provides a lot of compile-time guarantees about correct memory usage without requiring any runtime on the host, and someone has already shown LLVM-MOS used as a backend for Rust to generate a program on a 6502 machine. C++ remains a "systems programming language", one of the few out there that meet the criteria of driving hardware at its lowest level.

(I'm 50, and I've fought with the lack of C++ in the embedded world for decades. And even when it was available, it would be the ancient C++98 variant, which did still have all the issues you're worried about. My life changed immensely when the embedded tools I have to use FINALLY introduced C++11, almost 10 years after it was ready. Fortunately for me, they've been a little zippier since then, and they're up to C++17 support. It still amazes me, when GCC and LLVM are freely available and more powerful than any proprietary compiler, that these chip makers continue to put out their own garbage...)

(Oh, and I would _NEVER_ suggest trying to write a C++ compiler to run ON the X16. That would be horrible.)
  16. Well, Micro-LISP exists for the C64, so doing it on the X16 should be easy... micro-lisp.pdf
  17. Thank you. And I feel the need to point out that the code generation in that particular effort is very messy. He uses an x86 compiler to generate x86 assembly, then rams that through his own x86->6502 translator. Which works, but all that x86 code thought that integers were 32 bit. The LLVM-MOS effort uses 16 bit as the default int, 8 bit chars, and 32 bit longs (which is the approach taken by many, many compilers in the 16 bit era). So the LLVM-MOS output will already be much cleaner, smaller, and faster than the code generated in that video.
  18. First off, the X16 is going to have 512kB or 2MB of RAM, right? C++ was absolutely used on DOS machines with that "little" memory. Yes, the paging nature adds some complexity, but it adds complexity for everybody. Once it's handled in the runtime library, we'd be able to (mostly) forget about it as application developers. We'd probably want some way to hint that heap items should be packed into common heap pages for maximum efficiency, but that problem was addressed decades ago by overlay linkers. As for speed, that 8MHz 6502 is comparable to a 32MHz Z80, so faster than any 8086 that was ever meaningfully fielded.

PLEASE REMEMBER: Arduinos use C++ as their core language, and most of them have less memory than even the starting X16 is going to have. IN PARTICULAR, the ATmega328 chip found on the Uno has the following amounts of memory:
  - Flash: 32k bytes (of which .5k is used for the bootloader)
  - SRAM: 2k bytes
  - EEPROM: 1k byte

Yep. 2kB of RAM and only 32kB of flash. I could pack __ANY__ Arduino Uno sketch, libraries and everything, into just over half of a C64. And believe me, Arduino uses iostreams. Personally, I'd say being able to take the growing makerspace of Arduino hackers and bring them to the 6502 world would be a Good Thing(TM)
  19. It's also a bit of a moot point today, because the LLVM-MOS team is focusing on C. The C++ headers aren't there yet. But that's more an issue of having the time to put them in and test them all. No real reason why they wouldn't work. Unfortunately, that does mean that all those wonderful STL tools and libraries aren't available yet. But it's just a matter of time. And the C++ LANGUAGE features are there, so it's entirely possible that you could use Boost header-only libraries to get some heavy lifting done...
  20. You might be surprised by how little bloat is in modern iostream implementations. Memory allocation, copying, and deallocation are performance headaches for all computers, regardless of how much memory they have. The C++ standards team have spent a huge amount of time and effort making the core libraries lean and efficient, with minimal unnecessary copying and heap interaction.

That being said, I personally don't care about iostream and never use it. C++ still provides stronger type checking, readily available data structures, and many benefits that I use all the time in small embedded projects. How many X16 programs are going to be doing text work where "I don't really care that much about formatting, just push this information out there"? If anything, I'd think a low-resolution text screen would be a bigger problem, because you have to be so careful about exactly how you display everything (and iostreams is ridiculously annoying for formatting issues). And I definitely second Scott's comments. Even if I loved iostream, nothing in my project would benefit from having it... I just know that a lot of people hate C++ because "it's so bloated", and that just isn't the case any more, in either space or speed.
  21. Is that a comment on compilation speed or C++ speed? If you're concerned about compilation speed, LLVM-MOS is currently much faster than KickC for my project. If you're concerned about C++ performance, that has been corrected and refuted for at least 10 years. High-level optimization kicks the crud out of low-level peephole optimization. LLVM is crazy fast to compile, AND it optimizes the heck out of the intermediate code that it generates, discarding all of the abstraction overhead that simplistic C++ compilers in ye olden dayes created. What it hands off to the back end is already optimized enough that straightforward 6502 code generation is sufficient to outperform most people's expectations (see the "Findings" section in the link above). And the LLVM-MOS guys haven't even started looking at "codegen level" optimizations yet.
  22. I'm going to leave it to the LLVM-MOS guys to give status (although, my experience suggests that they are actually further along than this page suggests): https://llvm-mos.org/wiki/Current_status Also, to answer "why?" and also performance/features kinds of questions: https://llvm-mos.org/wiki/Rationale (I find the "Findings" section at the bottom especially interesting)
  23. Just wanted to drive by and toss this out there: https://llvm-mos.org/wiki/Welcome https://github.com/llvm-mos I've been poking around on the beginnings of a game, something I wish I'd known enough to be able to do 35 years ago... I was using KickC, which is very cool, but I was running into too many issues and starting to spend more time on workarounds than on programming. Then, yesterday, I found out that there's an actively developed and complete 6502 backend for LLVM, which means you can do pretty much anything the LLVM frontends can do and then spit it out to your 6502. Library support may be challenging, of course. As of today, only the C64 (and Atari 800) have linker target files, but I've played with them (they're compatible with GCC ld linker scripts) and it shouldn't be too hard to create new ones for other targets. If you follow that link, you'll see that they've built programs for VIC-20, Apple IIe, C64, and even built a simple Rust program for an Atari 800. Creating target files for the X16 shouldn't be particularly difficult. That having been said, don't expect any of the IO (printf, gets, files) to magically work out of the box today. This is in _ACTIVE_ development, and their focus is currently on C64. The backend 6502 codegen passes all LLVM unit tests (a few thousand), and that was announced in a post from just a few days ago. But if you're willing to just hammer the hardware with your own routines, well, it's pretty slick. I haven't gotten around to X16 programming yet, but I wanted to attach a screenshot of what I've been working on as compiled by LLVM-MOS's clang compiler for the C64. I've been building the project to cross compile for the PET, VIC-20, C64, and C128 with different graphics on each, so I should probably just add an X16 target.
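For a sense of what "creating target files" involves, here's a hypothetical minimal GNU-ld-style memory layout sketch for an X16 target. This is my own illustration, not llvm-mos's actual SDK script: it assumes the standard $0801 BASIC load address, and the section names are just the common ld conventions.

```
/* Hypothetical minimal linker script sketch for an X16 target.
   Assumes program loads at $0801 like the C64; region size and
   section list are illustrative only. */
MEMORY {
  ram (rw) : ORIGIN = 0x0801, LENGTH = 0x96FF
}

SECTIONS {
  .text :          { *(.text*) }              > ram  /* code        */
  .rodata :        { *(.rodata*) }            > ram  /* constants   */
  .data :          { *(.data*) }              > ram  /* init'd data */
  .bss (NOLOAD) :  { *(.bss*) *(COMMON) }     > ram  /* zeroed data */
}
```

The real target files also have to handle things like the BASIC stub header and zero-page allocation, so treat this as a shape, not a recipe.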
  24. Thank you for the recommendations. My problem is that I grew up in the era of joysticks, and I can't use a d-pad for movement. I can use it for menus, but I have significant mental lag for movement. I know that probably sounds insane to people even one or two years younger than I am... I've even noticed this on shooters. If I can configure movement onto a joystick and aim with my mouse, I'm much more dangerous than I am with keyboard/mouse. I'm not ecstatic about the fighting stick layout either, because I'm noticeably right handed and I find that I generally need more dexterity for movement than for firing or actions. (This reverses on first person shooters, where aiming is such a fine detail process at high speed, but I don't expect frenetic first person action on the X16.) Worst comes to worst, I'll just buy a SNES connector and a digital joystick from some other system and wire it up.
  25. WOOT! I am so incredibly glad to see progress on the C front for 6502 after so (SO) many years of people (who don't understand zero page speed) saying "the 6502 just isn't suited for C". I mean, I appreciate all of the work that went into cc65, but even its developers say that its stack handling is inefficient. Not to take anything at all away from your project, but I also just recently found out about KickC, which outputs commented assembler for KickAssembler, which is a fairly reasonable way to start at high level code and then be able to dig down into assembly for critical path stuff without getting lost. My ultimate hope is that everybody can look at what the other projects are doing, and that the entire ecosystem will be better for it. I don't want one compiler to "win" or for all compilers to end up being exactly the same, I just know that different perspectives always provide more paths around obstacles, and going from one (mostly stalled) C compiler to three compilers, two of which are actively evolving, makes me very happy! (I was so desperate at one point that I even considered using the period C compilers running on the machine, but that is incredibly painful)