
DigitalMonk's Achievements


  1. My apologies if I'm repeating others' statements... I read several posts, but not all three pages. Summary of my thoughts:
  • Keep the C64 as your holy grail machine / biome until the Amiga comes out.
  • Keep development and expansions active.
  • Learn why it is succeeding so well and use those lessons going forward.
  • Do not pointlessly divide and dilute your message (C16 and Plus-4 both? Why?).
  • Make _one_ super-cheap computer (break-even or slight-loss pricing) to capture the home market with something that has a clear upgrade path TO the C64 biome.
  • Do something like @BruceMcF's ideas for the polished C128 to give people a real reason to upgrade FROM the C64 when they need more power (without losing their investment, and while getting an improvement even for their older software).
  • Put out the A500/A1000/A2000 more or less as they did. This seemed to work well.
  • Don't try to be PC compatible (see reasons below). Be BETTER than the PC. It was still early enough to pull that off, if you were dedicated...
  • Don't sit back and pat yourself on the back for so damn long. Keep pushing forward and design some new hardware expansions to provide meaningful forward paths.
  • Specifically, DO NOT LET the IBM PC-compatible market surpass you in audio/visual capabilities, when that has been your one indisputable knockout capability.
  • If you can't come out with AGA until it's so late that you could've bought a random $25 video card to do the same job, then just admit that you're incompetent and sell all the rights to somebody who actually cares about the Amiga while there's still at least a slim chance of turning things around. Don't wait and drive the name 6 feet underground and then sell it when the whole line is already dead...
(Sorry if my anger at Commodore in the later years gets too hot. I loved my Amigas, but I spent years furious at Commodore for just letting things slip away...)
Extended discussion / explanation: I don't see any way to have improved the success of the C64 itself.
Its productive lifetime was insanely long in a period of mass incompatibility (between vendors, between models, between upgrades -- basically anything you bought was a lock-in). I do really like @BruceMcF's suggestions of giving the C128's C64 mode access to the other 64kB of RAM, making it look like a GeoRAM or REU or other "standard" C64 RAM expansion. Building in a fastloader would have been wonderful as well. Both would need a way to turn them off for troublesome programs, just like you would sometimes not be able to use their true C64 equivalents with some software, but that could be as easy as GO64 vs GO64+, or GO64 vs SAFE64.

I don't think that Commodore could have maintained any meaningful market share by adding DOS or Windows compatibility. Even IBM couldn't do that. And especially not on non-Intel CPUs -- Microsoft themselves tried that with NT, and they couldn't swing it either.

If I think of the C128 as having been properly polished, I would see it as a bridge towards the Amiga and moving forward into more powerful machines. This being early enough that a lot of people still didn't understand the true value of a computer, I can also see the wisdom of making _one_ model of super-cheap entry computer -- possibly even sold at break-even or slight loss-leader prices, with the intent to saturate the market and get as many people interested as possible. But only _one_. Not three. Or even two. Anything more only serves to dilute the market, confuse your customers, and complicate your manufacturing/distribution chain.

And compatibility should have been seriously considered. Not hard-core compatibility -- there's no way that a C16 could reasonably be expected to run C64 software -- but for the love of all that's holy, why would you change the joystick port connector? There was a healthy 3rd-party market for joysticks, and everybody had their own favorites, so it was fundamentally stupid to cut off that entire market and try to lock people into Commodore-only joysticks (and then to release such a horribly painful one at that...).

OK, so they wanted to emphasize business use -- again, kind of blind. Yes, some people (like the video rental store in my home town) used the C64 for business, but if you're looking to saturate the mass home market, that market is going to be game-centric, and that should have been obvious by then. So make a little gaming machine that could also be used by the curious to program their own little games, and make it clear what the path forward to the C64 or C128 would be. Let them keep their investment of external hardware and BASIC programs (so BASIC has to be compatible, and as much AV I/O as possible), even if they can't migrate assembly programs (or maybe strongly encourage all C16 software to be BASIC software to make that transition possible for the majority of software -- you can't block out assembly, obviously, but put the argument forward to software creators).

Once into the Amiga world, Commodore held its own for a while. TV signal compatibility made it a shoo-in for video production work, and it was a good game machine as well. The primary failing I saw as a user was that Commodore seemed to just be resting on its designs. The A500/A1000/A2000 were OK -- starter system with floppy and little RAM, medium system with more RAM, and professional system with RAM expansions, hard drives, and the possibility of DOS through the Bridgeboard (though, honestly, designing the Bridgeboard around an 8088 at that point in time seemed really stupid -- my friends had 286s minimum, and I think I had a 386 sitting on the side). Sound and video as good as, and generally better than, any competing system. It's all good and a great start. But then it took seemingly forever to improve any of those things.

Video cards came from 3rd-party manufacturers who had to provide their own APIs because there was no standard to implement, so even if you wanted a 24-bit video card, each one could only support a couple of programs. 7MHz CPUs across the board, and an OS that would crash or lock up if you put a 14MHz CPU in it (I had an accelerator, and I had to remember to downclock it before doing any disk access). Eventually the A3000 jumped to a 32-bit core and 25MHz (IIRC), but still on the old audio/video hardware.

I am aware that the original Amiga design was done out-of-house, originally pitched to Atari and rejected, then sold to Commodore. This makes me suspect that Commodore did not have the design talent to expand on the Amiga's capabilities, and by the time they could, everybody had passed them by. Amigas didn't get significant visual upgrades from Commodore until after everybody on a PC was already above and beyond what AGA could do. The official Amiga hardware (and thus the software/OS support) was just stagnant for too many years. I'm not sure how they could have fixed this, other than to get better designers in-house.
  2. Whoops! Sorry for misusing the term "pseudo-registers" when I was talking about "imaginary registers"... I haven't started X16 programming yet, so I was just thinking about the imaginary registers LLVM-MOS uses for PET/VIC/C64/C128/Atari/etc.
  3. First, their focus has been on clang, not clang++, so I'm not sure how much C++ support is present (I would expect all the language features to be there, because that's common front-end work, but I know that the runtime library doesn't exist, because that's a backend-supplied library and they haven't worked on it yet). I do want to start poking around with C++ language features, just to see how they go, but I want to get all my platforms working again first. Second, I'd swear I've seen somewhere (thought it was this thread, but can't find it) that interrupt handlers couldn't be written yet because of an implementation detail in how they handle function calls... __BUT__ I've been trying to compare and contrast 5 different C compilers, so I could very easily be thinking of one of the others...
  4. Yeah... I was really hoping I could just slip by on those... A lot of them are BASIC workspaces that shouldn't matter much, but there are also KERNAL workspaces that would be very bad to stomp on. I'll have to break out all my ZP memory maps and compare them. I'm really glad that the number of pseudo-registers and their locations are completely configurable through plain text files. Once I can get all my stuff running, I will make a cleanup pass to make sure I don't have "hackery" left sitting around, and then I'll definitely send a pull request... Hmmm, gotta fork the repo inside GitHub first, probably, instead of just messing with it on my local machine.
  5. *SMH* D'oh! Thank you for that... Just got jammed into my mental rut... OK, all four platforms at least build and link now. They load and (except for the VIC) have the correct BASIC SYS command waiting. Now I just have to be more careful about linker files, where I'm placing my fonts and graphics, where the stack goes, and so forth ('coz the CPUs JAM "immediately" if I run them).
  6. I would be very interested in the details of your tweaks. Did you make an X16 target alongside the existing 64 target, or did you just modify the 64 files into X16 files? I'm trying to make my little game for the 128, 64, VIC, and PET, and they all put BASIC in different places... 64 works, of course. I'm trying to get 128 working next.

My first attempt modified files directly in the 'build' directories. I copied the 64 source directory and renamed it to 128. I modified the ldscripts/link.ld to use the 1c01/1c0d addresses needed on the 128. I renamed 64.cfg to 128.cfg and tweaked the comments (the actual commands didn't appear to need modification). Got a valid PRG. Tried to autostart it in VICE and it exploded. Automounted it instead so that I could list it, and it was "7773SYS2061", so the BASIC header didn't auto-adjust to the linker start point (I got lazy with KickC, because it generates the BASIC header on the fly).

Realizing that I'd been hacking on output files instead of editing inputs, I moved out to the actual source code directories. Did the equivalent edits from above to the source. Then I adjusted various CMakeLists.txt files to include the new directory. I modified the lib/basic_header.s to use 7181 (1c0d) in the SYS command. Ran ninja to rebuild and I get

```
[0/1] /usr/bin/cmake -S/home/mac/games/llvm-mos-sdk -B/home/mac/games/llvm-mos-sdk/build
CMake Error at cmake/modules/AddObjectFile.cmake:10 (add_library):
  add_library cannot create target "basic_header" because another target
  with the same name already exists. The existing target is created in
  source directory "/home/mac/games/llvm-mos-sdk/commodore/64/lib". See
  documentation for policy CMP0002 for more details.
Call Stack (most recent call first):
  commodore/128/lib/CMakeLists.txt:6 (add_object_file)
```

I'm not much of a CMake or ninja user, just following steps and extrapolating what I can. I don't quite see why the 128's basic_header is conflicting with the 64's basic_header. They should be in separate directories. But they only have one target machine under each "brand" of computer, so there may be some assumption buried somewhere that I'm just missing. I think I looked at all the CMakeLists.txt from the root down and I can't see it, but that doesn't surprise me, really...
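For what it's worth, the collision in that error is a CMake scoping rule rather than a directory issue: target names are global across the entire build tree, so a copied CMakeLists.txt that registers a second "basic_header" clashes with the 64's even though the files live in separate directories. A minimal sketch of one way around it, assuming `add_object_file` takes the target name as its first argument (a guess extrapolated from the error's call stack, not the SDK's documented API):

```cmake
# commodore/128/lib/CMakeLists.txt (hypothetical edit)
# CMake target names are global, so "basic_header" is already taken by
# commodore/64/lib. Registering the 128 copy under a unique name avoids
# the CMP0002 "target already exists" error:
add_object_file(basic_header_128 basic_header.s)
```

Anything in the 128 tree that linked against the old name would then need to reference basic_header_128 instead.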
  7. They seem to be _VERY_ strict about their unit and integration testing. No pull requests are allowed unless they are covered by an existing test or include new ones. All code has to follow the LLVM coding and quality guidelines. None of that stops errors getting in, of course, but it should severely limit the "quick hack" kind of coding that leads to fix/re-fix/fix-again/no-this-time-really/argh commits... I am incredibly stoked that there are so many C compiler efforts out there now for the 6502:
  • cc65, of course, which is pretty rock solid but unfortunately generates (by far) the slowest/largest code. But it always works.
  • gcc-6502, which has the GCC front-end goodness but still some backend issues, and is pretty much dead, unfortunately...
  • KickC, which is quite active, and the lead developer is responsive and helpful. Very cool if you want to mix and match with KickAssembler.
  • NutStudio, which has been mentioned in another thread here. I had good luck in my initial forays with it. He's not ready to release, but is open to beta testers.
  • LLVM-MOS, which appears to be very serious about the whole effort.
  8. Awwwww... Did you have to ruin my fantasy of hundreds of retro-enthusiasts frantically hacking towards getting this completed? Still, 2-3 a day is pretty good!
  9. I mentioned it was in active development, but just for a sense of scale, the llvm-mos-linux-main release page says: "github-actions released this 21 hours ago · 8911 commits to fd5a4cc2c8cb064afe6df5ccb436831ef8743bda since this release". Almost 9000 commits in less than a day. Basically, if it's doing what you want, just use what you have. But if you have any issues, grab a new build, 'coz they may have already fixed your problem...
  10. C is possible; there were at least three commercial C compilers back in the 80s. C++, well, maaaaaaaybe C++98-ish. But just as a meaningless point of information, clang++ (the C++ compiler for LLVM-MOS) is 84.5MB. That's not its memory footprint, just the executable size. Now, granted, clang, clang-13, and clang++ are all the same size, so I suspect that is one mega compiler/librarian/linker application for multiple similar languages, but it's waaaaaay beyond the 2MB of the big X16... But I love to see people tackling impossible odds. Frequently they find out that the odds are merely ludicrously difficult.
  11. DO IT!!! Upon reflection: Oh lord... I mean, I suppose you could always cram the LLVM source code through LLVM-MOS. I don't know how huge the resulting PRG would be, since there is a LOT of logic in LLVM. I've looked into using a C compiler on the C64, and that was insane. You had to have either two or three floppy drives to even start, and all the steps were separate, and just argh... I would also like to mention, for those who might not be old enough to know, that back in the day a whole lot of commercial programming was cross-development as well. Programmers worked on minicomputers that crunched out binaries to test on the little home computers. Home programmers programmed on their computer 'coz it was the only thing they had and they were having fun, but once time and efficiency got into it, compilation moved off to bigger machines. So using LLVM's giant brain on a 32-core Ryzen to develop X16 code isn't as ridiculous as it might otherwise sound. It's just the modern version of what they used to do, and saves you tearing out (as much of) your hair.
  12. None of what I've been saying is meant as flame, though I'm sure it reads like that. I do get heated because of misunderstandings about what C++ is now, and because of how frequently those misunderstandings are repeated in public forums where people who are coming to learn just pick them up as "truth" and continue the problem.

"All this complexity and abstraction"... C++ is only complex if you need it to be. Abstraction is a very useful tool to increase programmer efficiency. And neither needs to water down anything. All the heavy lifting of expanding out the abstractions/complexities happens at compile time. Then it gets optimized back down to just the parts you were using. Which you were going to be using no matter what language you used. And then that minimal pseudo-code is converted to 6502 opcodes.

With new compilers and libraries (and LLVM is the newest, pretty much), C++ has repeatedly beaten C at performance tests. And not because the runtime has some huge library component that wouldn't fit on a 6502, but because modern C++ compilers write better C than C programmers do. And they do it because they simultaneously get the benefit (from all that complexity and abstraction) of better understanding what the programmer was actually trying to do (i.e., if I use the std::nth_element algorithm, the compiler knows much more about what I'm trying to do than if it was just looking at some for loops and conditionals) AND of being a tireless worker with nearly limitless concentration and memory who can see opportunities for code reuse, simplification, etc.

Oh, and all that cool pre-computation that lets games and demos run so fast? In modern C++, the compiler automatically figures out if a string of execution -- even if it spans multiple function calls -- is actually a constant and can be performed at compile time, so that the final result is just stomped directly into the opcode. Yes, an expert C programmer can outperform an average C++ programmer.
But I suspect an expert C++ programmer could outperform an expert C programmer. And it's really about the averages anyway, if this is a learning computer, and in the average case C++ gives an average programmer the benefits of an expert programmer under the hood. Nothing about the C++ experience would be "watered down." You don't write the same kinds of programs on an X16 that you write on a generic PC, but that doesn't mean that tools that have been constantly improving for decades aren't a good fit.

Anything that would require a heap or other "bloat" in C++ would require the exact same capability from C, but be much more likely to leak in C, because C only has dumb pointers, while C++ provides dumb pointers, reference-counted pointers, weak pointers, and unique pointers.

"C++ is unwieldy" is an old trope that has been repeated for so long that many people don't even question it. But it simply isn't true. It comes from the time when C++ was basically just a hairy preprocessor in front of C code. Anything after C++11 is a completely different beast, and things are accelerating.

One last thing I'm going to throw out there, and then I swear I'm going to try to stop... C++ isn't really about "Object-Oriented Programming" any more. Sure, it's still got classes. But the originators of OOP figured out (after 20 years or so of people trying to work out the issues) that OOP doesn't deliver on its promises. OOP is also where all of the heap flail and bloat came from. So, when you look at what gets C++ programmers excited now, it's mostly about template metaprogramming -- making the compiler write the tedious dreck for you (which the optimizer then pares down to only the bits you actually used). If you think that's only for wizards or academia, look at KickAssembler, whose primary claim to fame is its extensive metaprogramming capabilities.
Now, personally, while I am super stoked by the things you can do with metaprogramming, I will be the first to admit that C++'s syntax is ugly, and there are other languages out there that do it more easily and cleanly. But you REALLY can't get those compilers for specialized processors and systems, and most of them do require a hefty runtime. C++ remains one of the few languages that can give you every tool you could hope for and yet still run on a tiny constrained system (note that the LLVM-MOS guys made code for a VIC-20, so...). Rust is another; it also provides a lot of compile-time guarantees about correct memory usage without requiring any runtime on the target, and someone has already shown LLVM-MOS used as a backend for Rust to generate a program on a 6502 machine. C++ remains a "systems programming language", one of the few out there that meet the criterion of driving hardware at its lowest level.

(I'm 50, and I've fought with the lack of C++ in the embedded world for decades. And even when it was available, it would be the ancient C++98 variant, which did still have all the issues you're worried about. My life changed immensely when the embedded tools I have to use FINALLY introduced C++11, almost 10 years after it was ready. Fortunately for me, they've been a little zippier since then, and they're up to C++17 support. It still amazes me, when GCC and LLVM are freely available and more powerful than any proprietary compiler, that these chip makers continue to put out their own garbage...)

(Oh, and I would _NEVER_ suggest trying to write a C++ compiler to run ON the X16. That would be horrible.)
  13. Well, Micro-LISP exists for the C64, so doing it on the X16 should be easy... micro-lisp.pdf
  14. Thank you. And I feel the need to point out that the code generation in that particular effort is very messy. He uses an x86 compiler to generate x86 assembly, then rams that through his own x86-to-6502 translator. Which works, but all that x86 code thought that integers were 32-bit. The LLVM-MOS effort uses a 16-bit default int, 8-bit chars, and 32-bit longs (which is the approach taken by many, many compilers in the 16-bit era). So the LLVM-MOS output will already be much cleaner, smaller, and faster than the code generated in that video.
  15. First off, the X16 is going to have 512kB or 2MB of RAM, right? C++ was absolutely used on DOS machines with that "little" memory. Yes, the paging nature adds some complexity, but it adds complexity for everybody. Once it's handled in the runtime library, we'd be able to (mostly) forget about it as application developers. We'd probably want some way to hint that heap items should be packed into common heap pages for maximum efficiency, but that problem was addressed decades ago by overlay linkers. As for speed, that 8MHz 6502 is comparable to a 32MHz Z80, so faster than any 8086 that was ever meaningfully fielded.

PLEASE REMEMBER: Arduinos use C++ as their core language, and most of them have less memory than even the starting X16 is going to have. IN PARTICULAR, the ATmega328 chip found on the Uno has the following amounts of memory:
  • Flash: 32k bytes (of which 0.5k is used for the bootloader)
  • SRAM: 2k bytes
  • EEPROM: 1k byte
Yep. 2kB of RAM and only 32kB of flash. I could pack __ANY__ Arduino Uno sketch, libraries and everything, into just over half of a C64. And believe me, the Arduino core itself is C++ classes through and through (its Print/Stream hierarchy, for instance). Personally, I'd say being able to take the growing makerspace of Arduino hackers and bring them to the 6502 world would be a Good Thing(TM).