

Popular Content

Showing content with the highest reputation since 06/29/21 in all areas

  1. 29 points
    Hello Everyone! I received the parts for the third prototype on Tuesday evening and spent a good chunk of that night and a bit of yesterday morning getting it worked out. I had to get my code moved over to the ATTiny861 before the board would even power on. This turned out to be pretty easy now that the I2C header also works as an Atmel ISP programming header.

    Of course, I'm pretty much going to make a mistake somewhere on a board of this size, and this time is no exception! Fortunately, the mistakes were easy to spot, and the actual logic is working as it should. Or at least as I designed it. Two of them are visible in the pic; see if you can find them. One is just cosmetic, and the other is a bodge job on a chip which, I will admit, is next to impossible to see in this photo.

    The less easy to spot issue is that I used the standby power to run the microcontroller, but I tied the pull-up resistors on the I2C lines (and SPI programming lines) to the system voltage and not to VSB. I did this on purpose, as I didn't want to pull these lines high while programming the microcontroller, but they need to be pulled high for I2C. The net result is that leakage was happening backwards through these resistors when the data/clock lines were high: enough to light the LED on the motherboard with no other ICs plugged in. It took me a minute to figure that one out, but I think I will just throw a few more diodes in to keep it from happening. I suspect issues like this are sometimes why you may see a lone crusty resistor on an old PCB after years of use. Easy fixes all around!

    One last issue: the parts-sourcing scourge which has been affecting the world is also affecting TexElec! Yes, we can't get parts in for some of our products, and as time goes on, it seems like it may get worse before it gets better. And now it has hit the X16 project! I am unable to get the FPGA and the DAC for the new Version 4 VERA, so we're still running V3. The main difference has to do with hardware deadlocks on the SD card, so functionally, it's fine. However, the lead times are a bit concerning. I'm looking into some other suppliers now and hoping for the best. For now, here's a pic of the new machine, with no wires all over it! Take care! -Kevin
  2. 11 points
    Paging @Kevin Williams! Heya! Sorry to be that guy, but not all of us use FB for whatever reason. I heard the 3rd prototype board has been announced over on the FB discussion board. Any chance the passionate and devoted fan base here can get some info on it as well? I've only heard about it second hand but would love to get the deets from the source!
  3. 10 points
    Hey Everyone, sorry for the delay on an update. It's been a hectic year once again and there never seem to be enough hours in the day. I'm not a huge FB fan either, but the reality is that with our business, it's pretty key I keep on top of FB & the even more dreaded Twitter. I meant to post here the same day, but for the brief second I tried, the site wouldn't let me log in, and I just forgot to come back.

    However, I just posted a video of Attack of the PETSCII Robots on the X16. It's still the second proto, but I have it wired as the third board is designed. I did this to test the new design before running it, of course, and David just wanted to make sure the game was still running OK. I am catching up on the thread and will answer some questions here in a second. For the moment, the PCBs are 100% complete and should be shipping tomorrow. This means they will probably be here Friday or Monday, and I should have all of the parts in hand too. We're still not done with the Kernal, so this is not the end of the story by any stretch, but it should be pretty close to the end of the HW specs. Attack of the PETSCII Robots on the Commander X16 Prototype 3 - YouTube Thanks, -Kevin
  4. 8 points
    Just bringing over some related fun from a Discord channel:
  5. 7 points
    The reason it hasn’t been pushed is there are some hardware things having to do with keyboard and mouse handling which haven’t been tested on real hardware yet. However the R39 is there and can be compiled, and so long as you are using Kernal routines it should work. Personally I think R39 should be pushed and if it turns out we need to make some hardware changes we can push out an R40 release. Sent from my iPhone using Tapatalk
  6. 6 points
    (someone had to do it)
  7. 5 points
    Kevin posted this on Facebook (where it drew 66 comments, a combination of smartest-people-in-the-room and snark); see the images I pasted below. He said: Hey everyone, Prototype #3 PCBs have been ordered! This board incorporates all of the fixes made to the second board, with some circuitry simplification, and the other changes I discussed in past posts. I had been holding off for a while, as we may yet use a microcontroller for PS/2 mouse and keyboard control if using the 6522 doesn't pan out. I had already added an ATTiny84 to control power on/off & reset, so I moved to an ATTiny861 on this board to add enough legs for the PS/2 ports. I then added jumpers so we can select either one for testing. This will be removed from the final board, but I did try to get the layout closer to what I feel the final will look like.

    Other changes include: I moved from a 50 pin edge connector to a 60 pin! I was not a fan of using a 50 pin port, as I was afraid folks might confuse it with an Apple II slot. Likewise, a 62 pin port is the same as an ISA card. Now, little to nothing that makes any sense will fit, and I was able to add a few more pins from the CPU to the bus. This PCB is 4-layer. I did this for a few reasons, but the primary one was to keep the noise level (i.e., RF emissions) as low as possible. It now has a proper ground plane, and while the PCB cost does go up a bit in low quantity, it's not actually too bad once you start looking at 100+ PCBs. Took a wild stab at adding some EMI protection on the PS/2 and IEC ports. Also added resettable fuses to the main PCB to limit current flow.
  8. 5 points
    Yeah, I second that. Shouldn't this board really be the primary source of info about the X16, and Facebook secondary? Facebook doesn't like me, and the feeling is very mutual.
  9. 5 points
    Just to add a little more detail: in David's first video he set the goal at under $99, but hopefully $50. While $50 may not be doable, don't rule out us making good on the first video's mention of $99 for Phase 3 (the X16E). Stranger things have happened…
  10. 4 points
    Scott! Wowzers! That's amazing to think about! Of course, there's already a lot of this sort of thing in 'the other direction' in the retro computer community. I have to take great care not to be a jerk when I see someone post on REDDIT about how their brand new VIC20 game blows away anything that ever existed at the time the machine was 'in general use', and how the original programmers must not have been any good, etc., etc.

    Of course, their new VIC20 program was put together using a cross-development environment running on a modern machine with more than half a million TIMES more transistors, repeatedly (and instantly) compiled right into VICE or whatever emulator of choice for testing. The graphics were prepared using Photoshop and Illustrator and other utilities (having a combined code size equal to many thousands of 1541 disks); and thanks to the internet, with the benefit of having access to code libraries and disassembled code from virtually every 6502-based game, app, and demo ever made!

    And don't get me wrong, I don't begrudge folks using those 'best available tools' for the job. But it's the smug and self-righteous comparisons in which they credit themselves as better programmers than the folks from 'back in the day' that tend to grind my gears. I wonder what their game would have looked like if they HAD used only the same tools as the people who were programming on the old machines in the 1980s, using simple assemblers, graph paper and colored pencils for the drawings, and (on the VIC20 at least) saving their work on cassette tapes (where just the 'save' gave you time to go out and get a cup of coffee).

    Geez. See what I mean? I can be a real "get off my lawn" old coot when it comes to that topic!
  11. 4 points

    Version 0.0.5


    BASLOAD is best described as a BASIC tokenizer. It takes BASIC source code stored on disk in plain text format and loads it into RAM. While loading the file, it is tokenized so that it can be run by the built-in BASIC interpreter. The purpose of BASLOAD is to enhance the programming experience by letting you edit BASIC source code in the editor of your choosing without using line numbers. Instead of line numbers, named labels are defined as targets for GOTO and GOSUB statements. Instructions on how to use it, along with the source code, are found here: https://github.com/stefan-b-jakobsson/basload
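    To make the label idea concrete, here is a hypothetical two-pass sketch in Python of the kind of job a label-based loader has to do: assign line numbers to the plain-text source, then rewrite GOTO/GOSUB label targets as line numbers. This is illustrative only and is not BASLOAD's actual algorithm or tokenizer.

```python
import re

def resolve_labels(src):
    """Turn label-based BASIC source into numbered BASIC (toy model)."""
    labels, numbered = {}, []
    n = 10
    for line in src:                        # pass 1: number the statements
        if line.endswith(":"):              # a label points at the next line
            labels[line[:-1]] = n
        else:
            numbered.append((n, line))
            n += 10
    out = []
    for num, line in numbered:              # pass 2: rewrite jump targets
        for name, target in labels.items():
            line = re.sub(rf"\b(GOTO|GOSUB) {name}\b", rf"\1 {target}", line)
        out.append(f"{num} {line}")
    return out

# → ["10 PRINT A", "20 GOTO 10"]
print(resolve_labels(["LOOP:", "PRINT A", "GOTO LOOP"]))
```

    The real tokenizer also converts keywords to token bytes and links the lines in RAM; this sketch only shows the label-to-line-number resolution.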
  12. 4 points
    The bus pins may be a little misleading, as some of them are labeled with the 65C816 names. I am debating exactly how to label the board, or whether I should at all. Even though the system is designed to be a 65C02-based machine, I designed it such that a 65C816 will work electrically in the board. The Kernal isn't a fan of the 816, so it would have to run a different OS, but we wanted to make sure people had the option to do what they wanted with the system. When I moved to the 60 pin slot, I had enough room to move nearly all of the CPU lines over. So, long story short, I should probably label the board based on the '02 names, but I've been focused on making sure you can still plug in an 816 with no HW mods needed other than swapping the chip, and the code of course.

    The audio lines are inputs which are sent to the audio mixer, left & right channel. I know it's unlikely there will be a need for more sound chips with the VERA and the YM2151, but I thought someone might want to add a SID, or maybe put the SAA1099 back on later, etc. It will just allow you to pump audio in from an expansion card. I moved the pins to line up with the actual layout of the CPU to simplify routing. It's the beauty of nothing being carved in stone.... Yet.
  13. 4 points
  14. 4 points
    I use Gimp, and that works just fine on any desktop platform.
  15. 3 points
    https://blog.davetcode.co.uk/post/21st-century-emulator/ My favorite / most horrifying part: (just to be clear, they're not serious)
  16. 3 points
    But popping a 65816 into the CPU socket won't add the "one extra chip", because that has to be on the motherboard. Popping a 65816 into the CPU socket in effect gives you a 65802 with slight bus incompatibilities (SYNC is replaced by VDA/VPA and, IIRC, the clock outputs are DNC and a reset input). And a bus-mastering 65816 card would be pretty much the same thing, though with more room for circuitry to bridge the bus incompatibilities ... including masking out the bank address from the data lines.

    I would be entirely unsurprised if the problem in writing to VERA is the bank on the data bus, followed by the data, confusing VERA in a way that is not an issue with the 65C02 write cycle. Nor would I be surprised if the bus cycles for some of the chips "work" with the 6502 but only just, and small variations in actual read or write delays associated with the transition between bank mode and data mode make the timing too tight to work.

    But if you can get a bus-mastering card to work, you get the pcode interpreter with the accumulator in 8-bit mode and indexes in 16-bit mode, with ops ending with JMP NEXTOP or an eight-byte NEXTOP macro:

    NEXTOP: INY : LDA 0,Y : TAX : JMP (OP1,X)

    The thing is: since it can run 6502 assembly code, can run the same pcode as the 6502 (except faster), and can host a compatible ROM BASIC interpreter (except faster), and since assembly code can test whether it is running on a 6502 or 65816, it might actually work as a 3rd-party enhancement. Then it would only break running CX16 code if people use any of the four individual-bit-addressed operations in their assembled code, so it just needs "enough" of an install base that people shy away from doing that.
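    For readers unfamiliar with threaded-code interpreters, here is a loose Python sketch of the dispatch pattern the NEXTOP macro implements in assembly: fetch the next pcode byte, use it as an index into a handler table, and jump there. The op names below are hypothetical, purely for illustration.

```python
def run(pcode, handlers):
    """Dispatch loop modeling indirect-threaded pcode execution."""
    stack, ip = [], 0
    while ip < len(pcode):
        op = pcode[ip]            # fetch the next opcode (the LDA/TAX part)
        ip += 1                   # advance the pcode pointer (the INY part)
        handlers[op](stack)       # dispatch through the table (JMP (OP1,X))
    return stack

OPS = {
    0: lambda s: s.append(1),                    # PUSH1 (hypothetical op)
    1: lambda s: s.append(s.pop() + s.pop()),    # ADD   (hypothetical op)
}

print(run([0, 0, 1], OPS))  # → [2]
```

    On the 65816 the win comes from doing the fetch/index/jump in a handful of 16-bit-index instructions instead of the longer 6502 sequence, which is why the post expects the pcode to run faster there.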
  17. 3 points
    16-bit rotations are easily doable without loops because the bit that rotates “off” the register rotates into the carry flag, and the carry flag rotates into the register. So say you have a 16-bit unsigned variable held at myword, and you want to multiply by 4:

    ASL myword
    ROL myword+1
    ASL myword
    ROL myword+1

    Do it a third time for myword * 8. ASL always shifts a zero into the LSB. ROL shifts the carry bit into the LSB. (That's why it's ASL on the low-order byte.) So to finish your formula, after the three shifts:

    LDA #$10
    CLC
    ADC myword
    STA myword
    LDA myword+1
    ADC #$FC
    STA myword+1

    This can be done more efficiently, but I wanted to use the most straightforward methodology as the example.
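    The shift-and-carry mechanics above can be checked with a quick Python model that tracks the carry flag between the two bytes the way the 6502 does; this is just a sanity-check sketch, not production code.

```python
def asl_rol(lo, hi):
    """One ASL (low byte) / ROL (high byte) pair on an 8-bit split word."""
    carry = (lo >> 7) & 1              # ASL: bit 7 of the low byte -> carry
    lo = (lo << 1) & 0xFF
    hi = ((hi << 1) | carry) & 0xFF    # ROL: carry -> bit 0 of the high byte
    return lo, hi

def times8(word):
    """Three ASL/ROL pairs = multiply a 16-bit value by 8 (mod 2^16)."""
    lo, hi = word & 0xFF, (word >> 8) & 0xFF
    for _ in range(3):
        lo, hi = asl_rol(lo, hi)
    return lo | (hi << 8)

print(hex(times8(0x0123)))  # → 0x918
```

    Note that a bit shifted off the top of the high byte is simply lost, matching the 16-bit wraparound you get on real hardware.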
  18. 3 points
    Though it doesn't have to be future-proof ... it's only a bridge until R39 becomes the baseline release. That's the part that has me in suspended animation ... I am not interested in putting code into a version test that I am going to want to strip out again once R39 is the public release.
  19. 3 points
  20. 3 points
  21. 3 points
    I couldn't agree more.
  22. 3 points
    Thank you. The change to the bank registers alone is more than enough reason to publish R39. We all know and accept that the hardware will not perfectly match the emulator, but I think it's still important to stay up to date with the latest ROM changes and hardware changes we do know about. Combined with the other changes, it's more important at this point to have the latest code than the "best" code, IMO.
  23. 3 points
  24. 3 points
    I've played with my new The C64. It's not bad. Either the lag on the USB joystick makes games I used to play harder, or my age does. But I don't feel older most of the time, so it must be the hardware.
  25. 3 points
    When I look at the latest board, posted just a few hours ago, we can find a socket labeled "YM2151" at the bottom right. So I'm pretty sure the YM2151 will be in the final design - as is the VERA PSG/PCM. Personally, I'm pretty pleased with that.
  26. 3 points
    The last prototype had a YM2151 and both PSG and PCM in the VERA, just like the emulator. The SAA1099 is no longer on it, and was never emulated anyway. Kevin made no mention of any changes to the sound for the third prototype, so it looks pretty set right now. The FAQ is a bit ambiguous, mostly due to staleness, just like the 8BG videos are very old now, the last coming out right after the first prototype was up and running. The only difference in the sound since the R38 emulator is a change to the addresses for using the YM2151, which is reflected in the most recent commit to the repo, which will eventually become R39.
  27. 3 points
    Seen these and thought they were funny.
  28. 3 points
    I believe we're talking about "The Bard's Tale". At least I hope so, or it's going to be a Picard Facepalm moment for me. Excellent game series! I'm currently replaying it, the remastered trilogy version. I had wanted it for some time now and it just went on sale in the 2021 Steam Summer Sale, discounted to $3.74. Totally worth it!
  29. 3 points
    I did this for Dungeon Master, Dungeon Master 2, Eye of the Beholder 1,2 and 3, and Black Crypt. The spinner tiles were HELL.
  30. 2 points
    Awesome! Kevin, I'd say don't hesitate to lean on the community too: if you have a particular list of things you need (FPGA etc.) to get proto stuff done, post a list and let folks see what they've got (or can leverage connections to get). Obviously it won't be the endgame supply chain, but if you need something (e.g., to get VERA 4 onboard and rocking), the hive mind might be in a position to help. As for myself, er... well, I've got a big bag of blue LEDs somewhere around here.

    In all seriousness, thanks to you and everyone on the team for plowing through all the things with COVID, the global supply chain weirdness, and the online bikeshed factory to get things this far. Can't wait for the next steps!
  31. 2 points
    Your mindset seems really sensible. Even if further tweaks are necessary, a release would take the community off 'pause' (avoiding any potential fall off of enthusiasm), and also acknowledge the value/importance of the work by the team members who have put so much time into the updates from r38 to r39.
  32. 2 points
    I don't have much to show for it yet, but I did start work on the copy/paste - or more specifically the block selection routines. CMD-B and CMD-E now mark the start/end of the block, and I have started work on highlighting the blocks in the pattern view itself. I plan on getting all that working before I tackle operations on the block.

    Marking the start/end of the block is easy enough - but drawing the highlighted area is much less trivial. This is partly because a single pattern doesn't fit in VRAM, so when moving around channels, the blocked area has to get redrawn depending on what part of the pattern is in VRAM (specifically, VRAM can hold all the rows, but not all the channels). To solve this I'll probably have a function that evaluates the start/end areas and highlights any part of that currently in the view, which I can call whenever I need to.

    This is all to mark a block. Once a block is marked, I'd like to have multiple commands to operate on the block. This will include copy/paste but also can include other things like changing the octave of all the notes in the block, or inc/dec'ing the notes, etc. The copy/paste itself is still a bit of a mind exercise, but I'll worry about that once I have block selection actually working.

    Another change is I broke the edit_pattern module out into sub-modules of its own, as it was getting pretty hairy having everything in one big file. I like this approach and will be breaking out the other routines into separate files to break edit_pattern up into bite-sized chunks.
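    The "highlight only what's in view" helper described above boils down to an interval intersection. Here is a minimal Python sketch under assumed names (the function and its parameters are hypothetical, not from the actual tracker code): given the marked block's channel range and the channel window currently held in VRAM, return the slice that needs highlighting, or None if the block is entirely off-screen.

```python
def visible_block(block_start, block_end, view_start, view_width):
    """Intersect the marked block [block_start, block_end] with the
    visible channel window; both ranges are inclusive column indices."""
    view_end = view_start + view_width - 1
    lo = max(block_start, view_start)
    hi = min(block_end, view_end)
    return (lo, hi) if lo <= hi else None

print(visible_block(2, 10, 4, 4))  # → (4, 7)
```

    Calling this on every scroll keeps the redraw logic in one place, which matches the plan of having a single function to re-evaluate the highlight whenever the view moves.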
  33. 2 points
    And Bender! Head canon: With all the issues found with advanced CPU techniques, bugs in CPUs, security issues due to inter-core spying, etc etc etc, they decided to go to the best CPU that would be known secure where any defects were well documented. Yeah, that's it.
  34. 2 points
    No, because the instruction set simply doesn't have the needed operations. As was mentioned above, there's no integer divide or multiply, let alone doing floating point math or the SSE instructions that operate on 4 integers at a time. Scripts are a maybe, but again, note the performance numbers Scott pulled out above. A 3GHz 6502 would be running at the equivalent speed of a 100MHz x86 - but without the math coprocessor, or even multiplication or division. If you think back to the 100Mhz days - yes, you can absolutely run a web browser on a 100MHz computer, but it's going to be much slower than a modern PC, and anything involving fancy math (such as decompressing JPG and PNG graphics) is going to be slooooooooooooooooooow.
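    To illustrate what "no multiply instruction" costs in practice, here is a Python model of the classic shift-and-add routine a 6502 program has to use to synthesize even a 16-bit multiply; this is a sketch of the general technique, not any specific ROM routine.

```python
def mul16(a, b):
    """16-bit multiply by shift-and-add, as a 6502 would do in software."""
    result = 0
    for _ in range(16):
        if b & 1:                        # low bit set: add shifted multiplicand
            result = (result + a) & 0xFFFF
        a = (a << 1) & 0xFFFF            # shift multiplicand left
        b >>= 1                          # consume one bit of the multiplier
    return result

print(mul16(123, 45))  # → 5535
```

    Sixteen iterations of test/add/shift is dozens of instructions per multiply, versus a single-cycle MUL on a modern x86, which is the gap the post is pointing at.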
  35. 2 points
    Alas, the best I could come up with on my own were 1990s games or later. To supplement your list: Dragon Quest V (1992): the protagonist marries one of three different women (the player chooses which). Super Mario RPG (1996): an antagonist named Booster attempts to marry Princess Toadstool; the ceremony is broken up at the last second. Final Fantasy X (2001): Summoner Yuna agrees to marry Seymour Guado, but it turns out to be a ruse that goes wrong, and she is forced to flee from the ceremony at the last second. And the closest a quick internet search could get me was Phantasy Star III (1990), where the main character can marry one of two women in the game.
  36. 2 points
  37. 2 points
    It bears keeping in mind that it's not unusual for standard government contracts to give preference to parts that are available from more than one supplier, so lining up a different provider as a second source of a compatible part can be a useful step toward landing a large government order. In that case, rather than "hey, why are you selling the customized design I ordered from you?", it can be, "c'mon, how soon am I going to be able to show that you are also selling the customized design I ordered from you?"
  38. 2 points
    Email sent. Also, first post!
  39. 2 points
    Seems to be straightforward (said the blind man). Thanks. I will look into it.
  40. 2 points
    As far as creating new computers from scratch goes, this is breakneck speed. I've seen a few crowdfunded efforts, and this is going pretty well, all things considered. It's certainly going faster than the Mega 65 at a similar point in its development. It's been a bit over two years since David's announcement that he was going to build this computer... I honestly don't see how anyone can criticize the timeline, considering everything that's happened over the last year alone.

    That has been mentioned in another thread, here. They will be running a beta test, but they are selecting testers behind the scenes. If I were a betting man, I'd say people who have actually written software for the system are likely to be at the front of the line for a beta unit. There are a few forum members here who have already written text editors, assemblers, a completely new machine monitor/debugger, and some games.

    They all have jobs. And their day jobs take precedence. We've also just had a worldwide health scare that has put everything behind by months, if not years... I'm not surprised at some delays. My concerns are actually on the software side, as we still don't have an official release of the latest emulator, and there's still a ways to go before that's complete. However, we can't expect one man to do it all, and there are plenty of tasks people could tackle and submit as pull requests to GitHub.

    The forum is the official source for communication with the team. Other social media outlets are there for people to communicate with each other, but Perifractic (as the de facto front man for the team, at this point) has committed to announcing things here first and using this web site as the development hub for the system. And that has certainly been working. There has been a lot more technical and effective conversation here than on Facebook, which is a terrible way to organize information.

    Hardly. The current design is very much what David proposed in his manifesto, just over two years ago. It's a real 6502 CPU, a VGA-quality display, and a couple of audio chips with FM and simple "beep boop" synthesis. From where I sit, the Commander X16 is exactly what David wrote about back in 2018 and 2019. The original post is here: https://www.the8bitguy.com/2576/what-is-my-dream-computer/ and the "part 2" where he announces he's going to build his own computer: https://www.the8bitguy.com/3543/my-dream-computer-part-2/
  41. 2 points
    Oops, forgot Episode 3: And now, Episode 4 is out:
  42. 2 points
    I would still like to know why this isn't the primary location for X16 info. Facebook is draconian in its account rules and fast and loose with that data once they have it. I refuse. If it's just me, so be it, but it's pretty disappointing.
  43. 2 points
    The biggest problem, I think, would be memory access time. Modern CPUs have complex fetching and caching schemes to pull lots of memory into the CPU at one time, and lots of units running in parallel to keep the CPU busy at all times. Whenever an x86 CPU has to access RAM, it potentially has to slow down to wait for the memory request to be fulfilled.

    This is the primary reason why a 1 MHz 6502 was comparable to a 4.77 MHz 8088 for certain types of processing. If you could keep an 8088 busy with data already loaded into registers, it would potentially be a lot faster than a 6502 (depending on the instructions), but any time it had to go to memory it took 4 cycles, and since it was a 16 bit CPU with an 8 bit bus, it took 8 cycles to load a word.

    I would love to see a super fast 6502, but since the model requires a memory access (or more) with most instructions, it will always be limited to the speed of RAM access. Now, modern RAM can be accessed very quickly compared to the Good Old Days, but each access is (as I understand it) accessing 64 bits at one time. So the interface between the CPU and RAM has to be able to deal with the bits per access. A 6502 is built around 8 bit bytes, so either the CPU has to be rearchitected to deal with more bits per memory access, or a shim interface would have to be inserted between the CPU and RAM to mask out just the bits of interest, which would slow things down.

    http://forum.6502.org/viewtopic.php?f=1&t=6049 is a forum post that talks about the theory and practice of what 6502 architecture speeds have done. Most notable is the quote (if accurate): So it isn't thought impossible by experts, but someone has to want to do it to make it happen. Clearly the market for it isn't there, or else it would have been done already (most likely).
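    The 6502-vs-8088 comparison above can be put in rough numbers. This is back-of-envelope arithmetic using only the figures in the post (one bus access per clock on the 6502; 4 cycles per byte, 8 per word on the 8088), ignoring instruction overlap and everything else that complicates a real comparison.

```python
# Raw memory bandwidth implied by the post's cycle counts.
mhz_6502, mhz_8088 = 1.0, 4.77
bytes_per_sec_6502 = mhz_6502 * 1_000_000        # 1 byte per cycle
bytes_per_sec_8088 = mhz_8088 * 1_000_000 / 4    # 4 cycles per byte
ratio = bytes_per_sec_8088 / bytes_per_sec_6502  # how much faster the 8088's bus is

print(round(ratio, 4))  # → 1.1925
```

    So despite a nearly 5x clock advantage, the 8088's memory bandwidth works out to only about 1.2x that of the 1 MHz 6502, which is the point the post is making about bus-bound workloads.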
  44. 2 points
    One simple approach is to use the secondary address as a generic index to a protocol string, and send the protocol string along with the secondary address it is associated with on the command channel. Supposing the device is device #14:

    OPEN 15,14,15
    PRINT#15,"SA2:P80"
    CLOSE 15
  45. 2 points
    Pointers are, at the same time, the best and most terrifying thing about C. Here's my favorite measure of code quality:
  46. 2 points
    Now that Core War is 1.0.0, I've started thinking about Pirate Kingdoms of the Rhylanor Coast. In C, of course.

    I. A PRETTY MAP
    First, I need a pretty map. The current map is ugly. So I need rounded coastlines. OK.

    II. AN ECOLOGY
    Settlements. Depending on the type of land and size of settlement, they may grow, have Food (smaller settlements), Gear (larger settlements), and Ships (on the coastline). Settlements interact with local flora, fauna, and other Settlements. Their size and gear rating determine the Settlements' radius of influence. Thus Cities with excess Gear have the largest influence and can be very powerful.
    Fortresses. A special kind of Settlement geared for defense.
    Gear is an economic multiplier. It stretches out your Food supply. It makes you more effective in battle. Gear builds and maintains Civilizations. A prolonged loss of Gear production causes Settlements to fail and Civilizations to fall. An established City may revolt and break off of a Civilization if there are prolonged problems.

    III. PLAYER ACTIONS
    Your Group can:
    - barter with Settlements for manpower, food, gear, ships -- if the settlements don't have a bad opinion of you. Or attempt to plunder them.
    - establish Settlements with an initial investment of manpower, food, gear, and ships.
    - be a "Primitive Wandering Bandit team", subsisting on trade and plunder.
    - be a "Leif Ericsson" band, arriving in ships to settle in an unknown land.
  47. 2 points
    --continued-- In the prior posts we found a Plus/4 graphics program and made the changes to adapt it to the X16 BASIC bitmap graphics commands (OK, and yes, a bit more fiddling to get it working). Our original X16 adaptation took 11 minutes, 4 seconds to plot its output. In a first round of optimizations, we hit the low-hanging fruit (organization, simple math/efficiency tweaks, and parser tricks) to get the plotting time down to 10 minutes, 6 seconds. In my last post, we did a second optimization pass. This time we really pulled out all the stops, cutting the plotting time down to about 7 minutes, 46 seconds. To get there we did more aggressive things like taking out an ugly/slow scaling operation, evicting expressions/parts of expressions from the inner loop, and changing our bounds-check branching in a way that was conscious of the two very different scenarios we had previously folded into a single conditional branch.

    Now it's time to finally talk about how the (Commodore 64 based) X16 BASIC interpreter uses simple variables like those throughout our program. This will reveal a couple of ways to shave some more runtime off our little program before giving it a final polish. For this discussion, we will focus on 'scalar' variables (sometimes called 'simple' or 'regular' variables). I'll call them 'simple' variables here. We'll skip arrays for now, since we don't have any in our program. Simple variables in BASIC are named storage references such as "A", "X1", "G$", or "I%" that can hold a single value in the form (floating point, integer, or string) corresponding to the variable's designated type. Floating point variables have no additional designator after the name. Integer variables have a percent sign '%' type designator. String variables use the dollar sign '$'.
    Under the hood, each simple variable that is 'in use' (i.e., it has been initialized by having its value set) gets its own 7 bytes of memory at a location starting just beyond where the BASIC program listing ends in memory. The first two bytes of each 7-byte sequence hold that variable's name (with high bits added or not to designate the type). Some or all of the next 5 bytes hold the variable's assigned value in the case of a numeric variable, or, if it's a string variable, a pointer with the memory address of the actual string and a byte holding the string's length.

    Each time the BASIC interpreter is asked to assign a value to a new variable as program flow proceeds, it allocates the new variable its own 7 bytes, which are stored in memory after all the 7-byte sequences previously assigned to other variables. When the value of a variable changes, it continues to use those same 7 bytes of storage at the same memory location where they were first placed. There's no 'deletion' of a variable. Even if you assign one a null value, it will continue to use those 7 bytes right where they were first allocated. (You can only use 'CLR' to clear ALL variables.) As noted earlier in the thread, the 'DEF FN' function assignment uses regular variable storage. The function itself gets 7 bytes in simple variable space to hold its pointer and some other stuff; and the 'placeholder' (sometimes called the dependent variable) used in the 'DEF FN F ([placeholder])' statement also gets its own additional 7-byte allocation within the simple variable space.

    When BASIC is asked to do something with a variable (such as 'POKE A, 127' or 'A=1.61803399' or 'PRINT A*A') it goes on a little scavenger hunt. It begins with what it knows to be the 16-bit memory address marking the start of simple variable storage and evaluates whether the first two bytes it finds there are the variable name/type it's looking for.
    If not, it skips forward to the next group of 7 bytes; reads/evaluates the 'name/type' bytes there to see if this time it's found the one it wants ... and so on, and so on, and so forth. The interpreter only knows it's done searching when it either finds the variable or gets to the memory address it knows is supposed to be the start of the next category of information in memory (i.e., the beginning of array space).

    To appreciate the implications of this way of fetching and storing variables, I want you to imagine you purchased a really REALLY cheapo cell phone. It has no alphabetization or search function for the contacts list. Each contact you add just goes into its own entry with the contact's name on top, and the contacts can be displayed, one contact per screen, and paged through only in the exact order you put the contacts in. So every time you want to call or text a contact, you begin at the start of the sequence and then flip through each and every entry, one at a time, saying to yourself "nope, nope, nope, nope, nope, nope,..." until you see the name of the contact you're looking for (or get to the end of the series and decide you need to add the person as a new contact). The more contacts you put into that clunky contact list, the more work it is to flip to the entries for the most recent additions.

    THAT is how BASIC's interpreter has to fetch and store variables. Each new variable that gets added (in the order encountered during the flow of program execution) is slower to use (fetch/store) than all the variables initialized earlier in the run of the program. Now, of course, the internal machine language routine the BASIC interpreter uses to flip through the variable storage area and find the one it's looking for is fast, and probably benefits a LOT from the extra CPU cycles on the X16. But it is STILL slowing things down.
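    The scavenger hunt is a plain linear scan, so its cost is easy to model. Here is a toy Python sketch (the variable names are hypothetical): the interpreter scans the 7-byte slots in the order the variables were first initialized, so later variables cost more on every single fetch and store.

```python
def slots_scanned(init_order, name):
    """How many 7-byte slots the interpreter examines to find `name`,
    given the order in which variables were first initialized."""
    return init_order.index(name) + 1

init_order = ["I", "X", "Y", "Q", "X1", "Y1"]   # hypothetical program

print(slots_scanned(init_order, "I"))   # → 1
print(slots_scanned(init_order, "Y1"))  # → 6
```

    In this model a reference to 'Y1' does six name comparisons where a reference to 'I' does one, and that penalty is paid on every fetch and store, which is exactly why initialization order matters in the optimization discussed next.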
With this in mind, is it any wonder that getting rid of the 'X1' and 'Y1' calculations from that problematic scaling operation in my last post gave such good results in terms of speeding things up? Not only were those variables the slowest ones (introduced last in the program flow), but they were part of expressions that called other 'late in the game' variables, and then 'X1' and 'Y1' were used in bitmap 'PSET' and 'LINE' commands that required multiple fetch operations of each of them.

Now that we know the problem, let's go forth and optimize some more. The solution is easy and can lead to further opportunities:

H. Initializing variables in order of frequency of use, particularly within the inner loop.

Ideally, you would do this after you optimize everything else (so you don't have to keep re-tallying as you change things). But I often can't resist the temptation to do it earlier in the process, even if it means I'll have to further tweak the order of initializations again later. The tallying process is easy. Here's a screenshot:

As you can see, I went over the listing and changed the color of the variables to be tallied, for emphasis. Then I listed them out and counted them up with tick marks. By the way, if the inner loop is a 'FOR/NEXT' structure (as here), and variables are passed to the 'FOR' statement for the 'STEP' and 'TO' parameters, then you should not count those instances as being in the inner loop. The values of those parameters are just pushed onto the BASIC stack when the 'FOR' is executed, and the variables used to pass them are not fetched again (or altered) during loop iterations (in contrast with the indexing variable, which is updated each time the loop runs).

Now that we have the count, we will 'assign' something to (and therefore initialize) the variables in order of priority at the very beginning of our program. Here, we can use line 1, which has plenty of room.
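If tick marks aren't your thing, the tally can also be automated. This is a rough sketch of my own devising, not a proper BASIC tokenizer; the regex and the keyword list are approximations that are good enough for eyeballing a short line:

```python
# Rough tally helper: count variable references in a line of BASIC source.
import re
from collections import Counter

KEYWORDS = {"FOR", "TO", "STEP", "NEXT", "IF", "THEN", "GOTO",
            "COS", "SIN", "SQR", "PSET", "LINE"}

def tally(line: str) -> Counter:
    # BASIC variable names here are 1 letter plus an optional digit;
    # longer alphabetic runs are keywords/functions and get filtered out.
    tokens = re.findall(r"[A-Z]+[0-9]?", line.upper())
    return Counter(t for t in tokens if t not in KEYWORDS)

c = tally("5 Q=SQR(J+I*I)*D: S=C*(COS(Q)+COS(Q+Q)+COS(5*Q)):Y=G-(S-T+F)")
assert c["Q"] == 5 and c["S"] == 2 and c["I"] == 2
```

Run over the inner-loop lines, the counts give you the initialization order directly: most-referenced variable first.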
We just set all those variables to zero, sequencing them from most frequently used to least frequently used in the inner loop. Since we previously learned that a lone decimal point '.' parses as the zero value, we'll do that here just for style points (and as something of an homage to the spirit of Jim Butterfield and the other Commodore wonks who figured this stuff out). If you have a 'tie' in any of your tallies (two variables with the same fetch/store count in the inner loop), you can break the tie by considering how many additional times, if any, each of those variables is used elsewhere in the program.

As for variables destined to act as constants in the original initialization, those that participate in the inner loop should be placed in the preliminary initialization sequence according to their inner loop tally. They will still be set to their intended constants when the interpreter gets to that part of the program initialization, but they'll also retain their 7-byte spot in memory based on the priority order we selected when we first initialized them.

Here's our resulting listing:

And the resulting runtime:

Nice! Nearly 40 more seconds of time savings! That 7 minute mark is so close!

I. Getting Wonky.

Now that we know our fastest variables (so far), we can consider whether there are places where we can take advantage of one or more of them as sort of pseudo scratchpad registers within the inner loop. This will not be possible in every program, and it takes some thinking (and maybe changing things in a way that might require you to tweak your tallies). As you can see from the last listing just above, our 'Q' variable is currently used extensively in the inner loop. One expression sets the value of 'Q' and MUST be done separately to set an intermediate value.
The reason is that the resulting value is then used 4 times in the stack of Cosine functions, and it would not be 'expression simplification' by any means to copy the expression that sets 'Q' into every other place 'Q' is later used in the loop. But we can speed this up a bit.

We know that after the value of 'Q' is set, its only further use within the inner loop is to be fetched (read) as part of the Cosine stack; 'Q' is not further modified. We also know that the 'S' variable gets assigned the result of that Cosine stack (as modified by the '*C' operation we moved into that expression in a prior optimization). Then, finally, 'S' is used in an addition/subtraction expression to derive and assign the final vertical pixel coordinate to variable 'Y'. This means that 'Q' and 'S' are just temporary holding places for values on the way to getting the final 'Y' value in each iteration of the inner loop. Significantly, 'Y' is not used any earlier within the inner loop, and indeed 'Y' does not participate in any prior calculations. 'Y' is free until it is assigned near the end of the loop.

Here's what we're going to do: we will throw out the 'Q' and 'S' variables. In the expression that currently assigns a value to 'Q', we will temporarily assign that value to 'Y' instead. Then, in that stack of Cosines, we will put 'Y' (with that temporary value) in place of 'Q' at all instances. And, instead of assigning the result to 'S' and then doing one more calculation to derive the final vertical coordinate, it looks to me like we can also fold that final 'Y=G-(S-T+F)' expression right into the one with all the Cosines. We will assign the outcome of the resulting combined expression BACK into the 'Y' variable. We go from:

5 Q=SQR(J+I*I)*D: S=C*(COS(Q)+COS(Q+Q)+COS(5*Q)):Y=G-(S-T+F): IFY>GTHEN7

to

5 Y=SQR(J+I*I)*D: Y=G-(C*(COS(Y)+COS(Y+Y)+COS(5*Y))-T+F): IFY>GTHEN7

Does that change and the reuse of 'Y' in such a way freak you out? It shouldn't. REMEMBER!
It's OK to have the variable name on both sides of the '=' sign in a variable assignment (e.g., 'Y=Y+1'). The interpreter knows that what you mean is "fully evaluate the expression on the right of the '=' sign using the CURRENT value of 'Y', and put the result of evaluating the expression back into the same variable, 'Y', at the end of the process." The old/original value participates on the right side, and the assignment of the new value occurs only after the old value held by the variable is done being used.

We eliminated 'Q' and 'S' from the entire program, so obviously we strike them out of the initialization sequence we previously put in line 1. Redoing our inner loop variable tally, we see (unsurprisingly) that 'Y' is now the most frequently used variable, since we're treating it as something akin to a 'scratch' register on the way to deriving its final value. So 'Y' now gets moved to the very front of the initialization sequence, making it the fastest variable. All those times a fetch is necessary for 'Y', the interpreter will find it immediately without having to do any of the 'nope, nope, nope' stuff. I am not sure I am explaining this very well, but I believe it will ultimately give us another time savings -- and of more than a few seconds. (I really REALLY want to get this below 7 minutes runtime!)

Let's 'LIST', 'SAVE' and 'RUN' to see what we accomplished.

And there we are! We've done it. A total time below 7 minutes. Fairly impressive, considering we started at over 11 minutes and our goal at the start of optimization was to get down to just under 9 minutes. And by the way, I can't emphasize this enough: it took ever so much longer to write it up than it took to actually come up with the changes made throughout this thread.

J. Putting on a final polish!

Before wrapping this up, I'm doing a final polish, which should give just a hair more speed and really finish up the adaptation / conversion / optimization.
I'm getting tired of typing, so let me briefly summarize the following listing:

- I gave the program a final crunch. I squeezed as much into certain lines as possible, taking care to fix any line number references that needed to change. The real goal here was to get the lines comprising our outer and inner loops down to 4 total lines instead of 5. Even if I crunch it differently, the two conditional branches in the inner loop won't let us get the number of lines used by the loops any smaller. Still, there should be a small time savings.

- I've assigned variables 'L' and 'K' in place of the '.5' and 'A*A' parts of expressions previously used in the outer loop. This is not for speed (they're not in the inner loop), but for space savings, so I could squeeze our 'X' bounds check in at the end of the crunched line 3.

- I've flip-flopped the branch condition and structure of the 'Y' bounds checking. The reason is that the most common outcome of that evaluation will be "yeah, 'Y' is fine, now proceed to plotting the pixels", and it's ever so slightly faster to put the plotting commands immediately after the 'THEN' rather than (a) having program flow fall through to the pixel plot when the coordinate is OK, and (b) executing a jump 'over' the pixel plot routine on a separate line number when it's not. So instead of testing whether 'Y' is greater than 199 ('Y>G') and, if so, branching so as to skip the plotting, we now test the opposite ('Y<G') and, if so, plot. Probably just a minor benefit, if any, but that's what I did.

- Beyond printing elapsed time to the screen, I've had it display a name for our program when it's done plotting. My college-aged daughter claimed the naming rights. She has dubbed this routine 'The Proteus Oscillator' because it "looks like a water god is making waves..." (She also challenged me to make a Python version. This may present a problem, as I don't have expertise in Python. But I'm going to give it a try in the next few weeks, I guess. Why not?)
- Finally, I added a few lines with a 'blurb' at the end of the program listing. Without even using 'REM' statements! Since program flow never reaches these lines, it's not a problem. Of course, there's the obligatory shout-out to the magazine that was the original source of the program. They never credited the original author by name, so I can't either.

OK, that's all, folks. Sorry this thread turned out so very verbose. I have been trying to target a particular level of 'nascent enthusiast' who is just starting with the X16 and playing with BASIC. If anyone finds this of value, it will have been worth it. One final screenshot with the output of the program in its final form. Cheers!
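P.S. As a tiny head start on that Python version, and as a sanity check that the expression folding in section I was value-preserving, here's a quick sketch. The specific numbers are hypothetical stand-ins for the program's variables; any values work the same:

```python
import math

# Hypothetical stand-in values; the real program recomputes these each iteration.
J, I, D, C, G, T, F = 200.0, 3.0, 0.25, 30.0, 199.0, 10.0, 5.0

# Original two-step version, with intermediates Q and S:
Q = math.sqrt(J + I * I) * D
S = C * (math.cos(Q) + math.cos(Q + Q) + math.cos(5 * Q))
y_two_step = G - (S - T + F)

# Folded version, reusing Y as its own scratch register:
Y = math.sqrt(J + I * I) * D
Y = G - (C * (math.cos(Y) + math.cos(Y + Y) + math.cos(5 * Y)) - T + F)

assert abs(Y - y_two_step) < 1e-9  # same vertical coordinate either way
```

Since S = C*(...), substituting gives G - (C*(...) - T + F), which is exactly the folded one-liner; the rewrite only changes where intermediate values live, not what gets plotted.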
  48. 2 points
    I always considered myself pretty knowledgeable when it came to the C64, though never claiming to be an expert by any means. That being said, in all my years, I have never seen a C64 split into two parts. Never knew this existed. Anyone else seen these? Found one browsing eBay just now. https://www.ebay.com/itm/184915685522?hash=item2b0dd57092:g:4LIAAOSwnb1g3dC9 I would like to see this system. Not sure why, but it fascinates me. Edit: Added photos from the auction, since it will eventually be taken down after being sold and the link will no longer work.
  49. 2 points
    Rethinking this, since the cable goes inside the mainboard case it wouldn't easily be connected/disconnected to several of these in some commercial application, so I think that's out. Rather, I think some user didn't like all the cables/attachments coming out from the keyboard. They'd rather have the single ribbon cable going from keyboard to mainboard, then all the spaghetti from there. And I'd say they located it to the right (not directly behind, as in the picture) by the way the ribbon cable goes from right side of keyboard to left side of mainboard unit. How they found the case(s) that matched well enough as it does is beyond me. The rear notches on the foreign case don't even line up top to bottom as far as I can tell, so I think it was 2 other cases. Edit: Actually, I think the 2 pieces of "foreign" case are identical, just flipped, one's the bottom of the keyboard, the other is the top of the main board.
  50. 2 points
    I have started a new tutorial series on YouTube to teach you how to program in Assembly Language for the X16 and other 6502-based systems. The first lesson -- an introductory overview of the basics -- is now on YouTube: