Everything posted by TomXP411

  1. This obviously could use a helper function to convert the increment, bank number, and channel number to the correct data values to go in the structure. Using this structure is only mildly better than simply brute forcing it, as you did with your first function (although it needs a bank parameter, which I'm sure you already know.) Remember, there are 2 data channels in VERA, and using both of them together is a good way to solve certain problems. So that bit needs to be added to any routine that builds the bit masked control data for writes to or reads from VERA.
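For what it's worth, here's a hedged sketch in C of the kind of helper I mean, based on the register layout in the VERA reference: the increment index in bits 7-4 of ADDR_H, the decrement flag in bit 3, VRAM address bit 16 ("bank") in bit 0, and CTRL bit 0 selecting which data port (DATA0/DATA1) the address registers refer to. Double-check the bit positions against the current VERA docs before relying on it:

```c
#include <stdint.h>

/* Packs VERA's ADDR_H byte ($9F22), assuming the documented layout:
 * bits 7-4 = address increment index, bit 3 = decrement flag,
 * bit 0 = VRAM address bit 16. Verify against the VERA reference. */
static uint8_t vera_addr_h(uint8_t inc_index, uint8_t decrement, uint8_t bank)
{
    return (uint8_t)(((inc_index & 0x0F) << 4)
                   | ((decrement & 0x01) << 3)
                   |  (bank      & 0x01));
}

/* CTRL ($9F25) bit 0 (ADDRSEL) selects which of the two data ports
 * the ADDR registers currently address. */
static uint8_t vera_ctrl(uint8_t addrsel)
{
    return (uint8_t)(addrsel & 0x01);
}
```

Something like this keeps the bit twiddling in one place, so code that uses both data ports at once only has to flip ADDRSEL.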
  2. Wow. I get more excited every time I see one of Kevin’s posts. The parts availability thing has me worried, though. I know it will only be a matter of time before suppliers catch up on the FPGA and other parts that are in short supply, but it’s frustrating to see this happening right now and being unable to do anything about it. (Of course, I can’t buy an NVidia RTX 3080 for love or money, either, so….)
  3. It's not just a matter of pressing a button on GitHub. He has to separately compile and package the release for each platform... not that it should be that hard to do, assuming the hard work is already scripted. Having said that... I've compiled it on several platforms, and it only takes a few minutes, once you've got the environment set up. Here is the Windows version: https://www.commanderx16.com/forum/applications/core/interface/file/attachment.php?id=1154 If you're on Mac or Linux, it should compile pretty much directly from the GitHub master. Windows is the only one that's difficult; I had to cross-compile it on Linux to make it work.
  4. Thank you. The change to the bank registers alone is more than enough reason to publish R39. We all know and accept that the hardware will not perfectly match the emulator, but I think it's still important to stay up to date with the latest ROM changes and hardware changes we do know about. Combined with the other changes, it's more important at this point to have the latest code than the "best" code, IMO.
  5. ... and requires you to do a CLC or SEC every time you do simple math, unless you already know the state of the carry flag. This is exactly the kind of tradeoff you get when you simplify a system to save money... but obviously the result was worth it.
  6. This right here is why the 6502 was so popular. It was cheap, compared to the 8080, its best competition when it was first created. The 6502 cost $25 in 1975, compared to $360 for the 8080 at launch. Even if the 8080 went down in price over the two years between its release and the 6502 launch, I still doubt it went down to $25. Not even close. So when Apple and Commodore both set out to release an inexpensive home computer, it's no surprise they went with the MOS 6502, rather than the Intel 8080. Obviously, Intel's strategy won out, but that's mostly due to the success of PC clones, rather than the intrinsic merit of the processor. I actually do think the 8080 was a better CPU, but was it 7 times better? With the price of the two processors, I'd have made the same decision as Tramiel, Woz, and Jobs back in the 70s.
  7. There's actually a text file in the emulator directory that shows all of the ROM variables. They are the SYM files: BASIC.SYM, KERNAL.SYM, and so on. I've been contemplating writing a simple script to pull those into a project template, and to update my templates and projects automatically when the ROM gets updated. Anyway, address $3E2 is VARTAB in R38. I haven't checked in R39, but you should always be able to find it in the BASIC.SYM file…
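If anyone wants a starting point for that script idea, here's a rough sketch in C. It assumes the .SYM files use the VICE label format (lines like `al C:03E2 .VARTAB`); that format is an assumption on my part, so check the actual files first:

```c
#include <stdio.h>
#include <string.h>

/* Looks up a label in a VICE-style symbol file. This ASSUMES each
 * line looks like "al C:03E2 .VARTAB" (or "al 03E2 .VARTAB" without
 * the bank prefix) -- verify against the real BASIC.SYM before
 * relying on it. Returns the address, or -1 if not found. */
static long find_symbol(FILE *sym, const char *name)
{
    char line[128];
    char label[64];
    unsigned addr;

    while (fgets(line, sizeof line, sym)) {
        if (sscanf(line, "al C:%x .%63s", &addr, label) == 2 ||
            sscanf(line, "al %x .%63s",   &addr, label) == 2) {
            if (strcmp(label, name) == 0)
                return (long)addr;
        }
    }
    return -1;
}
```

From there it would be easy to emit an include file of constants for whatever assembler or compiler the project uses.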
  8. You already have all the information you need to do the math. 1080i60 is the current broadcast standard (although I suspect studios are producing video internally at 1080p60 or 2160p60). 1920 x 1080 is 2,073,600 pixels, or 6,220,800 bytes per frame at 3 bytes per pixel. At 30 full frames per second, that is 186,624,000 bytes per second. It was already stated above that a 3GHz 6502 would run at roughly 1.4 MIPS. How are you going to transfer 186 megabytes per second when you can only process 1.4 million instructions? At this point, the questions you're asking are making less and less sense, since it's already been explained that the 6502 architecture is decades out of date, and no amount of clock cycles will make it a practical processor for modern desktop computing demands.
  9. No, because the instruction set simply doesn't have the needed operations. As was mentioned above, there's no integer divide or multiply, let alone floating point math or the SSE instructions that operate on 4 integers at a time. Scripts are a maybe, but again, note the performance numbers Scott pulled out above. A 3GHz 6502 would be running at the equivalent speed of a 100MHz x86 - but without the math coprocessor, or even multiplication or division. If you think back to the 100MHz days - yes, you can absolutely run a web browser on a 100MHz computer, but it's going to be much slower than a modern PC, and anything involving fancy math (such as decompressing JPG and PNG graphics) is going to be slooooooooooooooooooow.
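To make the multiplication point concrete: with no MUL instruction, a 6502 has to run a shift-and-add loop in software, one pass per bit of the multiplier. Sketched here in C, this is the same algorithm a 6502 routine would grind through over hundreds of cycles:

```c
#include <stdint.h>

/* Shift-and-add multiply: the algorithm a 6502 must run in software
 * because it has no multiply instruction. On real hardware each loop
 * iteration costs dozens of cycles (shifts, branches, 8-bit adds
 * with carry); a modern x86 does the whole thing in one instruction. */
static uint32_t mul16(uint16_t a, uint16_t b)
{
    uint32_t product = 0;
    uint32_t addend  = a;

    while (b != 0) {
        if (b & 1)        /* low bit set: add the shifted multiplicand */
            product += addend;
        addend <<= 1;     /* ASL/ROL on the 6502 */
        b >>= 1;          /* LSR on the 6502 */
    }
    return product;
}
```

Division is worse still, and floating point on top of that is why math-heavy work like image decompression crawls on this architecture.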
  10. Yeah, those are like $15 for 5 units, and it's a complete, self-contained unit. So rolling a custom solution doesn't make a lot of sense, when modular solutions are cheap and easy to install.
  11. I've been pondering this myself. I learned BASIC programming on a VIC-20, and that served as a good foundation for learning C and C++ later. Using this as a lead-in to a programming class would be an interesting approach...
  12. A MAX232-based shifter with a pre-soldered DE9 is like $3 on Amazon...
  13. I'm not sure what you mean there... the Altairduino runs on an Arduino and requires a MAX232 chip in order to convert TTL to RS-232. Likewise, the Commander X16 will require level converters when connecting to basically any 32-bit microcontroller, since most are not 5v tolerant. The Raspberry Pi definitely is not.
  14. I have one of those. It's a different brand, but it's the same monitor. Mine looks fantastic, and since it works on USB C/Thunderbolt, it works great with just a single cable.
  15. That's actually a perfect example. I have two of those with WiFi radios that I purchased to work with my Altairduino… only to find that the level converter chips in the Altairduino don't work, due to a design defect on the adapter board. Since then, I have found that RunCPM is a lot more effective for running CP/M software, so the only thing the AD does now is blink for me. Anyway, at $23, this is more cost effective than a Raspberry Pi, and while you still need a level converter, a TTL to RS-232 level converter is actually easier to get and use than the 5V to 3.3V converter we'd need for a Pi. (I'm pretty sure the User port will be running at 5V.)
  16. Here's an example of what I think you meant: https://www.amazon.com/Ethernet-Converter-Adapter-Support-USR-TCP232-410S/dp/B07FM5WQKD This is an RS-232 to Ethernet Terminal Adapter. (These devices are not modems, nor do they pretend to be, though. For one, they do not have the Hayes-friendly ATD command. Instead, you have to issue a more complicated AT+ command string to make a TCP or Telnet connection.) What surprises me is how so many retro enthusiasts don't realize this is already a thing. These terminal adapters can already do everything mentioned on this thread, and all we need to do is provide software on the Commander side to interact with the TA. It's worth noting that TCPSer is actually impersonating a modem, rather than a terminal adapter, so it is not a multi-channel device and can't handle multiple TCP streams, nor can it do UDP at all (that I'm aware of.) The AT+ firmware on terminal adapters is a little different in that it CAN do those things... but it's not as simple as just "Connect to server and send data." You have to request to read data from the TA's buffer and request to send using AT+ commands. It's actually pretty straightforward to write something like this for the Raspberry Pi... I might go ahead and write something myself, since I've put the parallel interface stuff on the shelf for a bit.
  17. That would do it, I suspect. Personally, I'm just very disappointed that they didn't go with the 816 from the start. It only takes one extra chip to make it work as intended, and we wouldn't have the silly 4K banks. Instead, we'd have up to 16MB of RAM in 64K banks. I started designing a (very non-Commodore like) OS for the 816 back when Stefany got started with the Feonix. Building an OS to operate with multiple 64K banks and 16-bit reads and writes is much, much easier and faster than the way the CX16 was designed. I get why Michael and David went with something familiar, but the 65816 with a 24 bit address space would have been so much better, the difference is almost night and day.
  18. It's probably the opposite. The 816's stack and Direct Page can be moved. So it may be that they need to be initialized as part of the startup procedure.
  19. Yeah, I found it. I was looking for a post by @Kevin Williams, but someone else relayed the information. Bear in mind that it's $27/ea for 10 units. When they make 100 or 1000 units, the price per unit will go down dramatically. I did some of my own PCB pricing for a personal project, and I was surprised how quickly the price goes down as you increase the size of the production run. Honestly, I expect this to come in at a retail price of around $400 with a case and keyboard, which is less than I spent building a complete Ultimate 64 system.
  20. Also, what? Where is that? I think you may have been deceived, because I can't find an official announcement about hardware or pricing. OTOH, this board (and its predecessor) have been subject to trolls and spam, and it's possible someone posted with a fake account, specifically to rile people up. You may have fallen for a troll. If you're talking about this post, you're completely off base. Kevin intended to post here, too, but couldn't at the time due to a transient problem. As to the prices on the image... those are prototype boards and small scale pricing. Quite frankly, I wish Kevin had not posted that screen shot, precisely because it would lead people to incorrect conclusions.
  21. As far as creating new computers from scratch goes, this is breakneck speed. I've seen a few crowdfunded efforts, and this is going pretty well, all things considered. It's certainly going faster than the Mega 65 at a similar point in its development. It's been a bit over two years since David's announcement that he was going to build this computer... I honestly don't see how anyone can criticize the timeline, considering everything that's happened over the last year alone.

That has been mentioned in another thread, here. They will be running a beta test, but they are selecting testers behind the scenes. If I were a betting man, I'd say people who have actually written software for the system are likely to be at the front of the line for a beta unit. There are a few forum members here who have already written text editors, assemblers, a completely new machine monitor/debugger, and some games.

They all have jobs, and their day jobs take precedence. We've also just had a worldwide health scare that has put everything behind by months, if not years... so I'm not surprised at some delays. My concerns are actually on the software side, as we still don't have an official release of the latest emulator, and there's still a ways to go before that's complete. However, we can't expect one man to do it all, and there are plenty of tasks people could tackle and submit as pull requests to GitHub.

The forum is the official source for communication with the team. Other social media outlets are there for people to communicate with each other, but Perifractic (as the de facto front man for the team, at this point) has committed to announcing things here first and using this web site as the development hub for the system. And that has certainly been working: there has been a lot more technical and effective conversation here than on Facebook, which is a terrible way to organize information.

Hardly.
The current design is very much what David proposed in his manifesto, just over two years ago. It's a real 6502 CPU, a VGA quality display, and a couple of audio chips with FM and simple "beep boop" synthesis. From where I sit, the Commander X16 is exactly what David wrote about back in 2018 and 2019. The original post is here: https://www.the8bitguy.com/2576/what-is-my-dream-computer/ and the "part 2" where he announces he's going to build his own computer: https://www.the8bitguy.com/3543/my-dream-computer-part-2/
  22. I tried that once; it’s disappointing that you don’t actually get an Altair front panel. Also, IIRC, there’s no way to pull data off and get data on to the virtual Altair. So it’s kind of useless as a portable CP/M computer, except for playing with the included software.
  23. I wouldn't bother with a TheC64 at this point. If you really want the vintage experience on an emulated system, set up a Raspberry Pi/BMC system. You can buy a "for parts" breadbin or C style computer on eBay, some 3D printed mounting hardware, and a PCB that hooks up to the Pi's GPIO connector for the keyboard. That will give you a much better experience than TheC64.
  24. The most common way people will likely hook to the Internet is probably via a serial device hooked to the User port, with some sort of Hayes AT emulation. Of course, that will only work for one connection at a time, with either raw TCP or Telnet mode communication. For interfacing with multiple endpoints at the same time, or for UDP, I'd suggest looking at the Espressif AT command set. https://docs.espressif.com/projects/esp-at/en/latest/AT_Command_Set/TCP-IP_AT_Commands.html
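As a rough illustration of what the Commander-side software would have to emit, here's a sketch in C that builds two of the Espressif AT strings from the command set linked above (AT+CIPSTART and AT+CIPSEND). The hostname, port, and helper names are placeholders; see the ESP-AT docs for the full handshake (responses, the ">" prompt after AT+CIPSEND, and so on):

```c
#include <stdio.h>
#include <string.h>

/* Builds the ESP-AT command that opens a single TCP connection,
 * per the Espressif AT command set. */
static int at_cipstart(char *out, size_t outlen, const char *host, unsigned port)
{
    return snprintf(out, outlen, "AT+CIPSTART=\"TCP\",\"%s\",%u\r\n",
                    host, port);
}

/* Builds the ESP-AT command that announces how many raw payload
 * bytes will follow; the module answers with ">" and then accepts
 * the data itself. */
static int at_cipsend(char *out, size_t outlen, unsigned nbytes)
{
    return snprintf(out, outlen, "AT+CIPSEND=%u\r\n", nbytes);
}
```

The same pattern extends to UDP and to the multi-connection commands, which is exactly what a plain Hayes-style modem emulation can't do.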
  25. Not really. Modern AMD64 processors are much more clock-efficient than the 6502 was, and Intel CPUs have been superscalar since the 90s. A superscalar CPU can execute more than one instruction per clock cycle, which is about as fast as you can make a CPU at any given clock speed. If one were to assume MOS continued development of the 6502 and built a 32-bit and 64-bit chip, it would have ended up taking basically the same development path either Intel or ARM took: ARM trended toward RISC designs and low power CPUs (which is why virtually all cell phones use ARM processors), and Intel trended toward larger dies and more parallelism, which is why we have 20 execution units on a Core i9. In fact, going back to 1985 or so... the 6502 only appeared to run faster than its competition, because it cheats: the 6502 splits the clock internally into two phases (Phi 1 and Phi 2), and it further splits each of those phases in half, doing certain things on the front half and back half of the phase. While that worked at 1MHz and 2MHz clock speeds, I suspect it's unsustainable at higher speeds. You simply can't jam 3 or 4 T-states into a single clock tick and expect all of the disparate parts of a system to stay in sync at speeds of hundreds or thousands of megahertz.