Everything posted by Scott Robison

  1. Indeed. The proof is in the pudding. If it's easy, go do it and show us the better, more enlightened way. I know it is hard for some people to find time for such things when writing, derailing thread topics, and fighting to keep foreign governments from making us appear foolish take up so much otherwise productive time... It seems it would be a great investment for the world if one could take Ben Eater's "world's worst video card" and turn it into something comparable to VERA.
  2. When I first read that, my thought was "I've heard about fast 6502 cores realized in an FPGA before." Fortunately, I went on to read the article. That is not what I was expecting. Awesome!
  3. There are different FPGAs with differing capabilities. Not all are such low power, though mostly, yes, I think you would need level shifters to interact with a 5V bus. For communication between a physical CPU and the FPGA (or really, anything interacting with the FPGA), your HDL defines a number of externally exposed IO lines to serve whatever purpose you want. For example, I have a Nexys 4 DDR board that exposes 40 pins to the outside world (and more IO is assigned to other devices on the board itself, such as switches, 7-segment displays, LEDs, network, VGA, etc, etc, etc). Some FPGAs have a CPU sitting next to the FPGA fabric, or IP is available to embed a soft-core CPU into the fabric of the FPGA. Others just provide the FPGA, and a processor (if desired) has to be created from scratch or sourced from another project or offering.
  4. I posted recently on my FB: "I think Shatner going to space is a bad idea. The Klingons said there would be no peace as long as Kirk lives, and Shatner bears a striking resemblance. Just inviting trouble."
  5. I think this thread has jumped the shark. Perhaps it needs to be locked, too.
  6. Yep: replacing P+0 with P and %10001 with a variable defined earlier got the third loop down to 170 jiffies, or 0.36 ms per VPOKE/POKE. That's a 46% speed increase.
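     Something along these lines, to show why those two changes help (just a sketch of the idea, not the exact code I ran; it assumes the TI jiffy clock and uses %10001 as the ADDR_H value from earlier in the thread). This flavor of BASIC re-converts a numeric literal like %10001 from text every time it is encountered, while a variable assigned early in the program is just a quick table lookup, and P+0 forces a pointless addition:
     10 P=0 : V=%10001 : REM VARIABLE DEFINED EARLY SO LOOKUP IS FAST
     20 T=TI
     30 FOR I=1 TO 1000 : POKE $9F22,%10001 : VPOKE 1,P+0,0 : NEXT
     40 PRINT "LITERAL:";TI-T
     50 T=TI
     60 FOR I=1 TO 1000 : POKE $9F22,V : VPOKE 1,P,0 : NEXT
     70 PRINT "VARIABLE:";TI-T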
  7. So, using VPOKE 8 times in a loop, each statement averages 0.67 ms per VPOKE (plus extra time for any calculations, since I'm just poking values of 0). Using one VPOKE and 7 POKE statements in a loop, the average drops to 0.42 ms per VPOKE/POKE: a 37% speed increase.
  8. Here is a better test case that is more apples to apples, based on Ender's comment. So: 322 jiffies for pure VPOKE, 590 jiffies for pure POKE, and 218 for the hybrid approach that should autoincrement. Replace %10001 with a variable to make it even faster, probably. Edit: Oh, I put one too many POKE commands in the final loop. That is just a typo. Remove one of them to get the exact 8 VRAM pokes you wanted. That gets the time down to only 199 jiffies.
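     If anyone wants to reproduce those numbers, the harness can be as simple as reading the jiffy clock around each loop. A sketch of the pure-VPOKE case (not the exact listing; it assumes the TI jiffy variable and 1000 passes of 8 writes, which is roughly what the averages in the previous post imply, and the pure-POKE and hybrid loops get timed the same way):
     100 P=0 : REM BASE VRAM ADDRESS, BANK 1 TO MATCH %10001
     110 T=TI : REM JIFFY CLOCK BEFORE THE LOOP
     120 FOR I=1 TO 1000
     130 VPOKE 1,P,0 : VPOKE 1,P+1,0 : VPOKE 1,P+2,0 : VPOKE 1,P+3,0
     140 VPOKE 1,P+4,0 : VPOKE 1,P+5,0 : VPOKE 1,P+6,0 : VPOKE 1,P+7,0
     150 NEXT
     160 PRINT "PURE VPOKE JIFFIES:";TI-T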
  9. Ah, good to know, thanks. In my testing just now, my version is definitely slower, because the extra pokes do math on P that isn't necessary in the VPOKE version, so don't use it even though it is "technically correct" but "slow". And I realized that if P is greater than 32767, AND is going to fail with it, since AND only works with valid signed integers. Edited: removed screen shot in favor of the next message; that's where the test code is, and the numbers under RUN are the number of jiffies to execute the pure VPOKE and the number of jiffies to execute the pure POKE.
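     A quick illustration of that AND limitation (standard Commodore BASIC behavior: the operands of AND/OR get converted to 16-bit signed integers, so anything above 32767 throws an error):
     10 P=30000 : PRINT P AND 255 : REM FINE, PRINTS 48
     20 P=40000 : PRINT P AND 255 : REM ?ILLEGAL QUANTITY ERROR, P EXCEEDS 32767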
  10. VPOKE will set an address (I don't think it's documented as to whether it will set addr 0 or 1), but I don't think it sets the increment. Regardless, VPOKE will always reset the address every time it is used. By avoiding VPOKE completely you can set the address, pick whether it is 0 or 1, and target the port. Whether it is a net improvement will depend on how much overhead is involved in the multiple POKE statements and how many bytes you write with auto incrementing addresses. I feel confident that my solution is technically correct (though I didn't try running it), but it might not be more efficient than a series of VPOKE statements. That would probably be a good test. Run the 8 VPOKE based version 1000 or so times and time it, then run the 12 POKE only version 1000 or so times and time it and see if one is definitively better than the other. Now I'm curious. Testing.
  11. From my time in radio: "programmer" or "programming" has very different meanings in that context. Those of us with software experience think of it as meaning one thing when it actually has broader implications. Deciding upon the sequence of songs to be played is programming. Configuring a VCR to record shows at a given time is programming. And writing HDL descriptions is programming, in that the HDL has rules and syntax for a sequence of keywords, which is subsequently transformed into a bitstream with its own sequential format that is loaded into the FPGA, just as the programming of a ROM determines a set of values returned in a sequential manner based on address pins. Which is to say, you are correct. It is not programming in the limited sense many people think of, but it is surely a form of programming, just as the other examples are as well. In fact, one can (if one wants) consider Word or Excel or {insert example program here} an interpreter that allows one to write programs in a domain-specific language using a rigid IDE. An expert with those (or comparable) tools can do incredible things. And old-timer machine language programmers (those who coded in hex, not in assembly) can look down their noses at those of us who only know high-level or assembly language, because we're not doing it the right way either.
  12. If you didn't read the linked article, you really should. It's amusing.
  13. https://newsthump.com/2021/10/12/blue-origin-crew-concerned-by-new-uniforms-ahead-of-shatner-space-flight/?utm_campaign=shareaholic&utm_medium=facebook&utm_source=socialnetwork&fbclid=IwAR21TTPhN9rK7WWOXFdA6kdIkTDptgBABCeWCvpLg5BzSSqq1OfeEdu0XHA
  14. Exactly. Really, a CPU is not a computer either. It is but one component of a computer. A C=64 with a pristine 6510 but without a VIC II is a paperweight. Or without its pair of 6526s, or the right number of RAM chips. I mean, they are fancy paperweights that do various forms of blinky lights or sounds or whatever depending on what is missing, but the computer is a complete collection of parts and the interconnections between a number of components. I agree with the philosophy that it is nice to have a discrete-component-based system for a certain set of reasons. In like fashion, the MOnSter 6502 is a nice recreation of a MOS 6502 made from all "discrete" components. It would also be cool to see a 6502 made from all vacuum tubes. But some of these are impractical, even if they are cool and have other positive attributes. The beauty of an FPGA system is that, for a sufficiently large FPGA, it is possible to recreate all the individual components that go into a computer in a single chip. There is still more to do, of course. You have to get input from the outside world into the FPGA through IO pins, and get output to the outside world through other IO pins. That has certain benefits and certain detriments, just like the other things listed above. And that is the entire world of engineering: understanding what is possible, weighing the pros and the cons, and coming up with the right solution for a given problem. If one wants to create a video system out of all discrete components because it "simply requires optimization", then I say, go create it! I was reading just yesterday about the C74 project, whose goal is to create an entire C64 out of 7400-series logic chips. Someone is "working" on the VIC II portion of that. I hope they succeed; it will be a sight to behold, I'm sure. But it isn't "practical" for an intended mass-produced system in this day and age. And that's okay, not everything has to be practical. But if you are hoping to build a computer that can be used by people, practical is a really good thing to strive for.
  15. That's a difficult question to answer exactly, but from what I can find via Google searches, you're looking at an up front cost of multiple tens of thousands of dollars, then a volume of at least 10k per year to bring the costs down to the sub $2 range per IC. It's not impossible but not practical for the expected scale of something like the X16. That all assumes some of the least expensive processes would be usable.
  16. With that solution, I am assuming P is the complete VRAM address from $00000 to $1FFFF. The second line doesn't have to do the "OR INT(P/65536)" part if the address will always be $FFFF or smaller.
  17. Given a base address P, I think this is what you want to replace the VPOKE with:
     POKE $9F25, PEEK($9F25) AND 254 : REM SELECT ADDR 0
     POKE $9F22, %00010000 OR INT(P / 65536) : REM INCR 1 AND VRAM ADDR BIT 16
     POKE $9F21, INT(P / 256) AND 255 : REM VRAM ADDR BITS 15:8
     POKE $9F20, P AND 255 : REM VRAM ADDR BITS 7:0
     REM NOW THE ADDR IS SETUP SO DO EVERYTHING AFTER VPOKE LINE...
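     To make the "everything after" part concrete, the follow-on writes could look something like this (just a sketch, assuming you want the same 8 sequential byte writes discussed elsewhere in the thread; the zeros are only placeholder values):
     POKE $9F23,0 : POKE $9F23,0 : POKE $9F23,0 : POKE $9F23,0 : REM DATA PORT 0
     POKE $9F23,0 : POKE $9F23,0 : POKE $9F23,0 : POKE $9F23,0 : REM ADDR AUTO-INCREMENTS AFTER EACH WRITE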
  18. I haven't looked at reference material to ensure all the numbers are exactly correct, but that seems reasonable in theory.
  19. I agree you've been deliberate, and my comments were not directed at any particular person; they were just a general observation. Sorry if they came across differently; that was not my intent.
  20. I'm not an FPGA expert, but you're on the right track. However, there are different physical limits on a PCB with discrete components than there are in an FPGA. When creating the 6502, there are more or less two types of uses for transistors in the CPU. One is as gates; the other is as so-called "random logic". If we were to compare them to computer programming, gate-based transistor logic is like structured programming. It is relatively easy to understand, when you look at certain combinations of transistors, that they create various types of gates: NOT (inverter), AND, OR, XOR, NAND, NOR, XNOR. Other combinations, though, do not correspond to standardized gates, and those are the "random logic". It isn't truly "random" as in "random numbers"; it just isn't as structured as the gate-based transistors. You might call it the "spaghetti code" of the integrated circuit world. You can do the same things (I think, generally speaking) with gates, but it might not be as efficient. It might require more transistors and take more time to process with gate-based transistors vs random logic. FPGA stands for field programmable GATE array. So that is one difference between what one might get when implementing a 6502 in an FPGA vs in silicon, as the FPGA primarily provides for gate-based design work. Extending this beyond the CPU to a video subsystem: given that a hardware description language implementation of VERA exists, it could be implemented in discrete parts, but now you have to consider timing. With all the functionality in gates in the FPGA, which takes up millimeters of space, signals can propagate between gates orders of magnitude more quickly than they could at the scale of a circuit board with traces between chips. Just the fact that they are farther apart means things take longer. Then you also have more considerations of noise. More parts means more things to go wrong and more time to troubleshoot. In the end, the FPGA is the most cost-effective way to create this system. I love the idea of a discrete-parts board with a separate CPU and IO chips and all the good stuff, but not everyone is going to love that as much as just having something that works.
  21. I wouldn't send up a 90 year old, but if you want press, I guess you invite a big celebrity to go up on your rocket.
  22. When people focus on what David put in his first video, such as "no FPGA", and claim this is not in keeping with his dream, they seem to overlook the price aspect of that initial dream. We've all subsequently learned (for those who didn't know it previously) that some of those goals are mutually exclusive. You can either spend a lot more time, money, and other resources trying to build a discrete video subsystem that has all the bells and whistles that were desired (which completely destroys the cost), or you go FPGA (which is much more affordable), or you just throw up your hands and say "can't be done, oh well". David showed the X8 as being possible at an affordable price point, but ultimately stated on FB that it wasn't enough like the X16, so they're looking at something similar that will still get close to his originally desired price point. A big part of the motivation for this was "how can people get into retro computing at an affordable price". Here is a question for the community at large: what is better for the X16 ecosystem, a kit that only select people will be able to assemble, or a relatively inexpensive FPGA based solution that many more people can afford and that won't involve assembly? While the former would be awesome to have (and I plan to buy it when it becomes available), there is far more potential buy-in with the latter (which I also plan to buy when it is available). I think there is too much "what do I personally want" and not enough "what is best for adoption so there can be a vibrant community". There are still C64 games being sold today thanks to the critical mass of adoption, and not nearly as much software, if any, being released for the KIM. If one simply wants a retro computer to use on their own and isn't worried about a community, and a kit scratches that itch, fine. But if you want to be able to find software to run on it that you didn't write yourself, you're going to get a lot more options if the hardware is more accessible to a wider audience. In that respect, the kit is not unlike a KIM (inaccessible to all but the most hardcore fans) and the complete FPGA solution is the C64 (much more accessible and interesting to a much larger universe of potential users).