Scott Robison

Everything posted by Scott Robison

  1. "If a ram is a male sheep, is a rom a female sheep?"
  2. The process of reading a particular file in a FAT file system:

     1. Find the master boot record to identify the partition table type. Is it truly an MBR or is it really a GPT? These have different ways of identifying partitions, so a robust system has to handle both, though I suspect we only care about the "true MBR" case and not the GPT.
     2. Is there more than one partition of a compatible type on the media? Pick one.
     3. What is the size of the partition? This is the "official" way to identify whether it is FAT12/FAT16/FAT32.
     4. Get the boot sector of the partition. Use the values in it to find the FAT, the root directory, and the cluster size.
     5. Use a directory to find a file of interest, which will tell us the size of the file and the location of its first cluster.
     6. Read the cluster.
     7. Use the FAT to find the next cluster.
     8. If not end of file, go to step 6.

Most of the items have variables in them that you do not know how to process until after you've read them, and this is just the high-level overview. Can an FPGA be tasked with handling this? Sure. It doesn't require a full-blown general purpose CPU, but it is certainly far more than just one or even a few logic gates. Could it be possible to implement this so that the CPU sends the commands and tells the FPGA the eventual destination of the next X bytes in advance, so that it doesn't have to handle the final delivery when VRAM is the destination? Yes, and that would be less complicated than a co-processor (where a co-processor is less than a full CPU but more than a few logic gates). There likely is not enough space left in the FPGA for even that, though I do not know, and it would have difficulty dealing with error conditions.

Then of course we have the question "what if the bytes in the file must be processed in some way before spewing them into VRAM?" Many, if not most, file formats include at minimum a header of some sort to identify the contents of the file. Compression is often used to make files smaller. Some define a program in some virtual machine. For anything more complex than "a raw sequence of bytes already in the format you want for VRAM" you'd have to have the CPU read the data so that some processing can be done (decompression for PNG files, for example, running a VM over TrueType font byte codes, or processing a header to know where to seek so that substreams of data can be handled correctly). All these things require logic.

You are correct about what is theoretically possible, though I think your estimates of how many resources would be required are on the low side given the flexibility built into the FAT32 format. One can mandate that many of the variables be set to specific constant values to limit the complexity for particular use cases, but in my experience you can never get rid of it completely. General purpose systems provide flexibility, though they cannot offer an optimal solution to every problem.
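To make the cluster-chain part of the loop above (steps 6 through 8) concrete, here is a minimal C++ sketch, assuming a FAT16-style table already loaded into memory. The names and the simplifications are mine, not from any shipping implementation; real code must also handle free and bad cluster markers and derive everything from the boot sector.

```cpp
#include <cstdint>
#include <vector>

// FAT16 entries at or above this value mark the end of a chain.
constexpr uint16_t FAT16_EOC = 0xFFF8;

// Follow the chain of clusters belonging to one file: each FAT entry,
// indexed by cluster number, names the file's next cluster.
std::vector<uint16_t> walk_chain(const std::vector<uint16_t>& fat,
                                 uint16_t first_cluster) {
    std::vector<uint16_t> chain;
    for (uint16_t c = first_cluster; c < FAT16_EOC; c = fat[c])
        chain.push_back(c);  // this cluster holds file data; read it next
    return chain;
}
```

Even this toy version has to keep the whole FAT (or at least the sectors of it along the chain) available, which is part of why "just a few gates" doesn't cover it.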
  3. One, just to be clear ... I am not one of the designers. I have no say; I am just expressing thoughts too. Two, I'd love to see something like that myself, but it couldn't be done with the existing FPGA because it doesn't have the space or connections at present that would be required. The one thing the Commander X16 has going in its favor is a higher clock rate, but of course it will still take multiple clock cycles per byte to copy from a RAM bank. There are lots of things that would be cool to see; it's just a matter of money at this point.
  4. The problem is that the FPGA doesn't have direct access to the 64K primary address space or the 2M banked memory. Everything that goes on in the FPGA happens in a 32 byte window in the IO area. Thus it wouldn't be able to do stash / fetch / verify.
  5. It's not just a matter of sending a byte from one channel to another. There is a file system that has a definite structure, and the CPU is what processes that structure: looking up the file allocation table, enumerating directories, opening files, seeking to offsets, reading clusters, copying the data around. That's not functionality that comes easily to an FPGA without a programmable interface. The 6502 is one of the smaller mainstream CPUs, and it takes by one estimate about 700 LUTs (lookup tables) to implement (https://electronics.stackexchange.com/questions/400504/how-many-luts-are-needed-to-implement-a-cpu). There is a lot of logic that goes into processing a file system. I've worked for a couple of companies over the years that dealt with file system processing, and I've written a complete user-mode implementation of NTFS and FAT32. In order for the FPGA to decide where to send a byte, it has to have an implementation that can process all the details of the filesystem. It can be done, but it is far more complicated than just "send the byte to channel X". Edit: I wrote complete read-only implementations of those file systems to read data from a cold backup image that was stored as a collection of clusters. I've not written a writable interface, but since we're talking about reading from a file and writing to memory, I have direct experience with that. I even have a patent on identifying what files changed in an incremental image-based backup. I don't mean that to brag, just to establish credentials.
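As a small taste of that structure, here is an illustrative C++ sketch (my own, for this post, not from any shipping code) of decoding the fixed fields of a 32-byte FAT short-name directory entry. The offsets follow the on-disk layout; a little-endian host is assumed for the memcpy-based reads, and long-file-name entries are ignored entirely.

```cpp
#include <array>
#include <cstdint>
#include <cstring>
#include <string>

// The handful of fields a reader needs from one 32-byte directory entry.
struct DirEntry {
    std::string name;       // 8.3 name, space padded on disk
    uint32_t first_cluster; // where the file's cluster chain begins
    uint32_t size;          // file size in bytes
};

DirEntry parse_entry(const std::array<uint8_t, 32>& raw) {
    DirEntry e;
    e.name.assign(raw.begin(), raw.begin() + 11);  // bytes 0-10: name+ext
    uint16_t hi, lo;
    std::memcpy(&hi, raw.data() + 20, 2);  // bytes 20-21: cluster high word (FAT32)
    std::memcpy(&lo, raw.data() + 26, 2);  // bytes 26-27: cluster low word
    e.first_cluster = (uint32_t(hi) << 16) | lo;
    std::memcpy(&e.size, raw.data() + 28, 4);  // bytes 28-31: file size
    return e;
}
```

And this is just one record type; multiply by boot sectors, FAT variants, long names, and error cases to get a feel for the total logic involved.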
  6. Yes, it is physically possible to build the functionality you would prefer directly into the FPGA. It would mean either getting a larger FPGA, which would cost more money, or decreasing the amount of video RAM or other functionality (such as audio). There ain't no such thing as a free lunch: something has to pay for the increased functionality, which does not happen magically. The goal of the platform is to provide an 8-bit system in the style of what was used 40 years ago (and suddenly I feel old). Modern support chips are, in the opinion of some, a step too far, but given the lack of modern production of equivalent chips for video, audio, and so on, it is a necessary compromise without which there would be no real possibility of a Commander X16. Forcing the 65C02 to be involved in processing means that the process is less efficient than it could ideally be. That is the compromise that was selected for this design; every design has compromises. Regardless of what we might like to see, it (almost definitely) isn't going to change at this point. I qualify it thusly because I don't make any decisions and acknowledge that it is a statement of opinion, but it is an informed statement of opinion.
  7. What's more, doing a ROM update should probably be done on a UPS to avoid the scenario of a half updated ROM (particularly in the case of trying to update system ROM banks) in the event of a power interruption.
  8. Back in the late 80s I think it was, there was this company that got a bit of press for developing what they called "Web Compression". This is before there was a world wide web. Anyway, it had the "interesting" property that it had 16 to 1 lossless compression on any data exceeding 64K. So you could take 1G of data and compress it down to 64M. Then you could feed that in and get it down to just 4M! Then 256K! Then one more time to exactly 16K! I think if they found a way that data could be compressed iteratively until they got it under 64K, that algorithm could do just about anything, up to and including running x86-64 builds of an operating system on a 65C02.
  9. A robust pair of "KERNAL" functions: one in the formal kernal that allows writing to any bank other than the kernal itself, and another that allows overwriting the kernal, probably running from another bank, I would imagine, to avoid the problem of the kernal changing in the middle of the update.
  10. It is an interesting idea if we could come to some level of agreement as to what the various shapes / colors / non-textual cues meant. We have a problem with the evolution of language already. Look at how people are beginning to object to the terms "master / slave" when used in a technological context. The words have legitimate meaning, yet culturally we evolve language to mean more or less than it did previously. We change the pronunciation of words. An excellent example is how Americans used to pronounce "DATA" most typically as "dah-tuh" before the late 1980s, but we've shifted to "day-tuh" since then. Some credit Patrick Stewart's British accent as driving that over seven seasons of Star Trek: The Next Generation. Other examples are harass (is it "har-ass" or "hair-iss"?) or err ("air" or "urr"?).

Extending that to shapes, colors, and iconography, look at the typical "save" icon: a 3.5" diskette. Mainstream computers started abandoning the diskette circa 1998, yet we still have the icon to this day, and a generation of computer users are likely as unknowledgeable about its significance as they are about a rotary dial phone.

Written language has, as you've said, the ability to include background information through exposition, parentheticals, asides, and so on. A good text editor can take source code comments and squash them out of the way so that you can view the code without the "extraneous" noise, but then you can click on something to expand it when it is useful. As for ways to "embellish" programs, I think comments are the "best" (for some sufficiently fuzzy value of "best") way we have to augment the significance of the associated code. If we had smarter tools that could extrapolate common idioms into automatic comments, I could see something potentially useful there, but it seems like a Very Hard Problem(TM) to solve.

C++11 and later have "constexpr" expressions. I don't necessarily love the keyword syntax, but the idea is that they are more constant than a "const" (which isn't really always constant, but is often simply used as a synonym for immutable). A valid constexpr function can be used as an initializer of a value or an array dimension, or in other similar "real constant" contexts. Why not have a compiler / environment that, in addition to providing compile-time evaluation of functions to constant values, somehow also did compile- and / or link-time profiling-style analysis? Something that didn't require you to actually run the code but still provided "hot spot" identification of the generated code. That is also a Very Hard Problem(TM), but I think less so than AI-based translation of code into automatic comments, as it were.
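For anyone who hasn't run into them, a minimal illustration of the "more constant than const" point (standard C++11, nothing X16-specific):

```cpp
// A constexpr function can be evaluated by the compiler, so its result
// can appear in "real constant" contexts like an array dimension.
constexpr int fib(int n) {
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

int buffer[fib(10)];  // legal: fib(10) is a compile-time constant (55)

static_assert(fib(10) == 55, "evaluated at compile time, not at runtime");
```

A plain `const int` initialized from a runtime expression could not be used to size `buffer` like that, which is exactly the distinction constexpr was added to make.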
  11. I don't think anyone has created a generic utility that attempts that yet, though maybe one was used in creating this demo.
  12. I've read some posts that indicated that the existing functionality pretty much maxes out the current FPGA when someone suggested more video RAM, and that the next size up would increase the cost. The reality is that there is always something more that could be done with an FPGA if you just had more logic blocks or more IO pins or more interconnects or whatever, but those increase the cost, and this is already costing more than I believe was originally desired (though I believe the plan is that they can likely bring the price down in the future).
  13. That's a good point. Twas just an idea.
  14. I am not an expert, but ... Yes, the SPI interface is on VERA. But it is a relatively low level interface. It probably doesn't natively understand the format of the data on the SDCARD, just that there is a collection of bits and bytes. It is probably necessary for the CPU to control the SPI IO ports, just as it controls the video and audio IO ports, then interpret the data to decide what commands to issue next. After receiving the data the software running on the CPU can decide what to do with that data (copy it to main system memory, a RAM bank, or back to VERA on another IO port). FPGA flexibility means that you can have one "chip" on the FPGA that does a certain level of work and communicates it over selected IO pins. In the case of SPI, it would not surprise me if the routing of the signals for SPI goes directly from the SPI hardware that connects to the physical SDCARD, then is minimally routed to specific pins that the CPU can access through a given address. Again ... I am not an expert, I'm just theorizing as one who has (literally only) played with FPGA and has not seen any of the HDL that is used to create the bitstream for the FPGA. I could be completely wrong. Mainly I just wanted to opine to see how close my theory is if an answer is ever provided.
  15. Speaking of engineering, I feel like scrum has been a plague in many ways. I'm not opposed to agile, and I agree with the manifesto. It is just what some companies have done to agile under the name "scrum" that really bothers me. Agile is supposed to do away with certain things that scrum seems to double down on. There is far too much "no need to think about the problem because we'll just throw it away later; we only need to do the minimum work to achieve two-week sprint goals". The idea that code will be thrown away becomes a self-fulfilling prophecy. This is not to say that every detail of "Formal Scrum(TM)(Patent Pending)" is bad, but IMO they are just trying to replace one set of often bad practices ("Formal Waterfall(TM)(Patent Pending)") with another.
  16. In all seriousness, the C128 had an extended SYS command: SYS address,a,x,y,s, where a, x, y, and s were optional and, if given, were used as values for the corresponding registers. I would think a better solution for X16 BASIC is to just extend the SYS command to be "SYS addr,bank,a,x,y,s" where bank can be ROM or RAM based on the address provided. That way it allows setting of register values and SYSing into either ROM or RAM banks depending on the context, just like JSRFAR does.
  17. I think "BOB" would be a good command for a far sys... Or not...
  18. While my BASIC-like peer project isn't going to be an assembler per se, I was planning to allow some sort of "inline assembly" first class feature so that those portions that would benefit could use it.
  19. I've looked at all those resources recently as I've been researching for my own BASIC successor, but I am in no way trying to go as far as your suggestions with it. I want to see a more expressive language that doesn't have as much interpretation overhead as BASIC, but I'm not trying to get to zero overhead. Assembly has its place, as do C and other compiled languages. I just want to see something that can make for a friendlier / more structured experience, where portions are "compiled" or tokenized in advance. Let's take the simple example of numbers in interpreted BASIC, which are represented as an array of PETSCII digits that have to be converted each and every time the line is run; there should be a tokenized format that preprocesses the digit sequences into binary numbers with no runtime overhead. Likewise, long variable names that are more distinct than two-character variables could be stored in a compact format so that the runtime doesn't have to search for the long names while running.

Anyway, I have given thought before to doing an XML- or JSON-based "language" that represents all the usual constructs in a normalized manner, but with a rigid definition so it could be easily translated into "any language". But even that isn't necessarily visual. The saying is that a picture is worth a thousand words, and I do consider good programs works of art, but there is an expressiveness that has to be understood by both the human and the computer. I may just not have sufficient imagination, but text-based languages are the medium that provides just the right mix of concrete and abstract to let us work effectively with our tools. I can see certain types of visual tools working well for narrow types of tasks on sufficiently fast processors with enough memory and bandwidth. I don't see how it could work well in a retro environment, though, unless we want to say "you have to give up actually developing on the machine".

I do agree that there is not enough consideration given by many developers today to how their code impacts the hardware. They've been taught to rely on garbage-collected languages with big heavy libraries because programmers are bad at managing things like memory (though they never seem to think about who used what language to implement their preferred language). Anyway, best of luck. I look forward to more fleshing out of ideas with some sort of prototype so that I can better understand what you're suggesting.
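As a rough illustration of the pre-tokenizing idea, here is a toy C++ sketch of my own (not the actual project): digit runs are converted to binary once, at tokenize time, so the interpreter never re-parses them on each pass over the line.

```cpp
#include <cctype>
#include <cstdint>
#include <string>
#include <variant>
#include <vector>

// A token is either a number already converted to binary, or a one-character
// symbol kept as text. A real tokenizer would have many more token kinds.
using Token = std::variant<int32_t, std::string>;

std::vector<Token> tokenize(const std::string& line) {
    std::vector<Token> out;
    for (size_t i = 0; i < line.size();) {
        if (std::isdigit(static_cast<unsigned char>(line[i]))) {
            int32_t v = 0;  // convert the whole digit run exactly once
            while (i < line.size() &&
                   std::isdigit(static_cast<unsigned char>(line[i])))
                v = v * 10 + (line[i++] - '0');
            out.emplace_back(v);
        } else {
            out.emplace_back(std::string(1, line[i++]));
        }
    }
    return out;
}
```

At runtime the interpreter just reads the stored binary value, which is the whole point: the PETSCII-to-number conversion cost is paid once instead of every time the line executes.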
  20. Or a better dev environment than BASIC. Not unlike what one would have access via a cartridge on the C64 with Simon's BASIC (and many others ... https://www.c64-wiki.com/wiki/Cartridge#Overview)
  21. I appreciate that response and the reasons for it, but given the intended audience of these computers, it seems they are going to be far more likely to want to customize their ROMs not unlike the addition of JiffyDOS back in the day. It would be nice if BASIC just had a simple extended SYS command that performed a JSRFAR to a particular address in a given ROM bank, at least. I guess the nice thing about the ability to write to the ROM will be the ability to patch BASIC at the same time one adds a program to an otherwise empty ROM bank if such a feature is never added. Actually, it could be "fun" to create a completely custom ROM if one wanted to play with the hardware without needing any backward compatibility, necessarily (though that's an extreme fringe case). A project I've been wanting to do (who knows if the day job will allow the time to complete it) fairly begs to be used instead of BASIC which would be best enabled by updating the ROM. In that scenario, I imagine adding a new language "ROM" bank and patching the Kernal to hand off control to it instead of BASIC (after sufficient development and testing to provide confidence that it works of course). Just spitballing along the lines I think Stefan is describing. By putting something in the ROM, you open up potentially 16 K of RAM for data, which is not insignificant on a 16 bit address space, plus saving time of re-loading. Not that re-loading will be as significant as it was on a 1541...
  22. I sense an aftermarket opportunity to reflash bricked ROMs.
  23. Thanks for taking the time to put it up in PDF. I've long contemplated visual programming but have never come up with anything I think is "good enough". My wife works at a middle school that has a "creative coding" class (I think it's called) that is just very simple introduction that uses a system they call "code blocks". It gives them a list of the javascript primitives and allows them to drag and drop them into a "function" pane, reorder with a mouse, and edit parameters a specific block might take (like loop or if conditions). I like the idea of decoupling the code from a "rigid" text format that allows the students to get a feeling for things before expecting them to get syntax correct and so on. Based on my quick perusal of the PDF file, I think the problem with it is the amount of effort that has to go into the tool by experts to create something usable by the novice, and people potentially being put off by terminology like "intent oriented programming" or some such. I've been thinking through a "modern BASIC" for the X16. My hope is to come up with something that could be in the ROM and available at powerup without having to load it from a file. Something that would "abstract" the address space of the X16 into a more linear looking thing rather than banked RAM / ROM + conventional memory. Something that could support cooperative multithreading, multiple "processes", fully relocatable tokenized code, and "virtualized" access to the hardware (particularly the display). Something that has better integrated debugging capabilities. I don't know what will eventually come of it, as it is something I am restricted to working on in my spare time, but it would provide more functionality than BASIC v2 and provide an environment that could support multiple programs running at once. No, it won't be a speed demon, but it's intended for those who do not want to have to do everything in assembly. As for the question posed about commercial BASIC software ... 
yeah, a lot. https://en.wikipedia.org/wiki/QuickBASIC was used in commercial software development. A former employer, Clark Development Company, released multiple versions of PCBoard BBS software developed using QuickBASIC before eventually migrating to C and assembly language somewhere around version 14 or 14.5 (or maybe 14.5a ... it was over 25 years ago now and my brain isn't as nimble as it once was). https://softwareengineering.stackexchange.com/questions/149457/was-classical-basic-ever-used-for-commercial-software-development-and-if-so-ho answers the same question on a broader basis. Now, a big part of the problem with answering this question is what qualifies as "BASIC"? Do you mean strictly line numbered BASIC running a slow interpreter? If so, then less software (though not zero). I think the development language is independent of the eventual delivered program, though. QuickBASIC allowed compiling to DOS EXE so that one didn't have to own the purely interpreted environment. Also, as we've seen much over the last 20 to 30 years, a slow interpreter is not necessarily a stumbling block to commercial software. A significant percentage of the web exists due to strictly interpreted languages (though more powerful than BASIC) such as PHP and Javascript. Anyway, I don't mean to poo poo the ideas. Just sharing my thoughts.
  24. One step further would be to export as PDF ... this will allow one to keep the formatting in what is arguably a more open format.
  25. I would also suggest sharing it in a way other than PPTX ... not everyone has or even wants to use Microsoft Office / PowerPoint.