Everything posted by Scott Robison

  1. Right, I remember dumb terminals. My first programming class in college was FORTRAN 77, which was hosted on one campus mainframe that we accessed via 3270-style terminals. Later I worked for a company that put terminal emulation software in first responder vehicles so they could access the same systems they'd access from the office. I'm sure by now it's been replaced with web-based terminal emulation, though given how slow government can be at times to change, who knows. It has been probably 18 years since I last had a clue about what that company was doing.
  2. It's good that you have an actual use for it, and nothing wrong with dumb terminals. In my case it was just a solution in search of a problem.
  3. I was thinking of something similar a couple days ago. Except I'm not a hardware guy typically. So I bought the Ben Eater kits and just starting putting them together tonight to try to get a handle on some of this kind of stuff. Not the exact split, but the separate CPUs with their own RAM and communicating through VIAs. Good luck!
  4. SDHC hosts can still read SDSC cards. They are electrically compatible. So <= 2 GB is still possible if you happen to have an old one lying around that you wanted to use. I wonder if you can still buy them, actually? {time passes at Amazon} I see a couple of 2 GB cards still hanging around on Amazon. https://www.hugdiy.com/micro-sd-1mb-p-249 is purportedly a 1 MB card, which would clearly not be adequate for FAT32, but at least someone claims to be selling pre-SDHC cards at this point. I only found that one indirectly through Google, though their website also claims to still offer 128 MB SDSC. I think older cards would work, but there is certainly no need to buy them anymore.
  5. It could be, but I'll guess that it just uses the "individual file" approach. That's probably the very reason why their library is written that way to take advantage of "only link the object files used". More than anything, I wanted to point out that not all toolchains are exactly alike.
  6. An idea I had a few months ago: use both tile layers as text, both in 256 color mode. The background tile layer would use all reverse-space characters, each with its own color from the 256 color palette. The foreground tile layer would have whatever characters you wanted to show, with their own separate colors. In this way you get an "apparent" 256 color background and foreground mode (which uses twice as much video memory). You could then "blink" the cursor by just swapping the color attributes between the background and foreground tile layers. It is effectively like the double PETSCII demos, but limiting oneself to a single character of any color in the background.
  7. That's a reasonable design for almost every platform (aka machine and toolchain). It really depends on just how advanced the toolchain tries to be. By putting every function in its own source file, and including those in a library, the linker can include just the object files that contain needed functions. But there is no hard and fast rule that a linker should exclude unused functions (though I think most do). It could just say "you want stdio? I'm pulling everything in!" Some toolchains are smart enough to write every individual function / declaration as a separately linkable entity, so it doesn't matter if you put everything in one file; they can still optimize the final file size by excluding unused pieces. Since you're talking about cc65 and the X16, your plan is a sound one.
  8. Exactly! I had a thought some time ago to create an IBM PC reimagined as a Commodore-style system. One thought I had was to treat disk sectors in a way similar to CBM DOS. As already written, CBM DOS uses the first two bytes of a block to encode the next track and sector. If it is the last sector, the same two bytes indicate how many bytes are valid in that last block. This works well for a system where blocks are 256 bytes each, but not as well for a format that has 512 byte blocks (or larger). In fact, this is why the 1581 had physical 512 byte blocks but split them into virtual 256 byte blocks to keep DOS backward compatible. That could be done as well, but I have an alternative. My thought for a pseudo-CBM PC machine (ignoring that CBM did release MS-DOS based systems later) would be to use the first two bytes of a block as the next logical block number of the file, stored as a positive signed 16 bit integer. In the last block, the value would be negative, and its absolute value would indicate how many bytes were used in that block. This would allow the block size to increase over time while still being compatible with blocks up to 32 KB in size.
  9. Yes, it can. My (perhaps poorly phrased) point was that the difference is not as big as you might think it would be otherwise. If you write a program that only uses printf, and one that only uses cprintf, and another that uses both, you'd see that the incremental size for both includes overlap when functionality is shared (like a common formatting function that both can use).
  10. A lot of it depends on the platform you're targeting. stdio.h is one of the language-mandated standard headers, so what it should do is well defined. conio.h is a "quasi standard": many platforms have it, but it isn't mandated by the bodies that say "this is what you must have in a C environment", so it will vary a lot more based on the platform. It's been a while since I used conio on DOS, but I seem to recall it *did* scroll the screen when it wrapped around the bottom line. Because it isn't standardized, though, there is no requirement that be the case. Even the processing of CR & LF is not mandated by the standard. Systems based on POSIX treat LF as an end-of-line character that both advances to the next line (what LF is defined to do in ASCII) and returns to the beginning of the line (what CR is defined to do). This provides more "flexibility" on DOS & Windows platforms (though maybe not the most useful kind, depending on your point of view) but keeps lines "shorter" on POSIX because you only need a single character to mark the end of a line. Once you move outside the standard libraries, there is no requirement that they behave the same way.
  11. The reason there isn't a generic seek in CBM DOS is because of the way files are built. Seeking to a given offset in a file is an O(N) operation, because you have to get the first block, use it to find the track and sector of the second block, and lather, rinse, repeat until you get to block N where you want to read some data. CBM DOS did include relative files, which supported a record-number-based seek for records of up to 254 bytes (if I remember the limits correctly). FAT-based file systems have a global file allocation table, so one can much more quickly follow the chain of clusters (assuming a relatively fragmentation-free file system image and enough memory to hold the entire FAT). This is much easier said than done, though, because per Microsoft the FAT must include a minimum of 65,527 clusters to qualify as FAT32. Given that each entry is 32 bits, that means we're looking at about 256 KB. A highly fragmented file could be almost as bad to seek in from the perspective of an X16. Relative files had the benefit of being processed on the drive. The C64 only had to say "give me record X" and by magic the drive satisfied the request. FAT32 really isn't designed with a 6502 in mind. The potentially good news is that there is code in ROM for a fat32_seek operation: https://github.com/commanderx16/x16-rom/blob/a200d6266038fc5ff506280e70383e5774bd0ac9/dos/fat32/fat32.s ... this should make it possible to seek at some point, even if it isn't implemented in the cc65 library today.
  12. I know your pain. I bought the Ultimate Hacking Keyboard (with a name like that, why not) when it was crowdfunding a few years ago. They were slow to deliver everything, but they did (which I can't say about everything I've ever kicked money into). It has been awesome being able to position the halves independently and do most everything from the keyboard itself. My only real problem is my N key is getting fidgety after several years of heavy use. It is an open source design and I can replace the switch myself, but it still "works" and it lives by the credo "if one N is good, two are better!" (most of the time). Actually, I bought two of them, along with two expansion modules for each, so I can mouse from the keyboard or my recently delivered thumb trackball. It really has been a great keyboard for me personally, but it was not inexpensive. Since it is a programmable keyboard, and I can put multiple configurations in it that are a hotkey away, I really ought to see if WASD keycaps would fit it (I expect they would). This isn't my personal keyboard, just a representative one I found online. Instead of the QWR "QWERTY" mode ID, mine reads SDR. Because why wouldn't it?
  13. I'm not the authority on what X16 will support obviously, but: there is SDSC, SDHC, SDXC, & SDUC (Secure Digital [Standard|High|Extended|Ultra] Capacity). The minimum size of a standard capacity card is not specified. The maximum limits for each are 2 GB, 32 GB, 2 TB, and 128 TB. Other than standard capacity, each level of card has a minimum size that must be strictly larger than the previous level's maximum. According to Microsoft, the "standard" for FAT32 is a minimum 32 MB partition. There is nothing physically incompatible between FAT32 and smaller partitions, but a lot of software won't know what to do with things outside the spec. I don't know what type of card interface exists in the hardware, but it should handle anything that is at least 32 MB (maybe smaller depending on how lenient the drivers are), and the maximum will likely be 2 GB or 32 GB. 2 TB is in theory possible, but the SD standards people say that cards larger than 32 GB use exFAT instead of FAT32. They could probably be reformatted to use FAT32 up to 2 TB or 16 TB (depending on what sector size is encoded in the SD card), but there are going to be systems that don't know what to do with a > 32 GB card that isn't FAT32. So ... X16 may have its own limitations, but those are the limits as I understand them from my past life writing hard drive utilities and user mode file system drivers.
  14. True, though I suspect avoiding printf by using cprintf probably doesn't help the binary size much. I would think that they both use the same formatting code underneath and the only real difference is whether the screen is targeted or a file stream pointer. If they do not share the exact same formatting code, then that only makes things worse, especially on memory constrained 8 bit platforms!
  15. It really depends on what you're trying to do. cprintf is not part of standard C, but it is offered by many platforms. And it doesn't have the potential overhead of processing data as a file stream, so it can be (not that it is guaranteed to be) faster when you *know* you want to write to the screen and not whatever is pretending to be the standard output stream.
  16. Note that the way FAT32 works is: every file has an 8.3 entry. If it has a long file name, it also has additional entries, one per 13-character part. For example, a file named x16emu.exe gets a short entry of X16EMU.EXE (because traditional file names are all uppercase). Then it will have a single long name entry, because x16emu.exe is under the 13 character limit. So if you are planning on, say, 16 character names, you'll use 3 entries: 1 for the short name, 2 for the long.
  17. FAT32 has a limit of 65,536 entries in a given directory, but! Two are already used for the current and parent (. & ..) entries in subdirectories, though not in the root. If a file has a simple 8.3 file name, it only takes one entry. If a file has a long file name, it takes a variable number of entries (one extra entry per 13 characters in the LFN).
  18. I admit I turn on disk drive sounds in Vice. But then I also miss fast loaders when I don't have them, so I'm really inconsistent. Imagine today's teens using Micro SD cards with their ~100 MB/s transfer speeds in 40 years, when people are using warped space physics to transfer data FTL and missing the good old days of their slow 100 MB/s cards.
  19. https://www.popularmechanics.com/military/a29539578/air-force-floppy-disks/ is an interesting story about how 8" disks were still used until just a couple years ago for military applications!
  20. I guess if I'm going to be *really* fair, I should limit my width and height like I did the thickness. So 13 x 18 Micro SD, or 234 TB in the same area (or less). Still danged impressive progress we've made over 50 years.
  21. This of course ignores the thickness discrepancy (1.6 mm per 8" disk vs 1 mm per Micro SD), but since you can't fit a minimum of two Micro SD in the thickness of the 8" disk, it doesn't seem worth computing.
  22. In prepping for my first lesson on Tuesday, I'm reviewing a PowerPoint that is provided. It's not the worst thing I've ever seen, but it's not great. One thing they wanted to talk about was the invention of the floppy in 1971, and they showed 3.5" disks. So I went and grabbed pictures of 8", 5 1/4", & 3.5" disks next to each other. Then I added a Micro SD next to them for contrast. I never used an 8" disk. According to my research, the first ones could hold 80 KB. Converting to metric, they are 203 mm per side, or 41,209 sq mm. Micro SD is 11 mm x 15 mm, or 165 sq mm. The largest capacity available today is 1 TB. Using decimal units for consistency and ease of computation, that's 12.5M times more capacity on one Micro SD vs an original 8" disk. But density is important! You can fit almost 250 Micro SD cards in the same surface area as the 8" disk (I'm rounding up). Anyway, 250 TB in the same surface area, or 3.125B times more capacity if my math is right. It is claimed that a standard typewritten page is about 2 KB. That's 40 pages of text on the 8" disk vs 125B pages in the equivalent surface area of 1 TB Micro SD cards. Or in Library of Congress units: 25 LOCs in the Micro SD capacity, 8 nano-LOC in the 8" disk capacity.
  23. It is intended to be taught to students starting in 6th grade or later, so it still has traces of elementary school around it. So there will be pieces like that which I will avoid (if someone *wants* to draw pictures about inventions, that's fine, but it isn't anything I'm going to collect and grade). I do want them to take a minute or two to think about those sorts of inventions and write a list in words. If they can't do that much, they may not be ready for intro to Python. What I'll be grading is their participation as part of the class, so that they have some historical background. I plan to bring in several different computers (I have my legitimate C= 16 with tape drive and joystick, a raspberry pi with my bare metal CBM emulators, an Intel Compute Stick, and some old spent motherboards and cards [probably]) to show some of the various forms they can take and to be able to point out parts of the motherboard that make things work. Combined with a slide presentation of "ancient" computers. They will be using Chromebooks and I'll have my Surface Pro plugged into a projector. But yes ... this is the first year this school has offered this type of class, so there are bound to be some growing pains as I go through and adapt the material to cut out the busy work as much as possible and really focus on writing software with important side trips (without the coloring).
  24. I have a curriculum provided, but the director at my school is very much against "inauthentic" experiences and wants useful stuff, so I have latitude to use the better parts of the curriculum (PRINT, INPUT, variables, etc.) and ignore busy work. For example, the first lesson includes the following activity: It's not that the entire activity is bad, but I shudder a little bit at a coloring assignment in a class like this. The discussion is important. Writing out ideas is a good idea. Coloring might be useful for some students, but it just doesn't feel "authentic" to me personally. Of course, all I really know at the moment is the curriculum that already exists and that I'll have 11 or 12 students in my class (subject to first week changes in enrollment). Once I get to know the students, I'll have a better idea of how to tailor it for them.
  25. Thanks. I agree. I'm not sure how much Python is exposed in the classroom environment (as in, we use a web-based platform that provides a Python environment that their classroom work will be completed within, since the students use Chromebooks). More than anything, extension library availability may be an issue depending on what the students want to do for projects they choose. Of course, there is nothing that requires them to use just the one platform. They can't install extra tools at school, but there are multiple online Python sites that allow you to edit and run code, and multiple platforms support installing Python for use at home. As for the magical incantation problem, you hit on something I hadn't considered. Yes, libraries can give you that effect, but I'm especially thinking of things like Stack Overflow. So many snippets are available online that people will grab and use without understanding what they are or why they work. At least with a library you are using an interface to access functionality. Random snippets feel like an even worse way to cobble together a program.