Scott Robison

  1. Speed measured in seconds per cycle, not cycles per second.
  2. I like the idea ... if money were no object, I'd buy one. By "money were no object" I mean both for having enough to buy the hardware and enough to pay for the space to store it and all its friends I'd buy to keep it company.
  3. I never used that, but my one assembly class in college in the mid 80s was on Apple II computers, and we ran a similar progression: first hand assembly with typing of hex, and only after learning to hate that were we allowed to use mnemonics. The most interesting part of that class to me (having already had a bit of background with the 6502 from my experience with Commodore and Atari computers, and a roommate's Apple II) was the floppy drive we used. It's been too many years, but it was probably one of the first two found at https://en.wikipedia.org/wiki/Floppy_disk_variants, either the MCD-1 or (more likely) the CF-2.
  4. I think everyone has their own "level of difficulty" as it were for these things. On one extreme, you have the MONSTER 6502 team, who created a 6502 from "discrete" transistors (I know they aren't really discrete, but they did what they could, and the details are not important here). Then you have the all-in-FPGA models.

FPGA can replicate existing chips exactly, though it is hard to accomplish. The more people push the limits of the original tech, the more difficult it becomes to replicate exactly, because they are depending on implementation details that were never part of the documented interface. I am personally not interested in pushing beyond the documented interfaces; I'm happy to see the documented interfaces work properly without worrying about implementation details that were never intended to be exploited. FPGA can also allow one to try their hand at designing something that's never been done before and that wouldn't be practical if one had to create a run of ASICs just to try out ideas. And FPGA can bridge the gap between the two, creating "identical copies" of something in low quantities that would be too expensive otherwise.

I am personally not bothered by decisions to attempt discrete-component-based computers that harken back to the past. Some feel this way about cars, as has been noted here. We love the computers of our formative era. I am also not bothered by recreating things in FPGA form. Sure, you can't necessarily point at individual chips and tell people what each one does, but one can create a graphical rendering of it and do essentially the same thing. Or create a LEGO version to have a physical representation of the logical bits. Let's face it: if they'd had access to FPGA back in the day, they'd have used it where appropriate.

There is nothing wrong with trying to create a system from discrete components, and there is nothing wrong with trying to make things in BASIC just to see how far that can be pushed. We all have our itches to scratch.
  5. Probably emscripten: https://emscripten.org/
  6. While the code is on GitHub and is licensed in a way that would support other flavors of deployment, the team Does Not Like(TM) discussions of alternative deployments / configurations on this forum. They have licensed it in a way that permits such things, but this forum is not the place for such discussions (as I was told after innocently engaging in similar discussions). The FAQ in theory covers it, though I think it should be made more explicit. Regardless, consider yourself warned (by someone who has no authority, just experience with the eventual outcome). In a more generic light: do not assume that just because code is available on GitHub that it is licensed in such a way that you can do other things with it. There are many licenses in the world, and publicly available code does not mean it is free to use for whatever purpose.
  7. Not on the CX16, at the very least. The timing of chips is critical, and while the SDCARD is much faster than the older mass storage devices, it isn't going to be fast enough to respond in a timely fashion on the BUS. Note: This is an educated opinion without having looked into actual timing diagrams.
  8. The single best (as in most valuable to me) programming class in college (the first time I tried, in the 1980s) was a FORTRAN 77 class. It did not have many of the modern niceties. Sure, it had subroutines and functions, but it didn't have many typical control structures. But the reason it was a great class was not because FORTRAN 77 was a great language (though there are things written 50 or more years ago still being maintained because Fortran is great at what it does). It was a great class because the instructor didn't teach it as a FORTRAN 77 class; he taught it as a learning-to-program class. So he actually took the time to say "you should use a WHILE loop in this case; FORTRAN 77 doesn't have a while loop, so you would write it as an IF condition with a GOTO to exit the loop, and a GOTO at the bottom to return to the top."

In my experience of managing and interviewing engineers over the years, there are far too many who are completely lost when they don't have their toolbox. They don't know how to adapt. They don't know how to craft a tool that isn't already in their toolbox when needed. One common question we would ask engineers, just to weed out the ones who couldn't cut it, was to write in pseudocode a routine to reverse the characters in a string. We were in a C# shop at this point, and I was shocked at how many BSCS grads could not do it without resorting to a standard library function. The point wasn't that we expected them to write a bunch of string-reverse code from scratch. Clearly the standard library would be the best way to accomplish this in production code in most cases. But there are many algorithms / decisions for business logic that will not be in the standard library, and if they can't reverse the characters in a string without a "strrev" function or similar, we knew they weren't going to be a good fit for our organization. Note: Just because they *could* write up a strrev didn't mean they *would* be a good fit, but it was a good first step.
So I disagree fundamentally that BASIC is a dead-end language if what you are trying to do is learn logic and program structure. There are ways to write BASIC in a "pretty" way to ensure that it is "structured" even if it must use GOTO. The problem with GOTO isn't that it exists; the problem is that it isn't used well. Now, I'm not advocating that everyone should be forced to write in BASIC v2 before moving on to other languages, but it could be taught in a way that allows BASIC programmers to adapt to other languages, once they understand that the BASIC incantation "PRINT" is known as "println" in Java, "printf" in C, or "std::cout" in C++. Yes, the syntax is different, but they ultimately do the same things.

That's what I think a good programmer / engineer does. They know there are a bunch of types of screwdrivers and wrenches and pliers and other tools. Some are better suited for certain tasks, but in a pinch, you can adapt and use a less capable tool when you must. BASIC is that limited toolbox that can help you learn about the "basics" of tools that you will encounter as an engineer. In like fashion, 6502 ML isn't a dead end any more than x86-16 is a dead end in a world of x86-32 or x86-64. They all allow one to learn how CPUs work at a low level, and that knowledge can be translated to other platforms.

Circling back to the interviews of programmers who are lost when their library doesn't provide the exact tool they need: that doesn't make them bad. There are jobs for people who know how to do particular tasks, and they are good and necessary jobs. I would not be a good fit for a webdev position, given my specialty, which made me stand out to my employer and which results in regular contacts from recruiters trying to hire me away. A webdev would not be a good match for what I do. Specialization can be a good thing, but so can generalization. It's allowed me to work in a number of different industries and with multiple programming languages. Could I do webdev? Probably. I'm just not interested in it, so I'm glad there are others who are, and I'm glad my skills are not shared by as many, which increases my value in projects that require high performance on custom platforms.
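For what it's worth, the interview question above has a very short answer without any library "strrev". This is my own sketch, not the pseudocode we asked candidates for: walk two indices inward from the ends, swapping characters until they meet.

```c
#include <string.h>

/* Reverse a string in place, no library strrev needed.
   Two indices walk inward from the ends, swapping as they go. */
static void reverse_in_place(char *s)
{
    size_t n = strlen(s);
    for (size_t i = 0, j = n ? n - 1 : 0; i < j; i++, j--) {
        char tmp = s[i];
        s[i] = s[j];
        s[j] = tmp;
    }
}
```

The same shape works in any language with mutable strings or character arrays, which was exactly the point of asking it.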
  9. It's not pushback, it is differences of opinion. It's why some people swear by Windows and others swear at it. Ditto Linux and every other system ever created. Multiple people have suggested ways one can have a border if one wants, without it being a mandatory part of the system. When the X16 is released and we have the ability to flash the ROM, you or someone can modify the initial parameters of the hardware so that a border is provided by default.
  10. I may be in the minority, but Commodore BASIC v2 is responsible for getting me started! Sure, there are a lot of things missing WRT modern languages, but I firmly believe that learning any language is a useful exercise to get one to begin thinking in a programmer mindset. And the more languages you learn, the better off you are, because you begin to contemplate programming in the abstract (input, output, variables, conditionals, control structures) instead of the concrete (INPUT, PRINT, A$, IF X=Y THEN ...). I know there are plenty of people who believe "avoid these toy languages because you'll learn bad habits" or other similar arguments. For learning the basics (pun not intended but apropos) of how to write a program to tell a computer what to do, BASIC is great. Not every language must lead directly to a six-figure income. Then of course there is assembly / machine language. Talk about a language without modern first-class features like functions!
  11. Most things can be done in RAM. A very minimalist 6502 ROM could occupy as little as the top 1 to 4 KB of the address space and do nothing more than allow loading a binary file and running it. You want BASIC? Poof! You want a standalone text editor? Done!

In the case of the X16, IMO (not that anyone really cares what I think, as it isn't my dream machine) the decision has been made to have a 16 KB bankable ROM that will never be available as RAM. Thus "immutable code" fits in that place really well: KERNAL, BASIC, GEOS, to name a few. Or Xenon (the name of my present vaporware concept).

I agree that the speed of the SD card vs the old IEC interface means that loading isn't nearly the PITA it was back in the day. However, providing an environment that allows high-level interpreter code that can escape to ML just seems perfect for ROM banking, especially when it allows multiple tasks to be loaded and running at once. Using an example others have given, I'd love to see a text-mode windowed UI that allowed one to load both a text editor and an assembler or compiler, then Alt-TAB (or some magic key) between the two windows, just as you do in any modern multiprocessing system. If the environment supported shared memory blocks so that the text editor could share the current source file with an assembler, even better (potentially). It doesn't replace highly optimized ML / native programs; it's just a different interface. That being said, it's not like I'm ready to burn something to ROM.
  12. My desire to add to ROM is an "improved" BASIC-like interpreted language. One that supports multiple tasks and provides an almost GEOS-like interface, but all in a text / tile based interface (think along the lines of the TEOS project; https://www.youtube.com/channel/UCrELuExo6T_USEqELq1xVHQ for videos). In my case, it would not be designed primarily for native code (though it would ideally support native apps if they wanted to use it, not unlike BASIC / KERNAL routines). The utility of it in my mind is that by making the interpreted code position independent (even more so than BASIC) it could support multiple simultaneous tasks and even crude multitasking without having to play with IRQ task switching. I think it would provide a "linear" address space to gloss over bank switching and other hardware abstraction layers, to support sharing the single VERA or other hardware resources more easily. This could all be done without ROM, of course. With as much RAM as is available, it would not be a huge pain point to sacrifice RAM to the runtime. But it would be nice if those static, unchanging parts of the system that multiple applications depend on were in ROM so as to keep as much RAM free as possible.
  13. Pony Schmony! I want a UNICORN!
  14. Modern systems have in large part (I think) migrated to more complicated hardware schemes so that there is room for multiple BIOS/ROM images in a single chip, so that in the event of an incomplete update, there is still a failsafe/unmodified version sitting next to the failed update. In the early days of field updatable chips, this was not the case. If you were only able to partially update the chip due to a power failure or some such, you very much would brick the device because you would have half of one BIOS and half of another (perhaps).
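The dual-slot idea can be sketched roughly like this. Everything here is an illustrative assumption on my part (the header fields, the additive checksum, the function names); real firmware uses CRCs and vendor-specific layouts, but the failsafe logic is the same: verify both slots, boot the newest valid one, and an interrupted write simply fails verification.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical A/B firmware slot selection. Each slot carries a
   header with a version counter and a checksum over its payload. */
typedef struct {
    uint32_t version;   /* monotonically increasing update counter */
    uint32_t length;    /* payload size in bytes */
    uint32_t checksum;  /* additive checksum of the payload (illustrative) */
} slot_header;

static uint32_t checksum(const uint8_t *data, uint32_t len)
{
    uint32_t sum = 0;
    for (uint32_t i = 0; i < len; i++)
        sum += data[i];
    return sum;
}

static int slot_valid(const slot_header *h, const uint8_t *payload)
{
    /* A half-written image fails this check, so it is never booted. */
    return h->length > 0 && checksum(payload, h->length) == h->checksum;
}

/* Return 0 or 1 for the slot to boot, or -1 if both are corrupt. */
static int pick_slot(const slot_header *h0, const uint8_t *p0,
                     const slot_header *h1, const uint8_t *p1)
{
    int v0 = slot_valid(h0, p0);
    int v1 = slot_valid(h1, p1);
    if (v0 && v1)
        return h1->version > h0->version ? 1 : 0;
    if (v0) return 0;
    if (v1) return 1;
    return -1;
}
```

The early field-updatable chips had no second slot to fall back to, which is exactly why a power failure mid-write could brick them.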