
Change of product direction, good and bad news!


What should we do?  

359 members have voted

  1. Should we release the Commander X8?

    • Yes, it should replace Phase-3. It's good enough.
    • Yes, but you should still offer a Phase-3 Commander X16 eventually too.
    • No, don't release the X8, stick with the original plan.
  2. Should we still make a Phase-2 product?

    • Yes, Phase-2 is what I want
    • No, skip and go straight to Phase-3
  3. For the X16 Phase-1, do you prefer a kit or a somewhat more expensive pre-assembled board?



Recommended Posts

I'm completely new to this project and I might not be fully up to date. I thought the initial plan was to create a dream 8-bit computer from standard DIL components that will still be available in the near to mid-term future (the problem with replicas of older machines is the lack of certain chips, right?). While VERA looks like a very cool project that could possibly be sold independently, it already contradicts that idea. Hence, in my opinion, there is no major difference left between having everything in an FPGA and having just some major parts in one. To rebuild or fix this machine in the future, you will need one anyway. Maybe this can only be solved by making the complete hardware (incl. VERA) and software open source as well - though I completely understand that the makers want to get their money back first.


1 hour ago, Ju+Te said:

I'm completely new to this project and I might not be fully up to date. I thought the initial plan was to create a dream 8-bit computer from standard DIL components that will still be available in the near to mid-term future (the problem with replicas of older machines is the lack of certain chips, right?). While VERA looks like a very cool project that could possibly be sold independently, it already contradicts that idea. Hence, in my opinion, there is no major difference left between having everything in an FPGA and having just some major parts in one. To rebuild or fix this machine in the future, you will need one anyway. Maybe this can only be solved by making the complete hardware (incl. VERA) and software open source as well - though I completely understand that the makers want to get their money back first.

So you've missed... a lot. 

Yes, the hardware will be an open design. The software was originally intended to be open source, but the team chose to license the Commodore 64 KERNAL from Cloanto, making the operating system proprietary. It is possible to port the portions written by Mike Steil and other users to something like the Open KERNAL written by the MEGA folks (which has been discussed), but I think David decided the licensing option was the simpler way to go. 

From the beginning, David was going to use an FPGA graphics chip. Originally, he planned to use the Gameduino, but the performance just wasn't there (Gameduino uses an SPI serial link), and he ended up trialing two designs submitted by users. The VERA design won out, so that was selected and refined to make the final product. 

So while the video interface is on an FPGA chip, the rest of the system (RAM, I/O, and CPU) is still discrete chips. 

And yes - I'm pretty sure there will eventually be an all FPGA system. The individual components for this already exist; it's just a matter of someone actually assembling them into a single core that can run on a popular and inexpensive FPGA platform. 

 


1 hour ago, Ju+Te said:

I'm completely new to this project and I might not be fully up to date. I thought the initial plan was to create a dream 8-bit computer from standard DIL components that will still be available in the near to mid-term future (the problem with replicas of older machines is the lack of certain chips, right?). While VERA looks like a very cool project that could possibly be sold independently, it already contradicts that idea. Hence, in my opinion, there is no major difference left between having everything in an FPGA and having just some major parts in one. To rebuild or fix this machine in the future, you will need one anyway. Maybe this can only be solved by making the complete hardware (incl. VERA) and software open source as well - though I completely understand that the makers want to get their money back first.

This is the basic argument. 

It's not practical to produce a serious video system without some dedicated chips - even in the early 1980s we had the 6847, 6845, 99x8 and various ULAs in machines. VDUs built out of TTL and RAM chips with an EPROM character generator tend to be late-1970s/early-1980s designs, and many of them used a CRTC anyway. Dave's original idea, to produce something cheap out of real parts, was always impossible unless you took a Galaksija-style approach to video generation, which, while very interesting, is limited.

So there's a compromise, which is essentially video on one chip and everything else with real parts. The main problem with this seems to be timing, which seems to have caused various other CPLDs and MCUs to be added (not sure exactly what), and some things moved to VERA. So sound is now on VERA, and it's a moot point whether there's an advantage in having a real sound chip as well. The PS/2 keyboard interface was supposed to be done over the VIA as well; I was never quite sure how/if that worked given the slow clock. It wouldn't surprise me if that moved onto VERA as well.

Then you could get to the point where you've pretty much got a standard 6502/RAM/flash-ROM reference design with all the add-ons on an FPGA, with other modern components forming a bridge.

It's supposed to have educational value (I understand the kit builders' enthusiasm), but I've actually done this with real children in schools (usually doing on-the-fly repairs of BBC Micro keyboards), and there's not a huge amount you can say other than "this chip does this, that one does that" and so on. It undoubtedly helps bridge theory and reality, but they can't experiment with it. They all look much the same, and it only takes a few minutes. If you want to get down to the nitty-gritty of hardware, you'd be better off with Ben Eater's course, which is first-principles parts but no sort of development platform (as yet), or a basic digital electronics course, if they still do those?

If you want real chips on there, it'd be better to take a leaf out of Alan Sugar's book with the CPC472 and do it on an FPGA, then just plonk fake components on top. Then you can buy cheap fakes from China and it doesn't matter if they don't work.

 


17 minutes ago, TomXP411 said:

So you've missed... a lot. 

Yes, the hardware will be an open design. The software was originally intended to be open source, but the team chose to license the Commodore 64 KERNAL from Cloanto, making the operating system proprietary. It is possible to port the portions written by Mike Steil and other users to something like the Open KERNAL written by the MEGA folks (which has been discussed), but I think David decided the licensing option was the simpler way to go. 

 

Have you had a look at the source? I remain unconvinced. Apart from problems like the use of the Z register, there's a whole Z80 emulator in there. It's not very clear what any of it actually does, and I can't find any real effort to separate MEGA65-specific code from generic code. It also seems to follow the modern tradition of comments being for wusses. Even if it works, it looks to be in the "nightmare to maintain" category. https://github.com/MEGA65/open-roms


4 hours ago, paulscottrobson said:

So there's a compromise, which is essentially video on one chip and everything else with real parts. The main problem with this seems to be timing, which seems to have caused various other CPLDs and MCUs to be added (not sure exactly what), and some things moved to VERA. So sound is now on VERA, and it's a moot point whether there's an advantage in having a real sound chip as well. The PS/2 keyboard interface was supposed to be done over the VIA as well; I was never quite sure how/if that worked given the slow clock. It wouldn't surprise me if that moved onto VERA as well. ...

Pinning down some details that are a bit loose here.

There aren't "various other CPLDs and MCUs" added due to problems with "timing". The timing problems were handled by getting the chip-select sequence right, with R/W and clock phase, for chips that variously expect 6502-style bus timing (leading select), Z80-style bus timing (synchronous select), and PICBUS-style timing (leading R/W). That was a matter of deriving the correct versions of the chip-select signals in the glue-logic chip-select circuitry, since some needed to be OR'd synchronously with the clock and some needed to be asynchronous.

The timing problem with the AY3 sound chip led to its replacement with a different PSG, the SAA1099. IIRC, it was mentioned on Facebook that Frank's plan for VERA from the outset was to include PSG capabilities, so PSG on VERA is more about the VERA designer's goals making the SAA1099 redundant than about issues integrating the SAA1099 sound chip. The YM2151 is an FM chip, which completes the 1980s "chiptunes trinity" of PSG, FM and PCM.

The MCU is an 8-bit ATTiny that handles power management for the ATX power supply, which is not a pure old-fashioned "dumb" supply but includes a power-good signal telling the motherboard whether the output is valid. The version 1 and 2 prototypes had a glue-logic circuit for this that apparently worked with the supply they had, but proved a bit flaky when tried with some other ATX power supplies. After all, ATX power supplies assume that the motherboard has a microcontroller handling power on/off etc. It is accessed via I2C, bit-banged on a 6522 VIA.
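On the I2C side, the bus sends each data byte most-significant bit first, with a ninth clock for the slave's acknowledge. A minimal sketch of the serialization a VIA bit-bang loop would shift out (the function name is mine, not any X16 API):

```python
def i2c_write_bits(byte):
    """Bit sequence a bit-banged I2C master shifts out for one data byte.

    I2C sends MSB first; a ninth clock then follows for the slave's ACK,
    during which the master releases SDA, so it is not part of this output.
    """
    return [(byte >> i) & 1 for i in range(7, -1, -1)]
```

A VIA implementation would set SDA to each bit while SCL is low, then raise SCL so the slave samples it.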

The PS/2 can't move to VERA; there aren't enough pins. The two options are handling the PS/2 with a VIA or handling it with a version of the ATTiny with more pins (which would, of course, be quite period-"consistent", as the original AT keyboard/mouse interface that PS/2 evolved from was handled by an 8-bit MCU). If they end up using the ATTiny, then an MCU may end up being used to deal with a timing issue, but it was added to integrate with ATX power supplies under the ATX specification, rather than because "it works with what we have on hand".
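For reference on what either the VIA or the ATTiny has to do: the PS/2 protocol frames each byte as 11 bits, namely a start bit (0), eight data bits LSB-first, an odd parity bit, and a stop bit (1). A sketch of decoding one sampled frame (names are illustrative, not the X16 code):

```python
def decode_ps2_frame(bits):
    """Decode an 11-bit PS/2 frame (list of 0/1 values, start bit first).

    Returns the data byte, or raises ValueError on a framing/parity error.
    """
    if len(bits) != 11:
        raise ValueError("PS/2 frame is 11 bits")
    start, data_bits, parity, stop = bits[0], bits[1:9], bits[9], bits[10]
    if start != 0 or stop != 1:
        raise ValueError("bad start/stop bit")
    # Data bits arrive least-significant bit first.
    byte = sum(bit << i for i, bit in enumerate(data_bits))
    # Odd parity: data bits plus parity bit must contain an odd number of 1s.
    if (sum(data_bits) + parity) % 2 != 1:
        raise ValueError("parity error")
    return byte
```

A bit-banged host samples one bit per falling edge of the keyboard's clock and feeds the eleven samples through something like this.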

 

Edited by BruceMcF

On 9/25/2021 at 2:22 PM, BruceMcF said:

The PS/2 can't move to VERA; there aren't enough pins. The two options are handling the PS/2 with a VIA or handling it with a version of the ATTiny with more pins (which would, of course, be quite period-"consistent", as the original AT keyboard/mouse interface that PS/2 evolved from was handled by an 8-bit MCU). If they end up using the ATTiny, then an MCU may end up being used to deal with a timing issue, but it was added to integrate with ATX power supplies under the ATX specification, rather than because "it works with what we have on hand".

Fair enough, though the latter sounds to me like "adding an MCU to cope with the problems of timing". The Sinclair QL did the same thing with an 8049, which would be 1984-ish (it was a bodge, off the top of my head), so it's certainly period-usable. So, really, are FPGAs, which are just very grown-up user-programmable ULAs and specialist chips. It's the same idea: put lots of complex circuitry in one chip rather than having a rat's nest of wiring, albeit created differently.

But then you get back to the same problem. Do you really have a machine where you can understand how it works? Does it have any educational value at all as hardware? And how much of it are you just treating as "a black box which does stuff"?

If you want to really understand how it works, you'd be better off with Nand2Tetris or the Ben Eater computer. Or both.

  • Like 1

On 9/26/2021 at 4:22 AM, paulscottrobson said:

Fair enough, though the latter sounds to me like "adding an MCU to cope with the problems of timing". The Sinclair QL did the same thing with an 8049, which would be 1984-ish (it was a bodge, off the top of my head), so it's certainly period-usable. So, really, are FPGAs, which are just very grown-up user-programmable ULAs and specialist chips. It's the same idea: put lots of complex circuitry in one chip rather than having a rat's nest of wiring, albeit created differently.

But then you get back to the same problem. Do you really have a machine where you can understand how it works? Does it have any educational value at all as hardware? And how much of it are you just treating as "a black box which does stuff"?

If you want to really understand how it works, you'd be better off with Nand2Tetris or the Ben Eater computer. Or both.

Pedantically (given that I'm an Econ professor/instructor by trade), it's adding an MCU to cope with the problem of correctly powering up the board on an ATX power supply, and then using the MCU that was needed anyway to cope with problems of timing ... much as the C128 had a Z80 because running CP/M was thought to be a checklist selling point, but the Z80 was then used as the power-up processor so the C= key could be checked to determine whether to come up in C64 or C128 mode.

It would be even *more* like that if they would just commit to doing that, but they prefer to fix whatever issue is making reading the PS/2 ports at 8MHz flaky ... which arguably might be for the reason below.

Whether they bit-bang the PS/2 with the VIAs or with an ATTiny, it seems like a white box for anyone interested in how accessing a PS/2 port works. The VIAs are preferable because it would be in 6502 machine code, but either way it's as white-box as a pure open-source Linux without any proprietary binary driver blobs, and with far fewer layers between a program calling CHARIN and the hardware talking to the PS/2 keyboard to determine what character is being typed. It's far from the closed device-driver code between my Gateway notebook and the characters I am seeing appear on my screen right now.


I was never quite sure how it actually got keyboard input. I've only done it on microcontrollers (Arduinos). There was a block on making a cheap serial terminal for a long time: people could get one generating video easily enough, and reading PS/2 was a standard library, but not both at the same time, until the 1284 (?) got a separate asynchronously clocked input. I always thought you had to poll it if you didn't use an interrupt.


On 9/26/2021 at 10:35 AM, paulscottrobson said:

I was never quite sure how it actually got keyboard input. I've only done it on microcontrollers (Arduinos). There was a block on making a cheap serial terminal for a long time: people could get one generating video easily enough, and reading PS/2 was a standard library, but not both at the same time, until the 1284 (?) got a separate asynchronously clocked input. I always thought you had to poll it if you didn't use an interrupt.

If I understand correctly, they poll the keyboard and mouse in alternate vertical blank intervals, so 30 times per second each.
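If that's right, the schedule is just frame parity: at 60 vertical blanks per second, alternating between the two devices gives each one 30 polls per second. A trivial, hypothetical sketch of the scheme:

```python
def device_for_vblank(frame):
    """Pick which PS/2 device to poll on a given vertical blank interrupt.

    Alternating by frame parity means that at a 60 Hz vblank rate each
    device is serviced 30 times per second. Purely illustrative; not the
    actual X16 KERNAL logic.
    """
    return "keyboard" if frame % 2 == 0 else "mouse"
```

Over any two consecutive frames, each device is polled exactly once.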

Edited by BruceMcF

I think one major aspect of the decision whether to take an X8 or wait for a more perfect X16 is time to market. If you have to decide between getting an X8 in, let's say, 2021 (or the first half of 2022) and waiting for an X16 several years from now, this could significantly influence the answers. A dream 8-bit machine only has a good chance of convincing people if it does not remain a dream but becomes available in real hardware (no matter whether in DIL chips or "programmed" into an FPGA). So the big question is how the project members, including and around David, estimate the likelihood of getting the X16 to market in the next few years - even if it is just the board without case, keyboard or power supply (assuming the hardware is compatible with common keyboards, mice and power supplies; the case shouldn't be a big problem with a wood store around the corner and the ability to use a saw to cut plywood).


On 9/27/2021 at 2:38 AM, Ju+Te said:

I think one major aspect of the decision whether to take an X8 or wait for a more perfect X16 is time to market. If you have to decide between getting an X8 in, let's say, 2021 (or the first half of 2022) and waiting for an X16 several years from now, this could significantly influence the answers. A dream 8-bit machine only has a good chance of convincing people if it does not remain a dream but becomes available in real hardware (no matter whether in DIL chips or "programmed" into an FPGA). So the big question is how the project members, including and around David, estimate the likelihood of getting the X16 to market in the next few years - even if it is just the board without case, keyboard or power supply (assuming the hardware is compatible with common keyboards, mice and power supplies; the case shouldn't be a big problem with a wood store around the corner and the ability to use a saw to cut plywood).

A DIY X16p doesn't seem like "several years from now" ... judging from the reported state of the prototype and the current state of the system code on the development branch of the emulator, half a year would seem like a conservative release date for a crowdfund.


On 8/20/2021 at 12:29 AM, The 8-Bit Guy said:

Phase 2 would likely have 1 or possibly 2 expansion slots compatible with the phase-1 system.  Phase 3 would have no expansion capabilities.

My advice is based on various business and computer science training and a little experience.

I recommend producing and shipping an initial X8 system ASAP, whether you offer a DIY kit and/or preassembled boards.

Think about more than just the numbers: you also need to consider intangibles, those things to which you cannot directly assign dollar values.

For example, people need to be able to get their hands on an actual system to stimulate them and keep up their morale, so they will be willing to wait for an X16 system.

Be clear about everything involved with regard to a DIY kit, and recommend it only to those who have more than enthusiasm: they must already have experience assembling such things, otherwise you will have a headache and discouraged customers.

The emulator should be dropped right under their noses repeatedly, so they are reassured that they can give the system a full go without handing over their wallet.

I am willing to buy a 'baby' X8 system and I bet many other former hobbyists and people curious about computer science are willing to risk it.

********

Good Luck,

Dan.

  • Like 3

My opinion on the X8:

I think the current X8/X16 situation resembles the Commodore 16/Commodore 64 one quite a bit. The C16 had superior hardware in some ways (more colours and a faster clock, for example) but was inferior in others, like less memory and poorer performance elsewhere. And most importantly, the two weren't compatible. I think this is a much more apt comparison for the X8/X16 than the sometimes-mentioned C64/C128, because the latter were compatible and the C64 was a true subset of the C128... which the X8 does not seem to be, compared to the X16, in the current plan... Indeed, the C128 fared much better on the market than the C16, and while it did not add terribly much to the ecosystem in terms of capabilities used, it did not harm it either.

From this mistake Commodore made, I come to my opinion that X8/X16 should be more like C64/C128 and not at all like C16/C64...

If the X8 is released, it would be best in my view if it were 100% a subset of the X16 as far as software-development compatibility goes. Basically, the X8 FPGA should be configured to simulate an X16, simply with less memory and fewer features. Nothing better, nothing different! Firstly, the clock speed should not exceed the X16's; that should be the easy part. Secondly, VRAM access should be made similar, and I think that shouldn't pose major problems either? These would go a long way toward making the X8 a subset of the X16 (so more of a C64/C128 than a C16/C64 situation). I'm sure there are other details that would need adjusting as well, but you get the idea, I hope. If you can't make it 100% a subset, a 99% subset is much better than 80%, etc... If you can't make it very close, I would not release it at all.

Just my two cents and good luck all with the project!

  • Like 2

To further flesh out my idea in the above message:

I think the X8 not being superior in any way to the X16 is even more important than it not being different, for the X16 to make sense after an X8 release. For example, it was mentioned upthread that perhaps the X8 can't be made to use VRAM exactly like the X16 does. I could see this being the case; however, that would not automatically mean it needs to be given superior features.

The X8's window into VRAM is a superior feature compared to the X16's and would be detrimental to the ecosystem, similar to how the C16/C64 dynamic worked. If it turns out (if?) the X8 can't be made exactly the same in this regard, it could still be made very close – for example, allow writing through a single-byte window only. Scale the window down to one byte, similar to tuning the clock speed down to match the X16. This way the VRAM address might be different, but VRAM access would not be superior.

I know this is a crude example, but just thought I'd mention. My humble opinion only.
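For illustration of why a one-byte window would be enough: VERA-style access goes through address registers plus an auto-incrementing data port, which is effectively a one-byte window whose position advances on each access. A toy model of that pattern (the register layout here is invented, not the real VERA map):

```python
class VRAMPort:
    """Toy model of VERA-style VRAM access: set an address once, then
    read/write a single data port that auto-increments. Sizes and the
    register interface are illustrative only.
    """
    def __init__(self, size=128 * 1024):
        self.vram = bytearray(size)
        self.addr = 0
        self.step = 1            # auto-increment applied after each access
    def set_addr(self, addr, step=1):
        self.addr, self.step = addr, step
    def write_data(self, value):
        self.vram[self.addr] = value
        self.addr += self.step
    def read_data(self):
        value = self.vram[self.addr]
        self.addr += self.step
        return value
```

With the address set once, successive writes land in consecutive VRAM bytes, so the CPU never needs more than a single byte of VRAM visible at a time.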


I agree that software written for the X8 should run on the X16, too. Maybe it is possible to use the X8's VRAM access for the X16 Phase 3 as well?

What I'd like to keep in the X8 is the USB port for keyboard/mouse. But this should not influence the software at all.


Indeed the software side is what matters. I think a single, compatible target platform is important for the success of any retro platform. It has been a good recipe for the ZX Spectrum Next and I expect it will work wonders for the MEGA65 as well, once released. Where there are too many or ever-changing targets, it may be that much harder to reach critical mass.

  • Like 1

I COULD BE WRONG, BUT:

 

I think compatibility is overrated, for two theoretical and two practical reasons.

First, if the two systems were compatible, then the least capable one is likely to be what developers code for generally, and the more capable one is likely to be what the demoscene codes for.  In other words, you split the user base REGARDLESS, so time and resources are spent and nothing is accomplished.

Second, in order to make the two systems compatible, you must cripple features on one or both machines, which is a lose/lose overall.  And if only ONE of the machines is crippled thus, it either drags them both down (because you code to the lowest common denominator) or it kills the more expensive of the two.  So time and resources are spent and nothing is accomplished.

* * *

Now for the practical reasons: 

First, the X8 fits neither the ecosystem nor the market of the X16.  Therefore, there's no reason to make the two architecturally compatible.

Second, the X8 and X16 are architecturally DESIGN COMPLETE.  I strongly doubt the X8 is going to be re-engineered, and neither is the X16 -- sticking with the plan is the way to finish, rather than reversing direction and starting over.

 

 

THAT SAID:

Both systems have the KERNAL in common.  That means a lot of the two systems are ALREADY compatible.  A VERA ABI layer in the KERNAL could increase this compatibility a bit.

 

 

Edited by rje
  • Like 1

On 9/28/2021 at 8:53 PM, rje said:

Second, the X8 and X16 are architecturally DESIGN COMPLETE.  I strongly doubt the X8 is going to be re-engineered, and neither is the X16 -- sticking with the plan is the way to finish, rather than reversing direction and starting over.

If so, I would personally stick with the X16 plan and not launch the X8 diversion. (Or, alternatively, launch just the X8 and focus on that.)

But of course those owning and doing the project will make their own choices. Just my two cents. 🙂

  • Like 1

That was my first impression.  Bruce, who is an economist, helped show that the market separation between the X8 and X16 makes them reasonably distinct, and selling the X8 won't impact selling the X16, which is my only concern.

 

Also, I just noted that the KERNAL is common between them.  Therefore, a small VERA ABI layer in the KERNAL could help bridge the two to a degree, allowing more code compatibility without being full-featured.

 


On 9/28/2021 at 1:10 PM, Janne Sirén said:

I'm not so much concerned about the sales, I concur they can both sell. My thoughts were on maximizing the developer focus by sticking to a single platform and thus maximizing the content and energy around said platform.

Yes, there is that.

I suspect it won't be that bad.  I just re-read the KERNAL documentation (https://github.com/commanderx16/x16-docs/blob/master/Commander X16 Programmer's Reference Guide.md#sprites) and had forgotten that there are already sprite calls in the KERNAL that serve as a translation layer between a program and VERA.  And it has had this for a while.

It stands to reason that there are also PSG KERNAL calls planned.  If they also create a memory-move from system RAM to VERA, then the KERNAL effectively insulates the coder from VERA access to a large degree.

It doesn't grant 100% compatibility, but it does let an (even larger) amount of code written on one platform work on the other.

Niche cases will emigrate to one or the other platform.  It's a strength that they have different emphases and price points.

 

To sum up, the MAIN hoo-hah about the X8 is that VERA is interfaced differently.  This is an old kind of problem that the KERNAL would solve, so presumably we should be able to call things like:

JSR $FFE0   ; psg_play_sound()
JSR $FFE3   ; memcpy_to_vram()
 

(The addresses are bogus; I just made a couple up)
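The reason fixed addresses like these can work at all is the KERNAL jump-table convention: each published address holds a jump to the real routine, so the implementation behind it can change (VERA on the X16, windowed VRAM on the X8) without breaking callers. A rough sketch of that indirection in Python (slot addresses and the routine here are made up):

```python
class JumpTable:
    """Sketch of a KERNAL-style jump table: callers use fixed slot
    addresses; the table indirects to whatever routine is currently
    installed behind each slot. Addresses are invented for illustration.
    """
    def __init__(self):
        self.slots = {}
    def install(self, addr, fn):
        # Equivalent to patching the JMP target behind a fixed entry point.
        self.slots[addr] = fn
    def jsr(self, addr, *args):
        # Equivalent to JSR-ing the fixed address.
        return self.slots[addr](*args)

kernal = JumpTable()
# Hypothetical memcpy_to_vram: the backing routine could be swapped per
# platform while every caller keeps using the same slot address.
kernal.install(0xFFE3, lambda dst, data: ("copied", dst, len(data)))
```

The point is only the shape of the indirection: callers bind to the slot, never to the routine's real location.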

Edited by rje
  • Like 1

Personally I'm "pro" all the projects, but I must admit the projects that have stayed singularly focused speak to me more than those that have been a bit more all over the place. I think the X16 roadmap and the original plan made sense from a focus standpoint, and now the X8 is sort of an out-of-nowhere diversion from that. I got the sense from the initial direction of this project that avoiding diversions was part of the plan, and it made sense to me. I still think it makes sense. Maintaining two incompatible platforms seems like a diversion.

But just my opinion of course, nothing more.


X8 is of course controversial in general, and so it was reasonable to keep the lid on it.

It is interesting to think about their incompatibilities, and what effort that might create.  I'm not sure if it would...

...and actually not aiming for 100% compatibility helps in that regard.  Aiming for a level of general compatibility, via a minimal set of KERNAL calls, gives them design flexibility.

* * *

 

In fact, think about a KERNAL where the video and sound calls are generalized, and aimed at a specific base capability.

What prevents you from replacing VERA with something else, assuming you meet the KERNAL's capability assumptions?

Just thinking out loud.

 

Edited by rje

The X8 at 12MHz would be significantly better for BASIC programming than the X16 at 8MHz, which is great, because programming in BASIC is probably why most people would buy the X8, given that it can only play games made for or ported to the X8, due to its memory limitations.

I don't think they need to be identical in assembly language programming, largely because anyone who can program in assembly can handle the minor modifications required to port a small X16 program down to the X8 or an X8 up to the X16.


X8: An inexpensive 8 bit computer for the nostalgic average person and otherwise curious people.

X16: A money sink for really big nerds.

Edited by Tatwi
  • Haha 1

This topic is now closed to further replies.