Everything posted by picosecond

  1. Pretty much. I don't remember seeing the interface details published, but it should be similar to a 65C22 VIA. It's a bespoke digital design. The logic building blocks inside the iCE40UP5K are pretty simple, mostly D flip-flops in various flavors and 4-input look-up table (LUT) cells, each of which can implement any logic function of its four inputs (a minimal sketch of the idea appears after this list). The full cell library is specified here: https://www.latticesemi.com/view_document?document_id=52206 Almost. 1.2V is the core voltage. There is a second power rail for the IO, 3.3V for Vera. The FPGA has internal level shifters between the core and IO rails. External level shifters are needed to interface the 3.3V Vera IO to the 5V X16 logic.
  2. Lattice iCE40UP5K in the 48-pin QFN. https://www.latticesemi.com/en/Products/FPGAandCPLD/iCE40UltraPlus The datasheet: https://www.latticesemi.com/view_document?document_id=51968 Lattice's iCE40 family is a good place to start exploring FPGAs. They are a good deal simpler than most offerings from Xilinx and Intel.
  3. Here is an over-simplified but still useful way to visualize FPGAs. Imagine a huge Ben Eater style breadboard prepopulated with thousands of simple TTL gates and flip-flops, but no wires. By itself this logic does nothing, but by adding the right wires one could implement many possible useful circuits. Programming an FPGA is analogous to plugging wires into this breadboard.
  4. Hey neighbor, welcome. Ditto. It's never going to happen while I am still working, but it seems like a fun post-retirement project. That's not too many years out at this point.
  5. Consider the possibility that you know much less than the FPGA designer... Of course it feels like the X8 design is shoehorned into the FPGA. It was. I don't know the primary motivation. It could be cost, the desire to reuse hardware, or simply the fun challenge of seeing how much function could be crammed into a cheap FPGA. Unlikely. The ZX-UNO RAM by itself costs about the same as the CX8 FPGA. Everything else is kinda-sorta similar, so the cost delta is dominated by the ZX-UNO FPGA. That's 20 bucks or so in hobbyist quantities. It's not surprising that the more expensive design point (external RAM plus a bigger FPGA) delivers more function than the cheaper one (embedded RAM and a smaller FPGA). Spend more, get more. Both designs have their merits but neither is objectively better.
  6. I can't tell if you're serious or playing semiconductor buzzword bingo. I am going to have to go with "simply wacky".
  7. It feels like you are conflating microcontrollers and FPGAs. Logic implemented in an FPGA has clearly defined functional boundaries and electrical interfaces. Each part is responsible for something. Interactions between functional units are meaningful. Given the necessary design collateral, the circuit can be modified or customized. Some compiled C++ program which simulates a system is completely different, of course. But this has nothing to do with an FPGA implementation like CX8. Sure, the 6502 core in CX8 won't have exactly the same microarchitecture as one you bought from WDC. The building blocks are different. But they are not all that different. The FPGA implementation is "just" a bunch of gates, flops and RAMs all wired together. I don't know what prompted this comment. I never advocated for the Raspberry Pi here and specifically said it is a completely different architecture.
  8. I get your point, but I would call what you are describing superficial understanding, not meaningful understanding. The idea I am ranting against is that discrete implementations are necessary or superior for deep understanding. That does not match my experience. As an artistic choice, great. As a pedagogical choice, not so much.
  9. I have never believed these are connected. I think it is unfortunate that 8BG has been promoting this notion. The only thing that makes off-the-shelf parts understandable is their documentation. Without docs, how could anyone design with them? Even good docs stop at some level of abstraction. For example, the YM2151 docs describe nothing about its microarchitecture, which is needed to really understand how it works. I would argue that properly documented highly integrated designs can be more understandable than their off-the-shelf cousins. Phase 1 X16 and Phase 3 X16 are of equal complexity and equally understandable. The packaging differences are superficial. The Raspberry Pi has no architectural commonalities with the phase 3 X16/X8. The only superficial thing they have in common is a high level of integration. If people prefer the cool appearance of big PCBs with lots of chips, I have no argument with that. I think they look cool too. I just reject this idea that knowing this chip is the CPU and this chip does graphics imparts any meaningful knowledge of the computer's operation.
  10. The iCE40UP5K has hardware SPI and I2C units, two of each. I imagine CX8 is using one of these. These units do not have dedicated IO, so using them isn't free.
  11. Yup. The lack of a hardware UART is galling. The advertised workaround being the half-baked expansion bus makes it worse.
  12. @The 8-Bit Guy, this was a good instinct. It is amazing anyone thinks this is their business, or that you are somehow incapable of managing your own finances. That being said, it is shocking how much money has been spent before having a solid prototype. There seems to have been a lot of putting the cart before the horse. For example, I never understood the rush to release a logo'd keyboard. You did write that it should only take an hour or two. Seems like a cheap investment if you care about X8 sceptics taking you seriously. Here we get to the root of the problem. Except for Frank, your team does not have the digital design experience to execute this project, at least not in a timely fashion. There is no shame in this; nobody emerges from the womb an electronics expert. Without experience the only path is to learn while doing. This will always take longer, and starting down dead-end paths is inevitable. But there has been this constant drum-beat to lock down the design and build something, NOW! I'm sorry, but speed, quality, and learning on the run are incompatible. Your choices are build junk, go slow, or seek experienced guidance. From the outside looking in, the X16p appears far from production ready. The expansion bus, arguably its main feature, is just not good. You have already told of other non-working areas that need firmware updates. Speaking of which, how did you end up in a place where only one person in the world can do a firmware update? That's just not OK for a project that wants to be serious. After 35 years of designing digital systems I think I am a decent judge of projects and talent. The project was stalled for ~6 months by about the simplest possible design bug. How many marginal bugs are waiting for quantity production to show up? You are in worse shape than you think. So what if it does? I'll make this brief, unlike some commenters here. I do engineering, not marketing. There have been many ridiculous comments here, kneecap X8 to prevent competition, don't fragment the ecosystem, yadda yadda yadda. None of this crap matters. Let's be real, this is not the next dominant computing platform. This project is a toy targeted at a niche audience. That's not meant to be pejorative. I love toys. Watching the development from afar has been enjoyable. You did this for fun and education, not to put a roof over your head and feed your family. How did you end up with 1000 case minimum orders and 50% down on gawd knows how many keyboards? X8 sounds like a fun project that meets most of the goals you laid out in the first half of video #1. If you like it, release it. You don't owe the discussion forum armchair quarterbacks anything. In closing, I will make one final pitch against the Cloanto deal. It's pointless; the only good reason to license firmware is for backwards compatibility. This was a bad decision, one of the few cases where your instincts let you down. If you don't own your firmware you don't really control your project. Is a small convenience worth having this millstone around your neck forever? I hope you reconsider. Best regards and I sincerely hope for your project's success.
  13. I missed the joke until you explained it, but the original title was obviously tongue in cheek. "Some people are so touchy".
  14. Stop trying to make LX8 happen. It's not going to happen.
  15. Why on earth would you assume Frank would need help for this? That's borderline insulting. Besides, I seriously doubt there is sufficient unused space in the X8 FPGA for a YM2151 core. That makes no sense at all.
  16. Commander X8 is what this project should have been from the start. Wasn't the whole point supposed to be retro bare-metal programming on a reliable, relatively inexpensive platform? Write off X16 as a bad idea and release the X8. I never understood why anyone cares what package the transistors live in, surface mount vs. through-hole, etc. It's the architecture that matters, not the appearance. An FPGA 6502 core is no different than a discrete 6502. Heck, all of WDC's new work is cores in FPGAs. The biggest problem isn't manufacturing, it's licensing. If you don't own your kernel (sic) you don't own your product.
  17. It's a nice feature but I wonder if it is worth the two expansion slot pins. Did you consider the PC motherboard way, a pin header and CD-audio cable? Same here. I can't muster even a little nostalgia for it.
  18. Read the datasheet before answering. Good idea... yeah, it's probably not the probe. Maybe a bad ground?
  19. This is application and expansion card dependent, so I don't see a need to be too prescriptive. The main requirement is that every DMA controller needs a "DMA enable" register whose reset state is disabled. DMA controllers may take the bus only when enabled, and software may enable only one DMA controller at a time (a minimal sketch of this convention appears after this list). The application can decide on enable scheduling in multi-controller situations. The main point expansion card designers need to know is that multi-controller arbitration is software controlled, not hardware controlled. I still say using SYNC is better than using /ML. It costs the same and avoids adding restrictions like "don't access Vera auto-increment registers if a DMA controller is enabled". That's a nice improvement over proto#2.
  20. Here is my speculation: It's the low impedance of the scope probe. Try switching to 10x mode.
  21. I think you always need to halt the CPU with /RDY and you always need to tri-state the buses with /BE. Can you give an example when both are not required? This attempt at self-arbitration won't avoid bus contention when two DMA controllers want access on the same cycle. Without real hardware arbitration you are left with enabling one DMA controller at a time through software. There is nothing wrong with software arbitration, but it renders this /ML business pointless. I think I mentioned in another thread that it looks unsafe for DMA controllers to interrupt writes to auto-increment addresses. An easy way to avoid this problem is to take the bus only during opcode fetch (by monitoring SYNC). As a bonus, this inherently avoids breaking atomic operations, so /ML is no longer needed. Also note that because /RDY is directly driven by the DMA controller, it is impossible for DMA controllers to address anything that uses /RDY to add bus wait states. It is nominally 62.5ns, but what is the duty cycle spec on your crystal oscillator? +/- 5% is pretty typical unless you pay extra for better (see the worked numbers after this list). Or did you switch to a 16MHz oscillator and divide it by 2 to square things up?
  22. Yup. That matches my guess exactly. I did not see anyone from the design team confirm this but I can't think of any other parts that match. The 16Kx16 SPRAM has just one address port so I would call it single-port and leave it at that. But that is picking nits. I think we both agree that it is definitely not "truly dual-ported". Hence the request for citations...
  23. Right. I omitted that intentionally, which makes it all the dumber. I was thinking applications would schedule writes on their own. But the schedule would be for blocks of writes, not individual ones. Thanks for the correction. Anyway, I suppose it is possible reads are working on the bench but that is almost worse. It is much better for things to be broken-broken, not sometimes-broken or sometime-in-the-future-broken. It would suck to have a batch of slow parts causing sound problems halfway through a production run. It's even worse if the slow parts break a bunch of DIY kits.
  24. Those are logical ports, and maybe that is what Bruce meant by "truly dual-ported". I inferred he was describing the internal construction of the FPGA RAM blocks.
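
A minimal sketch of the look-up table idea from post 1 above. This is not code from any Lattice tool flow, just an illustration of how a 4-input LUT cell works: the four inputs index a 16-entry truth table, so any single-output function of four inputs fits in one cell. The function name and the example configuration word are made up for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* A 4-input LUT is just a 16-entry truth table: the four inputs form an
 * index, and the configuration word supplies the output bit for each of
 * the 16 possible input combinations. */
static int lut4(uint16_t config, int a, int b, int c, int d)
{
    int index = (d << 3) | (c << 2) | (b << 1) | a;   /* 0..15 */
    return (config >> index) & 1;
}

int main(void)
{
    /* Example configuration: output 1 only when all four inputs are 1,
     * i.e. a 4-input AND gate (only bit 15 of the table is set). */
    uint16_t and4 = 0x8000;

    printf("AND(1,1,1,1) = %d\n", lut4(and4, 1, 1, 1, 1));  /* 1 */
    printf("AND(1,0,1,1) = %d\n", lut4(and4, 1, 0, 1, 1));  /* 0 */
    return 0;
}
```

Any other 4-input function is just a different 16-bit configuration word; that is the sense in which the cell can implement an arbitrary function of its inputs.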
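A minimal sketch of the software arbitration convention from post 19 above, modeled in plain C rather than on real X16 hardware: each DMA controller has an enable register that resets to disabled, and software keeps at most one controller enabled at any time. Every name here (dma_ctrl, dma_grant, the card names) is hypothetical and only illustrates the convention.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Model of the convention: each DMA controller has an enable register
 * whose reset state is 0 (disabled), and software keeps at most one
 * controller enabled at a time. */
typedef struct {
    const char *name;
    uint8_t     enable;   /* reset state is 0 = disabled */
} dma_ctrl;

/* Enable one controller; the caller must have disabled the previous owner. */
static void dma_grant(dma_ctrl *ctrls, int count, int which)
{
    for (int i = 0; i < count; i++)          /* enforce the one-owner rule */
        assert(i == which || ctrls[i].enable == 0);
    ctrls[which].enable = 1;
    printf("bus granted to %s\n", ctrls[which].name);
}

int main(void)
{
    dma_ctrl cards[] = { { "card A", 0 }, { "card B", 0 } };

    dma_grant(cards, 2, 0);      /* card A owns the bus              */
    cards[0].enable = 0;         /* disable A before enabling B      */
    dma_grant(cards, 2, 1);      /* card B owns the bus              */
    return 0;
}
```

The point of the convention is exactly what the loop enforces: arbitration lives in software policy, so the expansion hardware never needs a bus arbiter.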
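The 62.5 ns figure in post 21 above appears to be the half-period of an 8 MHz clock (125 ns period), which the 16 MHz divide-by-2 remark suggests. Assuming that, here are the worked numbers for a typical +/-5% duty cycle spec (45% to 55% high time); the values are simple arithmetic, not a vendor specification.

```c
#include <stdio.h>

int main(void)
{
    const double freq_hz   = 8.0e6;            /* assumed 8 MHz clock      */
    const double period_ns = 1.0e9 / freq_hz;  /* 125 ns period            */

    /* Nominal 50% duty cycle vs. a typical +/-5% oscillator spec. */
    double high_nom = period_ns * 0.50;        /* 62.5 ns                  */
    double high_min = period_ns * 0.45;        /* 56.25 ns worst-case low  */
    double high_max = period_ns * 0.55;        /* 68.75 ns worst-case high */

    printf("period      = %.2f ns\n", period_ns);
    printf("high (nom)  = %.2f ns\n", high_nom);
    printf("high (45%%)  = %.2f ns\n", high_min);
    printf("high (55%%)  = %.2f ns\n", high_max);
    return 0;
}
```

So the nominal 62.5 ns high phase can legally shrink to roughly 56 ns with an ordinary oscillator, which is why the duty cycle spec matters; dividing a 16 MHz oscillator by 2 with a flip-flop restores a clean 50/50 clock regardless of the oscillator's own duty cycle.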