Posts posted by picosecond

  1. YM2151 fans may enjoy Ken Shirriff's latest blog post, where he describes internal details of its beefy ancestor, the YM21280.  Some may recognize Ken from his frequent appearances on CuriousMarc's YouTube channel.

    The YM21280 is a marvel of efficiency, employing dynamic shift registers, logarithms/exponentials and other methods to save transistors.  One cannot help but admire the craftsmanship.
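
    Ken's post explains how the chip multiplies without a multiplier: values are kept as base-2 logarithms, added, then converted back through an exponential table.  Here is a minimal sketch of the trick in Python (table sizes and names are my own invention for illustration, not the chip's actual ROM contents):

```python
import math

# Sketch of the log/exp trick: multiply positive values using only
# adders and small lookup tables.  8 fractional bits of log precision
# is an assumption for illustration, not the YM21280's real width.
EXP_TABLE = [round(2 ** (i / 256) * 1024) for i in range(256)]

def log2_fixed(x):
    """Fixed-point log2, as a lookup table would provide on-chip."""
    return round(math.log2(x) * 256)

def mul_via_logs(a, b):
    """a * b computed as 2**(log2 a + log2 b); no multiplier needed."""
    s = log2_fixed(a) + log2_fixed(b)       # adding logs multiplies
    int_part, frac = divmod(s, 256)
    return EXP_TABLE[frac] * 2 ** int_part // 1024

print(mul_via_logs(37, 91))   # within ~0.1% of 37 * 91 = 3367
```

    The same idea lets the real chip apply envelope attenuation to its log-sine table with plain adders.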

  2. On 10/13/2021 at 2:43 PM, Ju+Te said:

    If I would treat VERA as a black box, how is it accessed from outside (the 6502 CPU)? Does it have a couple of address input and 8 bit data in/output pins that are connected just like RAM or RAM to the bus of the CPU?

    Pretty much.  I don't remember seeing the interface details published, but it should be similar to a 65C22 VIA.

    On 10/13/2021 at 2:43 PM, Ju+Te said:

    How this internally works?

    It's a bespoke digital design.  The logic building blocks inside the ICE40UP5K are pretty simple, mostly D flip-flops in various flavors and 4-input look-up table cells, each of which can implement any logic function of its four inputs.  The full cell library is specified here: https://www.latticesemi.com/view_document?document_id=52206
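
    To make that concrete, here is a toy model of the two cell types (my own illustrative classes, not Lattice's netlist format):

```python
# A 4-input LUT is just a 16-entry truth table selected by its inputs,
# so its 16 configuration bits can encode ANY function of 4 inputs.
class LUT4:
    def __init__(self, truth_table):
        assert len(truth_table) == 16      # one output bit per input combo
        self.tt = truth_table

    def eval(self, a, b, c, d):
        return self.tt[a | (b << 1) | (c << 2) | (d << 3)]

# A D flip-flop just captures its input on the (simulated) clock edge.
class DFF:
    def __init__(self):
        self.q = 0

    def clock(self, d):
        self.q = d

# Configure one LUT as a 2-input XOR (inputs c and d unused):
xor2 = LUT4([a ^ b for d in (0, 1) for c in (0, 1)
                   for b in (0, 1) for a in (0, 1)])
print(xor2.eval(1, 0, 0, 0), xor2.eval(1, 1, 0, 0))   # 1 0
```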

    On 10/13/2021 at 2:43 PM, Ju+Te said:

    Do I understand it from the datasheet correctly, that level shifters are needed to communicate with a 5V bus because it operates at 1.2V?

    Almost.  1.2V is the core voltage.  There is a second power rail for the IO, 3.3V for Vera.  The FPGA has internal level shifters between the core and IO rails.  External level shifters are needed to interface the 3.3V Vera IO to the 5V X16 logic.

  3. On 10/11/2021 at 1:33 PM, Ju+Te said:

    [FPGAs] form some kind of PCB of some generic (TTL) parts

    Here is an over-simplified but still useful way to visualize FPGAs.

    Imagine a huge Ben Eater style breadboard prepopulated with thousands of simple TTL gates and flip-flops, but no wires.  By itself this logic does nothing, but by adding the right wires one could implement many possible useful circuits.

    Programming an FPGA is analogous to plugging wires into this breadboard.
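
    The analogy can even be put in code.  In this toy model (all names invented), the "breadboard" is a fixed pool of NAND gates and the "bitstream" is nothing more than a wiring list saying which outputs feed which inputs:

```python
def nand(a, b):
    # The prepopulated gates never change...
    return 1 - (a & b)

# ...only the wiring list does.  This particular wiring builds XOR out
# of four NANDs; swap the list and the same gates compute something
# else, just as reprogramming an FPGA only changes routing and
# configuration, never the silicon.
WIRES = {
    "n1":  ("a", "b"),
    "n2":  ("a", "n1"),
    "n3":  ("b", "n1"),
    "out": ("n2", "n3"),
}

def run_board(a, b, wires):
    signals = {"a": a, "b": b}
    for name, (x, y) in wires.items():     # evaluate in wiring order
        signals[name] = nand(signals[x], signals[y])
    return signals["out"]

for a in (0, 1):
    for b in (0, 1):
        print(a, b, run_board(a, b, WIRES))   # XOR truth table
```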

  4. Hey neighbor, welcome.

    1 hour ago, ECWhitney said:

    I’ve thought about writing the 6502 processor in Verilog as an FPGA exercise, I just never seemed to find the time to do it… 

    Ditto.  It's never going to happen while I am still working, but it seems like a fun post-retirement project.  That's not too many years out at this point.

  5. 17 hours ago, Lasagna said:

    To me the X8 feels like a design constrained by the skills of the FPGA designer - someone knows the X16 FPGA (I forget is it Xilinx? Lattice?) and that specific FPGA toolchain really well and is shoehorning the design into that FPGA when we could have all the things we want, memory, real ports, and expansion, going with a ZX-UNO forked solution.

    Consider the possibility that you know much less than the FPGA designer...

    Of course it feels like the X8 design is shoehorned into the FPGA.  It was.  I don't know the primary motivation.  It could be cost, desire to reuse hardware or simply the fun challenge of seeing how much function could be crammed into a cheap FPGA.

    17 hours ago, Lasagna said:

    And the cost would likely end up about the same.

    Unlikely.  The ZX-UNO RAM by itself costs about the same as the CX8 FPGA.  Everything else is kinda-sorta similar so the cost delta is dominated by the ZX-UNO FPGA.  That's 20 bucks or so in hobbyist quantities.

    It's not surprising that the higher cost external RAM + bigger FPGA design point is higher function than the embedded RAM/smaller FPGA design point.  Spend more, get more.  Both designs have their merits but neither is objectively better.

  6. 15 minutes ago, Carl Gundel said:

    Off the shelf parts (not programmable chips) have clearly defined functional boundaries and electrical interfaces.  This means each part is responsible for something.  The interaction between the chips is meaningful, and the circuit can be modified, customized, repaired, etc.  This is mostly sacrificed when all the functionality is simulated in a programmable chip.

    It feels like you are conflating microcontrollers and FPGAs.

    Logic implemented in an FPGA has clearly defined functional boundaries and electrical interfaces.  Each part is responsible for something.  Interactions between functional units are meaningful.  Given the necessary design collateral, the circuit can be modified or customized.

    Some compiled C++ program which simulates a system is completely different of course.  But this has nothing to do with an FPGA implementation like CX8.

    Sure, the 6502 core in CX8 won't have exactly the same microarch as one you bought from WDC.  The building blocks are different.  But they are not all that different.  The FPGA implementation is "just" a bunch of gates, flops and RAMs all wired together.

    32 minutes ago, Carl Gundel said:

    The Raspberry Pi is as you say highly integrated, and its operating system is Linux which is not ideal for personal mastery because of its size and complexity.  This can often be too much for the beginner or casual hobbyist.  The Raspberry Pi is a great product, don't get me wrong, but conceptually totally different from an X16.

    I don't know what prompted this comment.  I never advocated for the Raspberry Pi here and specifically said it is a completely different architecture.

  7. 6 minutes ago, Scott Robison said:

    I agree for the most part. The one place I think the big board with lots of chips wins is when trying to explain or teach it to people who have no idea. A concrete chip that can be described verbally is going to win out over a datasheet from a concrete vs abstraction perspective.

    I get your point, but I would call what you are describing superficial understanding, not meaningful understanding.

    The idea I am ranting against is that discrete implementations are necessary or superior for deep understanding.  That does not match my experience.  As an artistic choice, great.  As a pedagogical choice, not so much.

  8. 51 minutes ago, Carl Gundel said:

    A computer made with off the shelf parts as much as possible so that the end user can completely understand the machine

    I have never believed these are connected.  I think it is unfortunate that 8BG has been promoting this notion.

    The only thing that makes off the shelf parts understandable is their documentation.  Without docs how could anyone design with them?  Even good docs stop at some level of abstraction.  For example, the YM2151 docs describe nothing about its microarchitecture, which is needed to really understand how it works.  I would argue that properly documented highly integrated designs can be more understandable than their off the shelf cousins.  Phase 1 X16 and Phase 3 X16 are of equal complexity and equally understandable.  The packaging differences are superficial.

    54 minutes ago, Carl Gundel said:

    In this way it is in my mind superior to the Raspberry Pi, and for that reason the phase 3 X16 and the X8 don't hold so strong an appeal for me.

    Raspberry Pi has no architectural commonalities with the phase 3 X16/X8.  The only thing they have in common, a high level of integration, is superficial.

     

    If people prefer the cool appearance of big PCBs with lots of chips, I have no argument with that.  I think they look cool too.  I just reject this idea that knowing this chip is the CPU and this chip does graphics imparts any meaningful knowledge of the computer's operation.

  9. 2 minutes ago, TomXP411 said:

    What’s interesting is there does appear to be some sort of serial interface on the CX8. Apparently, the ESP32 is connected to the FPGA and allows code to be loaded through WiFi.

    The ICE40UP5K has hardware SPI and I2C units, two of each.  I imagine CX8 is using one of these.  These units do not have dedicated IO, so using them isn't free.

  10. On 8/21/2021 at 1:31 PM, The 8-Bit Guy said:

    So, I'm just going to answer a few more concerns about the X8... I made that clear at the beginning. I wanted to release it 6 months ago.

    @The 8-Bit Guy, this was a good instinct.

    On 8/21/2021 at 1:31 PM, The 8-Bit Guy said:

    Several people seemed concerned about how much money I was going to make from this project and how the X8 might reduce that...  This project was NEVER about money for me...  My main goal was to have my dream computer, and that other people would have it too. 

    It is amazing anyone thinks this is their business, or that you are somehow incapable of managing your own finances.  That being said, it is shocking how much money has been spent before having a solid prototype.  There seems to have been a lot of putting the cart before the horse.  For example, I never understood the rush to release a logo'd keyboard.

    On 8/21/2021 at 1:31 PM, The 8-Bit Guy said:

    I suppose I could find some time next week to port Petscii Robots to the X8 for demonstration

    You did write that it should only take an hour or two.  Seems like a cheap investment if you care about X8 sceptics taking you seriously.

    On 8/21/2021 at 1:31 PM, The 8-Bit Guy said:

    The X16 has taken much longer to bring to market than I thought.  There were many times where development was halted for 6 months or more because of unsolvable bugs.  And even though we are close to being able to release a kit of the X16..

    Here we get to the root of the problem.  Except for Frank, your team does not have the digital design experience to execute this project, at least not in a timely fashion.  There is no shame in this; nobody emerges from the womb an electronics expert.  Without experience the only path is to learn while doing.  This will always take longer, and starting down dead-end paths is inevitable.  But there has been this constant drumbeat to lock down the design and build something, NOW!  I'm sorry, but speed, quality, and learning on the run are incompatible.  Your choices are build junk, go slow, or seek experienced guidance.

    From the outside looking in, the X16p appears nowhere close to production-ready.  The expansion bus, arguably its main feature, is just not good.  You have already told of other non-working areas that need firmware updates.  Speaking of which, how did you end up in a place where only one person in the world can do a firmware update?  That's just not OK for a project that wants to be taken seriously.  After 35 years of designing digital systems I think I am a decent judge of projects and talent.  The project was stalled for ~6 months by about the simplest possible design bug.  How many marginal bugs are waiting for quantity production to expose?  You are in worse shape than you think.

    On 8/21/2021 at 1:31 PM, The 8-Bit Guy said:

    I do not believe X8 sales will cannibalize X16p sales.

    So what if it does?

    I'll make this brief, unlike some commenters here.  I do engineering, not marketing.

    There have been many ridiculous comments here, kneecap X8 to prevent competition, don't fragment the ecosystem, yadda yadda yadda.  None of this crap matters.

    Let's be real, this is not the next dominant computing platform. This project is a toy targeted at a niche audience.  That's not meant to be pejorative.   I love toys. Watching the development from afar has been enjoyable.  You did this for fun and education, not to put a roof over your head and feed your family.  How did you end up with 1000 case minimum orders and 50% down on gawd knows how many keyboards? 

    X8 sounds like a fun project that meets most of the goals you laid out in the first half of video #1.  If you like it, release it.  You don't owe the discussion forum armchair quarterbacks anything.

     

    In closing I will make one final pitch against the Cloanto deal.  It's pointless; the only good reason to license firmware is backwards compatibility.  This was a bad decision, one of the few cases where your instincts let you down.

    If you don't own your firmware you don't really control your project.  Is a small convenience worth having this millstone around your neck forever?  I hope you reconsider.

    Best regards and I sincerely hope for your project's success.

  11. Commander X8 is what this project should have been from the start.  Wasn't the whole point supposed to be retro bare-metal programming on a reliable, relatively inexpensive platform? Write off X16 as a bad idea and release the X8.

    I never understood why anyone cares what package the transistors live in, surface mount vs. through-hole, etc.  It's the architecture that matters, not the appearance.

    An FPGA 6502 core is no different than a discrete 6502.  Heck, all of WDC's new work is done as FPGA cores.

    The biggest problem isn't manufacturing, it's licensing.  If you don't own your kernel (sic) you don't own your product.

  12. 3 hours ago, Kevin Williams said:

    I thought someone might want to add a SID, or maybe put the SAA1099 back on later, etc. It will just allow you to pump audio in from an expansion card.

    It's a nice feature but I wonder if it is worth the two expansion slot pins.  Did you consider the PC motherboard way, a pin header and CD-audio cable?

     

    2 hours ago, Kevin Williams said:

    Michael Steil is not a fan of the 816

    Same here.  I can't muster even a little nostalgia for it.

  13. 14 hours ago, Wavicle said:

    The datasheet says the output current of the part is 16mA

    Read the datasheet before answering.  Good idea...

    Yeah, it's probably not the probe.  Maybe a bad ground?

  14. 2 hours ago, Lorin Millsap said:

    If you think this is a good explanation and approach I can add it to my original post.

    This is application and expansion card dependent, so I don't see a need to be too prescriptive.  The main requirement is that every DMA controller needs a "DMA enable" register whose reset state is disabled.  DMA controllers may take the bus only when enabled and software may enable only one DMA controller at a time.  The application can decide on enable scheduling in multi-controller situations.  The main point expansion card designers need to know is that multi-controller arbitration is software, not hardware controlled.
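
    A minimal sketch of that rule (class and function names are mine, purely for illustration): every controller resets disabled, and the enabling routine guarantees at most one is enabled at any moment.

```python
class DMAController:
    def __init__(self):
        self.enabled = False          # required reset state: disabled

    def may_take_bus(self):
        return self.enabled           # may take the bus only when enabled

def enable_only(target, controllers):
    """Software arbitration: clear every enable before setting one."""
    for c in controllers:
        c.enabled = False
    target.enabled = True
    # The invariant the whole scheme relies on:
    assert sum(c.may_take_bus() for c in controllers) <= 1
```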

    I still say using SYNC is better than using /ML.  It costs the same and avoids adding restrictions like "don't access Vera auto-increment registers if a DMA controller is enabled".

    2 hours ago, Lorin Millsap said:

    the CPU clock is cleaned up so you always get nice square waves

    That's a nice improvement over proto#2.

  15. 9 hours ago, jbaum81 said:

    the wave goes from ~2.5v to 5.0v

     

    2 hours ago, Wavicle said:

    I more strongly think you are seeing the result of stray capacitance on the scope

    Here is my speculation: It's the low impedance of the scope probe.  Try switching to 10x mode.
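
    Some back-of-envelope numbers behind that guess (typical probe specs, not measurements of this setup): a 1x probe can present on the order of 100 pF to the node, while a 10x probe is closer to 15 pF, and that capacitance forms a low-pass filter with the driver's source impedance.

```python
import math

def rc_corner_hz(r_ohms, c_farads):
    """-3 dB corner of the RC low-pass formed by source R and probe C."""
    return 1 / (2 * math.pi * r_ohms * c_farads)

R_SOURCE = 3_300   # assumed driver/pull-up impedance, ohms

print(rc_corner_hz(R_SOURCE, 100e-12))   # 1x probe:  ~480 kHz
print(rc_corner_hz(R_SOURCE, 15e-12))    # 10x probe: ~3.2 MHz
```

    With the assumed 3.3k source, the 1x corner sits far below an 8 MHz clock, which would round the edges badly; 10x mode helps a lot.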

  16. 5 hours ago, Lorin Millsap said:

    For most DMA operations you will assert both lines, but there may be some special cases where you don’t

    I think you always need to halt the CPU with /RDY and you always need to tri-state the busses with /BE.  Can you give an example when both are not required?

    5 hours ago, Lorin Millsap said:

    it needs to activate the ML signal to let other DMA devices know that a DMA is in progress

    This attempt at self-arbitration won't avoid bus contention when two DMA controllers want access on the same cycle.  Without real hardware arbitration you are left with enabling one DMA controller at a time through software.  There is nothing wrong with software arbitration but it renders this /ML business pointless.

    I think I mentioned in another thread that it looks unsafe for DMA controllers to interrupt writes to auto-increment addresses.  An easy way to avoid this problem is to take the bus only during opcode fetch (by monitoring SYNC).  As a bonus, this inherently avoids breaking atomic operations so /ML is no longer needed.
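
    Here is a rough sketch of the SYNC-gated approach (the CPU model is a stub I made up; real hardware would watch the SYNC pin): the DMA controller waits for an opcode fetch before asserting /RDY and /BE, so the CPU is never frozen mid-instruction and auto-increment or read-modify-write sequences can't be split.

```python
class FakeCPU:
    """Stand-in for a 65C02: SYNC pulses high on opcode fetch cycles."""
    def __init__(self):
        self.sync = False
        self.rdy = True                 # high = CPU running
        self.be = True                  # high = CPU drives the buses
        self.cycle = 0

    def tick(self):
        self.cycle += 1
        self.sync = (self.cycle % 3 == 0)   # pretend 3-cycle instructions

def dma_burst(cpu, writes, mem):
    """Take the bus only at an instruction boundary (SYNC high)."""
    while not cpu.sync:                 # wait for an opcode fetch
        cpu.tick()
    cpu.rdy = False                     # halt the CPU
    cpu.be = False                      # tri-state its bus drivers
    for addr, data in writes:
        mem[addr] = data                # DMA controller owns the bus
    cpu.be = True
    cpu.rdy = True
```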

    Also note that because /RDY is directly driven by the DMA controller, it is impossible for DMA controllers to address anything that uses /RDY to add bus wait states.

    5 hours ago, Lorin Millsap said:

    The PHI2 high time is just over 60ns

    It is nominally 62.5ns, but what is the duty cycle spec on your crystal oscillator?  +/- 5% is pretty typical unless you pay extra for better.  Or did you switch to a 16MHz oscillator and divide it by 2 to square things up?
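
    The arithmetic, for the record:

```python
F_HZ = 8_000_000
period_ns = 1e9 / F_HZ            # 125 ns period at 8 MHz

print(period_ns / 2)              # 50% duty: 62.5 ns high time
print(period_ns * 0.45)           # 45% duty (spec minimum): ~56 ns

# Dividing a 16 MHz oscillator by 2 with a flip-flop toggles the output
# once per input period, so the result is exactly 50% duty regardless
# of how asymmetric the 16 MHz source is.
```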

  17. 59 minutes ago, Wavicle said:

    Based on the logo on top of the FPGA (Lattice) and the QFN-48 pinout, I'm guessing that the FPGA is the ICE40UP5K-SG48I.

    Yup.  That matches my guess exactly.  I did not see anyone from the design team confirm this but I can't think of any other parts that match.

    The 16Kx16 SPRAM has just one address port so I would call it single-port and leave it at that.  But that is picking nits.  I think we both agree that it is definitely not "truly dual-ported".  Hence the request for citations...

  18. 53 minutes ago, ZeroByte said:

    The YM read also has one more important functionality than IRQ checking - the busy flag.

    Right.  I omitted that intentionally, which makes it all the dumber.  I was thinking applications would schedule writes on their own, but the schedule would be for blocks of writes, not individual ones.  Thanks for the correction.
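
    For reference, the usual write loop looks like this (a sketch only; the FakeYM stub is mine, but the published YM2151 interface does expose busy as bit 7 of the status read, with A0 selecting the address vs. data port):

```python
BUSY = 0x80                       # bit 7 of the status register

def write_reg(ym, reg, value):
    """Poll the busy flag before each register write."""
    while ym.read_status() & BUSY:
        pass                      # chip still digesting the last write
    ym.write(0, reg)              # A0 = 0: address port
    ym.write(1, value)            # A0 = 1: data port

class FakeYM:
    """Test stub that reports busy for the first two status reads."""
    def __init__(self):
        self.polls_left = 2
        self.log = []

    def read_status(self):
        self.polls_left -= 1
        return BUSY if self.polls_left >= 0 else 0

    def write(self, a0, val):
        self.log.append((a0, val))
```

    Without a working read path there is no busy flag to poll, and the fallback is worst-case delay loops between writes.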

    Anyway, I suppose it is possible reads are working on the bench but that is almost worse.  It is much better for things to be broken-broken, not sometimes-broken or sometime-in-the-future-broken.  It would suck to have a batch of slow parts causing sound problems halfway through a production run.  It's even worse if the slow parts break a bunch of DIY kits.
