Posts posted by Stefan

  1. Yes, I guess that is possible.

    But you cannot assume that every word boundary is marked by a blank space. In my help text, for instance, words may be preceded by line breaks instead of blank spaces. The decompression routine could, of course, take that into account, and output a preceding blank space only if the previous character was not a line break. This might somewhat reduce the gain of replacing one-letter words by a code.

  2. I also looked briefly at the method described by @Ed Minchau.

    I would assume that in order to gain compression a word that is replaced by a code needs to

    • be longer than one character
    • occur more than once

    Analyzing my help text, I found:

    • 25 words occurring 2 times
    • 5 words occurring 3 times
    • 5 words occurring 4 times
    • 2 words occurring 5 times
    • 1 word occurring 6 times
    • 1 word occurring 7 times
    • 1 word occurring 8 times
    • 1 word occurring 9 times

    A total of 41 words.

    I might try to combine lzsa compression with Ed's method to see where I end up.
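
    To sanity check that intuition, here is a rough cost model in Python. The `bytes_saved` helper and its cost model are my simplifying assumptions (one-byte codes, one dictionary entry per word, separators ignored), not the actual tool:

```python
# Rough estimate of bytes saved by replacing a repeated word with a
# one-byte code. A word of length L occurring N times costs L*N bytes
# as plain text; encoded, it costs N code bytes plus one L-byte
# dictionary entry, so the saving is (L - 1) * N - L.

def bytes_saved(length, count):
    return (length - 1) * count - length

print(bytes_saved(3, 9))   # a 3-letter word occurring 9 times: 15 bytes saved
print(bytes_saved(1, 10))  # one-letter words never help: -1
print(bytes_saved(4, 1))   # words occurring only once never help: -1
```

This matches the two conditions above: a word must be longer than one character and must occur more than once before the substitution pays off.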

  3. On 2/19/2022 at 11:57 AM, desertfish said:

    Have you tried using lzsa and the kernal's memory_decompress() routine?  If it works it would at least save you the need to include your own decompression routine

    Hi @desertfish!

    I didn't remember that there was such a function in the Kernal.

    I did a quick test just now.

    Compiling the lzsa utility from the GitHub master worked without any issues on macOS.

    I compressed the help text with the following command, as advised in the X16 PRG:

    • lzsa -r -f2 <original_file> <compressed_file>

    The output is a binary file that cannot be handled by a normal text editor. I imported the compressed file with the .incbin directive supported by ca65.

    The original file was 1,817 bytes, and the compressed file became 1,149 bytes (63 % of the original). Surprisingly, my own Huffman implementation did better, resulting in a compressed size of 1,093 bytes (60 % of the original).

    But as you said, I need not store my own decompression tables or routines, and the Kernal routine, which worked perfectly by the way, will in the end be more efficient.

    My Huffman code decompresses directly to screen VRAM. As far as I understand, this is not possible with the Kernal lzsa decompress function. So you first need to decompress to RAM, and then copy from RAM to VRAM to actually display the text.

  4. The help screen in X16 Edit is embedded into the executable as ASCII text, and takes quite a lot of space.

    I've been looking for ways to make the executable smaller. One interesting option is to compress the text with Huffman coding:


    After a lot of fiddling, I finally got it to work.

    The compression is done by a python script. Decompression is done in 65C02 assembly within the X16 Edit executable.

    But how well does it work? Some metrics:

    • Original help screen text size: 1,818 bytes
    • Compressed text size: 1,093 bytes (60 % of the original)
    • Lookup tables needed for decompression: 194 bytes 

    And the 65C02 decompression code also takes some extra space. In my first working implementation, the executable became 422 bytes smaller, a saving of about 23 %.

    Another issue is speed. The help screen is not displayed instantly, but I think it still is quite fast. I uploaded a video so you may see the decompression code in action for yourself.
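
    The Python side can be sketched roughly like this. This is a minimal Huffman encoder for illustration only, not my actual script (which also has to emit the lookup tables in a form the 65C02 decompressor can use):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table {char: bitstring} for the given text."""
    freq = Counter(text)
    # Heap entries are (frequency, tiebreaker, tree); a tree is either
    # a leaf character or a (left, right) pair of subtrees.
    heap = [(n, i, ch) for i, (ch, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate input with a single symbol
        return {heap[0][2]: "0"}
    i = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (fa + fb, i, (a, b)))
        i += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

text = "some help text with repeated letters"
codes = huffman_codes(text)
bits = sum(len(codes[ch]) for ch in text)
print(len(text) * 8, "bits raw vs", bits, "bits Huffman coded")
```

Frequent characters get short bit strings, rare ones get long ones, which is where the 40 % reduction on the help text comes from.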

  5. User-configurable key bindings are now in version 0.4.3.

    I also made a simple tool to create the config file; it is in the download section as well (X16EDIT-KEYBINDINGS.PRG).

    And finally, I have now tried to support both Ctrl+key and left Alt+key combinations in the editor. Ctrl+key worked fine on macOS, but there are a lot of problems, at least in the emulator on Windows and Linux, as the emulator uses Ctrl+key combinations for its own purposes. Let me know how this works out, as I do not have the X16 Emulator set up on Windows or Linux myself.

  6. I too like the cc65 toolchain for assembly programming. But it's a bit different compared to other assemblers.

    You shouldn't use the .ORG directive (at all). It doesn't control the load address of code.

    If you use the cl65 utility to assemble, specify the default config file for X16 assembly programming with the -C option, for instance:

    cl65 -t cx16 -C cx16-asm.cfg -o test.prg test.asm

    This will place the code at $0801, without a BASIC stub.

    If you want the code to end up at address 8192, you may do this:

    cl65 -t cx16 -C cx16-asm.cfg --start-addr 8192 -o test.prg test.asm

    And if you want a BASIC stub to start your code from:

    cl65 -t cx16 -u __EXEHDR__ -C cx16-asm.cfg -o test.prg test.asm

    To get finer control over the assembly and link process, you may copy the default config file to your project and edit it to your needs. The cc65 manuals have very detailed information on the config file settings.

  7. It might be hard to make everyone happy.

    Maybe I should let the keyboard shortcuts be user configurable.

    The current source code is not very far off. The shortcuts are just a list of PETSCII/ASCII values stored within the executable.

    This could be changed so that the editor loads the shortcut list from a file on startup.

  8. I've done some minimal testing, and think there is a reasonable solution for the other modifier key "meta".

    That is to use the left Alt key.

    The right Alt key (labeled AltGr on some international keyboards) could continue to be the "Commodore" key used to insert graphical characters.

    The ESC key could be used as a fallback in case the Ctrl or Alt key doesn't work in a particular setup. Pressing ESC once could be a fallback for the Alt key. And pressing ESC twice could be a fallback for the Ctrl key.

    This solution doesn't require any changes to the Kernal, but you need R39 to get sufficient control over the keyboard.

    The question is if it's needed, though. The number of functions in X16 Edit is quite limited. Maybe if I move things around a bit, using only Ctrl will be good enough.

  9. Agreed.

    But don't forget that we are really targeting an OS called X16 Kernal 🙂

    Anyway, the closest equivalent to the Windows key on macOS should be the Command key. This key used to bear the Apple logo back in the day, I think.

    The X16 Emulator does not currently relay either the Windows or Command key. This would happen at line 183 in keyboard.c, but it is commented out:

    //case SDL_SCANCODE_LGUI: //Windows/Command

    Any other ideas for the Meta key?

    Maybe the right Alt or Control key?

  10. Hi,

    There is currently no lack of control keys; it's just a question of replicating Nano more closely.

    According to Nano's user manual, Alt is normally used as the other control key (Meta). It is stated that pressing ESC once is a fallback for the Meta key, should Alt not work on your system. And pressing ESC twice is a fallback for the Ctrl key, if that doesn't work.

    Using the ESC key in this way is not the most convenient. I would try other methods first. In R38 you have less control over the keyboard, which limits the options a lot. In R39 you could consider using one of these as Meta:

    • Ctrl+Shift
    • Ctrl+Alt
    • Windows key (the emulator will not like it though)
  11. Thanks.

    It would be interesting to replicate Nano keyboard shortcuts more closely.

    When I've looked into this question before, I've been put off by Nano's use of two modifier keys, and the fact that the Alt key is already used by the X16 for inserting graphical characters. Nano alternatively lets you single or double tap the ESC key instead, but it feels like a fallback.

    I didn't know that Ctrl+S could be used for saving. Traditionally Nano returned the error message "XOFF ignored, mumble, mumble" when you pressed Ctrl+S, and it still does on my computer running Nano 2.0.6.

    Replace is, at least in Nano 2.0.6, also available by Meta-R. Using subcommands adds a little complexity to the UI, but it is certainly doable. There is already support for context menus in X16 Edit.

    X16 Edit has some commands that are not needed/available in Nano:

    • Ctrl+D to change device number
    • Ctrl+E to change character set
    • Ctrl+I to invoke DOS commands
    • Ctrl+T to change text color, and Ctrl+B to change background color
    • Ctrl+M to show memory usage

    If you would like to make a table of shortcuts, I would be more than happy to look into that.

  12. Since version 0.4.0, published in September 2021, you haven't been able to run X16 Edit in the last stable release of the emulator and Kernal (R38).

    In order to run X16 Edit, you have had to compile the emulator and Kernal from the GitHub master branch, which might become R39. I understand that setting up the build environment and compiling is not for everyone. Therefore I tried to modify the last version of X16 Edit (0.4.2) to make it run in R38. The result was published today as version 0.4.2-R38.

    The changes were fairly simple to do, but I haven't tested it thoroughly.

    One difference is the addresses used for bank switching. But this is just two definitions in the source code, one for RAM bank and one for ROM bank select.

    The other significant difference is keyboard functionality. Since 0.4.0, X16 Edit uses a custom PS/2 scan code handler in order to read modifier key status and some extra keys such as DELETE, END, PgUp, PgDn, and the numerical keypad. This is simply not possible to do in R38. The modifier keys can be read by other means in R38, but there is no way that I know of to support keys ignored by the Kernal in R38.

    A benefit of supporting R38 is that the Try It Now button may now run 0.4.2-R38.

  13. My guess is that the community is not interested in the keyboard solution per se. We just want it to become functional so that we can get on with what really interests us: eventually owning and using an X16.

    I agree that the team should choose a proven design that is easy to implement.

    @Wavicle, do you see any problem supporting both keyboard and mouse with the ATTINY + I2C solution?

  14. As an I2C slave, the ATTINY cannot make too many assumptions about the data-valid period on the I2C bus. I guess that is why I had, in my head, ruled out that the ATTINY could serve both the PS/2 and I2C lines simultaneously.

    But you have tested this in hardware, and I see that it might work as you describe.

    I did some manual clock cycle counting on the Kernal I2C send_bit function to calculate for how long the clock line is held high.

    • The clock transition from low to high happens at i2c.s line 223
    • The clock transition from high to low happens at line 210
    • Between those lines there are about 24 clock cycles = 3 us @ 8 MHz

    I don't know, but is it correct to say that the handlers for both the I2C and the PS/2 must run within that time to guarantee that you don't lose data?
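
    A quick sanity check of the cycle arithmetic above:

```python
# send_bit holds the I2C clock line high for about 24 CPU cycles
# (i2c.s, between the transitions at lines 223 and 210), on an
# 8 MHz 65C02.
CPU_HZ = 8_000_000
cycles_high = 24
window_us = cycles_high / CPU_HZ * 1_000_000
print(window_us)  # 3.0 microseconds
```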

    EDIT: By the way, I see that the ATTINY861 has hardware support for I2C (USI). Did you use this in your test or was the I2C bit banged? I was assuming the latter, but maybe that wasn't right. I would need to read more about USI.

  15. That is really interesting info, @SolidState.

    As to using I2C as transport layer between the ATTINY and the 65C02.

    I tried to measure the time it takes to run the Kernal function i2c_read_byte. Using the clock counter at $9fb8-9fbb I came to about 1,200 clock cycles. Manual counting of the Kernal code gave a similar result, but I didn't count every code path.

    1,200 clock cycles are 150 us @ 8 MHz.

    It's clear that the ATTINY cannot be listening for incoming PS/2 data at the same time it makes an I2C transfer taking that time. The data valid period in the PS/2 protocol is much less than 150 us in my understanding.

    This means that if you are trying to use I2C to transfer scan codes to the processor, you must inhibit the PS/2 line while doing so.

    It feels wrong to do this, but it might work anyway. Even if the time it takes for the keyboard to come alive again after being disabled is 5,000 us, there is room for about 200 scan codes per second.
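
    Putting the numbers above together (the 150 us transfer time is my measurement; the 5,000 us re-enable delay is a deliberately pessimistic guess):

```python
# i2c_read_byte was measured at roughly 1,200 cycles on an 8 MHz 65C02.
CPU_HZ = 8_000_000
read_byte_us = 1_200 / CPU_HZ * 1_000_000
print(read_byte_us)  # 150.0 us per byte transferred

# Add a pessimistic 5,000 us keyboard re-enable delay per transfer:
reenable_us = 5_000
codes_per_second = int(1_000_000 // (read_byte_us + reenable_us))
print(codes_per_second)  # 194, i.e. roughly 200 scan codes per second
```

Even under those pessimistic assumptions, the throughput is far above what typing requires.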

  16. I made two different changes to that code, which were compiled and published in this thread on August 25. You need a real board to test them. I don't know if that was ever done by Kevin.

    The real problem is, as @Wavicle pointed out, that the PS/2 protocol wasn't designed to be used like this. There seems to be no standard on when a PS/2 device must become active again after being disabled. The standard just says that it may not happen earlier than 50 microseconds after the host has released the clock. It could be 50 microseconds, 100 microseconds, or any other duration. The standard doesn't prevent the delay from differing from time to time even on the same device (not very likely, though). And different keyboards could have different delays, and so on.

  17. On 12/31/2021 at 6:26 PM, BruceMcF said:

    That would seem to be the direct solution ... pull the PS2 SCL clock line low while leaving the data line high before starting to send data to the master over I2C, release it when done.

    I still think using I2C will make matters more complicated, as evidenced by the fact that we do not yet have a functional keyboard.

    The Veronica keyboard, which I mentioned above, does not need to disable the PS/2 line during operation. After receiving the 11th bit of a PS/2 frame, the keyboard controller directly puts the received byte onto the shift register, where it is available to the 6522. There is enough time to do this before the next PS/2 start bit arrives. The 6502 may then read the byte from the 6522 with a simple LDA instruction. In other words, the 6502 and 6522 are used as designed. We shouldn't fight that.

    I don't know, but it feels like the PS/2 wasn't designed to be disabled after every scan code. There is, for instance, no standard saying how quickly the PS/2 device should start when enabled again; it's only said that it cannot start before 50 µs have passed.

  18. Nice work, @Wavicle!

    What strategy did you have to handle PS/2-I2C conflicts? I mean what if PS/2 communication started while you were sending data over I2C. Did you just disable the PS/2 line while sending over I2C?

    How did you handle multibyte scan codes? Was there a buffer?

    One benefit of a shift register solution is that it might be possible to shift out bits even when receiving PS/2 data, during the PS/2 clock inactive state. The inactive state is 30-50 us per bit, corresponding to 300-500 processor cycles on the ATTINY @ 10 MHz. Is that enough to transfer one byte? I think there's a good chance you could make it fit if you look at the instruction set table for the ATTINY.
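
    The cycle budget above can be checked quickly (assuming, as stated, an ATTINY clocked at 10 MHz):

```python
# The PS/2 clock's inactive half-period is 30-50 us per bit; how many
# ATTINY cycles does that give us at 10 MHz?
ATTINY_HZ = 10_000_000
for inactive_us in (30, 50):
    cycles = inactive_us * ATTINY_HZ // 1_000_000
    print(inactive_us, "us ->", cycles, "cycles")
# With most AVR instructions taking 1-2 cycles, shifting out 8 bits
# even in the worst-case 300-cycle window seems plausible.
```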

  19. On 12/30/2021 at 6:46 PM, BruceMcF said:

    I think that rather the CX16 would poll the ATTiny for a key on one vertical refresh and poll for a mouse event on the next and then repeat. 1/30th of a second seems to me to be fast enough that the queue should not fill up, so there is no need for an NMI.

    "As yet no solution" could easily mean that the preference is to get the PS/2 straight from the 6522 if possible, and that is waiting until Michael Steil has time to work through suggested fixes. It could also easily mean that the preference is to use the ATTiny, and that is not working yet. That's among the reasons I am not keen on doing any tea leaf reading on comments like that.

    I agree that we should refrain from both tea leaf reading, and Kremlinology, and focus on the request for assistance put out in Kevin's original post in this thread. As he hasn't yet thanked anyone for solving this, it's reasonable to believe that it isn't solved.

    My intuition is that using the I2C protocol to send keyboard and mouse data from the ATTINY over the 65C22 to the 65C02 is asking for unnecessary problems.

    Looking for solutions online, I particularly like at least some aspects of this one: https://blondihacks.com/veronica-keyboard/

    The Veronica keyboard controller is a microprocessor that reads PS/2 data and pushes it to a shift register that is read by a 6522, which in its turn is read by the 6502. The Veronica keyboard uses an interrupt to signal to the 6502 that there is PS/2 data to be read, and there is an interrupt handler that basically just stores the data to a keyboard buffer. The interrupt handler must run immediately and be as small as possible in order not to lose data, especially multibyte scan codes.

    As far as I understand, the 65C22 has a built-in shift register that could be used by the X16 instead of an external shift register. And there is also an unused 65C22 (VIA #2) on the board if that functionality cannot be put into VIA #1.

    I also think that a polling solution would be better than an interrupt firing at any time. This should be possible if the ATTINY buffers data until it's read.

  20. On 12/30/2021 at 2:45 PM, Fabio said:

    Isn't the ATTINY 861 connected to the Via with I2c?

    I see that Kevin said so in his original post. And that the PS/2 should be interrupt driven.

    My understanding of the 65C22 is very limited, but there is no mention in the datasheet of I2C support. As far as I can tell from the Kernal source, I2C is done by bit banging VIA pins. But yes, I suppose you could read and write data from/to the ATTINY using this kind of I2C communication.

    One thing that springs to mind is that the ATTINY then might need to drive three timing-dependent serial interfaces simultaneously (two PS/2 and one I2C). Can it do all that?

    And if an NMI may be generated at any time to read PS/2 data as soon as it's available, will that cause problems for other timing-dependent code running on the 65C02, for instance music?

    I feel it would be necessary to draw up some diagrams to fully understand how this is going to work low level.

  21. Continuing upon my last post here.

    Currently the PS/2 data and clock lines of the keyboard (and mouse?) are connected directly to VIA#1 PA0-1 and PB0-1, while VIA#2 is unused as far as I can tell.

    Thoughts on pin usage:

    • The ATTINY 861 has 14 GPIO pins, and 3 of them are used for power control. That leaves us with 11 pins to handle keyboard and mouse
    • Of these, 4 pins are needed to connect the keyboard and mouse PS/2 lines. Now we have 7 pins left
    • That is clearly not enough pins to transfer one byte at a time from the ATTINY to the 65C22. Maybe you could manage to transfer one nibble at a time. But even nibble transfers require control lines, like chip enable, data transfer direction (read/write), read/write handshake, and keyboard/mouse select.
    • If byte or nibble transfers are not possible, we are stuck with serial transfer. It's very similar to connecting the keyboard directly to the VIA, however, with the benefit of having precise control over how the ATTINY sends the data it has buffered.
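
    The pin budget in the bullets above, spelled out:

```python
# ATTINY861 pin budget in the keyboard/mouse controller role.
gpio_total = 14       # GPIO pins on the ATTINY861
power_control = 3     # used for power control
ps2_lines = 4         # clock + data, for keyboard and for mouse
remaining = gpio_total - power_control - ps2_lines
print(remaining)  # 7 pins left

# A parallel byte transfer needs 8 data pins plus control lines, so it
# cannot fit; a nibble transfer needs 4 data pins plus several control
# lines, which already uses up everything that is left.
```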

    Returning to @Kevin Williams's initial question in this thread: In order to write keyboard and mouse controller software for the ATTINY, there first needs to be a hardware design.

  22. Hi,

    On Facebook there was a post, answered by David, that I read today; it said that there is as yet no solution to the PS/2 issue.

    Unfortunately, I have no clear idea of how the 65C02 -> 65C22 -> ATTINY861 setup would work.

    Some thoughts:

    • I suppose one possibility is that the ATTINY generates an NMI on the 65C02 when there is PS/2 data to be read.
    • A drawback of this design is that the NMI could occur at any time, possibly disturbing other time critical code running on the 65C02.
    • Another design option might be to let the ATTINY buffer PS/2 codes received, and to disable PS/2 communication if the buffer is full.
    • To support both keyboard and mouse, there could be one buffer for each
    • I suppose the ATTINY cannot handle both the keyboard and mouse simultaneously. Maybe there needs to be a priority, for instance so that on receiving PS/2 data from the keyboard the mouse PS/2 line is always disabled.
    • With this setup the Kernal code could try fetching one PS/2 scan code from the keyboard and mouse buffers on each VBLANK.
  23. I guess you could use ADC, but it would be more code and slower.

    Assuming you are using r0 as a zero-page vector, this might work (not tested). Note the CLC before the add, and the loop/end labels that the branches need:

    ldx #<$8000
    ldy #>$8000
    stx r0
    sty r0+1
    loop:
    clc ;Clear carry before the low-byte add
    lda r0
    adc #1
    sta r0
    lda r0+1
    adc #0 ;Adding the carry into the high byte
    sta r0+1
    lda (r0) ;Indirect addressing mode without Y, not supported by the original 6502
    beq end
    jsr $ffd2 ;Print char (Kernal CHROUT)
    bra loop
    end:
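
    For reference, the carry chain can be modeled in Python to convince yourself that the two ADCs do the right thing (`inc16` is a hypothetical helper, just for checking the logic):

```python
# Model of the 8-bit add-with-carry chain: increment a 16-bit pointer
# stored as two bytes (lo, hi), the way the ADC pair does it.
def inc16(lo, hi):
    lo = (lo + 1) & 0xFF           # adc #1 with carry cleared
    carry = 1 if lo == 0 else 0    # carry out of the low-byte add
    hi = (hi + carry) & 0xFF       # adc #0 folds the carry into the high byte
    return lo, hi

print(inc16(0xFF, 0x80))  # (0, 129): low byte wraps, carry ripples up
print(inc16(0x00, 0x80))  # (1, 128): no carry needed
```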


  24. On 12/25/2021 at 4:28 PM, rje said:

    lda #<$8000
    ldx #>$8000

    Are the < and > operators how we split a word into low and high bytes?


    On 12/25/2021 at 4:28 PM, rje said:

    Can I just directly increment a memory location, and use the carry bit to increment the high byte?

    I'm not sure what you exactly mean. However, the INC, INX, INY, and INA opcodes don't affect the carry bit, so I guess the answer is no.

    In @Greg King's post above, there is a complete code sample using the Y register to walk through the low byte of the address. As may be seen in the sample, wrap-around of Y is tested by checking for 0, not carry (the line BNE LOOP in Greg's code).
