  1. Bringing clarity to code through lots of comments is apparently not the Clean Code way of doing things. Comments have the disadvantage of being ignored by the assembler or compiler, so keeping them up to date is a manual process, and all manual processes will eventually fail. Good labels are a better option. The advantage of extracting small functions into macros is that you get better abstraction in higher level functions. In my example above, the function "clear_screen" contains mostly macro calls. It's possible to understand what they will do without looking at the macro definitions; you get the big picture very quickly. And if you are interested in the details, you may look at the macro definition. That said, I've never tried to program anything in this fashion. It would be interesting to do that.
  2. Continuing on the topic of clean code, even though it's not X16 Edit specific, I have some thoughts on how to apply it to assembly programming. In the lesson linked above, Uncle Bob talks a lot about the design of functions:

  • Names of functions (and variables) should be carefully chosen, so that reading code is like reading (good?) prose
  • Function names should contain verbs, as they are doing something
  • A function should do only one thing; cleaning code entails breaking a function into smaller and smaller parts, and you know you're done when no more functions can reasonably be extracted
  • A function should be short (most often 5 lines)

  In a modern cross assembler, such as ca65, there's nothing stopping you from naming things properly. But what about functions doing only one thing, and being as short as 5 lines? A high level language compiler examines functions at compile time and decides whether the resulting machine code is inlined or made into an actual machine language subroutine. Even if you write a lot of really small functions in a high level language, the binary code will probably be efficient. In 6502 assembly, if you do a lot of subroutine calls with JSR+RTS, they all stay in the final binary code, making it inefficient. I have never seen 6502 assembly code trying to be clean code in the way Uncle Bob describes. Would it even be possible without losing performance? I think it might be, if you make extensive use of macros for code that you want to break out into a separate "function", so that the resulting machine code is inlined. A simple example. Is this a good idea?
    .macro goto_topleft
        stz VERA_L
        stz VERA_M
        lda #(2<<4)
        sta VERA_H
    .endmacro

    .macro clear_line
        ldx #80
        lda #32
    :   sta VERA_D0
        dex
        bne :-
    .endmacro

    .macro goto_nextline
        stz VERA_L
        inc VERA_M
    .endmacro

    .macro is_finished
        lda VERA_M
        cmp #60
    .endmacro

    .proc clear_screen
        goto_topleft
    loop:
        clear_line
        goto_nextline
        is_finished
        bne loop
        rts
    .endproc

    VERA_L  = $9f20
    VERA_M  = $9f21
    VERA_H  = $9f22
    VERA_D0 = $9f23
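  For comparison, here is a sketch of what the same routine looks like when the extracted "functions" are JSR/RTS subroutines instead of macros (same VERA symbols as above; this is my own illustration, not code from X16 Edit). Each call costs 12 extra cycles (6 for JSR, 6 for RTS), but the code exists only once in the binary, which is the trade-off discussed above:

```asm
; Sketch: clear_screen with subroutines instead of macros.
; Each JSR/RTS pair adds 12 cycles per call, but a routine called
; from several places occupies memory only once.
.proc goto_topleft
        stz VERA_L
        stz VERA_M
        lda #(2<<4)
        sta VERA_H
        rts
.endproc

.proc clear_line
        ldx #80
        lda #32
:       sta VERA_D0
        dex
        bne :-
        rts
.endproc

.proc clear_screen
        jsr goto_topleft
loop:   jsr clear_line
        stz VERA_L          ; goto_nextline and is_finished could be
        inc VERA_M          ; broken out the same way; inlined here
        lda VERA_M          ; for brevity
        cmp #60
        bne loop
        rts
.endproc
```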
  3. I've done some code cleaning the last couple of days. Very rewarding when the code becomes more compact, more readable, and more consistent. So far I've reduced the binary by almost 200 bytes. Some time ago, I watched a few lectures on Clean Code by "Uncle Bob". Very inspiring, even though not all of its paradigms can be applied in assembly if you want things to fly fast. A nice weekend to all of you!
  4. @BruceMcF, yes I guess you could gain some performance by not checking READST every time you call CHROUT if you can trust that no harm comes from writing to the "disk" after an error condition. In X16 Edit I could try checking READST every time the program needs to change the memory page it's reading from (i.e. about every 250 bytes).
  5. I found a bug (sort of) in the routine that writes the text buffer to disk. It wasn't critical, but at least annoying. Maybe my findings are of some general interest, so here is a short description. X16 Edit uses the standard Kernal routines to write to disk, i.e.:

  • SETNAM + SETLFS + OPEN to open a file
  • CHKOUT to set the file as standard output
  • CHROUT to write the actual data to the file

  The information on CHROUT error handling is a bit hard to grasp, at least for me. The entries in the compilation of books and manuals at https://www.pagetable.com/c64ref/kernal/ have somewhat divergent information on this topic. This is what I believe to be true after calling CHROUT:

  • A set carry bit indicates that a Kernal I/O error has occurred, in this context most likely 5=Device not present. The error number is in the A register.
  • After each call to CHROUT you need to call READST to know whether a disk-side error occurred, such as a file exists error. An error has occurred if READST is not 0.
  • To get the actual disk-side error, you need to open a file to the command channel (like OPEN 15,8,15 in BASIC) and read the message.

  X16 Edit previously wrote the whole buffer to disk without checking READST. Only then would it check the command channel status. It worked anyway, because the disk seems to ignore data sent after an error condition has occurred (such as file exists). But it wasn't beautiful. I have also been working on a progress indicator that is shown when loading or saving files. This is useful when handling large files, so that you know the computer has not just locked up. These changes are committed to the GitHub master branch. I think my next task is a new code cleaning project focused on minimizing code size. The ROM version of X16 Edit is now only about 75 bytes from filling 16 kB, the size of one ROM bank. It would be nice to have some more leeway than that for future additions.
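  The checks described above can be sketched like this (CHROUT = $FFD2 and READST = $FFB7 are the standard Kernal vectors; the label names are made up for illustration and are not from X16 Edit):

```asm
CHROUT = $ffd2          ; Kernal: write one byte to the current output channel
READST = $ffb7          ; Kernal: read the I/O status byte

; Sketch: write one byte with both checks applied.
; Carry set after CHROUT => Kernal I/O error, error number in A
; READST <> 0           => device-side error; read the command
;                          channel for the actual error message
write_byte:
        jsr CHROUT
        bcs kernal_error        ; e.g. 5 = device not present
        jsr READST
        bne device_error        ; e.g. file exists
        rts
kernal_error:
        ; A holds the Kernal error number
        rts
device_error:
        ; open the command channel (secondary address 15, like
        ; OPEN 15,8,15 in BASIC) and read the drive's error message
        rts
```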
  6. I've uploaded a new version of X16 Edit (0.4.0) incorporating some improvements I've been working on during the summer and this autumn. It's nothing major, mostly fixes to small things in the user interface that were not working perfectly.
  7. Seems logical to me. The 2-clause BSD license couldn't be much simpler (or shorter).
  8. Returning to the original questions by @AuntiePixel, there are at least two solutions in the "downloads/dev tools" area. One is @Scott Robison's BASIC PREPROCESSOR. It takes BASIC source code stored in a plain text source file with some additional markup for labels and long variable names, and outputs a runnable tokenized BASIC program. You don't use line numbers in the source file. One cool thing is that the preprocessor apparently is written in its own BASIC file format. The other is my BASLOAD program. It loads BASIC source code files stored as plain text into RAM. While loading a file, it's tokenized so that it can be run by the built-in interpreter. It's made to work alongside X16 Edit. Like the BASIC PREPROCESSOR, it doesn't use line numbers. Both solutions let you write the BASIC source files in any editor of your choosing. That includes writing source files on modern PCs and transferring them to the SD card.
  9. Hi, I found this nice article written by @Greg King on how to use the cc65 package for X16 programming: https://cc65.github.io/doc/cx16.html Section 4.2 contains information on the command line params you should use and the default config file for assembly programming. I think I used this information myself to get started with the ca65 assembler, as it's not obvious how to do this. As to the use of .ORG you could read section 17 in the ca65 users manual on porting assembly source code written for other assemblers: https://cc65.github.io/doc/ca65.html#toc17 In short, you may set the start address of the program with the cl65 command line param --start-addr or by writing your own config file replacing the default cx16-asm.cfg.
  10. .org and .segment statements are not required if you compile with the config file cx16-asm.cfg. If you use the command line params I mentioned, the compiler defaults to the CODE segment, so you need not tell it that explicitly. The .org statement will, perhaps surprisingly, not affect the load address of the executable. It sets the program counter, which is normally only used when you compile code that is meant to be moved to its final destination after the program is loaded. As the manual says, you normally need not use the .org statement (https://www.cc65.org/doc/ca65-11.html#ss11.72). In most cases you want to create a program that can be run with the BASIC RUN command. To do this, use the -u __EXEHDR__ compiler param, and all will be done automatically for you. As proof of this, the source code and the command line params given to cl65 in my previous post work fine. If you want to manually control where code ends up in memory when you load the executable, you must learn how to use and write CA65 config files. You can test this by writing a small program using the .org statement, for instance:

    .segment "STARTUP"
    .segment "INIT"
    .segment "ONCE"
    .segment "BASS"
    .segment "CODE"

    .org $0900
        lda #65
        jsr $ffd2
        rts

  Compile it with "cl65 -t cx16 -o TEST.PRG test.asm". Move the program to the SD card image. Load it in the emulator with LOAD"TEST.PRG",8,1. The program still loads to $0801. This is because CA65 has a linker that decides where the code ends up, and to give the linker commands, you need to write config files.
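  A custom linker config that actually moves the load address would look roughly like this (a sketch patterned on the cc65 default configs; the exact memory and segment names in your cc65 version's cx16-asm.cfg may differ, so copy and edit the shipped file rather than this):

```
# Sketch of a minimal ca65 linker config. Changing MAIN's start
# moves where the linker places the code, and thus the load address.
MEMORY {
    ZP:   start = $0022, size = $005E, define = yes;
    MAIN: start = $0801, size = $96FF;
}
SEGMENTS {
    ZEROPAGE: load = ZP,   type = zp;
    CODE:     load = MAIN, type = rw;
    RODATA:   load = MAIN, type = rw;
    DATA:     load = MAIN, type = rw;
    BSS:      load = MAIN, type = bss, define = yes;
}
```

  You would then pass it to the linker with -C yourconfig.cfg instead of cx16-asm.cfg.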
  11. Hi, some pointers:

  • Lose the .org and .segment statements; they are not needed.
  • The message string cannot be at the beginning of the source code if you intend to use $080d as the entry point. The computer doesn't know whether the bytes there are a string or code; it will try to run them as code, and the program will likely crash. Move the string to the end of the source.
  • To compile, I use the following command: cl65 -o HELLOWORLD.PRG -u __EXEHDR__ -t cx16 -C cx16-asm.cfg helloworld.asm
  • The compiler translates ASCII to PETSCII correctly, but note that an unshifted char on your modern PC will be an uppercase char on the X16, presuming you are using the default uppercase/graphics mode the X16 starts in.

  The modified source code that I tested (lines that could be removed are commented out):

    ;.org $080D
    ;.segment "STARTUP"
    ;.segment "INIT"
    ;.segment "ONCE"
    .segment "CODE"

    CHROUT = $FFD2
    ;CHRIN = $FFCF
    ZP_PTR_0 = $20

    start:
        lda #<message
        sta ZP_PTR_0
        lda #>message
        sta ZP_PTR_0 + 1
        ldy #0
    @loop:
        lda (ZP_PTR_0),y
        beq stop
        jsr CHROUT
        iny
        bra @loop
    stop:
        rts

    message: .byte "hello",0
  12. I made a separate thread about the SD card issues, should anyone want to continue this discussion.
  13. There was some discussion in the above thread about the SD card not always being recognized by the latest development board. It's a good idea to move that discussion to a separate thread, should anyone want to continue. The simplified SD card spec may be downloaded here: https://www.sdcard.org/downloads/pls/pdf/?p=Part1_Physical_Layer_Simplified_Specification_Ver8.00.jpg&f=Part1_Physical_Layer_Simplified_Specification_Ver8.00.pdf&e=EN_SS1_8 As @Wavicle said, it follows from section 4.4 that the host is not required to keep a continuous clock frequency. The clock may even be stopped, for instance while the host is filling its output buffer. However, the specification also talks about exceptions to this, for example during the ACMD41 command (card initialization). I don't know if the exceptions are relevant, but they might be. Anyway, if the SD card requires a clock in the range 100-400 kHz during the initialization command, and if the initialization request/response, as I understand the code in dos/fat32/sdcard.s:232-301, consists of multiple bytes, the X16 running at 2 MHz will not be able to keep the clock at or above 100 kHz. I have no intention to look any further at this question myself, at least not for now. I think the right way to proceed is to make the PS/2 keyboard work at 8 MHz, and thereafter look at the SD card issue if it doesn't work reliably when the computer runs at 8 MHz.
  14. Even though there is no timing dependent code in those lines, there is in other places within the module, for instance: wait_ready, which begins at line 40, and sdcard_init, which begins at line 236. I haven't gone into the details of the SD card protocol, and I haven't analyzed whether the changed timing when the clock rate is reduced would be OK. I only said that the Kernal code clearly is written on the assumption that the computer runs at 8 MHz, and that it would be interesting to know if the problem is still there when the computer is run at that speed. EDIT: Let me be clear that I have no practical experience interfacing SD cards. Reading about the protocol in datasheets, I understand that there is no minimum clock frequency during normal communication. But during card initialization, you may not go below 100 kHz. When the X16 is run at 2 MHz, one period at 100 kHz corresponds to 20 processor cycles. It so happens that the spi_read function, which is called several times during card initialization, takes a little more than 20 cycles. As I said, it would be interesting to know how well SD card communication works at 8 MHz...
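  The cycle budget above can be checked with a quick calculation (a sketch; the "little more than 20 cycles" figure for spi_read comes from the post itself, not from my own measurement):

```python
# Cycle budget for bit-banged SPI during SD card initialization.
# The card requires the clock to stay at or above 100 kHz while
# initializing, so the CPU has at most cpu_hz / 100_000 cycles
# per SPI clock period.

def cycles_per_spi_clock(cpu_hz: int, min_spi_hz: int = 100_000) -> float:
    """CPU cycles available per SPI clock period at the minimum rate."""
    return cpu_hz / min_spi_hz

budget_2mhz = cycles_per_spi_clock(2_000_000)  # 20 cycles: too tight for spi_read
budget_8mhz = cycles_per_spi_clock(8_000_000)  # 80 cycles: comfortable margin
```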
  15. I also looked briefly at the code that handles SD card communication (dos/fat32/sdcard.s). It's a bit-banged SPI solution that depends on proper timing. The necessary timing delays are measured in processor cycles, calculated on the assumption that the X16 runs at 8 MHz. It would be surprising if the code worked properly if you run the computer at 4 or 2 MHz. Before any other troubleshooting, it would be interesting to know how well the SD card communication works at 8 MHz. To test this, you may first need the keyboard to work at that speed.