External RS-232 Interface, storage, and second screen.


TomXP411

53 minutes ago, ZeroByte said:

Ah - that's what I get for just reading the latest post and not going back to the beginning and catching up.

There is a lot of that going around. 😉 And to be fair, a network connection is planned as part of this, but the specific firmware you use for the network interface will be up to you. I actually have several ideas on that count, and I even started planning some NodeMCU software for the task. So if you want to open a new thread on the topic, I'd be happy to contribute my ideas.


So news on my latest experiment:

My latest experiment involved driving and sensing the Raspberry Pi’s GPIO pins with the PIGPIO library. This seems to work very well at relatively low data rates, but I have run into an interesting flaw: the nanosleep() and gpiosleep() functions seem to have a minimum execution time that’s far longer than the microsecond range suggested by the documentation. So, in effect, I can’t get more than something like 2000 cycles a second through the GPIO interface. 
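As an aside, the sleep-granularity problem is easy to demonstrate without any GPIO hardware at all. This sketch uses Python's `time.sleep` as a stand-in for the pigpio delay calls (the function name is invented for illustration); it measures how long a requested one-microsecond sleep actually takes, which is why a sleep-paced loop tops out in the low thousands of cycles per second:

```python
# Illustrative sketch, not the author's code: measure the real cost of a
# "1 microsecond" sleep on a general-purpose OS. The minimum execution time
# of the sleep call, not the requested duration, caps the loop rate.
import time

def measured_sleep_overhead(requested_s=1e-6, samples=200):
    """Return the average actual elapsed time per requested sleep."""
    start = time.perf_counter()
    for _ in range(samples):
        time.sleep(requested_s)
    return (time.perf_counter() - start) / samples
```

On a typical desktop Linux box this comes back at tens of microseconds or more per call, far above the 1 µs requested.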

So one thing I tried was removing the sleep statements from the transmitter and simply using good old-fashioned FOR loops to slow down the transmitter code. I discovered that the receiving code starts dropping packets once I get to the point where I'm sending 64K in less than 2-3 seconds.

For my next test, I plan to introduce a handshaking pin to let the receiver affirmatively acknowledge a packet. This will change the data flow and turn the two devices into equal peers, rather than client/server. In this case, the data flow would look like:

Sender switches Data and CLK pins to write mode. 
Sender sets data on D0-D7, Sender sets CLK high. 
Receiver reads D0-D7 and sets ACK high. 
Sender lowers CLK
Receiver lowers ACK
Sender switches all pins to read mode. 

In this mode, there would be no ready-to-send/clear-to-send (i.e. buffer state) lines. Instead, either device on the bus would simply raise the CLK line to send a packet.

This would also mean we could support multiple devices on the bus. That would probably involve an additional ADR pin, so we’d have the following pins on the bus:
D0-D7 - data or command being transmitted
CLK - tells receiver to read the data bits
ACK - tells transmitter the data has been read
ADR - select a specific receiver
CMD - a command is being sent to the receiver
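For illustration, the single-byte handshake above can be modeled entirely in software. This is a toy simulation with invented names (`Bus`, `send_byte`), not real driver code; it just walks the six steps in order:

```python
# Toy model of the CLK/ACK handshake described above, with the two sides
# acting as equal peers on a shared bus. Names are illustrative.

class Bus:
    def __init__(self):
        self.data = 0   # D0-D7
        self.clk = 0    # tells receiver to read the data bits
        self.ack = 0    # tells transmitter the data has been read

def send_byte(bus, value, received):
    # Sender drives D0-D7 and raises CLK.
    bus.data = value & 0xFF
    bus.clk = 1
    # Receiver sees CLK high, latches the data, raises ACK.
    received.append(bus.data)
    bus.ack = 1
    # Sender sees ACK and lowers CLK; receiver then lowers ACK.
    bus.clk = 0
    bus.ack = 0
```

At the end of each byte both handshake lines are back low, ready for the next strobe.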


7 hours ago, TomXP411 said:

So news on my latest experiment:

My latest experiment involved driving and sensing the Raspberry Pi’s GPIO pins with the PIGPIO library. This seems to work very well at relatively low data rates, but I have run into an interesting flaw: the nanosleep() and gpiosleep() functions seem to have a minimum execution time that’s far longer than the microsecond range suggested by the documentation. So, in effect, I can’t get more than something like 2000 cycles a second through the GPIO interface. 

So one thing I tried was removing the sleep statements from the transmitter and simply using good old-fashioned FOR loops to slow down the transmitter code. I discovered that the receiving code starts dropping packets once I get to the point where I'm sending 64K in less than 2-3 seconds.

For my next test, I plan to introduce a handshaking pin to let the receiver affirmatively acknowledge a packet. This will change the data flow and turn the two devices into equal peers, rather than client/server. In this case, the data flow would look like:

Sender switches Data and CLK pins to write mode. 
Sender sets data on D0-D7, Sender sets CLK high. 
Receiver reads D0-D7 and sets ACK high. 
Sender lowers CLK
Receiver lowers ACK
Sender switches all pins to read mode. 

In this mode, there would be no ready-to-send/clear-to-send (i.e. buffer state) lines. Instead, either device on the bus would simply raise the CLK line to send a packet.

This would also mean we could support multiple devices on the bus. That would probably involve an additional ADR pin, so we’d have the following pins on the bus:
D0-D7 - data or command being transmitted
CLK - tells receiver to read the data bits
ACK - tells transmitter the data has been read
ADR - select a specific receiver
CMD - a command is being sent to the receiver

 

Or, in other words, it's still a client/server approach, but the server mode is selected implicitly by the most recent command that has been set?

Since when ADR or CMD is raised, both sides know which is sender and which is receiver; but when neither is, each side has to know which is sender and which is receiver, in order to know whether it is monitoring the CLK pin or driving it.

And swapping the read/write status of CLK and ACK is going to be extra overhead on the CX16 side. Fixing the direction of the CLK/ACK pins and letting their roles swap during a system read would seem to be workable. Then there would be no change in the read/write settings of the Port B lines, and the change in the read/write settings of Port A doesn't have to happen on each byte during a packet read or write:

System write (Data):

  • System sets data on D0-D7,
  • System sets CLK high (CMD/ADR both low). 
  • Current Device reads D0-D7 and sets ACK high. 
  • System lowers CLK
  • Device lowers ACK

System read (Data):

  • Device sets data on D0-D7
  • Device sets ACK high. 
  • System reads D0-D7
  • System sets CLK high. 
  • Device lowers ACK
  • System lowers CLK

Command write{*}:

  • System sets D0-D7 to write
  • System sets command on D0-D7
  • System sets CMD, CLK high. 
  • Current Device reads D0-D7 and sets ACK high. 
  • {If CMD is a read command, System sets D0-D7 to read}
  • System lowers CLK and CMD
  • Device lowers ACK

Device Select{*}:

  • System sets D0-D7 to write
  • System sets command on D0-D7
  • System sets ADR, CLK high. 
  • All devices read D0-D7
  • Selected Device sets ACK high. 
  • System lowers CLK and ADR
  • Device lowers ACK

{*Note: Device selection and Command Writes can pre-empt existing commands to devices, to be able to break a stalled device read, so for microcontrollers, CMD and ADR might be set up as interrupt lines.}
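As a sanity check on the four transaction types, here is a toy software model (all class and function names are hypothetical, invented for this sketch) showing how the CMD/ADR qualifiers route a strobe to the right device while the handshake pins keep fixed directions:

```python
# Toy model of the four transactions sketched above. The System always drives
# CLK and the Device always drives ACK; the CMD and ADR lines qualify what a
# given strobe means. Purely illustrative, not a real driver.

class Device:
    def __init__(self, address):
        self.address = address
        self.selected = False
        self.received = []   # data bytes latched from the System
        self.commands = []   # command bytes latched from the System
        self.tx_queue = []   # bytes waiting for the System to read

def system_write(devices, value):
    # CLK high with CMD/ADR both low: the current device latches a data byte.
    for d in devices:
        if d.selected:
            d.received.append(value & 0xFF)

def system_read(devices):
    # Device presents data and raises ACK; System reads, then strobes CLK.
    for d in devices:
        if d.selected and d.tx_queue:
            return d.tx_queue.pop(0)
    return None

def command_write(devices, command):
    # CLK high with CMD high: the current device latches a command byte.
    for d in devices:
        if d.selected:
            d.commands.append(command & 0xFF)

def device_select(devices, address):
    # CLK high with ADR high: all devices compare; only the match selects.
    for d in devices:
        d.selected = (d.address == address)
```

A device-select strobe followed by data writes only lands on the addressed device, which is the behavior the ADR line buys.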


Yeah, I've already poked several holes in the design, including collisions and stalled reads. I'm back to needing 14 pins, so if I'm going to do this with a Pi, I need to find pins other than 16-23 that are free.

Pin 24 seems to always be a 1 and pins 0-15 seem to also have data on them. However, I haven't looked that closely at the bit pattern on the low pins, I just saw a number like Cxxx and so shifted my data up by 16 bits to transmit in the high word. 

Either way, if I can't get 14 pins out of the Pi, I'm going to give up on that idea and look at something else.

 


What if the network card worked like old school CBM smart peripherals? Put the brains on the card / device. Open a channel to it, send commands, write data, read results, close the channel, etc.

It shouldn't matter if it uses the Hayes modem protocol or raw socket I/O, as long as the 'thing' buffers I/O properly. It could handle any protocol (internally translated on the device or natively implemented on the CX16).


Posted (edited)
5 hours ago, TomXP411 said:

Yeah, I've already poked several holes in the design, including collisions and stalled reads. I'm back to needing 14 pins, so if I'm going to do this with a Pi, I need to find pins other than 16-23 that are free.

Pin 24 seems to always be a 1 and pins 0-15 seem to also have data on them. However, I haven't looked that closely at the bit pattern on the low pins, I just saw a number like Cxxx and so shifted my data up by 16 bits to transmit in the high word. 

Either way, if I can't get 14 pins out of the Pi, I'm going to give up on that idea and look at something else.

 

I think swapping the roles of CLK/ACK when reading from the device, and using commands to sort out whether the device is reading or writing, as in the four-state version I sketched, should avoid collisions and also allow breaking out of stalled reads, and it does so with the same 12 lines you specified ... I had a DATA control line when I started writing it, but it turned out to be redundant. If there is a 13th pin available, I would make it a DVERR pin input into the User Port.

IOW, using a two-line handshake but keeping it client/server, so that the command from the client dictates whether the following byte or packet is read or written; the command makes a separate Data-Direction line redundant.

Edited by BruceMcF

4 hours ago, BruceMcF said:

I think swapping the roles of CLK/ACK when reading from the device, and using commands to sort out whether the device is reading or writing, as in the four-state version I sketched, should avoid collisions and also allow breaking out of stalled reads, and it does so with the same 12 lines you specified ... I had a DATA control line when I started writing it, but it turned out to be redundant. If there is a 13th pin available, I would make it a DVERR pin input into the User Port.

IOW, using a two-line handshake but keeping it client/server, so that the command from the client dictates whether the following byte or packet is read or written; the command makes a separate Data-Direction line redundant.

That makes for a chatty protocol, though. It also increases the number of times I have to switch the port between reading and writing. If I have a “data waiting” line, I don’t need to issue a command to check the buffer. I just read the state of one pin. If I want to read a byte from the buffer, I just set the Read pin high and raise the clock. One of the original design goals was to prevent unneeded chatter, and removing those lines adds a lot of chatter I’d rather avoid. 


15 hours ago, TomXP411 said:

That makes for a chatty protocol, though. It also increases the number of times I have to switch the port between reading and writing. If I have a “data waiting” line, I don’t need to issue a command to check the buffer. I just read the state of one pin. If I want to read a byte from the buffer, I just set the Read pin high and raise the clock. One of the original design goals was to prevent unneeded chatter, and removing those lines adds a lot of chatter I’d rather avoid. 

It IS a protocol that is more efficient for packet transfers than for byte-at-a-time transfers, but I'm not so sure how high a priority byte-at-a-time transfers are. If one packet is 128 bytes, your original protocol would have required changing the data port's R/W and flipping the polarity of the CLK/ACK port twice for every byte of a write. Flipping the polarity of CLK/ACK requires a read of the Port B data direction register, an AND #n and an ORA #m, and a write to the Port B data direction register, so it's more expensive than flipping the read/write of Port A.

This one only requires changing the data register status twice for 128 bytes of a packet read, and Port B is stable.

One way to reduce pins is to incorporate the ADR function into the CMD function, where CMD #0 is a device-select command, with the following byte naming the selected device; all devices listen to a CMD, but the non-selected devices only listen for the all-bits-low command.

Then you can have:

D0-D7 (I/O) - data or command being transmitted
HANDOUT (O) - handshake CLK when system writing, ACK when reading
HANDIN (I) - handshake ACK when system writing, CLK when reading
CMD (O) - a command is being sent to the device(s); #0,N is the command to switch to device N
READ (O) - system is ready to read data from current device
DDATA (I) - device has data available for reading
... as 13 pins.

One advantage of that is that if DDATA is on a pin that can trigger an interrupt, then the device can raise the DDATA when it has anything buffered and once the interrupt is processed and the buffer starts to be emptied, it just holds DDATA high until the buffer is empty.
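The CMD #0 device-select idea is easy to model from the device side. In this sketch (hypothetical names, no real hardware), every device watches the CMD byte stream, but an unselected device only acts on the zero command and the address byte that follows it:

```python
# Toy device-side model of folding ADR into CMD: command 0 means "the next
# command byte names the selected device". Names are illustrative.

class CmdDevice:
    def __init__(self, address):
        self.address = address
        self.selected = False
        self.expecting_address = False
        self.commands = []

    def on_cmd_byte(self, value):
        # Every device sees every CMD strobe.
        if self.expecting_address:
            # The byte after #0 selects (or deselects) this device.
            self.selected = (value == self.address)
            self.expecting_address = False
        elif value == 0:
            # NUL: easy special case, a test-and-branch-on-zero on any MCU.
            self.expecting_address = True
        elif self.selected:
            # All other commands only land on the selected device.
            self.commands.append(value)
```

The zero test is cheap on any microcontroller, which is the point made below about NUL being easy to integrate as a special case.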

Edited by BruceMcF

Note that part of the device select is that set-up-type stuff can be text commands, written out as a sequence of ASCII characters, with the control characters $0-$2F and $80-$9F reserved for functional operating commands. Handling set-up-type commands as a string of ASCII characters is a low-overhead way of doing things on the CX16 side.

Now, NUL doesn't strictly mean "the next byte is who I'm talking to", but it's easy to integrate as a special case, since any microcontroller will have a test and branch on zero operation.

Edited by BruceMcF

1 hour ago, BruceMcF said:

It IS a protocol that is more efficient for packet transfers than for byte-at-a-time transfers, but I'm not so sure how high a priority byte-at-a-time transfers are. If one packet is 128 bytes, your original protocol would have required changing the data port's R/W and flipping the polarity of the CLK/ACK port twice for every byte of a write. Flipping the polarity of CLK/ACK requires a read of the Port B data direction register, an AND #n and an ORA #m, and a write to the Port B data direction register, so it's more expensive than flipping the read/write of Port A.

I don't know where you got that idea. Once the direction is set, it stays that way until the end of a message - which consists of all the data in the buffer. That's expected to be more than one byte at a time, especially when using this for File I/O. 

Even with the Peer to Peer protocol, there's no switching the context of the CLK/ACK lines. If the server has data to send, it will keep sending data until the buffer is empty. Likewise, if the Commander has data to send, it will keep sending until the buffer is empty. 

The only time you switch directions on the data pins is at the end of a message, when there's no more data to send. At that point, all devices on the bus float the data lines until someone is ready to talk again. 

 


6 minutes ago, TomXP411 said:

I don't know where you got that idea. Once the direction is set, it stays that way until the end of a message - which consists of all the data in the buffer. That's expected to be more than one byte at a time, especially when using this for File I/O. 

From where it said it:

Sender switches Data and CLK pins to write mode. 
Sender sets data on D0-D7, Sender sets CLK high. 
Receiver reads D0-D7 and sets ACK high. 
Sender lowers CLK
Receiver lowers ACK
Sender switches all pins to read mode. 

I didn't follow why the CX16 floating the pins during a write was part of the handshake, but it's listed as the last step in the handshake, and once the sender has floated the pins, it's got to start from the first line for the next byte.

Did it mean to say:

  1. Sender switches Data and CLK pins to write mode. 
  2. Sender sets data on D0-D7, Sender sets CLK high. 
  3. Receiver reads D0-D7 and sets ACK high. 
  4. Sender lowers CLK
  5. Receiver lowers ACK
  6. Goto 2 on multiple byte transfers
  7. Sender switches all pins to read mode. 
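Looped that way, the direction switches in steps 1 and 7 happen once per message rather than once per byte. A toy model (illustrative names, no hardware) that just counts the direction switches makes the cost difference concrete:

```python
# Toy model of the numbered sequence above: steps 2-5 repeat per byte
# ("goto 2"), while the pin-direction switches (steps 1 and 7) bracket
# the whole message. Names are invented for this sketch.

def send_message(bus, payload, received):
    """Transfer a whole message, counting pin-direction switches."""
    bus["direction_switches"] = bus.get("direction_switches", 0) + 1  # step 1
    for value in payload:            # steps 2-5, repeated per byte
        received.append(value & 0xFF)
    bus["direction_switches"] += 1   # step 7, once per message
```

For a 128-byte packet the counter lands on 2, matching the earlier observation that the data-direction change only needs to happen twice per packet.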

 

Edited by BruceMcF

6 minutes ago, BruceMcF said:

From where it said it:

Sender switches Data and CLK pins to write mode. 
Sender sets data on D0-D7, Sender sets CLK high. 
Receiver reads D0-D7 and sets ACK high. 
Sender lowers CLK
Receiver lowers ACK
Sender switches all pins to read mode. 

I didn't follow why the CX16 floating the pins during a write was part of the handshake, but it's listed as the last step in the handshake, and once the sender has floated the pins, it's got to start from the first line for the next byte.

Did it mean to say:

  1. Sender switches Data and CLK pins to write mode. 
  2. Sender sets data on D0-D7, Sender sets CLK high. 
  3. Receiver reads D0-D7 and sets ACK high. 
  4. Sender lowers CLK
  5. Receiver lowers ACK
  6. Goto 2 on multiple byte transfers
  7. Sender switches all pins to read mode. 

 

I wrote that on my cell phone while I was walking back from lunch. I think you can assume that when I get around to implementing that on a test platform, I'll use some common sense optimizations. 


2 minutes ago, TomXP411 said:

I wrote that on my cell phone while I was walking back from lunch. I think you can assume that when I get around to implementing that on a test platform, I'll use some common sense optimizations. 

Knowing that, I can assume so ...

... but before hearing it, not knowing whether floating the pins was part of handling the possible deadlocks in that protocol, I didn't assume I knew which changes would be optimizations and which would be breaking.

I still prefer the handshake lines being unidirectional, especially if there might be +5V and +3.3V level translation. A +5V-tolerant 3.3V hex line driver tied on for the three output lines, a +3.3V-to-+5V hex level shifter for the two input lines, also tied on, and the data bus transceiver(s) can be controlled by READ high or low.

