kahnm
Joined: 30 Apr 2007 Posts: 9
Send array of bytes via UART as a packet w/ 1 start bit
Posted: Tue Aug 13, 2019 12:48 am
I need to send an array of bytes, which may contain zeros, using one of the UARTs. I want to send it as a packet with a single start bit to start the packet. If I use putc(), each byte gets its own start bit, and the device I'm writing to doesn't like that.
These bytes aren't ASCII, so fprintf() won't work.
Is there any simple way to do this?
PCM programmer
Joined: 06 Sep 2003 Posts: 21708
Re: Send array of bytes via UART as a packet w/ 1 start bit
Posted: Tue Aug 13, 2019 1:27 am
kahnm wrote:
I need to send an array of bytes, which may contain zeros, using one of the UARTs. I want to send it as a packet with a single start bit to start the packet. If I use putc(), each byte gets its own start bit, and the device I'm writing to doesn't like that.
These bytes aren't ASCII, so fprintf() won't work.
Is there any simple way to do this?
How about SPI? Note that SPI shifts out MSB first, while the UART sends LSB first, but you can swap the bits in each byte before sending it over SPI if you must have LSB first. This thread has some swap_bits() code:
http://www.ccsinfo.com/forum/viewtopic.php?t=23364
This thread shows it being used with SPI:
http://www.ccsinfo.com/forum/viewtopic.php?t=45283
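For reference, a minimal sketch of the bit-swap idea (this particular swap_bits() body is illustrative, not the exact code from the linked threads):

Code:
// Reverse the bit order of a byte (b7..b0 -> b0..b7) so that the
// MSB-first SPI shifter puts the bits on the wire LSB first,
// matching UART bit order.
unsigned int8 swap_bits(unsigned int8 b)
{
   b = ((b & 0xF0) >> 4) | ((b & 0x0F) << 4);   // swap nibbles
   b = ((b & 0xCC) >> 2) | ((b & 0x33) << 2);   // swap bit pairs
   b = ((b & 0xAA) >> 1) | ((b & 0x55) << 1);   // swap adjacent bits
   return b;
}

Each byte of the packet would then be sent as spi_write(swap_bits(data[i])).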
Ttelmah
Joined: 11 Mar 2010 Posts: 19538
Posted: Tue Aug 13, 2019 1:31 am
No.
Standard serial has a start bit for every byte, because otherwise the clock tolerance required becomes tighter with every successive byte.
With a single byte after a start bit, you can accept timing differences of up to about +/-5% in total over the ten bits and still happily read the data. If you send hundreds of bytes with only a single start bit at the beginning, the required clock accuracy quickly becomes unachievable; after more than a very few bytes, unbelievably/unacceptably so.
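To put numbers on that (a back-of-envelope bound, assuming the receiver samples each bit at its nominal centre, timed only from the single start-bit edge, so cumulative drift must stay under half a bit period by the last bit):

tolerance ~ +/-0.5 / N bit periods, where N = bits since the start bit

N = 10 (one standard frame): +/-5%
N = 352 (44 bytes): about +/-0.14%
N = 800 (100 bytes): about +/-0.06%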
This implies your packet must have some other clocking method to work. It suggests the device is actually using some form of self-clocking protocol, where the receiver synchronises itself. This is normally done by using a format that guarantees transitions: something like Manchester encoding.
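As an illustration of the guaranteed-transition idea, here is a minimal Manchester encoder (the 01/10 polarity follows the IEEE 802.3 convention; how the 16-bit result is then shifted onto the wire is left open):

Code:
// Manchester-encode one byte: each data bit becomes two half-bits,
// 0 -> 01 and 1 -> 10, so every bit cell contains a mid-cell
// transition the receiver can recover its clock from.
unsigned int16 manchester_encode(unsigned int8 b)
{
   unsigned int16 out = 0;
   unsigned int8 i;

   for (i = 0; i < 8; i++)
   {
      out <<= 2;
      if (b & 0x80)        // take data bits MSB first
         out |= 0x02;      // logic 1: '10' (high half, then low half)
      else
         out |= 0x01;      // logic 0: '01' (low half, then high half)
      b <<= 1;
   }
   return out;
}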
You need to tell us about the actual protocol. Simple asynchronous
communication without start bits is just not going to work, so there
has to be something else being done. You can stretch transmitted lengths
a lot, but not to hundreds of bytes.
Remember, an SPI peripheral can generate a stream of bytes, and if you simply don't connect the clock line, the data pin carries a steadily clocked data stream. However, as already outlined, this is not going to work if the receiver is simply relying on timing from one start bit, so there has to be some other process going on in the nature of the data. You need to tell us more.
I see PCM_Programmer also suggested the SPI approach while I was typing.
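A sketch of that SPI trick, assuming the hardware SPI is configured once at initialisation (the clock divider is an arbitrary placeholder, and swap_bits() is the LSB-first fix sketched earlier in the thread):

Code:
// setup_spi(SPI_MASTER | SPI_CLK_DIV_16);  // once, at initialisation
// Leave SCK unconnected; SDO then carries the clocked byte stream.
void send_packet(unsigned int8 *buf, unsigned int8 len)
{
   unsigned int8 i;
   for (i = 0; i < len; i++)
      spi_write(swap_bits(buf[i]));   // LSB first on the data pin
}

Note that a software loop like this can still leave a few instruction cycles of gap between bytes, so it is a starting point rather than a guarantee of a perfectly continuous stream.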
There are some other questions: how big is the packet (how many bytes), what speed is required, and is the packet fixed size or variable? The latter would be very complex to handle unless there is a known 'point' in the sequence that marks the end.
temtronic
Joined: 01 Jul 2010 Posts: 9241 Location: Greensville, Ontario
Posted: Tue Aug 13, 2019 4:31 am
As others have pointed out, we need to know what the 'other' device is, and especially the speed required.
For over three decades I've used a single-start-bit, multiple-byte transmission to communicate over 15 miles of solid copper wire, so I KNOW it can work, very, very reliably in fact.
In my case I send a data stream consisting of 1 start bit and 44 bytes of data.
Provided the 'other device' has a crystal-controlled clock, or something very close, and the baud rate is low, a simple 'bit-banged' driver will work fine (a sketch follows below).
Jay
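A hedged sketch of what such a bit-banged driver might look like (the pin, the ~9600 baud bit time, and the idle-high polarity are assumptions, not details from Jay's system; loop overhead adds to each delay_us(), so the constant would need trimming in practice):

Code:
#define TX_PIN  PIN_B0     // assumed output pin
#define BIT_US  104        // one bit period at ~9600 baud (assumed)

// Send one start bit, then len bytes LSB first, with no further
// start/stop bits between bytes. Line idles high.
void send_stream(unsigned int8 *buf, unsigned int8 len)
{
   unsigned int8 i, j, b;

   output_low(TX_PIN);          // the single start bit
   delay_us(BIT_US);

   for (i = 0; i < len; i++)
   {
      b = buf[i];
      for (j = 0; j < 8; j++)   // data bits, LSB first
      {
         if (b & 1)
            output_high(TX_PIN);
         else
            output_low(TX_PIN);
         delay_us(BIT_US);
         b >>= 1;
      }
   }
   output_high(TX_PIN);         // return the line to idle
}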
kahnm
Joined: 30 Apr 2007 Posts: 9
Posted: Tue Aug 13, 2019 12:58 pm
To all who responded - thank you very much. After reading your responses, it occurred to me that my thinking was not too clear about how the bytes were being transferred. This made me focus more intensely on the protocol the receiving device was using: I was violating the protocol by not waiting for the acknowledge before sending the next command. Thanks for getting me refocused!
Ttelmah
Joined: 11 Mar 2010 Posts: 19538
Posted: Tue Aug 13, 2019 1:08 pm
Brilliant.