The Mudcat Café TM
Thread #115539   Message #2474702
Posted By: JohnInKansas
24-Oct-08 - 08:06 AM
Thread Name: Tech: converting binary data to ASCII
Subject: RE: Tech: converting binary data to ASCII
You are collecting numerical information as binary numbers.

You DO NOT want to convert to ASCII.

You want to convert binary numbers to decimal numbers.

(I think)

An example of conversion of binary to ASCII can be found at Binary to ASCII.
(Found accidentally via the Worth Scary Signs 4 site, where he suggests pasting the road sign text
0110001001110010011010
0101100100011001110110
0101001000000110001101
1011000110111101110011
0110010101100100001000
0001100001011010000110
01010110000101100100
shown on the 25th sign in this collection at the above site to get the ASCII translation. This sign is nearer the bottom of the page than the top.)
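
If anyone wants to check the translation without hunting for a web converter, a few lines of Python (just as an illustration, nothing official about it) will do the same job. The bit string below is the one from the sign, with the line breaks removed:

    # Decode a string of 8-bit ASCII codes into text.
    bits = (
        "0110001001110010011010"
        "0101100100011001110110"
        "0101001000000110001101"
        "1011000110111101110011"
        "0110010101100100001000"
        "0001100001011010000110"
        "01010110000101100100"
    )
    # Take the bits eight at a time, turn each group into its number,
    # then turn each number into its character.
    text = "".join(chr(int(bits[i:i+8], 2)) for i in range(0, len(bits), 8))
    print(text)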

ASCII is a system that assigns numbers to letters, so that each letter is represented by a specific number. Strictly speaking, ASCII uses 7 bits and can thus represent only 128 different "characters." That was sufficient for a basic alphabet and a few "control codes," but it was superseded decades ago by the 8-bit "ANSI" character sets, which use an extra bit to get more characters.

Old timers like Foolsetroupe (and maybe a few others who've studied history) will recall that in DOS and ancient Windows times, one had to add a "DEVICE=ANSI.SYS" line to CONFIG.SYS when booting the computer so it could display the extended text characters (largely graphical symbols) that were then beginning to come into use. (And back then you might have to add "extended memory" and load HIMEM.SYS at bootup as well, to get past the 640 KB of RAM that "old DOS" (and earliest Windows) could use directly.)

Most people now ignore the difference between ASCII and ANSI, so when they say ASCII they probably actually mean ANSI. The only difference is "how many characters" you're allowed to use in your "alphabet."

Note that IBM (and a few others) did not (back then) use ASCII character codes on their big machines, but used instead a system called EBCDIC, which completely scrambled text sent between IBM mainframes and PCs unless a "translator" was used.

The first thing you need to know is how many bits are in each number. Common instruments may use 8 or 16 bits, although occasionally you'll find "n-1" coding that gives you only 7 or 15 "useful" bits.
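
As a made-up illustration (the bit width and the sample value here are invented, not taken from any particular instrument), pulling the "useful bits" out of a word is just a matter of masking off the ones you don't want:

    # Hypothetical 16-bit reading where only the low 15 bits carry data.
    raw = 0b1000000000101101    # the word as it might come off the instrument
    value = raw & 0x7FFF        # keep the 15 "useful" bits, drop the top one
    print(value)                # 45 for this invented value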

The second thing you'll need to know is whether the device outputs "Big-endian" or "Little-endian" data. Some devices write the "least significant bit" first and some write the "most significant bit" first, which means the numbers, as printed from raw data, may read right-to-left or left-to-right. (Internally, PCs and Macs are (or were in the original versions) "opposites" in this respect, but you never see "raw" binary, so it doesn't matter much. The Motorola 68000-series processors used in early Macs caused the "flip," but Macs have now gone mostly(?) to Intel processors, which may or may not have made them "reverse" the ends.)
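
A quick sketch of what the byte-order question looks like in practice (the two byte values here are invented) - the same pair of bytes gives two different numbers depending on which end you read first:

    # The same two bytes read as one 16-bit number, both byte orders.
    data = bytes([0x01, 0x02])
    print(int.from_bytes(data, "big"))     # 258  (0x0102, "big end" first)
    print(int.from_bytes(data, "little"))  # 513  (0x0201, "little end" first)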

Although the conversion from binary to decimal numbers is simple, I haven't seen a standard program for automatically translating binary numbers to decimal ones; but you can "learn to read" binary fairly easily, or you can use a chart to translate them.

You want to convert:


Binary 00000000 to decimal    0
Binary 00000001 to decimal    1
Binary 00000010 to decimal    2
Binary 00000011 to decimal    3
Binary 00000100 to decimal    4
Binary 00000101 to decimal    5
Binary 00000110 to decimal    6
Binary 00000111 to decimal    7
Binary 00001000 to decimal    8
Binary 00001001 to decimal    9
Binary 00001010 to decimal   10
Binary 00001011 to decimal   11


In binary, note that as the number is incremented to the next larger one, the least significant bit is flipped from 0 to 1 or from 1 to 0.
If the "flip" is from 0 to 1, the next bit to the left doesn't change.
If the "flip" is from 1 to 0, the next bit to the left is also "flipped," and that same rule carries on up the line.

If you get rid of the "ASCII fixation" and put "Binary to Decimal" into a Google search, you should come up with something like This Google Result, which should turn up a "calculator" you can use for your conversion.

Given that most instruments are very much faster than "sonic speed," if your accelerometer is capable of giving you direct decimal values in its output, it's questionable whether your test benefits from outputting binary, considering the extra work you'll do in conversion. Many instrument devices (or their interface "amps") include BCD ("Binary Coded Decimal," if you need a search term) converters, and "doing it in hardware" is generally faster than "computing it," and probably is sufficient for the test described. Your resident genius needs to look at what's required to get useful data for the test being conducted, and not get hung up on "what's the most precise that's possible," perhaps.
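
For what "Binary Coded Decimal" looks like, here's a rough sketch assuming the common "packed" form, where each half of a byte holds one decimal digit (the byte value here is invented):

    # Packed BCD: each 4-bit half ("nibble") of the byte holds one digit 0-9.
    bcd = 0x47                 # invented example: the digits 4 and 7
    tens = bcd >> 4            # high nibble
    ones = bcd & 0x0F          # low nibble
    print(tens * 10 + ones)    # 47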

An (almost trivial) advantage to using binary output is that if the binary numbers are lined up in a column (especially using a monospace font like Courier), a line through the "most significant bits" (i.e. the left-most "1"s) is a simple and fairly effective, if crude, "graph" of the outputs, so with a little practice you can "read" the outputs quickly to see the changes in the data. It takes a pretty long printout to get useful "graphs" if you have a high sampling rate, but manual (or programmed) conversion of "spot values" may be sufficient - if you choose to keep the binary output for this purpose.
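
A toy example of that "crude graph" idea, with invented sample values - print the readings in 8-bit binary, one per line, and the position of the left-most 1 wanders back and forth as the readings rise and fall:

    # The left-most 1 in each line moves left as the reading grows,
    # so the column of printouts works as a rough bar graph.
    samples = [3, 7, 18, 42, 90, 200, 120, 60, 12]   # invented readings
    for s in samples:
        print(format(s, "08b"))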

John