Double Dabble
July 31, 2018
Over at Computerphile, Professor Brailsford gives a beautiful explanation of the double-dabble algorithm for converting a number from binary to binary-coded decimal. If you don't want to watch the video (you should), there is also an explanation at Wikipedia, or you can read the original description by C. B. Falconer.
Your task is to implement the double-dabble algorithm as shown by Professor Brailsford. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
That algorithm is obviously nice when implemented in assembler: it uses only one mutable register, it has a simple and short loop, and above all, it avoids a costly division by ten.
However, when implemented in a high-level programming language (even in C!), it is bound to be slower than a straightforward division-based conversion, because in high-level languages numbers are not mutable registers; they are immutable values, so manipulating bits in integers is costly. Processing the BCD digits (check if > 4 and add 3) n times is a big waste of time compared to only about (ceiling n 3) divisions by ten!
On my system double-dabble takes more than double the time of bcd, for 4-digit, 16-bit BCDs.
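The bcd routine used in that comparison isn't shown; as a rough sketch, a division-by-ten conversion of the kind the comment describes might look like this in C (the function name bcd and the 16-bit range are assumptions):

    #include <stdio.h>
    #include <stdint.h>

    /* Repeated division by ten: one modulus and one division
       per decimal digit, so a 16-bit value needs at most five. */
    uint32_t bcd(uint16_t n)
    {
        uint32_t result = 0;
        int shift = 0;
        do {
            result |= (uint32_t)(n % 10) << shift;  /* pack the next digit */
            n /= 10;
            shift += 4;
        } while (n != 0);
        return result;
    }

    int main(void)
    {
        printf("%x\n", bcd(243));  /* prints 243 */
        return 0;
    }

Each pass of the loop peels off one decimal digit, so a 16-bit value needs at most five divisions, versus sixteen shift-and-adjust steps for double dabble.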
Here’s a solution in C.
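That solution isn't reproduced here, but a minimal double-dabble sketch in C, assuming a 16-bit input (five BCD digits) and a hypothetical to_bcd helper, might look like this:

    #include <stdio.h>
    #include <stdint.h>

    /* Double dabble: convert a 16-bit binary value to packed BCD.
       Values up to 65535 need five BCD digits, which fit in the
       low 20 bits of a uint32_t. */
    uint32_t to_bcd(uint16_t n)
    {
        uint32_t bcd = 0;
        for (int i = 15; i >= 0; i--) {
            /* Before each shift, add 3 to any nibble that is 5 or
               more, so the shift carries it into the next digit. */
            for (int s = 0; s <= 16; s += 4)
                if (((bcd >> s) & 0xFu) >= 5)
                    bcd += 3u << s;
            /* Shift the next bit of n in, most significant first. */
            bcd = (bcd << 1) | ((n >> i) & 1u);
        }
        return bcd;
    }

    int main(void)
    {
        /* The hex digits of the result are the decimal digits. */
        printf("%x\n", to_bcd(243));    /* prints 243 */
        printf("%x\n", to_bcd(65535));  /* prints 65535 */
        return 0;
    }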
Example:
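Running the sketch above, to_bcd(243) packs the nibbles 0010 0100 0011, hexadecimal 0x243, so printing the result with %x reads off the decimal digits 2, 4, 3 directly.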
A good friend died a few years ago. He was very interested in programming microprocessors. The last technical discussion we had was about Double-Dabble. Here is a demonstration in Python. So, this one is for Jaques.