You can keep your Winter Olympics with all its competitive falling and sliding. The real action was last Saturday at The National Museum of Computing, where you could witness the inaugural Great Digital Race. A simple proposition: how far can you get a computer through the Fibonacci sequence in fifteen seconds? And what happens when you pit computers from different eras against each other?
Let’s address some basic questions first. What is the Fibonacci sequence? Well, it’s a good choice because it’s surprisingly simple: start with 0 and 1, add them together, write down the result (1), then add that result to the number before it. Repeat. Forever.
0 + 1 = 1
1 + 1 = 2
1 + 2 = 3
2 + 3 = 5
3 + 5 = 8
5 + 8 = 13
…and so on. You may notice that this sequence grows exponentially; it starts hitting some seriously long numbers very soon. Rather than just a mathematical curiosity, the Fibonacci sequence keeps turning up in nature again and again. It appears to be some immutable rule of evolution. When drawn as a spiral you see the famous ‘golden ratio’ appear (Google it, I’m no Brian Cox). The head of a sunflower? The layout of the seeds? Fibonacci sequence. Cool, huh?
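If you want to see the golden ratio fall out of the sequence yourself, a quick Python sketch (my aside, nothing to do with the race code) will do it: divide each term by the one before it and watch the ratio settle on phi.

```python
# The ratio of consecutive Fibonacci terms converges on the golden
# ratio, phi = (1 + sqrt(5)) / 2 ≈ 1.6180339887.
a, b = 0, 1
for _ in range(40):
    a, b = b, a + b          # step along the sequence
phi = (1 + 5 ** 0.5) / 2
print(round(b / a, 6), round(phi, 6))  # both print as 1.618034
```

Forty steps in, the ratio already agrees with phi to well past six decimal places.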
The key thing here is that it is easy to code, something like:
A = 0
B = 1
REPEAT:
  SUM = A + B
  A = B
  B = SUM
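In Python (Micro or otherwise) that pseudocode translates almost line for line; here’s a minimal sketch that collects the first few terms:

```python
# Minimal Python version of the pseudocode above: keep the last two
# terms and repeatedly add them.
a, b = 0, 1
terms = [a, b]
while len(terms) < 10:
    a, b = b, a + b   # SUM = A + B; A = B; B = SUM, in one line
    terms.append(b)
print(terms)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```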
So, enquiring minds at The National Museum of Computing wondered, how would the results change from era to era, running this simple code on a variety of machines? Let’s race.
For full information on the results, have a look at the TNMoC article. Of note was the WITCH, the oldest operating computer in the world, which managed a respectable 3 numbers in 15 seconds. WITCH was built for reliability not speed, so in fact she did well.
I’m going to concentrate on the device to which I was assigned, the BBC micro:bit, and its rather more powerful cousin, the iPhone.
I was asked to code the micro:bit for the competition; it uses MicroPython as its primary programming language. However, I was going to be unavailable on the day of the actual race (although, for the benefit of the press, we had a dry run on the Thursday before, as they have lie-ins on Saturdays). Enter nine-year-old Connie, a friend of TNMoC who was eager to be involved. Brilliant, I thought, she can run the micro:bit entry in the Saturday race. She willingly accepted.
I started to sketch out some code for the micro:bit. Not the greatest challenge in programming terms, but I did want to ensure it stopped at 15 seconds, or the nearest a microcontroller can get to it. I was also hamstrung by one of the race rules: every number generated must be displayed. Hmm. Where? Well, we had some options. Displaying on the front LED matrix would slow things down ridiculously; in fact, a test run got me to 55, 11th in the sequence. The other option was to output each term to the debug console and display it on a connected laptop. This produced better results, getting to 645th in the sequence, a number 135 digits long.
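For illustration, here’s a plain-Python sketch of the shape of that race loop. It is not the actual competition code: the real entry ran MicroPython on the micro:bit, where you’d time against running_time() rather than time.monotonic(), and the names here are my own.

```python
import time

def race(seconds=15, show=print):
    """Generate Fibonacci terms until the deadline, displaying each
    one via `show`, per the race rule that every number be shown."""
    a, b = 0, 1
    count = 2                      # 0 and 1 are terms one and two
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        a, b = b, a + b
        count += 1
        show(b)                    # the display step is the bottleneck
    return count, b
```

Swapping what `show` does, LED matrix versus serial console, is exactly the choice that moved the score from 11th to 645th.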
So what happens if you don’t bother with all that display nonsense at all? A lot, as it turns out. Running a script that only displays the result at the end got to 6,053rd in the sequence, a whopping 1,265 digits in length. Satisfied I could bend the rules, I locked this down as my entry.
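The rule-bending version is the same loop with the per-term display stripped out, reporting only once at the end; again, a plain-Python sketch of the idea rather than the real micro:bit code:

```python
import time

def race_silent(seconds=15):
    """Same Fibonacci race loop, but nothing is shown until time is up."""
    a, b = 0, 1
    count = 2                      # 0 and 1 are terms one and two
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        a, b = b, a + b
        count += 1
    return count, len(str(b))      # position reached, digits in last term

position, digits = race_silent(1)  # a short run for demonstration
print(position, digits)
```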
I didn’t bank on Connie.
I’d encouraged Connie to have a go at writing her own version of the code, without seeing mine. We worked on it together and sure enough she started to understand how to time the script and run the loop. When I received her final code, I ran it on my micro:bit to test.
6,838th in the sequence, 1,429 digits long.
Why faster? Well, probably a bit of vanity on my part. To make the micro:bit look a bit flashy, I’d coded in a countdown made up of fifteen LEDs that went out one by one as the seconds passed. Connie was having none of such showing-off; speed was the goal. What surprised me most was how much of a difference removing the LED-updating code made to performance. Her code got 785 terms further than mine, all because I wanted some blinkenlights.
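Out of curiosity, you can reproduce the effect on a desktop: run the same loop twice, once clean and once with a small bit of busywork standing in for the LED update (the busywork is my stand-in, not the real display code), and count the terms each manages.

```python
import time

def count_terms(seconds, side_effect=None):
    """Count Fibonacci terms generated before the deadline, optionally
    paying for a side effect (a stand-in for a display update) each loop."""
    a, b = 0, 1
    count = 2
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        a, b = b, a + b
        count += 1
        if side_effect:
            side_effect()
    return count

plain = count_terms(0.2)
busy = count_terms(0.2, side_effect=lambda: sum(range(500)))
print(plain, busy)  # on a typical run, the busywork version gets fewer terms
```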
On Saturday, the little micro:bit trounced the competition and Connie won the first Great Digital Race. At this point you may be wondering about the iPhone, how could that not win? It was by far the most powerful computer in the race and should have swept the board. In the end, the iPhone 6s scored just fourth in the sequence. Fourth. That’s number eight.
The official reason given for the iPhone’s humiliation was that getting the highest number wasn’t really the point. The numbers had been entered into the iPhone by voice, using Siri, and with the best will in the world, you can’t get very far in fifteen seconds by voice. This was done to demonstrate how technology has moved on. After all, you can talk softly to the WITCH as much as you like, but she won’t do a thing until you offer up some delicious punched tape.
I’ve got a bit of a different take on the iPhone’s failure. The one thing every other computer had in common was that they were open to use. Anyone can write code for them, whether it’s the simple instruction set of the WITCH, BBC Basic, Excel macros or MicroPython. You can only write code for an iPhone if you pay (and continue to pay) the requisite amount of money to Apple for a developer licence. With that barrier, the only way to ‘input’ was by voice (or maybe by keyboard using the calculator). The iPhone failed because it refused to play.
It’s been fascinating to see, through such a simple exercise, how small differences can have major effects on code performance. Also, never get over-confident when there are nine-year-old code ninjas about.
The final number?
Main image © The National Museum of Computing
All other images in this post are copyright of their respective owners and may not be used without permission