Kevin and I created Epilepsend for an MIT final project a few months ago, and the thing about final projects is the deadline is always a day too early, no matter how early you start. Our project couldn’t even break the 0.3 mbps barrier until two days before the deadline! Fortunately, I discovered that an earlier commit had cropped the camera feed to avoid distortion, at the cost of significantly reducing the camera acuity. Removing that code boosted our bandwidth by an order of magnitude. So I’ve been wondering: Are there any other bandwidth-boosting tweaks out there that we just didn’t have time to try?
I initially wanted Epilepsend to be lightweight enough to run on my laptop, but it has a bad habit of scorching my CPU. Sure, it can hit 2 mbps for a few seconds, but then the temperature skyrockets and Epilepsend starts lagging behind the camera stream. For the Bad Apple!! demo, I had to use a 32-thread Ryzen 3950X CPU instead. More compute never hurts, so I recently tried some tweaks to fully embrace the more powerful CPU.
My first idea was actually to disable one of my earlier bandwidth tweaks. I’ve been testing Epilepsend using a 60 Hz monitor with a 30 FPS video, and it turns out that monitors are really slow. Monitor says 60 Hz? Yeah, it’ll still take 1/60 of a second to fully display a frame. You can see this in action in this video of a 30 FPS video playing on my monitor:
The monitor shows the complete frame for only 1/60 of a second! Consequently, I can’t run the camera (my phone) for the decoder at 30 Hz, since it easily desyncs from the monitor; instead I have to run it at 60 Hz and use every other frame. Decoding a frame is CPU-intensive, so if the current frame decodes successfully, the next frame is skipped, and otherwise the next frame is skipped with probability 1/2. This keeps the CPU load down and lets the decoder recover quickly from desyncs.
And if you have a beefy CPU and good cooling, that whole heuristic is useless. Just try to decode every frame. Easy as that.
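If you’re curious what that looks like in code, here’s a minimal sketch of the skip heuristic with a flag to turn it off on beefier machines. The frame source and decode_frame function are placeholders, not Epilepsend’s actual internals.

```python
import random

def run_decoder(frames, decode_frame, skip_heuristic=True):
    """Decode a 60 Hz camera stream that's watching a 30 FPS encoder.

    `frames` is any iterable of camera frames and `decode_frame` is the
    CPU-heavy routine that returns True when a frame decodes cleanly;
    both are placeholders, not Epilepsend's real interfaces.
    """
    skip_next = False
    for frame in frames:
        if skip_heuristic and skip_next:
            skip_next = False            # drop this frame to save CPU
            continue
        ok = decode_frame(frame)         # the expensive part
        if not skip_heuristic:
            continue                     # beefy CPU: just decode everything
        if ok:
            # In sync: the next 60 Hz camera frame shows the same encoder frame.
            skip_next = True
        else:
            # Possibly desynced: skip the next frame with probability 1/2
            # so the decoder can drift back onto the right phase.
            skip_next = random.random() < 0.5
```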
Another conceptually simple tweak I tried was putting the Reed-Solomon decoding in its own thread. Welp, Python’s GIL decided to ruin my day, so I had to use both the threading and multiprocessing modules, but it worked in the end. CPU usage went up by a factor of three! Hooray!
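For the curious, here’s roughly the shape of that workaround. It’s a sketch, not Epilepsend’s actual code: I’m using concurrent.futures.ProcessPoolExecutor (a thin wrapper around multiprocessing) instead of the threading-plus-multiprocessing combination, and the reedsolo package with 32 parity bytes is just an assumed stand-in for whichever Reed-Solomon codec you have.

```python
from concurrent.futures import ProcessPoolExecutor
from reedsolo import RSCodec, ReedSolomonError  # pip install reedsolo

# Each worker process builds its own codec; 32 parity bytes is an arbitrary example.
_rsc = RSCodec(32)

def rs_decode(block):
    """Runs in a worker process, so the heavy decoding never fights the main GIL."""
    try:
        return _rsc.decode(block)[0]  # reedsolo >= 1.0 returns (message, codeword, errata)
    except ReedSolomonError:
        return None                   # this block had more errors than we can fix

def decode_blocks(blocks):
    """Fan the Reed-Solomon work out across CPU cores while the main
    process keeps grabbing camera frames."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(rs_decode, blocks))

if __name__ == "__main__":  # the guard matters for the spawn start method
    demo = [RSCodec(32).encode(b"hello, Bad Apple!!")]
    print(decode_blocks(demo))
```

The point is that each worker is a separate process with its own interpreter and its own GIL, so the main process can keep grabbing camera frames while the byte-crunching happens elsewhere.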
On the physical side of things, I built a structure out of random books to hold my phone so my arms don’t get sore. I also tried placing a large foam board, held in place by a plush unicorn, behind the camera and turning up the encoder’s screen brightness, to keep stuff in the background from showing up as reflections on the encoder’s screen.
Sadly, these tweaks weren’t magic. The bandwidth went up to 2.242 mbps, which is pretty good but only 7% more than my previous record. I did manage to cut the time to decode Bad Apple!! in Bad Apple!! from nearly 180 seconds to 70 seconds. In the end, my biggest obstacle was that my phone has a bad habit of boiling up too, so I sometimes had to stick it in the refrigerator between tests to cool it off quickly.
Maybe someday I’ll buy an igloo and try again. I’m sure 4 mbps is within reach, with enough ingenuity and ice!