Bespoke shenanigans 2

Continuing on from the shenanigans of the last post comes more oscilloscope tomfoolery. So, scripts can’t run at audio rate, but samplers can. And as it turns out, samplers have a very convenient fill method that lets a script write samples directly into their buffer. Throw together some sample-generating code and prepare to play a stacker game, for real this time:
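
To make that concrete, here’s a minimal sketch of what such a script might look like, assuming Bespoke’s Python scripting API. The `sampleplayer` module name, the `get()` lookup path, and the exact `fill()` signature are assumptions based on the description above, not the actual game code:

```python
# A minimal sketch of filling a sampler from a Bespoke script.
import sampleplayer
import math

# Fetch the sampler module by name (name/path assumed).
s = sampleplayer.get("sampleplayer")

def on_pulse():
    # Generate one cycle of a sine wave as a stand-in for real image data.
    n = 2048
    samples = [math.sin(2 * math.pi * i / n) for i in range(n)]
    # Write the generated samples into the sampler's buffer,
    # which then plays back at audio rate on its own.
    s.fill(samples)
```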

The game image is generated and written into a sampler, which then plays on loop. This uses two-channel audio similar to the fubbles from the last post, though the points are generated directly by the game script. The game is controlled with MIDI input, similar to the driving game. It’s not as cool as using the fubble sprites, though, and it does start to lose some accuracy at this level of precision, which you can see as rough, sketchy-looking lines around corners.
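
To illustrate the two-channel approach, here’s a rough sketch of turning a list of game points into left/right (X/Y) sample streams. The normalization to -1..1 and the interpolation step count are assumptions for the sketch, not values from the actual game:

```python
# Each (x, y) point maps to one stereo sample: the left channel drives
# the scope's X axis and the right channel drives Y.
def points_to_channels(points, steps=4):
    left, right = [], []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        # Interpolate between consecutive points so the beam sweeps
        # smoothly along each segment instead of teleporting.
        for i in range(steps):
            t = i / steps
            left.append(x0 + (x1 - x0) * t)
            right.append(y0 + (y1 - y0) * t)
    # Land exactly on the final point.
    left.append(points[-1][0])
    right.append(points[-1][1])
    return left, right

# Example: trace the outline of a square filling the scope.
square = [(-1, -1), (1, -1), (1, 1), (-1, 1), (-1, -1)]
xs, ys = points_to_channels(square)
```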

Additionally, this technique tends to make the entire image jump around whenever the sampler is updated. I haven’t found a good way around this, though you can stabilize it a bit with a feedback node that holds the image while the sampler is being updated. To minimize the effect, the game image is only redrawn when necessary, such as when a piece moves or locks; it’s completely unplayable if updated every frame.
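
The redraw-only-when-necessary idea is simple enough to sketch. Everything here — the callbacks, `render_points`, `game`, and `fill_sampler` — is a hypothetical stand-in for the real game script:

```python
# Keep a dirty flag and only refill the sampler when the game state
# actually changed.
dirty = True

def on_piece_moved():
    # Called whenever a piece moves or locks (hypothetical hook).
    global dirty
    dirty = True

def update():
    global dirty
    if not dirty:
        return  # skip the fill, and with it the image jump it causes
    left, right = points_to_channels(render_points(game))
    fill_sampler(left, right)  # hypothetical wrapper around fill()
    dirty = False
```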

The Touhou part

Now that we’ve unlocked the ability to draw arbitrary images on the oscilloscope, let’s try drawing something else:

Yep, that’s Bad Apple!!. The whole audio file is generated ahead of time by an external script, then synced with the original audio and played back through Bespoke. But how do we go from a video to a vector-art sound file?

Well, we’ll have to trace a path around the image. The first step is making something to trace. An edge-detection algorithm is fairly straightforward: for each pixel, look at its neighbors and see if the color value changes drastically. This is a huge simplification, but since we’re working with what is essentially a one-bit video, it’ll work fine.
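
As a sketch, here’s roughly what that looks like in Python, assuming the frame has already been reduced to a 2D list of 0/1 values (which is not necessarily how the actual script stores it):

```python
# A pixel is an edge if any 4-connected neighbor has a different value.
def detect_edges(img):
    h, w = len(img), len(img[0])
    edges = set()
    for y in range(h):
        for x in range(w):
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h and img[ny][nx] != img[y][x]:
                    edges.add((x, y))
                    break
    return edges
```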

The next step is to take all the red edge-detected regions and follow them. This runs into an issue fairly quickly: the edge-detected regions don’t necessarily contain a single pixel-wide path. I solved this by finding pixels with more than two neighbors. These are marked in green, with an extra pixel acting as the entry point marked in dark green. Then a simple pathfinding algorithm finds the shortest route through and discards the unvisited pixels.
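
Here’s one way that step could look in Python. Treating the last pixel popped by the BFS as the path’s far end, and taking the `entry` pixel as given, are assumptions on my part rather than the actual implementation:

```python
from collections import deque

def neighbors(p, edges):
    # 8-connected edge-pixel neighbors.
    x, y = p
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx or dy) and (x + dx, y + dy) in edges]

def junctions(edges):
    # Pixels with more than two edge neighbors: the green branch points.
    return {p for p in edges if len(neighbors(p, edges)) > 2}

def trace_path(edges, entry):
    # BFS from the entry point; prev records shortest-route parents.
    prev = {entry: None}
    queue = deque([entry])
    last = entry
    while queue:
        p = queue.popleft()
        last = p  # in BFS order, the final pop is a farthest pixel
        for n in neighbors(p, edges):
            if n not in prev:
                prev[n] = p
                queue.append(n)
    # Walk back from the far end; pixels off this route are discarded.
    path = []
    while last is not None:
        path.append(last)
        last = prev[last]
    return path[::-1]
```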

This results in a clean image with several separate paths that can easily be converted into a vector path and then into oscilloscope-drawing audio.
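
For illustration, here’s a hedged sketch of that final conversion: normalize pixel coordinates to the scope’s -1..1 range, put X on the left channel and Y on the right, and write a stereo WAV. The sample rate and the one-sample-per-pixel pacing are arbitrary choices here, not what the real script does:

```python
import wave, struct

def paths_to_wav(paths, width, height, filename, rate=48000):
    frames = bytearray()
    for path in paths:
        for x, y in path:
            # Map pixel space to -1..1, flipping Y so up is positive.
            sx = x / width * 2 - 1
            sy = 1 - y / height * 2
            frames += struct.pack('<hh',
                                  int(sx * 32767), int(sy * 32767))
    with wave.open(filename, 'wb') as f:
        f.setnchannels(2)   # stereo: left = X, right = Y
        f.setsampwidth(2)   # 16-bit samples
        f.setframerate(rate)
        f.writeframes(bytes(frames))
```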

This pruning step isn’t strictly necessary; it just prevents the pathfinder from tracing around and getting stuck in a corner. The rest of the edge would still be traced out as another path, but that leaves the oscilloscope drawing more undesirable lines when jumping between path segments.

This algorithm is likely far from perfect, and it’s pretty inefficient too: the video had to be downscaled significantly to generate in any reasonable timeframe. On the other hand, it works fairly well, and the code that generated it is available on GitLab, though it’s an incomprehensible mess of procedural pixel munging.