This runs completely in OPENRNDR, where I mirrored the wiring of shadertoy.com's channel inputs and provide the buffers and GLSL code with a header that supplies the Shadertoy-only terms like “iMouse”.
Sounds cool! How does the converter work? Is it a wrapper using ShadeStyle? Or does it convert the code to the Kotlin-based shading language linked in this post?
The shader is used in a Filter that draws to a full-screen quad. The remaining code handles parsing and housekeeping, and binds the input channels to the right uniforms.
It only replaces the mainImage line (which breaks under GLSL 330) and otherwise wraps the original code.
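To make the "wrap, don't rewrite" idea concrete, here is a minimal sketch of what such a wrapper could look like. All names here (`shadertoyHeader`, `wrapShadertoy`, `o_output`) are my own illustration, not taken from the actual converter, and the uniform list is trimmed to the common ones:

```kotlin
// Sketch: prepend a GLSL 330 header declaring the Shadertoy-only uniforms,
// keep the pasted code untouched, and append a main() that forwards to
// mainImage(). Names are illustrative, not from the actual project.
val shadertoyHeader = """
    #version 330
    uniform vec3 iResolution;  // viewport resolution in pixels
    uniform float iTime;       // shader playback time in seconds
    uniform vec4 iMouse;       // mouse coordinates, Shadertoy convention
    uniform sampler2D iChannel0;
    uniform sampler2D iChannel1;
    uniform sampler2D iChannel2;
    uniform sampler2D iChannel3;
    out vec4 o_output;
""".trimIndent()

val shadertoyFooter = """
    void main() {
        mainImage(o_output, gl_FragCoord.xy);
    }
""".trimIndent()

fun wrapShadertoy(pastedCode: String): String =
    shadertoyHeader + "\n\n" + pastedCode + "\n\n" + shadertoyFooter
```

The pasted Shadertoy tab stays byte-for-byte identical; only the surrounding boilerplate changes.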
After adapting the audio to match the buffer format from Shadertoy, I was able to get mic input working.
I am still working on importing the original textures as needed (or maybe just hard-copying them over). They are literally just images loaded into color buffers.
I am quite unsure whether an API-to-project function should exist. I can see the API JSONs and was able to reverse engineer them to about 90%. It would be perfectly fine to just copy tab by tab into one's own GLSL files and write a few lines describing how they should be wired together.
Anyway … When can I turn this into an orx ? ;D
I can post some more on this if anyone's interested.
As always, I am open to questions.
I used TarsosDSP, but this is overkill for the audio input.
Despite being poorly documented, sound in Shadertoy is passed as a 512×2 color buffer of color type ‘R’ (16- or 32-bit float, I'm not sure which).
The first 512 “pixels” are for FFT, the next 512 for raw audio waveform.
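As a sketch of that layout (under my reading of the format above; the helper name is mine), the mic data could be packed into a flat float array before uploading:

```kotlin
// Sketch of packing mic data into Shadertoy's audio layout: a 512x2,
// single-channel float texture. Row y = 0 holds 512 FFT magnitudes,
// row y = 1 holds 512 raw waveform samples. Names are illustrative.
const val AUDIO_WIDTH = 512

fun packAudioTexture(fft: FloatArray, waveform: FloatArray): FloatArray {
    require(fft.size >= AUDIO_WIDTH && waveform.size >= AUDIO_WIDTH)
    val texels = FloatArray(AUDIO_WIDTH * 2)
    for (x in 0 until AUDIO_WIDTH) {
        texels[x] = fft[x]                     // row 0: frequency bins
        texels[AUDIO_WIDTH + x] = waveform[x]  // row 1: waveform
    }
    return texels
}
```

In OPENRNDR this array would then be written into a 512×2 single-channel float color buffer and bound to the appropriate iChannel, though the exact upload path depends on your setup.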
You can absolutely do this with orx-minim; I just haven't gotten around to using it yet.
For converting and running, I split the work into a few classes. Each Shadertoy project gets its own package with copy-and-paste GLSL files (no changes required) and one file inheriting from ProjectRenderer that implements importProject, where the wiring is defined.
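As a sketch of what that per-project wiring might express (this data model is my own illustration, not the actual ProjectRenderer API), each pass pairs a GLSL file with a description of what feeds its iChannel inputs:

```kotlin
// Illustrative data model for per-project wiring: each render pass gets a
// GLSL file plus a map of iChannel index -> input source. These types are
// my own sketch, not the actual ProjectRenderer / importProject API.
sealed class ChannelInput {
    data class Texture(val path: String) : ChannelInput()
    data class BufferPass(val name: String) : ChannelInput() // another pass's output
    object Audio : ChannelInput()                            // the 512x2 sound buffer
}

data class Pass(
    val name: String,
    val glslFile: String,
    val channels: Map<Int, ChannelInput>
)

// A two-pass project: "BufferA" reads the audio buffer,
// and its output feeds iChannel0 of the final "Image" pass.
val examplePasses = listOf(
    Pass("BufferA", "bufferA.glsl", mapOf(0 to ChannelInput.Audio)),
    Pass("Image", "image.glsl", mapOf(0 to ChannelInput.BufferPass("BufferA")))
)
```

Something like this keeps the "few lines of wiring" per project declarative, while the shared renderer code handles creating the buffers and binding the uniforms.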
Now I'm wondering how hard it would be to explore the API from OPENRNDR and load shaders dynamically. And maybe abuse it: plug layer A from one shader and layer B from another and see what kind of mess happens XD
A live shadertoy remixer
It would be great if the API reported how much GPU time each shader uses, so you could avoid picking super heavy ones that would bring the FPS down to 1.