Shadertoy to OPENRNDR shader converter

To add to the noise from my side:

I am working on a Shadertoy converter.
It has worked out quite nicely so far →

This runs completely in OPENRNDR: I mirror the channel-input wiring of shadertoy.com, provide the buffers, and prepend a header to the GLSL code that defines the Shadertoy-only terms like “iMouse”.


Sounds cool :slight_smile: How does the converter work? Is it like a wrapper using ShadeStyle? Or does it convert the code to the Kotlin-based shading language linked in this post?

The magic comes from this template:

""" 
// Shader Interface:
#version 330
  
in vec2 v_texCoord0;
  
""" +

channels.joinToString("\n") { (index, _) ->
"""
uniform sampler2D tex$texCounter;
#define iChannel$index tex${texCounter++}
"""
} +

"""
  
out vec4 o_color;
  
uniform vec3 iResolution;
uniform float iTime;
uniform float iTimeDelta;
uniform float iFrame;
uniform vec4 iMouse;
  
#define fragCoord (v_texCoord0 * iResolution.xy)
#define fragColor o_color

// Updated code
"""
+ shadertoyCode.replace("void mainImage ( out vec4 fragColor,  in vec2 fragCoord )", "void main()")

That is used in a Filter that draws a full-screen quad. The remaining code is parsing and housekeeping, plus binding the input channels to the right uniforms.
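The channel part of the template can be condensed into a self-contained sketch (the function name and the plain `Int` channel list are mine, for illustration):

```kotlin
// Hypothetical condensed version of the channel section of the template.
// Each Shadertoy channel index gets a sampler uniform plus a #define
// mapping iChannelN onto it.
fun buildChannelDefines(channelIndices: List<Int>): String {
    var texCounter = 0
    return channelIndices.joinToString("\n") { index ->
        // $texCounter and ${texCounter++} interpolate the same value:
        // the post-increment only takes effect after the second interpolation.
        "uniform sampler2D tex$texCounter;\n#define iChannel$index tex${texCounter++}"
    }
}
```

For example, `buildChannelDefines(listOf(0, 2))` declares `tex0` and `tex1` and maps them to `iChannel0` and `iChannel2`, so the sampler numbering stays dense even when channel indices are sparse.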

It only replaces the mainImage line (which breaks under GLSL 330) and otherwise just wraps the code.
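Since the exact-string replace is sensitive to whitespace and parameter names, a regex variant (my suggestion, not part of the project) would be more forgiving:

```kotlin
// Hypothetical: match whitespace/parameter-name variants of the Shadertoy
// entry point ("in" is optional on Shadertoy) and rewrite it to main().
val mainImageRegex = Regex(
    """void\s+mainImage\s*\(\s*out\s+vec4\s+\w+\s*,\s*(?:in\s+)?vec2\s+\w+\s*\)"""
)

fun rewriteMainImage(shadertoyCode: String): String =
    mainImageRegex.replace(shadertoyCode, "void main()")
```

One caveat either way: the `#define fragCoord`/`#define fragColor` lines in the header still assume the shader body uses the conventional parameter names.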

You can find the source code here: flow-template/src/main/kotlin/flow/shadertoy at master · Treide1/flow-template · GitHub

I used it in a demo in the same repo: flow-template/src/main/kotlin/demos/FlowTemplate_Demo3.kt at master · Treide1/flow-template · GitHub

Very cool! Feel free to post updates about this!

ps. I split this into a separate thread so it can be found on its own, and to keep the other thread focused on the new Kotlin-based shader syntax :slight_smile:

Some news on the project:

  • After getting audio to match the buffer format from shadertoy, I was able to get mic input to work.
  • I am still working on importing the original textures as needed (or maybe just hard-copying them over). They are literally just images loaded into color buffers.

I am quite unsure whether an API-to-project function should exist. I can see the API JSONs and was able to reverse engineer about 90% of them. It would be perfectly fine to just copy tab by tab into one’s own GLSL files and write a few lines describing how they should be wired together.

Anyway … when can I turn this into an orx? ;D

I can post some more on this if anyone’s interested.
As always, I am open to questions.


I am very curious to hear more about it! :slight_smile:

How did you do sound? Did you use orx-minim?

It would be great to be able to use any videos you have lying around. It got boring for me to always use the few available on Shadertoy.

Are you creating a new project per shader? It would be nice to create just a package per shader so all my shaders could be part of one project :slight_smile:

I used TarsosDSP, but that is overkill for the audio input.
Though not well documented, sound in Shadertoy is passed as a 512x2 color buffer of color type ‘R’ (16- or 32-bit float, I’m not sure which).
The first 512 “pixels” are the FFT, the next 512 the raw audio waveform.

You can absolutely do this with orx-minim, I just haven’t gotten around to using it yet.
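Assuming the layout described above, and assuming the data ends up as 32-bit floats, packing the two rows could be sketched like this before handing the bytes to a 512x2 single-channel color buffer (the function name is mine):

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Hypothetical packing for a 512x2, single-channel float texture:
// row 0 = FFT magnitudes, row 1 = raw waveform samples.
fun packAudioTexture(fft: FloatArray, waveform: FloatArray): ByteBuffer {
    require(fft.size == 512 && waveform.size == 512)
    val buffer = ByteBuffer.allocateDirect(512 * 2 * 4).order(ByteOrder.nativeOrder())
    fft.forEach { buffer.putFloat(it) }       // first row of the texture
    waveform.forEach { buffer.putFloat(it) }  // second row
    buffer.rewind()
    return buffer
}
```

The resulting buffer can then be written into a color buffer created with format R and float type, so the shader samples the FFT at y = 0 and the waveform at y = 1.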

For converting and running, I split the work into a few classes. Each Shadertoy project gets its own package with copy-and-paste GLSL files (no changes required) and one file inheriting from ProjectRenderer that implements importProject, where the wiring is defined.

For example, the FluidSimulation (original: “Chimera’s Breath” by nimitz) class implements it this way:

override fun ProjectImporter.importProject(): ShadertoyProject {
    return buildAndImport("$projectsPath/fluidSim", "fluidSim") {
        bufferA!!.setInput(CHANNEL_0, BUFFER_C_IN)
        bufferB!!.setInput(CHANNEL_0, BUFFER_A_IN)
        bufferC!!.setInput(CHANNEL_0, BUFFER_B_IN)
        bufferD!!.setInput(CHANNEL_0, BUFFER_A_IN).setInput(CHANNEL_1, BUFFER_D_IN)
        image    .setInput(CHANNEL_0, BUFFER_D_IN)
    }
}
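The chainable setInput calls above can be modeled with a tiny sketch (all names hypothetical, not the actual flow-template API) to show how the wiring accumulates per pass:

```kotlin
// Toy model of the channel wiring, not the real flow-template types.
enum class Channel { CHANNEL_0, CHANNEL_1 }
enum class Input { BUFFER_A_IN, BUFFER_B_IN, BUFFER_C_IN, BUFFER_D_IN }

class Pass(val name: String) {
    val inputs = mutableMapOf<Channel, Input>()

    // Returns `this` so calls can be chained, as in the example above.
    fun setInput(channel: Channel, input: Input): Pass {
        inputs[channel] = input
        return this
    }
}
```

With this, `Pass("bufferD").setInput(Channel.CHANNEL_0, Input.BUFFER_A_IN).setInput(Channel.CHANNEL_1, Input.BUFFER_D_IN)` ends up with both channels wired on one pass, mirroring the bufferD line in the example.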

So I can use more than one shadertoy project in one OPENRNDR project. Here I crossfade from one shader to the next:

Now I’m wondering how hard it would be to explore the API from OPENRNDR and load shaders dynamically :slight_smile: And maybe abuse it: plug layer A from one and layer B from another and see what kind of mess happens XD

A live shadertoy remixer :slight_smile:

It would be great if the API provided how much GPU each shader uses, so you can avoid picking super heavy ones that would bring the FPS to 1.

Thanks for sharing!