I am currently trying to understand how to efficiently play multiple videos at the same time, and also how to seek within them and modulate the playback speed.
A naive approach was:
import org.openrndr.application
import org.openrndr.color.ColorRGBa
import org.openrndr.ffmpeg.loadVideo

fun main() = application {
    configure {
        width = 1280
        height = 720
    }
    program {
        val video = loadVideo("data/videos/small/my_video.MOV")
        // try to show a still frame at second 4
        video.play()
        video.seek(4.0)
        video.pause()
        extend {
            drawer.clear(ColorRGBa.BLACK)
            video.draw(drawer)
        }
    }
}
This results in a blue screen and multiple
error? -35
lines in stdout.
Jumping to the same second inside the loop results in stuttering playback and a CPU load of around 1000% on my MacBook Pro (i9).
Both of these issues make it infeasible, in my opinion, to implement slow playback by manually stepping through the seconds of the video.
When I try to play multiple videos I also get warning messages like
WARN [Thread-6(decoder)] o.o.f.Decoder ↘ video queue is almost full. [video queue: 49, audio queue: 0]
and the CPU load goes beyond 1600%. I already reduced the resolution to 720p with ffmpeg. With ffprobe I get negligible CPU usage.
Is OPENRNDR performant enough to control the playback of six 720p videos, layered on top of each other, at 30 fps in real time?
I haven’t tried playing 6 videos at a time, but I do know that some codecs handle seeking much better than others. One reason is that with certain codecs, frames cannot simply be read as an image from the video file; they are constructed from previous frames. Requesting a specific frame can therefore be slow, because the decoder needs to reconstruct it first. This can be avoided by configuring the codec, using a different codec, or perhaps just loading single images (PNG is probably slower to decode than JPG or uncompressed TIFF).
The codec I remember people using is HAP. Maybe if you encode your video with it, things will be smoother?
This was found through trial and error: it seems you can pass a function that keeps the time. If the function is, for example, { seconds / 5.0 }, the video plays 5 times slower. In this example I modulated the speed using a sine wave.
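For illustration, here is a minimal sketch of that idea. It assumes the player accepts a clock lambda as suggested above; depending on the OPENRNDR version, the clock may need to be passed to VideoPlayerFFMPEG.fromFile instead of loadVideo, so treat the parameter as an assumption. The derivative of the clock is the playback speed, so a clock of seconds + sin(seconds) makes the speed oscillate between 0x and 2x.

import org.openrndr.application
import org.openrndr.color.ColorRGBa
import org.openrndr.ffmpeg.loadVideo
import kotlin.math.sin

fun main() = application {
    program {
        // Hypothetical clock parameter: the playback position follows our own time base.
        // d/dt (t + sin(t)) = 1 + cos(t), so the speed swings between 0x and 2x.
        val video = loadVideo("data/videos/small/my_video.MOV",
            clock = { seconds + sin(seconds) })
        video.play()
        extend {
            drawer.clear(ColorRGBa.BLACK)
            video.draw(drawer)
        }
    }
}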
I tried setting videoFrameQueueSize = 500 to keep all frames in memory; I am not sure whether that worked, but I do not see any big hiccups when jumping from the end to the start.
I believe that if I set synchronizeToClock = false it plays as fast as possible, which is 60 fps by default.
Also, I set useHardwareDecoding = false because otherwise it wasn’t working on my system. Maybe that is related to a recent update.
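Put together, a minimal sketch of those three settings could look like the following. It assumes loadVideo accepts a VideoPlayerConfiguration as a named parameter; on some versions you may have to go through VideoPlayerFFMPEG.fromFile instead.

import org.openrndr.application
import org.openrndr.color.ColorRGBa
import org.openrndr.ffmpeg.VideoPlayerConfiguration
import org.openrndr.ffmpeg.loadVideo

fun main() = application {
    program {
        // The three settings discussed above; the values are the ones I experimented with.
        val config = VideoPlayerConfiguration().apply {
            videoFrameQueueSize = 500   // buffer many decoded frames (uses a lot of memory)
            synchronizeToClock = false  // decode as fast as possible instead of in real time
            useHardwareDecoding = false // fall back to software decoding
        }
        val video = loadVideo("data/videos/small/my_video.MOV", configuration = config)
        video.play()
        extend {
            drawer.clear(ColorRGBa.BLACK)
            video.draw(drawer)
        }
    }
}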
I am trying to jump to specific points in the video, freeze at specific points in time, and seek back and forth, based on received OSC messages.
Thanks for the hint on using the clock instead of seek. However, I do not fully understand how this clock behaves, as using a fixed value does not allow me to seek to specific moments in the video. Something along the lines of
clock = {(seconds*speed) + offset}
with speed = 0 and offset = 4 still plays the video normally up to second 4 and then stops. It does not allow me to go back to, for example, offset = 2.
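For reference, this is roughly the pattern I am attempting: speed and offset are variables that an OSC handler (not shown) would update, and the clock is the formula above. The clock parameter is an assumption based on the earlier posts, and as described, lowering offset does not make the player step backwards.

import org.openrndr.application
import org.openrndr.color.ColorRGBa
import org.openrndr.ffmpeg.loadVideo

fun main() = application {
    program {
        // Hypothetical playhead controls; an OSC handler would mutate these at runtime.
        var speed = 0.0   // 0 freezes the playhead
        var offset = 4.0  // absolute position in seconds
        val video = loadVideo("data/videos/small/my_video.MOV",
            clock = { seconds * speed + offset })
        video.play()
        extend {
            drawer.clear(ColorRGBa.BLACK)
            video.draw(drawer)
        }
    }
}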
It would be great to have some documentation on the clock specifics, at least within the source code.
PS: On Unix you can get over 100% CPU because 100% means one core is fully used; my i9, for example, has 16 vcores, so 1600% CPU is available.
Which version of OPENRNDR are you using? I build OPENRNDR from GitHub to have the latest version. On March 6th ffmpeg was updated. I run it on an i5 CPU.