I'm looking to render Canvas B inside of Canvas A, and I want to be able to physically print an image of Canvas B onto a piece of paper. Think of it like the preview of a page you're going to print. I've done something similar to my goal in p5.js using its instance mode. Hyperlinked are two different examples.
Is the ability to do this available in OPENRNDR? Can I render an OPENRNDR program inside of a running program? What about an application inside of an application? Can the main function contain two application instances? Can I freely create a drawer object and render its contents to an existing drawer?
If not, are there alternative methods in OPENRNDR that will get what I’d like?
As far as I know, the instance mode in p5.js is needed to be able to have several sketches in the same HTML page. I’m not sure how that translates exactly to an environment without an HTML containing the sketches.
If your goal is to render multiple canvases, and maybe export some of them to disk as an image, you can do that with render targets: Render targets | OPENRNDR GUIDE. A render target contains a color buffer, which you can save to disk.
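To make that concrete, here is a minimal sketch of the render-target approach (the file name `canvas-b.png` and the circle are just placeholders): draw into an off-screen target, show it in the window, and save its color buffer on a key press.

```kotlin
import org.openrndr.application
import org.openrndr.color.ColorRGBa
import org.openrndr.draw.isolatedWithTarget
import org.openrndr.draw.renderTarget
import java.io.File

fun main() = application {
    program {
        // off-screen canvas with its own color (and depth) buffer
        val rt = renderTarget(400, 300) {
            colorBuffer()
            depthBuffer()
        }
        extend {
            // draw into the render target instead of the window
            drawer.isolatedWithTarget(rt) {
                drawer.clear(ColorRGBa.WHITE)
                drawer.fill = ColorRGBa.RED
                drawer.circle(200.0, 150.0, 80.0)
            }
            // display the off-screen canvas in the main window
            drawer.image(rt.colorBuffer(0))
        }
        // save the color buffer to disk when "s" is pressed
        keyboard.keyDown.listen {
            if (it.name == "s") {
                rt.colorBuffer(0).saveToFile(File("canvas-b.png"))
            }
        }
    }
}
```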
If what you want is to have two full programs, each with its own extend {} block, running side by side in the same window, that requires a bit more thinking.
There is an OPENRNDR extension that allows running multiple programs side by side, but it’s not yet public. It’s available if you build orx locally. It’s called orx-view-box and you can find some demos here. But if you’re not so familiar with OPENRNDR and Kotlin yet I would try other approaches first.
By looking at the examples you linked, it is not obvious what you want to build, because those examples are very minimal. Could you maybe elaborate on what you would see on the screen and how you would interact with it?
Thanks for the swift reply. I think render targets might be the right thing I’m looking for. I’ll still explain what my goal is in more detail though:
Application A renders Canvas A across the full width and height of the application window. Canvas A displays data received from user input. When the user presses a button, Canvas B renders a data visualization of that input. The visualization is displayed in the corner of the application window at a size of width/4 by height/4. Then an external printer prints what is displayed on Canvas B. Another way to describe it: imagine you took a screenshot on your phone and a preview appeared in the bottom corner of your screen, except instead of being an exact replica of your screen, the preview is a bar graph of the sum of all R, G, and B values across all the pixels of the image on your screen.
My rationale for the separate canvases is that I want to avoid rendering the Canvas B image twice (once for the print preview and again for the actual printing process). I figured it's easier to render it once and feed the printer the color buffer of Canvas B.
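That render-once idea can be sketched roughly like this (untested; the key "p", the file name, and the rectangle standing in for the real visualization are all placeholders): Canvas B is drawn a single time when requested, and the same color buffer then serves both the on-screen preview and the print/export step.

```kotlin
import org.openrndr.application
import org.openrndr.color.ColorRGBa
import org.openrndr.draw.isolatedWithTarget
import org.openrndr.draw.renderTarget
import java.io.File

fun main() = application {
    configure {
        width = 1400
        height = 800
    }
    program {
        // Canvas B at a quarter of the window size
        val canvasB = renderTarget(width / 4, height / 4) {
            colorBuffer()
            depthBuffer()
        }
        var requestPrint = false
        var hasPreview = false

        keyboard.keyDown.listen {
            if (it.name == "p") requestPrint = true
        }

        extend {
            if (requestPrint) {
                // render the visualization exactly once
                drawer.isolatedWithTarget(canvasB) {
                    drawer.clear(ColorRGBa.WHITE)
                    // placeholder for the real data visualization
                    drawer.fill = ColorRGBa.BLUE
                    drawer.rectangle(20.0, 20.0, 100.0, 60.0)
                }
                // the same buffer feeds the printer / export
                canvasB.colorBuffer(0).saveToFile(File("printout.png"))
                hasPreview = true
                requestPrint = false
            }
            drawer.clear(ColorRGBa.GRAY)
            if (hasPreview) {
                // preview in the bottom-right corner
                drawer.image(
                    canvasB.colorBuffer(0),
                    (width - canvasB.width).toDouble(),
                    (height - canvasB.height).toDouble()
                )
            }
        }
    }
}
```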
Edit:
I actually got it working a bit before I saw your reply. It's kind of what I'm looking for, but I still have a few questions. Here's my code:
PrintState.kt
class PrintState(program: Program) {
    val appProgram = program
    val appWidth = appProgram.width
    val appHeight = appProgram.height
    val rt = renderTarget(600, 400) {
        colorBuffer()
        depthBuffer()
    }

    fun renderPrintout(drawer: Drawer) {
        drawer.isolatedWithTarget(rt) {
            drawer.fill = ColorRGBa.RED
            drawer.rectangle(40.0, 40.0, rt.width - 40.0, 60.0)
            drawer.fill = ColorRGBa.BLUE
            drawer.rectangle(rt.width - 40.0, rt.height - 40.0, 80.0, 80.0)
            drawer.fill = ColorRGBa.GREEN
            drawer.rectangle(rt.width - 40.0, rt.height - 40.0, 40.0, 40.0)
            drawer.fill = ColorRGBa.BLACK
            drawer.rectangle(rt.width + 40.0, rt.height + 40.0, 40.0, 40.0)
        }
        drawer.image(rt.colorBuffer(0))
    }

    fun draw(drawer: Drawer) {
        drawer.clear(ColorRGBa.LIGHT_SEA_GREEN)
        renderPrintout(drawer)
    }
}
Main.kt
fun main() {
    application {
        configure {
            width = 1400
            height = 800
        }
        program {
            val pS = PrintState(program)
            extend {
                pS.draw(drawer)
            }
        }
    }
}
As demonstrated by the blue square and the black square, the render target will render items that are outside its buffer width and height. So what's the point of declaring them?
Something I noticed is that the render target is always drawn with its origin at the top left of the application. Is there a way to place it somewhere else? Or would it be simpler to use a transformation?
Yes, the code in the second reply is perfect! I was about to reply with the issue of coordinates not matching in the rendertarget with the global drawer, but that solves it.
The code in the first reply is what I'm looking for, thank you. I assume one can move its location by changing the second and third parameters in the drawer.image call. A question about the code in your first reply, if you don't mind. I'm still rather new to OPENRNDR and Kotlin:
Why is canvas0’s width and height at width x 2 and height x 2?
Are depth buffers not needed to draw circles and rectangles, but needed to draw lines and complex shapes?
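For reference, that assumption about drawer.image seems right: it has overloads that take a position, and a position plus size. A small untested sketch (the coordinates are arbitrary):

```kotlin
import org.openrndr.application
import org.openrndr.draw.renderTarget

fun main() = application {
    configure {
        width = 1400
        height = 800
    }
    program {
        val rt = renderTarget(600, 400) { colorBuffer() }
        extend {
            // top-left corner (the default position)
            drawer.image(rt.colorBuffer(0))
            // drawn with its origin at (700, 450)
            drawer.image(rt.colorBuffer(0), 700.0, 450.0)
            // drawn at (700, 50), scaled down to 150 x 100
            drawer.image(rt.colorBuffer(0), 700.0, 50.0, 150.0, 100.0)
        }
    }
}
```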
The canvas0's dimensions were larger just to show that it's possible to render something at a higher resolution and display it smaller.
I think it is the case that some primitives do not require depth buffers. ShapeContours and Shapes definitely do need them.
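In practice that means: if a render target will receive contours or shapes, declare a depth buffer in its builder. A minimal untested sketch, with the circle contour as a placeholder:

```kotlin
import org.openrndr.application
import org.openrndr.color.ColorRGBa
import org.openrndr.draw.isolatedWithTarget
import org.openrndr.draw.renderTarget
import org.openrndr.shape.Circle

fun main() = application {
    program {
        val rt = renderTarget(400, 300) {
            colorBuffer()
            depthBuffer() // needed once ShapeContours / Shapes are drawn
        }
        extend {
            drawer.isolatedWithTarget(rt) {
                drawer.clear(ColorRGBa.WHITE)
                drawer.stroke = ColorRGBa.BLACK
                drawer.fill = null
                // a ShapeContour, which requires the depth buffer above
                drawer.contour(Circle(200.0, 150.0, 80.0).contour)
            }
            drawer.image(rt.colorBuffer(0))
        }
    }
}
```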
Idea: take a look at Camera2D. I find it useful to pan, scale and rotate with the mouse, to center designs with random aspect ratios, or to zoom in and explore details of a design.