Currently, ShaderEffect can only be applied per-renderable (sprite, text, etc.). There is no way to apply a shader effect to an entire camera's output — for example a full-screen CRT filter, vignette, chromatic aberration, or color grading.
The root cause is that the WebGL renderer has no framebuffer object (FBO) / render-to-texture support. Everything renders directly to the screen canvas. Without an intermediate texture to capture a camera's output, there is nothing to feed into a post-processing shader.
Current State
- `Camera2d` extends `Renderable` and inherits the `shader` property
- However, setting `camera.shader` doesn't work as a post-process: each child's `postDraw()` resets `renderer.customShader = undefined`, so the shader is lost after the first child renders
- `CanvasRenderTarget` wraps a canvas + context but is only used as the main screen target — not as an offscreen render target
- The WebGL renderer has no `gl.createFramebuffer()` / `gl.bindFramebuffer()` usage anywhere
- Batchers (quad, primitive, mesh) flush directly to the bound context (always the screen)
What Needs to Change
1. WebGL Framebuffer Object (FBO) support
Add a `WebGLRenderTarget` class (or extend `CanvasRenderTarget`) that wraps:

- A `WebGLFramebuffer`
- A `WebGLTexture` color attachment (same size as the camera viewport)
- An optional depth/stencil renderbuffer attachment
- `bind()` / `unbind()` methods to redirect rendering
- Resize handling when the camera/canvas resizes
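A minimal sketch of what such a render target could look like. The class shape and integration details are assumptions (this is not existing melonJS code); only the raw WebGL calls (`gl.createFramebuffer`, `gl.framebufferTexture2D`, ...) are standard API:

```javascript
// Hypothetical sketch — class name and API are assumptions, not melonJS code.
// Wraps a WebGLFramebuffer with a color texture attachment and an optional
// depth/stencil renderbuffer, sized to the camera viewport.
class WebGLRenderTarget {
    constructor(gl, width, height, useDepth = false) {
        this.gl = gl;
        this.framebuffer = gl.createFramebuffer();
        this.texture = gl.createTexture();
        this.depthBuffer = useDepth ? gl.createRenderbuffer() : null;
        this.resize(width, height);
    }

    resize(width, height) {
        const gl = this.gl;
        this.width = width;
        this.height = height;
        // (re)allocate the color texture at the new size
        gl.bindTexture(gl.TEXTURE_2D, this.texture);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
            gl.RGBA, gl.UNSIGNED_BYTE, null);
        // attach color (and optionally depth/stencil) to the framebuffer
        gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer);
        gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
            gl.TEXTURE_2D, this.texture, 0);
        if (this.depthBuffer) {
            gl.bindRenderbuffer(gl.RENDERBUFFER, this.depthBuffer);
            gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_STENCIL, width, height);
            gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_STENCIL_ATTACHMENT,
                gl.RENDERBUFFER, this.depthBuffer);
        }
        gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    }

    bind() {
        // redirect all subsequent draw calls into the offscreen texture
        this.gl.bindFramebuffer(this.gl.FRAMEBUFFER, this.framebuffer);
        this.gl.viewport(0, 0, this.width, this.height);
    }

    unbind() {
        // back to the default framebuffer (the screen canvas)
        this.gl.bindFramebuffer(this.gl.FRAMEBUFFER, null);
    }
}
```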
2. Camera render-to-texture
Modify `Camera2d.draw()` to optionally render to an offscreen FBO instead of directly to the screen:

```
// pseudo-flow when camera.shader is set:
camera.draw(renderer, container):
    fbo.bind()                                     // redirect output to FBO
    ... existing render logic ...                  // scene renders into FBO texture
    fbo.unbind()                                   // back to screen (or parent FBO)
    drawFullscreenQuad(fbo.texture, camera.shader) // apply post-process
```
When `camera.shader` is not set, the render path stays exactly as it is today — no FBO allocation, no overhead.
3. Full-screen quad pass
A utility to draw a textured quad covering the camera's viewport using a given shader. This is the standard post-processing blit:
- Bind the FBO texture
- Bind the post-process shader
- Draw a screen-aligned quad
- Restore previous state
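The steps above could be sketched as follows. The function name matches the pseudo-flow; the `u_texture` / `a_position` names and the pre-built quad buffer are assumptions for illustration:

```javascript
// Hypothetical sketch of the post-processing blit; program, attribute, and
// uniform names are assumptions. Draws a clip-space triangle strip covering
// the viewport, so no projection matrix is needed.
function drawFullscreenQuad(gl, texture, program, quadBuffer) {
    const previousProgram = gl.getParameter(gl.CURRENT_PROGRAM); // save state

    gl.useProgram(program);

    // bind the FBO color texture to unit 0
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.uniform1i(gl.getUniformLocation(program, "u_texture"), 0);

    // screen-aligned quad in clip space
    gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
    const positionLoc = gl.getAttribLocation(program, "a_position");
    gl.enableVertexAttribArray(positionLoc);
    gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);

    gl.useProgram(previousProgram); // restore state
}

// the quad buffer would be created once and reused for every pass:
function createQuadBuffer(gl) {
    const buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER,
        new Float32Array([-1, -1, 1, -1, -1, 1, 1, 1]), gl.STATIC_DRAW);
    return buffer;
}
```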
4. Effect chaining (optional / future)
Support an array of shaders on the camera (`camera.effects = [effect1, effect2]`), ping-ponging between two FBOs for multi-pass post-processing. This is a natural extension but not required for the initial implementation.
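The ping-pong scheduling itself is independent of WebGL and can be sketched in a few lines. Everything here is a hypothetical illustration: `blit(source, destination, effect)` stands in for the full-screen quad pass, and a `null` destination means "draw to the screen":

```javascript
// Hypothetical sketch of ping-pong scheduling for an effect chain.
// Each pass reads the previous pass's output and writes to the other target;
// the final pass composites straight to the screen, skipping one copy.
function renderEffectChain(effects, targetA, targetB, blit) {
    let read = targetA;   // holds the scene (or the previous pass's output)
    let write = targetB;  // receives the current pass's output
    effects.forEach((effect, i) => {
        const isLast = i === effects.length - 1;
        blit(read, isLast ? null : write, effect);
        if (!isLast) {
            [read, write] = [write, read]; // swap roles for the next pass
        }
    });
}
```

With a single effect this degenerates to exactly one blit from the scene target to the screen, which is the non-chained case described in section 2.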
API Sketch
```js
import { VignetteEffect, CRTEffect } from "melonjs";

// single post-process effect on a camera
app.viewport.shader = new VignetteEffect(renderer, {
    radius: 0.75,
    softness: 0.45
});

// remove it
app.viewport.shader = undefined;
```
Implementation Notes
- FBOs are only allocated when a camera actually has a shader set — zero cost when unused
- The FBO should be lazily created on first use and cached on the camera
- The FBO must resize when the camera viewport resizes (listen to `VIEWPORT_ONRESIZE`)
- Canvas renderer: post-processing is not supported (silently ignored, same as per-renderable `ShaderEffect`)
- The existing per-renderable shader path (`customShader` in `preDraw`/`postDraw`) is unaffected
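The lazy-allocation and resize notes above could be captured in a small helper. The helper name, the `_renderTarget` property, and the injected `createTarget` factory are all assumptions, not existing melonJS API:

```javascript
// Hypothetical helper illustrating the lazy-allocation notes above.
// `createTarget(width, height)` is an injected factory (e.g. one that
// constructs a WebGL render target); property names are assumptions.
function getRenderTarget(camera, createTarget) {
    if (!camera.shader) {
        return null; // no post-process shader → no FBO, zero overhead
    }
    if (!camera._renderTarget) {
        // first use: allocate and cache on the camera
        camera._renderTarget = createTarget(camera.width, camera.height);
    } else if (camera._renderTarget.width !== camera.width ||
               camera._renderTarget.height !== camera.height) {
        // viewport changed (e.g. on VIEWPORT_ONRESIZE): resize in place
        camera._renderTarget.resize(camera.width, camera.height);
    }
    return camera._renderTarget;
}
```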
References
- `Camera2d.draw()`: `src/camera/camera2d.ts:880`
- `Renderable.shader` / `preDraw` / `postDraw`: `src/renderable/renderable.js:240-813`
- `WebGLRenderer`: `src/video/webgl/webgl_renderer.js`
- `CanvasRenderTarget`: `src/video/rendertarget/canvasrendertarget.js`
- `ShaderEffect`: `src/video/webgl/shadereffect.js`