Context
I'm currently building a library to precisely record (ideally any) JavaScript-based animation, even on slow computers or at very high frame rates (e.g. 120 fps, to do motion blur later…). What I have done so far is redefine the main timing functions (e.g. requestAnimationFrame, setTimeout, performance.now…) along the lines of window.Date.prototype.getTime = function(){return myfaketime;}, so that they return a fake time, or invoke the callback only when I'm ready to play it (this may be after the actual time, or before it if the computer is very fast or if the timeouts are scheduled much later), following (and extending to CSS) what is done in CCapture.js. I'm surprised how well this technique works so far.
Now, my goal is to also deal with video and audio. For now, let's focus on video. One important issue is that seeking a video via myvideo.currentTime = foo is not frame-precise (all browsers will sometimes be off by a few frames, seemingly at random), so I'd like instead to hook into <video> and replace the displayed content with a fixed frame generated, e.g., via the WebCodecs API.
Question
Is it possible to programmatically overwrite the <video> element so that:

- I can add a method like myvideo.setContent(mybuffer), where mybuffer is a buffer/canvas containing the frame that I actually want my video to show;
- this method is compatible with, e.g., WebGL, which allows a texture to be generated from a video, i.e. I want WebGL to obtain my new frame and not the actual frame of the video.

I tried to implement the first point with a shadow root, but it seems that <video> elements are not compatible with this (at least I get an error when I try…). Ideally, I'd like to avoid solutions like replacing the <video> with, e.g., a canvas, as the user-defined animation may internally start/stop videos by listing all <video> tags, and it might be tedious to adapt the code whenever a video is added or removed. Such a replacement would also not be compatible with my second requirement. For the WebGL side, I tried intercepting calls such as WebGL2RenderingContext.prototype.texImage2D, and it seems to work (at least I can intercept them), but there are many edge cases (WebGL/WebGL2/GPU, texImage2D/texSubImage…), so I'd prefer a simple solution that works directly for all methods; a rough sketch of that interception is shown below. Note that in the end my library uses Puppeteer, so if needed I could even rely on browser extensions to do this instead of plain JS, even though I'd like to avoid that to stay browser-agnostic.
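For reference, the WebGL interception I experimented with looks roughly like this (a hedged sketch; replacementCanvas is a hypothetical canvas holding the frame I want WebGL to see instead of the video):

const originalTexImage2D = WebGL2RenderingContext.prototype.texImage2D;
WebGL2RenderingContext.prototype.texImage2D = function (...args) {
  // When a TexImageSource is passed, it is always the last argument.
  if (args[args.length - 1] instanceof HTMLVideoElement) {
    args[args.length - 1] = replacementCanvas; // swap in the pre-rendered frame
  }
  return originalTexImage2D.apply(this, args);
};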
MWE
Here is an MWE, with WebGL-based code copy/pasted from the three.js examples. Note that you should ideally not modify anything beyond the first (for now empty) <script> tag, to illustrate that the solution should work without modifying the animation defined by the user.
<!DOCTYPE html>
<html lang="en">
<head>
<title>three.js webgl - materials - video</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0">
<script>
// We should ideally only overwrite here the components to play it differently
// Ideally, both the standalone video and the threejs video should show
// (up to the webgl added effects) the same frame, that is generated from, e.g.,
// a canvas with something like:
// var c = document.createElement("canvas");
// var ctx = c.getContext("2d");
// ctx.beginPath();
// ctx.arc(95, 50, 40, 0, 2 * Math.PI);
// ctx.stroke();
</script>
</head>
<body>
<div id="overlay">
<button id="startButton">Play</button>
</div>
<div id="container"></div>
<div id="info">
<a href="https://threejs.org" target="_blank" rel="noopener">three.js</a> - webgl video demo<br/>
playing <a href="http://durian.blender.org/" target="_blank" rel="noopener">sintel</a> trailer
</div>
<div id="testshadow"></div>
Original video:
<video id="video" loop playsinline style="width: 500px;">
<!-- Downloaded from https://download.blender.org/durian/trailer/sintel_trailer-720p.mp4,
must be hosted via https as webgl read of the video raises error as this is insecure -->
<source src="sintel_trailer-720p.mp4" type='video/mp4'>
</video>
Threejs video:
<script type="importmap">
{
"imports": {
"three": "https://cdn.jsdelivr.net/npm/three@0.174.0/build/three.module.js",
"three/addons/": "https://cdn.jsdelivr.net/npm/three@0.174.0/examples/jsm/"
}
}
</script>
<script type="module">
import * as THREE from 'three';
import { EffectComposer } from 'three/addons/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/addons/postprocessing/RenderPass.js';
import { BloomPass } from 'three/addons/postprocessing/BloomPass.js';
import { OutputPass } from 'three/addons/postprocessing/OutputPass.js';
let container;
let camera, scene, renderer;
let video, texture, material, mesh;
let composer;
let mouseX = 0;
let mouseY = 0;
let windowHalfX = window.innerWidth / 2;
let windowHalfY = window.innerHeight / 2;
let cube_count;
const meshes = [],
materials = [],
xgrid = 20,
ygrid = 10;
const startButton = document.getElementById( 'startButton' );
startButton.addEventListener( 'click', function () {
init();
} );
function init() {
const overlay = document.getElementById( 'overlay' );
overlay.remove();
container = document.createElement( 'div' );
document.body.appendChild( container );
camera = new THREE.PerspectiveCamera( 40, window.innerWidth / window.innerHeight, 1, 10000 );
camera.position.z = 500;
scene = new THREE.Scene();
const light = new THREE.DirectionalLight( 0xffffff, 3 );
light.position.set( 0.5, 1, 1 ).normalize();
scene.add( light );
renderer = new THREE.WebGLRenderer();
renderer.setPixelRatio( window.devicePixelRatio );
renderer.setSize( window.innerWidth, window.innerHeight );
renderer.setAnimationLoop( animate );
container.appendChild( renderer.domElement );
video = document.getElementById( 'video' );
video.play();
video.addEventListener( 'play', function () {
this.currentTime = 3;
} );
texture = new THREE.VideoTexture( video );
texture.colorSpace = THREE.SRGBColorSpace;
//
let i, j, ox, oy, geometry;
const ux = 1 / xgrid;
const uy = 1 / ygrid;
const xsize = 480 / xgrid;
const ysize = 204 / ygrid;
const parameters = { color: 0xffffff, map: texture };
cube_count = 0;
for ( i = 0; i < xgrid; i ++ ) {
for ( j = 0; j < ygrid; j ++ ) {
ox = i;
oy = j;
geometry = new THREE.BoxGeometry( xsize, ysize, xsize );
change_uvs( geometry, ux, uy, ox, oy );
materials[ cube_count ] = new THREE.MeshLambertMaterial( parameters );
material = materials[ cube_count ];
material.hue = i / xgrid;
material.saturation = 1 - j / ygrid;
material.color.setHSL( material.hue, material.saturation, 0.5 );
mesh = new THREE.Mesh( geometry, material );
mesh.position.x = ( i - xgrid / 2 ) * xsize;
mesh.position.y = ( j - ygrid / 2 ) * ysize;
mesh.position.z = 0;
mesh.scale.x = mesh.scale.y = mesh.scale.z = 1;
scene.add( mesh );
mesh.dx = 0.001 * ( 0.5 - Math.random() );
mesh.dy = 0.001 * ( 0.5 - Math.random() );
meshes[ cube_count ] = mesh;
cube_count += 1;
}
}
renderer.autoClear = false;
document.addEventListener( 'mousemove', onDocumentMouseMove );
// postprocessing
const renderPass = new RenderPass( scene, camera );
const bloomPass = new BloomPass( 1.3 );
const outputPass = new OutputPass();
composer = new EffectComposer( renderer );
composer.addPass( renderPass );
composer.addPass( bloomPass );
composer.addPass( outputPass );
//
window.addEventListener( 'resize', onWindowResize );
}
function onWindowResize() {
windowHalfX = window.innerWidth / 2;
windowHalfY = window.innerHeight / 2;
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize( window.innerWidth, window.innerHeight );
composer.setSize( window.innerWidth, window.innerHeight );
}
function change_uvs( geometry, unitx, unity, offsetx, offsety ) {
const uvs = geometry.attributes.uv.array;
for ( let i = 0; i < uvs.length; i += 2 ) {
uvs[ i ] = ( uvs[ i ] + offsetx ) * unitx;
uvs[ i + 1 ] = ( uvs[ i + 1 ] + offsety ) * unity;
}
}
function onDocumentMouseMove( event ) {
mouseX = ( event.clientX - windowHalfX );
mouseY = ( event.clientY - windowHalfY ) * 0.3;
}
//
let h, counter = 1;
function animate() {
const time = Date.now() * 0.00005;
camera.position.x += ( mouseX - camera.position.x ) * 0.05;
camera.position.y += ( - mouseY - camera.position.y ) * 0.05;
camera.lookAt( scene.position );
for ( let i = 0; i < cube_count; i ++ ) {
material = materials[ i ];
h = ( 360 * ( material.hue + time ) % 360 ) / 360;
material.color.setHSL( h, material.saturation, 0.5 );
}
if ( counter % 1000 > 200 ) {
for ( let i = 0; i < cube_count; i ++ ) {
mesh = meshes[ i ];
mesh.rotation.x += 10 * mesh.dx;
mesh.rotation.y += 10 * mesh.dy;
mesh.position.x -= 150 * mesh.dx;
mesh.position.y += 150 * mesh.dy;
mesh.position.z += 300 * mesh.dx;
}
}
if ( counter % 1000 === 0 ) {
for ( let i = 0; i < cube_count; i ++ ) {
mesh = meshes[ i ];
mesh.dx *= - 1;
mesh.dy *= - 1;
}
}
counter ++;
renderer.clear();
composer.render();
}
</script>
</body>
</html>
- I can add a method like myvideo.setContent(mybuffer), where mybuffer is a buffer/canvas containing the frame that I actually want my video to show.
Yes, that's possible using either of the below, independently (sketches after this list):

- canvas.captureStream()
- MediaStreamTrackGenerator
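For example, assuming the <video id="video"> from the MWE and a hypothetical frameCanvas holding the frame to display (neither sketch is the asker's setContent API), the captureStream() route could look like:

const video = document.getElementById('video');
const frameCanvas = document.createElement('canvas');
frameCanvas.width = 480;
frameCanvas.height = 204;
const ctx = frameCanvas.getContext('2d');
ctx.beginPath();
ctx.arc(95, 50, 40, 0, 2 * Math.PI);
ctx.stroke();

// srcObject takes precedence over the <source> child: the <video> now shows
// whatever is drawn on frameCanvas, and WebGL texture uploads from the video
// read those same pixels.
video.srcObject = frameCanvas.captureStream();
video.play();

With MediaStreamTrackGenerator (currently Chromium-only), individual WebCodecs VideoFrames can be pushed one at a time, which fits a frame-by-frame recorder more naturally:

const generator = new MediaStreamTrackGenerator({ kind: 'video' });
const writer = generator.writable.getWriter();
video.srcObject = new MediaStream([generator]);
video.play();

// Call whenever the virtual clock advances by one frame.
async function pushFrame(canvasWithFrame, timestampMicros) {
  const frame = new VideoFrame(canvasWithFrame, { timestamp: timestampMicros });
  await writer.write(frame);
  frame.close();
}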
- such that this method is compatible with, e.g., WebGL, which allows a texture to be generated from a video, i.e. I want WebGL to obtain my new frame and not the actual frame of the video.
Yes. You can create your own video frames in several different ways. WebCodecs has VideoFrame. I've used JSON to store video frames in the past. Both HTMLCanvasElement and WebCodecs work with WebGL.
See this question and answer: How to use Blob URL, MediaSource or other methods to play concatenated Blobs of media fragments?. And see MediaFragmentRecorder for about 10 different ways to record videos in the browser, written before there was a WebCodecs VideoFrame.
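To illustrate the WebGL side: a WebCodecs VideoFrame is itself a valid TexImageSource, so it can be uploaded to a texture directly (a hedged sketch; gl and frameCanvas are assumed to already exist and are not defined in the question):

const frame = new VideoFrame(frameCanvas, { timestamp: 0 });
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, frame);
frame.close();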