projects:virtual_window

  • this looks to be exactly the project, done in 2012!
    • it's worth taking a deep dive into this project. He used 2D video shot from his camera and OpenGL to change the frustum; that's it!
  • this project uses two Kinects: one outside to capture the scene's depth and one inside to track the person
  • seems very convincing and may be worth a deep dive
  • sometime after 2015 apparently?
  • available on iOS; shows nice parallax tracking as a demo
  • by Algomystic AB
  • use your mouse to change the view with this app
  • They sell expensive holograms but may have free software that can handle RGBD files in some sort of 2D “preview” mode

Video Data

  • Volumetric Video is video recorded with multiple cameras, allowing six degrees of freedom
  • 360 degree “immersive” video is what you see on YouTube: you stand in a single spot but can look in any direction.
    • because you cannot move around in space, this doesn't seem like it would work.
  • Multi-Angle / Multi-View Video: several cameras capture at the same time, and you can jump from camera to camera
    • as it stands this won't work, because you jump from camera to camera instead of moving smoothly
    • AI might solve this problem somehow, but that seems beyond the scope, especially in realtime
  • 2.5D video: this is what the Roku does in its screensaver. Doesn't seem to fit the bill
  • build simulated video with Nanite using Unreal?
  • Splats are fantastic but only work for still images
    • based on this site maybe this isn't true? There's some reason splats won't work for this…
  • Depth Anything is open source software that uses AI to convert an RGB video to RGBD (D = depth) by creating a depth map
    • Owl3D is a commercial wrapper for Depth Anything
    • iw3 (https://github.com/nagadomi/nunif) is an open source wrapper for Depth Anything
    • you'd run Depth Anything's run_video.py to generate the depth video, then use ffmpeg to make the side-by-side file. It'd look something like this:
# 1) estimate a depth map for every frame (Depth Anything's run_video.py)
python run_video.py --video-path garden.mp4 --outdir output --input-size 518
# 2) stack colour (left) and depth (right) into one RGBD video; adjust the
#    depth file name to whatever run_video.py actually wrote into output/
ffmpeg -i garden.mp4 -i output/garden_depth.mp4 -filter_complex hstack output_rgbd.mp4
  • Window Mode is open source software that can change your view orientation of a 3D image or video
    • it appears this can change your camera orientation but not the actual camera X, Y, Z position
    • it could be that with a landscape view, changing only the camera orientation is enough?
  • record video with an iPhone LiDAR app (like Record3D)

Head Movement

  • TrackIR (dedicated IR camera plus software) and OpenTrack (open source software that can use a plain webcam) both track your head movement and translate it into the camera angle on your screen
  • Terms to look for: Head-Coupled Perspective, Off-Axis Projection, “Parallax Window” effect
  • As you move, the camera's viewing volume (the frustum) skews so it looks “through” the monitor at an angle; this is called Off-Axis Projection (see the sketch after this list)
  • the famous Johnny Lee video. Skip to around 2:45 for the effect
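
A minimal sketch of off-axis projection in three.js (TypeScript), since three.js is one of the engine options below. The physical setup is assumed, not taken from any project above: the monitor is a rectangle centred on the origin in the z = 0 plane, the tracker reports the eye position in the same metric coordinates, and SCREEN_W, NEAR, and updateOffAxisCamera are all placeholder names.

import * as THREE from 'three';

const SCREEN_W = 0.60;   // physical screen width in metres (assumption)
const SCREEN_H = 0.34;   // physical screen height in metres (assumption)
const NEAR = 0.01;
const FAR = 100;

function updateOffAxisCamera(camera: THREE.PerspectiveCamera, eye: THREE.Vector3): void {
  // Project the screen's edges onto the near plane as seen from the eye.
  const s = NEAR / eye.z;
  const left   = (-SCREEN_W / 2 - eye.x) * s;
  const right  = ( SCREEN_W / 2 - eye.x) * s;
  const bottom = (-SCREEN_H / 2 - eye.y) * s;
  const top    = ( SCREEN_H / 2 - eye.y) * s;

  // The camera sits at the eye and keeps looking straight down -Z; the
  // "skew" lives entirely in the asymmetric frustum. As the head moves
  // right, `left` gets longer and `right` gets shorter.
  camera.position.copy(eye);
  camera.projectionMatrix.makePerspective(left, right, top, bottom, NEAR, FAR);
  camera.projectionMatrixInverse.copy(camera.projectionMatrix).invert();
}

Because this writes the projection matrix directly, nothing else should call camera.updateProjectionMatrix() afterwards, or three.js would overwrite it with a symmetric one.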

Filling in the missing background

This is a combination of “inpainting” (guessing the missing color pixels) with “Depth Completion” (guessing the missing depth values).

  • It's possible to use AI to do this, but it's too slow
  • “LaMa” (Large Mask Inpainting) will be too slow
  • “Mesh Stretching” will be the “good enough” method for landscapes: triangles that span a depth edge simply stretch, smearing the edge colors across the revealed area, so no separate inpainting pass is needed

Implementation

  • It looks like this would involve a side-by-side video where the left half is the color video and the right half is its depth map
  • using shaders and an engine like Unity, Unreal, or three.js, you'd move the camera around and render the scene (see the sketch below)
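
A sketch of that in three.js (TypeScript), continuing from the ffmpeg step above: it assumes output_rgbd.mp4 has colour on the left half and a grayscale depth map on the right, with brighter meaning nearer (Depth Anything predicts inverse depth). depthScale and the grid resolution are guesses to tune by eye.

import * as THREE from 'three';

const video = document.createElement('video');
video.src = 'output_rgbd.mp4';   // the hstack output from earlier
video.muted = true;
video.loop = true;
video.play();

const tex = new THREE.VideoTexture(video);

// A dense grid: the finer it is, the smoother the displacement. Triangles
// spanning depth edges stretch, which is the "mesh stretching" fill from
// the previous section.
const geometry = new THREE.PlaneGeometry(1.6, 0.9, 512, 288);

const material = new THREE.ShaderMaterial({
  uniforms: { map: { value: tex }, depthScale: { value: 0.3 } },
  vertexShader: /* glsl */ `
    uniform sampler2D map;
    uniform float depthScale;
    varying vec2 vUv;
    void main() {
      vUv = uv;
      // Depth lives in the right half of the frame.
      float d = texture2D(map, vec2(0.5 + uv.x * 0.5, uv.y)).r;
      vec3 displaced = position + normal * d * depthScale;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
    }`,
  fragmentShader: /* glsl */ `
    uniform sampler2D map;
    varying vec2 vUv;
    void main() {
      // Colour lives in the left half of the frame.
      gl_FragColor = texture2D(map, vec2(vUv.x * 0.5, vUv.y));
    }`,
});

const mesh = new THREE.Mesh(geometry, material);  // add this to your scene

As a bonus, ShaderMaterial ignores the scene's lights, which already covers the “Unlit Shader” point from the Details section.
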
Realtime
  • It looks like it might be possible to take a live video feed, run it through Depth Anything, and pipe the result to Unity as a side-by-side video with the depth map on the right
  • concerns: lag, and you'd need an awesome GPU

Details

in Unity/Unreal/three.js:

  • Scene: 3D content (or volumetric video capture) inside a virtual box.
  • Camera: placed exactly where your monitor is in the virtual world.
  • Script: updates the Camera Projection Matrix every frame based on your head tracking coordinates.
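
Sticking with three.js (TypeScript) for consistency, the per-frame “Script” item might look like this; readHeadPosition() is a hypothetical stand-in for whatever the tracker reports (OpenTrack output, a webcam face tracker, …), and updateOffAxisCamera() is the helper sketched in the Head Movement section.

import * as THREE from 'three';

// Declared, not defined: these are the assumed tracker hook and the
// off-axis helper from earlier.
declare function readHeadPosition(): THREE.Vector3;
declare function updateOffAxisCamera(camera: THREE.PerspectiveCamera, eye: THREE.Vector3): void;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();              // the displaced video mesh goes here
const camera = new THREE.PerspectiveCamera(); // projection is rebuilt each frame

function animate(): void {
  requestAnimationFrame(animate);
  // Rebuild the projection matrix from the latest head position.
  updateOffAxisCamera(camera, readHeadPosition());
  renderer.render(scene, camera);
}
animate();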

= Unity Example =

  • modify Camera.projectionMatrix
  • calculate the corners of your screen relative to your eye position.
  • As your head moves right, the left side of the projection pyramid gets “longer” and the right side gets “shorter,” creating the illusion that you are looking through a window.
  • “Vertex Displacement Shader”: This is the core logic. “Read the depth map pixel, move the vertex Z position.” (The shader sketched in the Implementation section does exactly this.)
  • “Unlit Shader”: You don't want the 3D engine's lights to mess up your video lighting.
  • “Shadow Casting Mode: Off”: You must turn this off, or your 3D window will cast weird shadows on itself.
  • Get a GLB or glTF file from Sketchfab and load it into Window Mode
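
A rough three.js equivalent of the unlit and shadow bullets above, with 'scene.glb' standing in for whatever file you download from Sketchfab:

import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();

new GLTFLoader().load('scene.glb', (gltf) => {
  gltf.scene.traverse((obj) => {
    if (obj instanceof THREE.Mesh) {
      obj.castShadow = false;  // "Shadow Casting Mode: Off"
      // Swap in a material that ignores scene lights (the "Unlit Shader"
      // point), keeping the original colour texture if there is one.
      const old = obj.material as THREE.MeshStandardMaterial;
      obj.material = new THREE.MeshBasicMaterial({ map: old.map });
    }
  });
  scene.add(gltf.scene);
});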