On-device depth estimation, real-time inference on consumer GPUs, and fluid UI driven by LLMs.
Real-time 2D to 3D video conversion for VR
VRIn generates per-frame depth maps from flat video using MiDaS v2 and reconstructs 3D meshes for VR headset display. Inference runs on a custom C++ engine using DirectCompute GPU shaders.
Alpha at vrin.app · youtube.com/@evr174
Fluid UI and inference runtime for edge devices
C++17 header-only runtime that renders UI and runs inference on edge devices. An LLM generates a declarative script defining the interface and compute graph. The device materializes and executes it. The LLM can update the script at any time over shared-memory IPC.
Source: github.com/me-im-counting/graphene (private — contact for invite)
Graphene fluid UI — headless framebuffer capture from the runtime
Buttons, inputs, sliders, image views, grids. Renders on DX11, SDL, or headless. An LLM writes a script; the device renders the interface. The LLM can change it at any point.
Declarative scripts define both UI and compute graphs. The runtime materializes them with lazy evaluation. Scripts are text — easy for an LLM to generate and modify.
Header-only C++17. No RTTI. Custom allocators. Lock-free shared memory IPC. Targets desktop GPUs down to embedded ARM.
C++ inference engine powering both VRIn and Graphene
Infer is the compute backend shared by both projects. 40+ neural network operators running on CPU, DirectX 11, and OpenCL. ONNX model import via Python toolchain.
Originally built as a C++14 library for VRIn's MiDaS model, then rewritten to C++17 header-only for Graphene with additional backends and operator coverage.