OpenGenerativeUI — How CopilotKit Recreated Claude's Visual Artifacts as Open Source
CopilotKit Deep Agents + LangGraph + Sandboxed iframe — The Pipeline Where AI Generates Charts, 3D, and Diagrams in Real Time
Claude shipped interactive visual artifact generation in conversations, and CopilotKit immediately built an open-source version. The turnaround was fast. The repo even has Claude listed as a contributor — happens automatically when you commit with Claude Code.
The core idea is simple: the user enters a prompt, an AI agent generates HTML, and that HTML renders in an iframe.
Monorepo Structure
Turborepo-based, three packages.
apps/app — Next.js 16 + React 19 + Tailwind CSS 4 frontend. Takes user input via CopilotKit's chat UI (<CopilotChat>), renders generated HTML in sandboxed iframes through the useComponent hook.
apps/agent — Python agent. LangGraph + CopilotKit SDK. Uses "Deep Agents" — the agent selects the right visualization approach from a skills list.
apps/mcp — MCP (Model Context Protocol) server. Provides design system and skill information to the agent.
How the Agent Decides
When a prompt comes in, the agent decides: text response or visual component?
If visual, it picks the technology based on request type. Process explanation gets an SVG flowchart. Data comparison gets a Chart.js bar chart. Algorithm gets a Canvas animation. 3D gets Three.js. Network graph gets D3.js. This decision matrix is baked into the agent prompt.
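The matrix lives in the agent's prompt rather than in code, but conceptually it's a simple lookup. A minimal sketch of that mapping (the category names and fallback are illustrative, not the repo's actual prompt):

```python
# Hypothetical lookup mirroring the decision matrix described above.
# The real project encodes this logic in the agent prompt, not in code.
DECISION_MATRIX = {
    "process": "SVG",          # flowcharts, sequences
    "comparison": "Chart.js",  # bar charts, pie charts
    "algorithm": "Canvas",     # sorting, BFS/DFS animations
    "3d": "Three.js",          # WebGL scenes
    "network": "D3.js",        # node graphs
    "sound": "Tone.js",        # synthesizers
}

def pick_technology(request_type: str) -> str:
    """Map a classified request type to a visualization technology."""
    # Anything the matrix doesn't cover falls back to a plain text answer.
    return DECISION_MATRIX.get(request_type.lower(), "text")

print(pick_technology("algorithm"))  # Canvas
```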
What gets generated is a complete HTML document. <script> tags load CDN libraries, inline JavaScript draws the visualization. Not React components — plain HTML.
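To make that output format concrete, here's a hedged sketch of the kind of self-contained document the agent emits for a Chart.js request. The builder function, CDN URL, and chart config are illustrative; the actual generation is done by the LLM, not a template:

```python
import json

def build_chart_html(labels, values, title="Chart"):
    """Assemble a complete, self-contained HTML document:
    one CDN <script> tag for Chart.js plus inline JS that draws the chart."""
    return f"""<!DOCTYPE html>
<html>
<head>
  <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
</head>
<body>
  <canvas id="chart"></canvas>
  <script>
    new Chart(document.getElementById("chart"), {{
      type: "bar",
      data: {{
        labels: {json.dumps(labels)},
        datasets: [{{ label: {json.dumps(title)}, data: {json.dumps(values)} }}]
      }}
    }});
  </script>
</body>
</html>"""

doc = build_chart_html(["Mon", "Tue"], [3, 7], title="Demo")
```

Because the document carries its own dependencies, the host page never needs a Chart.js (or Three.js, or D3) import.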
iframe Sandbox Rendering
Generated HTML runs inside <iframe sandbox>, fully isolated from the main page.
The frontend's widgetRenderer hook handles the pipeline: skeleton loading → HTML injection → ResizeObserver for auto-height → fade-in animation. Light/dark theme auto-detection included.
The iframe is the security choice. Injecting AI-generated code directly into the main DOM risks XSS. The sandbox attribute grants only allow-scripts, blocking top navigation and same-origin access.
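A minimal sketch of that embedding, server-side in Python: the generated document goes into srcdoc, escaped, with a sandbox that grants only allow-scripts. The wrapper function is illustrative, not the project's actual renderer:

```python
import html

def wrap_in_sandboxed_iframe(generated_html: str) -> str:
    """Embed AI-generated HTML via srcdoc with sandbox="allow-scripts".
    Omitting allow-same-origin keeps the frame in an opaque origin,
    and omitting allow-top-navigation blocks page redirects."""
    escaped = html.escape(generated_html, quote=True)
    return f'<iframe sandbox="allow-scripts" srcdoc="{escaped}"></iframe>'

frame = wrap_in_sandboxed_iframe("<script>alert(1)</script>")
```

The key design point is what's left out: without allow-same-origin, scripts in the frame run in an opaque origin and can't read the parent's cookies or DOM even though they can execute.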
The Architecture in One Sentence
As one commenter put it: "So it renders generated HTML inside an iframe." That's the whole thing. Whatever tech the AI uses (Chart.js, Three.js, D3), the final output is an HTML string that goes into an isolated iframe. Simple but effective.
The upside: the frontend doesn't need to know what the AI will output. As long as it's HTML, it works. Adding new visualization types requires zero frontend changes.
Supported Visualizations
Algorithm animations (binary search, BFS/DFS, sorting), 3D (Three.js/WebGL/CSS3D), charts (Chart.js — pie, bar, line), diagrams (SVG flowcharts, network graphs), widgets (forms, math plots, interactive tools), sound (Tone.js synthesizer).
Broad range. But ultimately the agent generates HTML+JS code. Quality depends entirely on the LLM's code generation ability.
Honest Take
The idea has merit. Bringing Claude's feature to open source is genuinely useful.
But there are practical limits.
Agent quality is LLM-dependent. Visualization decisions, HTML generation, error handling — all prompt engineering. Different model, different results.
Python agent is mandatory. Can't use just the frontend. LangGraph + CopilotKit Python SDK required on the backend. Deployment complexity goes up.
1.1K stars, just launched. Shipped almost simultaneously with Claude's feature, so code maturity needs time to prove out.
The situation itself is telling — an AI feature recreated as open source, with that same AI as a contributor.
Architecture Breakdown
Monorepo
Turborepo 3-Package Structure
Frontend (Next.js 16), Agent (Python/LangGraph), MCP Server — three independent packages
<CopilotChat> for user input, useComponent hook for iframe rendering
Decision Matrix
Request Type → Visualization Tech Auto-Selection
| Request Type | Technology | Example |
|---|---|---|
| Process/Flow | SVG | Flowcharts, sequences |
| Data Comparison | Chart.js | Bar charts, pie charts |
| Algorithms | Canvas | Sorting, BFS/DFS |
| 3D | Three.js / WebGL | 3D models, animations |
| Networks | D3.js | Node graphs |
| Sound | Tone.js | Synthesizers |
Security
iframe Sandbox for AI-Generated Code Isolation
allow-scripts permitted. Top-navigation and same-origin access blocked
Repo Status
1.1K stars, just launched (as of 2026-04)
Key Points
User enters prompt via CopilotKit chat UI (e.g., "visualize binary search")
Python Deep Agent decides text vs visual response, selects technology (SVG/Chart.js/Three.js/D3 etc.)
LLM generates complete HTML document — CDN library loading + inline JS visualization
Frontend widgetRenderer hook injects HTML into sandbox iframe, ResizeObserver for auto-height
Skeleton loading → fade-in animation → auto light/dark theme detection