
OpenGenerativeUI — How CopilotKit Recreated Claude's Visual Artifacts as Open Source

CopilotKit Deep Agents + LangGraph + Sandboxed iframe — The Pipeline Where AI Generates Charts, 3D, and Diagrams in Real Time

Claude shipped interactive visual artifact generation in conversations, and CopilotKit immediately built an open-source version. The turnaround was fast. The repo even has Claude listed as a contributor — happens automatically when you commit with Claude Code.

The core idea is simple. User enters a prompt, AI agent generates HTML, that HTML renders in an iframe.

Monorepo Structure

Turborepo-based, three packages.

apps/app — Next.js 16 + React 19 + Tailwind CSS 4 frontend. Takes user input via CopilotKit's chat UI (<CopilotChat>), renders generated HTML in sandboxed iframes through the useComponent hook.

apps/agent — Python agent. LangGraph + CopilotKit SDK. Uses "Deep Agents" — the agent selects the right visualization approach from a skills list.

apps/mcp — MCP (Model Context Protocol) server. Provides design system and skill information to the agent.

How the Agent Decides

When a prompt comes in, the agent decides: text response or visual component?

If visual, it picks the technology based on request type. Process explanation gets an SVG flowchart. Data comparison gets a Chart.js bar chart. Algorithm gets a Canvas animation. 3D gets Three.js. Network graph gets D3.js. This decision matrix is baked into the agent prompt.
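That decision matrix can be sketched as a simple lookup table. This is a hypothetical reconstruction of the mapping described above, not the repo's actual identifiers:

```typescript
// Hypothetical sketch of the request-type -> technology matrix the
// agent prompt encodes. Names are illustrative, not from the repo.
type RequestType =
  | "process"    // flows, sequences
  | "comparison" // data comparisons
  | "algorithm"  // sorting, BFS/DFS
  | "3d"         // models, animations
  | "network"    // node graphs
  | "sound";     // synthesizers

const TECH_MATRIX: Record<RequestType, string> = {
  process: "SVG",
  comparison: "Chart.js",
  algorithm: "Canvas",
  "3d": "Three.js",
  network: "D3.js",
  sound: "Tone.js",
};

function pickTechnology(request: RequestType): string {
  return TECH_MATRIX[request];
}
```

In the real project this mapping lives in the agent prompt as natural language, so the LLM — not deterministic code — performs the lookup.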

What gets generated is a complete HTML document. <script> tags load CDN libraries, inline JavaScript draws the visualization. Not React components — plain HTML.
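A generated artifact might look like the following — a hypothetical example of such a self-contained document (here a Chart.js bar chart), not output copied from the repo:

```typescript
// Hypothetical example of an agent-generated artifact: a complete HTML
// document with a CDN <script> tag plus inline JS. Illustrative only.
const generatedHtml = `<!DOCTYPE html>
<html>
<head>
  <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
</head>
<body>
  <canvas id="chart"></canvas>
  <script>
    new Chart(document.getElementById("chart"), {
      type: "bar",
      data: {
        labels: ["A", "B", "C"],
        datasets: [{ label: "Demo", data: [3, 7, 5] }],
      },
    });
  </script>
</body>
</html>`;
```

Because the output is a whole document rather than a component, the host app never needs to bundle Chart.js, Three.js, or any other visualization library — the artifact brings its own dependencies via CDN.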

iframe Sandbox Rendering

Generated HTML runs inside <iframe sandbox>, fully isolated from the main page.

The frontend's widgetRenderer hook handles the pipeline: skeleton loading → HTML injection → ResizeObserver for auto-height → fade-in animation. Light/dark theme auto-detection included.

The iframe is the security choice. Injecting AI-generated code directly into the main DOM risks XSS. The sandbox attribute grants only allow-scripts, blocking top navigation and same-origin access.
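A minimal sketch of the pattern — a hypothetical helper, not the repo's actual hook — builds an iframe that grants only allow-scripts and injects the generated HTML via srcdoc:

```typescript
// Hypothetical sketch of the sandbox pattern. Only allow-scripts is
// granted; allow-same-origin and allow-top-navigation are deliberately
// omitted, so the embedded code cannot touch the host page.
interface SandboxOptions {
  html: string;   // AI-generated document
  title?: string; // accessible name for the frame
}

// Escape characters that would break out of an HTML attribute value.
function escapeAttr(value: string): string {
  return value
    .replace(/&/g, "&amp;")
    .replace(/"/g, "&quot;")
    .replace(/</g, "&lt;");
}

function buildSandboxedIframe({ html, title = "artifact" }: SandboxOptions): string {
  // srcdoc must be attribute-escaped; the sandbox list is the whole
  // security boundary, so keep it as narrow as possible.
  return `<iframe sandbox="allow-scripts" title="${escapeAttr(title)}" srcdoc="${escapeAttr(html)}"></iframe>`;
}
```

With allow-same-origin omitted, the frame gets an opaque origin: even though its scripts run, they see none of the parent's cookies, localStorage, or DOM.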

The Architecture in One Sentence

As one commenter put it: "So it renders generated HTML inside an iframe." That's the whole thing. Whatever tech the AI uses (Chart.js, Three.js, D3), the final output is an HTML string that goes into an isolated iframe. Simple but effective.

The upside: the frontend doesn't need to know what the AI will output. As long as it's HTML, it works. Adding new visualization types requires zero frontend changes.

Supported Visualizations

Algorithm animations (binary search, BFS/DFS, sorting), 3D (Three.js/WebGL/CSS3D), charts (Chart.js — pie, bar, line), diagrams (SVG flowcharts, network graphs), widgets (forms, math plots, interactive tools), sound (Tone.js synthesizer).

Broad range. But ultimately the agent generates HTML+JS code. Quality depends entirely on the LLM's code generation ability.

Honest Take

The idea has merit. Bringing Claude's feature to open source is genuinely useful.

But there are practical limits.

Agent quality is LLM-dependent. Visualization decisions, HTML generation, error handling — all prompt engineering. Different model, different results.

Python agent is mandatory. Can't use just the frontend. LangGraph + CopilotKit Python SDK required on the backend. Deployment complexity goes up.

1.1K stars, just launched. Shipped almost simultaneously with Claude's feature, so code maturity needs time to prove out.

The situation itself is telling — an AI feature recreated as open source, with that same AI as a contributor.

Architecture Breakdown


Monorepo: Turborepo 3-Package Structure

Frontend (Next.js 16), Agent (Python/LangGraph), MCP Server — three independent packages

apps/app — Next.js 16 + React 19 + Tailwind CSS 4. <CopilotChat> for user input, useComponent hook for iframe rendering
apps/agent — Python. LangGraph + CopilotKit SDK. Deep Agents architecture with skill-based visualization selection
apps/mcp — MCP server. Provides design system and skill metadata to the agent
Decision Matrix: Request Type -> Visualization Tech Auto-Selection

Request Type       Technology         Example
Process/Flow       SVG                Flowcharts, sequences
Data Comparison    Chart.js           Bar charts, pie charts
Algorithms         Canvas             Sorting, BFS/DFS
3D                 Three.js / WebGL   3D models, animations
Networks           D3.js              Node graphs
Sound              Tone.js            Synthesizers
Security: iframe Sandbox for AI-Generated Code Isolation
sandbox attribute — Only allow-scripts permitted. Top-navigation and same-origin access blocked
DOM isolation — Generated code cannot access main page cookies, localStorage, or DOM
Auto-resize — ResizeObserver adjusts iframe height to match content
Repo Status: As of 2026-04
Stars: ~1,110
License: MIT
Stack: TypeScript (Next.js 16, React 19, Tailwind 4) + Python (LangGraph, CopilotKit SDK)
Build: Turborepo
Notable: Claude listed as contributor (auto-added via Claude Code commits)

Key Points

1. User enters prompt via CopilotKit chat UI (e.g., "visualize binary search")
2. Python Deep Agent decides text vs visual response, selects technology (SVG/Chart.js/Three.js/D3 etc.)
3. LLM generates complete HTML document — CDN library loading + inline JS visualization
4. Frontend widgetRenderer hook injects HTML into sandbox iframe, ResizeObserver for auto-height
5. Skeleton loading → fade-in animation → auto light/dark theme detection
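The frontend's side of those steps amounts to a tiny render lifecycle. A hypothetical sketch (state names are mine, not the repo's):

```typescript
// Hypothetical sketch of the render lifecycle the frontend walks
// through for each artifact. State names are illustrative.
type RenderState = "skeleton" | "injected" | "sized" | "visible";

const NEXT: Record<RenderState, RenderState> = {
  skeleton: "injected", // generated HTML written into the sandboxed iframe
  injected: "sized",    // ResizeObserver reports the content height
  sized: "visible",     // fade-in animation completes
  visible: "visible",   // terminal state
};

function advance(state: RenderState): RenderState {
  return NEXT[state];
}
```

The point of the sketch: every artifact type goes through the same four states, which is why new visualization types need no frontend changes.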

Use Cases

Adding visual artifact generation to AI chatbots — reference the iframe sandbox pattern
Building CopilotKit + LangGraph agent apps — reference the frontend/backend separation
When you need security isolation for AI-generated code — sandbox iframe + allow-scripts pattern