Client-Server
Traditional Request-Response Model
Tech & Culture Insights
Bidirectional Real-time Communication
Server→Client Unidirectional Stream
Real-time Events Based on GraphQL Schema
HTTP Callback on Event Trigger
Asynchronous Communication via Message Broker
Asynchronous Task Processing via Queue
Direct Communication Without a Central Server
Exposing Local Servers to the Internet via Reverse Tunnel
Receiving Events via Outbound WebSocket Without Webhooks
Real-time Token-by-Token Response Delivery via SSE
git push → HTTP POST → CI/CD Trigger
Redis-based Background Job Queue System
Direct Video/Audio/Data Transfer Between Browsers
An Open Protocol Connecting AI Models with External Tools in a Standardized Way
The Difference Between Local Development and Cloud SaaS Deployment
Session Persistence & Splitting with a Terminal Multiplexer
A Real-world Case of Switching to QUIC After Struggling with WebRTC Complexity
Using Physical Layer Data of WiFi Signals Like Radar
How Life Patterns, Location, and Behavior Are Tracked Through BLE Signals Alone
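Several of the streaming patterns above come down to a simple wire format. Server-Sent Events, for example, is just `data:` lines separated by blank lines. A minimal sketch in Python, where the `tokens` iterable is a stand-in for an LLM's output stream and the `[DONE]` sentinel follows a common (not universal) convention:

```python
import json

def sse_frames(tokens):
    """Format an iterable of tokens as Server-Sent Events frames.

    Each frame is a `data:` line terminated by a blank line, which is
    all a browser's EventSource needs to fire a message event.
    """
    for tok in tokens:
        yield f"data: {json.dumps({'token': tok})}\n\n"
    yield "data: [DONE]\n\n"  # end-of-stream sentinel (a common convention)

# A client reassembles the response token by token as frames arrive:
frames = list(sse_frames(["Hel", "lo", "!"]))
```

The same frame generator can be plugged into any HTTP server that supports chunked responses; the client side needs nothing beyond the built-in EventSource API.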
Model-View-Controller — The core architecture of Rails
Rails core philosophy — convention over configuration
Roles and conventions for each project folder
HTTP processing layer underneath Rails
One line of resources generates 7 routes automatically
Accessing request parameters and managing allow lists
Inserting common logic before/after action execution
Modules shared across multiple Controllers
ORM — manipulate databases with objects
Version control database schema with code
has_many, belongs_to — defining relationships between models
Ensure data integrity at the Model level
Hooks that auto-execute before/after save/delete
Name and manage reusable queries
The N+1 query problem and how includes solves it
Embed Ruby in HTML and split into reusable fragments
Manage common page structure with layouts
Safe and convenient form building with form_with
SPA-like UX without writing JavaScript
HTML-centric lightweight JavaScript framework
Rails standard authentication system
Auto defense against Cross-Site Request Forgery attacks
Preventing Mass Assignment attacks
BDD (Behavior-Driven Development) test framework
Clean test data generation
Integration tests for HTTP request/response
Active Job + Sidekiq — process heavy tasks asynchronously
Rails built-in WebSocket — real-time features
Dramatically improve response speed by caching repeated computations
Frontend asset management for JavaScript, CSS, images
My Convention — data logic in Controller, minimal Helpers
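The N+1 access pattern mentioned in the Rails entries above is language-agnostic. A Python sketch with an in-memory stand-in for the database (the loader function names are hypothetical, but the query-count arithmetic is exactly what Rails' includes eliminates):

```python
# Simulating the N+1 query problem with a query-count log.
queries = []

USERS = [1, 2, 3]
POSTS = {1: ["a"], 2: ["b", "c"], 3: []}

def query(sql):
    queries.append(sql)

def posts_for(user_id):          # lazy load: one query per user (the "+N")
    query(f"SELECT * FROM posts WHERE user_id = {user_id}")
    return POSTS[user_id]

def posts_for_all(user_ids):     # eager load: one query covering all users
    query(f"SELECT * FROM posts WHERE user_id IN {tuple(user_ids)}")
    return {uid: POSTS[uid] for uid in user_ids}

# N+1: 1 query for the users plus 1 per user
query("SELECT * FROM users")
naive = {uid: posts_for(uid) for uid in USERS}
n_plus_one = len(queries)        # 4 queries for 3 users

# Eager loading: 2 queries total, regardless of how many users there are
queries.clear()
query("SELECT * FROM users")
eager = posts_for_all(USERS)
batched = len(queries)           # always 2
```

The naive loop scales its query count with the row count, while the eager version stays constant, which is the whole argument for includes.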
Terminal theme and input optimization
Controlling the trade-off between token usage and intelligence level
MCP server, skills, and agent extensions
Create custom agents in .claude/agents
Accelerate workflow with permission pre-approval
Enhanced safety with file/network isolation
Display custom info below the composer
Freely remap all keybindings
Deterministically intervene in the Claude lifecycle
Customize loading spinner verbs
Configure response tone and format
37 settings + 84 environment variables
Pass project-specific context to Claude
Quickly execute features with / commands
Automatic context retention across sessions
Ask side questions without polluting the main conversation
Automatically approving tool usage during long autonomous work sessions
Give instructions to Claude Code by voice
Pass project rules to Cursor
Multi-file automatic editing
Optimized code auto-completion
Pass project rules to Copilot
Use the entire codebase as context
Issue → Plan → Code automatic workflow
CLAUDE.md vs .cursorrules vs copilot-instructions
Give clear instructions to AI coding agents
Work efficiently within token limits
Request tests right after writing code
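"Work efficiently within token limits" above names a chore every agent workflow shares: trimming conversation history to fit a budget. A minimal sketch, assuming a naive whitespace word count as a stand-in for a real tokenizer:

```python
def trim_to_budget(messages, budget):
    """Keep the most recent messages whose combined (naive) token
    count fits the budget, dropping the oldest messages first."""
    def count(msg):
        return len(msg.split())  # stand-in for a real tokenizer

    kept, used = [], 0
    for msg in reversed(messages):       # walk newest to oldest
        cost = count(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order

history = ["one two three", "four five", "six seven eight nine"]
trimmed = trim_to_budget(history, 7)     # oldest message gets dropped
```

Real tools replace the word count with the model's actual tokenizer and often pin system messages so they are never trimmed, but the newest-first budget walk is the same.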
Connectionist learning model inspired by neurons
Parallel processing architecture based on Self-Attention
The stage of learning general knowledge from massive data
Re-training a pre-trained model for specific tasks
Reinforcement Learning from Human Feedback
Direct preference optimization without a reward model
Efficient fine-tuning that trains only a small number of parameters
Maximizing LLM capabilities through prompt design
Improving accuracy by injecting external knowledge through retrieval
Solving complex problems through step-by-step reasoning
LLMs call external tools to perform real-world tasks
Lightening models by converting weights to lower precision
Transferring knowledge from large models to small models
Efficient scaling by activating only needed experts
Small model drafts predictions, large model verifies
AI that understands both images and text together
Generating images by progressively removing noise
Converting text to natural-sounding speech
LLMs autonomously plan and use tools to complete tasks
Standard protocol connecting AI to external tools
Iteratively improving AI systems based on evaluation (Eval)
Making AI act in accordance with human intent and values
A method to improve AI itself using a constitution (principles)
The phenomenon of AI generating plausible but false content
Converting audio into discrete tokens for LLM-like processing
Rule-based → Deep learning pipeline → Token-based → Unified multimodal
How Pre-training and Fine-tuning work in actual code
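The last entry, pre-training followed by fine-tuning, can be shown at toy scale: fit a 1-D linear model on a "general" dataset, then continue training the same weights on a few task-specific examples. The datasets and learning rates here are invented purely for illustration.

```python
def sgd(w, b, data, lr, epochs):
    """Plain SGD on squared error for the model y ≈ w*x + b."""
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# "Pre-training": a larger generic dataset following y = 2x
pretrain = [(float(x), 2.0 * x) for x in range(-5, 6)]
w, b = sgd(0.0, 0.0, pretrain, lr=0.01, epochs=200)

# "Fine-tuning": a handful of task examples with a shifted target,
# y = 2x + 1, continuing from the pre-trained weights instead of zero
finetune = [(0.0, 1.0), (1.0, 3.0), (-1.0, -1.0)]
w, b = sgd(w, b, finetune, lr=0.05, epochs=200)
```

The fine-tuned model keeps the slope it already learned (w ≈ 2) and only has to adjust the bias toward the new target, which is the intuition behind adapting a pre-trained model instead of training from scratch.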