diff --git a/documentation/blog/2025-08-25-mcp-ui-future-agentic-interfaces/index.md b/documentation/blog/2025-08-25-mcp-ui-future-agentic-interfaces/index.md
new file mode 100644
index 000000000000..bda3ad8ee4ce
--- /dev/null
+++ b/documentation/blog/2025-08-25-mcp-ui-future-agentic-interfaces/index.md
@@ -0,0 +1,149 @@
---
title: "MCP-UI: The Future of Agentic Interfaces"
description: Discover how MCP-UI is revolutionizing AI agent interactions by bringing rich, interactive web components directly into agent conversations, making AI more accessible and intuitive for everyone.
authors:
  - ebony
---

The days of endless text walls in AI agent conversations are numbered. What if, instead of reading through paragraphs of product descriptions, you could browse a beautiful, interactive catalog? What if booking a flight seat were as simple as clicking your preferred spot on a visual seat map? This isn't science fiction. It's happening right now with MCP-UI.

In a recent [Wild Goose Case episode](https://www.youtube.com/live/GS-kmreZDgU), we dove deep into MCP-UI with its creators Ido Salomon and Liad Yosef from Monday.com, alongside Block's own Andrew Harvard, to explore how this groundbreaking technology is reshaping the future of agentic interfaces.

## The Problem with Text-Only Interfaces

Let's be honest: we've all been there. You ask an AI agent to help you shop for shoes, and you get back a wall of text with product names, prices, and descriptions. Then you have to copy and paste URLs, open multiple tabs, and basically do all the work yourself. It defeats the purpose of having an AI assistant in the first place.

As Ido put it during our conversation: "I think everyone did something, had a bunch of text, and were like, 'this is terrible, why do I have to type all of that?' and kind of rage-quit ChatGPT."

The reality is that text-based interfaces work fine for early adopters and technical users, but they're not the future. They're certainly not going to work for everyone, including our moms, who are increasingly using AI assistants but shouldn't have to navigate complex text responses.

## Enter MCP-UI: Bridging the Gap

MCP-UI (Model Context Protocol User Interface) represents a fundamental shift in how we think about AI agent interactions. Instead of forcing users to consume everything through text, MCP-UI enables rich, interactive web components to be embedded directly into agent conversations.

The core philosophy is brilliant in its simplicity: **Why throw away decades of web UI/UX expertise when we can enhance it with AI?**

As Liad explained: "We have more than a decade of human-targeted interfaces on the web that are built and perfected for human cognitive limitations and needs, and it doesn't make sense that agents will make us get rid of all of that."

## How MCP-UI Works

At its heart, MCP-UI is both a protocol and an SDK that enables:

1. **Rich UI Components**: Instead of text descriptions, you get interactive catalogs, seat maps, booking forms, and more
2. **Brand Preservation**: Companies like Shopify keep their branding and user experience intact
3. **Seamless Integration**: UI components communicate with agents through secure, sandboxed iframes
4. **Cross-Platform Compatibility**: The same UI can work across different AI agents and platforms

The magic happens through embedded resources in the MCP specification. When you interact with an MCP server that supports UI, instead of returning only text, it can return rich UI components that render directly in your agent interface.
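To make that concrete, here's a minimal sketch of a tool result that carries an embedded UI resource. The field layout follows MCP's embedded-resource shape, but the `ui://shop/catalog` URI, the handler name, and the markup are hypothetical placeholders rather than the definitive MCP-UI wire format:

```javascript
// Hypothetical tool handler on an MCP server. It returns plain text
// plus an embedded HTML resource: hosts that understand MCP-UI render
// the resource, while text-only hosts can fall back to the text part.
async function handleShowCatalog() {
  return {
    content: [
      { type: 'text', text: 'Here is the product catalog.' },
      {
        type: 'resource',
        resource: {
          uri: 'ui://shop/catalog', // ui:// marks this as a renderable component
          mimeType: 'text/html',
          text: '<div id="catalog">…product cards go here…</div>',
        },
      },
    ],
  };
}
```

Because the UI travels as an ordinary MCP resource, a host that doesn't recognize it can simply ignore it, which is a big part of what makes the cross-platform story work.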
## Real-World Examples in Action

During our demo, we saw some incredible examples of MCP-UI in action:

### Shopping Made Visual

Instead of reading through product descriptions, users saw a beautiful Shopify catalog with images, prices, and interactive elements. Clicking on items added them to a cart, just like a regular e-commerce experience, but embedded seamlessly in the AI conversation.

### Travel Planning Reimagined

We watched users select airplane seats by clicking on a visual seat map, then have the agent automatically look up weather information for their destination cities, all without leaving the conversation or typing additional commands.

### Restaurant Discovery

The demo showed how users could browse local restaurants with rich cards showing photos, ratings, and menus, then place orders directly through interactive interfaces, all while maintaining the conversational flow with the AI agent.

## The Technical Foundation

From a technical perspective, MCP-UI prioritizes security and isolation. UI components are rendered in sandboxed iframes that can only communicate with the host through `postMessage`. This ensures that third-party UI code can't access or manipulate the parent application.

The current implementation supports several content types:

- **External URLs**: Existing web apps embedded in iframes
- **Raw HTML**: Custom HTML components with CSS and JavaScript
- **Remote DOM**: UI rendered in separate workers for enhanced security

For developers, getting started is surprisingly simple. As Andrew demonstrated, you can begin with something as basic as:

```javascript
return createUIResource({
  type: 'html',
  content: '<p>Hello from MCP-UI!</p>' // illustrative markup; any HTML string works here
});
```
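Communication in the other direction flows through the same sandbox. Here's a rough sketch of how a host might receive actions from an embedded component; the `{ type, payload }` message shape and the `uiFrame` and `agent` objects are assumptions for illustration, so check the MCP-UI SDK for the exact contract:

```javascript
// Host side: listen for actions posted by the sandboxed UI iframe.
window.addEventListener('message', (event) => {
  // Only accept messages from the iframe we created.
  if (event.source !== uiFrame.contentWindow) return;

  const { type, payload } = event.data ?? {};
  if (type === 'tool') {
    // The component asked the agent to run a tool (e.g. adding an item
    // to the cart), so the conversation and the UI stay in sync.
    agent.callTool(payload.toolName, payload.params);
  }
});
```

Because the iframe can only talk through `postMessage`, the host remains in control of what an embedded UI is actually allowed to do.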