Say Goodbye to Glue Code: The Unified Platform Approach to AI Apps
Meta Description: Discover how a unified platform with Generative UI helps developers eliminate glue code and rapidly build dynamic, AI-native user interfaces in real time.
Introduction
Building AI-native software is hard work – not because of training models or tweaking algorithms, but because of the glue code that holds everything together. In traditional AI app development, developers spend enormous effort wiring together LLM UI components, backends, and frontends. In fact, industry analysts estimate that up to 70% of development time is spent on integration and “wiring things together,” rather than on core business logic. This connective glue code doesn’t add business value; it’s the tedious plumbing required to make an AI system work end-to-end. In machine learning projects, the actual ML code can be as little as 5%, with the remaining 95% being glue code. The result? Slower development, higher maintenance burdens, and fragile user experiences.
Nowhere is this pain more evident than in AI-powered user interfaces. Teams often pour months into coding custom frontends for AI agents, only to deliver static, clunky interfaces that feel like command lines. Users today expect more – interactive chats, dynamic dashboards, and personalized visualizations – yet traditional UIs can’t easily adapt to the unpredictable outputs of large language models (LLMs). The friction between powerful AI backends and rigid UIs is holding back AI’s true potential. It’s time to bridge this gap and say goodbye to glue code.
Enter Generative UI. This emerging approach promises to build UI with AI itself – letting AI models directly generate dynamic interfaces in real time. A new breed of platforms is making this possible by connecting LLMs straight to frontends. In this blog, we’ll explore how Generative User Interface technology works, why it’s a game-changer for AI app development, and how unified platforms like Thesys C1 enable a frontend for AI agents without the usual glue code. We’ll see how frontend automation through Generative UI can turn natural language prompts into live, LLM-driven product interfaces. The goal: faster development, adaptive user experiences, and finally freeing developers from the glue code grind.
The Hidden Cost of Glue Code in AI Development
Every innovative AI application today carries a hidden tax: the glue code required to integrate AI outputs into a usable product. Glue code refers to all the boilerplate and connecting logic that developers write to link systems together – fetching model results, parsing them, calling APIs, updating UIs, and so on. It’s the “backstage crew” of software, invisible to users but onerous to maintain. In AI projects, glue code explodes because of the complexity of connecting intelligent models with data sources, tools, and interfaces. A seminal Google paper observed that in mature ML systems, only ~5% is “ML code” while at least 95% is glue code (Sculley et al., 2015). This means the vast majority of engineering effort goes into code that does not contribute to the AI’s core logic – it merely plumbs components together.
This imbalance has real consequences. As Forrester analysts noted, developers waste significant time on “repetitive tasks, boring design patterns, and custom code written over and over” to wire up frontends and backends. The creative business logic is often the smallest part of the project, drowned out by integration work. Glue code accumulates into what some call the “Glue Monster,” slowing development velocity and increasing technical debt. Each new feature or tool requires yet more glue, leading to exponentially growing complexity. Maintaining these brittle connections becomes a nightmare – APIs change, team members leave with tribal knowledge, and suddenly even minor updates break the system. In short, glue code is a productivity killer, diverting developer time to plumbing instead of innovation.
Nowhere is this more evident than in the UI layer for AI applications. Traditional frontends are not designed to handle the dynamic, unpredictable outputs of AI. Developers resort to writing translation layers (parsing model responses into UI updates), state management code, and custom UI logic for every possible model output. Consider an AI agent that can output a chart, a form, or a text answer depending on context – implementing a UI for all these responses means writing and maintaining glue code between the AI and the interface for each case. As one expert put it, “the code you really need to figure out is the glue code between the APIs” when integrating AI, and this has become “increasingly AI territory” to automate. The current approach forces developers to act as go-betweens, manually translating AI capabilities into user-facing features.
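To make that pattern concrete, below is a minimal TypeScript sketch of such a translation layer. The response shape and the render helper are hypothetical, invented for this example; the structure (parse the model's output, branch on its type, hand-build markup for each case) is the glue code teams end up writing and maintaining for every new output type the model can produce.

```typescript
// Hypothetical glue code: translating raw model output into UI, one branch per case.
// The ModelOutput shape is an assumption made up for this illustration.

type ModelOutput =
  | { kind: "text"; content: string }
  | { kind: "chart"; series: { label: string; value: number }[] }
  | { kind: "form"; fields: { name: string; label: string }[] };

// Every new output type the model can produce needs another branch here,
// plus parsing, validation, and a hand-built fragment to render it.
function renderModelOutput(raw: string): string {
  const output = JSON.parse(raw) as ModelOutput; // hope the model kept the schema
  switch (output.kind) {
    case "text":
      return `<p>${output.content}</p>`;
    case "chart":
      return `<div class="chart">${output.series
        .map((point) => `<span>${point.label}: ${point.value}</span>`)
        .join("")}</div>`;
    case "form":
      return `<form>${output.fields
        .map((field) => `<label>${field.label}<input name="${field.name}" /></label>`)
        .join("")}</form>`;
    default:
      return "<p>Unsupported response</p>"; // silent fallback, a classic glue-code smell
  }
}
```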
The result is often disappointing. Enterprise teams have spent months building frontends for AI solutions, only to end up with “static, inconsistent, and often disengaging user experiences” that feel tacked on. Users interact with a chat box or a JSON output, far from the rich UI they expect. This glue-code-driven development not only slows time to market but also limits AI adoption – because an AI that’s accessible only through a clunky UI is less likely to delight users. The pain is clear: to unlock AI’s value, we must streamline the connection between LLMs and the user interface. That’s exactly what Generative UI aims to do.
From Static Interfaces to Generative UI
How can we eliminate all that glue code? By letting the AI help build the interface itself. Generative UI (Generative User Interface) is a novel approach where the UI is not fully pre-designed by developers, but dynamically generated by AI in real time. In a Generative UI, an LLM doesn’t just produce text or an API response – it produces actual UI components (buttons, forms, charts, layout containers, etc.) as needed, on the fly. In other words, the interface adapts to the user and context at runtime, driven by the AI’s output. This is a radical shift from static screens defined upfront.
In a Generative UI paradigm, developers no longer hardcode every possible screen or workflow. Instead, they provide a design system or set of components, and the AI model decides how to assemble them to fulfill the user’s intent. For example, if a user asks an AI agent to “show sales trends for last year,” the system could generate a line chart component with the relevant data – even if no such specific screen was programmed. The LLM agent user interface materializes in response to the prompt. As Nielsen Norman Group researchers describe, generative UIs leverage AI to create “dynamic, personalized interfaces in real-time,” adapting to user interactions and goals. The UI essentially designs itself around the user’s needs, rather than presenting a one-size-fits-all layout.
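As a concrete illustration, the model's output in such a system is a structured component spec rather than prose. The schema below is an assumption invented for this example (real platforms define their own formats), but it shows the idea: the AI emits a tree of components drawn from the design system, and the runtime renders it.

```typescript
// Illustrative only: a hypothetical spec an LLM might emit for the
// "show sales trends for last year" request. The schema and component
// names are assumptions, not any platform's actual format.

interface UISpec {
  component: string;
  props: Record<string, unknown>;
  children?: UISpec[];
}

const salesTrendsView: UISpec = {
  component: "Card",
  props: { title: "Sales Trends (Last Year)" },
  children: [
    {
      component: "LineChart",
      props: {
        xAxis: "month",
        yAxis: "revenue",
        data: [
          { month: "Jan", revenue: 120_000 },
          { month: "Feb", revenue: 135_000 },
          // ...remaining months would come from the data source at runtime
        ],
      },
    },
    { component: "Button", props: { label: "Drill into Q4" } },
  ],
};
```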
This dynamic approach yields several benefits. First, personalization is built-in – each user can get a custom interface tailored to their context and preferences. No more generic dashboards for everyone; the UI can literally be different for a novice versus an expert user, or adapt as a user’s needs change. Second, it enables real-time adaptive UI behavior. The interface can change moment to moment (showing new options, hiding irrelevant elements) based on the AI’s understanding of the situation. This creates a smoother, more intuitive experience, often described as outcome-oriented design – the UI focuses on helping the user achieve their goal, not on static menus. As one Thesys blog put it, when interfaces adapt to each user, “friction goes down and satisfaction goes up” (Generative UI – The Interface that builds itself, just for you.).
Crucially for developers, Generative UI reduces the manual scaffolding that was previously required. A recent Thesys post notes that instead of hardcoding dozens of screens, developers can let an AI model “generate live UI components based on prompt outputs.” This frontend automation “not only accelerates iteration but also enables rich personalization”. In practice, that means a huge reduction in glue code – the AI handles a lot of the conditional UI logic that a developer would otherwise write. The UI becomes an extension of the model’s output. For instance, if the LLM decides the user needs to input additional info, it can generate a form field on the spot, saving the developer from predefining that interaction. Generative UI thus addresses the burning question of “how to generate UI from prompt”: it allows UIs to be created from natural language descriptions in real time.
It’s important to clarify that Generative UI is more than just “prompt-to-UI” design tools. Some tools today can take a prompt and spit out code or a static design (often called prompt-to-design or prompt-to-code). Those assist developers during the design phase, but the output is still a fixed interface. Generative UI, by contrast, means the AI continuously generates and updates the interface at runtime (Generative UI vs Prompt to UI vs Prompt to Design). It’s not one-off code generation; it’s an ongoing, live dialogue between the AI and the user interface. This makes the application truly AI-native – the UI is intertwined with AI behavior. As a result, building applications with Generative UI feels less like traditional UI programming and more like orchestrating a conversation: you specify the components and rules, and the AI speaks the interface into existence.
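The distinction can be sketched as a loop: generation is not a build step that runs once, but a cycle that repeats on every user event. The function names below (generateUISpec, renderSpec) are placeholders for whatever model call and renderer a given platform provides, not a real API.

```typescript
// A minimal sketch of the runtime loop that separates Generative UI from
// one-off prompt-to-code tools. All names here are placeholders.

interface UISpec {
  component: string;
  props: Record<string, unknown>;
  children?: UISpec[];
}

async function generateUISpec(conversation: string[]): Promise<UISpec> {
  // Placeholder: call the LLM with the conversation and parse its structured UI output.
  return { component: "Text", props: { content: `(${conversation.length} turns so far)` } };
}

function renderSpec(spec: UISpec): void {
  // Placeholder: map the spec onto real components from the design system.
  console.log("render", spec.component, spec.props);
}

export async function runInterface(conversation: string[], userEvents: AsyncIterable<string>) {
  // Unlike a one-off code generator, this never "finishes": every user event
  // can produce a fresh interface.
  renderSpec(await generateUISpec(conversation));
  for await (const event of userEvents) {
    conversation.push(event);
    renderSpec(await generateUISpec(conversation));
  }
}
```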
The Unified Platform Approach: Connecting LLMs to Dynamic UIs
Achieving Generative UI in practice requires a new kind of infrastructure – one that tightly unifies the AI logic with the frontend. This is where platforms like C1 by Thesys come in. C1 is described as the world’s first Generative UI API, a unified platform that lets developers plug in an LLM on one side and get an interactive UI on the other. In essence, C1 serves as an AI frontend API: instead of returning text, it returns UI components and layouts that can be rendered directly in an application. This eliminates the usual layers of glue code between an AI model and a user interface.
C1 by Thesys eliminates glue code by handling the full stack of UI generation – from interpreting user intent, to designing the interface, to rendering it live. The diagram illustrates how C1 serves as a unified layer connecting LLM outputs to front-end components, automating what used to be manual integration work.
With C1, developers no longer have to manually code the interface responses for each AI output. They simply call the C1 API with a prompt (just as you would call an LLM service), and the response includes structured UI instructions. Under the hood, the platform’s LLM interprets the prompt and generates contextually relevant UI components on the fly. These could be anything from a button set, to a form asking for more info, to a data visualization – whatever the AI deems appropriate. The C1 client library (e.g. a React SDK) then takes those UI specs and renders actual interactive elements in the app. In real time, the user gets a dynamic interface, and all the developer had to do was provide the prompt and display the result.
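A hedged sketch of that flow in TypeScript is shown below. The endpoint, model identifier, environment variable, package, component, and prop names are assumptions made for illustration (consult the Thesys docs for the actual API); the point is the shape of the integration: send a prompt, receive structured UI, hand it to a renderer.

```tsx
// Sketch only: names marked "assumed" are not taken from official documentation,
// and the server and client parts would normally live in separate files.

// --- Server side: call the Generative UI API like any chat-completions endpoint ---
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.thesys.dev/v1", // assumed endpoint
  apiKey: process.env.THESYS_API_KEY,   // assumed environment variable name
});

export async function getUIResponse(prompt: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "c1-latest", // assumed model identifier
    messages: [{ role: "user", content: prompt }],
  });
  // The response carries structured UI instructions instead of plain prose.
  return completion.choices[0].message.content ?? "";
}

// --- Client side: hand the returned spec to the SDK's renderer ---
// Package, component, and prop names below are likewise assumptions.
import { C1Component } from "@thesysai/genui-sdk";

export function AgentView({ uiSpec }: { uiSpec: string }) {
  // The renderer maps the structured response onto live, interactive components.
  return <C1Component c1Response={uiSpec} />;
}
```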
This unified platform approach means the traditional boundaries between backend logic and frontend code start to fade. There’s no need to write separate parsing or mapping code – “no glue code, no custom frontend,” as Thesys CEO Rabi Shanker Guha highlights on LinkedIn (Thesys LinkedIn post). The platform handles integrating with whichever LLM is used, managing state, and even calling external tools if needed. For example, C1 supports tool integrations via function calling, so the LLM can not only create UI elements but also trigger actions (e.g. fetch data, submit forms) seamlessly within the generated interface. All of this happens through unified APIs, so the developer is abstracted from low-level details (no dealing with HTML events or API glue). In short, C1 acts as the brain and the UI architect in one, letting developers focus on high-level logic.
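As a sketch of how a tool might be exposed to the model, the example below uses the familiar OpenAI-style function-calling schema. Whether C1 expects this exact shape is an assumption, and the fetch_sales_data tool is hypothetical; the point is that the model can both generate interface elements and trigger actions through the same call.

```typescript
// Sketch only: endpoint, model name, and tool are assumptions for illustration.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.thesys.dev/v1", // assumed endpoint
  apiKey: process.env.THESYS_API_KEY,   // assumed environment variable name
});

const tools = [
  {
    type: "function" as const,
    function: {
      name: "fetch_sales_data", // hypothetical tool defined by the application
      description: "Fetch monthly sales figures for a given year",
      parameters: {
        type: "object",
        properties: { year: { type: "integer" } },
        required: ["year"],
      },
    },
  },
];

export async function askWithTools(prompt: string) {
  // The model can choose to call fetch_sales_data, then fold the result into
  // the interface it generates (for example, a chart plus a drill-down form).
  return client.chat.completions.create({
    model: "c1-latest", // assumed model identifier
    messages: [{ role: "user", content: prompt }],
    tools,
  });
}
```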
Thesys C1 is particularly geared towards building frontend for AI agents and copilots – scenarios where the UI needs to evolve with complex agent behaviors. It can generate UI for essentially any use case and data source in real time. Early adopters have used it to create everything from AI dashboard builders (where the AI produces analytics dashboards on demand) to adaptive chat interfaces that inject relevant charts or forms based on conversation context. In fact, more than 300 teams are already using Thesys tools to deploy adaptive AI interfaces, indicating how quickly this unified approach is catching on.
Another strength of the unified platform is consistency and speed. Because the AI is “thinking like a designer” and following a defined design system or component library, the generated UIs remain consistent with your brand and UX standards. Yet they are created instantly, without waiting on a design-develop cycle. As Thesys notes, C1 “integrates in minutes, with just two snippets of code,” and works with any modern framework. This suggests that even existing applications can embrace Generative UI by simply dropping in the C1 component – no large refactors needed. The time to value is dramatically shorter when you don’t have to hand-code every interface. In an era where AI capabilities are evolving fast, that agility is crucial. Development teams can ship new AI-driven features in a fraction of the time, because the front-end is essentially auto-generated in response to the model’s outputs.
Crucially, unified Generative UI platforms also handle a lot of complexity behind the scenes that developers used to manage manually. Thesys’s documentation highlights that C1 takes care of things like real-time UI rendering, state management, and even compliance and security considerations of UI generation. No need to build a real-time WebSocket channel for updates – it’s built-in. No need to sanitize model output into safe UI – the platform uses predefined components that ensure reliability. By abstracting away these lower-level concerns, a platform like C1 lets developers concentrate on defining the what (the user’s intent, the available components) rather than the how of UI construction. It’s a profound shift: the UI becomes a collaborative output of your code and the AI, orchestrated by the platform.
(For a deeper technical dive into how Generative UI works and how to implement C1, check out the Thesys documentation 📖 (Thesys Docs ↗) which provides guides and examples on building with Generative UI.)
Benefits of Building AI Apps with Generative UI
Adopting a unified Generative UI approach yields tangible benefits across both user experience and development workflow:
- Faster Development and Iteration: With the heavy lifting of UI creation done by the AI, developers can spin up new interfaces or adjust existing ones in a fraction of the time. There’s no need to code every interface variation. This accelerates time-to-market significantly. One McKinsey study found that integrating AI throughout the development cycle allows teams to “spend more time on higher-value work and less on routine tasks,” ultimately speeding up product launches and improvements (Gnanasambandam et al., 2025). Generative UI exemplifies this by automating routine frontend coding. Teams can experiment rapidly – if the AI UI isn’t quite right, you can tweak the prompt or system instructions and see a new iteration instantly, without a full development cycle (see the sketch after this list).
- Reduced Maintenance & Technical Debt: No more brittle glue code to maintain. When the UI logic is largely handled by the platform and model, there’s less custom integration code that can break with updates. This reduces bugs and regressions. It also means fewer points of failure – the unified platform is thoroughly tested to handle UI generation, whereas hand-written glue code is often error-prone. As Gartner has noted in the context of low-code trends, eliminating custom code can cut maintenance effort and costs dramatically (Smith, 2023). By saying goodbye to glue code, organizations can avoid the “Glue Monster” that “saps operational velocity...and delays projects”. Instead, they rely on a robust system to manage integrations cleanly.
- Dynamic, Adaptive User Experiences: Generative UI enables real-time adaptive interfaces that respond to user needs. This leads to genuinely engaging and intuitive UX for AI applications. Rather than forcing users into a static workflow, the interface can guide them interactively. For example, an AI sales analytics app built with generative UI might show a chart, then automatically follow up with a form to drill deeper if it “senses” the user’s intent to explore. Such LLM-driven product interfaces keep users in flow, as if the software understands what they need next. Early evidence suggests this personalization drives higher user satisfaction and adoption. As Thesys reported, when users feel an app “just gets me,” they are more likely to stick around and stay engaged (Generative UI – The Interface that builds itself, just for you.). In business terms, adaptive UIs can boost retention and loyalty by providing a tailored experience for each user.
- Empowering Designers and Product Teams: Generative UI doesn’t eliminate the role of design – it augments it. Designers define the component library and style constraints (the “LEGO blocks,” so to speak), and the AI assembles them. This frees designers from having to foresee every interaction upfront. It also opens the door to outcome-oriented design; designers focus on what the user should achieve, and let the system handle when and how to present interface elements. As a result, product teams can deliver features that would have been too costly to build manually. Need a quick dashboard for a niche data set? The AI can generate it on demand. Generative UI thus acts as an AI UX tool in the designer’s toolkit, automating the tedious parts of UI implementation. It’s telling that even UX thought leaders like Jakob Nielsen see adaptive, AI-built UIs as a major inflection point in design (Moran & Gibbons, 2024).
- Scalability and Future-Proofing: A unified Generative UI platform is inherently scalable and adaptable. Because the UI is generated from high-level instructions, the same system can easily extend to new features or models. If you swap in a more powerful LLM or integrate a new data source, you don’t have to rewrite the whole front end – the generative system adjusts what it presents. This scalability also means organizations can reuse the approach across multiple products (e.g., providing a consistent AI UI across different applications). Additionally, as AI models improve in reasoning and context-handling, the generative interfaces will become even more sophisticated automatically. Embracing this approach now effectively future-proofs your AI product’s frontend. You’re building on an architecture that can evolve with the rapidly advancing AI capabilities, rather than locking yourself into hard-coded screens.
- Lower Barrier to Entry for Users: By delivering more natural, flexible interfaces, AI apps become easier for non-technical users to adopt. A Generative User Interface can let users interact in plain language and receive visual, actionable responses (charts, forms, etc.), as opposed to requiring users to interpret raw model output. This lowers the learning curve and makes AI-driven tools accessible to a broader audience. For example, an AI dashboard builder could let a business user simply ask questions and get a tailored dashboard, without needing a data analyst to build it. Such democratization of AI is a key promise of generative UIs – the interface is no longer a hurdle, but a helper. When AI is embedded into the UI in a user-centric way, organizations can realize value from AI faster, because end-users actually embrace the solutions.
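To make the iteration point from the first benefit above concrete, here is a small sketch: changing one system instruction yields a different interface, with no frontend code rewritten. generateInterface and the instruction strings are placeholders for whatever Generative UI call a platform provides.

```typescript
// Sketch only: generateInterface stands in for a real Generative UI API call.

async function generateInterface(systemInstruction: string, userPrompt: string): Promise<string> {
  // Placeholder: in practice this would call the platform and return a UI spec.
  return `<ui for "${userPrompt}" under "${systemInstruction}">`;
}

export async function compareIterations() {
  // Iteration 1: a dense, analyst-oriented interface.
  const v1 = await generateInterface(
    "Prefer data tables and advanced filters.",
    "Show sales trends for last year"
  );

  // Iteration 2: tweak one instruction and regenerate; no UI code was rewritten.
  const v2 = await generateInterface(
    "Prefer simple charts with plain-language summaries.",
    "Show sales trends for last year"
  );

  console.log(v1, v2);
}
```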
Conclusion
The era of painstakingly stitching together UIs for AI apps is coming to an end. Glue code – that unsung, onerous layer between AI and users – has long been the bane of AI software development, slowing down projects and bloating engineering effort. But with the rise of Generative UI and unified platforms, we finally have a way out. By letting AI models directly generate and manage the user interface, platforms like Thesys C1 enable developers to build UI with AI rather than against it. This unified platform approach connects LLM brains to dynamic, interactive frontends in real time, eliminating the need for custom UI glue code.
The implications are exciting: AI products can be delivered faster, with interfaces that are far more engaging and adaptive than traditional apps. Developers and designers can focus on high-level experience and logic, while the generative system handles the tedious UI details. Users benefit from interfaces that feel alive – UIs that shape themselves to fit the user’s needs, whether it’s a conversational agent surfacing buttons and forms, or an analytics tool creating visualizations on the fly.
In short, saying “goodbye to glue code” means saying hello to a new paradigm of AI-native user interfaces. It’s a shift toward software that is smarter and more user-centric by design, where the front end is no longer a static afterthought but an integral, intelligent part of the application. As more teams adopt this unified approach, we’ll see AI capabilities translated into user value faster and more seamlessly than ever before. The technology and tools are here today – Generative UI is no longer just a buzzword, but a practical strategy to build the next generation of AI-powered products. It’s time to embrace it and leave the glue code behind.
References:
- Lo Giudice, Diego, et al. “Prepare For AI That Learns To Code Your Enterprise Applications (Part 2).” Forrester Blog, 8 July 2021.
- Hsiao, Christina. “Writing Glue Code Is Slowing You Down.” Dataiku Blog, 23 Nov. 2022.
- Sculley, D. et al. “Hidden Technical Debt in Machine Learning Systems.” NIPS, 2015.
- Moran, Kate, and Sarah Gibbons. “Generative UI and Outcome-Oriented Design.” Nielsen Norman Group, 22 Mar. 2024.
- Krill, Paul. “Thesys Introduces Generative UI API for Building AI Apps.” InfoWorld, 25 Apr. 2025.
- Deshmukh, Parikshit. “Generative UI – The Interface that Builds Itself, Just for You.” Thesys Blog, May 2025.
- Deshmukh, Parikshit. “The Future of Frontend in AI Applications: Trends & Predictions.” Thesys Blog, 3 June 2025.