Case Study

Building TestMagicks: A Zero-Backend AI Content Production Agent for Exam Prep Teams

By Drooid Team  ·  7 min read

TestMagicks needed an AI content production agent to help EdTech teams generate practice exams, study guides, and learning materials at scale. The solution: a completely browser-based AI application with zero backend infrastructure, multiple LLM providers, and live document export—all delivered as a single 60KB HTML file.

The Problem: Content Creation at EdTech Scale

EdTech companies creating exam prep materials face a brutal bottleneck: content production is manual, slow, and expensive. TestMagicks was looking for a way to let their teams generate full practice exams, study guides, flashcard decks, and learning modules in minutes instead of hours. Traditional solutions would have required hiring more content writers or licensing expensive educational content platforms. Neither approach was economically viable.

The real need was an AI content production agent that their teams could use directly—something intuitive, fast, and under their control. They wanted to avoid vendor lock-in, keep all their data private, and maintain flexibility to experiment with different AI models. A backend-heavy solution would have added complexity and cost. They needed something different.

Zero-Backend Architecture: Why It Matters

Drooid built TestMagicks as a completely serverless AI content production agent that runs entirely in the browser. Everything happens on the user's machine—no data touches external servers except the direct API calls to LLM providers. This approach unlocked several critical advantages: complete data privacy, near-zero operating cost, no vendor lock-in, and the freedom to experiment with different AI models.
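To make the "no intermediary server" idea concrete, here is a minimal sketch of what a direct browser-to-provider call can look like. The helper name `buildChatRequest` is illustrative, not TestMagicks' actual code, and the endpoint shown is OpenAI's public chat completions API; other providers differ in path and headers.

```javascript
// Hypothetical sketch: calling an LLM provider directly from the browser.
// buildChatRequest is an illustrative helper, not the app's real code.
function buildChatRequest(apiKey, model, userPrompt) {
  return {
    url: "https://api.openai.com/v1/chat/completions", // OpenAI chat endpoint
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // The key travels only from the user's machine to the provider.
        "Authorization": `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: userPrompt }],
        stream: true, // stream tokens back for a responsive chat UI
      }),
    },
  };
}

// In the browser, the whole round trip is one fetch, no backend in between:
// const { url, options } = buildChatRequest(key, "gpt-4o", "Draft 5 practice questions");
// const res = await fetch(url, options);
```

Because the request is assembled and sent client-side, there is no server that could log prompts or keys in transit.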

The Architecture: AI-to-AI Development

The entire application is a single HTML file—60KB of compressed JavaScript, CSS, and HTML that includes login management, a full chat interface, scheduling capabilities, and real-time document export. Drooid used AI to build the AI content production agent itself. This kind of meta-development—using LLMs to code the application that orchestrates LLMs—accelerated development and kept the scope tightly focused.

The app manages multiple concurrent LLM API calls, handles streaming responses, stores conversation history, and generates formatted documents on the fly. It includes intelligent routing logic to handle different content types—exams need different structure than study guides, which differ from flashcard decks. A single prompt framework adapts to each content type, ensuring consistency across outputs.
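The routing idea—one prompt framework, specialized per content type—can be sketched roughly like this. The template names and wording are assumptions for illustration; the actual prompts are TestMagicks' own.

```javascript
// Illustrative content-type routing: one framework, adapted per output type.
// Template contents are hypothetical, not the production prompts.
const CONTENT_TEMPLATES = {
  exam: {
    structure: "numbered questions with 4 answer choices and an answer key",
    tone: "formal exam register",
  },
  studyGuide: {
    structure: "topic headings with bullet-point summaries and worked examples",
    tone: "explanatory",
  },
  flashcards: {
    structure: "front/back pairs, one concept per card",
    tone: "terse",
  },
};

// Build the final prompt by merging the shared framework with the
// type-specific structure and tone.
function buildPrompt(contentType, topic, count) {
  const t = CONTENT_TEMPLATES[contentType];
  if (!t) throw new Error(`Unknown content type: ${contentType}`);
  return `Generate ${count} ${contentType} items on "${topic}". ` +
         `Format: ${t.structure}. Tone: ${t.tone}.`;
}
```

Keeping the shared framing in one place is what makes outputs consistent across exams, guides, and decks: only the structure and tone vary.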

Real-World Impact: Transforming Content Production Workflows

TestMagicks teams now generate a month's worth of practice exam materials in a day. A workflow that previously required hiring freelance content writers or spending 40 hours manually creating materials now takes 2-3 hours with the AI content production agent doing the heavy lifting. The output quality rivals hand-written content, with consistent structure, pedagogically sound explanations, and appropriate difficulty scaling.

The tool also enables experimentation at scale. Content teams can quickly test different question formats, difficulty levels, and explanation styles without committing significant time. They iterate based on student performance data, refining the AI's prompts to generate better learning outcomes. This rapid feedback loop would be impossible with traditional manual creation.

Multi-LLM Provider Support Built In

TestMagicks users can switch between OpenAI, Anthropic Claude, Google Gemini, and Groq depending on cost, speed, and quality preferences for different content types. Some content works better with GPT-4's reasoning. Other content benefits from Claude's structured output capability. Groq excels at fast, cost-efficient generation of bulk materials. This flexibility is baked directly into the browser application—no backend configuration required.
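A provider registry like the following sketch is one simple way to express that routing. The strength tags mirror the preferences described above (GPT-4 for reasoning, Claude for structured output, Groq for fast bulk generation); everything else, including the selection logic, is an illustrative assumption.

```javascript
// Hypothetical provider registry keyed by task strengths.
// Tags follow the case study's stated preferences; Gemini's strength is
// left unspecified because the text doesn't single one out.
const PROVIDERS = {
  openai:    { strengths: ["reasoning"] },
  anthropic: { strengths: ["structured-output"] },
  gemini:    { strengths: [] },
  groq:      { strengths: ["bulk", "speed"] },
};

// Pick the first provider whose strengths match the task tag,
// falling back to a default when nothing matches.
function pickProvider(taskTag, fallback = "openai") {
  for (const [name, p] of Object.entries(PROVIDERS)) {
    if (p.strengths.includes(taskTag)) return name;
  }
  return fallback;
}
```

Because the registry is plain client-side data, swapping or re-ranking providers is a one-line change with no backend redeploy.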

Users authenticate directly with each provider using API keys stored locally in their browser. No credential management backend needed. This architecture means TestMagicks never has access to user credentials, reducing security risk and regulatory complexity.
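Local key storage of this kind typically rests on the browser's Web Storage API. The sketch below shows the pattern under that assumption; the key prefix and helper names are invented for illustration.

```javascript
// Sketch of browser-only credential storage via localStorage.
// Keys stay on the user's machine; no server ever receives them.
const KEY_PREFIX = "testmagicks.apiKey."; // hypothetical namespace

// The storage parameter defaults to the browser's localStorage but can be
// replaced (e.g. with an in-memory stub) outside a browser.
function saveApiKey(provider, key, storage = globalThis.localStorage) {
  storage.setItem(KEY_PREFIX + provider, key);
}

function loadApiKey(provider, storage = globalThis.localStorage) {
  return storage.getItem(KEY_PREFIX + provider);
}
```

One caveat worth noting: localStorage is plaintext and scoped per origin, so this design trades server-side credential risk for trust in the user's own machine, which is exactly the trade the zero-backend model makes.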

Why Browser-First Won

A traditional approach would have built a backend API layer, authentication service, database, document generation pipeline, and deployed it all to cloud infrastructure. That would have taken longer, cost more to operate, and created ongoing maintenance burden. The zero-backend approach solved the same problem in weeks with near-zero operating costs and complete user data privacy.

The only external dependencies are the LLM APIs themselves. Everything else—the interface, the document generation, the conversation management, the scheduling—runs on the user's machine. This is the future of AI-powered applications: minimal backend, maximum control, and user-owned infrastructure.

The Takeaway

Building the best AI content production agent didn't require building a complex platform. It required understanding that modern browser capabilities are powerful enough to handle real AI workflows. By choosing a zero-backend approach and focusing on a single, well-defined use case—helping EdTech teams produce materials faster—Drooid delivered TestMagicks in weeks. The result is an application that's faster, more private, cheaper to operate, and more flexible than any traditionally architected alternative.

Ready to prove AI works for your business?

We embed with your team, build a focused POC, and show real ROI — before you commit to scaling.

Get in touch →