How We Translated Our Entire Platform with AI (And What Agile Had to Do with It)
In late 2025, the Agile Academy platform existed in two languages: German and English. Four months later, it was available in eight. We didn't hire a translation agency. We used AI. And agile principles made all the difference.
TL;DR
Problem: The Agile Academy platform (Kirby CMS + Ruby on Rails + backend) existed in only 2 languages, but had a growing international audience and needed to be present in more markets.
Approach: We used AI (ChatGPT, Claude, Claude Code) to translate the entire ecosystem iteratively, one language at a time, improving the process after each launch.
Solution: Specialized AI agents with a shared glossary, translation prompt, and verification scripts, replacing manual copy-paste with automated, validated translation pipelines across both platforms.
Outcome: 6 new languages launched in under 3 months. What took 1 month manually (Spanish) took 2 days by the final iteration (Polish).
2 → 8 languages
440+ articles per language
1 month → 2 days per language launch
The Starting Point
One platform, two codebases, two languages, and a growing international audience.
The Agile Academy is not a single application. It is an ecosystem:
- Kirby CMS Knowledge Base: 440+ articles across 10 sections, each with complex Layout JSON, cross-references, and URL routing per language
- Ruby on Rails Platform: Training booking pages, e-learning courses, checkout flows, product pages, with locale files and views
- Backend Systems: Mailing templates, invoicing, transactional emails
- UI Translations: Menus, footers, breadcrumbs, buttons, and the language switcher itself, across both platforms
All of this existed only in German and English. Meanwhile, we could not serve curated content to the non-English, non-German speakers who made up a growing share of our audience.
Translating this wasn't just a content task. Each language required:
- New routes in Kirby's PHP config (article overviews, section slug rewrites, redirects)
- Section-level content files with localized URL slugs
- Overview pages, landing pages, and structural pages (legal, events, author pages)
- Ruby on Rails (ROR) locale files for menus, footer, training pages, emails
- Mailing templates in the backend
- Cross-system consistency: the same "articles" word in URLs, the same legal page links, the same menu structure
A traditional translation agency could handle the article text. But nobody was going to write our PHP routes, fix our JSON layouts, or update our Rails locale files. We needed a different approach.
Iteration 1: Spanish
Copy-paste, manual code changes, and a month of grinding. The waterfall approach.
Spanish was our first expansion language. The process looked like this:
- Copy an article's text into ChatGPT or Claude
- Paste the translation back into a new file
- Manually fix the Layout JSON (hoping not to break it)
- Repeat 440 times for Kirby alone
- Separately translate ROR locale files, views, mailing templates
- Manually add routes, section files, overview pages in Kirby
- Manually update locale files and routes in ROR
- Manually create mailing templates in the backend
It took about a month. Only the most important pages were translated in both systems. The quality was inconsistent. Some articles used formal tone, others informal. Agile terms were sometimes translated, sometimes not. Cross-references between articles pointed to non-existent pages.
The infrastructure work was even more painful. Every route had to be written by hand. Every section file had to be created from scratch. URL slugs were inconsistent. The ROR and Kirby platforms had to stay in sync (same menu links, same footer, same legal pages), and keeping track of what was done where was a spreadsheet nightmare.
In agile terms, this was a classic waterfall delivery: one big batch, limited feedback loops, no automation, no shared standards. We shipped it, but we knew we couldn't scale this approach to six more languages. The most damaging errors:
- Cross-references between articles led to 404 errors because translated articles linked to pages that didn't exist yet or had different URL slugs
- Layout JSON got mangled during copy-paste: broken quotes, missing brackets, entire articles that wouldn't render
- Inconsistent tone: some articles used formal "usted", others informal "tú", with no shared standard
- Agile terms like "Sprint", "Scrum Master", and "Product Owner" were translated into Spanish in some articles but kept in English in others
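In hindsight, the cross-reference 404s were the most mechanically checkable of these failures. A minimal sketch of the link check we later built, assuming Kirby-style `(link: …)` tags; the tag format, paths, and page set here are illustrative:

```python
import re

# Kirby-style "(link: path text: label)" tags; the exact format is illustrative.
LINK = re.compile(r'\(link:\s*([^) ]+)')

def broken_links(article_text: str, existing_pages: set[str]) -> list[str]:
    """Collect internal link targets that don't match any known page —
    the 404s we kept shipping during the Spanish iteration."""
    return [target for target in LINK.findall(article_text)
            if not target.startswith("http") and target not in existing_pages]
```

Running this across a whole language folder before launch turns a class of silent 404s into a pre-deploy checklist item.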
Iteration 2: Portuguese
First automation, first scripts, first use of Claude Code. A week instead of a month.
For Portuguese, we changed the approach. Instead of copy-pasting one article at a time, we wrote scripts that automated the translation via API calls. Feed in the English source, get back the translated Kirby file.
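Reduced to its core, those first scripts looked something like this sketch. Kirby separates fields in a content file with `----` lines; the skip list and the `translate` callable are placeholders for the real field schema and the actual API call:

```python
from typing import Callable

# Fields never sent to the translator: dates, layout JSON, identifiers.
# Field names here are illustrative, not our real schema.
SKIP_FIELDS = {"Uuid", "Date", "Layout"}

def translate_kirby_file(source: str, translate: Callable[[str], str]) -> str:
    """Translate a Kirby .txt content file field by field.

    Each field is a "Name: value" block separated by "----" lines. Only
    the values of translatable fields are passed to `translate`, which
    stands in for the real model API call."""
    out = []
    for block in source.split("\n----\n"):
        name, sep, value = block.partition(":")
        if sep and name.strip() not in SKIP_FIELDS and value.strip():
            out.append(f"{name.strip()}: {translate(value.strip())}")
        else:
            out.append(block)  # untranslatable or empty: pass through as-is
    return "\n----\n".join(out)
```

Feed in the English source file, get back the translated file, leave the structural fields untouched.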
More importantly, we started using Claude Code, not just for article content, but for the infrastructure work. Claude Code could read our existing Spanish routes and generate the Portuguese equivalents. It could create section files, overview pages, and update config files.
On the ROR side, Claude Code generated locale files by referencing the existing Spanish translations. It updated mailing templates, created new views where needed, and kept the cross-system references consistent.
The result: about one week instead of one month. But the error rate was still high.
The scripts didn't understand context. They would translate "Sprint" into Portuguese. They would break Layout JSON by adding line breaks inside strings. URL slugs were sometimes wrong because the AI didn't know our routing conventions across both platforms.
We spent significant time reviewing and fixing. But critically, we were now inspecting and adapting. Every error we found became a rule for the next iteration. We started a glossary. We refined the translation prompt. We documented the URL conventions that both platforms had to follow. The recurring failure modes:
- The AI translated "Sprint" into "Corrida" and "Scrum Master" into "Mestre Scrum" — no glossary existed yet to prevent this
- Layout JSON broke silently: the AI added line breaks inside JSON strings, producing valid-looking files that crashed on render
- URL slugs were inconsistent across platforms. Kirby had one slug, ROR had another, so menu links pointed nowhere
- Accented characters (ã, ç, é) got escaped as unicode sequences in JSON, displaying as raw codes instead of letters
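The glossary that grew out of these errors can be sketched like this. The entries and structure are illustrative, not the real glossary.json, and the check is the kind of rule a verification pass can enforce:

```python
import json
import re

# Illustrative glossary; the real glossary.json grew with every language.
GLOSSARY = json.loads("""
{
  "protected": ["Sprint", "Scrum Master", "Product Owner", "Backlog"],
  "translate": {"pt": {"team": "equipe", "meeting": "reuniao"}}
}
""")

def check_protected_terms(source: str, translated: str) -> list[str]:
    """Return protected terms present in the source but missing from the
    translation — each hit means the AI translated a term it must keep."""
    missing = []
    for term in GLOSSARY["protected"]:
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        if pattern.search(source) and not pattern.search(translated):
            missing.append(term)
    return missing
```

With the glossary injected into the translation prompt and this check run afterwards, "Corrida" and "Mestre Scrum" stopped slipping through.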
Iterations 3–6: French, Italian, Dutch, Polish
Specialized agents, institutional knowledge, and two days per language.
By the time we reached French, the system had matured considerably. What made the real difference was splitting the work.
Instead of one large AI context trying to handle everything, we created specialized agents, each focused on one part of the system:
- Content translation agents that handled batches of Kirby articles, following the translation prompt and glossary
- Infrastructure agents that created routes, section files, overview pages in Kirby
- ROR locale agents that translated and created locale files for the Rails platform
- Verification agents that ran scripts to catch broken JSON, wrong URL slugs, missing fields
Each agent had a reduced context window focused on its specific task. This produced much better results than one agent trying to hold the entire platform in memory.
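The split can be sketched as a scope map: each agent only ever sees the files its task touches. The agent names and path globs here are illustrative, not our actual setup:

```python
from fnmatch import fnmatch

# Illustrative agent scopes; the real agents were Claude Code subagents
# with their own prompts. Each gets only the files relevant to its task.
AGENT_SCOPES = {
    "content":        ["content/*/articles/**"],
    "infrastructure": ["site/config/routes*.php", "content/*/section.txt"],
    "ror-locales":    ["config/locales/*.yml"],
    "verification":   ["**"],  # verifiers see everything, read-only
}

def assign(path: str) -> list[str]:
    """List the agents whose scope covers a given file path."""
    return [name for name, globs in AGENT_SCOPES.items()
            if any(fnmatch(path, g) for g in globs)]
```

The payoff is the reduced context: the content agent never loads PHP routes, and the locale agent never parses Layout JSON.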
The institutional knowledge was now codified:
- A translation prompt (TRANSLATION_PROMPT.md) defined tone, format, URL rules, and the AI notice requirement
- A glossary (glossary.json) kept agile terms consistent across all languages
- Verification scripts caught errors before they reached production: broken Layout JSON, wrong hreflang references, missing required fields
- A checklist in CLAUDE.md documented every infrastructure step required for a new language (routes, section files, overview pages, structural pages, menu/footer sync)
- Cross-system consistency checks ensured Kirby and ROR stayed in sync
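A minimal sketch of what the verification scripts check per article, built from the error cases above. The field names are an illustrative subset of the real schema, and the real scripts cover more cases:

```python
import json

REQUIRED_FIELDS = {"Title", "Text"}  # illustrative subset of the real schema

def verify_article(fields: dict[str, str]) -> list[str]:
    """Pre-deploy checks for one parsed article. Returns human-readable
    problems; an empty list means the article passes."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - fields.keys()]
    layout = fields.get("Layout", "")
    if layout:
        try:
            data = json.loads(layout)  # raw line breaks in strings fail here
        except json.JSONDecodeError as e:
            problems.append(f"broken Layout JSON: {e.msg}")
        else:
            # double-escaped accents survive decoding as literal \uXXXX text
            if "\\u00" in json.dumps(data, ensure_ascii=False):
                problems.append("double-escaped accents in Layout JSON")
    return problems
```

Every failure mode from the Spanish and Portuguese iterations became one of these checks, run over every article before launch.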
Italian took two days. Dutch took two days. Polish took two days. Each iteration was faster and more reliable. The error rate dropped with each language because the system learned from every previous mistake.
Before and After
The same task. A fundamentally different system.
Iteration 1: Spanish (Manual)
- ~1 month of work
- Copy-paste article by article
- Manual infrastructure across two codebases
- Inconsistent quality and terminology
- No verification, no glossary, no prompt
- Partial coverage (only key pages translated)
- No structured tracking
Iteration 6: Polish (Agents)
- ~2 days of work
- Parallel agent batches across both platforms
- Automated infrastructure from templates
- Consistent quality via prompt + glossary
- Automated verification scripts
- Full coverage (440+ articles, all infrastructure)
- Machine-readable tracking (ai-translations.json)
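The tracking file can be sketched like this; the schema shown is illustrative, not the actual ai-translations.json:

```python
import json

# Illustrative shape of ai-translations.json; the real schema may differ.
tracking = json.loads("""
{
  "pl": {"articles": {"done": 440, "total": 440},
         "infrastructure": {"routes": true, "sections": true, "overviews": true},
         "verified": true},
  "fr": {"articles": {"done": 120, "total": 440},
         "infrastructure": {"routes": true, "sections": false, "overviews": false},
         "verified": false}
}
""")

def launch_ready(lang: str) -> bool:
    """A language is launch-ready once every article is translated, every
    infrastructure step is done, and verification has passed."""
    t = tracking[lang]
    return (t["articles"]["done"] == t["articles"]["total"]
            and all(t["infrastructure"].values())
            and t["verified"])
```

Because the file is machine-readable, both humans and agents could query it instead of reconciling spreadsheets.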
The Agile Principles at Work
This wasn't just a translation project. It was six Sprints of organizational learning.
Iterative Improvement
Each language was a Sprint. Spanish was Sprint 1: slow, manual, full of learning. Polish was Sprint 6: fast, automated, refined. The improvement compounded: a month for Spanish, a week for Portuguese, two days for each language after that. Each iteration didn't just add a language. It improved the system for all future languages.
Inspect and Adapt
Every error in one language became a prevention rule for the next. Broken JSON? Add a JSON validator to the verification script. Translated "Sprint" into Portuguese? Add it to the glossary. Wrong URL slug convention? Document it in the checklist. The system got smarter because we built feedback loops into it.
Self-Organizing Teams
The specialized agents were, in effect, self-organizing team members. Each had a clear responsibility, the right context for its task, and defined interfaces with the others. The content agent didn't need to know about PHP routes. The infrastructure agent didn't need to parse Layout JSON. Separation of concerns. It works for software and for teams.
Working Software over Documentation
The translation prompt, the glossary, the verification scripts. None of these were designed upfront. They came from real problems we hit during real work. The prompt was rewritten five times. The glossary grew with every language. The scripts were born from errors that slipped through manual review. Living documentation, shaped by actual use, turned out to be far more useful than any spec we could have written upfront.
Responding to Change
Our architecture decisions were driven by errors, not by upfront planning. We didn't predict that Layout JSON would be the biggest source of bugs. We discovered it during Spanish and built validation for it before Portuguese. We didn't plan the agent specialization. We arrived at it after seeing that one large context produced worse results than several focused ones. We ended up with a better setup than anything we could have designed on a whiteboard.
Key Takeaways
AI doesn't replace agile thinking. It amplifies it.
1. AI amplifies your process, for better or worse
If your process is chaotic, AI will produce chaos faster. If your process is disciplined (clear prompts, defined standards, verification loops), AI will produce quality at scale. The same tool that broke our JSON in iteration 1 was producing flawless output by iteration 6. The AI didn't change. Our process did.
2. Context is everything
The glossary, the translation prompt, the checklist, the cross-system consistency rules. These are all forms of context. An AI without context is just a fast translator. An AI with the right context is a team member who understands your conventions, your edge cases, and your quality standards. Building that context is where the real work goes, and it compounds with every new language you add.
3. The human role shifts from executor to architect
In iteration 1, humans did the translating. By iteration 6, humans designed the system, defined the standards, reviewed the output, and improved the process. The work didn't disappear. It shifted to higher-level thinking. Less copying and pasting, more thinking about what makes a good translation, what makes a consistent user experience, and how to catch errors before users do.
4. Start small, ship, learn, repeat
We could have spent months designing the "perfect" translation pipeline before translating a single article. Instead, we started with copy-paste and iterated. Every language taught us something. Every error improved the system. By the time we reached the later languages, we had a pipeline that no amount of upfront planning could have produced, because it was shaped by real problems, not hypothetical ones.