Handed off: 2024

Designing AI-assisted localization workflows for global media teams

Despite strong AI translation capabilities, the platform’s workflow made reviewing and editing translated content slow and fragmented. I redesigned key interaction patterns to help teams validate and refine AI output more efficiently.

Role

Product Designer

Team

Product Designer (Me)

2 Designers

UX Researcher

Product Manager

2 Developers

Timeline

Aug–Dec 2024

Tools

Figma (FigJam + Design)

Balsamiq

Storybook

UserTesting

The Problem

Improving how teams work with AI-generated translations

Vosyn builds AI-powered tools that help global teams translate and localize content across multiple languages. During my internship, I worked on improving workflows across several internal products used for AI-assisted translation and review.

The system could generate translations quickly, but reviewing and refining those outputs required navigating several disconnected interfaces. My work focused on redesigning those workflows so teams could validate and edit AI-generated content more efficiently.

Because of NDA restrictions, I can share only redacted product interfaces, not detailed design artifacts, from this project.

Outcomes

Consistency improved when initiative wasn’t required.

64 → 82

increase in System Usability Score after usability testing

28%

reduction in multi-step task time during localization workflows

5

AI-assisted workflow features prototyped and tested to improve translation review

These improvements helped teams validate machine-generated translations faster and reduced friction across the localization process.

The Journey

Understanding how teams review AI output

The platform already had strong AI translation capabilities, but usability testing revealed that most friction happened after generation. Users struggled to compare original and translated content and often had to move between multiple panels just to review a single translation.

I conducted usability sessions and workflow analysis to understand where users lost time and context. A consistent pattern emerged: translation generation was fast, but validation was slow.

Based on these findings, I explored interaction patterns that kept generation, review, and editing within a single working context. Prototypes focused on reducing navigation overhead and helping reviewers identify where AI output required attention.

Reflection

Designing for verification, not just generation

This project reinforced an important lesson about AI products: generating content is only part of the workflow. Users also need efficient ways to evaluate, correct, and trust machine-generated output.

Small changes to workflow structure had a larger impact than adding new features. By keeping users in a single review context and reducing the cost of verification, the platform became significantly easier to use for teams working with multilingual content.