In the inDrive product design team, AI is already a working tool. It’s used for field UX interviews across different countries, for automating routine tasks in Figma, and for quickly generating realistic visuals from illustrations.
Below are three real-life stories: how designers implemented AI solutions, what challenges they faced, and what results they achieved.
Research Without Intermediaries — Polina Gladkova's Experience
Previously, interviews with drivers in Egypt and Latin America were conducted with the help of local colleagues acting as interpreters. This was helpful, but since they were not professional researchers, they often wanted to assist the drivers — suggesting answers or showing where to click in the app.
To make the research more accurate, Polina decided to run the interviews herself using voice-based ChatGPT. The designer speaks Russian, the driver hears the translation into Arabic, the driver answers, and ChatGPT translates back.
In practice, it looked like this:
- preparing the interview script in ChatGPT beforehand;
- during the ride, enabling voice chat and setting a translator prompt with dialect clarification (e.g., “Egyptian Arabic”) when needed;
- after the interview, asking ChatGPT for a detailed review of the conversation, a summary table for a series (e.g., 10 interviews), recurring patterns/differences, and hypotheses;
- simultaneously recording audio → transcribing it in another AI tool → feeding the text into ChatGPT for more accurate processing.
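The last two steps (a series of transcripts in, one structured analysis request out) can be sketched as a small helper. The function name and the exact wording of the request are hypothetical, not part of the team's actual tooling:

```typescript
// buildAnalysisPrompt: assembles one ChatGPT request from a series of
// interview transcripts, asking for the outputs described above —
// a summary table, recurring patterns/differences, and hypotheses.
// Hypothetical helper; prompt wording is illustrative.
function buildAnalysisPrompt(transcripts: string[]): string {
  const numbered = transcripts
    .map((text, i) => `--- Interview ${i + 1} ---\n${text.trim()}`)
    .join("\n\n");
  return [
    `You are a UX research assistant. Below are ${transcripts.length} driver interview transcripts.`,
    "1. Build a summary table (one row per interview).",
    "2. List recurring patterns and notable differences.",
    "3. Propose hypotheses for follow-up research.",
    "",
    numbered,
  ].join("\n");
}
```

Feeding clean transcripts rather than raw audio into one combined prompt matches the workaround the team landed on: text input avoids the voice mode's translation glitches and keeps the whole series in a single context.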
Challenges:
- sometimes ChatGPT “got stuck” and repeated the Russian phrase instead of translating it;
- mixed up participants (e.g., attributed responses to the wrong driver);
- during long interview series, it lost earlier context.
What helped:
- specifying the dialect in the prompt;
- working via text (audio → transcript → ChatGPT);
- manual monitoring during interviews.
Result:
- time savings of about 3–5× compared to the traditional interpreter scheme;
- “cleaner” experiment: less influence from third parties, calmer one-on-one dialogue;
- in daily work — regular use of ChatGPT for translations and textual feedback on logic/UX.
Automating Localization and Routine in Figma — Sergey Goltsov's Experience
Sergey addresses repetitive tasks by building Figma plugins. The approach is simple: take a real pain point (from Figma forums/chats or personal practice), formulate a detailed request, and create a plugin using the combination of ChatGPT + Figma Plugin API documentation.
By his estimate, ChatGPT generates up to 80% of the code; the rest is manual review and refinement (HTML/CSS/JS, testing in the editor and in Figma).
Publicly available plugins by Sergey:
- Chat Builder — 9,500+ users, featured in Figma Weekly.
- ChartBG — ~3,600 users.
- BorderMockup — ~2,600 users.

One internal case was solving a recurring pain point: the monotonous manual work of creating translation keys and linking them to layers in Figma. To address this, Sergey built the Text to Strings plugin. It scans the entire file (including groups, Frames, and Auto Layout containers), finds all text layers, and converts them into text variables. The plugin automatically sanitizes variable names to meet API requirements; if the same text is repeated, only one variable is created and all relevant layers are linked to it.
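The core of that logic — sanitizing names and de-duplicating repeated strings — could look roughly like this. The exact cleaning rules the plugin uses aren't published, so the regex and length limit below are assumptions:

```typescript
// Sketch of the Text to Strings name-cleaning and de-duplication step.
// sanitizeName is an assumption: Figma rejects some characters in
// variable names, so we keep a conservative safe subset.
function sanitizeName(text: string): string {
  const cleaned = text
    .trim()
    .replace(/[^\p{L}\p{N} _-]/gu, "") // drop punctuation the API may reject
    .replace(/\s+/g, "_")
    .slice(0, 60);
  return cleaned || "untitled";
}

// Given the characters of every text layer, return a map from layer
// index to a shared variable key — repeated strings reuse one key.
function dedupeTextLayers(texts: string[]): Map<number, string> {
  const keyByText = new Map<string, string>();
  const result = new Map<number, string>();
  texts.forEach((text, i) => {
    let key = keyByText.get(text);
    if (!key) {
      key = sanitizeName(text);
      // avoid collisions when two different strings sanitize identically
      let suffix = 1;
      const taken = new Set(keyByText.values());
      while (taken.has(key)) key = `${sanitizeName(text)}_${suffix++}`;
      keyByText.set(text, key);
    }
    result.set(i, key);
  });
  return result;
}
```

In an actual plugin this would sit between collecting layers (e.g., `figma.root.findAll` filtered to `TEXT` nodes) and creating/binding string variables through the Figma Plugin API.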
Another internal case was automating translations in Figma. Previously, localizing layouts was tedious routine work: texts had to be copied manually, separate versions of screens created, and updates had to be applied every time something changed.
To remove this monotony, Sergey set up a process based on Figma Variables and the Sheet to Variables plugin. Texts are automatically turned into variable keys, translators work only in Google Sheets, and the designer imports the completed translations via CSV. Once the variables are linked to the layout layers, switching the language in Figma takes just a couple of clicks, and all texts update instantly.
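The import side of this cycle boils down to parsing the translations sheet into per-language values keyed the same way as the variables. A minimal sketch, assuming a simple comma-separated layout with no quoted fields (the real Sheet to Variables plugin may parse differently):

```typescript
// Parse a simple translations CSV (no quoted/escaped fields assumed):
//   key,en,es
//   order_now,Order now,Pedir ahora
// into { key: { lang: value } }, ready to write into variable modes.
function parseTranslationsCsv(csv: string): Record<string, Record<string, string>> {
  const [header, ...rows] = csv.trim().split("\n").map((line) => line.split(","));
  const langs = header.slice(1); // first column is the key
  const out: Record<string, Record<string, string>> = {};
  for (const row of rows) {
    const [key, ...values] = row;
    out[key] = Object.fromEntries(langs.map((lang, i) => [lang, values[i] ?? ""]));
  }
  return out;
}
```

Mapping each language column to a Figma variable mode is what makes the "couple of clicks" language switch possible: the layers stay bound to the same variables, and only the active mode changes.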
What to keep in mind:
- create highly detailed prompts (even “ask all clarifying questions before generating”);
- manually validate code and cross-check it against the documentation: the model can break working logic or suggest outdated API calls;
- the popularity of Chat Builder was boosted by community posts and being featured in an industry digest.
From Illustrations to Realistic Photos — Arthur Sitdikov's Experience
In inDrive’s foodtech direction, flat illustrations had long been used. Arthur wanted to test a hypothesis: realistic product images perform better because people see exactly what they are buying. Conversion tests are still ahead, but the immediate task of quickly assembling quality assets has already been solved.
How it was done:
- taking existing illustrations (bananas, bread, etc.) and arranging them directly in the layout;
- asking ChatGPT to “re-render” the composition in a realistic style;
- receiving ready-to-use assets with transparent backgrounds and applying them in product and promo.

Sample prompt:
“Without changing the composition, make them realistic, like in magazines. On a transparent background.”
What was observed:
- first outputs were often good enough; errors started appearing during long sessions, but usually two attempts per image sufficed;
- sometimes the model “forgot” about transparency and left a checkerboard pattern baked into the image, which then had to be masked out manually.
Result:
- first acceptable variants in ~15 minutes;
- a complete set of assets in two evenings, instead of lengthy approvals for photoshoots or stock purchases;
- the approach is already in use in product and promo; in parallel, 3D images were generated for interfaces, and photoshoot outputs were mixed with AI generations.
Key Takeaways From the Three Cases
- In interviews, voice-based ChatGPT enabled direct contact with respondents, sped up analysis, and reduced interpreter influence (time savings estimated at 3–5×).
- Layout preparation for localization became faster thanks to automatic variable creation and a CSV translation cycle (language switching in one click).
- Visual generation produced realistic assets in two evenings, with first variants in minutes — enough to quickly show ideas to stakeholders and move them forward.
Conclusion
These are three concrete ways designers already use AI in daily work: as a translator and “secretary” for interviews, as a co-author of plugin code, and as a tool for fast, realistic visuals.
In each case, the designers described the limitations they encountered and how they overcame them. The overall benefits are clear: faster speed, less manual routine, and cleaner data for making design decisions.