Google Lens + Socratic

Custom Math Keyboard

A feature to enhance users' learning journeys

Final Design
Launched Fall 2020 on Android. Lens and Socratic are both Google products that use the camera as a main source of input. The camera relies on a technology called optical character recognition (OCR) that can detect text in images. That text is then "read" by AI to provide relevant results for users.
Why Do We Need This?
Sometimes, the OCR technology would fail. This was especially detrimental because users would get stuck in their end-to-end journey or receive poor query results. The goal, then, was to create a way for users to address OCR errors and edit their queries.
Lots of Brainstorms
Studied the landscape of relevant, popular math apps and their keyboard usage. Identified interactions from past data and drew on familiar elements as starting points for our custom keyboard logic and UI.
Lots of Exploring
Working with interaction designers, we generated low-fidelity mocks to examine the holistic flow, basic UI needs, and UX logic.
More Mocks
More mocks at medium fidelity, so we could get a better sense of content, assets, and logic.
First Demo
Collaborated with engineers to create the first live prototype for team testing. TL;DR: many bugs and challenges left to solve.
Throughout the process, I gathered and received a lot of critical (and often blocking) feedback. I used it as guiding points to iterate on and solve user interactions and logic.
Second Demo
Getting a lot closer. Needed to finesse the cursor logic and keyboard UI to feel more holistically Google.
The Final Prototype
Re-aligned to be more Lens-centric and designed for dark mode as well.
How It Works
Starting with an "error" math query, users can now pull up the keyboard to fix the OCR error and get the relevant results they need.
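The edit loop above can be sketched in a few lines. Everything here, including the misread character, `apply_edit`, and `solve_query`, is a hypothetical stand-in for illustration, not the shipped Lens/Socratic code:

```python
# Minimal sketch of the correction flow: OCR misreads a character,
# the query fails, the user edits it with the keyboard, and the
# corrected query succeeds. All names and logic are illustrative.

def apply_edit(ocr_text: str, position: int, replacement: str) -> str:
    """Replace one character of the OCR'd query at the cursor position."""
    return ocr_text[:position] + replacement + ocr_text[position + 1:]

def solve_query(query: str) -> str:
    """Hypothetical results lookup: fails while the misread 'O' remains."""
    return "no relevant results" if "O" in query else "results found"

# OCR misreads the digit 0 as the letter O, so the query fails...
bad_query = "2x + 3 = O"
print(solve_query(bad_query))    # no relevant results

# ...until the user pulls up the keyboard and edits that character.
fixed_query = apply_edit(bad_query, bad_query.index("O"), "0")
print(solve_query(fixed_query))  # results found
```

The key design point this models is that the user edits the query in place at a cursor position, rather than retaking the photo.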