CRTypist Mimics How Humans Use Touchscreen Keyboards to Improve Future Autocorrect Systems

CRTypist aims to make it easier to develop future autocorrect systems without the need to enlist hordes of human typists.

Researchers from Finland's Aalto University and the University of Jyväskylä, working with colleagues at the University of Birmingham and Google, have developed an engine for generating human-like touchscreen typing using "computational rationality": CRTypist, which they hope will help make smartphones and tablets more accessible.

"Typing on a phone requires manual dexterity and visual perception: we press buttons, proofread text, and correct mistakes," Antti Oulasvirta, professor at Aalto University and the work's senior author, explains. "We also use our working memory. Automatic text correction functions can help some people, while for others they can make typing harder."

CRTypist aims to deliver better predictive typing and autocorrect systems through simulation. (📹: Shi et al)

The problem with improving automatic text correction systems — perhaps most infamous for unexpected and often illogical corrections, leading to the term "autocarrot" — is that you need something to correct, which previously has meant either making do with poorly simulated examples or employing a cohort of actual humans. This, the team explains, is where CRTypist comes in: a simulation that uses "computational rationality" to better mimic the behavior of real users — including using "virtual eyes and fingers" and working memory, and making similar mistakes.

"We created a simulated user with a human-like visual and motor system. Then we trained it millions of times in a keyboard simulator," Oulasvirta explains. "Eventually, it learned typing skills that can also be used to type in various situations outside the simulator. [Now] we can train computer models so that we don’t need observation of lots of people to make predictions. User interfaces are everywhere today – fundamentally, this work aims to create a more functional society and smoother everyday life."

"Its key feature is a reformulation of the supervisory control problem, with the visual attention and motor system being controlled with reference to a working memory representation tracking the text typed thus far," the research team explains. "Movement policy is assumed to asymptotically approach optimal performance in line with cognitive and design-related bounds. This flexible model works directly from pixels, without requiring hand-crafted feature engineering for keyboards."
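The paper describes the real model as a learned policy working directly from pixels, but the supervisory loop it outlines — noisy motor output, a working-memory record of the text typed so far, and visual glances that trigger corrections — can be illustrated with a toy sketch. The code below is purely a hypothetical illustration under those assumptions: the keyboard model, slip probabilities, and all function names are invented here, not taken from CRTypist.

```python
import random

# A crude spatial model of a QWERTY touchscreen keyboard.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def neighbours(ch):
    """Keys adjacent to ch in the same row (a deliberately simple motor-error model)."""
    for row in ROWS:
        i = row.find(ch)
        if i >= 0:
            return list(row[max(i - 1, 0):i] + row[i + 1:i + 2])
    return []  # characters off the letter rows (e.g. space) never slip here

def noisy_press(ch, slip_prob, rng):
    """Virtual finger: occasionally hits a neighbouring key instead of the target."""
    near = neighbours(ch)
    if near and rng.random() < slip_prob:
        return rng.choice(near)
    return ch

def simulate_typist(phrase, slip_prob=0.05, proofread_every=4, seed=0):
    """Type `phrase` with noisy fingers, a working-memory pointer,
    and periodic proofreading glances that trigger backspacing."""
    rng = random.Random(seed)
    field = []       # what is actually in the text field
    believed = 0     # working memory: how many characters the typist believes are correct
    keystrokes = 0
    while believed < len(phrase):
        field.append(noisy_press(phrase[believed], slip_prob, rng))
        believed += 1
        keystrokes += 1
        if believed % proofread_every == 0 or believed == len(phrase):
            # Visual glance: compare the field against the intended prefix
            # and backspace to the first mismatch, if any.
            intended = phrase[:believed]
            mismatch = next((i for i, (a, b) in enumerate(zip(field, intended))
                             if a != b), len(field))
            while len(field) > mismatch:
                field.pop()
                keystrokes += 1  # each backspace counts as a keystroke
            believed = mismatch
    return "".join(field), keystrokes
```

With the slip probability set to zero the simulated typist needs exactly one keystroke per character; raising it produces the extra backspaces and retypes that give such simulations their human-like error profile.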

The team is to present its work, which it admits is "limited to [simulating] skilled typists" at present, at the CHI Conference in Hawai'i this May, followed by the release of the project's data and source code; the paper, meanwhile, is available under open-access terms on the project website.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: