In pursuit of intuitive interfaces

Taken from a glass negative in the collection of The History Trust of South Australia (circa 1925)

Whenever I hear a user interface is ‘intuitive’—or needs to be—a synapse fires. An intuitive digital interface would follow users’ behaviour, regardless of their prior experiences with other interfaces. We’re not there yet.

When pointy, clicky, window-based desktops emerged, users were trained to find functions at the top, contexts at the bottom, and the ‘work’ going on in the middle. Machines running either Windows or macOS still ship like this. If an Excel user wants to find functionality, they head upwards to a ribbon icon or dropdown menu. If they want to switch to Word, they head downwards to the taskbar or dock.

The internet dented this. On the web, thanks to the hyperlink, functionality is all muddled in with content. Context-switching is more like a fork in the road. Anyone who’s fallen down a wikihole will know.

In my last school year, I helped teach the sprogs basic computer skills: like community service, but for crimes not yet committed. In a couple of hours, I could get 35 kids reasonably confident with spreadsheets: diving into cells to construct formulas, and switching between applications to pull in chunks of data. The keen ones—the ones bitten by the bug—then went on to discover the command-line and, I expect, are now lounging on a domain-name empire or something.

Getting to grips with the pre-Google early web, though, took longer. As with a spreadsheet, a wide bar across the top accepted input, but the kids found it harder to make their browser ‘do stuff’. The browser didn’t integrate well with other applications: you could copy out but not paste in. Clicking a link within an email could take you to a specific webpage, but clicking a link on a webpage couldn’t take you to a specific email, only a blank one.

While that early web is no longer recognisable, I still catch users staring blankly at interfaces, just as those kids did in the mid-Nineties. Better browsers, better online services and new internet devices (phones, tablets, watches and so on) have tackled many of these paradigm mismatches head-on. Many millions of hours of thought and experimentation have been given over to making functionality, context and content coexist on palm-sized touchscreens.

Creeping homogeneity, or more accurately convergent evolution, in interface design has tried to put things where users might expect them based on their previous experiences. But the challenges with contemporary interfaces are more complicated and nuanced than simply where navigation should be placed.

An example. On my desktop, all the system settings are in one place, and then all the settings specific to an application are in another. Except these days there are system-wide restrictions on things like accessing the screen or the disk, so every application I install has to beg me to change system preferences before it can function, thereby dividing application settings into two contexts.

It’s worse on my mobile. All preferences for the system, and for the apps that shipped with the phone, are in one huge settings menu. But the preferences for individual apps are who-knows-where: sometimes in the app context, sometimes in the system context. Users drown in this kind of thing. Who can see my stuff (as a setting) and whose stuff I can see (as a function) are more closely related in users’ minds than they are in the interfaces they use.

This is complicated because there are so many use-cases. How one app notifies you is one thing; how all apps notify you is another. But users tend not to think like that; they lean towards task-focussed thinking. So current interfaces trend towards convergence, leaving them feeling vaguely recognisable but indistinct, which in turn causes further confusion and poorer recall amongst users.

However, rather than all designers continuing to iterate towards the mean, there are two areas of exploration that could significantly improve users’ comprehension of interfaces, which in turn would make interfaces more useful. These areas could perhaps make interfaces truly intuitive, by following human behaviours rather than imposing conventions.

First, users do things fastest in familiar ways. So, passive preferences could follow users around, between apps, contexts, tasks and even functionality. Instead of users having to learn how each device and app can be made to, say, grab a photo from one context and use it in another, the standard way they do this could transcend contexts, just as copy-and-paste does. All the other methods would remain too, but the one that’s top-of-mind for the user would always work as they’d expect, in any context.
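
To make that concrete, here’s a rough TypeScript sketch of what a ‘passive preference’ broker might look like: apps report how the user last did something, and any other app can ask which method to surface first. Everything here (the PreferenceBroker, the action and method names) is invented for illustration; none of it is a real platform API.

```typescript
// Hypothetical sketch: a cross-context "passive preference" broker.
// All names here are invented for illustration, not real OS APIs.

type ActionMethod = "drag-and-drop" | "file-picker" | "camera" | "paste";

interface ActionRecord {
  action: string;        // e.g. "insert-photo"
  method: ActionMethod;  // how the user actually did it
  at: number;            // timestamp, so recency can win
}

class PreferenceBroker {
  private history: ActionRecord[] = [];

  // Every app reports how the user just performed an action.
  record(action: string, method: ActionMethod): void {
    this.history.push({ action, method, at: Date.now() });
  }

  // Any app, in any context, asks which method to surface first.
  preferredMethod(action: string): ActionMethod | undefined {
    const matches = this.history.filter(r => r.action === action);
    // Most recent habit wins in this toy version.
    return matches[matches.length - 1]?.method;
  }
}

// Usage: a photo app records the habit, a chat app honours it.
const broker = new PreferenceBroker();
broker.record("insert-photo", "drag-and-drop");
console.log(broker.preferredMethod("insert-photo")); // "drag-and-drop"
```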

Imagine Microsoft Word, but it comes as a plain text editor. No bold/italic/etc. The only commands are open, save, copy, and paste.

You get used to it. Then one day you decide you’d like to style some text… or, better, you receive a doc by email that uses big text, small text, bold text, underlined text, the lot.

What the hey? you say.

There’s a notification at the top of the document. It says: Get the Styles palette to edit styled text here, and create your own doc with styles.

You tap on the notification and it takes you to the Pick-Your-Own Feature Flag Store (name TBC). You pick the “Styles palette” feature and toggle it ON.

So the user builds up the capabilities of the app as they go.
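
Here’s a rough TypeScript sketch of how that could hang together: the app ships with a bare-bones command set, and features stay dormant until the user flips them on in the (entirely hypothetical) feature store. The FeatureStore class and feature names are made up for illustration, not drawn from Word or any real product.

```typescript
// Hypothetical sketch of the "Pick-Your-Own Feature Flag Store" idea:
// capabilities stay dormant until the user switches them on.

type FeatureId = "styles-palette" | "track-changes" | "tables";

class FeatureStore {
  private enabled = new Set<FeatureId>();

  toggle(feature: FeatureId, on: boolean): void {
    if (on) {
      this.enabled.add(feature);
    } else {
      this.enabled.delete(feature);
    }
  }

  isEnabled(feature: FeatureId): boolean {
    return this.enabled.has(feature);
  }
}

// The editor consults the user's own flags before deciding what to show.
function openDocument(store: FeatureStore, hasStyledText: boolean): string[] {
  const ui = ["open", "save", "copy", "paste"]; // the bare-bones baseline
  if (store.isEnabled("styles-palette")) {
    ui.push("styles-palette");
  } else if (hasStyledText) {
    // The nudge from the example: styled content arrives, the feature is offered.
    ui.push("notification: Get the Styles palette to edit styled text here");
  }
  return ui;
}

const store = new FeatureStore();
console.log(openDocument(store, true));  // baseline plus the notification
store.toggle("styles-palette", true);    // the user opts in
console.log(openDocument(store, true));  // baseline plus the Styles palette
```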

Second, this is a good potential AI problem. Designers lean upon what has gone before—what has demonstrably worked, what’s aesthetically appealing and so forth—whereas an AI could start on interface problems unfettered by baggage and metaphors. It would be interesting to think of AI in the context of intuitive design, not as a producer of strange, mangled cat pictures, but as a collegial third eye: unbound by design dogma and able to show us new possibilities from outside our own influences.

What we need are intelligences that help us do useful things in new and better ways, ways which we could not have imagined alone. AIs which are colleagues and collaborators, rather than slaves and masters.
