• Maximum viable potential

    Taken from The Invention of Printing by Theodore Low De Vinne (1876)

    Internet-era innovations often arrive to tremendous fanfare. When you consider the number of new things being made, the number that go on to be hugely impactful is extremely low. Nevertheless, the trumpeting—proportional to how much funding is at stake—goes on. This one, this one here, is the Next Big Thing. This has the potential to change everyone’s lives, forever.

    The overwhelming majority don’t; some do. Those that have rarely did so in the form they took when they were touted as gamechangers. That’s the nature of it all.

    Early-stage technologies suffer deeply from developer myopia. Technical teams start with the core functionality and over time fan outwards towards the exceptions and edge-cases. But the drawback of a ‘good enough is good enough’ approach is that it seldom is. The initial technologies of Uber and Airbnb did not, for example, make people safer, yet steps to ensure safety are now core to their brands.

    Consider the exclusionary nature of smartphones. The extent to which they are resilient to your lifestyle has depended on how much you have in common with Californian men: general wealth, education level, larger hands, average climate and so on. Manufacturers removing headphone jacks was a clear signal their smartphones were not ‘for everyone’: only for those able and willing to buy and use much more expensive, wireless alternatives, in spite of their shortcomings. Again, myopia.

    I’m reminded of a famous post by developer Patrick McKenzie: Falsehoods Programmers Believe About Names. Where it is necessary to record someone’s name, developers commonly use three fields: a title, a given name and a family name. McKenzie listed extensive exceptions to this formula: it’s enough to make you realise the scale of ‘edge cases’ is vast. He also inspired others to write lists of falsehoods about phone numbers, time and all sorts of other things. Along similar lines is a recent, enlightening post entitled Horrible edge cases to consider when dealing with music.
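
    By way of a sketch only—the field names below are mine, not McKenzie’s—the gap between the three-field formula and a more forgiving alternative looks something like this:

    ```typescript
    // The formula many sign-up forms assume. Hypothetical field names, for illustration.
    interface NaiveName {
      title: string;       // "Mr", "Dr" and so on
      givenName: string;
      familyName: string;
    }

    // But real names include mononyms, names with no fixed order, names that
    // exceed any field limit, and names that change over time. One commonly
    // suggested alternative is simply to store what the person typed.
    interface FlexibleName {
      fullName: string;    // exactly as the person wrote it
      shortName?: string;  // optionally, what they would like to be called
    }
    ```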

    What all these edge-cases expose is the kind of developer myopia that plagues new technologies. These simplified assumptions aren’t representative of people’s behaviours and how the world works, and therefore alienate users.

    So, I wonder whether the falsehoods are a good predictor of innovation success. In scientific fields such as medicine, rigorous studies to identify edge-cases are not only commonplace but critical. Therefore, in the digital domain, does the speed at which developers embrace their edge-cases offer an indicator of the extent to which an innovation has potential?

  • What broadcast does that streaming hasn’t replaced

    The BBC’s Mobile Television Unit, 1938

    Broadcast media’s transition into the digital era has devalued it. Broadcast is truly excellent when two criteria are met: timeliness and inclusion. It’s the best way to watch sport, for example, as well as events such as the Super Bowl, Eurovision and coverage of the Queen’s jubilee. It was the latter that reminded me of broadcast media’s tremendous strength.

    On-demand services try to stage ‘events’ of their own, with a combination of season launches, one-offs and rampant promotion. But overall, people watching a show within the same couple of weeks isn’t as inclusive as everyone sitting down together. There are water-cooler moments, and then there’s the more powerful feeling that your household is somehow united with millions of others, right this moment.

    Digital-era behaviours around media consumption have changed too, obviously. An enormous breadth of choice sets a hard limit on the possible participation in any particular one. But what has been cut back further is that sense of inclusion that broadcast commanded in its heyday. It’s analogous to a festival: a massively shared experience.

    In the Eighties, BBC One’s Saturday schedule was built around this model, even if the terminology was different. Live kids’ TV in the morning, sports coverage in the afternoon, live shiny-floor entertainment in the evening, movie at night. On-demand services can reach the same levels of inclusion, but they have to spend tens or perhaps hundreds of millions of dollars to do so.

    It might be a frequency bias, but I feel like I’m noticing repeated references to a golden age of television—but with some saying it’s ending and others saying it’s just beginning. I’m minded to think of Disney’s purchase of, and then investment in, the Star Wars franchise: including making the transition out of the cinema and into the living room. As I write, Obi-Wan Kenobi is mid-season: I don’t know the budget, but you can tell it’s considerable. It likely dwarfs the movie budgets. Massive budget allows for the best writing, stagecraft, acting, directing, editing and marketing.

    These shows are objectively excellent. Not just this franchise: the big productions from all on-demand services are, by any measure, astonishing. Had it been possible to make them 40 years ago, the viewing figures would be stratospheric and viewers’ enthusiasm would be considerable and long-lasting. Today, though, budgets are high, audiences are comparatively modest, and enthusiasm is only good to middling. All this in spite of these big shows being among the greatest ever made.

    What this demonstrates are the various facets of the behavioural change. Viewers’ dissipation across media channels and changes to consumption habits are well-documented. But in addition, perhaps as a result, the sense of timeliness and inclusion has eroded. The digital-era behaviour is based upon feeling like you’re consuming something, rather than being involved in something.

  • Better ways to find, with faceted search

    Taken from Ornament with Two Women Blindfolded by Sebald Beham (1527)

    I like faceted search. I think eBay is my favourite search engine of all. But search facets have dropped out of fashion in interface design. If we got back into search facets, then interfaces of all kinds would be better.

    I often remind myself that my usage patterns are not representative. I might even have that tattooed on the insides of my eyelids, just to be sure. I also know eBay isn’t where the cool kids hang out anymore but, as the bathwater drains away, we should remain mindful of the baby. In this case the bathwater is market share, and the baby is quick but also precise online search mechanisms. Or perhaps not even search, but find.

    It’s funny that the term ‘search-engine’ stuck, given all the ways this functionality could have been described. And what you call things over-influences what they do—ask Carphone Warehouse. Okay sure, searching might describe what users do (or did back then), but it doesn’t describe what users want. It’s foolhardy to make sweeping generalisations about users but I think I’m on reasonably safe ground when I assert that users don’t want to be looking; they want to have found. The result of the search is where the dopamine is, not the search itself. There’s no thrill-of-the-chase. Anyway.

    Faceted search is a good, solid, finding-stuff paradigm. Enter something that very broadly describes the thing you’re looking for, and then use facets to turn a huge number of hits into a small selection of precise, usable results. You could type in ‘car’ or pick the equivalent category, then choose age, number of seats, location, price range and so on, and fairly quickly you’ll have found a shortlist of cars fitting your exacting preferences. Faceted search is a really good way to buy shoes, since a user is interested in shoes only of a certain style and size, give or take.
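
    As a rough sketch of the mechanism—the Car type and facet names here are invented for illustration—each facet the user picks simply narrows a broad result set:

    ```typescript
    // A minimal faceted-search sketch: start broad, then narrow facet by facet.
    interface Car {
      seats: number;
      ageYears: number;
      price: number;
      location: string;
    }

    type Facets = Partial<{
      minSeats: number;
      maxAgeYears: number;
      maxPrice: number;
      location: string;
    }>;

    function applyFacets(results: Car[], facets: Facets): Car[] {
      return results.filter(car =>
        (facets.minSeats === undefined || car.seats >= facets.minSeats) &&
        (facets.maxAgeYears === undefined || car.ageYears <= facets.maxAgeYears) &&
        (facets.maxPrice === undefined || car.price <= facets.maxPrice) &&
        (facets.location === undefined || car.location === facets.location)
      );
    }

    // Broad category first, then facets turn thousands of hits into a shortlist:
    // applyFacets(allCars, { minSeats: 5, maxPrice: 8000, location: "Leeds" });
    ```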

    Free-text searching, even using fairly hardcore search operators, doesn’t compare. I’m a hardened user of search operators, wrapping critical terms in quote-marks and so on. But this requires me to remember all this syntax, which I can’t, and popular search engines seem to adhere to it less strictly with every passing year. Google Shopping might be good for browsing, but is nearly unusable for anything precise, seeing as the only facets are condition, price, colour and seller. Amazon’s search, while more finely faceted, seems to be deliberately imprecise to encourage users to browse. I’d guess they’re measuring dwell-times over particular pictures, to gauge what to recommend later. But again, browsing isn’t finding.

    Google Maps is infuriating for its lack of useful facets. It can plot intricately precise routes from A to B, but can only facet these routes to exclude tolls, motorways or ferries. I can’t avoid tight turns or blind corners, which would be useful in something bigger than a car, but maybe the dataset doesn’t exist. That said, what if I wanted to include something in my route? The cheapest fuel-stop, maybe, a nice bite to eat, or a scenic spot for a leg-stretch. The satnav I bought nearly two decades ago could do this. If I were minded to, I could research it all myself (by searching separately) and set intermediate destinations along my route, but the voice in my head says: you’re the search-engine, you figure it out.

    There used to be lots of ways to find online stuff: not just search but directories, curated lists, webrings and whatnot. The web outgrew many of these approaches but, even so, there also used to be various search paradigms—like some specifically for queries in the form of questions. Now there seems to be only one style of search: jabbing in keywords and hoping for the best. It has contorted online content into a keyword-dominant vernacular: keyword-jabbing, and optimising for it, has made the contemporary web even more categorised than the directories of old. Funny how that came back around. It’s influencing how we behave.

    A poignant, if a little NSFW, example:

    Data on what we search for, pay for and click on is being used to predict our desires and funnel us bespoke(ish) porn.

    At first blush, it might seem like this kind of micro-targeting would just turbo-boost the internet’s existing trajectory, making it even easier for people to find and embrace a diversity of bodies and fetishes. But there’s a fundamental shift here from a world in which we explore a passive sea of content to a world in which porn actively explores and prescribes itself to us. Because this shift stems from deep financial upheaval in the adult industry, the content pushed upon us will likely increasingly reflect what is most profitable, not what is most widely desirable. It could well become narrowing, or at least channelling, rather than broadening.

    This approach is also how ‘natural’-language interfaces—the ones you bark at from across a room—have evolved. Alexa is precise only if you are able to ask for something that the machine can recognise as being near-unique, such as a particular song. It can’t have an ongoing conversation with you about taking a large set of search results and turning it into a handful of sensible choices.

    More widespread faceted search would be an opportunity to take ubiquitous keyword-based searching and explode it into new, useful, powerful interfaces: both useable and precise. eBay has stuck to this model and it’s really good. LinkedIn uses it a lot for finding contacts, jobs and so forth, and it works well: finding something as specific as which of your contacts know someone at Microsoft would be arduous without it. Holiday sites—hotels, flights and the like—also give users plenty of facets to turn all the world’s options into a manageable shortlist. Users are used to doing it; it’s just not widely available.

    That’s how I’d want to use a natural-language search but Siri, or whichever, can’t handle it. I can’t even remember what I have to shout at my phone to use its natural-language doodah: that’s how little use I have for such a blunt implement. It’d be more interesting for these interfaces—all interfaces—to take the 1990s multi-dropdown facet model like eBay and think about how to bring this level of specificity to contemporary users’ lives. Simply put, facets reflect how people think: must-haves versus negotiables. Certainties versus unknowns. A find-engine would do the same.
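
    To make that concrete—purely as a sketch, with invented types and scoring—a ‘find-engine’ might treat must-haves as hard filters and negotiables as soft scores that only affect the ordering:

    ```typescript
    // Must-haves eliminate; negotiables merely reorder what survives.
    interface Listing {
      price: number;
      distanceKm: number;
      rating: number; // 0 to 5
    }

    interface FindQuery {
      mustHave: (item: Listing) => boolean;          // certainties
      negotiables: Array<(item: Listing) => number>; // preferences, higher is better
    }

    function find(items: Listing[], query: FindQuery): Listing[] {
      const score = (item: Listing) =>
        query.negotiables.reduce((total, preference) => total + preference(item), 0);
      return items
        .filter(query.mustHave)               // non-negotiable constraints
        .sort((a, b) => score(b) - score(a)); // best-matching negotiables first
    }

    // e.g. must be under £100 and within 10km; prefer higher ratings:
    // find(listings, {
    //   mustHave: l => l.price <= 100 && l.distanceKm <= 10,
    //   negotiables: [l => l.rating],
    // });
    ```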

  • In pursuit of intuitive interfaces

    Taken from a glass negative in the collection of The History Trust of South Australia (circa 1925)

    Whenever I hear a user interface is ‘intuitive’—or needs to be—a synapse fires. An intuitive digital interface would follow users’ behaviour, regardless of their prior experiences with other interfaces. We’re not there yet.

    When pointy, clicky, window-based desktops emerged, users were trained to find functions at the top, contexts at the bottom, and the ‘work’ going on in the middle. Machines running either Windows or macOS still ship like this. If an Excel user wants to find functionality, they head upwards to a ribbon icon or dropdown menu. If they want to switch to Word, they head downwards to the start-bar or dock.

    The internet dented this. On the web, thanks to the hyperlink, functionality is all muddled in with content. Context-switching is more like a fork in the road. Anyone who’s fallen down a wikihole will know.

    In my last schoolyear, I helped teach the sprogs basic computer skills: like community service, but for crimes not yet committed. In a couple of hours, I could get 35 kids to be reasonably confident with spreadsheets: diving into cells to construct formulas, and switching around between applications to pull in chunks of data. The keen ones—the ones bitten by the bug—then went on to discover the command-line and, I expect, are now lounging on a domain-name empire or something.

    Getting to grips with the pre-Google early web, though, took longer. Just as in a spreadsheet, a wide bar across the top would accept input, but the kids found it harder to make their browser ‘do stuff’. The browser didn’t integrate well with other applications: you could copy out but not paste in. Clicking a link within an email could take you to a specific webpage, but clicking a link on a webpage couldn’t take you to a specific email, only a blank one.

    While that early web is no longer recognisable, I still catch users staring blankly at interfaces, just as those kids did in the mid-Nineties. Better browsers, better online services and new internet devices—phones, tablets, watches and so on—tackled many of these paradigm mismatches head-on. Many millions of hours of thought and experimentation have been given over to making functionality, context and content coexist on palm-sized touchscreens.

    Creeping homogeneity, or more accurately convergent evolution, in interface design has tried to put things where users might expect them based on their previous experiences. But the challenges with contemporary interfaces are more complicated and nuanced than simply where navigation should be placed.

    An example. On my desktop, all the system settings are in one place, and then all the settings specific to an application are in another. Except these days there are system-wide restrictions on things like accessing the screen or the disk, so every application I install has to beg me to change system preferences before it can function, thereby dividing application settings into two contexts.

    It’s worse on my mobile. All preferences for the system, and the apps that shipped with the phone, are in one huge settings menu. But the preferences for individual apps are who-knows-where: sometimes in the app context, sometimes the system context. Users drown in this kind of thing. Who can see my stuff (as a setting) and whose stuff I can see (as a function) are more closely related in users’ minds than is often to be found in the interfaces they use.

    This is complicated because there are so many use-cases. How an app notifies you is one thing, how all apps notify you is another. But users tend not to think like that; they often lean towards task-focussed thinking. So current interfaces trend towards convergence, leaving them feeling vaguely recognisable but indistinct, which in turn causes further confusion and poorer recall amongst users.

    However, rather than all designers continuing to iterate towards the mean, there are two areas of exploration that could significantly improve users’ comprehension of interfaces, which in turn would make interfaces more useful. These areas could perhaps make interfaces truly intuitive, by following human behaviours rather than imposing conventions.

    First, users do things fastest in familiar ways. So, passive preferences could follow users around, between apps, contexts, tasks and even functionality. Instead of users having to learn how each device and app can be made to, say, grab a photo from one context and use it in another, the standard way they do this could transcend contexts, just as copy-and-paste does. All the other methods would remain too, but the one that’s top-of-mind for the user would always work as they’d expect, in any context.

    Imagine Microsoft Word but it comes as a plain text editor. No bold/italic/etc. The only commands are open, save, copy, and paste.

    You get used to it. Then one day you decide you’d like to style some text… or, better, you receive a doc by email that uses big text, small text, bold text, underlined text, the lot.

    What the hey? you say.

    There’s a notification at the top of the document. It says: Get the Styles palette to edit styled text here, and create your own doc with styles.

    You tap on the notification and it takes you to the Pick-Your-Own Feature Flag Store (name TBC). You pick the “Styles palette” feature and toggle it ON.

    So the user builds up the capabilities of the app as they go.
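
    A toy sketch of that pick-your-own-features idea—the feature names and notification copy are illustrative, not a real API—might look like this:

    ```typescript
    // The app ships minimal; capabilities are switched on as the user meets them.
    type Feature = "stylesPalette" | "trackChanges" | "comments";

    class FeatureFlags {
      private enabled = new Set<Feature>();

      isOn(feature: Feature): boolean {
        return this.enabled.has(feature);
      }

      // Called when the user visits the feature store and toggles a feature ON.
      enable(feature: Feature): void {
        this.enabled.add(feature);
      }
    }

    const flags = new FeatureFlags();

    // A styled document arrives; the editor checks before rendering styles:
    if (flags.isOn("stylesPalette")) {
      // render the Styles palette alongside the document
    } else {
      // show the notification: "Get the Styles palette to edit styled text here"
    }
    ```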

    Second, this is a good potential AI problem. Designers lean upon what has gone before—what has demonstrably worked, what’s aesthetically appealing and so forth—whereas an AI could start on interface problems unfettered by baggage and metaphors. It would be interesting to think of AI in the context of intuitive design, not as a producer of strange, mangled cat pictures, but as a collegial third eye: unbound by design dogma and able to show us new possibilities from outside our own influences.

    What we need are intelligences that help us do useful things in new and better ways, ways which we could not have imagined alone. AIs which are colleagues and collaborators, rather than slaves and masters.

  • Taken for a ride: an Uber-full of nonsense

    Taken from A Carriage in London by Constantin Guys (1848-56)

    I ordered an Uber to the railway station. The driver reached me within two minutes. This digital behaviour has become highly evident, and is truly habit-forming. Once you’ve hailed a ride through a digital interface, it’s hard to imagine ever going back to calling a dispatcher and waiting, and waiting.

    I acknowledged that I was in a mask, as I’d been negative for only a few days. That’s alright, he said, although he thought the tests were showing everything as Covid now. Oh joy, I thought.

    My Uber rating is surprisingly low, as a result of a single booking more than five years ago: returning from the vet with my cat, howling for freedom in her carrier. I can scarcely afford a disagreement with a taxi driver.

    He continued. He thought the Covid thing was overblown; that the media and government had made a great big deal out of nothing. That politicians and scientists were out to control you. I stuck to neutral noises: not in agreement but not contesting either. The cowardice of a man with a low score.

    He’d had Covid, he said, and it was basically just ‘flu. Oh so he’d had his jabs, then, I replied hopefully. No, he didn’t believe in them. My next noise must have sounded curious. He would’ve got them but he knew so many people that got sick from the vaccines, with blood clots and all. Like who, I regretted asking. Of course, he couldn’t name anyone. I lurched for another topic of conversation.

    Just as it’s hard to return to calling minicab offices, it’s hard to come back to professionally-researched information through mainstream media. Social feels like socialising: a conversation in the pub, surrounded by friends: low-commitment and self-regulating. You only see the half-truths and manipulation on your timeline if you choose to look, and why would a comfortable person choose discomfort?

    My digital behaviour brought me a taxi ride. His brought him a customer, and a warm bath of misinformation. And by leaving this unchallenged, my Uber score improved by 0.01.

  • What went wrong with social media?

    Taken from Une Discussion littéraire a la deuxième Galerie by Honoré Daumier (1864)

    This present generation of social media platforms—the big ones; the ones whose logos adorn the side of tradespeople’s vans—all started out with a noble endeavour or two. They were built so that people could stay in touch: straightforward, lightweight and low-commitment.

    That’s how it was for a while. Friends discussed and shared. Celebrities of all ranks became accessible, and readily engaged with fans. Strangers came together to share laughter or collective displeasure. Broadcast media and brave brands used it as a means of gathering rapid participation from their newfound communities, for better or worse.

    While this all still goes on, this image of social media is no longer first to spring to mind. Somehow, the well was poisoned. Within their first decade, the experience of social media u-turned and predominantly made people unhappy.

    Before mainstream social media, the functionality had already kind-of existed, at least in primitive and more disjointed form. But it was restricted to those minded and comfortable enough to hunt it out and assemble it for themselves. By contrast, the new platforms succeeded on two fronts: simplifying online participation, and creating a groundswell of interest amongst regular people. It was great, at first. Then it wasn’t. Now, it’s net-negative.

    For a while I thought this was caused solely by commercial pressure. Fuelled by venture-capital, the social media firms needed to drench communities in advertising to be viable, and little of it was of high-enough quality or placement to be altogether welcome. This in turn disrupted the community spirit, which in an online setting can already be fragile.

    Then I thought it was a more general shift in Western politics, away from liberalism and towards populist extremes. In the main, the internet had held a liberal, almost egalitarian ideology, in the sense that folks could do as they pleased, within a consensus for some high-level constraints. But social media seemed to be lurching away from this. Political stances became more visible. Those of opposing views would seek each other out. Discussions around observations became arguments around beliefs.

    Later I pondered whether it was the explosion in the online and social media population. But a logical prediction might have been that more users wouldn’t worsen the platform; it might have even improved it. Were the late majority significantly illiberal, compared with early adopters? Hard to prove. Perhaps the increase in anonymous users? Again, tricky: there had been anonymous or pseudonymous net citizens before, and all was just fine. Besides, these population increases don’t tally, in terms of timing, with the descent of the platforms as a whole.

    But now something has happened that has me connecting dots. Twitter has announced yet another policy: a toughening-up on spam tweets and copypasta: those identical posts across multiple accounts that seem particularly popular amongst those with disruptive or political intent. Twitter has all sorts of poorly-enforced rules and policies, so you have to wonder what difference one more would make. But yet another attempt to tidy up the platform set me thinking.

    None of the major social media platforms have ever been lawless. They all set out with terms of use, similar to those a hosting company might impose upon customers to keep illegal, immoral or fattening content off their servers. There were even clauses to promote the good health of the community as a whole. And these terms received regular review, so the platforms can’t be described as ever having been egalitarian to a fault: they’ve always been more self-regulated than what went before.

    However, abuse of social media weaves between platforms’ rules, and always has. It’s the platforms’ successes—their straightforward, centralised functionality—that have left them wide open. They’re not networks; they’re destinations. So their vulnerability is twofold: they’re abused because they’re so abusable, and because they smothered any alternative.

    Before social media, if you had wanted to drive a particular message towards loads of online eyeballs—the number necessary, say, to distort an election—it would have taken a mammoth effort. Personal publishing was too distributed. Everyone owned and maintained their own patch. It would have taken too much persuading.

    The pre-social web was imperfect, but what social media did was demolish it: it took the eyeballs off what had gone before and concentrated most online attention to a small number of places that were vulnerable to subversion.

    The concentration of attention is a characteristic of old-media but, in their case, any slant or bias was generally well-displayed and understood. Publications and broadcasters happily nailed their colours to whatever mast, and the public were broadly free to choose where to align. The difference is these publishers maintained standards, some self-imposed and some regulated. They were not in the habit of handing their front-pages over to whatever fruitcake fancied reaching the readership for whatever end.

    And as social media drained visitors from other personal publishing, it made decreasing sense to persist. Many didn’t. So, the far-less abusable but still egalitarian world of old blogs languished. The execution of Google Reader, in the era when Google still fancied having a social network of its own, caused further damage.

    Twitter trying to toughen up on subtle abuse of the platform has me wondering whether the colossal task of moderating social media is at odds with the business of operating a social media platform, and whether this is why they barely bother. Not directly because of the risk of a dent in advertising revenue, but because, for these centralised destinations, small in number but massive in scale, viability and abuse are intrinsically linked.

  • Why we need to stop saying ‘digital’, and why we don’t

    The Usborne Book of the Future (1979) prophesied the home of 1989 would be media-rich and technology-enabled, with its inhabitants enjoying both new things and new ways to do things.

    A few years ago, I met the CTO of a manufacturing firm. His company had large production lines in Manchester where they made real things out of raw materials. They were appointing the company where I worked and, as I had the word ‘digital’ in my title, the CTO visited to suss me out. We greeted each other and exchanged business cards.

    “Which one?”, he asked.

    “Which one what?”, I smiled. I couldn’t think of anyone there with my name.

    “Which digital?”, he replied.

    This has floated in the shallows of my memory ever since. A CTO in manufacturing will meet a fair few ‘digital’ folks. There’ll be a digital someone-or-other thinking about the production lines: monitoring; twinning; maintaining and so forth. There’ll be someone else in supply-chain: connecting; streamlining; reporting. There would be an equivalent in HR, finance, logistics, design and so on. I just happened to be the one thinking about the now-digitalised behaviours of customers and product users, rather than the operation of the company itself. How they choose, how they buy, how they feel.

    Given that every sector is digitising, it can start to feel like the d-word is now irrelevant. In an era where digital pervades all sorts of settings, just as the electrical supply has, there’s an argument for no longer maintaining a false divide. Indeed, it’s been argued for some time that we’d all be best off abandoning the use of the word ‘digital’ altogether.

    These arguments are solid and well-made. Within most industry verticals, use of the term ‘digital’ to describe capabilities or roles is indeed largely outdated. But the view that ‘digital’ is no longer needed at all is the view from inside one of those verticals.

    In short, I agree, ’tis a silly word. But there are catches.

    First, there are still so many edge-cases where it’s useful to delineate between non-digital and digital behaviours—crucially, not old-world versus new, but how the two coexist. Because they do.

    There’s another catch when talking about the digital era in isolation from those that went before, as the many advantages of digitalisation aren’t without drawbacks. Transforming to a digital-first approach introduces new challenges that didn’t previously exist. It can also cause existing challenges to be inherited, perhaps compounded. If implementing ‘digital’ offered a direct replacement for what went before, it could pass without comment. But it doesn’t.

    So, tearing down divisions between physical and digital universes makes good sense in commercial settings. But abandoning ‘digital’ as a distinction for how people sometimes behave, and the era in which they find themselves, would overlook many edge-cases worthy of distinction and examination.

    In the coming days and weeks, I want to share examples of where the delineation between digital-era and previously-observable behaviours is a helpful means of understanding what goes on, and how to improve things for people. Internet-era ways of behaving.

  • Post-office: how could open-plan and virtual meetings coexist?

    The computing division: ‘Bonus Bureau’ clerks in Washington D.C. calculating benefits for war veterans.

    The widespread adoption of remote working will be seen as one of the most significant digitally-enabled behavioural milestones of this decade. It has limitations, but it also invites the possibility of righting some of the wrongs that have crept into the design of working environments. However, more work is needed to avoid the worst of both.

    In this era, we have mainly open-plan offices and mainly closed-room online meetings. Those sharing physical spaces have good lines of sight but generally poor spaces to work and meet privately and without disturbance. For hybrid and remote workers, it’s the opposite. We might have expected these two working environments to blur. But they didn’t. And they should.

    Technologies permitting near-seamless remote and hybrid working have existed for years. The various merits and drawbacks were discussed at considerable length; occasionally there’d be an impassioned rallying-cry to make work more compatible with life. Still, uptake remained extremely low, and all it took was a global pandemic to turn the corner.

    Healthy workplaces are under attack, principally from the cost of commercial real-estate. Designing office space that is effective at offering a distraction-free work environment, privacy and also a sense of both inclusion and collaborative investment takes far more floor-space than taking down walls and packing desks together.

    Further space can be ‘saved’ by reducing the number of desks and hoping workers will continue to be invested in the place despite not having anywhere to call their own. But out with the bathwater goes the baby.

    For service-sector businesses, and also others with an interest in service efficiency such as primary healthcare, once the barriers of upskilling and driving adoption had been addressed by necessity, it was easier to see the opportunities of working remotely. But in behavioural terms, much is needed to virtualise the casual collaboration that comes with sharing a physical workplace. Everything is a meeting, and a meeting isn’t where work gets done. Meeting discipline has always been poor, and video-calls offer little improvement.

    That said, applying resistance to the convention of meetings is beneficial. Time and again, remote workers have been shown to be more productive than those in the office. And in the era of reducing emissions, avoiding workers’ travel is a big step in the right direction.

    But a substantial and outstanding challenge with remote working is that it is not inclusive. Many people find working from home difficult, particularly those who didn’t, or more likely couldn’t, consider remote working when taking on their home. In particular, it doesn’t suit those in junior roles, and those who live with many other people, especially if those people are trying to work remotely too.

    So, with newly-found flexible and productive working, plus newly-acquired digital behaviours across workforces, what can be done? More technical and behavioural evolution is needed to improve the work environment—physical and virtual—specifically to improve informal workspaces.

    Post-pandemic, hybrid working demands hybrid thinking. It’s worth treating improvements to both physical and virtual work environments as a single and ongoing endeavour, as the learnings from one may equally apply to the other.

  • The touch of a button

    From ‘1999 House Of Tomorrow’

    Accounts of the future often attempt to describe our evolving relationship with technology. These stories are told by people, for people, with people as the constant. We stay as we are, while the technologies change around us.

    What we can now recognise is that constant, perpetual human behaviours are the greater leap of imagination. Our behaviours change in reaction to, and in spite of, technological evolution. Changes happen chaotically, and unevenly, in ways that are challenging to predict.

    In the digital era, some behavioural reactions are overtly negative, like technology addiction and selfie dysphoria. Others are positive, such as the MeToo movement and the democratisation of knowledge. Many more fall into the grey area in-between: behaviours that carry some benefits but also come with some costs.

    So ‘digital’ can describe both how things work and what people do. Digitally-enabled human behaviours are unlike those that went before. People behave differently once they have crossed a digital threshold. We reset our ideas of how much time, work or value is attributed to an action, and we behave differently as a result.

    Previously, the stimulus for technological advancement had been to allow us to do new things, and the same things more easily. Beyond the digital threshold, the impetus is to keep up with our new expectations.

    Today, digital is what we do, not what is done unto us.

    This is doing.digital.
