• What broadcast does that streaming hasn’t replaced

    Taken from Prediction of Nostradamus by J. J. Grandville (1836)

    Good predictions are hard to make. Predictions made on the hoof may sound inspirational, but are riddled with biases. But good learnings can be drawn from bad predictions.

    David Bowie fans will make themselves known to you faster even than vegans and wild-swimmers. Even six years after his passing, Bowiefication continues so strongly that you might be forgiven for thinking that nobody else had an impact on popular culture between 1962 and 2016.

    Bowie’s career is noteworthy for being long and prolific: releasing 128 singles from 26 albums, even if only five were hits. This is double the output of The Beatles in their eight years together, although the Fab Four’s hit-rate was four times higher as one band, higher still when including their time as individual artists.

    Thanks to an interview on the BBC’s Newsnight in 1999, Bowie is often credited with foreseeing the potential of the internet way ahead of the pack. I’m fascinated by this interview. Bowie and the notoriously abrasive interviewer Jeremy Paxman, both having made comfortable careers in conventional media, are grappling with the concept of the media being democratised.

    Paxman adopts a pessimistic stance, as if he sees the internet as a media-delivery tool rather than some sort of cultural movement. But we can’t know whether this is what he thought, since he is a talented interviewer. Either way, Bowie took the bait and began to riff, almost off-the-cuff, about the internet’s forthcoming cultural impact. The core idea he hit upon was the inflection from a small number of massive cultural touchpoints—television interviews, rock stars, etc.—to a massive number of smaller ones. Touchpoints as small as a group of near-anonymous people or, as Bowie calls them, an audience.

    Bowie counters Paxman’s pugilistic pessimism with a sense of cultural potential, but his thesis is incomplete. While his sense of fashion had always been progressive, Bowie was technologically conservative. So, even if he wasn’t articulating ideas as he was having them, his thoughts about the cultural impact of the internet were still quite fresh. In an interview two years earlier, he described the internet as just another tool about which he wasn’t wildly excited.

    Though, as the star uttered these words, his management were already in talks about founding an internet service-provider with Bowie as both investor and public face. BowieNet launched between these two interviews, which might explain both Paxman’s questioning and Bowie’s newfound optimism.

    You can’t sustain such a long artistic career by staying in one place, and Bowie is renowned for reinvention. But for an artist capable of inventing and dropping whole identities as the pop market demanded, his technological skepticism had remained constant.

    Bowie’s imperial phase ran from 1975 to 1985; all his hits were released in this period. It was also in this period that he developed a disliking for Gary Numan: another androgynous, highly-styled British male artist and longstanding Bowie fan. Numan’s futuristic 1979 single Are ‘Friends’ Electric? had outperformed Bowie’s Boys Keep Swinging, so the latter and his fans began casually pouring scorn.

    The New Musical Express, a fusty publication that took exception to Numan’s searing electronic disruption of the conventional rock’n’roll format, enjoyed whipping up the vitriol of Bowie fans against emerging artists. In a 1980 interview, egged on by the NME’s Angus MacKinnon, Bowie criticised acts he perceived to have copied him while simultaneously misunderstanding him. In particular, he attacked Numan’s sci-fi themes:

    It’s that false idea of hi-tech society and all that which is… doesn’t exist. I don’t think we’re anywhere near that sort of society. It’s a enormous myth that’s been perpetuated unfortunately, I guess, by readings of what I’ve done in that rock area at least, and in the consumer area television has an awful lot to answer for with its fabrication of the computer-world myth.

    Skipping forward again to the famous 1999 interview with Paxman: in articulating optimism for his newest commercial venture, Bowie inadvertently offers a number of teachings about futurism, the internet and people in the media.

    1. It’s really hard to make long predictions. It’s hard to research them, formulate them and particularly to communicate them.
    2. We often consider ideas that are new to us as being new ideas. See also the illusion of explanatory depth. This colours our vision of the future: the things that appear to us to be happening contemporarily, and therefore signalling new trends, may have persisted for some time, albeit in primitive form, and we simply chose to dismiss them.
    3. Pre-internet, media people became consumed by the mechanics of the media. Their assessment of culture was seen through a lens of what media people would consume. Post-internet, the same is true but no longer confined to those in the media industry. We often blur commercial impact with cultural impact. For example, those who use ‘woke’ in a derogatory way are consistently, but not necessarily deliberately, describing cultural phenomena that cannot easily be commercialised.
    4. Many technological predictions, in this era, take the incumbent internet and add something to it: sometimes another technology, sometimes an observable consumptive human behaviour, sometimes an ideology. But future behaviours are rarely additive.
    5. Of course, people maintain a bias towards ideas into which they have tangibly invested. A rockstar who has founded an ISP bearing his name is going to be far more open to the possibilities of a technologically-empowered future than a rockstar who’s fallen out of step with ever-shifting pop trends.
    6. Cultural shift on the internet, free of commercialised influence, isn’t one continuously-progressing thing. Grassroots digital culture and behaviours lurch towards an idea, and then that idea is commercialised. The cycle then repeats. Bowie’s observation about the cultural movement being more significant than the tech giants was right, for about four years. Then after another three years it started looking right again, for about two years, and so on.
    7. All recent talk of the liberating possibilities of media-dominating advancements such as the metaverse and crypto can be considered another echo, in which those with tangible investments speak of cultural possibilities. I have no doubt Bowie, if he were still here, would say he found the idea of decentralisation fascinating—full of wild possibilities for artists and their audiences—while also investing heavily in crypto and selling off his creations as NFTs. After a short time, it would show itself to be a new implementation of an old idea: a fan club.
    8. In all things, the question influences the answer. That’s what Bowie teaches us, time and again. Put someone on a pedestal and give them something to fight against, and they will. Today you don’t need to be a rockstar: the pedestals are plentiful and the pugilism comes from all directions.
  • Better ways to find, with faceted search

    Taken from Ornament with Two Women Blindfolded by Sebald Beham (1527)

    I like faceted search. I think eBay is my favourite search engine of all. But search facets have dropped out of fashion in interface design. If we got back into search facets, then interfaces of all kinds would be better.

    I often remind myself that my usage patterns are not representative. I might even have that tattooed on the insides of my eyelids, just to be sure. I also know eBay isn’t where the cool kids hang out anymore but, as the bathwater drains away, we should remain mindful of the baby. In this case the bathwater is market share, and the baby is quick but also precise online search mechanisms. Or perhaps not even search, but find.

    It’s funny that the term ‘search-engine’ stuck, given all the ways this functionality could have been described. And what you call things over-influences what they do—ask Carphone Warehouse. Okay sure, searching might describe what users do (or did back then), but it doesn’t describe what users want. It’s foolhardy to make sweeping generalisations about users but I think I’m on reasonably safe ground when I assert that users don’t want to be looking; they want to have found. The result of the search is where the dopamine is, not the search itself. There’s no thrill-of-the-chase. Anyway.

    Faceted search is a good, solid, finding-stuff paradigm. Enter something that very broadly describes the thing you’re looking for, and then use facets to turn a huge number of hits into a small selection of precise, usable results. You could type in ‘car’ or pick the equivalent category, then choose age, number of seats, location, price range and so on, and fairly quickly you’ll have found a shortlist of cars fitting your exacting preferences. Faceted search is a really good way to buy shoes, since a user is interested in shoes only of a certain style and size, give or take.
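    The narrowing described above is simple to express in code. A minimal sketch in Python, assuming a hypothetical in-memory catalogue; the item fields (make, seats, price) and helper names are invented for illustration, not taken from any real listing service:

    ```python
    # A toy faceted-search pass over a small catalogue of car listings.
    listings = [
        {"make": "Ford", "seats": 5, "price": 4500, "location": "Leeds"},
        {"make": "Fiat", "seats": 4, "price": 2900, "location": "York"},
        {"make": "Ford", "seats": 7, "price": 8200, "location": "Leeds"},
    ]

    def facet_counts(items, field):
        """Count hits per facet value: the numbers a UI shows beside each filter."""
        counts = {}
        for item in items:
            counts[item[field]] = counts.get(item[field], 0) + 1
        return counts

    def apply_facets(items, **facets):
        """Narrow the result set facet by facet.

        Each keyword argument is a facet: either an exact value to match,
        or a predicate function (for ranges like 'price up to 5000').
        """
        results = items
        for field, wanted in facets.items():
            if callable(wanted):
                results = [i for i in results if wanted(i[field])]
            else:
                results = [i for i in results if i[field] == wanted]
        return results

    # Broad query first, then facet down to a precise shortlist.
    shortlist = apply_facets(listings, make="Ford", price=lambda p: p <= 5000)
    ```

    The point of the `facet_counts` helper is the interface affordance: showing the user how many results each choice would leave, so every click narrows rather than dead-ends.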

    Free-text searching, even using fairly hardcore search operators, doesn’t compare. I’m a hardened user of search operators, wrapping critical terms in quote-marks and so on. But this requires me to remember all this syntax, which I can’t, and popular search engines seem to adhere to it less strictly with every passing year. Google Shopping might be good for browsing, but is nearly unusable for anything precise, seeing as the only facets are condition, price, colour and seller. Amazon’s search, while more finely faceted, seems to be deliberately imprecise to encourage users to browse. I’d guess they’re measuring dwell-times over particular pictures, to gauge what to recommend later. But again, browsing isn’t finding.

    Google Maps is infuriating for its lack of useful facets. It can plot intricately precise routes from A to B, but can only facet these routes to exclude tolls, motorways or ferries. I can’t avoid tight turns or blind corners, which would be useful in something bigger than a car, but maybe the dataset doesn’t exist. That said, what if I wanted to include something in my route? The cheapest fuel-stop, maybe, a nice bite to eat, or a scenic spot for a leg-stretch. The satnav I bought nearly two decades ago could do this. If I were minded to, I could research it all myself (by searching separately) and set intermediate destinations along my route, but the voice in my head says: you’re the search-engine, you figure it out.

    There used to be lots of ways to find online stuff: not just search but directories, curated lists, webrings and whatnot. The web outgrew many of these approaches but, even so, there also used to be various search paradigms—like some specifically for queries in the form of questions. Now there seems to be only one style of search: jabbing in keywords and hoping for the best. It has contorted online content into a keyword-dominant vernacular: keyword-jabbing, and optimising for it, has made the contemporary web even more categorised than the directories of old. Funny how that came back around. It’s influencing how we behave.

    A poignant, if a little NSFW, example:

    Data on what we search for, pay for and click on is being used to predict our desires and funnel us bespoke(ish) porn.

    At first blush, it might seem like this kind of micro-targeting would just turbo-boost the internet’s existing trajectory, making it even easier for people to find and embrace a diversity of bodies and fetishes. But there’s a fundamental shift here from a world in which we explore a passive sea of content to a world in which porn actively explores and prescribes itself to us. Because this shift stems from deep financial upheaval in the adult industry, the content pushed upon us will likely increasingly reflect what is most profitable, not what is most widely desirable. It could well become narrowing, or at least channelling, rather than broadening.

    This approach is also how ‘natural’-language interfaces—the ones you bark at from across a room—have evolved. Alexa is precise only if you are able to ask for something that the machine can recognise as being near-unique, such as a particular song. It can’t have an ongoing conversation with you about taking a large set of search results and turning it into a handful of sensible choices.

    More widespread faceted search would be an opportunity to take ubiquitous keyword-based searching and explode it into new, useful, powerful interfaces: both usable and precise. eBay has stuck to this model and it’s really good. LinkedIn uses it a lot for finding contacts, jobs and so forth, and it works well: finding something as specific as which of your contacts know someone at Microsoft would be arduous without it. Holiday sites—hotels, flights and the like—also give users plenty of facets to turn all the world’s options into a manageable shortlist. Users are used to doing it; it’s just not widely available.

    That’s how I’d want to use a natural-language search but Siri, or whichever, can’t handle it. I can’t even remember what I have to shout at my phone to use its natural-language doodah: that’s how little use I have for such a blunt implement. It’d be more interesting for these interfaces—all interfaces—to take the 1990s multi-dropdown facet model like eBay and think about how to bring this level of specificity to contemporary users’ lives. Simply put, facets reflect how people think: must-haves versus negotiables. Certainties versus unknowns. A find-engine would do the same.

  • In pursuit of intuitive interfaces

    Taken from a glass negative in the collection of The History Trust of South Australia (circa 1925)

    Whenever I hear a user interface is ‘intuitive’—or needs to be—a synapse fires. An intuitive digital interface would follow users’ behaviour, regardless of their prior experiences with other interfaces. We’re not there yet.

    When pointy, clicky, window-based desktops emerged, users were trained to find functions at the top, contexts at the bottom, and the ‘work’ going on in the middle. Machines running either Windows or macOS still ship like this. If an Excel user wants to find functionality, they head upwards to a ribbon icon or dropdown menu. If they want to switch to Word, they head downwards to the start-bar or dock.

    The internet dented this. On the web, thanks to the hyperlink, functionality is all muddled in with content. Context-switching is more like a fork in the road. Anyone who’s fallen down a wikihole will know.

    In my last schoolyear, I helped teach the sprogs basic computer skills: like community service, but for crimes not yet committed. In a couple of hours, I could get 35 kids to be reasonably confident with spreadsheets: diving into cells to construct formulas, and switching around between applications to pull in chunks of data. The keen ones—the ones bitten by the bug—then went on to discover the command-line and, I expect, are now lounging on a domain-name empire or something.

    Getting to grips with the pre-Google early web, though, took longer. Just like a spreadsheet, a wide bar across the top would accept input, but the kids would find it harder to make their browser ‘do stuff’. The browser didn’t integrate well with other applications: you could copy out but not paste in. Clicking a link within an email could take you to a specific webpage, but clicking a link on a webpage couldn’t take you to a specific email, only a blank one.

    While that early web is no longer recognisable, I still catch users staring blankly at interfaces, just as those kids did in the mid-Nineties. Better browsers, better online services and new internet devices—phones; tablets; watches etc.—tackled many of these paradigm mismatches head-on. Many millions of hours of thought and experimentation have been given over to make functionality, context and content coexist on palm-sized touchscreens.

    Creeping homogeneity, or more accurately convergent evolution, in interface design has tried to put things where users might expect them based on their previous experiences. But challenges with contemporary interfaces are more complicated and nuanced than simply where navigation should be placed.

    An example. On my desktop, all the system settings are in one place, and then all the settings specific to an application are in another. Except these days there are system-wide restrictions on things like accessing the screen or the disk, so every application I install has to beg me to change system preferences before it can function, thereby dividing application settings into two contexts.

    It’s worse on my mobile. All preferences for the system, and the apps that shipped with the phone, are in one huge settings menu. But the preferences for individual apps are who-knows-where: sometimes in the app context, sometimes the system context. Users drown in this kind of thing. Who can see my stuff (as a setting) and whose stuff I can see (as a function) are more closely related in users’ minds than is often to be found in the interfaces they use.

    This is complicated because there are so many use-cases. How an app notifies you is one thing, how all apps notify you is another. But users tend not to think like that; they often lean towards task-focussed thinking. So current interfaces trend towards a convergence, leaving interfaces feeling vaguely recognisable but indistinct, which in turn causes further confusion and poorer recall amongst users.

    However, rather than all designers continuing to iterate towards the mean, there are two areas of exploration that could significantly improve users’ comprehension of interfaces, which in turn would make interfaces more useful. These areas could perhaps make interfaces truly intuitive, by following human behaviours rather than imposing conventions.

    First, users do things fastest in familiar ways. So, passive preferences could follow users around, between apps, contexts, tasks and even functionality. Instead of users having to learn how each device and app can be made to, say, grab a photo from one context and use it in another, the standard way they do this could transcend contexts, just as copy-and-paste does. All the other methods would remain too, but the one that’s top-of-mind for the user would always work as they’d expect, in any context.

    Imagine Microsoft Word but it comes as a plain text editor. No bold/italic/etc. The only commands are open, save, copy, and paste.

    You get used to it. Then one day you decide you’d like to style some text… or, better, you receive a doc by email that uses big text, small text, bold text, underlined text, the lot.

    What the hey? you say.

    There’s a notification at the top of the document. It says: Get the Styles palette to edit styled text here, and create your own doc with styles.

    You tap on the notification and it takes you to the Pick-Your-Own Feature Flag Store (name TBC). You pick the “Styles palette” feature and toggle it ON.

    So the user builds up the capabilities of the app as they go.
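    That grow-as-you-go idea can be sketched in a few lines. A toy Python illustration of the pick-your-own feature-flag store; every name here (`FeatureStore`, `styles_palette`, the command set) is hypothetical, invented only to make the scenario concrete:

    ```python
    class FeatureStore:
        """Tracks which optional capabilities the user has switched on."""

        def __init__(self):
            self._enabled = set()

        def enable(self, feature):
            self._enabled.add(feature)

        def is_enabled(self, feature):
            return feature in self._enabled


    class Editor:
        """Starts as a plain-text editor; capabilities appear as flags are enabled."""

        BASE_COMMANDS = {"open", "save", "copy", "paste"}

        def __init__(self, features):
            self.features = features

        def commands(self):
            cmds = set(self.BASE_COMMANDS)
            if self.features.is_enabled("styles_palette"):
                # The Styles palette brings its own commands with it.
                cmds |= {"bold", "italic", "underline"}
            return cmds


    features = FeatureStore()
    editor = Editor(features)
    assert "bold" not in editor.commands()  # plain-text editor out of the box

    features.enable("styles_palette")       # the user taps the notification
    assert "bold" in editor.commands()      # the Styles palette appears
    ```

    The design choice worth noting: the styled document arriving by email is what prompts the flag, so the interface grows in response to the user’s actual tasks rather than shipping every feature up front.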

    Second, this is a good potential AI problem. Designers lean upon what has gone before—what has demonstrably worked, what’s aesthetically appealing and so forth—whereas an AI could start on interface problems unfettered by baggage and metaphors. It would be interesting to think of AI in the context of intuitive design, not as a producer of strange, mangled cat pictures, but as a collegial third eye: unbound by design dogma and able to show us new possibilities from outside our own influences.

    What we need are intelligences that help us do useful things in new and better ways, ways which we could not have imagined alone. AIs which are colleagues and collaborators, rather than slaves and masters.
  • Taken for a ride: an Uber-full of nonsense

    Taken from A Carriage in London by Constantin Guys (1848-56)

    I ordered an Uber to the railway station. The driver reached me within two minutes. This digital behaviour has become thoroughly embedded, and is truly habit-forming. Once you’ve hailed a ride through a digital interface, it’s hard to imagine ever going back to calling a dispatcher and waiting, and waiting.

    I acknowledged that I was in a mask, as I’d been negative for only a few days. That’s alright, he said, although he thought the tests were showing everything as Covid now. Oh joy, I thought.

    My Uber rating is surprisingly low, as a result of a single booking more than five years ago: returning from the vet with my cat, howling for freedom in her carrier. I can scarcely afford a disagreement with a taxi driver.

    He continued. He thought the Covid thing was overblown; that the media and government had made a great big deal out of nothing. That politicians and scientists were out to control you. I stuck to neutral noises: not in agreement but not contesting either. The cowardice of a man with a low score.

    He’d had Covid, he said, and it was basically just ‘flu. Oh so he’d had his jabs, then, I replied hopefully. No, he didn’t believe in them. My next noise must have sounded curious. He would’ve got them but he knew so many people that got sick from the vaccines, with blood clots and all. Like who, I regretted asking. Of course, he couldn’t name anyone. I lurched for another topic of conversation.

    Just as it’s hard to go back to calling minicab offices, it’s hard to go back to professionally-researched information through mainstream media. Social media feels like socialising: a conversation in the pub, surrounded by friends; low-commitment and self-regulating. You would only see the half-truths and manipulation on your timelines if you chose to look, and why would a comfortable person choose discomfort?

    My digital behaviour brought me a taxi ride. His brought him a customer, and a warm bath of misinformation. And by leaving this unchallenged, my Uber score improved by 0.01.

  • What went wrong with social media?

    Taken from Une Discussion littéraire a la deuxième Galerie by Honoré Daumier (1864)

    This present generation of social media platforms—the big ones; the ones whose logos adorn the side of tradespeople’s vans—all started out with a noble endeavour or two. They were built so that people could stay in touch: straightforward, lightweight and low-commitment.

    That’s how it was for a while. Friends discussed and shared. Celebrities of all ranks became accessible, and readily engaged with fans. Strangers came together to share laughter or collective displeasure. Broadcast media and brave brands used it as a means of gathering rapid participation from their newfound communities, for better or worse.

    While this all still goes on, this image of social media is no longer first to spring to mind. Somehow, the well was poisoned. Within their first decade, the experience of social media u-turned and predominantly made people unhappy.

    Before mainstream social media, the functionality had already kind-of existed, at least in primitive and more disjointed form. But it was restricted to those minded and comfortable enough to hunt it out and assemble it for themselves. By contrast, the new platforms succeeded on two fronts: simplifying online participation, and creating a groundswell of interest amongst regular people. It was great, at first. Then it wasn’t. Now, it’s net-negative.

    For a while I thought this was caused solely by commercial pressure. Fuelled by venture-capital, the social media firms needed to drench communities in advertising to be viable, and little of it was of high-enough quality or placement to be altogether welcome. This in turn disrupted the community spirit, which in an online setting can already be fragile.

    Then I thought it was a more general shift in Western politics, away from liberalism and towards populist extremes. In the main, the internet had held a liberal, almost egalitarian ideology, in the sense that folks could do as they pleased, within a consensus for some high-level constraints. But social media seemed to be lurching away from this. Political stances became more visible. Those of opposing views would seek each other out. Discussions around observations became arguments around beliefs.

    Later I pondered whether it was the explosion in the online and social media population. But a logical prediction might have been that more users wouldn’t worsen the platform; it might have even improved it. Were the late majority significantly illiberal, compared with early adopters? Hard to prove. Perhaps the increase in anonymous users? Again, tricky: there had been anonymous or pseudonymous net citizens before, and all was just fine. Besides, these population increases don’t tally, in terms of timing, with the descent of the platforms as a whole.

    But now something has happened that has me connecting dots. Twitter has announced yet another policy: a toughening-up on spam tweets and copypasta, those identical posts across multiple accounts that seem particularly popular amongst those with disruptive or political intent. Twitter has all sorts of poorly-enforced rules and policies, so you have to wonder what difference one more would make. But yet another attempt to tidy up the platform set me thinking.

    None of the major social media platforms have ever been lawless. They all set out with terms of use, similar to those a hosting company might impose upon customers to keep illegal, immoral or fattening content off their servers. There were even clauses to promote the good health of the community as a whole. And these terms received regular review, so the platforms can’t be described as ever having been egalitarian to a fault: they’ve always been more self-regulated than what went before.

    However, abuse of social media weaves between platforms’ rules, and always has. It’s the platforms’ successes—their straightforward, centralised functionality—that have left them wide open. They’re not networks; they’re destinations. So their vulnerability is twofold: they’re abused because they’re so abusable, and because they smothered any alternative.

    Before social media, if you had wanted to drive a particular message towards loads of online eyeballs—the number necessary, say, to distort an election—it would have taken a mammoth effort. Personal publishing was too distributed. Everyone owned and maintained their own patch. It’d take too much persuading.

    The pre-social web was imperfect, but what social media did was demolish it: it took the eyeballs off what had gone before and concentrated most online attention in a small number of places that were vulnerable to subversion.

    The concentration of attention is a characteristic of old-media but, in their case, any slant or bias was generally well-displayed and understood. Publications and broadcasters happily nailed their colours to whatever mast, and the public were broadly free to choose where to align. The difference is these publishers maintained standards, some self-imposed and some regulated. They were not in the habit of handing their front-pages over to whatever fruitcake fancied reaching the readership for whatever end.

    And as social media drained visitors from other personal publishing, it made decreasing sense to persist. Many didn’t. So, the far-less abusable but still egalitarian world of old blogs languished. The execution of Google Reader, in the era when Google still fancied having a social network of its own, caused further damage.

    Twitter trying to toughen up on subtle abuse of the platform has me wondering whether the colossal task of moderating social media is at odds with the business of operating a social media platform, and whether this is why they barely bother. Not directly because of the risk of a dent in advertising revenue, but because, for these centralised destinations, small in number but massive in scale, viability and abuse are intrinsic.

  • Why we need to stop saying ‘digital’, and why we don’t

    The Usborne Book of the Future (1979) prophesied the home of 1989 would be media-rich and technology-enabled, with its inhabitants enjoying both new things and new ways to do things.

    A few years ago, I met the CTO of a manufacturing firm. His company had large production lines in Manchester where they made real things out of raw materials. They were appointing the company where I worked and, as I had the word ‘digital’ in my title, the CTO visited to suss me out. We greeted each other and exchanged business cards.

    “Which one?”, he asked.

    “Which one what?”, I smiled. I couldn’t think of anyone there with my name.

    “Which digital?”, he replied.

    This has floated in the shallows of my memory ever since. A CTO in manufacturing will meet a fair few ‘digital’ folks. There’ll be a digital someone-or-other thinking about the production lines: monitoring; twinning; maintaining and so forth. There’ll be someone else in supply-chain: connecting; streamlining; reporting. There would be an equivalent in HR, finance, logistics, design and so on. I just happened to be the one thinking about the now-digitalised behaviours of customers and product users, rather than the operation of the company itself. How they choose, how they buy, how they feel.

    Given that every sector is digitising, it can start to feel like the d-word is now irrelevant. In an era where digital pervades all sorts of settings, just as the electrical supply has, there’s an argument for no longer maintaining a false divide. Indeed, it’s been argued for some time that we’d all be best off abandoning the use of the word ‘digital’ altogether.

    These arguments are solid and well-made. Within most industry verticals, use of the term ‘digital’, to describe capabilities or roles, is indeed largely outdated. But the view that ‘digital’ is no longer needed at all is the view from inside one of those verticals.

    In short, I agree, ’tis a silly word. But there are catches.

    First, there are still so many edge-cases where it’s useful to delineate between non-digital and digital behaviours—crucially, not old-world versus new, but how the two coexist. Because they do.

    There’s another catch when talking about the digital era in isolation from those that went before, as the many advantages of digitalisation aren’t without drawbacks. Transforming to a digital-first approach introduces new challenges that didn’t previously exist. It can also cause existing challenges to be inherited, perhaps compounded. If implementing ‘digital’ offered a direct replacement for what went before, it could pass without comment. But it doesn’t.

    So, tearing down divisions between physical and digital universes makes good sense in commercial settings. But abandoning ‘digital’ as a distinction for how people sometimes behave, and the era in which they find themselves, would overlook many edge-cases worthy of distinction and examination.

    In the coming days and weeks, I want to share examples of where the delineation between digital-era and previously-observable behaviours is a helpful means of understanding what goes on, and how to improve things for people. Internet-era ways of behaving.

  • Post-office: how could open-plan and virtual meetings coexist?

    The computing division: ‘Bonus Bureau’ clerks in Washington D.C. calculating benefits for war veterans.

    The widespread adoption of remote working will be seen as one of the most significant digitally-enabled behavioural milestones of this decade. It has limitations, but it also invites the possibility of righting some of the wrongs that have crept into the design of working environments. However, more work is needed to avoid the worst of both worlds.

    In this era, we have mainly open-plan offices and mainly closed-room online meetings. Those sharing physical spaces have good lines of sight but generally poor spaces to work and meet privately and without disturbance. For hybrid and remote workers, it’s the opposite. We might have expected these two working environments to blur. But they didn’t. And they should.

    Technologies permitting near-seamless remote and hybrid working have existed for years. The various merits and drawbacks were discussed at considerable length; occasionally there’d be an impassioned rallying-cry to make work more compatible with life. Still, uptake remained extremely low, and all it took was a global pandemic to turn the corner.

    Healthy workplaces are under attack, principally from the cost of commercial real-estate. Designing office space that offers a distraction-free work environment, privacy, and a sense of both inclusion and collaborative investment takes far more floor-space than taking down walls and packing desks together.

    Further space can be ‘saved’ by reducing the number of desks and hoping workers will continue to be invested in the place despite not having anywhere to call their own. But out with the bathwater goes the baby.

    For service-sector businesses, and others with an interest in service efficiency such as primary healthcare, once the barriers of upskilling and driving adoption had been addressed by necessity, it was easier to see the opportunities of working remotely. But in behavioural terms, much work is still needed to virtualise the casual collaboration that comes with sharing a physical workplace. Everything is a meeting, and a meeting isn’t where work gets done. Meeting discipline has always been poor, and video-calls offer little improvement.

    That said, pushing back against the convention of meetings is beneficial. Time and again, remote workers have been shown to be more productive than those in the office. And in the era of reducing emissions, avoiding workers’ travel is a big step in the right direction.

    But a substantial and outstanding challenge with remote working is that it is not inclusive. Many people find working from home difficult, particularly those who didn’t, or more likely couldn’t, consider remote working when taking on their home. In particular, it doesn’t suit those in junior roles, and those who live with many other people, especially if those people are trying to work remotely too.

    So, with newly-found flexible and productive working, plus newly-acquired digital behaviours across workforces, what can be done? More technical and behavioural evolution is needed to improve the work environment—physical and virtual—specifically to improve informal workspaces.

    Post-pandemic, hybrid working demands hybrid thinking. It’s worth treating improvements to both physical and virtual work environments as a single and ongoing endeavour, as the learnings from one may equally apply to the other.

  • The touch of a button

    From ‘1999 House Of Tomorrow’

    Accounts of the future often attempt to describe our evolving relationship with technology. These stories are told by people, for people, with people as the constant. We stay as we are, while the technologies change around us.

    What we can now recognise is that constant, perpetual human behaviours are the greater leap of imagination. Our behaviours change in reaction to, and in spite of, technological evolution. Changes happen chaotically, and unevenly, in ways that are challenging to predict.

    In the digital era, some behavioural reactions are overtly negative, like technology addiction and selfie dysphoria. Others are positive, such as the MeToo movement and the democratisation of knowledge. Many more fall into the grey area in-between: behaviours that carry some benefits but also come with some costs.

    So ‘digital’ can describe both how things work and what people do. Digitally-enabled human behaviours are unlike those that went before. People behave differently once they have crossed a digital threshold. We reset our ideas of how much time, work or value is attributed to an action, and we behave differently as a result.

    Previously, the stimulus for technological advancement had been to allow us to do new things, and the same things more easily. Beyond the digital threshold, the impetus is to keep up with our new expectations.

    Today, digital is what we do, not what is done unto us.

    This is doing.digital.

From elsewhere

Links to some things I’ve been looking at recently.

Roll call

Links to some people whose sites I follow regularly.