• A culture of digital discrimination

    Taken from Looks Like A Good Book by Percy Loomis Sperr (ca. 1923–42)

    When you book yourself a flight, you are typically given a six-digit alphanumeric string as your reference number. By entering this code into your airline’s app, for example, every detail of your booking can be retrieved. This joining-up of the many datapoints needed for passengers to board is one of the ways in which the travel industry has embraced the digital era. Yet, there are many other areas in which the handling of passengers’ information is disjointed. What causes these discrepancies?

    When I was returning to Europe from Seoul, staff at Incheon airport noticed my limp and produced a wheelchair. Incheon is palatial, modern, well-designed and well-run. It is also massive, so I was grateful for the wheels. The staff also rang ahead to my next destination, Charles de Gaulle in Paris, to arrange for me to be met with another wheelchair upon my arrival. But nobody came. Having arrived later than scheduled, and with another flight to catch, there wasn’t time to wait around. I dragged my foot across the airport to another terminal, reaching my connection just in time. My baggage wasn’t so lucky.

    This trip, and a few afterwards, gave me a small insight into the many challenges faced by those who rely upon a wheelchair. The travel industry loves dehumanising abbreviations, so it uses the term PRM, or persons with reduced mobility, for anyone who requires assistance moving around. Some airports seem to be excellent at providing that assistance, but others are somewhere between poor and terrible. If you’re going to be using an airport-owned generic chair, it’s often not there when you arrive. If you need to use your own chair, it can be a long wait before it is brought up from the hold so that you can leave the plane.

    It’s strange that there should be this discrepancy. Many aspects of travel are disrupted by chaotic factors such as the weather, but given that airports are all in the business of loading and unloading flights, you might expect the experience of accessibility services to be fairly universal. Not so.

    The BBC’s security correspondent Frank Gardner uses a wheelchair, having taken bullets to the spine from al-Qaida gunmen. Gardner has recounted three occasions (1, 2, 3) when he’s found himself stranded in his seat long after all other passengers have disembarked [update below]. Stories such as this are disturbingly common. In the UK, airports have been warned they face court action by the regulator if they continue to fail disabled passengers.

    In this digitally-enabled era, and in an industry with fine margins and so much automation, it’s amazing this happens at all, let alone so frequently. But then, it isn’t amazing, because systems reflect the prejudices of their creators.

    That six-digit reservation code identifies a Passenger Name Record (PNR) within a Global Distribution System (GDS): a network that facilitates transactions between travel service-providers. When you book a flight, the PNR contains data supplied by you—your name, date of birth, contact and payment details, passport information and so on—and also data from the airline, such as the travel itinerary, booked seats and baggage.

    Passengers’ additional requests, such as meal preference or mobility assistance, are on the PNR as well. So, that one record holds all the data needed for the trip, and can be shared by all the companies and staff that need to access it: the agent and airline, the airport, security, ground services and special assistance providers. With such portability of pertinent data, the system should work flawlessly. But it doesn’t.
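
    As a rough sketch of what that record holds (the structure and example values below are illustrative rather than any GDS’s actual schema, though WCHR is a genuine IATA special service request code for wheelchair assistance), the whole trip hangs off that one locator:

        from dataclasses import dataclass, field


        @dataclass
        class PassengerNameRecord:
            """A simplified sketch of a PNR; real GDS records differ in detail."""
            record_locator: str                  # the six-character booking reference
            passenger_name: str
            date_of_birth: str
            contact_details: dict
            itinerary: list                      # flight segments supplied by the airline
            seats: dict                          # segment -> seat assignment
            special_service_requests: list = field(default_factory=list)  # meals, mobility assistance, etc.


        # One record, shared by the agent, airline, airport, security and ground services.
        booking = PassengerNameRecord(
            record_locator="K7KQ2B",             # hypothetical locator
            passenger_name="DOE/JANE MS",
            date_of_birth="1975-04-12",
            contact_details={"email": "jane@example.com"},
            itinerary=[{"flight": "KE901", "from": "ICN", "to": "CDG"}],
            seats={"KE901": "32A"},
            special_service_requests=["WCHR"],   # wheelchair assistance requested
        )

        # The assistance provider at the arrival airport needs only the locator to see
        # that WCHR is on the booking, which is what makes the failures so striking.
        print(booking.record_locator, booking.special_service_requests)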

    The GDS model has at least two big faults. The first is security: those six-digit PNR codes are fairly easy for a hacker to guess by elimination. Secondly, because the whole system is born of a culture in which there are ‘normal’ conditions and then ‘exceptional’ ones such as PRMs, it perpetuates that culture of division. Everyone’s journey is exceptional and everyone, to a greater or lesser degree, requires assistance. Cabin crew know this, but the fact that it hasn’t been normalised into the systems underpinning the industry exposes how the prejudices of those systems’ creators and operators persist. And as interoperable systems such as this become adopted and standardised, a passenger can’t escape such prejudice by switching airline.
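
    To give a sense of scale on that first, security fault, here is a back-of-the-envelope sketch. It assumes locators drawn at random from the full set of 26 letters and 10 digits; real systems typically use a smaller alphabet and predictable, non-random allocation, which only shrinks the effective space further.

        # Rough size of the space of six-character booking references.
        alphabet_size = 26 + 10
        length = 6

        keyspace = alphabet_size ** length
        print(f"{keyspace:,} possible locators")    # 2,176,782,336 -- about 2.2 billion

        # Even at a modest 1,000 guesses per second against a poorly rate-limited
        # endpoint, sweeping the whole space takes weeks, not centuries, which is
        # why a locator shouldn't be treated as a secret.
        guesses_per_second = 1_000
        print(f"~{keyspace / guesses_per_second / 86_400:,.0f} days to try them all")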

    This is the great shadow of the digital era: discrimination is becoming more systemic, not less.

    When Gatwick Airport’s current assistance services provider was appointed, its CEO said they would “deliver an exceptional service provision” by “combining innovations in people, processes and systems”. Less than a week after the aviation regulator issued its warning of enforcement action against airports if they keep failing less mobile passengers, Gatwick’s assistance services provider awarded its staff a 15% pay rise. On that same day, a disembarking 82-year-old passenger with restricted mobility tragically fell to his death on a Gatwick Airport escalator, having not been offered assistance services in good time.

    Businesses of all kinds are going through some form of digital transformation, just as the travel industry has. This process is often heralded as a great victory for innovation, service and efficiency. But each transformation picks up striking limitations and prejudices by treating what it deems most frequent as ‘normal’, and everything else as ‘exceptional’. Designers of these systems would do well, next time they fly, to add a mobility assistance request to their booking.

    [Update: shortly after this post was published, Frank Gardner found himself stranded, again, on a plane that had landed at Gatwick Airport.]

  • Sociable media: the case for promoting positive social norms

    Taken from Pleito de Suegras by José Guadalupe Posada (circa 1880–1910)

    In the UK, around 30% of adult women choose not to dye their hair, a proportion that is falling steadily. The data are sketchier for men, but the figure is more like 60%, and falling more sharply.

    This disparity of choice interests me. It doesn’t come as a surprise, but what is harder to pinpoint is the precise cause. My knee-jerk reaction is that it’s the long legacy of differences in aesthetic expectations between the genders: disproportionately, women have felt an expectation to appear younger.

    But a singular cause is hard to pinpoint. It’s almost chicken-and-egg*: was the creation of and continued demand for hair-dye caused by a pre-existing expectation, or did the manufacturers of the product create, stimulate and perpetuate this pressure to shift products?

    The answer is probably a lot of both, and that’s what I find interesting in the context of emerging behaviours in digital spaces: effects can also be causes.

    The hair-dye example illustrates that after a while, even if a common behavioural trend had a single, defining origin, the fact people do it becomes a reason for people to do it. People like to colour their hair, so hair dye exists. Because hair dye exists, people like to use it.

    But, because this behaviour is commonplace, people—some more than others—start choosing to do it because they feel there is an expectation upon them. So far, this is the best way I have found to contextualise the relatively sudden descent of digital social spaces from cordial and discursive into aggressive and hostile.

    Twitter’s then-CEO Dick Costolo put it bluntly in a leaked internal memo: “I’m frankly ashamed of how poorly we’ve dealt with this issue during my tenure as CEO. It’s absurd. There’s no excuse for it. I take full responsibility for not being more aggressive on this front. It’s nobody else’s fault but mine, and it’s embarrassing.”

    “It’s no secret and the rest of the world talks about it every day”, Costolo continues. “We lose core user after core user by not addressing simple trolling issues that they face every day.”

    The challenge of moderating social spaces, even semi-automatically, is considerable. Every social platform has a long, convoluted code of conduct to protect its revenue and, to some degree, its users. Yet some of the worst behaviour exhibited on social platforms doesn’t come from single users pushing content through the cracks between terms of use. Rather, much of it comes from people—unconnected other than by the platform itself—behaving badly together.

    Group-based moderation is poorly defined and difficult to enforce. It’s like trying to hand out speeding tickets to a stream of motorists based on the traffic’s average speed: every individual would argue, rightly, that they’re not in control of the group and therefore can’t influence the average speed significantly. Yet we’ve all been on motorways where large numbers of drivers—perhaps the majority—are in excess of the speed limit. When it happens, it’s easy to go with the flow. Herd mentality.

    When social media got going in earnest, it was common to see individual users publicly airing a grievance with a brand, usually for poor service. It was so common that, for a time, it started to seem like the de-facto means of orchestrating customer care, since brands invested heavily in managing their social media reputation.

    Much more common now is for users to swirl, suddenly and rampantly, around a particular issue: a point of politics, or an individual’s behaviour, or an article cleverly constructed to provoke a response whether or not you’ve read it. People jump on.

    But I think there’s more to it than herd mentality: I think many people do it because they think that’s what they’re supposed to do, either for the sake of their identity or because they think that’s how these things work. This is why I think moderation is failing. Moderation is restrictive, not permissive. Being explicit about what individuals can’t do does little to moderate what groups can do.

    Changes would be needed for social media platforms to curtail bad group behaviours. Currently, it’s hard to imagine how that would not impact the freedom of individuals’ expression. But that’s because of the restrictive overtones of moderation: these things can be extremely subtle, so piling on more restrictions is unlikely to help.

    Instead, functional changes to how social media platforms work—permissive rather than restrictive—may protect the individual while also curtailing the extremes of the group. There is precedent: when brands got systematic about social-based customer care, the barrage subsided. For the most part, they didn’t say ‘you can’t complain here’; instead, they found ways of stepping forth to manage the situation actively and elegantly.

    The same may have to be true of the platforms themselves. They need to be able to determine when groups are sharing a joke and when groups are attacking vulnerable people. They need to encourage the former, functionally, and discourage the latter, by determining who in the group is participating just because they think this is what participation now means. Today the distinction is subtle, but the platforms need to know it when they see it.

    *By approximately 340 million years, it was the egg.

  • Maximum viable potential

    Taken from The Invention of Printing by Theodore Low De Vinne (1876)

    Internet-era innovations often arrive to tremendous fanfare. When you consider the number of new things being made, the number that go on to be hugely impactful is extremely low. Nevertheless, the trumpeting—proportional to how much funding is at stake—goes on. This one, this one here, is the Next Big Thing. This has the potential to change everyone’s lives, forever.

    The overwhelming majority don’t; some do. And those that have done so rarely did it in the form they were in when touted as gamechangers. That’s the nature of it all.

    Early-stage technologies suffer deeply from developer myopia. Technical teams start with the core functionality and over time fan outwards towards the exceptions and edge-cases. But the drawback of a ‘good enough is good enough’ approach is that it seldom is. The initial technologies of Uber and Airbnb did not, for example, make people safer, yet steps to ensure safety are now core to their brands.

    Consider the exclusionary nature of the smartphone. The extent to which it is resilient to your lifestyle has depended on how much you have in common with Californian men: general wealth, education level, larger hands, a temperate climate and so on. Manufacturers removing headphone jacks was a clear signal their smartphones were not ‘for everyone’: only for those able and willing to buy and use much more expensive, wireless alternatives, in spite of their shortcomings. Again, myopia.

    I’m reminded of a famous post by developer Patrick McKenzie: Falsehoods Programmers Believe About Names. Where it is necessary to record someone’s name, developers commonly use three fields: a title, a given name and a family name. McKenzie listed extensive exceptions to this formula: it’s enough to make you realise the scale of ‘edge cases’ is vast. He also inspired others to write lists of falsehoods about phone numbers, time and all sorts of other things. Along similar lines is a recent, enlightening post entitled Horrible edge cases to consider when dealing with music.
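
    To make the point concrete, here is a minimal sketch (the function names are mine, not McKenzie’s; Suharto, the mononymous former Indonesian president, is a standard example). The rigid three-field formula cannot even record some real names, whereas storing the name a person actually uses sidesteps a whole class of those falsehoods.

        # The common, brittle formula: title + given name + family name.
        def register_rigid(title: str, given_name: str, family_name: str) -> dict:
            # Mononyms, names without titles, and names that don't split neatly
            # into 'given' and 'family' parts all fail or get mangled here.
            if not (title and given_name and family_name):
                raise ValueError("all three parts required")
            return {"title": title, "given": given_name, "family": family_name}


        # A more forgiving alternative: store what the person actually calls themselves.
        def register_flexible(full_name: str, sort_key: str = "") -> dict:
            # One display string, plus an optional collation key for when sorting matters.
            return {"display_name": full_name, "sort_key": sort_key or full_name}


        try:
            register_rigid("", "Suharto", "")       # the rigid formula can't record him at all
        except ValueError as err:
            print("rigid formula failed:", err)

        print(register_flexible("Suharto"))         # the flexible one can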

    What all these edge-cases expose is the kind of developer myopia that plagues new technologies. These simplified assumptions aren’t representative of people’s behaviours and how the world works, and therefore alienate users.

    So, I wonder whether the falsehoods are a good predictor of innovation success. In scientific fields such as medicine, rigorous studies to identify edge-cases are not only commonplace, they’re critical. Therefore, in the digital domain, does the speed at which developers embrace their edge-cases offer an indicator of the extent to which an innovation has potential?

  • What broadcast does that streaming hasn’t replaced

    The BBC’s Mobile Television Unit (1938)

    Broadcast media’s transition into the digital era has devalued it. Broadcast is truly excellent when two criteria are met: timeliness and inclusion. It’s the best way to watch sport, for example, as well as events such as the Super Bowl, Eurovision and coverage of the Queen’s jubilee. It was the latter that reminded me of broadcast media’s tremendous strength.

    On-demand services try to stage ‘events’ of their own, with a combination of season launches, one-offs and rampant promotion. But overall, people watching a show within the same couple of weeks isn’t as inclusive as everyone sitting down together. There are water-cooler moments, and then there’s the more powerful feeling that your household is somehow united with millions of others, right this moment.

    Digital-era behaviours around media consumption have changed too, obviously. An enormous breadth of choice sets a hard limit on the possible participation in any particular one. But what has been cut back further is that sense of inclusion that broadcast commanded in its heyday. It’s analogous to a festival: a massively shared experience.

    In the Eighties, BBC One’s Saturday schedule was built around this model, even if the terminology was different. Live kids’ TV in the morning, sports coverage in the afternoon, live shiny-floor entertainment in the evening, movie at night. On-demand services can reach the same levels of inclusion, but they have to spend tens or perhaps hundreds of millions of dollars to do so.

    It might be a frequency bias, but I feel like I’m noticing repeated references to a golden age of television—but with some saying it’s ending and others saying it’s just beginning. I’m minded to think of Disney’s purchase of, and then investment in, the Star Wars franchise, including making the transition out of the cinema and into the living room. As I write, Obi-Wan Kenobi is mid-season: I don’t know the budget, but you can tell it’s considerable. It likely dwarfs the movie budgets. A massive budget allows for the best writing, stagecraft, acting, directing, editing and marketing.

    These shows are objectively excellent. Not just this franchise: the big productions from all on-demand services are, by any measure, astonishing. Had it been possible to make them 40 years ago, the viewing figures would have been stratospheric and viewers’ enthusiasm considerable and long-lasting. Today, though, budgets are high, audiences are comparatively modest, and enthusiasm is only good to middling. All this in spite of these big shows being among the greatest ever made.

    What this demonstrates is the various facets of the behavioural change. Viewers’ dissipation across media channels and changes to consumption habits are well-documented. But in addition, perhaps as a result, the sense of timeliness and inclusion has eroded. The digital-era behaviour is based upon feeling like you’re consuming something, rather than being involved in something.

  • Better ways to find, with faceted search

    Taken from Ornament with Two Women Blindfolded by Sebald Beham (1527)

    I like faceted search. I think eBay is my favourite search engine of all. But search facets have dropped out of fashion in interface design. If we got back into search facets, then interfaces of all kinds would be better.

    I often remind myself that my usage patterns are not representative. I might even have that tattooed on the insides of my eyelids, just to be sure. I also know eBay isn’t where the cool kids hang out anymore but, as the bathwater drains away, we should remain mindful of the baby. In this case the bathwater is market share, and the baby is online search that is quick but also precise. Or perhaps not even search, but find.

    It’s funny that the term ‘search-engine’ stuck, given all the ways this functionality could have been described. And what you call things over-influences what they do—ask Carphone Warehouse. Okay sure, searching might describe what users do (or did back then), but it doesn’t describe what users want. It’s foolhardy to make sweeping generalisations about users but I think I’m on reasonably safe ground when I assert that users don’t want to be looking; they want to have found. The result of the search is where the dopamine is, not the search itself. There’s no thrill-of-the-chase. Anyway.

    Faceted search is a good, solid, finding-stuff paradigm. Enter something that very broadly describes the thing you’re looking for, and then use facets to turn a huge number of hits into a small selection of precise, usable results. You could type in ‘car’ or pick the equivalent category, then choose age, number of seats, location, price range and so on, and fairly quickly you’ll have found a shortlist of cars fitting your exacting preferences. Faceted search is a really good way to buy shoes, since a user is interested in shoes only of a certain style and size, give or take.
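
    The mechanics are simple enough to sketch. The toy catalogue and facet names below are my own invention, not eBay’s implementation: a broad category match produces a large hit list, and each facet the user picks is simply another filter applied to it, turning thousands of hits into a handful.

        # A toy faceted search over a tiny catalogue.
        catalogue = [
            {"category": "car", "seats": 5, "price": 4_500, "location": "Leeds", "age": 9},
            {"category": "car", "seats": 5, "price": 7_900, "location": "York", "age": 5},
            {"category": "car", "seats": 7, "price": 6_200, "location": "Leeds", "age": 7},
            {"category": "shoe", "size": 9, "price": 60, "location": "online", "age": 0},
        ]


        def faceted_search(items, category, **facets):
            """Start broad (a category), then narrow by whichever facets the user picks."""
            hits = [item for item in items if item["category"] == category]
            for facet, wanted in facets.items():
                if callable(wanted):             # a range test, e.g. a price ceiling
                    hits = [item for item in hits if wanted(item.get(facet))]
                else:                            # an exact value, e.g. number of seats
                    hits = [item for item in hits if item.get(facet) == wanted]
            return hits


        # 'car', then facet down: five seats, in Leeds, under £5,000.
        shortlist = faceted_search(catalogue, "car",
                                   seats=5, location="Leeds", price=lambda p: p < 5_000)
        print(shortlist)                         # just the £4,500 five-seater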

    Free-text searching, even using fairly hardcore search operators, doesn’t compare. I’m a hardened user of search operators, wrapping critical terms in quote-marks and so on. But this requires me to remember all this syntax, which I can’t, and popular search engines seem to adhere to it less strictly with every passing year. Google Shopping might be good for browsing, but is nearly unusable for anything precise, seeing as the only facets are condition, price, colour and seller. Amazon’s search, while more finely faceted, seems to be deliberately imprecise to encourage users to browse. I’d guess they’re measuring dwell-times over particular pictures, to gauge what to recommend later. But again, browsing isn’t finding.

    Google Maps is infuriating for its lack of useful facets. It can plot intricately precise routes from A to B, but can only facet these routes to exclude tolls, motorways or ferries. I can’t avoid tight turns or blind corners, which would be useful in something bigger than a car, but maybe the dataset doesn’t exist. That said, what if I wanted to include something in my route? The cheapest fuel-stop, maybe, a nice bite to eat, or a scenic spot for a leg-stretch. The satnav I bought nearly two decades ago could do this. If I were minded to, I could research it all myself (by searching separately) and set intermediate destinations along my route, but the voice in my head says: you’re the search-engine, you figure it out.

    There used to be lots of ways to find online stuff: not just search but directories, curated lists, webrings and whatnot. The web outgrew many of these approaches but, even so, there also used to be various search paradigms—like some specifically for queries in the form of questions. Now there seems to be only one style of search: jabbing in keywords and hoping for the best. It has contorted online content into a keyword-dominant vernacular: keyword-jabbing, and optimising for it, has made the contemporary web even more categorised than the directories of old. Funny how that came back around. It’s influencing how we behave.

    A poignant, if a little NSFW, example:

    Data on what we search for, pay for and click on is being used to predict our desires and funnel us bespoke(ish) porn.

    At first blush, it might seem like this kind of micro-targeting would just turbo-boost the internet’s existing trajectory, making it even easier for people to find and embrace a diversity of bodies and fetishes. But there’s a fundamental shift here from a world in which we explore a passive sea of content to a world in which porn actively explores and prescribes itself to us. Because this shift stems from deep financial upheaval in the adult industry, the content pushed upon us will likely increasingly reflect what is most profitable, not what is most widely desirable. It could well become narrowing, or at least channelling, rather than broadening.

    This approach is also how ‘natural’-language interfaces—the ones you bark at from across a room—have evolved. Alexa is precise only if you are able to ask for something that the machine can recognise as being near-unique, such as a particular song. It can’t have an ongoing conversation with you about taking a large set of search results and turning it into a handful of sensible choices.

    More widespread faceted search would be an opportunity to take ubiquitous keyword-based searching and explode it into new, useful, powerful interfaces: both usable and precise. eBay has stuck to this model and it’s really good. LinkedIn uses it a lot for finding contacts, jobs and so forth, and it works well: finding something as specific as which of your contacts know someone at Microsoft would be arduous without it. Holiday sites—hotels, flights and the like—also give users plenty of facets to turn all the world’s options into a manageable shortlist. Users are used to doing it; it’s just not widely available.

    That’s how I’d want to use a natural-language search but Siri, or whichever, can’t handle it. I can’t even remember what I have to shout at my phone to use its natural-language doodah: that’s how little use I have for such a blunt implement. It’d be more interesting for these interfaces—all interfaces—to take the 1990s multi-dropdown facet model, eBay-style, and think about how to bring this level of specificity to contemporary users’ lives. Simply put, facets reflect how people think: must-haves versus negotiables. Certainties versus unknowns. A find-engine would do the same.

  • In pursuit of intuitive interfaces

    Mr Joseph Mathieu examining a plan: taken from a glass negative in the collection of The History Trust of South Australia (circa 1925)

    Whenever I hear a user interface is ‘intuitive’—or needs to be—a synapse fires. An intuitive digital interface would follow users’ behaviour, regardless of their prior experiences with other interfaces. We’re not there yet.

    When pointy, clicky, window-based desktops emerged, users were trained to find functions at the top, contexts at the bottom, and the ‘work’ going on in the middle. Machines running either Windows or macOS still ship like this. If an Excel user wants to find functionality, they head upwards to a ribbon icon or dropdown menu. If they want to switch to Word, they head downwards to the taskbar or dock.

    The internet dented this. On the web, thanks to the hyperlink, functionality is all muddled in with content. Context-switching is more like a fork in the road. Anyone who’s fallen down a wikihole will know.

    In my last schoolyear, I helped teach the sprogs basic computer skills: like community service, but for crimes not yet committed. In a couple of hours, I could get 35 kids to be reasonably confident with spreadsheets: diving into cells to construct formulas, and switching around between applications to pull in chunks of data. The keen ones—the ones bitten by the bug—then went on to discover the command-line and, I expect, are now lounging on a domain-name empire or something.

    Getting to grips with the pre-Google early web, though, took longer. Just like a spreadsheet, a wide bar across the top would accept input, but the kids would find it harder to make their browser ‘do stuff’. The browser didn’t integrate well with other applications: you could copy out but not paste in. Clicking a link within an email could take you to a specific webpage, but clicking a link on a webpage couldn’t take you to a specific email, only a blank one.

    While that early web is no longer recognisable, I still catch users staring blankly at interfaces, just as those kids did in the mid-Nineties. Better browsers, better online services and new internet devices—phones, tablets, watches and so on—tackled many of these paradigm mismatches head-on. Many millions of hours of thought and experimentation have been given over to making functionality, context and content coexist on palm-sized touchscreens.

    Creeping homogeneity, or more accurately convergent evolution, in interface design has tried to put things where users might expect them based on their previous experiences. But challenges with contemporary interfaces are more complicated and nuanced than simply where navigation should be placed.

    An example. On my desktop, all the system settings are in one place, and then all the settings specific to an application are in another. Except these days there are system-wide restrictions on things like accessing the screen or the disk, so every application I install has to beg me to change system preferences before it can function, thereby dividing application settings into two contexts.

    It’s worse on my mobile. All preferences for the system, and the apps that shipped with the phone, are in one huge settings menu. But the preferences for individual apps are who-knows-where: sometimes in the app context, sometimes the system context. Users drown in this kind of thing. Who can see my stuff (as a setting) and whose stuff I can see (as a function) are more closely related in users’ minds than is often to be found in the interfaces they use.

    This is complicated because there are so many use-cases. How an app notifies you is one thing, how all apps notify you is another. But users tend not to think like that; they often lean towards task-focussed thinking. So current interfaces trend towards a convergence, leaving interfaces feeling vaguely recognisable but indistinct, which in turn causes further confusion and poorer recall amongst users.

    However, rather than all designers continuing to iterate towards the mean, there are two areas of exploration that could significantly improve users’ comprehension of interfaces, which in turn would make interfaces more useful. These areas could perhaps make interfaces truly intuitive, by following human behaviours rather than imposing conventions.

    First, users do things fastest in familiar ways. So, passive preferences could follow users around, between apps, contexts, tasks and even functionality. Instead of users having to learn how each device and app can be made to, say, grab a photo from one context and use it in another, the standard way they do this could transcend contexts, just as copy-and-paste does. All the other methods would remain too, but the one that’s top-of-mind for the user would always work as they’d expect, in any context.
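
    To picture that first idea, here is an entirely hypothetical sketch; no platform exposes anything like this today. Imagine a small, user-scoped registry of ‘how I prefer to do this task’ that any app consults before imposing its own convention, much as the system clipboard already transcends contexts.

        # A hypothetical, user-scoped registry of preferred ways to perform tasks.
        class PreferenceRegistry:
            def __init__(self):
                self._prefs = {}

            def record(self, task, method):
                """Remember the method the user actually used for a task."""
                self._prefs[task] = method

            def preferred(self, task, default):
                """Any app, in any context, offers the user's habitual method first."""
                return self._prefs.get(task, default)


        registry = PreferenceRegistry()

        # The user grabs a photo with a long-press picker in one app...
        registry.record("insert-photo", "long-press picker")

        # ...so a completely different app leads with the same gesture,
        # instead of teaching its own convention from scratch.
        print(registry.preferred("insert-photo", default="toolbar button"))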

    Second, this is a good potential AI problem. Designers lean upon what has gone before—what has demonstrably worked, what’s aesthetically appealing and so forth—whereas an AI could start on interface problems unfettered by baggage and metaphors. It would be interesting to think of AI in the context of intuitive design, not as a producer of strange, mangled cat pictures, but as a collegial third eye: unbound by design dogma and able to show us new possibilities from outside our own influences.

    • The AI and the Tree (booktwo.org): James Bridle considers a shift in perception of AI from servant to colleague.
  • Taken for a ride: an Uber-full of nonsense

    Taken from A Carriage in London by Constantin Guys (1848–56)

    I ordered an Uber to the railway station. The driver reached me within two minutes. This digital behaviour has become utterly commonplace, and is truly habit-forming. Once you’ve hailed a ride through a digital interface, it’s hard to imagine ever going back to calling a dispatcher and waiting, and waiting.

    I acknowledged that I was wearing a mask, as I’d only been testing negative for a few days. That’s alright, he said, although he thought the tests were showing everything as Covid now. Oh joy, I thought.

    My Uber rating is surprisingly low, as a result of a single booking more than five years ago: returning from the vet with my cat, howling for freedom in her carrier. I can scarcely afford a disagreement with a taxi driver.

    He continued. He thought the Covid thing was overblown; that the media and government had made a great big deal out of nothing. That politicians and scientists were out to control you. I stuck to neutral noises: not in agreement but not contesting either. The cowardice of a man with a low score.

    He’d had Covid, he said, and it was basically just ‘flu. Oh so he’d had his jabs, then, I replied hopefully. No, he didn’t believe in them. My next noise must have sounded curious. He would’ve got them but he knew so many people that got sick from the vaccines, with blood clots and all. Like who, I regretted asking. Of course, he couldn’t name anyone. I lurched for another topic of conversation.

    Just as it’s hard to return to calling minicab offices, it’s hard to return to professionally-researched information through mainstream media. Social media feels like socialising: a conversation in the pub, surrounded by friends; low-commitment and self-regulating. You would only see the half-truths and manipulation on your timelines if you chose to look, and why would a comfortable person choose discomfort?

    My digital behaviour brought me a taxi ride. His brought him a customer, and a warm bath of misinformation. And because I left this unchallenged, my Uber score improved by 0.01.

  • What went wrong with social media?

    Taken from Une Discussion littéraire a la deuxième Galerie by Honoré Daumier (1864)

    This present generation of social media platforms—the big ones; the ones whose logos adorn the side of tradespeople’s vans—all started out with a noble endeavour or two. They were built so that people could stay in touch: straightforward, lightweight and low-commitment.

    That’s how it was for a while. Friends discussed and shared. Celebrities of all ranks became accessible, and readily engaged with fans. Strangers came together to share laughter or collective displeasure. Broadcast media and brave brands used it as a means of gathering rapid participation from their newfound communities, for better or worse.

    While this all still goes on, this image of social media is no longer first to spring to mind. Somehow, the well was poisoned. Within their first decade, the experience of social media u-turned and predominantly made people unhappy.

    Before mainstream social media, the functionality had already kind-of existed, at least in a primitive and more disjointed form. But it was restricted to those minded and comfortable enough to hunt it out and assemble it for themselves. By contrast, the new platforms succeeded on two fronts: simplifying online participation, and creating a groundswell of interest amongst regular people. It was great, at first. Then it wasn’t. Now, it’s net-negative.

    For a while I thought this was caused solely by commercial pressure. Fuelled by venture-capital, the social media firms needed to drench communities in advertising to be viable, and little of it was of high-enough quality or placement to be altogether welcome. This in turn disrupted the community spirit, which in an online setting can already be fragile.

    Then I thought it was a more general shift in Western politics, away from liberalism and towards populist extremes. In the main, the internet had held a liberal, almost egalitarian ideology, in the sense that folks could do as they pleased, within a consensus for some high-level constraints. But social media seemed to be lurching away from this. Political stances became more visible. Those of opposing views would seek each other out. Discussions around observations became arguments around beliefs.

    Later I pondered whether it was the explosion in the online and social media population. But a logical prediction might have been that more users wouldn’t worsen the platform; it might have even improved it. Were the late majority significantly illiberal, compared with early adopters? Hard to prove. Perhaps the increase in anonymous users? Again, tricky: there had been anonymous or pseudonymous net citizens before, and all was just fine. Besides, these population increases don’t tally, in terms of timing, with the descent of the platforms as a whole.

    But now something has happened that has me connecting dots. Twitter has announced yet another policy: a toughening-up on spam tweets and copypasta, those identical posts repeated across multiple accounts that seem particularly popular amongst those with disruptive or political intent. Twitter has all sorts of poorly-enforced rules and policies, so you have to wonder what difference one more would make. But yet another attempt to tidy up the platform set me thinking.

    None of the major social media platforms have ever been lawless. They all set out with terms of use, similar to those a hosting company might impose upon customers to keep illegal, immoral or fattening content off their servers. There were even clauses to promote the good health of the community as a whole. And these terms received regular review, so the platforms can’t be described as ever having been egalitarian to a fault: they’ve always been more self-regulated than what went before.

    However, abuse of social media weaves between the platforms’ rules, and always has. It’s the platforms’ successes—their straightforward, centralised functionality—that have left them wide open. They’re not networks; they’re destinations. So their vulnerability is twofold: they’re abused because they’re so abusable, and because they smothered any alternative.

    Before social media, if you had wanted to drive a particular message towards loads of online eyeballs—the number necessary, say, to distort an election—it would have taken a mammoth effort. Personal publishing was too distributed. Everyone owned and maintained their own patch. It would have taken too much persuading.

    The pre-social web was imperfect, but what social media did was demolish it: it took the eyeballs off what had gone before and concentrated most online attention in a small number of places that were vulnerable to subversion.

    The concentration of attention is a characteristic of old media but, in its case, any slant or bias was generally openly displayed and understood. Publications and broadcasters happily nailed their colours to whatever mast, and the public were broadly free to choose where to align. The difference is that these publishers maintained standards, some self-imposed and some regulated. They were not in the habit of handing their front-pages over to whatever fruitcake fancied reaching the readership for whatever end.

    And as social media drained visitors from other personal publishing, it made decreasing sense to persist. Many didn’t. So, the far-less abusable but still egalitarian world of old blogs languished. The execution of Google Reader, in the era when Google still fancied having a social network of its own, caused further damage.

    Twitter trying to toughen up on subtle abuse of the platform has me wondering whether the colossal task of moderating social media is at odds with the business of operating a social media platform, and whether that is why they barely bother. Not directly because of the risk of a dent in advertising revenue, but because, for these centralised destinations, small in number but massive in scale, abuse is intrinsic to viability.

  • Why we need to stop saying ‘digital’, and why we don’t

    The Usborne Book of the Future (1979) prophesied the home of 1989 would be media-rich and technology-enabled, with its inhabitants enjoying both new things and new ways to do things.

    A few years ago, I met the CTO of a manufacturing firm. His company had large production lines in Manchester where they made real things out of raw materials. They were appointing the company where I worked and, as I had the word ‘digital’ in my title, the CTO visited to suss me out. We greeted each other and exchanged business cards.

    “Which one?”, he asked.

    “Which one what?”, I smiled. I couldn’t think of anyone there with my name.

    “Which digital?”, he replied.

    This has floated in the shallows of my memory ever since. A CTO in manufacturing will meet a fair few ‘digital’ folks. There’ll be a digital someone-or-other thinking about the production lines: monitoring; twinning; maintaining and so forth. There’ll be someone else in supply-chain: connecting; streamlining; reporting. There would be an equivalent in HR, finance, logistics, design and so on. I just happened to be the one thinking about the now-digitalised behaviours of customers and product users, rather than the operation of the company itself. How they choose, how they buy, how they feel.

    Given that every sector is digitising, it can start to feel like the d-word is now irrelevant. In an era where digital pervades all sorts of settings, just as the electrical supply has, there’s an argument for no longer maintaining a false divide. Indeed, it’s been argued for some time that we’d all be best off abandoning the use of the word ‘digital’ altogether.

    These arguments are solid and well-made. Within most industry verticals, use of the term ‘digital’ to describe capabilities or roles is indeed largely outdated. But the view that ‘digital’ is no longer needed altogether is the view from inside one of those verticals.

    In short, I agree, ’tis a silly word. But there are catches.

    First, there are still so many edge-cases where it’s useful to delineate between non-digital and digital behaviours—crucially, not old-world versus new, but how the two coexist. Because they do.

    There’s another catch when talking about the digital era in isolation from those that went before, as the many advantages of digitalisation aren’t without drawbacks. Transforming to a digital-first approach introduces new challenges that didn’t previously exist. It can also cause existing challenges to be inherited, perhaps compounded. If implementing ‘digital’ offered a direct replacement for what went before, it could pass without comment. But it doesn’t.

    So, tearing down divisions between physical and digital universes makes good sense in commercial settings. But abandoning ‘digital’ as a distinction for how people sometimes behave, and the era in which they find themselves, would overlook many edge-cases worthy of distinction and examination.

    In the coming days and weeks, I want to share examples of where the delineation between digital-era and previously-observable behaviours is a helpful means of understanding what goes on, and how to improve things for people. Internet-era ways of behaving.

  • Post-office: how could open-plan and virtual meetings coexist?

    The computing division: ‘Bonus Bureau’ clerks in Washington D.C. calculating benefits for war veterans.

    The widespread adoption of remote working will be seen as one of the most significant digitally-enabled behavioural milestones of this decade. It has limitations, but it also invites the possibility of righting some of the wrongs that have crept into the design of working environments. However, more work is needed to avoid the worst of both.

    In this era, we have mainly open-plan offices and mainly closed-room online meetings. Those sharing physical spaces have good lines of sight but generally poor spaces to work and meet privately and without disturbance. For hybrid and remote workers, it’s the opposite. We might have expected these two working environments to blur. But they didn’t. And they should.

    Technologies permitting near-seamless remote and hybrid working have existed for years. The various merits and drawbacks were discussed at considerable length; occasionally there’d be an impassioned rallying-cry to make work more compatible with life. Still, uptake remained extremely low, and all it took was a global pandemic to turn the corner.

    Healthy workplaces are under attack, principally from the cost of commercial real-estate. Designing office space that is effective at offering a distraction-free work environment, privacy and also a sense of both inclusion and collaborative investment takes far more floor-space than taking down walls and packing desks together.

    Further space can be ‘saved’ by reducing the number of desks and hoping workers will continue to be invested in the place despite not having anywhere to call their own. But out with the bathwater goes the baby.

    For service-sector businesses, and also others with an interest in service efficiency such as primary healthcare, once the barriers of upskilling and driving adoption had been addressed by necessity, it was easier to see the opportunities of working remotely. But in behavioural terms, much is needed to virtualise the casual collaboration that comes with sharing a physical workplace. Everything is a meeting, and a meeting isn’t where work gets done. Meeting discipline has always been poor, and video-calls offer little improvement.

    That said, applying resistance to the convention of meetings is beneficial. Time and again, remote workers have been shown to be more productive than those in the office. And in the era of reducing emissions, avoiding workers’ travel is a big step in the right direction.

    But a substantial and outstanding challenge with remote working is that it is not inclusive. Many people find working from home difficult, particularly those who didn’t, or more likely couldn’t, consider remote working when taking on their home. In particular, it doesn’t suit those in junior roles, and those who live with many other people, especially if those people are trying to work remotely too.

    So, with newly-found flexible and productive working, plus newly-acquired digital behaviours across workforces, what can be done? More technical and behavioural evolution is needed to improve the work environment—physical and virtual—specifically to improve informal workspaces.

    Post-pandemic, hybrid working demands hybrid thinking. It’s worth treating improvements to both physical and virtual work environments as a single and ongoing endeavour, as the learnings from one may equally apply to the other.
