• The Togetherati: why good people support damaging causes on social media

    Adapted from The story history of France from the reign of Clovis, 481 A.D., to the signing of the armistice, November, 1918 by John Bonner (1919)

    Pick any one of the centralised social media platforms, and it feels ever more dogged by a persistent question: what is it for?

    The pitch for social networks has always been kept bland and ambiguous. Now everyone, so it goes, can have a voice: to connect, to share and to join in. But to what, and to what end, has been left to the users.

    Except, it never was left to whimsy. The platforms have nearly always nudged and jibed and prodded users in the direction of certain topics: either things that make them more targetable by advertisers, or things seemingly happening at this very moment to make them engage. Scroll and post, post and scroll. If it were ever in doubt, social media has proven this to be a highly unhealthy basis for any platform.

    It is unrealistic to expect the demand for advertising on social media to apply a steadying influence. Some brands might make a stand if they deem the risks to outstrip the rewards. But the vast majority will do anything for eyeballs. By unfortunate coincidence, we’re also in an era where advertisers value reach over impact simply because it’s easier to quantify. This will certainly revert, but not soon.

    Social media has not been all negative. Some users managed to build something out of their followership: tangible things like money and/or intangible things like influence. But the proportion of these users is vanishingly small, considering the system as a whole. Further, for the most part, the networks afford power to the privileged: those who already have money and influence tend to be those who receive even more via social media.

    Even then, there have been exceptions. Some underrepresented voices have been brought to, or found on, social platforms. This is the principal way in which social media has found positive purpose. However, the means by which this happened became indistinguishable from something much uglier. Socially beneficial movements and ideological whip-ups are hard to separate in the theatre of social media. Joining in, or the appearance thereof, is gameable.

    Demagoguery, it seems, generates both revenue for social platforms and political influence for the undeserving. Whipping it up, either through targeted advertising or creating more subtle groundswells of particular topics, issues or political stances, isn’t new. But the speed, ferocity and disingenuousness on social media sets it apart from that which went before.

    Groupthink predates social media but has found new dimensions. The semi-anonymity seems to trigger mass disinhibition. Combining these two phenomena leads towards deindividuation: users tend to avoid speaking out to maintain a social standing, while also being far more hate-fuelled than their real selves could manage, or let on. Of course, offline life isn’t peaceful either. But social media inherited all life’s unpleasantries, then amplified and multiplied them across geographic boundaries.

    The deep divisions in attitudes and behaviours on the big platforms are prompting action. Those of more liberal leanings, particularly early-adopters and those seeking a lower-hassle approach, are defecting. They are moving towards networks over which they have more ownership. Federated social platforms feel more like the era before the internet was dominated by a handful of massive platforms, which was all the more cheerful for it.

    The migration will not be absolute, nor should it be. But a move back to federation will at least give a clearer context for what social media is for, and what it means to join in. There will continue to be a togetherati: blindly participating in whatever crackpot cause their chosen dimwit celebrity is using their notoriety to promote, just to feel part of something. But for the rest, it might mean not having to put up with all that unbridled aggression that has worn down the enjoyment of social platforms these last few years.

    In parallel, various indications point to the large, centralised networks becoming less liberal, including organised efforts to deplatform prominent moderate voices. It’s hard to tell at this point whether social platforms’ shuffling along the political spectrum is driven specifically by money, or more generally by power. If the most ‘engaged’ users, i.e. the ones who either would more likely pay or pay attention, are towards the extremes, it may make commercial sense to cater more to them. Or it might be a contemporary, tech-enabled effort by the super-rich to influence democracy in their own interests. Or it could be both, or one leading to the other.

    Even without some grand and maleficent plan, the reasons for ‘users’ to continue to ‘engage’ with the ‘content’ on these centralised ‘platforms’, other than for uninformed political mudslinging, are diminishing. One of the key reasons to stay is the perceived (and misleading) value of a large followership. But any migration may not be about the number of people using any given system, but rather about redressing the lack of diversity in thinking.

    What the togetherati on social platforms has shown is that centralised courts cause centralised thoughts: it’s the worst incarnation of our collective self.

  • The man who sold the future

    Taken from Prediction of Nostradamus by J. J. Grandville (1836)

    Good predictions are hard to make. Predictions made on the hoof may sound inspirational, but are riddled with biases. But good lessons can be drawn from bad predictions.

    David Bowie fans will make themselves known to you faster even than vegans and wild-swimmers. Even six years after his passing, Bowiefication continues so strongly that you might be forgiven for thinking that nobody else had an impact on popular culture between 1962 and 2016.

    Bowie’s career is noteworthy for being long and prolific: releasing 128 singles from 26 albums, even if only five were hits. This is double the output of The Beatles in their eight years together, although the Fab Four’s hit-rate was four times higher as one band, higher still when including their time as individual artists.

    Thanks to an interview on the BBC’s Newsnight in 1999, Bowie is often credited with foreseeing the potential of the internet way ahead of the pack. I’m fascinated by this interview. Bowie and the notoriously abrasive interviewer Jeremy Paxman, both having made comfortable careers in conventional media, are grappling with the concept of the media being democratised.

    Paxman adopts a pessimistic stance, as if he sees the internet as a media-delivery tool rather than some sort of cultural movement. But we can’t know whether this is what he thought, since he is a talented interviewer. Either way, Bowie takes the bait and begins to riff, almost off-the-cuff, about the internet’s forthcoming cultural impact. The core idea he hits upon is the inflection from a small number of massive cultural touchpoints—television interviews, rock stars, etc.—to a massive number of smaller ones. Touchpoints as small as a group of near-anonymous people or, as Bowie calls them, an audience.

    Bowie counters Paxman’s pugilistic pessimism with a sense of cultural potential, but his thesis is incomplete. While his sense of fashion had always been progressive, Bowie was technologically conservative. So, even if he wasn’t articulating ideas as he was having them, his thoughts about the cultural impact of the internet were still quite fresh. In an interview two years earlier, he described the internet as just another tool about which he wasn’t wildly excited.

    Though, as the star uttered these words, his management were already in talks about founding an internet service-provider with Bowie as both investor and public face. BowieNet launched between these two interviews, which might explain both Paxman’s questioning and Bowie’s newfound optimism.

    You can’t sustain such a long artistic career by staying in one place, and Bowie is renowned for reinvention. But for an artist capable of inventing and dropping whole identities as the pop market demanded, his technological skepticism had remained constant.

    Bowie’s imperial phase ran from 1975 to 1985; all his hits were released in this period. It was also in this period that he developed a disliking for Gary Numan: another androgynous, highly-styled British male artist and longstanding Bowie fan. Numan’s futuristic 1979 single Are ‘Friends’ Electric? had outperformed Bowie’s Boys Keep Swinging, so the latter and his fans began casually pouring scorn.

    The New Musical Express, a fusty publication that took exception to Numan’s searing electronic disruption of the conventional rock’n’roll format, enjoyed whipping up the vitriol of Bowie fans against emerging artists. In a 1980 interview, egged on by the NME’s Angus MacKinnon, Bowie criticised acts he perceived to have copied him while simultaneously misunderstanding him. In particular, he attacked Numan’s sci-fi themes:

    It’s that false idea of hi-tech society and all that which is… doesn’t exist. I don’t think we’re anywhere near that sort of society. It’s an enormous myth that’s been perpetuated unfortunately, I guess, by readings of what I’ve done in that rock area at least, and in the consumer area television has an awful lot to answer for with its fabrication of the computer-world myth.

    Skipping forward again to the famous 1999 interview with Paxman: in articulating optimism for his newest commercial venture, Bowie inadvertently offers a number of teachings about futurism, the internet and people in the media.

    1. It’s really hard to make long predictions. It’s hard to research them, formulate them and particularly to communicate them.
    2. We often consider ideas that are new to us as being new ideas. See also the illusion of explanatory depth. This colours our vision of the future: the things that appear to us to be happening contemporarily, and therefore signalling new trends, may have persisted for some time, even in primitive form, while we chose to dismiss them.
    3. Pre-internet, media people became consumed by the mechanics of the media. Their assessment of culture was seen through a lens of what media people would consume. Post-internet, the same is true but no longer confined to those in the media industry. We often blur commercial impact with cultural impact. For example, those who use ‘woke’ in a derogatory way are consistently, but not necessarily deliberately, describing cultural phenomena that cannot easily be commercialised.
    4. Many technological predictions, in this era, take the incumbent internet and add something to it: sometimes another technology, sometimes an observable consumptive human behaviour, sometimes an ideology. But future behaviours are rarely additive.
    5. Of course, people maintain a bias towards ideas into which they have tangibly invested. A rockstar who has founded an ISP bearing his name is going to be far more open to the possibilities of a technologically-empowered future than a rockstar who’s fallen out of step with ever-shifting pop trends.
    6. Cultural shift on the internet, free of commercialised influence, isn’t one continuously-progressing thing. Grassroots digital culture and behaviours lurch towards an idea, and then that idea is commercialised. The cycle then repeats. Bowie’s observation about the cultural movement being more significant than the tech giants was right, for about four years. Then, after another three years, it started looking right again, for about two years, and so on.
    7. All recent talk of the liberating possibilities of media-dominating advancements such as the metaverse and crypto can be considered another echo, in which those with tangible investments speak of cultural possibilities. I have no doubt Bowie, if he were still here, would say he found the idea of decentralisation fascinating—full of wild possibilities for artists and their audiences—while also investing heavily in crypto and selling off his creations as NFTs. After a short time, it would show itself to be a new implementation of an old idea: a fan club.
    8. In all things, the question influences the answer. That’s what Bowie teaches us, time and again. Put someone on a pedestal and give them something to fight against, and they will. Today you don’t need to be a rockstar: the pedestals are plentiful and the pugilism comes from all directions.
  • The unrelenting tidal bore of relentless cryptobullshit

    Taken from Découverte des Malfaiteurs by Jacques Callot (1633)

    I’m cautious of being sure of anything. Going beyond the point of doubt is akin to fanaticism; but, time and again, the digital era has shown us things are always nuanced.

    I became convinced that Web3, particularly crypto, was completely and sensationally bad. In its contemporary form, I could find not one single merit. So I put some pocket-money down and had a close look, to see if it does indeed behave like a credible market. It does not.

    The trouble is that you can’t guarantee stability against an unstable backing; nothing will protect you against the whole market going down. Every algorithmic stablecoin thus far has failed to maintain its peg. Algorithmic stablecoins work until they don’t.
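
    To make the reflexivity concrete, here is a minimal sketch in Python, with entirely invented numbers and no real protocol in mind. The stablecoin redeems at $1 by minting a volatile sister token at its market price; redeemers sell that token straight into a liquidity pool, so every round of redemptions mints ever more tokens into a falling market:

        # A toy model of an algorithmic-stablecoin death spiral. Hypothetical
        # parameters throughout; no real protocol is being simulated. The peg
        # is 'defended' by minting a volatile sister token, which redeemers
        # immediately sell into a constant-product (x * y = k) liquidity pool.

        def depeg_spiral(redeem_per_round=100_000.0, rounds=8,
                         pool_usd=5_000_000.0, pool_vol=500_000.0):
            price = pool_usd / pool_vol                # volatile token starts at $10
            for r in range(1, rounds + 1):
                minted = redeem_per_round / price      # $1 of volatile minted per coin
                k = pool_usd * pool_vol                # pool invariant
                pool_vol += minted                     # minted tokens dumped into pool
                pool_usd = k / pool_vol
                price = pool_usd / pool_vol
                print(f"round {r}: volatile token at ${price:,.2f}")

        depeg_spiral()

    Each round the price falls, so the next round must mint even more tokens to honour the same redemptions: the stability mechanism is its own accelerant.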

    Stablecoins are a modern form of the wildcat banks of the 1800s, which issued dubious paper dollars backed with questionable reserves. These led to the National Currency Act of 1863, establishing the Office of the Comptroller of the Currency and taking away the power of commercial banks to issue paper notes. At the very least, stablecoins need to be as regulated as banks are. But all of cryptocurrency is a less robust version of existing systems and has any advantage only as long as it gets away without being properly regulated.

    It’s fragile and massively complicated: a whole bunch of loosely defined, flimsy, inefficient, vulnerable, independently-controlled components that are all the weaker for having become interdependent.

    The entire crypto space has been a Jenga stack of interconnected time bombs for months now, getting ever more interdependent as the companies find new ways to prop each other up.

    Which company blew out first was a minor detail; that a blow-out would happen was obvious. The other blocks in the Jenga stack will have a hard time not following suit.

    All this stuff has a seemingly noble ideology, but those who subscribe to those ideas know full well that they can be used for personal gain at the cost of wider harm. It’s kept complex for a reason: to be exploitative. The less people understand it, the more likely they are to be taken in, and taken for a ride.

    You should not invest in Bitcoin.

    The reason why is that it’s not an investment; just as gold, tulip bulbs, Beanie Babies, and rare baseball cards are also not investments.

    These are all things that people have bought in the past, driving them to absurd prices, not because they did anything useful or produced money or had social value, but solely because people thought they could sell them on to someone else for more money in the future.

    When you make this kind of purchase – which you should never do – you are speculating. This is not a useful activity. You’re playing a psychological, win-lose battle against other humans with money as the sole objective. Even if you win money through dumb luck, you have lost time and energy, which means you have lost.

    Things with no real value are touted as assets. They are not. These tokens are simply a means for charging people for something that ought not to have a purchase price, since they are worthless. Just as Facebook et al. commercialised human interaction, web3 cynically commercialises hope.

    It’s common for NFTs and governance tokens to double as speculative assets that can be bought and sold across crypto or NFT exchanges. But it’s questionable whether they have any fundamental value. Many gaming tokens are at best volatile and at worst worthless.

    Yet proponents of crypto gaming try to sell it as the future. Take crypto venture capitalist and Reddit cofounder Alexis Ohanian, who says crypto gaming will allow players to “actually earn value” through accruing assets that have some value in traditional or “fiat” money.

    In essence, he says people would no longer need to “waste time” gaming for leisure.

    And then there’s the true cost. Signals of wealth are, almost universally, made of carbon. Crypto is no different.

    One of the great Bitcoin unknowns has long been the amounts being produced, or “mined,” in what’s believed to be the top locale for mining the signature cryptocurrency: China’s remote Xinjiang region. We got the answer when an immense coal mine in Xinjiang flooded and shut down over the weekend of April 17–18.
    The blackout halted no less than one-third of all of Bitcoin’s global computing power.

    It’s not just the electricity: the sheer consumption that crypto requires is staggering. It’s significantly less sustainable, not to mention less valuable, than the least sustainable wealth-indicators that already existed.

    Bitcoin mining requires very specific hardware, designed exclusively to process hashes, and nothing else useful. This hardware evolves very rapidly and is obsolete every 1.5 years.

    We estimate that, each year, around 12,000 tons of specific electronic devices are produced and destroyed. More than 80% of the weight is metal (iron, aluminum, copper, …).

    This means that around 10,000 tons of metal are extracted and transformed each year solely for the bitcoin mining industry. This is 4 times more than the amount of gold extracted each year (around 2,500 tons).

    Also, the value of bitcoin production is 1/6 of gold production ($17B vs $100B).

    Hence, overall, $1 of Bitcoin requires 24 times more mining of metal than $1 of gold.
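
    The compounding in that claim is easy to check, using only the figures quoted above. The quoted 24 comes from the rounded ratios of 4 and 6; the unrounded product is nearer 23:

        # Reproducing the quoted arithmetic, figures as given in the source.
        ewaste_tons = 12_000                  # mining hardware scrapped per year
        metal_tons = ewaste_tons * 0.80       # "more than 80% of the weight is metal"
        gold_tons = 2_500                     # gold extracted per year
        btc_value, gold_value = 17e9, 100e9   # annual production value, in US$

        metal_ratio = metal_tons / gold_tons  # ~3.8, rounded to 4 in the quote
        value_ratio = gold_value / btc_value  # ~5.9, rounded to 6 in the quote
        print(round(metal_ratio * value_ratio, 1))   # ~22.6x metal mined per $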

    The vast majority of those involved are inadequately protected. The ideological decentralisation of ‘control’ merely makes it near-impossible for everyone to benefit from the long-standing systems designed to protect their interests.

    US copyright law explicitly states that transfers of copyrights and transfers of copies are legally different. Ensuring that NFT owners have the copyrights they think they do is a more complicated problem than it appears.

    The technologies are often touted as revolutionary. They are not.

    The Ethereum virtual machine has the equivalent computational power of an Atari 2600 from the 1970s except it runs on casino chips that cost $500 a pop and every few minutes we have to reload it like a slot machine to buy a few more cycles. That anyone could consider this to be the computational backbone to the new global internet is beyond laughable. We’ve gone from the world of abundance in cloud computing where the cost of compute time per person was nearly at post-scarcity levels, to the reverse of trying to enforce artificial scarcity on the most abundant resource humanity has ever created. This is regression, not progress.

    Where we’ve got to is layer upon layer of flawed ideology, fanatical greed and shonky technology. It is one of the truly great embarrassments of our times.

    Bitcoiners believe the ETH community is the ‘woke’ alternative to their strange breed of grift and toxic masculinity. ETH does seem slightly less bonkers, under the direction of its demi-god Vitalik Buterin. One reason for the woke accusation is the ETH community’s switch to Proof of Stake over Proof of Work. Proof of Stake is an alternative means of validating transactions that doesn’t involve setting fire to the earth slowly.

    Ironically, web3 is in part a product of the status quo the fanatics claim to be disrupting. Had it been presented simply as a more natural, equal way of socialising, it would have been summarily dismissed.

    Before we really dive deep into this, it’s worth addressing the elephant in the room: the present interest in this field is because of a dumb speculative bubble that is flowing over from the dumb speculative bubble in cryptocurrencies in general. I really don’t mean to prejudge my conclusion, here, but the cryptocurrency sector has exploded in value for basically the same reason Gamestop did – which is to say, a few people have sincere beliefs that they’re good and valuable, and a lot more people have money kicking around and a desire to get rich.

    It’s an appalling mess. The ideology of freedom being used to make people less free. Far from bypassing the oligopoly of Big Tech, web3 is the antithesis of freedom brought about by pure, exploitative greed.

    In cryptocurrency markets, every coinholder has a financial incentive to be their own marketer in order to increase the value of their own assets and can resort to any means necessary.

    Because of this, much of the Web3 hype being drummed up on Twitter – specifically focused on beginners, those new to Web3 and crypto – is predatory and follows along the lines of a Ponzi scheme.

    As Aimee Mann might have put it, it’s not going to stop ’til we wise up.

  • Living with digital

    Taken from Beleg van Poitiers by Frans Hogenberg (1569)

    Some technological inventions can slip easily into our lives. Others, either immediately or eventually, become hard to live with. Given our contemporary and future constraints, how well will the current crop of digital-era technologies fare?

    The microwave oven has low life-integration requirements. You don’t need to remodel your kitchen to accommodate one; you can just place it on the side and it’s at the right height to use. Although what it does is mysterious, it lights up like a regular oven and shows you what’s going on inside. You can be reassured that you can just spring open the door if need be. This level of reassurance from an appliance should not be underestimated.

    By contrast, the motor vehicle totally changed the built environment, and not for the better. Cars and trucks don’t mix well with cyclists and pedestrians. Many cities have lines, fences and walls to keep them apart. While some places are made car-free for the betterment of others, far more places are car-only. Cars have changed not only how we travel but how and where we build homes. And cars need to be placed out of the way for the 96% of the time they’re not being used. Worst of all, if we wanted to stop using them, we likely couldn’t.

    There are two observable patterns. One concerns the extent to which a particular technology remains coexistable over time. The other is our own awareness of factors that make those technologies more or less coexistable. We are now generally more alert to the fact that things that emit high amounts of carbon—concrete, steel, hamburgers, golf-courses, holidays, signals of wealth—are not as easy to live with as we might have assumed.

    When considering new technologies, I can’t help but have coexistence in mind. For example, technological replacements for cash may have security and portability benefits, but come at a significant carbon cost as well as excluding a proportion of the population. A society going cashless is not a microwave oven, since microwaves didn’t need everyone in your neighbourhood to own them before yours worked.

    The digital era has developed a taste for technologies that thrive on the network effect; where a large user-base is beneficial to each user or, more pointedly, a small user-base is detrimental. Social media—the productisation of interpersonal relationships—made a fantastic amount of money for its key founders, and this must be a strong incentive for others also to develop products with network effects.

    But, many internet-era technologies can also be identified by how hard it is to coexist with them. Their manufacturing costs, in human and planetary terms, are too high. Their operation, in social, political, economic and environmental terms, is too expensive. Their disposal, in any terms you like, is too wasteful. A microwave oven is all these things too, but our standard of coexistence was lower at the point of its invention.

    Cobots—collaborative robots—represent the pinnacle of robotic achievement. Humanoid machines that can work alongside us without breaking our limbs have been depicted in art and fiction for centuries. Technologies do exist, but cobots are not commonplace even though, one might imagine, they would revolutionise the home and working environment. The reason is that coexistence is fantastically complex, and robotics people keep learning this the hard way.

    Sergey Smagin, vice-president of the Russian Chess Federation, told Baza the robot appeared to pounce after it took one of the boy’s pieces. Rather than waiting for the machine to complete its move, the boy opted for a quick riposte, he said.

    “There are certain safety rules and the child, apparently, violated them. When he made his move, he did not realise he first had to wait,” Smagin said. “This is an extremely rare case, the first I can recall,” he added.

    Lazarev had a different account, saying the child had “made a move, and after that we need to give time for the robot to answer, but the boy hurried and the robot grabbed him”. Either way, he said, the robot’s suppliers were “going to have to think again”.

    There’s a lag between our standards of coexistence increasing, and technologies raising their bar. Autonomous vehicles are even more hostile to pedestrians and cyclists, while also carrying over the high carbon-cost of that which went before. No matter how you fuel them, motor vehicles are high emitters because of how they’re made and how long they last: autonomy does nothing to ease this.

    By extension, any technology that compounds previous oversights of long-term coexistence—cryptobullshit being a prime example—will inevitably fail. If we can’t live with it, we can live without.

  • Paying attention

    Taken from A Reconstruction of the Temple of Antonious and Faustina (above) and a View of the Ruins (below) by Jan Goeree (before 1704)

    The ‘like’ button enjoys near-ubiquity in the contemporary web experience as a signal of approval or solidarity. The effect of likes in the aggregate has been net-negative: encouraging a state of persistent distraction, and skewing attention and power towards ever more polarising content.

    There is something ingenious about the ‘like’ button, in that users get it. It’s an opportunity to fulfil a deeply-held desire to show empathy; to signal a reaction. For something not thought through in great detail, it’s very clever.

    But, consider an alternative reality. Imagine how things might have been if, instead of the like button, someone had implemented another simple, easy-to-grasp function: the ’tip’ button. This exists now in various forms, but imagine an implementation circa 2007, instead of liking or starring, that went on to become ubiquitous. What might that have done for the creator economy? And would cryptobullshit ever have taken hold?

    Whereas the ‘poke’ button (an early function of Facebook that sent a simple notification to someone) was ambiguous, the ‘like’ button on individual items of content gave users much clearer direction. What it didn’t do is provide any value: just a slight dopamine hit. With no other tangible benefit, the act of getting ‘likes’ as some benchmark of popularity had become an end in itself. The web experience is the poorer for it, but it has persisted so long that not many of us remember the web before.

    While the ‘like’ button has been transformative, a ‘tip’ button would have set in train a cascade of interesting, independent movements. What’s important here is to get the conceptual scale right: I’m talking cents rather than dollars; pennies rather than pounds. Perhaps even fractions thereof. This approach would’ve encouraged users to be more discerning, while also offering the independent web a revenue opportunity not dependent on advertising.

    But this didn’t happen, so, you might wonder, what is there to be gained from pondering it? Because even if micropayments didn’t have their day back then, they may still.

    On the internet, there are basically three kinds of price. There are the individual items you buy plus $5 shipping, there are things you subscribe to at around $12 a month, and then there are things that are free at the point of use. Many exceptions exist, but let’s start here. If you take on too many of those $12/month subscriptions, things start to feel expensive. What’s more, as times get harder for people, subscriptions are a wise and easy thing to cut.

    That leaves things people don’t pay to use, like Instagram, and things they have shipped to them: neither are particularly well-suited to the long tail of content that the early web offered so well, and could again.

    Also consider paywalls: a product of necessity for traditional media’s stay of execution in the online era. But they don’t work, not really. What paywalls do is create a threshold for accessing individual content to those sufficiently invested in accessing all the content, probably for $12/month; a subscription likely challenged as living costs increase. Meanwhile most people, the overwhelming majority of those who hit the paywall, won’t be able to justify that anyway. Yet there’s no means by which they could chip a few bob into the jar just for whatever it was that caught their interest before they hit the wall. Even if the publisher were to adopt one of the contemporary tip-style mechanisms, it’d be too expensive and cumbersome for users.

    Another way is possible: the ‘like’ button of payment. Rather than being put off by a subscription that’s disproportionate to whatever interested them, users are free to scatter minute tokens of appreciation around the web as they please—the kind of appreciation you can take to the bank. Making it work means disrupting the maxim of taking three or four steps to set up a credit-card transaction. The web—mainstream and independent alike—is crying out for a means of bunging a penny into the jar as easily as clicking ‘like’.
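
    To put some shape on that, here is a hypothetical sketch; nothing in it is a real payment API, and every name is invented. The dodge that makes a one-penny tip viable is that card fees make a per-click charge absurd, so clicks accrue against a small prefunded balance and only settle to creators in periodic batches:

        from collections import defaultdict

        class TipJar:
            """Hypothetical one-click micro-tipping: the 'like' button of payment."""

            def __init__(self, prefunded_pence=500):       # reader tops up £5, once
                self.balance = prefunded_pence
                self.pending = defaultdict(int)            # creator -> pence owed

            def tip(self, creator_id, pence=1):
                """The one-click action: as cheap to invoke as a 'like'."""
                if self.balance < pence:
                    raise ValueError("top-up required")
                self.balance -= pence
                self.pending[creator_id] += pence

            def settle(self):
                """Run daily or weekly: one real payment per creator, not per click."""
                batch, self.pending = dict(self.pending), defaultdict(int)
                return batch

        jar = TipJar()
        jar.tip("some-blog.example")
        jar.tip("some-blog.example")
        jar.tip("indie-zine.example", pence=5)
        print(jar.settle())   # {'some-blog.example': 2, 'indie-zine.example': 5}

    Only the settlement step touches the real payment rails; the click itself is just arithmetic, which is what would keep it as frictionless as liking.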

  • The Web3 of you and me

    Taken from Le Marché de la Place de L’Annonciade à Florence by Jacques Callot (1617)

    Were the web to iterate once more, it shouldn’t be towards complex ‘investment’ scams and tokenism. A better way is possible: the creator economy, but for real this time.

    I’ve been thinking about industrial action, since trade unions are beginning to lean into the so-called cost-of-living crisis. In this era, there’s disparity between the increase in returns to the wealthy and the pay to the workers. So, in time-honoured fashion, the better-organised unions are flexing their mandate to protect workers’ interests.

    That led my train of thought to the recent Etsy strike: conducted not by staff, but by the makers peddling their handmade wares via the platform (and to a lesser extent, their buyers too). The web has always offered so much promise to independents of all kinds, but the reality has often, if not always, fallen short. The action by Etsy sellers is reminiscent of similar, smaller-scale protests by eBay vendors more than a decade ago.

    Creators borrowing and emulating the actions of trade unions is an interesting digitally-enabled behaviour. Although there’s more to be done to ensure trade unions are representative of all members regardless of their protected characteristics, the movement is generally positive as a means of pushing for reason and fairness on behalf of a workforce. Those working with established technologies, such as trains, are a long way ahead of those working with new technologies, such as internet-enabled ventures. But there are signals that the times they are a-changin’, again, with the likes of Uber recognising a union (in the UK) to represent the interests of drivers.

    For years Uber resisted calls to recognise unions, which had criticised the firm for not granting drivers basic rights such as sick pay or a minimum wage.

    Uber had argued its drivers were freelancers and not entitled to these benefits. But in March it changed its stance after the Supreme Court ruled that its drivers should be classified as workers – a category entitling them to better pay and conditions.

    Now it provides them with a National Living Wage guarantee, holiday pay and a pension.

    And by recognising GMB, the ride-hailing giant has gone a step further, giving a union the right to negotiate on behalf of drivers for the first time.

    Currently, the creator economy depends upon platforms—not only Etsy but Gumroad, Patreon, Amazon, eBay, Indiegogo, Facebook etc. Creators are beholden to the terms of these platforms and, of course, once they become established it’s hard to up-sticks and move away. It’s nearly impossible for creators to own their own customer relationships: the platforms’ network-effect is all-powerful, and everyone knows it.

    What’s noteworthy in the case of Etsy is that both buyers and vendors are customers of, not workers for, the platform. Here, we are seeing the same technologies enabling opposing digital behaviours: the means that make a platform viable also make it possible for customers to gang up on it.

    In his executive bubble in New York, one day into the strike, CEO Josh Silverman couldn’t help commenting on the record at a Wall Street Journal event, revealing his strategy to have Etsy compete with Amazon. On the same day, he denigrated sellers in an interview: “Each of our sellers is a blade of grass in a tornado. They’re someone you haven’t heard of.” It’s the kind of tone-deaf comment we’d expect to hear from an out-of-touch CEO sitting high in his tower behind security gates, hoping the angry populace will go away and taking whispered advice from “crisis communications” experts.

    A few weeks after the strike, the topic still wouldn’t go away. At its May 8th quarterly shareholder meeting, Etsy had to answer for the strike to Wall Street investors. In the meeting, Etsy’s Chief Financial Officer (CFO) Rachel Glaser responded that when the fee change went into effect last month, fewer than 1% of sellers went into “temporary vacation mode,” active listings dipped less than 1% during the week and returned to the prior level when the week was over. “Based on past experience and significant research leading up to the change, this was all within our expectations,” she said. 

    But as one commenter to the linked business article wrote, “1%. Meh. But is it really? It was ‘significant’ enough to address it’s ‘insignificance’.” The fact that the strike had such heavy media coverage, that investors are demanding answers, tells us that the strike struck a chord. Unions are on the rise. 

    So what might a further iteration be? Web platforms—enabling users to exploit functionality over the web—are the archetypal vision of Web 2.0. The bundle of technologies that fall under the bracket of Web3, while not the natural successors that the name implies, do offer an alternative: a means by which creators, not platforms, can either exploit the network effect, or bypass it completely.

    Right now, in real terms, the only overlap between creators and Web3 is the nonsense that is NFTs. If we’re honest about it: Web3 is the very pinnacle of Web 2.0: just another bunch of platforms, plus hot air. This will become increasingly apparent as they inevitably begin to fail. What they might leave behind is a pathway to disintermediation: the means for creators to make their own content, products and services, build communities and command their creative direction, while doing away with the platforms that unite them with buyers and transactions. When Web3 untwists its knickers about cryptobullshit, its true implementation could be post-industrial action.

  • The fall and rise of presenteeism

    Reginald Perrin, played by Leonard Rossiter

    The 1970s sitcom The Fall and Rise of Reginald Perrin is a study of pain. It is blighted, as many comedies of the era are, with numerous cultural traits that are thankfully no longer acceptable: sexual objectification, casual racism and a running mother-in-law gag. But in its dated way, it tells a story of a modern working man: living the life to which he was told to aspire, and hating it. The pain of presenteeism, the suffocation of routine and the lack of positive prospects ultimately lead him to fake his own death.

    In 2009, a poor attempt was made to reboot the show. The central character’s growing depression was finessed and the gags updated, but the clumsy production was not engaging. At the time, I wondered whether working cultures and environments had moved sufficiently forward in three decades for the show’s themes to become unfamiliar. I now realise the reboot was just badly timed. Had the show returned now, in the hybrid-working era, the pain would again be relatable.

    A tension is growing, like the one dramatised by the original Perrin, between the expectations and reality of work. People want flexibility, but employers are yet to find ways to tighten up working culture so that it doesn’t rely on individuals’ visibility. We’ve not been at this long enough to uncover exactly why Zoom-gloom causes headaches. I wonder if it’s our purpose reflex: small at first and then snowballing with every call. It’s only too easy for the pessimism of a worker’s internal monologue to spiral from ‘why am I even on this call’ to ‘what’s the point of it all’.

    I had hunches, everyone had hunches, but now we have data. A survey of 32,000 workers from 17 countries throws up three interesting insights:

    • Two thirds of workers would consider looking for a new job if they were required to return to the office full-time. Those working from home are more inclined to say they are optimistic about the next five years, and are more satisfied with their employment. Contrary to assumptions, younger people are the most reluctant to return.
    • Those working from home are more likely to feel their work is suffering due to poor mental health, and are also more prone to working much longer hours.
    • Key sources of stress include length of the working day, problems with technology and concerns over job security.

    What this suggests is that the term ‘flexible’ working is interpreted differently, as is the need for visibility. This reminds me of a survey that was run across the staff where I worked, in which one of the questions was about ‘work-life balance’. The results were confusing. I suggested that the term was being interpreted in two ways: either to mean a relaxed, informal office atmosphere, or a measured, predictable duration of work allowing folks to stop and enjoy the other aspects of their lives. Some months later when the survey was run again, no mention was made of ‘work-life balance’, instead asking two questions specific to the workplace atmosphere and the ability to stop working. The environment score was higher than before. The duration score was much lower.

    The shortcomings of the Work-as-Lifestyle model now seem obvious. During the work-from-home-wave of the pandemic, millions of Americans suddenly realized that if they wanted to mingle business and pleasure, they might as easily do it from the safety and comfort of their own living rooms. Those playground-style spaces, intended to make work a more joyous, more engaging enterprise, have had the perverse effect of staking an undue psychic claim on employees’ time: Why go home at all, they implied, when you can stay here and keep working?

    Today, the spatial rhetoric of the rise-and-grind work culture seems less and less persuasive. This does not mean we need to return to the bland, fluorescent-lit sterility of an earlier epoch. But for a more efficient, healthier and happier workforce, the time may have come to ditch the bean bags.

    I think about digital-era working cultures a lot. I’m starting to think there are now two behavioural ideas worth pulling apart.

    • Presenteeism was a useful idea five years ago: it articulated something many of us knew to be true about working culture. Now is the time for workers and employers to interrogate it rigorously: the challenge is more than whether someone is ‘there’ or not: it’s that how visible someone is, or is required to be, is highly contextual.
    • Flexible working is also a continuum. A familiar example is that parents need to be able to juggle fixed and flexible commitments through their days. The cost of doing so is that they end up working even more, simply because the vernacular for the different flexibilities people need is yet to be worked out.

    In the 1970s, the idea that by now the technology would allow us to work the way we do would have seemed like a considerable improvement, but Zoomageddon has not turned out that way. In being flexible and visible, remote and hybrid workers need to be even more present in front of their webcams: exactly the kind of thing that Perrin would take drastic action to avoid. It’s as if presenteeism had a digital transformation all of its own.

  • The age of imaginary money is ending

    Money

    The metaverse may be little more than a safe space for people who don’t need to worry about the cost of anything.

    I remember attending a conference ten years ago, at a point when internet technologies were fully moving away from web-as-information towards web-as-functionality. The mood was not one of excitement: there was a foreboding overtone to many of the presentations and discussions. Much of what had already happened, such as the rapid rise of social networks and other technology giants, painted a more bleak future than had been anticipated. In a hall of progressive folks, this contrast from previous years’ optimism was conspicuous.

    Something was ending. An age of innocence, perhaps. In turning the web into a place where ‘regular people’ could ‘do things’, something intangible was being lost. I remember feeling guilty for sharing these curmudgeonly thoughts. After all, these technologies are revolutionary precisely because they are open to all. But now I realise the problem wasn’t the people, it was the money. It was all that imaginary Californian money.

    Web3 is a term used to fool us into thinking a next technical generation is both ready and inevitable. It is not. It’s merely an economic adjustment.

    The movement towards what was then called Web 2.0 came about because the existing digital-era business models—with only a handful of exceptions—had failed. Those first enterprises depended on founders stumping up their own money and trying not to run out of it before the demand for their idea finally took off. Individually they failed, collectively they crashed.

    Then there was an inflection, whereby founders would raise eye-watering sums from venture capitalists. There were two outcomes: either these models failed too—with only a handful of exceptions—or the deep pockets of venture capital subsidised the product until consumers’ habits were formed and competitors were quashed. But the sums were so big that eventually the VCs were unwilling, or more likely unable, to keep backing mad, unviable online businesses simply on the strength of their becoming the ‘next’ big thing.

    For the past decade, people like me—youngish, urbanish, professionalish—got a sweetheart deal from Uber, the Uber-for-X clones, and that whole mosaic of urban amenities in travel, delivery, food, and retail that vaguely pretended to be tech companies. Almost each time you or I ordered a pizza or hailed a taxi, the company behind that app lost money. In effect, these start-ups, backed by venture capital, were paying us, the consumers, to buy their products.

    It was as if Silicon Valley had made a secret pact to subsidize the lifestyles of urban Millennials. As I pointed out three years ago, if you woke up on a Casper mattress, worked out with a Peloton, Ubered to a WeWork, ordered on DoorDash for lunch, took a Lyft home, and ordered dinner through Postmates only to realize your partner had already started on a Blue Apron meal, your household had, in one day, interacted with eight unprofitable companies that collectively lost about $15 billion in one year.

    The next round of superhype therefore doesn’t rely upon either founders’ investment or VC. Instead, an even darker style of funding has emerged: doing whatever social engineering it takes to get a massive number of people to put their money into a scheme that appears to be a market. It’s no such thing, of course, and no returns will come. The term Web3 is core to manufacturing the newsworthiness needed to make it all convincing.

    The reality is quite different. Many people who previously enjoyed some sense of financial comfort are now finding their costs are rising. Perhaps that makes them more susceptible to ‘investing’ in mad scams. But that particular digital behaviour is secondary. The key differentiator is that in the era of Web 2.0, it didn’t matter what anything cost. Consumers were happy to buy, VCs were happy to invest, and tech firms were happy in a non-profit model providing some apparently higher-order corporate purpose was being met. No more.

    Mark Zuckerberg has issued a chilling message to Meta Platforms Inc. employees: The company faces one of the “worst downturns that we’ve seen in recent history,” which will necessitate a scaling back of hiring and allocation of resources.

    This new era is not defined by magical currencies or tokens of ownership that you can’t touch. A metaverse in which nobody has to worry about the cost of things sure does sound attractive, particularly to those already wealthy enough not to worry about the cost of things, but a more likely occurrence is a microverse of cost-consciousness. A world where VC money isn’t forthcoming, consumers are less willing to spend and already-weak trust in digital enterprises is battered further. It’ll be challenging for those immune from rising costs to persuade everyone else to get excited about the possibilities of the next digital era, once the imaginary money has gone.

  • A culture of digital discrimination

    Taken from Looks Like A Good Book by Percy Loomis Sperr (ca. 1923–42)

    When you book yourself a flight, you are typically given a six-character alphanumeric string as your reference number. By entering this code into your airline’s app, for example, every detail of your booking can be retrieved. This joining-up of the many datapoints needed for passengers to board is one of the ways in which the travel industry has embraced the digital era. Yet, there are many other areas in which the handling of passengers’ information is disjointed. What causes these discrepancies?

    When returning to Europe from Seoul, the staff at Incheon airport noticed my limp and produced a wheelchair. Incheon is palatial, modern, well-designed and well-run. It is also massive, so I was grateful for the wheels. The staff also rang ahead to my next destination, Charles de Gaulle in Paris, to arrange for me to be met with another wheelchair upon my arrival. But nobody came. Having arrived later than scheduled, and with another flight to catch, there wasn’t time to wait around. I dragged my foot across the airport to another terminal, reaching my connection just in time. My baggage wasn’t so lucky.

    This trip, and a few afterwards, gave me a small insight into the many challenges faced by those who rely upon a wheelchair. The travel industry loves dehumanising abbreviations, so uses the term PRM, or persons with reduced mobility, to mean anyone who requires assistance moving around. Some airports seem to be excellent at this, but others are somewhere between poor and terrible. If you’re going to be using an airport-owned generic chair, they’re often not there when you arrive. If you need to use your own chair, it can be a long wait before it is brought up from the hold so that you can leave the plane.

    It’s strange that there should be this discrepancy. Many aspects of travel are disrupted by chaotic factors such as the weather, but given airports are all in the business of loading and unloading flights, you might expect that the experience of accessibility services would be fairly universal. Not so.

    The BBC’s security correspondent Frank Gardner uses a wheelchair, having taken bullets to the spine from al-Qaida gunmen. Gardner has recounted three occasions (1, 2, 3) when he’s found himself stranded in his seat long after all other passengers have disembarked [update below]. Stories such as this are disturbingly common. In the UK, airports have been warned they face court action by the regulator if they continue to fail disabled passengers.

    In this digitally-enabled era, and in an industry with fine margins and so much automation, it’s amazing this happens at all, let alone so frequently. But then, it isn’t amazing, because systems reflect the prejudices of their creators.

    That six-character reservation code is the identifier for a Passenger Name Record (PNR). It identifies that record within a Global Distribution System (GDS): a network that facilitates transactions between travel service-providers. When you book a flight, the PNR contains data supplied by you—the passenger’s names, date of birth, contact and payment details, passport information, etc.—and also data from the airline, such as the travel itinerary, booked seats, baggage etc.

    Passengers’ additional requests, such as meal preference or mobility assistance, are all on the PNR as well. So, that one record holds all the data needed for the trip, and can be shared by all the companies and staff that need to access it: the agent and airline, the airport, the security, ground services and special assistance providers. So, with such portability of pertinent data, the system should work flawlessly. But it doesn’t.
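
    A simplified sketch of such a record makes the point. The field names here are invented, and real PNRs are built from far crustier structures, but WCHR is a genuine industry code for requesting wheelchair assistance:

        from dataclasses import dataclass, field

        @dataclass
        class PassengerNameRecord:
            locator: str                      # the six-character code
            passengers: list[str]
            itinerary: list[str]              # flight segments
            seats: dict[str, str]
            ssrs: list[str] = field(default_factory=list)  # special service requests

        pnr = PassengerNameRecord(
            locator="K7KQ3F",                 # invented example
            passengers=["DOE/JANE MS"],
            itinerary=["ICN-CDG", "CDG-LHR"],
            seats={"ICN-CDG": "31C"},
            ssrs=["WCHR"],                    # wheelchair assistance requested
        )
        # The airline, airport, ground handler and assistance provider can all
        # read the same record: the wheelchair request is not hidden, merely
        # treated as an 'exception' by the people and processes around it.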

    GDS systems have at least two big faults. The first is security: those six-character PNR codes are fairly easy for a hacker to guess by elimination. Secondly, because the whole system is born of a culture where there are ‘normal’ conditions and then ‘exceptional’ ones such as PRMs, it perpetuates that culture of division. Everyone’s journey is exceptional and everyone, to a greater or lesser degree, requires assistance. Cabin crew know this, but the fact this hasn’t been normalised into the systems underpinning the industry exposes how the prejudices of those systems’ creators and operators persist. And as interoperable systems such as this become adopted and standardised, a passenger can’t escape such prejudice by switching airline.
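
    On that first fault, the arithmetic is sobering. Six characters drawn from letters and digits gives at most 36^6 combinations, and in practice fewer, since systems tend to exclude ambiguous characters:

        # Upper bound on the locator keyspace: 6 characters from A-Z and 0-9.
        print(36 ** 6)                   # 2,176,782,336 possible codes

        # At a hypothetical 10,000 guesses/second spread across endpoints:
        print(36 ** 6 / 10_000 / 3600)   # ~60 hours to enumerate the lot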

    This is the great shadow of the digital era: discrimination is becoming more systemic, not less.

    When Gatwick Airport’s current assistance services provider was appointed, its CEO said they would “deliver an exceptional service provision” by “combining innovations in people, processes and systems”. Less than a week after the aviation regulator issued its warning of enforcement action against airports if they keep failing less mobile passengers, Gatwick’s assistance services provider awarded its staff a 15% pay rise. On that same day, a disembarking 82-year-old passenger with restricted mobility tragically fell to his death on a Gatwick Airport escalator, having not been offered assistance services in good time.

    Businesses of all kinds are going through some form of digital transformation, just as the travel industry has. This process is often heralded as a great victory for innovation, service and efficiency. But each will pick up striking limitations and prejudices, by treating what it deems to be most frequent as ‘normal’, and everything else as ‘exceptional’. Designers of these systems would do well, next time they fly, to add a mobility assistance request to their booking.

    [Update: shortly after this post was published, Frank Gardner found himself stranded, again, on a plane that had landed at Gatwick Airport.]

    [Update: to better understand the obstacles faced by wheelchair users, The New York Times sent a reporter and a photographer to document one man’s domestic trip.]

    It’s not uncommon for airlines to lose or damage wheelchairs. In 2021, at least 7,239 wheelchairs or scooters were lost, damaged, delayed or stolen on the country’s largest airlines, according to the Air Travel Consumer Report. That’s about 20 per day.

    Because of these risks, many people who use wheelchairs say flying can be a nightmare.

  • Sociable media: the case for promoting positive social norms

    Taken from Pleito de Suegras by José Guadalupe Posada (circa 1880–1910)

    In the UK, around 30% of adult women choose not to dye their hair, a proportion that is falling steadily. The data are sketchier for men, but the figure is more like 60%, and falling more sharply.

    This disparity of choice interests me. It doesn’t come as a surprise, but what is harder to pinpoint is the precise cause. My knee-jerk reaction is it’s the long legacy of differences in aesthetic expectations between the genders: disproportionately, women have felt an expectation to appear younger.

    But a singular cause is hard to pinpoint. It’s almost chicken-and-egg*: was the creation of and continued demand for hair-dye caused by a pre-existing expectation, or did the manufacturers of the product create, stimulate and perpetuate this pressure to shift products?

    The answer is probably a lot of both, and that’s what I find interesting in the context of emerging behaviours in digital spaces: effects can also be causes.

    The hair-dye example illustrates that after a while, even if a common behavioural trend had a single, defining origin, the fact people do it becomes a reason for people to do it. People like to colour their hair, so hair dye exists. Because hair dye exists, people like to use it.

    But, because this behaviour is commonplace, people—some more than others—start choosing to do it because they feel there is an expectation upon them. So far, this is the best way I have found to contextualise the relatively sudden demise of digital social spaces, from cordial and discursive into aggressive and hostile.

    Costolo writes: “I’m frankly ashamed of how poorly we’ve dealt with this issue during my tenure as CEO. It’s absurd. There’s no excuse for it. I take full responsibility for not being more aggressive on this front. It’s nobody else’s fault but mine, and it’s embarrassing.”

    “It’s no secret and the rest of the world talks about it every day”, Costolo continues. “We lose core user after core user by not addressing simple trolling issues that they face every day.”

    The challenge of moderating social spaces, even semi-automatically, is considerable. Every social platform has a long, convoluted code of conduct to protect its revenue and, to some degree, its users. Yet, some of the worst behaviour exhibited on social platforms isn’t single users pushing content through the cracks between terms of use. Rather, much of the worst behaviour comes from people—unconnected other than by the platform itself—behaving badly together.

    Group-based moderation is poorly defined and difficult to enforce. It’s like trying to hand out speeding tickets to a stretch of motorists based on the traffic’s average speed: every individual would argue, rightly, that they’re not in control of the group and therefore can’t influence the average speed significantly. Yet, we’ve all been on highways where large numbers of drivers—perhaps the majority—are in excess of the speed limit. When it happens, it’s easy to go with the flow. Herd mentality.

    When social media got going in earnest, it was common to see individual users publicly airing a grievance with a brand, usually for poor service. It was so common that, for a time, it started to seem like the de-facto means of orchestrating customer care, since brands invested heavily in managing their social media reputation.

    Much more common now is for users to swirl, suddenly and rampantly, around a particular issue: a point of politics, or an individual’s behaviour, or an article cleverly constructed to provoke a response whether or not you’ve read it. People jump on.

    But I think there’s more to it than herd mentality: I think many people do it because they think that’s what they’re supposed to do, either for the sake of their identity or because they think that’s how these things work. This is why I think moderation is failing. Moderation is restrictive, not permissive. Being explicit about what individuals can’t do does little to moderate what groups can do.

    Changes would be needed for social media platforms to curtail bad group behaviours. Currently, it’s hard to imagine how that would not impact the freedom of individuals’ expression. But that’s because of the restrictive overtones of moderation: these things can be extremely subtle, so piling on more restrictions wouldn’t likely help.

    Instead, functional changes to how social media platforms work—permissive rather than restrictive—may protect the individual while also curtailing the extremes of the group. There is precedent: brands getting systematic about social-based care suppressed the barrage: for the most part, they didn’t say ‘you can’t complain here’; instead, they found ways of stepping forth to manage the situation actively and elegantly.

    The same may have to be true of the platforms themselves. They need to be able to determine when groups are sharing a joke and when groups are attacking vulnerable people. They need to encourage the former, functionally, and dissuade from the latter, by determining who in the group is participating just because they think this is what participation now means. Today the distinction is subtle, but the platforms need to know it when they see it.
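
    What might that look like functionally? Here is a hypothetical sketch, not any platform’s real system, with thresholds plucked from the air; the unit of detection is the target of a pile-on, rather than any individual post or poster:

        from collections import defaultdict

        def pileon_targets(events, window=60, threshold=25):
            """events: (minute, author, target, is_stranger) tuples.
            Flags targets on whom many distinct, previously unconnected
            accounts converge within one time window: a crowd-level signal
            that no per-user rule would ever trip."""
            by_target = defaultdict(list)
            for minute, author, target, is_stranger in events:
                if is_stranger:                # no prior interaction with target
                    by_target[target].append((minute, author))
            flagged = {}
            for target, hits in by_target.items():
                hits.sort()
                for i, (start, _) in enumerate(hits):
                    crowd = {a for m, a in hits[i:] if m < start + window}
                    if len(crowd) >= threshold:
                        flagged[target] = len(crowd)
                        break
            return flagged

        # e.g. 30 strangers replying to one account inside an hour would flag
        # it, while the same 30 replies spread across a week would not.

    The permissive response then acts on the crowd, not the member: prompting or slowing the drive-by participants before they post, rather than banning anyone after the fact.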

    *By approximately 340 million years, it was the egg.
