• Better ways to find, with faceted search

    Taken from Découverte des Malfaiteurs by Jacques Callot (1633)

    I’m wary of being sure of anything. Going beyond the point of doubt is akin to fanaticism: time and again, the digital era has shown us that things are always nuanced.

    I became convinced that Web3, particularly crypto, was completely and sensationally bad. In its contemporary form, I could find not one single merit. So I put some pocket-money down and had a close look, to see if it does indeed behave like a credible market. It does not.

    The trouble is that you can’t guarantee stability against an unstable backing; nothing will protect you against the whole market going down. Every algorithmic stablecoin thus far has failed to maintain its peg. Algorithmic stablecoins work until they don’t.

    Stablecoins are a modern form of the wildcat banks of the 1800s, which issued dubious paper dollars backed with questionable reserves. These led to the National Currency Act of 1863, establishing the Office of the Comptroller of the Currency and taking away the power of commercial banks to issue paper notes. At the very least, stablecoins need to be as regulated as banks are. But all of cryptocurrency is a less robust version of existing systems and has any advantage only as long as it gets away without being properly regulated.

    It’s fragile and massively complicated: a whole bunch of loosely defined, flimsy, inefficient, vulnerable, independently-controlled components that are all the weaker for having become interdependent.

    The entire crypto space has been a Jenga stack of interconnected time bombs for months now, getting ever more interdependent as the companies find new ways to prop each other up.

    Which company blew out first was more a question of minor detail than the fact that a blow-out was obviously going to happen. The other blocks in the Jenga stack will have a hard time not following suit. 

    All this stuff has a seemingly-noble ideology, but those who subscribe to those ideas know full well that they can be used for personal gain at wider and greater harm. It’s kept complex for a reason: to be exploitative. The less people understand it, the more likely they are to be taken in, and taken for a ride.

    You should not invest in Bitcoin.

    The reason why is that it’s not an investment; just as gold, tulip bulbs, Beanie Babies, and rare baseball cards are also not investments.

    These are all things that people have bought in the past, driving them to absurd prices, not because they did anything useful or produced money or had social value, but solely because people thought they could sell them on to someone else for more money in the future.

    When you make this kind of purchase – which you should never do – you are speculating. This is not a useful activity. You’re playing a psychological, win-lose battle against other humans with money as the sole objective. Even if you win money through dumb luck, you have lost time and energy, which means you have lost.

    Things with no real value are touted as assets. They are not. These tokens are simply a means for charging people for something that ought not to have a purchase price, since they are worthless. Just as Facebook et al. commercialised human interaction, web3 cynically commercialises hope.

    It’s common for NFTs and governance tokens to double as speculative assets that can be bought and sold across crypto or NFT exchanges. But it’s questionable whether they have any fundamental value. Many gaming tokens are at best volatile and at worst worthless.

    Yet proponents of crypto gaming try to sell it as the future. Take crypto venture capitalist and Reddit cofounder Alexis Ohanian, who says crypto gaming will allow players to “actually earn value” through accruing assets that have some value in traditional or “fiat” money.

    In essence, he says people would no longer need to “waste time” gaming for leisure.

    And then there’s the true cost. Signals of wealth are, almost universally, made of carbon. Crypto is no different.

    One of the great Bitcoin unknowns has long been the amounts being produced, or “mined,” in what’s believed to be the top locale for mining the signature cryptocurrency: China’s remote Xinjiang region. We got the answer when an immense coal mine in Xinjiang flooded and shut down over the weekend of April 17–18.
    The blackout halted no less than one-third of all of Bitcoin’s global computing power.

    It’s not just the electricity: the sheer material consumption that crypto requires is staggering. It’s significantly less sustainable, not to mention less valuable, than the least sustainable wealth-indicators that already existed.

    Bitcoin mining requires very specific hardware, designed exclusively to process hashes and nothing else useful. This hardware evolves very rapidly and is obsolete within eighteen months.

    We estimate that, each year, around 12 000 tons of specific electronic devices are produced and destroyed. More than 80% of the weight is metal (iron, aluminum, copper, …).

    This means that around 10 000 tons of metal are extracted and transformed each year solely for the bitcoin mining industry. This is 4 times more than the amount of gold extracted each year (around 2500 tons).

    Also, the value of bitcoin production is around 1/6 of gold production ($17B vs $100B).

    Hence, overall, 1$ of Bitcoin requires 24 times more mining of metal than 1$ of gold.
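    The back-of-an-envelope arithmetic above can be checked directly. A minimal sketch, using only the rounded figures quoted in the text:

```python
# Figures quoted in the text (rounded, as given there).
bitcoin_metal_tons = 10_000   # metal mined per year for Bitcoin hardware
gold_tons = 2_500             # gold extracted per year
bitcoin_value_bn = 17         # annual value of Bitcoin production, $bn
gold_value_bn = 100           # annual value of gold production, $bn

tonnage_ratio = bitcoin_metal_tons / gold_tons   # 4x the tonnage of gold
value_ratio = gold_value_bn / bitcoin_value_bn   # Bitcoin worth ~1/6 of gold

# Metal mined per dollar of output, Bitcoin relative to gold.
metal_per_dollar = tonnage_ratio * value_ratio
print(round(metal_per_dollar))  # prints 24
```

    The 24x figure follows straight from the two ratios: four times the tonnage, spread over roughly one sixth of the value.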

    The vast majority of those involved are inadequately protected. The ideological decentralisation of ‘control’ merely makes it near-impossible for everyone to benefit from the long-standing systems designed to protect their interests.

    US copyright law explicitly states that transfers of copyrights and transfers of copies are legally different. Ensuring that NFT owners have the copyrights they think they do is a more complicated problem than it appears.

    The technologies are often touted as revolutionary. They are not.

    The Ethereum virtual machine has the equivalent computational power of an Atari 2600 from the 1970s except it runs on casino chips that cost $500 a pop and every few minutes we have to reload it like a slot machine to buy a few more cycles. That anyone could consider this to be the computational backbone to the new global internet is beyond laughable. We’ve gone from the world of abundance in cloud computing where the cost of compute time per person was nearly at post-scarcity levels, to the reverse of trying to enforce artificial scarcity on the most abundant resource humanity has ever created. This is regression, not progress.

    Where we’ve got to is layer upon layer of flawed ideology, fanatical greed and shonky technology. It is one of the truly great embarrassments of our times.

    Bitcoiners believe the ETH community is the ‘woke’ alternative to their strange breed of grift and toxic masculinity. ETH does seem slightly less bonkers, under the direction of its demi-god Vitalik Buterin. One reason for the woke accusation is the ETH community’s switch to Proof of Stake over Proof of Work. Proof of Stake is an alternative way of validating transactions that doesn’t involve setting fire to the earth slowly.

    Ironically, web3 is in part a product of the status quo the fanatics claim to be disrupting. If everyone considered it socialising in a more natural, equal way, it would have been summarily dismissed.

    Before we really dive deep into this, it’s worth addressing the elephant in the room: the present interest in this field is because of a dumb speculative bubble that is flowing over from the dumb speculative bubble in cryptocurrencies in general. I really don’t mean to prejudge my conclusion, here, but the cryptocurrency sector has exploded in value for basically the same reason Gamestop did – which is to say, a few people have sincere beliefs that they’re good and valuable, and a lot more people have money kicking around and a desire to get rich.

    It’s an appalling mess. The ideology of freedom being used to make people less free. Far from bypassing the oligopoly of Big Tech, web3 is the antithesis of freedom, brought about by pure, exploitative greed.

    In cryptocurrency markets, every coinholder has a financial incentive to be their own marketer in order to increase the value of their own assets and can resort to any means necessary.

    Because of this, much of the Web3 hype being drummed up on Twitter – specifically focused on beginners, those new to Web3 and crypto – is predatory and follows along the lines of a Ponzi scheme.

    As Aimee Mann might have put it, it’s not going to stop, ’til we wise up.

  • In pursuit of intuitive interfaces

    Taken from a glass negative in the collection of The History Trust of South Australia (circa 1925)

    Whenever I hear a user interface is ‘intuitive’—or needs to be—a synapse fires. An intuitive digital interface would follow users’ behaviour, regardless of their prior experiences with other interfaces. We’re not there yet.

    When pointy, clicky, window-based desktops emerged, users were trained to find functions at the top, contexts at the bottom, and the ‘work’ going on in the middle. Machines running either Windows or macOS still ship like this. If an Excel user wants to find functionality, they head upwards to a ribbon icon or dropdown menu. If they want to switch to Word, they head downwards to the start-bar or dock.

    The internet dented this. On the web, thanks to the hyperlink, functionality is all muddled in with content. Context-switching is more like a fork in the road. Anyone who’s fallen down a wikihole will know.

    In my last schoolyear, I helped teach the sprogs basic computer skills: like community service, but for crimes not yet committed. In a couple of hours, I could get 35 kids to be reasonably confident with spreadsheets: diving into cells to construct formulas, and switching around between applications to pull in chunks of data. The keen ones—the ones bitten by the bug—then went on to discover the command-line and, I expect, are now lounging on a domain-name empire or something.

    Getting to grips with the pre-Google early web, though, took longer. Just like a spreadsheet, a wide bar across the top would accept input, but the kids would find it harder to make their browser ‘do stuff’. The browser didn’t integrate well with other applications: you could copy out but not paste in. Clicking a link within an email could take you to a specific webpage, but clicking a link on a webpage couldn’t take you to a specific email, only a blank one.

    While that early web is no longer recognisable, I still catch users staring blankly at interfaces, just as those kids did in the mid-Nineties. Better browsers, better online services and new internet devices—phones; tablets; watches etc.—tackled many of these paradigm mismatches head-on. Many millions of hours of thought and experimentation have been given over to make functionality, context and content coexist on palm-sized touchscreens.

    Creeping homogeneity, or more accurately convergent evolution, in interface design has tried to put things where users might expect them based on their previous experiences. But challenges with contemporary interfaces are more complicated and nuanced than simply where navigation should be placed.

    An example. On my desktop, all the system settings are in one place, and then all the settings specific to an application are in another. Except these days there are system-wide restrictions on things like accessing the screen or the disk, so every application I install has to beg me to change system preferences before it can function, thereby dividing application settings into two contexts.

    It’s worse on my mobile. All preferences for the system, and the apps that shipped with the phone, are in one huge settings menu. But the preferences for individual apps are who-knows-where: sometimes in the app context, sometimes the system context. Users drown in this kind of thing. Who can see my stuff (as a setting) and whose stuff I can see (as a function) are more closely related in users’ minds than is often to be found in the interfaces they use.
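    The split can be made concrete with a tiny sketch. Nothing here is any real platform’s API; the names and settings are invented purely to illustrate the lookup ambiguity:

```python
# A hypothetical settings split: some preferences live system-wide, some in
# each app's own context, and the user has to know which is which.
SYSTEM_SETTINGS = {"notifications": "banners", "camera_access": False}
APP_SETTINGS = {"photo_app": {"camera_access": True}}

def resolve_setting(app: str, key: str):
    """Look in the app's own settings first, then fall back to system-wide."""
    app_prefs = APP_SETTINGS.get(app, {})
    if key in app_prefs:
        return app_prefs[key]
    return SYSTEM_SETTINGS.get(key)

# The same key resolves differently depending on context:
print(resolve_setting("photo_app", "camera_access"))  # True (app context)
print(resolve_setting("mail_app", "camera_access"))   # False (system context)
```

    The same question, asked of two apps, is answered from two different places. That is exactly the kind of thing users drown in.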

    This is complicated because there are so many use-cases. How an app notifies you is one thing, how all apps notify you is another. But users tend not to think like that; they often lean towards task-focussed thinking. So current interfaces trend towards a convergence, leaving interfaces feeling vaguely recognisable but indistinct, which in turn causes further confusion and poorer recall amongst users.

    However, rather than all designers continuing to iterate towards the mean, there are two areas of exploration that could significantly improve users’ comprehension of interfaces, which in turn would make interfaces more useful. These areas could perhaps make interfaces truly intuitive, by following human behaviours rather than imposing conventions.

    First, users do things fastest in familiar ways. So, passive preferences could follow users around, between apps, contexts, tasks and even functionality. Instead of users having to learn how each device and app can be made to, say, grab a photo from one context and use it in another, the standard way they do this could transcend contexts, just as copy-and-paste does. All the other methods would remain too, but the one that’s top-of-mind for the user would always work as they’d expect, in any context.
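    As a thought experiment, the ‘passive preferences’ idea might look something like this. Every name here is hypothetical; it’s a sketch of the behaviour, not an implementation:

```python
# A sketch of 'passive preferences' that follow the user between contexts,
# in the spirit of copy-and-paste. All names are invented for illustration.
class PassivePreferences:
    """Remembers how the user last performed an action, in any app."""

    def __init__(self):
        self._methods = {}

    def record(self, action: str, method: str):
        # Called whenever the user completes an action; no explicit set-up.
        self._methods[action] = method

    def preferred(self, action: str, default: str) -> str:
        # Any app can ask what this user's top-of-mind method is.
        return self._methods.get(action, default)

prefs = PassivePreferences()
prefs.record("insert_photo", "drag_and_drop")  # habit learned in a photo app

# A different app honours the same habit instead of imposing its own default:
print(prefs.preferred("insert_photo", "file_picker"))  # prints drag_and_drop
```

    The point is that the user never configures anything; the preference is observed, not declared, and travels with them.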

    Imagine Microsoft Word but it comes as a plain text editor. No bold/italic/etc. The only commands are open, save, copy, and paste.

    You get used to it. Then one day you decide you’d like to style some text… or, better, you receive a doc by email that uses big text, small text, bold text, underlined text, the lot.

    What the hey? you say.

    There’s a notification at the top of the document. It says: Get the Styles palette to edit styled text here, and create your own doc with styles.

    You tap on the notification and it takes you to the Pick-Your-Own Feature Flag Store (name TBC). You pick the “Styles palette” feature and toggle it ON.

    So the user builds up the capabilities of the app as they go.
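    The Pick-Your-Own Feature Flag Store thought experiment can be sketched in a few lines. The store, the flag name and the messages are all invented here, purely to show the shape of the idea:

```python
# A sketch of per-user progressive feature flags: the app ships minimal,
# and capabilities are toggled on as the user encounters content that
# needs them. All names are hypothetical.
class FeatureStore:
    def __init__(self):
        self._enabled = set()

    def enable(self, feature: str):
        self._enabled.add(feature)

    def is_enabled(self, feature: str) -> bool:
        return feature in self._enabled

def open_document(store: FeatureStore, doc_uses_styles: bool) -> str:
    if doc_uses_styles and not store.is_enabled("styles_palette"):
        # The notification described above, prompting a one-tap upgrade.
        return "Get the Styles palette to edit styled text here."
    return "Document opened with full editing."

store = FeatureStore()
print(open_document(store, doc_uses_styles=True))   # prompts for the palette
store.enable("styles_palette")                      # user toggles it ON
print(open_document(store, doc_uses_styles=True))   # now fully editable
```

    The capability arrives at the moment of need, rather than cluttering the interface from day one.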

    Second, this is a good potential AI problem. Designers lean upon what has gone before—what has demonstrably worked, what’s aesthetically appealing and so forth—whereas an AI could start on interface problems unfettered by baggage and metaphors. It would be interesting to think of AI in the context of intuitive design, not as a producer of strange, mangled cat pictures, but as a collegial third eye: unbound by design dogma and able to show us new possibilities from outside our own influences.

    What we need are intelligences that help us do useful things in new and better ways, ways which we could not have imagined alone. AIs which are colleagues and collaborators, rather than slaves and masters.
  • Taken for a ride: an Uber-full of nonsense

    Taken from A Carriage in London by Constantin Guys (1848-56)

    I ordered an Uber to the railway station. The driver reached me within two minutes. This digital behaviour is now thoroughly embedded, and truly habit-forming. Once you’ve hailed a ride through a digital interface, it’s hard to imagine ever going back to calling a dispatcher and waiting, and waiting.

    I acknowledged that I was in a mask, as I’d been negative for only a few days. That’s alright, he said, although he thought the tests were showing everything as Covid now. Oh joy, I thought.

    My Uber rating is surprisingly low, as a result of a single booking more than five years ago: returning from the vet with my cat, howling for freedom in her carrier. I can scarcely afford a disagreement with a taxi driver.

    He continued. He thought the Covid thing was overblown; that the media and government had made a great big deal out of nothing. That politicians and scientists were out to control you. I stuck to neutral noises: not in agreement but not contesting either. The cowardice of a man with a low score.

    He’d had Covid, he said, and it was basically just ‘flu. Oh so he’d had his jabs, then, I replied hopefully. No, he didn’t believe in them. My next noise must have sounded curious. He would’ve got them but he knew so many people that got sick from the vaccines, with blood clots and all. Like who, I regretted asking. Of course, he couldn’t name anyone. I lurched for another topic of conversation.

    Just as it’s hard to return to calling minicab offices, it’s hard to come back to professionally-researched information through mainstream media. Social feels like socialising: a conversation in the pub, surrounded by friends; low-commitment and self-regulating. You would only see the half-truths and manipulation on your timelines if you chose to look, and why would a comfortable person choose discomfort?

    My digital behaviour brought me a taxi ride. His brought him a customer, and a warm bath of misinformation. And by leaving this unchallenged, my Uber score improved by 0.01.

  • What went wrong with social media?

    Taken from Une Discussion littéraire a la deuxième Galerie, by Honoré Daumier (1864)

    This present generation of social media platforms—the big ones; the ones whose logos adorn the side of tradespeople’s vans—all started out with a noble endeavour or two. They were built so that people could stay in touch: straightforward, lightweight and low-commitment.

    That’s how it was for a while. Friends discussed and shared. Celebrities of all ranks became accessible, and readily engaged with fans. Strangers came together to share laughter or collective displeasure. Broadcast media and brave brands used it as a means of gathering rapid participation from their newfound communities, for better or worse.

    While this all still goes on, this image of social media is no longer the first to spring to mind. Somehow, the well was poisoned. Within their first decade, the experience of social media u-turned and predominantly made people unhappy.

    Before mainstream social media, the functionality had already kind-of existed, at least in primitive and more disjointed form. But it was restricted to those minded and comfortable enough to hunt it out and assemble it for themselves. By contrast, the new platforms succeeded on two fronts: simplifying online participation, and creating a groundswell of interest amongst regular people. It was great, at first. Then it wasn’t. Now, it’s net-negative.

    For a while I thought this was caused solely by commercial pressure. Fuelled by venture-capital, the social media firms needed to drench communities in advertising to be viable, and little of it was of high-enough quality or placement to be altogether welcome. This in turn disrupted the community spirit, which in an online setting can already be fragile.

    Then I thought it was a more general shift in Western politics, away from liberalism and towards populist extremes. In the main, the internet had held a liberal, almost egalitarian ideology, in the sense that folks could do as they pleased, within a consensus for some high-level constraints. But social media seemed to be lurching away from this. Political stances became more visible. Those of opposing views would seek each other out. Discussions around observations became arguments around beliefs.

    Later I pondered whether it was the explosion in the online and social media population. But a logical prediction might have been that more users wouldn’t worsen the platform; it might have even improved it. Were the late majority significantly illiberal, compared with early adopters? Hard to prove. Perhaps the increase in anonymous users? Again, tricky: there had been anonymous or pseudonymous net citizens before, and all was just fine. Besides, these population increases don’t tally, in terms of timing, with the descent of the platforms as a whole.

    But now something has happened that has me connecting dots. Twitter has announced yet another policy: a toughening-up on spam tweets and copypasta: those identical posts across multiple accounts that seem particularly popular amongst those with disruptive or political intent. Twitter has all sorts of poorly-enforced rules and policies, so you have to wonder what difference one more would make. But yet another attempt to tidy up the platform set me thinking.

    None of the major social media platforms have ever been lawless. They all set out with terms of use, similar to those a hosting company might impose upon customers to keep illegal, immoral or fattening content off their servers. There were even clauses to promote the good health of the community as a whole. And these terms received regular review, so the platforms can’t be described as ever having been egalitarian to a fault: they’ve always been more self-regulated than what went before.

    However, abuse of social media weaves between the platforms’ rules, and always has. It’s the platforms’ successes—their straightforward, centralised functionality—that has left them wide open. They’re not networks; they’re destinations. So their vulnerability is twofold: they’re abused because they’re so abusable, and because they smothered any alternative.

    Before social media, if you had wanted to drive a particular message towards loads of online eyeballs—the number necessary, say, to distort an election—it would have taken a mammoth effort. Personal publishing was too distributed. Everyone owned and maintained their own patch. It’d take too much persuading.

    The pre-social web was imperfect, but what social media did was demolish it: it took the eyeballs off what had gone before and concentrated most online attention to a small number of places that were vulnerable to subversion.

    The concentration of attention is a characteristic of old-media but, in their case, any slant or bias was generally well-displayed and understood. Publications and broadcasters happily nailed their colours to whatever mast, and the public were broadly free to choose where to align. The difference is these publishers maintained standards, some self-imposed and some regulated. They were not in the habit of handing their front-pages over to whatever fruitcake fancied reaching the readership for whatever end.

    And as social media drained visitors from other personal publishing, it made decreasing sense to persist. Many didn’t. So, the far-less abusable but still egalitarian world of old blogs languished. The execution of Google Reader, in the era when Google still fancied having a social network of its own, caused further damage.

    Twitter trying to toughen up on subtle abuse of the platform has me wondering whether the colossal task of moderating social media is at odds with the business of operating a social media platform, and whether this is why they barely bother. Not directly because of the risk of a dent in advertising revenue, but because, for these centralised destinations, small in number but massive in scale, viability and abuse are intrinsic.

  • Why we need to stop saying ‘digital’, and why we don’t

    The Usborne Book of the Future (1979) prophesied the home of 1989 would be media-rich and technology-enabled, with its inhabitants enjoying both new things and new ways to do things.

    A few years ago, I met the CTO of a manufacturing firm. His company had large production lines in Manchester where they made real things out of raw materials. They were appointing the company where I worked and, as I had the word ‘digital’ in my title, the CTO visited to suss me out. We greeted each other and exchanged business cards.

    “Which one?”, he asked.

    “Which one what?”, I smiled. I couldn’t think of anyone there with my name.

    “Which digital?”, he replied.

    This has floated in the shallows of my memory ever since. A CTO in manufacturing will meet a fair few ‘digital’ folks. There’ll be a digital someone-or-other thinking about the production lines: monitoring; twinning; maintaining and so forth. There’ll be someone else in supply-chain: connecting; streamlining; reporting. There would be an equivalent in HR, finance, logistics, design and so on. I just happened to be the one thinking about the now-digitalised behaviours of customers and product users, rather than the operation of the company itself. How they choose, how they buy, how they feel.

    Given that every sector is digitising, it can start to feel like the d-word is now irrelevant. In an era where digital pervades all sorts of settings, just as the electrical supply has, there’s an argument for no longer maintaining a false divide. Indeed, it’s been argued for some time that we’d all be best off abandoning the use of the word ‘digital’ altogether.

    These arguments are solid and well-made. Within most industry verticals, use of the term ‘digital’, to describe capabilities or roles, is indeed largely outdated. But the view that ‘digital’ is no longer needed altogether is the view from inside one of those verticals.

    In short, I agree, ’tis a silly word. But there are catches.

    First, there are still so many edge-cases where it’s useful to delineate between non-digital and digital behaviours—crucially, not old-world versus new, but how the two coexist. Because they do.

    There’s another catch when talking about the digital era in isolation from those that went before, as the many advantages of digitalisation aren’t without drawbacks. Transforming to a digital-first approach introduces new challenges that didn’t previously exist. It can also cause existing challenges to be inherited, perhaps compounded. If implementing ‘digital’ offered a direct replacement for what went before, it could pass without comment. But it doesn’t.

    So, tearing down divisions between physical and digital universes makes good sense in commercial settings. But abandoning ‘digital’ as a distinction for how people sometimes behave, and the era in which they find themselves, would overlook many edge-cases worthy of distinction and examination.

    In the coming days and weeks, I want to share examples of where the delineation between digital-era and previously-observable behaviours is a helpful means of understanding what goes on, and how to improve things for people. Internet-era ways of behaving.

  • Post-office: how could open-plan and virtual meetings coexist?

    The computing division: 'Bonus Bureau' clerks in Washington D.C. calculating benefits for war veterans.

    The widespread adoption of remote working will be seen as one of the most significant digitally-enabled behavioural milestones of this decade. It has limitations, but it also invites the possibility of righting some of the wrongs that have crept into the design of working environments. However, more work is needed to avoid the worst of both.

    In this era, we have mainly open-plan offices and mainly closed-room online meetings. Those sharing physical spaces have good lines of sight but generally poor spaces to work and meet privately and without disturbance. For hybrid and remote workers, it’s the opposite. We might have expected these two working environments to blur. But they didn’t. And they should.

    Technologies permitting near-seamless remote and hybrid working have existed for years. The various merits and drawbacks were discussed at considerable length; occasionally there’d be an impassioned rallying-cry to make work more compatible with life. Still, uptake remained extremely low, and all it took was a global pandemic to turn the corner.

    Healthy workplaces are under attack, principally from the cost of commercial real-estate. Designing office space that is effective at offering a distraction-free work environment, privacy and also a sense of both inclusion and collaborative investment takes far more floor-space than taking down walls and packing desks together.

    Further space can be ‘saved’ by reducing the number of desks and hoping workers will continue to be invested in the place despite not having anywhere to call their own. But out with the bathwater goes the baby.

    For service-sector businesses, and also others with an interest in service efficiency such as primary healthcare, once the barriers of upskilling and driving adoption had been addressed by necessity, it was easier to see the opportunities of working remotely. But in behavioural terms, much is needed to virtualise the casual collaboration that comes with sharing a physical workplace. Everything is a meeting, and a meeting isn’t where work gets done. Meeting discipline has always been poor, and video-calls offer little improvement.

    That said, applying resistance to the convention of meetings is beneficial. Time and again, remote workers have been shown to be more productive than those in the office. And in the era of reducing emissions, avoiding workers’ travel is a big step in the right direction.

    But a substantial and outstanding challenge with remote working is that it is not inclusive. Many people find working from home difficult, particularly those who didn’t, or more likely couldn’t, consider remote working when taking on their home. In particular, it doesn’t suit those in junior roles, and those who live with many other people, especially if those people are trying to work remotely too.

    So, with newly-found flexible and productive working, plus newly-acquired digital behaviours across workforces, what can be done? More technical and behavioural evolution is needed to improve the work environment—physical and virtual—specifically to improve informal workspaces.

    Post-pandemic, hybrid working demands hybrid thinking. It’s worth treating improvements to both physical and virtual work environments as a single and ongoing endeavour, as the learnings from one may equally apply to the other.

  • The touch of a button

    From ‘1999 House Of Tomorrow’

    Accounts of the future often attempt to describe our evolving relationship with technology. These stories are told by people, for people, with people as the constant. We stay as we are, while the technologies change around us.

    What we can now recognise is that constant, perpetual human behaviours are the greater leap of imagination. Our behaviours change in reaction to, and in spite of, technological evolution. Changes happen chaotically, and unevenly, in ways that are challenging to predict.

    In the digital era, some behavioural reactions are overtly negative, like technology addiction and selfie dysphoria. Others are positive, such as the MeToo movement and the democratisation of knowledge. Many more fall into the grey area in-between: behaviours that carry some benefits but also come with some costs.

    So ‘digital’ can describe both how things work and what people do. Digitally-enabled human behaviours are unlike those that went before. People behave differently once they have crossed a digital threshold. We reset our ideas of how much time, work or value is attributed to an action, and we behave differently as a result.

    Previously, the stimulus for technological advancement had been to allow us to do new things, and the same things more easily. Beyond the digital threshold, the impetus is to keep up with our new expectations.

    Today, digital is what we do, not what is done unto us.

    This is doing.digital.
