On inevitability in technical systems

It's easy to fall into the trap of believing that certain choices, tools, or systems are inevitable and can't be changed. This doesn't need to be the case, and shouldn't be.

[Image: a concrete-encrusted gear assembly. Photo by Diacritica, licensed under a Creative Commons Attribution 3.0 Unported license, via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:Abandoned_concrete_factory_mechanism.jpg]

Things do not need to be as they are. What feels obvious or given is more often the result of decisions made over time and slowly baked into systems and infrastructures. In some cases, we see things as inevitable simply because we are not exposed to alternatives, or even to the idea that alternatives can exist. I’m going to discuss this today in the context of technical systems (especially software), but technical systems are by no means the only place where this dynamic plays out. One of the reasons I want to look at the belief in inevitability is that it’s very much in the air right now – the narrative of the inevitability of AI adoption is finally starting to be contested. But it shows up in other areas of technology, too. There’s a pervasive sense of disempowerment in the way people think and talk about the software and services they use.

This issue is especially relevant now because of the increasing urgency to move digital infrastructures out of the hands of companies which exploit their market positions, and which have demonstrated a willingness to put political advantage in the United States ahead of responsible business practices towards their customers and the laws of the countries where those customers are based. Instead of aiming to slot one-to-one European alternatives into the place that could soon be vacated by US-based Big Tech companies, we have an opportunity to think a little more deeply about what we want from the technologies we use, and which are used on our behalf. I’m a bit of a broken record about this, but we could be trying to implement technologies which are not just “better” in some geographically or jurisdictionally determined sense, but better in other ways: meaningful support for autonomy, and more attention to ethical considerations across the development, implementation, and operational lifecycles. If we want technologies that more closely reflect our values, we need to tackle the defeatism of inevitability narratives.

Having the “better” technologies isn’t enough. We need social, governmental, and educational infrastructures which make it possible for people to use those better, more ethical technologies. The technologies exist, and some people are using them. But attention remains concentrated on a handful of tools from a handful of enormous companies. It’s not just that we need to help people move away from unethical technologies; we also need to make it easier to choose the better, more ethical options from the beginning. When choices are being made, it needs to become easier for people to make good ones. And when choices are being made by public organizations like governments and schools, making the “right” choices needs to be not just easier but actively encouraged, facilitated, and maybe even required. But none of these steps is achievable without first making it possible to imagine that there are real choices to be made.

Constrained imagination

When do we make choices about the software we use, about the services we subscribe to, or about the way we complete basic digital tasks? I remember when the iMac was released, with the proposition that it would just work, straight out of the box: it only needed to be plugged in. Even the basic software you might want was there, from essentials like a web browser to tools for creative activities like making music. The only choice you had to make before getting started was the choice to buy the iMac. That purchase would provide certain benefits all by itself, but also impose a framework which would structure your future choices – any software you might want to buy later for your iMac would need to be compatible with its operating system, a slightly bigger constraint then than it is now. What’s interesting in the case of the iMac is how clear these two stages are: you buy the machine because you are imagining what it can do for you, how it might make your life better. Owning the iMac is not inevitable; it is a clear decision, and one made for positive reasons. In the second phase, once you own the computer and are becoming a seasoned user, a degree of inevitability sinks in. The constraints of the machine become your constraints, and its possibilities become your possibilities. The software that isn’t made for your operating system becomes software you won’t use, and additions to the Mac ecosystem become changes that are relevant to you, as a user of that ecosystem.

The early iMac represents a comparatively simple and innocent example of how a choice turns into constrained imagination and inevitability – this slide from deciding to accepting. But capturing users before they learn how to discern between systems, or simply requiring them to use something, is another effective way to create inevitability. Children who are required by their schools to use systems provided by Google (such as a word processor or a video conferencing tool) are being taught at an early age that the way Google’s products work is the easy way, the intuitive way, or the correct way, because the functionality and feel of the product become what is learned and familiar. A company which provides a mass-market tool in a required educational context gains the opportunity not just to be paid by the schools using the service (maybe they pay a little less, as a sweetener to get them into the program), but, more importantly, to get users early and teach them that this particular system is how things should work. Children join the pipeline, and exiting later becomes a bigger challenge. Repeat this for any tool that children are funnelled into at an early age. Learning one thing first, and becoming competent with one product before you have learned that different options exist within that product category, is a great way to curtail imagination and curiosity about alternatives. And this is without even considering network effects and the social aspects of platforms that make them hard to leave.

My final example: some systems come with the work we choose to do. If a student of, say, graphic design is taught that there is one correct suite of software used in the profession, and if teaching and technical support are geared towards the use of that “standard” software, then there’s a high likelihood that our hypothetical student will invest time and effort into learning it. What’s more, if the student believes that the software taught to them in school is a prerequisite for success in the profession, they’re likely to be less open-minded about other tools until they see evidence that the alternatives can meet the criteria they have internalized for judging professional quality. Presenting a tool as “the standard” or “what we use,” in a context where professional norms are being learned, introduces a degree of inevitability to the choice. The student may not even go so far as to feel that they “have to” use the tool – they may simply believe that there aren’t any viable alternatives. “This is what we use in this profession” becomes a plain statement of fact, not an articulation of what is, ultimately, a choice that has been made over time, and which can be contested.

The (de)construction of inevitability

We now have three examples of how a sort of inevitability is constructed. A freely made choice creates lock-in through learning and investment, until the constraints of the system become hard to contest. Learning one tool as a first engagement with a category of tools makes it the basis against which alternatives are judged (if the existence of alternatives is ever even considered). And presenting a tool as the “standard” or “how we do it” can lead to the belief that it’s that tool or nothing. All of these are accretions of choices, and the development of systemic constraints over time. They are, of course, not the only inevitability traps that exist; there are other classics like “If we don’t do it, someone else will” or “It’s happening and we need to keep up.” While the reasoning and dynamics may differ, what all of these traps have in common is the way they constrain imagination and choice, and the way they lead to the belief that change is not possible. So now what?

I’ve written before about failures of imagination, but these inevitability traps are not only a lack of willingness or opportunity to think about other possibilities – they are often a kind of systemic lock-in built on the back of informal or formal infrastructures. This means they are difficult to change alone, and they aren’t necessarily subject to the choices of individual users. Change requires collective action and a shared belief that the way things are is not, in fact, inevitable. Breaking out is difficult, because inevitability narratives are seductive, and keeping things as they are is much easier than creating systemic change.

What, then, can we do to escape these inevitability traps? My examples covered individual choices which lead to inevitability; choices by public institutions like schools, which force the use of particular tools and turn them into norms; and (professional) group-level beliefs about “standard” tools and ways of doing things.

Individual choice is perhaps the easiest of these three scenarios. While the installed base of our own knowledge and experience can make change daunting, the choices we make for our personal tools are more in our own hands than in scenarios where tools are being chosen for us. Recognizing where choice and change are easiest is the first step: maybe you can’t leave Facebook, because it’s the only point of contact you have with your extended family, but maybe it is possible to stop using that old Gmail account and switch to something more aligned with your ethics and values.

In the second scenario, the onus should be on public organizations to weigh ethical considerations in decisions like procurement. Better tools exist and can be implemented; the existence of the tools is not the question. Instead, we have to think about how implementation will work, not just technically, but also in terms of getting people on board, habituated, and ideally even invested. For tools whose use is mandatory, adoption is less of an issue, and in those cases placing values at the forefront of procurement decisions should be non-negotiable. If a section of the citizenry (whether that’s school children or people applying for government benefits) is going to be required to use a tool or system, an ethical evaluation of the candidate tools should be mandatory.

For group-level norms, including those in professions, it’s going to take advocacy, experimentation, and some loud first-movers. Those who are making more ethically oriented choices shouldn’t be apologetic. Being the squeaky wheel and creating friction may feel uncomfortable, but it has value in making alternatives visible. Seamless compatibility with existing norms may seem like a good goal, but compatibility that is merely sufficient, with a little friction left in, keeps the existence of difference in view. There’s no need to apologize because your video calling system isn’t Microsoft Teams and looks a little different – owning the difference and explaining its value can shatter the illusion that there is only one tool that can be used for a given purpose.

Across all of these areas, there are changes we can make, collectively and individually, to challenge technological inevitability traps. The biggest change, and the one that applies everywhere, is the way we talk about new technologies, and what we place value on. We’re pushing the wrong things. We’re arguing that people need to learn to use AI or be left behind – an argument that buys into a huge inevitability trap. Instead, we should be arguing that simple consumption of the next hot tool is not sufficient for active citizenship and participation in a democracy, or for future-readiness on the job market. Critical technical literacy is the base skill, not the ability to use whatever seems like a big deal now. Being able to evaluate the merits of a tool or system, not just technically (already difficult), but in terms of responsibility, ethics, and values alignment, is the bigger and more important skill.