Theo J. Bressett
Updated 16/09/2024
The future of technology sometimes feels like an endless parade of objects. Hoverboards and holographic headsets. Self-driving cars and AI-powered speaker boxes. From classic science fiction novels to the glowing stages of corporate conferences, widgets are the heroes of the imagined future.
There’s a comforting certainty in an object. It gives us something to aim for. We can dissect it into components and divide up the work of making it real. When we release it into the world, we celebrate having finally arrived at that ambitious future. Yet, how many people can actually participate in it? Who can access it through that object and who’s left out?
These are questions that come from reimagining how we build the future, not just what we make. It’s more than methods and practices. It’s the humility of acknowledging that the best solution might come from someone other than ourselves. It’s the courage to resist the temptation of jumping to familiar answers before asking unfamiliar questions.
Uncertainty principles
So many of the strategies that have led us to exclusion have been about avoiding uncertainty. As children, we protect our games from disruption and uninvited intrusions. As designers and engineers, we use mathematical models to homogenize the people we design for. Uncertainty is assumed away as degrees of error or deviations from the norm. It’s dismissed as an edge case.
We now face greater uncertainty than ever before. Not just in the ways digital technologies permeate our societies, but in our connections with one another.
I like to think about the evolution of technological ages in terms of the relationship between humans and machines. The Industrial Age was initially fueled by the mechanization of production that replaced some types of human labor. The Information Age gave rise to human-computer interactions, with complex interfaces that are still largely the product of a fixed set of choices determined by an engineer or designer.
We are now in the midst of the next technological transition. Companies are scrambling to make use of the overwhelming amount of data being collected from people using digital interfaces – nearly 1.7 megabytes of new data per human per second, 90 percent of which is unstructured data, requiring a significant amount of human manipulation to translate it into useful information.
There’s great enthusiasm for the ways that machine learning models can generate, refine, and optimize better designs. As machine learning becomes the backbone of new human-computer interactions, the machines are the ones deciding how to respond to you without human consultation. They are making recommendations to engineering teams on how to improve products using data amassed from millions of people.
But how do we avoid over-dependence on mathematical models that favor statistical probability while undervaluing human diversity? Can big data and machine learning algorithms respect individuality? For every area where technology facilitates human interactions, what extra degree of responsibility must be built into that machine?
As we forge ahead
These moments of technological transition are ideal times to introduce inclusive design. We can engineer these new models to ensure that they don’t lead to exclusion. Without inclusion at the heart of the AI age, we risk amplifying a cycle of exclusion on a massive scale. It won’t be perpetuated just by human beings; it will be accelerated by self-directed machines that are simply reproducing the intentions, biases, and preferences of their human creators.
When we design for inclusion, we are designing for our future selves and our ever-changing abilities. It's designing how the next generation will treat and care for us. It's building solutions that uphold the human connections most important in our lives: our dignity, health, safety, and sense of being at home.
Inclusive design is simply good design for the digital age.