We are drowning in it. Every single day, we wake up, unlock our phones with our faces (thanks, AI), ask Siri something we’re too lazy to Google, and scroll through feeds that have learned to keep us hooked better than we know ourselves. We use technology constantly. We depend on it. We carry it everywhere. But here’s the uncomfortable truth: we have absolutely no idea how most of it works.
And honestly? That’s by design.
We’re living in an era where artificial intelligence is becoming as common as electricity: it’s woven into everything from your mortgage application to the way your therapist’s software decides which patients get priority care. But with every convenience, every algorithm that predicts what we want before we know it ourselves, we’re carrying invisible moral wounds.
Think about it. Every time you interact with technology, you’re making a deal you never signed up for. Your data gets packaged, analyzed, and sold. Your choices get filtered through black boxes designed to optimize for engagement, profit, and control, not for your actual wellbeing. We’ve turned ourselves into data points, into behavioral metrics, into exploitable patterns. And we’ve done it without really understanding the cost.
The last time you picked up your phone, did you think about the processors running billions of calculations per second? The sensors tracking your location? The neural networks making decisions about what you should see?
Of course you didn’t. And that’s the problem.
We’ve become users instead of makers. Consumers instead of creators. We’re living with technology that’s so complex, so deliberately opaque, that we’re forced to just… trust it. Use it. Hope it doesn’t betray us. But here’s the thing: trust without understanding is just vulnerability.
This is where the Almost Useful Machines come in.
“Unpacking Tech Systems: The Fifth Edition” is not your typical workshop. It’s a genuine attempt to reconcile with the technology that shapes us: to understand it, critique it, and imagine alternatives.
It does this through a methodology that is as radical as it is simple: open things up. Take them apart. Build something absurd.
Week one starts with something most of us have been told never to do: crack open everyday objects and see what’s inside. Routers. Old toys. Keyboards. Whatever you can get your hands on. You document everything, map components, and trace circuits. You ask: Who decided this should be sealed shut?
This isn’t just about learning electronics. It’s about revealing the invisible architecture of the objects we take for granted. It’s about understanding that every design choice, from the type of screws used to whether something can be repaired, is political. You produce evidence of how technology is built to be closed, proprietary, and controlled.
Then comes the twist.
In week two, you take all that knowledge, all that frustration about planned obsolescence, and you build a machine that does absolutely nothing useful. This is The Machine Paradox, also known as The Almost Useful Machines (TAUMS).
These machines are designed to provoke critical thinking about technology and its relationship to society. Your machine might spin endlessly. It might make noise for no reason. It might contradict itself, achieving nothing. And that’s the point.
Building something deliberately useless is an act of rebellion. It’s a way to escape the tyranny of productivity. Your useless machine becomes a space for existential purity: free from the demand to be useful, it can explore randomness, contradiction, and wonder. And in doing so, it asks uncomfortable questions about all the “useful” technology around us: Useful to whom? Optimizing for what? At what cost?
In the age of AI, when machines are learning to think for us, we need more than ever to remember how to think about machines. Not with blind faith. Not with passive consumption. But with curiosity, skepticism, and the kind of hands-on understanding that only comes from taking things apart and putting them back together, differently.
Ready to open up the black boxes, trace the circuits of control, and build something absurd, contradictory, and beautifully useless?
The following projects showcase how the combination of forensic investigation and speculative design can create powerful, critical artifacts. These are examples of machines built to question the very logic of utility and efficiency in our technological world:
This machine often takes a playful, yet dark, look at the commodification of aggression and digital violence.
A critique of planned obsolescence and the lifecycle of technology.
Concept: A device specifically designed to perpetually sort or process discarded electronic components, but in a way that is utterly inefficient or leads back to the initial state (i.e., back to trash), highlighting the cyclical nature of e-waste.
An exploration of emotional outsourcing and algorithmic control.
Concept: A machine whose output is based solely on its own inscrutable, wildly unpredictable “mood,” perhaps reacting to ambient conditions in a nonsensical way, forcing the user to negotiate with its erratic emotional state.
The tyranny of utility and seductive deception.
Concept: A self-centered machine that uses beauty and allure to deceive the user, then rejects them with messages like “Ur not worthy to look at me.”
The hidden, commercial truth behind holiday “magic.”
Concept: A “Jack-in-the-Box” that uses the salvaged electronics of a vintage radio to enact a moment of disappointing revelation.
The Fifth Edition is over, but the work is just beginning. We celebrate the critical makers who passed through our doors. The moral scars of contemporary technology will not disappear, but they can at least be made visible, named, and negotiated.
In that sense, Unpacking Tech Systems lives somewhere between fab lab, philosophy studio and design practice: a place where disassembly, reflection and fabrication happen in the same gesture, and where machines are not accepted as destiny but approached as arguments we are allowed to answer.
By Santiago Fuentemilla | CCL R1 O1 F1v1.0 | AI acted as AI for Insight in research, documentation and reflection. All other phases were fully human-led.
Check out more machines and experiments on the MDEF students’ webpages.