Over the past 10 years, TED Talks have tracked our ever-tighter relationship with technology, including the interfaces we use to access it: from keyboards and mice to magic wands and sensory vests. Our guide to this evolving field starts with MIT Media Lab founder Nicholas Negroponte's talk from the very first TED, in 1984, in which he makes five predictions about how our relationship with technology will change. Let's roll through all five:
In his talk, Negroponte asks the question that TED speakers have been trying to answer ever since: “Can it be a little bit more pleasurable to deal with a computer?”
To set the scene, remember that in 1984 you looked at computers via a TV screen, and as Negroponte reminds us: "TV was designed to be looked at from eight times the distance of the diagonal. So you get a 13-inch, 19-inch, whatever, TV, and you should multiply that by eight, and that's the distance you should sit away from the TV set." Over the next decades, we saw engineers and designers steadily remove that distance between the individual and the interface.
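Negroponte's rule of thumb is simple arithmetic. Here it is as a quick sketch (the function name is ours, not his):

```python
def viewing_distance(diagonal_inches):
    """Negroponte's 1984 rule of thumb: sit eight times the
    screen diagonal away from the TV set."""
    return diagonal_inches * 8

for size in (13, 19):
    print(f"{size}-inch TV: sit {viewing_distance(size)} inches away")
    # 13-inch -> 104 inches; 19-inch -> 152 inches
```

That's more than eight feet between you and a 13-inch screen, which is exactly the gap the talks below keep shrinking.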
In 2009, Pranav Mistry described his quest to pull interfaces off our screens and make them tangible in our physical environments.
He describes a device that would replace television, bringing images much closer to our bodies. “I actually thought of putting a big-size projector on my head. I think that’s why this is called a head-mounted projector, isn’t it? I took it very literally, and took my bike helmet, put a little cut over there so that the projector actually fits nicely. So now, what I can do — I can augment the world around me with this digital information.” Is he predicting the virtual reality headsets that we would see in 2015? No — he’s working on a way to augment our senses: “I realized that I actually wanted to interact with those digital pixels, also. So I put a small camera over there that acts as a digital eye. Later, we moved to a much better, consumer-oriented pendant version of that, the SixthSense device.”
The functions of SixthSense sound a lot like some of the goals of now-ubiquitous devices like iPhones and activity trackers: “You can carry your digital world with you wherever you go. You can start using any surface, any wall around you, as an interface. The camera is actually tracking all your gestures. Whatever you’re doing with your hands, it’s understanding that gesture.” We’re slowly removing that barrier between human and machine that Negroponte described earlier with TVs.
In 2015, Nonny de la Peña described the power of virtual reality as an opportunity for storytellers and readers alike: “What if I could present you a story that you would remember with your entire body and not just with your mind? My whole life as a journalist, I’ve been compelled to try to make stories that can make a difference and maybe inspire people to care. I’ve worked in print. I’ve worked in documentary. I’ve worked in broadcast. But it really wasn’t until I got involved with virtual reality that I started seeing these really intense, authentic reactions from people that really blew my mind.”
Without the separation between human and machine interface, "you get this whole-body sensation, like you're actually there," she says. Her stories introduced readers to a man falling into a diabetic coma while waiting in line at a food bank in Los Angeles, life in Syria during the civil war, and the night of the Trayvon Martin shooting. Her stories became emotional experiences because they engaged more of our senses.
Back in 1984, Negroponte described all the steps it took for a user to interact with information on a computer screen using a computer mouse, and our speakers offered some alternatives.
“When you think for a second of the mouse on Macintosh — and I will not criticize the mouse too much — when you’re typing, first of all, you’ve got to find the mouse. You have to probably stop, you find the mouse, and you’re going to have to wiggle it a little bit to see where the cursor is on the screen. And then when you finally see where it is, then you’ve got to move it to get the cursor over there, and then — ‘bang’ — you’ve got to hit a button or do whatever. That’s four separate steps, versus typing and just doing it all in one motion, or one-and-a-half, depending on how you want to count.” Negroponte was critical of how little we used our whole hands during this process.
Fast forward to 2015 and we’ve removed the mouse altogether; at TEDxCERN, Sean Follmer shows us a new touch-based interface that molds our input devices to our needs.
When we use the standard keyboard and mouse for everything from word processing to shopping to gaming, “it doesn’t allow us to interact, to capture the rich dexterity that we have in our bodies. We need new interfaces that can capture these rich abilities that we have and that can physically adapt to us and allow us to interact in new ways.”
A knockout moment in his demo: an interface that allows two people on a Skype call to reach out from the screen. A tool called inFORM “represents people’s hands, allowing them to actually touch and manipulate objects at a distance.”
What's an exciting alternative to the computer mouse, Negroponte asks in 1984? Maybe an early generation of touch screens.
“You could build a pressure-sensitive display. And when you touch it with your finger, you can actually, then, introduce all the forces on the face of that screen, and that actually has a certain amount of value. Let me see if I can load another disc and show you, quickly, an example …”
Ten years ago, Jeff Han demoed his breakthrough multi-touch screen at TED to gasps and two standing ovations.
As he describes the device, "It's about 36 inches wide and it's equipped with a multi-touch sensor. Normal touch sensors that you see, like on a kiosk or interactive whiteboards, can only register one point of contact at a time. This thing allows you to have multiple points at the same time. I can use both my hands; I can use chording actions; I can just go right up and use all 10 fingers if I wanted to."
Stepping forward one decade, at TED2016, augmented-reality demos from Meta and HoloLens presented new ways that users could interact with data.
Meta's headset projected holograms, creating an augmented reality that displayed not only what was physically present in the environment, but layers of additional images and information. Meta's Meron Gribetz presented their design strategy: "To isolate the single most intuitive interface, we use neuroscience to drive our design guidelines, instead of letting a bunch of designers fight it out in the boardroom. And the principle we all revolve around is what's called the 'Neural Path of Least Resistance.'"
The second design principle he calls “touch to see,” allowing users to use gestures to move images and information projected before their eyes.
Alex Kipman described Microsoft HoloLens as "the first fully untethered holographic computer. Devices like this will bring 3D holographic content right into our world, enhancing the way we experience life beyond our ordinary range of perceptions."
Negroponte was excited about computers moving into classrooms. Coding, he predicted, could be the next stage of education, opening a whole new spectrum of ways to "measure intelligence."
"You give a kid — a 3-year-old kid — a computer and they type a little command and — Poof! — something happens. And all of a sudden … You may not call that reading and writing, but a certain bit of typing and reading stuff on the screen has a huge payoff, and it's a lot of fun."
In 2016, Reshma Saujani pushed us to think about the ways that coding in education also meant empowering our girls. The Girls Who Code founder showed how coding taught students to celebrate and learn from their mistakes in ways that traditional educational programs did not.
“I started a company to teach girls to code, and what I found is that by teaching them to code I had socialized them to be brave. Coding, it’s an endless process of trial and error, of trying to get the right command in the right place, with sometimes just a semicolon making the difference between success and failure. Code breaks and then it falls apart, and it often takes many, many tries until that magical moment when what you’re trying to build comes to life. It requires perseverance. It requires imperfection.”
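Saujani's "sometimes just a semicolon" point can be shown literally. A minimal illustration (the snippet and helper below are ours, but the phenomenon is real; in Python the analogous single character is the colon after a `def`):

```python
def compiles(src):
    """Return True if src is syntactically valid Python."""
    try:
        compile(src, "<snippet>", "exec")
        return True
    except SyntaxError:
        return False

good = "def greet():\n    return 'hi'\n"
bad = good.replace("():", "()")  # remove a single character

print(compiles(good))  # True
print(compiles(bad))   # False: one character broke it
```

One character separates a working program from a broken one, and finding it is exactly the trial-and-error practice she describes.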
Coding, as Negroponte predicted, created a new space in education for a different type of learning and creating. This turned out to be a game changer in education strategy.
In his closing point, Negroponte proposes technology that replaces interfaces altogether with different objects — to sometimes surreal effect.
“We were asked to do a teleconferencing system where you had the following situation: you had five people at five different sites — they were known people — and you had to have these people in teleconference, such that each one was utterly convinced that the other four were physically present. Now, that is sufficiently zany that we would, obviously, jump to the bait, and we did. And we actually went so far as to build CRTs in the shapes of the people’s faces. So if I wanted to call my friend Peter Sprague on the phone, my secretary would get his head out and bring it and set it on the desk.”
In 2014, James Patten introduced new ways for museum visitors to learn about science in interactive exhibits. “I built an interactive chemistry exhibit at the Museum of Science and Industry in Chicago, and this exhibit lets people use physical objects to grab chemical elements off the periodic table and bring them together to cause chemical reactions to happen.”
“And the museum noticed that people were spending a lot of time with this exhibit, and a researcher from a science education center in Australia decided to study this exhibit and try to figure out what was going on. And she found that the physical objects that people were using were helping people understand how to use the exhibit, and were helping people learn in a social way.” There were opportunities for both more introverted and more extroverted learners to interact with machines to improve their experiences!
At TEDxSydney in 2015, Tom Uglow envisioned a world where users could interact with the internet all around them without using screens.
“Your phone is not very natural. And you probably think you’re addicted to your phone, but you’re really not. We’re not addicted to devices, we’re addicted to the information that flows through them. Reality is richer than screens.”
Imagine a forest where “children might have an opportunity to visit an enchanted forest guided by a magic wand, where they could talk to digital fairies and ask them questions, and be asked questions in return… I’m very excited by the possibility of getting kids back outside without screens, but with all the powerful magic of the Internet at their fingertips.”
Another version of this information without screens came from David Eagleman’s talk in 2015, where he unveiled a vest that added another dimension to human senses.
Our brains were not created to understand the scope of the entire universe, he says. “Now, what this means is that our experience of reality is constrained by our biology, and that goes against the commonsense notion that our eyes and our ears and our fingertips are just picking up the objective reality that’s out there. Instead, our brains are sampling just a little bit of the world.”
In a dramatic moment, he unveils the vest that processes data from the internet into patterns for our bodies to interpret, adding a new sense that could help “an astronaut being able to feel the overall health of the International Space Station, or, for that matter, having you feel the invisible states of your own health, like your blood sugar and the state of your microbiome, or having 360-degree vision or seeing in infrared or ultraviolet.” Maybe the next sense, the next interface would be further augmentations to our bodies, creating a true symbiosis between person and machine.
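The core idea behind the vest can be sketched in a few lines: take a data stream the body can't sense directly and re-encode it as a vibration pattern across a grid of motors. The eight-motor "thermometer" encoding below is our invention for illustration, not the vest's actual scheme:

```python
MOTORS = 8

def encode(reading, max_reading=255):
    """Map a 0..max_reading sensor value onto 8 vibration motors:
    the higher the reading, the more motors buzz at full strength."""
    active = (reading * MOTORS) // (max_reading + 1)
    return [255 if i < active else 0 for i in range(MOTORS)]

# e.g. a half-scale reading (say, blood sugar) lights up half the vest
print(encode(128))  # [255, 255, 255, 255, 0, 0, 0, 0]
```

The brain, Eagleman argues, is general-purpose enough to learn to read patterns like these as a genuine new sense.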
The machines we've encountered over the last few years help us connect more closely, feel more deeply, and interact more seamlessly with technology. It's exciting to see how many of Nicholas Negroponte's predictions came to fruition in ways he couldn't imagine.
Read more here: From 1984 to 2016: TED Talks about interfaces to our technology