You touch your phone an average of 2,617 times per day – more, if you’re a heavy user. That’s 18,000 times a week. Nearly one million times a year. Enough that all those swipes, taps, drags, flicks and pinches feel both hard-wired and totally natural.
Of course, our interactions with our touch screens are neither of those things: They’re deliberately planned, user-tested and designed toward specific purposes, by people with self-serving goals. And Ben Grosser, an artist whose work interrogates power and technology, thinks we ought to pay more attention to both.
Last week, Grosser released the first in a series of three videos that will examine how we interact with our touch screens and how those interactions are represented in modern television and movies. This first one is a supercut from Netflix’s “House of Cards,” already pretty well-known for its depictions of technology. Watching it, you get the sense that the characters’ phones, laptops and tablets are far more than tools – they’re caressed and protected like amulets, and they seem to have their own sort of inherent power.
Or, as Apple writes in its interface guidelines for app developers: “people interact with an iOS device by performing gestures on a touch screen. These gestures elicit a close personal connection” between the device and its user.
We caught up with Grosser to ask him more about the weird, tactile relationship between people and their phones and the way that plays out on the screen.
Q: What first got you interested in this idea of touch interfaces and the way we interact with them (… or they interact with us, as the case may be)?
A: At some point a couple of years ago I started becoming aware of just how delicate my own manipulations of trackpads and touch screens had become. With all of the gestures these interfaces now support, it almost seemed as if my hands were engaged in a dance with the computer.
In fact, in a study for this project (before I knew it would lead to this supercut), I shot video of my own hand using the trackpad on my laptop for a long period just so I could study it. I was amazed at how soft and intimate these movements are. That got me asking questions about how and why my hands were moving these ways, focused on who or what was directing these movements. In other words, how are the designs of software changing not just how we move, but when, and why? Whom might such directed movements benefit?
Q: Can you elaborate on that? You seem to suggest certain touch interfaces encourage certain types of behaviors or emotions – what kinds of behaviors? How does that differ by device?
A: Let’s start with the material properties of many touch-based interfaces: smooth flat glass. I don’t know about you, but when I encounter such a surface, I feel physically compelled to lightly run my fingers over it, to gauge just how smooth it really is. This light touching initiates the relationship between user and device, and arguably sets the default interaction as careful and deliberate.
Beyond the glass is the software interface. We touch a button to activate a function, or swipe our finger to scroll the text. In other words, we use our fingers to drive the software. But just as much as we direct the software with our fingers, that software directs us back.
Software determines whether we swipe or tap, drag or hold. Software creates the conditions for combinations of these actions, so that moving a file from one folder to another on my laptop becomes swipe, tap, slow two-finger drag, double-tap, hold, drag, and tap again. This little dance is utilitarian of course, but it is also a choreographed movement I’ve learned so well that I never think of it this way at all.
Several factors affect how these movements differ across devices. Size of the device is a big one: The swipe to go “back” on a small phone can be quite different from the similar action on a large tablet. Interface conventions also play a role, leading us to lightly tap small clickable text or to heartily stab a large 3-D button.
I can consciously choose to resist the way the device wants me to move (e.g. by daintily flicking my finger in tiny movements to scroll a huge long list), but the software is certainly suggesting, through its design, the “correct” way to move. You might even say that software wants its users to move in certain, prescribed ways, and that defying these prescriptions can feel “wrong” or misguided.
Q: Average users don’t really think about this stuff, as you’ve pointed out. We tend to forget that all these interfaces have been consciously designed with specific ends and values in mind. Why is it important that users be alert to that fact?
A: Yes, while many think of software as neutral, software is designed by humans, and as such both reflects and projects the ideologies of those who write, fund, and direct its production. For example, Mark Zuckerberg might argue that Facebook merely facilitates communication and connection. Yet Facebook’s design decision to quantify social interaction through the counting of “likes,” shares, and friends has tremendous effects on what people write as a status or who they choose to “friend.”
One reason it’s important for users to think critically about the software they use is that not only might it be designed to encourage or direct a specific kind of action or reaction (such as clicking ads), but software might also have embedded within it unintended biases or worldviews.
A classic example is the fact that computer vision face-detection algorithms have trouble finding faces with dark skin. Why? Not because the programmers set out to create a racist algorithm, but because the developers tested it on their own light-skinned faces. Here the software’s limited capacity to see faces of all colors reflects Silicon Valley’s poor record of prioritizing diversity in hiring. To bring it back to Facebook, that site’s constant quantification of social interaction through the counting of “likes” and friends points to the Valley’s neoliberal worldview that sees everything – including friendship and conversation – as a market to measure.
Whether intended or unintended, because software abstracts and hides so much of the labor that went into its creation, users need to examine software through a critical lens, questioning the assumptions it makes and the affordances it grants.
Q: Let’s talk about “House of Cards,” specifically, since that’s the case study for your first video. How would you characterize the role of technology in that show?
A: Technology plays many different roles in “House of Cards.”
Sometimes the tech is overtly present, such as when smartphones or tablets appear in the actors’ hands, helping us identify with them. Sometimes the tech is present in a secondary way, such as when touch-based software interfaces direct actor movement and performance. Software itself becomes a frequent plot element (e.g. social networks or big-data surveillance), driving action, producing conflict, or enabling mischief.
But throughout the show there is also unseen and unmentioned technology in the background. I’m talking about the show’s reliance on viewer tracking data to guide everything from character and plot development to how the show looks and sounds. This approach – measuring viewer interest and activity to give viewers more of what they (seem to) want – was pioneering in TV, but is also part of a wide scale trend in everything from social media to search engines.
This leaves me thinking about the potential conflict between the show’s narratives of digital connection/technological ubiquity and Netflix’s viewer data surveillance. Might one enable acceptance of the other? Or more broadly, how are the show’s technology stories positively contributing to Silicon Valley’s overarching narrative of technology as neutral? As someone who questions that narrative, I mean for this project to encourage critical observation of software’s role in contemporary culture.
Q: One thing that made this supercut really interesting to me is the fact that so many of these touch interactions, when viewed all together, felt very alienating and non-intimate. That’s the exact opposite of how we usually think about touch screens, right? The word “intimacy” tends to get thrown around a lot in connection with them. Am I totally projecting that sense of alienation? And what are we supposed to make of it, as consumers of culture/tech?
A: I see examples of both intimate and distant/alien touch interactions in the show.
For example, we sometimes see characters taking their time with an interaction, such as when Jackie dramatically touches Remy’s name on her phone to call him, or when Doug literally caresses the trackpad on his laptop as he looks at photos of a woman he’s obsessed with. We also see hesitation as gesture, such as when Doug pauses before deleting Rachel’s contact info from his phone.
But as you noted, much of the touching is very quick, strong, or deliberate, projecting power, confidence, and assertiveness. We see examples of this in the many phone calls or texts by Frank and Claire. I would posit that these performances of power through interface not only portray those characters as in control, but possibly also elicit empathy for them at the same time (despite those characters having questionable ethics or motives). In other words, forceful and deft manipulation of technology might help us accept that, while we can’t see or know what they do or how they do it, their actions are somehow essential and necessary.
I find this an interesting parallel to the overall narratives of technology and software within contemporary culture. We’re not supposed to worry about how software is made or who makes it; instead we should marvel at what it lets us do. Unquestionably technology makes possible amazing things, but it’s also our responsibility to examine it from a critical perspective. Anything less risks ceding agency and control over our own lives to a small set of entrepreneurs and programmers.
(c) 2016, The Washington Post · Caitlin Dewey