AI and Design: Putting People First

A discussion on how designers can harness and humanize AI’s vast potential

Today’s product designers face a question their predecessors—or even their younger selves—never had to ponder: Will artificial intelligence solve this problem in a unique way? More and more, the answer is yes, with the caveat that AI isn’t a universal solution but something that, in the right instance, can improve an experience by offering people new kinds of predictive information, personalized services, or even a deeper understanding of their own needs. For designers, this technology glimmers with opportunity while raising a whole host of new questions: Is AI a material, a tool, or both? How can we become AI-fluent, and ensure that algorithmic decision-making translates into a meaningful experience for everyone?

New guidance may help pave the way: PAIR’s People + AI Guidebook and Material Design patterns for the ML Kit API each offer tactics and advice for creating products with AI. “We’re setting up the scaffolding so our users can understand this new technology,” says Material Design creative director Rachel Been. Yet building that framework requires a thoughtful, nuanced approach that’s deeply rooted in human needs. We sat down with Been, Öznur Özkurt, a design manager at DeepMind Health, and Jess Holbrook, a PAIR lead and one of the creators of the People + AI Guidebook, to better understand how designers can harness and humanize AI’s vast potential.

Google Design: What does “human-centered AI” mean to each of you?

Jess Holbrook: Right now, AI is the new flashy technology. We’re in the early days compared to the design eras of personal computing, the web, and mobile, but we’re still seeing the same trends we saw with previous waves of technology: You experiment, your product looks really cool, and then you stumble by building an experience that doesn’t address a real human need or aspiration. We’re reminding people of what we know works, which is to put people first and work from there. It’s an evergreen approach—if you start with people, then any exploration, product design, or research you do will follow a fruitful path.

Rachel Been: Human-centered AI is also about managing the unpredictability of AI and ML. As designers, we need to be flexible and ready to react to new questions: What if the user gets an error? What if she wants more transparency into what the AI is doing? How do you onboard her, so she understands it? With more traditional design patterns, you have a linear progression of the user experience. With AI, we have a different set of considerations.

Jess: Right, because AI allows you to escape the scale of cause-and-effect relationships that humans are used to. But taking this technology that operates beyond the human scale, and explaining it so people can actually understand it—that’s a fundamental human-centered AI design challenge.

Öznur Özkurt: How we explain the technology is an interesting question. When we first started working with AI and ML, we thought we’d need to show users the inner workings of the algorithms in order to get them to use the technology: where the data comes from, and what calculations come out of it. But we found that people don’t necessarily need to understand the math behind an algorithm to trust it; the algorithm can show the user what it’s thinking by outlining what it sees. In our work on eye disease, for example, machine learning models in digital imaging can pick out signs of a condition, like lesions or irregular fluids, and then recommend to the clinician which condition might be developing. You can make sense of the result without needing to fully understand the calculation. We want to create a narrative that’s less like a user manual and more like decision-making support for the user.
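To make that idea of decision-making support concrete, here is a minimal, hypothetical sketch in Kotlin. Everything in it (the types, the labels, the condition name, the confidence values) is illustrative, not DeepMind’s actual system: the interface receives only the model’s findings and a suggested condition, and presents them as outlined evidence plus a recommendation, never the underlying math.

```kotlin
// Hypothetical types: illustrative only, not DeepMind's actual output format.
data class Region(val x: Int, val y: Int, val width: Int, val height: Int)
data class Finding(val label: String, val region: Region, val confidence: Double)
data class Assessment(
    val findings: List<Finding>,
    val suspectedCondition: String,
    val confidence: Double,
)

// Decision support: outline what the model "sees" and suggest a next step,
// rather than exposing weights, activations, or raw math.
fun summarizeForClinician(a: Assessment): String {
    val outlined = a.findings.joinToString("; ") { f ->
        "${f.label} near (${f.region.x}, ${f.region.y}) at ${(f.confidence * 100).toInt()}% confidence"
    }
    return """
        Highlighted findings: $outlined
        Suggested condition: ${a.suspectedCondition} (${(a.confidence * 100).toInt()}% confidence)
        Please review the outlined regions before confirming a diagnosis.
    """.trimIndent()
}

fun main() {
    // Example values are made up for illustration.
    val scan = Assessment(
        findings = listOf(
            Finding("lesion", Region(120, 88, 24, 24), 0.91),
            Finding("irregular fluid", Region(200, 140, 40, 18), 0.77),
        ),
        suspectedCondition = "macular edema",
        confidence = 0.84,
    )
    println(summarizeForClinician(scan))
}
```

The point is the shape of the output: the clinician sees what the model saw and how confident it is, which is usually enough to calibrate trust without a tour of the algorithm.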

Rachel: When we designed patterns for ML Kit, there were moments in the demo experience when the Object Detection API—which uses visual search to identify an object—would recognize an object immediately, with no latency or delay at all. It was actually a terrible user experience because it worked too quickly for the user to comprehend. It raises the question of whether we should cater to people’s mental model of how computation works, and deliberately slow things down to give the user a moment to understand the action occurring. It’s important not to heroize technology, and instead see it as a tool that can lead to a better experience for the user.
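One way to implement that pacing, sketched in Kotlin under stated assumptions: this is not the ML Kit pattern code itself, the 300 ms threshold is an illustrative value, and the detect lambda stands in for whatever recognizer call your app actually makes. The wrapper simply withholds a result until a minimum perceptible delay has elapsed.

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking

// Minimum time before a result is shown, so the action reads as work.
// 300 ms is an illustrative value, not a Material-specified constant.
const val MIN_PERCEPTIBLE_MILLIS = 300L

// Run the detection, then pad out any remaining time if it returned
// faster than a person can comprehend.
suspend fun <T> withPerceptiblePause(detect: suspend () -> T): T {
    val start = System.currentTimeMillis()
    val result = detect()
    val elapsed = System.currentTimeMillis() - start
    if (elapsed < MIN_PERCEPTIBLE_MILLIS) {
        delay(MIN_PERCEPTIBLE_MILLIS - elapsed)
    }
    return result
}

fun main() = runBlocking {
    // Stand-in for a detector call that may return near-instantly;
    // swap in your real recognizer here.
    val label = withPerceptiblePause { "coffee mug" }
    println("Detected: $label") // surfaced only after the perceptible pause
}
```

The design choice here is counterintuitive but simple: a result that appears too fast can read as untrustworthy, so a brief, deliberate pause gives the interaction a comprehensible rhythm.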

Google Design: The idea of seeing AI as a tool gets at another shift for designers, which is the introduction of AI as a design material. How do you interpret that?

Rachel: I’ve been thinking about this: Is artificial intelligence a material or is it a tool? It’s malleable and can mold to the user like a material, because it can “remember” the user’s inputs. But we’re also using AI as a tool to shape front-end user experiences.

Öznur: Our research team focuses on creating algorithms that are specifically tailored to predicting health conditions, so that’s an example of AI as a material. Whereas when a group like Google Photos adds intelligence to its product’s search feature to create a new way of sorting your images, the AI becomes more of a tool for people to use.

Jess: I typically think of a material as having useful boundaries—you know when it breaks. You know label makers, where you punch in letters on plastic and little white letters punch out? It’s a beautiful example of using the limitations of a medium as the design, because you essentially use a failure case of applying too much pressure to another material to create the interface. On the AI side, sometimes we kick around that idea: How can you show the limitations of the material to help people understand its capabilities better?

Rachel: We’ve already been able to play with the limitations of machine learning for interesting purposes, specifically in art—experiments like creating music or an entire science fiction movie auto-generated by ML. The limitations of the technology create an uncanny-valley effect that’s a little awkward, but that’s the part we appreciate as art—we perceive it as interesting. When a similar off-ness occurs in a utilitarian context, as I imagine it might for Öznur in healthcare, or when we’re trying to be assistive with something sensitive, those limitations feel very unfortunate. It depends on the use case: the limitations can be fun and artistic, but when it comes to something like healthcare, that’s not the case.

Google Design: How do you balance experimentation with the need to create standards that give designers a shared language?

Rachel: It feels important to introduce a degree of consistency, so we can all share and build on that. The statements in the People + AI Guidebook can be read as more suggestive than declarative; it’s somewhere between tactical guidance and philosophical best practices. But it does introduce some conceptual frameworks. Activity on the front end of an app or service might shift dramatically as better hardware, better data, or augmented reality comes into play, but at least there’s a mental model for understanding what’s happening.

Jess: The guidebook is meant to be tactical in the sense that designers should be able to read through it in the morning and change the way they work that afternoon. Right now there are high-level conversations happening around ethics and morality in AI, as well as around the business angles of AI, but no matter what your motivation, a human-centered approach will serve you better. So the guidance is the connective tissue between those conversations and the available tools.

Öznur: Right, instead of trying to make sense of a completely undefined field, you play within boundaries and have something to bounce your ideas off. Establishing a method of explainability, then using levers to get feedback from users—these are principles worth codifying. As designers, we’re used to working within constraints anyway.

Jess: Everybody appreciates a good constraint.

Öznur: At the same time, this is not a sticker sheet. You need to do some translation with the guidance and say, Okay, these principles emerged from people who work in this field, but how do they apply to my day-to-day job? That’s the fun part, actually: learning from the guidance and adding something of your own.

Google Design: If AI is rapidly evolving, how do you go about future-proofing guidance? Is that even possible?

Rachel: There’s a huge difference between releasing guidance that codifies a digital button and creating guidelines for AI, which is ever-changing. So it’s our responsibility as designers to give users a heads-up: explaining when a piece of tactical advice is grounded in years of research and standardization and won’t change, and when it’s guidance for AI that will change as the field rapidly evolves.

Jess: From the beginning, we’ve always said that our goal is not to make Google the one and only expert. Our goal is to give away this knowledge to the next thousand experts, who can then share it with the next 10,000 experts, who will keep sharing from there. The guidebook is one of the building blocks in that process. For now, this is version 1.0.

Start designing human-centered AI products with the People + AI Guidebook.