Carla Diana
Human-Centered Robots and the Future of Design
In this brand new episode of Brave UX, Carla Diana challenges how we think about robots and automation 🤖, explores the future of public transport and mobility in America 🚎, and reveals why designers must take responsibility for how their work is used ⚖️.
Highlights include:
- Why do we need to rethink how we design and define robots?
- What would it take for Americans to embrace public transport?
- How can designers prevent their work from being misused?
- How is generative AI shaping social robotics?
- Do we have to accept more fatalities caused by autonomous driving?
Who is Carla Diana?
Carla is the founder and designer-in-residence of the Interaction Design Program at Cranbrook Academy of Art 🎨, where she leads a groundbreaking program dedicated to exploring the future of smart objects, immersive environments, and intelligent systems.
Carla’s influence extends to her role as former Head of Design and now Design Advisor at Diligent Robotics 🏥, where she focuses on humanising robots that assist healthcare workers. Prior to Diligent Robotics, Carla held roles at top innovation design firms Smart Design and frog design, where she worked on a range of products from robots to connected home appliances.
She is the author of LEO the Maker Prince 📖, the first children’s book that introduces the world of 3D printing through an engaging narrative, inspiring young minds to explore the possibilities of making and innovation. Her latest book, My Robot Gets Me, explores how human-centred design makes smart technologies more intuitive and relatable.
Carla has been a speaker at well-known conferences and events such as SXSW, TEDx, and the Interaction Conference, sharing her insights on the future of design and technology 🚀. Her writing has also been featured in prominent publications like Fast Company, The New York Times, and Popular Science.
Transcript
- Carla Diana:
- Certainly as designers, but even as consumers, we shouldn't just accept what is being offered to us point blank. We should have conversations and think and be critical and say, well, what does it mean to have a nanny robot? What human interaction is that replacing, and is that a value that we support, and why or why not?
- Brendan Jarvis:
- Hello, and welcome to another episode of Brave UX. I'm Brendan Jarvis, managing founder of The Space InBetween, the behavior-based UX research partner for enterprise leaders who need an independent perspective to align hearts and minds. You can find out more about me and what we do at thespaceinbetween.co.nz.
- Here on Brave UX though, it's my job to help you to keep on top of the latest thinking and important issues affecting the field of design. I do that by unpacking the stories, learnings, and expert advice of a diverse range of world-class leaders.
- My guest today is Carla Diana. Carla is an influential designer, author and educator whose work is at the forefront of the intersection between technology and human-centered product design. She's currently the founder and designer-in-residence of the Interaction Design Programme at Cranbrook Academy of Art, where she leads a groundbreaking programme dedicated to exploring the future of smart objects, immersive environments, and intelligent systems.
- Carla's influence extends to her role as former head of design and now a design advisor at Diligent Robotics where she focuses on humanising robots that assist healthcare workers.
- Prior to Diligent Robotics, Carla held roles at top innovation design firms Smart Design and frog design, where she worked on a range of products from robots to connected home appliances.
- She's the author of LEO the Maker Prince, the first children's book that introduces the world of 3D printing through an engaging narrative, inspiring young minds to explore the possibilities of making and innovation. Her latest book, My Robot Gets Me, explores how human-centered design makes smart technologies more intuitive and relatable.
- Carla has been a speaker at well-known conferences and events such as South by Southwest, TEDx, and the Interaction Conference, sharing her insights on the future of design and technology. Her writing has also been featured in prominent publications like Fast Company, The New York Times and Popular Science. And now she's here with me for this conversation on Brave UX. Carla, a very warm welcome to the show.
- Carla Diana:
- Thank you. Thanks for having me. Thanks for that great introduction. I really appreciate your interest in my work.
- Brendan Jarvis:
- Oh, you certainly made it interesting and easy for me to cover off all those wonderful highlights. One of the things that I didn't cover off in the intro is that you currently live in suburban Detroit, but you grew up in New York City, where I understand that your parents didn't have a car. It sounded to me like that was intentional, because you've also previously spoken about how they used to take you overseas to explore the world, but they always chose to do that via public transport rather than get rental cars. Why were your parents so intentional about how you got around when you were travelling?
- Carla Diana:
- Well, what's interesting is that living in Detroit, I'm completely immersed in this American car culture, so your question is very interesting because they would have had to be intentional to have a car. When you grow up in New York City, you don't need one. It was never even a choice; it was just the default state of being a New Yorker. So to me, what has been uncomfortable has been living in suburban Detroit, or even if I lived in downtown Detroit, frankly, I would need a car, and how much of a demand that is day to day. So I have a new project that I'm working on for this coming May, an exhibition that explores this topic from a kind of utopian design point of view: could it be different here in Detroit? But yeah, my parents didn't even have driver's licences, and neither did I until I think I was 20 or 21 and had graduated. My first degree was in mechanical engineering because I simply wanted to make things, and that's what you pursue when you want to do such a thing, and a lot of the manufacturing jobs would be in suburban New Jersey or someplace where I wouldn't be able to access it by subway. So I said, all right, well, I'd better get a driver's licence and then a car.
- Brendan Jarvis:
- Necessity forced your hand there. I understand Detroit has recently been going through a bit of a resurgence. There was a period there where there seemed to be quite a lot of urban decay going on in Detroit. It was a bit downtrodden and the city's spirits weren't so high, but that's been changing lately. I also understand that Detroit used to have a fantastic public transport system back in the day, although I'm not quite sure just how long ago that may have been, which is somewhat similar to here in Auckland. We used to have trams and other forms of public transport that then completely disappeared as roads took over. You mentioned the project that you're working on at the moment to explore what sounds like the future of transport in Detroit. What are the key threads or themes that you're working on in that particular project?
- Carla Diana:
- Yeah, so what you're saying is absolutely correct. Detroit in the thirties and forties had one of the best public transportation systems in the country, and now most of those lines have been sold off and dispersed for a variety of reasons. It's very easy to say, oh, the auto industry shut it all down, which is certainly no small part of it, but it's much more complex than that, as I've been looking into it. But I moved here for this amazing opportunity to start this two-year Master of Fine Arts programme in design and technology at a really historic institution known for design and art. And anyway, folks ask me, well, how do you like being there? And I've always felt like, well, everything's wonderful, except the lack of public transportation really bothers me from a community point of view, from the point of view of the isolation that gets bred by every individual being in their own little box and bubble, and having to establish parking lots and parking garages in those public spaces.
- So that's what you see first. I mean, I sit at many cafes at a table looking out at a parking lot, and so it made me feel like I need to do something about this, and I come at it from my own point of view. My field of expertise, my entire career, has actually been focused around the design of smart objects, as you mentioned, and a little more specifically in the last decade around design for social robotics, a specific field in robotics that focuses on how humans interact with machines and how machines are designed for us to understand that interaction. And what came together for me was that right now there is a lot of work going on around autonomous vehicles, and there's a lot of design work that needs to happen from a social point of view. We have vehicles in the street with pedestrians, we have vehicles in the street with other vehicles, and even if we eventually had all autonomous vehicles, we would have a period of time where we had human drivers alongside autonomous vehicles, and there are so many subtleties that we don't think about, like making eye contact or gesturing, or when a pedestrian steps halfway into the road, what does a driver do?
- So a lot of my peers and friends in the field of design for robotics were working on this. And so I started learning more about how that dovetailed with urban planners and what urban planners saw as the future for autonomous vehicles. And there are a couple of different ways that it can go. The default way would be for these isolating boxes that we're all in, that drive me crazy, to just become autonomous. And then we all have these little living rooms, a car that's even more expensive that our family has to maintain. Or we could take a look at this confluence of electrification, automation and the sharing economy and think about solutions that are more community-wide, where we have shared autonomous vehicles. In some of my readings, there's a book I love called Ghost Road, by Anthony Townsend.
- It talks about what he calls software trains, because a lot of cities like Detroit will say, well, we don't have the infrastructure anymore, but there's a sense in which, with networked vehicles, the cloud is the infrastructure and the networking is the infrastructure. So in a shared system you could potentially have trains that come together and then break apart to go the last mile or the last two miles. And that got me really excited, thinking about this kind of system as a potential future for Detroit. And so the project that I'm working on is an exhibition that will take place at a place called New Lab. It's part of Michigan Central in downtown Detroit. Michigan Central is the old train station, which was just recently renovated and opened and has had a number of public events, and is going to be a tech centre for Ford. And then there's a more independent space called New Lab that is a hub for technology startups. So that's a very long answer, but essentially the project I'm working on is supported by a grant from the Knight Foundation, and it is me sharing this vision: Detroit is known as Motown, the Motor Town, and my project is called Mobility Town: Out of Our Cars and Into the World. That's the name of the project.
- Brendan Jarvis:
- That's a great name. And you mentioned, I think you described cars as, did you call them isolating boxes, or something to that effect?
- Just translating for others in case they missed that subtle jab. I was curious about that description, in particular as it relates to the social aspects of public transport. Because if you've ever travelled the subway in New York, which I know you have, or if people listening have, or they've travelled the Tube in London or any major city's shared transport infrastructure, people are together but still alone, and most often glued, in their own mini isolating boxes, to the screens in front of their faces. And so I'd be curious to understand from you what thinking, if any, there is, or what the design challenge might be, around how we might imagine transport in the future, and how we might introduce more sort of social permission for people to engage with each other when they're in those shared spaces?
- Carla Diana:
- Yeah. Well, I think that's certainly an accurate possible prediction, and I see exactly what you're talking about. I will say that the project that I'm working on will be pretty broad strokes. I'm collaborating with a projection mapping artist named Amichi Nakamura, who does these lovely creations with very minimal creatures, and we're painting a picture on the walls of what this community-based system might be like. But from what I'm thinking about for Detroit, I feel like your question is drilling down to something real, and I love the book Alone Together by Sherry Turkle; I know that her research really delves into that. But I'm thinking about some other moments, for example, just walking, because I feel like the biggest resistance that I've been getting when I talk to people about the project is around folks not wanting to give up this mental model of every individual having a car.
- And I feel like there are two big asks in this picture I'm going to be painting. One is that you may have to wait maybe 10 or 15 minutes, as opposed to being able to just jump in your car, if you're waiting for this system to put riders together. So that's a big ask. And you may have to walk half a kilometre, a quarter of a mile, something like that. But from my point of view, I love what that suggests might happen socially. It's why the project is called Out of Our Cars and Into the World: this idea of, you know what, I'd love to walk a quarter of a mile. If it's two miles, four kilometres, that's a long way. Half a kilometre, not so much. But it opens our eyes, and then it opens opportunity for the boutiques or smaller businesses or community-oriented events.
- In addition, what's being proposed by urban planners is that if we had these shared vehicles, we would actually have many fewer cars on the road as well as in car parks. So then we have a lot more opportunity for community spaces, green spaces or park spaces or event spaces, what have you. And so then you have people outside walking, coming in contact with one another, for good and for bad, and just more of an awareness of ourselves and our shared spaces outside of our homes and our garages. And then certainly sharing the interior space of whatever these vehicles are, these software trains, brings that friction, which, I mean, I went to high school on the New York City subway. I was a 13-year-old schoolgirl in a uniform. I'm just going to say it, stuff happened, and it's why I'm not afraid of public spaces, and it was all good. I mean, I'm laughing; it's not all good, right? The interior of a New York City subway car is real. But at the same time, there's still that problem you were describing: everyone's going to be in that car on their screens and not necessarily talking to one another. But there's a lot of hope for that 15-minute walk and those three blocks and the new green spaces. So that's what I think about.
- Brendan Jarvis:
- Yeah, it's very easy to say it won't necessarily be any better from a social perspective than what we currently have, but you never know unless you give it a go. I suppose underlying all of this, what you're saying sounds like it would require a massive social shift in the way in which people in Detroit, or probably more broadly in America, think about what they value and how they want to live their lives. And I imagine that that's going to run up against some fairly deeply ingrained social beliefs.
- Carla Diana:
- Yeah, I think that's the biggest challenge with the message that I'm trying to convey, which is why the project is really focused on the message as opposed to any kind of particulars of a vehicle design. Originally this project started because of the success of the My Robot Gets Me book, and then I had a publisher who was interested because I had mentioned this interest in public transportation and autonomous vehicles. So it started out as a book proposal, and some of the reviews from experts came back a little harsh, saying, you know what? These are ideas that academics and urban planners have been talking about for many years. The bigger problem is changing people's minds and changing the mental model, for, I forget how you put it, want of a better phrase, of what we expect to have as American families, as American cities. That's a huge challenge.
- Brendan Jarvis:
- It is a huge challenge. And I think, without going into this too deeply and getting out of my depth here, I imagine that some of the rational reasons for creating a more efficient public transportation system, the allocation of a shared resource across the demand of the people who want to engage with it, versus keeping a car parked for 90% of the time and then using it for the other 10%, all of those rational reasons get conflated with a bunch of emotional thinking around a kind of scientific utopia, but maybe crossing over into what some people may consider the collective or the commune. It sort of veers into that territory for some people.
- Carla Diana:
- Yeah, it's definitely collectivism, definitely asking people to think of our communities as a shared space as opposed to a collection of individual families and homes.
- Brendan Jarvis:
- Well, it's certainly a bold endeavour, and I definitely wish you all the best. I think it's an important narrative to start to have more of a conversation about, so I hope it goes really well. You mentioned My Robot Gets Me, and I wanted to talk to you about that. Just to refresh people's memory here: in the book you explore how social design can be applied to smart technologies to make them more relatable and human-centred, or at least that was my take on things. And you've previously spoken about how the term robot, or robots, is problematic. So much so that you've said that we should deprogram ourselves from the popular sci-fi association that we hold when we think of a robot. What's wrong with the way we currently think about robots?
- Carla Diana:
- The essence of it is all of the stories that we have in our subconscious around seeing a robot as another human, a human-ish entity, and then so much baggage that goes along with that. And some of it's good. Some of it is along the lines of what I talk about in the book. Some of it is: hey, if you could just hand your appliance something, I'm holding a cup, and say, hey, can you clean all of these things up and put them in that cupboard? I could just do it that way; I wouldn't have to programme it. That would be a lot of code and need a lot of testing. If I could just do those gestures and point and hand things over, and have something that had hands and eyes and could nod, and that's the field of social robotics. It's all about thinking about those gestures and nuances and building those in, so I'm kind of for anthropomorphism in that sense.
- But then there is just this larger narrative around thinking about, first of all, just the need for what I just described. If I then ask someone, tell me what that appliance looks like, now I'm calling it an appliance deliberately. If I call it a robot, suddenly folks think, well, it's on two legs and it walks in your space. And there are so many layers of why that's concerning, first of all from just a really pragmatic design and engineering point of view. Even just having something ambulatory as opposed to, let's say, on wheels, having it have two arms with fingers, every little motor that needs to go in every knuckle, those kinds of things, it's wasteful. So there's that, right? As opposed to thinking, maybe it's just a pyramid that rolls into your space, and because of the needs of the situation, a pyramid with cup holders in it and suction cups.
- So there's a practical point of view, but then there are also all of the fears and the horror stories that we've been fed about robots and robotics. And there's the very, very complex conversation around jobs and automation, which is a very big topic, and ultimately I feel like that's a capitalism problem and not a technology problem. Again, it's kind of a distribution of resources and thinking communally, but that really relies on corporate entities doing the right thing. And I mean, there are so many, I think, sociopolitical questions around that. But getting back to your core question, from a design point of view, where I bump against it is often when I give talks; it's one of the questions that inevitably comes up. So for Diligent Robotics, for example, I was the head designer. I essentially drew this robot and made it in CAD and worked with this amazing, amazing team of engineers and the company founders, Andrea Thomaz and Vivian Chu.
- The core value of the company is social robotics, and our design philosophy really embraces social robotics. So it has a head, it has kind of eyes, they're just LED rings, but it can gesture, it can nod, it can have gaze. In other words, you feel like the robot is looking at you, and it can have shared attention. So if there's a spill and I look at the spill, the robot can look at the spill. It has gestures; it's human-ish. Anyway, the question that comes up is, isn't this a terrible thing, robots taking jobs? And when I hear that question, well, first of all, Moxi is the name of the particular robot; the company is Diligent, the robot is Moxi. This robot produced by Diligent is specifically geared to a real, painful need in the healthcare industry: to relieve nurses of work that's so exhausting that they're quitting in droves, that they're clocking 10 kilometres, actually clocking 10 miles, in the hallways of the hospitals.
- Or they're sitting inside closets doing assembly work for kits and things like that instead of being at a patient's side. There's so much value to having the nurses not quit and get exhausted. So there's that, but granted, that's this particular sector. But even something like, let's say, a school bus driver, I think, is better served sitting with the kids and monitoring them than sitting behind the wheel, if we can have that part of it be automated. So when I think about it, the core of the question I think often comes from fear, and the fear comes from the use of the term robot.
- If I think of that nurse's assistant, and even the fact that I'm calling it an assistant, part of our design approach is to have the robot be a member of the team, but frankly, the robot is a tool. It's a tool. It happens to be a tool that can travel the hallways and go to the pharmacist and pick up meds and go in the elevator, all of the challenges that I love in design projects, but it's a tool. It's a tool for nurses to fulfil their job. But once we call it a robot, then it starts to be an employee, and it's not actually an employee. So there are benefits to it. And actually, one of the things that I think Diligent has enjoyed and that I've enjoyed: there are a couple of these robots in the hospital near me. I was there with a friend who needed to fill a prescription, and I did this terrible thing: I just stopped someone and said, I heard there's a robot in this hospital, isn't that true? I totally did this sort of fly-on-the-wall thing. And I love that people light up and they go, yeah, and the nurses will greet her, like, hey girl, which brings up its own questions, but, like, hey girl, have you got those meds?
- Yeah, it's a robot. We designed a robot for sure, playing on the definition, the sci-fi definition, of what we know a robot to be. But at its heart it's a tool. It's a tool. If we have a dishwasher in our kitchen, it may have all the smarts and autonomy of what we would call a robot, and we don't have a problem with that. Though if we give it arms and legs and have it move around in your home, that also raises a lot of questions. I mean, it's why I love this field; there are a lot of questions. But yeah, at the end of the day, we do let our history, the history of popular media, I think, take over the way we think about things.
- Brendan Jarvis:
- So to summarise, James Cameron and people like him have ruined robots for the rest of us. Picking up on the job conversation, you've previously been asked about exactly what you've been talking about with regard to robots and the potential job losses that humans may experience. There's a lot of fear and assumption wrapped into those questions, which are actually probably more like statements than questions. But that aside, when you were asked about this, I heard you respond. You were asked about how you as a designer ensured that technology like Moxi that you were working on wasn't put to use in such a way that it would result in outcomes that you didn't intend, such as people losing their jobs. And what you said, and I'll quote you now, was a number of things, but one of them was: there's really no way to necessarily ensure what our objects will do in the future. And as much as that might be uncomfortable for some people listening to this to hear, that certainly seems to me to be more representative of the truth of the matter, or the facts, that most designers experience out there while working for others. What is the best that designers can expect or do when working for companies that are making products that have the potential to be abused?
- Carla Diana:
- Yeah, I mean, this is a great question, especially since I work with young people just entering the field of design. It's a great question. It's an important question. When I was an undergraduate, there was an ethics class in engineering school that was part of our studies, and I think that certainly doing your homework around who a potential employer is when you're taking a job, whether it's a freelance job or a full-time job, and asking that employer questions about what the larger roadmap and strategy is for that product, matters. I think we're at a place in design history where asking what a company's values are is an understood dialogue that is appropriate for a designer to have. And if they don't get an answer, or aren't given an answer, it's appropriate for that designer to say, I respectfully decline this opportunity.
- I think that designers in particular are in a really difficult place. Was it Victor Papanek in Design for the Real World who said, and I'm going to mangle the quote, but something like, design is one of the most dangerous fields there is, right? I think there is something to that, because the decisions that we make as designers can impact a product that is then reproduced in the hundreds of thousands or the millions, right? And the design decisions are important, and their subtleties: even typography and colour, suggestions of gender, suggestions of race, all of these things. And I think that what makes it really difficult is what I had said that you're quoting me on: you may have this understanding of what the use of this product will be, but you don't have the power to enforce that. It's a leap of faith, and I think we have the power to choose the projects that we work on, and that's also difficult.
- We all need to make a living. And you may work for a firm that says, well, we're going to, I mean, back to even the Mad Men episodes and the cigarettes. I've worked in places where there were questions about things that might be habit-forming, and do we want to design these things? Are they helping to break a habit, or are they helping to reinforce it? Or does this cosmetic thing encourage people to spend money? And there are things we don't even know. Before we kicked off the show, I mentioned my dog Penny, who was rescued from a research lab, and there are still dogs that are used for cosmetics research, and as a designer you can't even know everything about that. I mean, I suppose you can, but there are things that you might not know beforehand, and it's challenging.
- Brendan Jarvis:
- Yeah, it certainly is. It's making me recall, and I'll mangle the quote or the reference, that Nietzsche line about staring into the abyss. You've got to be willing to do it, but you may not like what you find if you keep staring into it too long. I think it ties back into what you were saying earlier around robots being viewed, or how they should be viewed, as tools. And when it comes to designers' own thinking about themselves and their relation to their work, there's a higher level of awareness, or at least a smarts, outside of the practise of design that we're called to engage in: thinking more broadly about the contribution we're making so that we aren't inadvertently being used as tools within a system that we're not conscious of, even if we can't prevent every potential aberration that could happen with what we're doing. Coming back down now off the precipice of all the ethical concerns around the work, and thinking about the process that you've gone through, the approach to designing these more human-like behaviours into robots, even if they may not be that sort of anthropomorphic-style robot, I understand one of the techniques that you've used, and perhaps borrowed from another field to explore this, has been something called the flour sack exercise. What is the flour sack exercise?
- Carla Diana:
- I learned about that through teaching; it's one of the things I love about teaching, because I wind up teaching and then blurring into other fields. I was teaching a course that was for animators, and I learned that the animation students, as part of their degree, are required to do this flour sack exercise. Essentially what they're tasked to do is create an animation of a flour sack, so essentially a soft rectangle, and make it seem as though it's expressing emotions. One way that it might work is giving each student five emotions, let's say. So it's got to feel bashful, or celebratory, or authoritative, let's say, right? How do you make that? And once I learned about this, I always loved looking through how animation students have handled that exercise. It was very interesting to me from a robot design point of view, because my view has always been that we really play on abstraction rather than falling into the trap of elaborate anthropomorphism. There's an anthropomorphic effect to the flour sack exercise, for sure; it's using gestures that we can read as humans, but it's challenging the student to abstract that as much as possible.
- Brendan Jarvis:
- I really like how you've previously talked about the design of social robotics in terms of how you've imbued those human characteristics into robots so that we can understand them. And I'll paraphrase you here: you've said that the field of social robotics, in part, is a field that tries to design machines so that people only need to bring what they already know innately as humans to those interactions, which, as I said earlier, I think is a really beautiful idea, and I can't think of a more pure form of realising a perfect user experience than just showing up and being able to naturally and effortlessly interact with a robot. How easy or difficult is that to achieve in reality?
- Carla Diana:
- You paraphrased it perfectly, and it's incredibly difficult. First of all, I think in some ways, as designers, we intuitively would've always designed things that way, but the technology hasn't enabled it. We're at this kind of perfect storm of having sophisticated sensors and broadband and connectivity, so that we can have cloud robotics, so it's not just computing happening locally, and you can have some really sophisticated responses. Traditionally, and I've been working on product design for a long time, the ways that we designed things relied on fairly canned responses, where even speech was something that was largely off the table, let alone gesture; gesture is really difficult from a computational and a hardware-software perspective. Now you can have cameras where I don't even know how cheap a camera might be as a component in product design, certainly a dollar or two depending on quantities, all that kind of stuff.
- So we're slowly getting to a point where the technology is there. Then the biggest question is around how we're defining the interaction. So in the book I talk about six rings of what a team should think about and talk about and always come back to as a core. That's the framework of My Robot Gets Me. The first one is presence: what is the physical thing? Is it a kiosk, is it a desktop, is it a table, is it a bowl? That kind of thing. The second one is expression: does it have lights, and so on. Then the third one is interaction: once it's expressing outwards, how does it sense and take in what I just described? And the next one is context, and context is one of the most challenging ones, because in certain circumstances you can isolate questions of context; inside a vehicle, for example, someone's likely sitting, they're likely inside the vehicle, but in a kitchen maybe you don't know.
- When I talk about context, I'm talking about everything that goes into deciding how a product should respond. What's the temperature in the room? How loud is it? What time of day is it? What's going on politically? What is people's state of mind? All of that is context. Is this product used outdoors or indoors? All of those questions, and they really affect what an appropriate response is. And then the last ring in the framework is ecosystem, so products working together. But just staying on context for a minute, I've been thinking about this recently because I've been watching a television programme on Apple TV called Sunny, and, is this a spoiler? It's just going to be a spoiler if I talk about it. There's a trashcan robot that figures into a part of this television series, and there's a really talented software engineer who comes across this trashcan robot and says, this robot's terrible, because it doesn't know the difference: it knows how to pick up a can that's trash, but its definition of trash was just anything that wasn't defined in its set of what trash is or isn't.
- And then this engineer goes on to train the robot and gets to this sort of victorious moment, where the training consists of these really sophisticated conversations in which the robot is saying, oh, picking this up makes you sad, so I shouldn't pick it up. It really gets into these nuances of emotion and how each object has a different emotional value, and why is this cup trash and that one's not? And I thought, ah, yeah, that's context. That's it. That's what I'm talking about.
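As a side note for readers who want to see the framework Carla describes above in a more concrete form, here is a minimal sketch, in Python, of the dimensions she names (presence, expression, interaction, context, ecosystem) arranged as a simple design-review checklist. The field names and the example product are illustrative assumptions for this sketch; they are not taken from My Robot Gets Me or from the interview.

```python
from dataclasses import dataclass, field

# A toy checklist built around the dimensions Carla names above.
# Field names and example values are illustrative, not from the book.
@dataclass
class SocialObjectConcept:
    name: str
    presence: str       # what the physical thing is (kiosk, tabletop object, bowl...)
    expression: str     # how it expresses outwardly (lights, sound, movement, gaze)
    interaction: str    # how it senses and takes input (cameras, microphones, touch)
    context: list = field(default_factory=list)    # situational factors shaping responses
    ecosystem: list = field(default_factory=list)  # other products and services it works with

# Hypothetical example, loosely in the spirit of a hospital delivery robot.
concept = SocialObjectConcept(
    name="hospital delivery robot",
    presence="wheeled base with a head and LED-ring eyes",
    expression="gaze, nods, LED rings, simple sounds",
    interaction="camera vision, shared attention, touchscreen",
    context=["noisy hallway", "night shift", "patient nearby"],
    ecosystem=["pharmacy workflow", "elevators", "nurse call system"],
)

# Walk the checklist one dimension at a time, as a team might in a design review.
for dimension in ("presence", "expression", "interaction", "context", "ecosystem"):
    print(f"{dimension}: {getattr(concept, dimension)}")
```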
- Brendan Jarvis:
- This is sounding like perhaps a holy grail of intelligence or discernment going on here in the way in which the robot is interpreting the context, because a lot of that stuff's implied, I would imagine. How would you know that my favourite cup, even though it's really old and looks like trash, is my favourite cup, and not pick that up? A lot of that stuff we just, I suppose, take for granted in our surroundings in our day-to-day.
- Carla Diana:
- Yeah. And then what can you reasonably expect of a product? I mean, that's pretty sophisticated, right? To know that that's your old cup. And is that on the robot, or is that on you? Then I guess there's the human test too: would a person throw that cup out, and could they reasonably be expected to understand that it had sentimental value?
- Brendan Jarvis:
- And do people really want to buy robots that you have to parent? Because if you've already got kids, I'm not sure you want something else that you kind of have to coach and train to that degree. There might be an expectation that it would magically know. Coming to a technology that seems like magic to a lot of people, perhaps outside of our field, to a lot of the general population at the moment, and that technology is generative AI. It seems like there could be an overlap going on here between something like social robotics and something like generative AI. How, if at all, is generative AI impacting the field of social robotics?
- Carla Diana:
- AI has been a big part of the robotics projects I've worked on because it's core to camera vision and it's core to navigation. A lot of the robots that I've worked on, like the robot with Diligent in the hospital as well as some other consulting projects that I've had with clients, involve navigation, and it's a pretty sophisticated task to understand that that's a door and that's a chair and that's a person's foot. So AI has been used for that for a long time, before this recent explosion of public interest, which I think has more to do with large language models than anything else. There's certainly a lot going on in computing as well, and it's been happening, but the large language models, I think, are what captured the public imagination and the design industry for sure. So I'm struggling a little bit to really think about generative AI with the kinds of things that I work on, but it's an interesting question, because certainly, again with large language models, it has to do with how that robot's going to respond, in terms of all of the conversational interfaces that we're starting to see now.
- I interacted with an AI robot, and I'm going to forget what the project is called, but there are two extremely humanoid robots set up in the lobby of the Sphere in Las Vegas, and a person just approaches them. There's somebody there with a microphone that gets passed around, because there's a big crowd of people, and you ask that robot any question and the robot responds. So certainly being able to be convincing with this crowd by not having canned responses, and being able to come up with those responses on the fly, I think, is a part of it. And certainly the gesture: this robot had a lot of gestures. It was able to point at things and nod, and it actually had facial expressions. I mean, it was a very sophisticated mechanical thing. But yeah, I'm thinking also that there's tonnes of opportunity even around, let's say, a frame that acts like a monitor that also acts like a window, and does that change depending on the time of day? I think we could go on and on thinking about it. Did you have something in mind with your question?
- Brendan Jarvis:
- Not particularly, but hearing you talk about speech a little earlier and then thinking about the capabilities of the Las Vegas robots, it kind of seems like that will be made a lot easier than it would've been in the past. Like you say, there's no need to build these branching, conversational, canned-response-type trees. You can kind of just let it respond, and code it to do so in a more natural, human kind of way. Serious question now: seeing as you were in Las Vegas, was it red or black?
- Carla Diana:
- I had a hard time with Las Vegas. I live basically in a nature preserve right now; that's what Cranbrook is. I had a really hard time, but I also knew that, given what my field is, especially what I teach, I felt like I needed to see the state of the art, what's going on in Vegas.
- Brendan Jarvis:
- We spoke briefly at the very beginning of our conversation about autonomous vehicles, and I want to come to a piece that you wrote in 2016 called Don't Blame the Robots, Blame Us, which was for Popular Science. In that piece, you looked at the role of both the autonomous driving systems and the human beings that are involved in using them. As the title of your article suggests, how are we, the humans, to blame? And which humans were you talking of? Were you talking of the humans as in the designers or engineers, or the end users of the autonomous driving products?
- Carla Diana:
- That's a great question, and it's very important, because by us, I was really talking about designers. I was really talking by and for designers. I was asked to comment on the Tesla Autopilot, and there had been a fatality recently when that piece was written, and so there were a few 'us'es in there. I think one of the 'us'es for me at that time was certainly marketing, and even just the use of the term Autopilot, because a lot of what was happening at that time was not full autonomy, yet giving it a name suggests that it is full autonomy. And there were many instances of people posting on YouTube where they were napping or reading, and even though it was made clear to them upon purchasing the vehicle, and there's messaging in the car, that you needed to still maintain awareness of the road, people weren't doing that.
- I also blame sci-fi movies for that as well, right? That desire of, yeah, KITT is finally here, or Herbie the Love Bug or whatever. But I think that at that time I was in some really intense conversations with friends who were working on the design of the interiors of autonomous vehicles and all of the nuances of having the interior communicate to people what is happening. What I talk about in the piece is the robot brain, and that we have a certain mental model: if we think it's a fully autonomous thing, then we're going to treat it as such. So as designers, we need the interior of the vehicle to message to us, to let us know what it does and doesn't know. We're very myopic as humans. I think this is part of why we make humanoid things, because, getting philosophical, we know as much as we can know about the world through the experience of being human. So we kind of replicate that experience, or we expect that experience from primates; when we look at gorillas and chimps, we think that they're doing the same thing we're doing. It's very myopic. And so we get into an autonomous or semi-autonomous vehicle and we think that it sees or thinks about things the same way we do. So the challenge for designers, I think, is to communicate that robot brain and let us know how decisions are being made by the vehicle. I think that's a big challenge, right?
- Brendan Jarvis:
- A somewhat provocative and direct question: do you feel that as a society we are going to have to accept that there will be many more fatalities caused by these systems, in pursuit of the greater good when it comes to the environment?
- Carla Diana:
- I think that's a pretty straightforward question. I mean, one of the things that caught my interest in autonomous vehicles was the statistics on how many people die in car fatalities caused by human error, and comparing those numbers to what the prediction is for a fully autonomous system. One of the books that I really loved, and it's also very provocative and difficult, is a book called Door to Door. I don't have the author top of mind, but I think it's easy to search up. That book gets into some excruciating detail around the true numbers of fatalities that happen from human error in vehicles and how much that can potentially be reduced. I think the difficult thing is that we're, and this is a phrase I use with technology sometimes, still in the teenage years of autonomous vehicle testing. And so I think it's important that that testing be done responsibly and with limits.
- And I think a lot of it is happening on, let's say, campuses where it's contained, where the potential variables for what the autonomous system needs to understand are limited, and then branching out from that, even if it means that it takes several more years. I mean, it's a pretty difficult and complex question and dependent on all of the circumstances, but there has been testing where people didn't know, where the vehicles were just sent out into public city streets, and I think there are a lot of issues with that. But yeah, that Door to Door book, I listened to it as an audiobook on a long-distance trip, and wow, it was hard, because the way that it talks about human error and automobile accidents is really rough. But it's also a very fascinating book around autonomy, and not just autonomous vehicles, but also all of the larger systems around freight and harbours and trucking and all of that.
- Brendan Jarvis:
- Was it written by Edward Humes? Does that ring a bell?
- Carla Diana:
- Yes, that's the one.
- Brendan Jarvis:
- Yeah. So for people listening, it's called Door to Door: The Magnificent, Maddening, Mysterious World of Transportation. It sounds like a really good read. We were speaking before we hit record about how you have also been a podcast host. You used to co-host the RoboPsych podcast, and I noticed that after 101 episodes, you literally ran out of experts to interview, and the podcast came to its sort of natural conclusion. How satisfied are you with the speed at which the field of social robotics has been developing?
- Carla Diana:
- So the founder of that podcast and my co-host, Dr. Tom Guarriello, is a PhD psychologist. I really loved that project, and we unfortunately don't have a public archive of it, but we had fantastic guests who were at the forefront of thinking and talking about every aspect of social robotics. We were disappointed with the speed at which social robots have appeared in, let's say, home appliances. There was a fascinating project that I was involved with in some early consulting stages that was called Jibo. It was led by Cynthia Breazeal, really one of the founding experts in social robotics, from the MIT Media Lab. And that was an incredibly promising product, and it's not available anymore. I think Tom and I were very excited about the potential for something like that, and it really embodied a lot of the interaction principles that I talk about.
- But at the same time, it's tough to say what is a social robot or what isn't at this point. We're going to see it; we're seeing it. I feel confident that we will start seeing it. One of the examples I talk about is a microphone: a microphone that has smarts and knows who's talking and knows when to point and knows when to shut itself down and expresses when it's on or when it's off. And I know that designers are working on that. I think a lot of it is the difficulty of, there's this phrase that's so silly, but I'm going to say it, right? People say hardware is hard. Hardware is difficult; it's challenging and expensive to have a motor that simply expresses 'I'm going to sleep now' on a microphone. But I think that there's greater public awareness, and therefore greater willingness for companies to explore those nuances from a design point of view.
- Brendan Jarvis:
- Carla, just mindful of time, I've got a couple of final questions to ask you. I want to turn our attention now to about a decade ago, when you had just published LEO the Maker Prince, which, as I mentioned in your introduction, but a quick summary for people who may have just jumped to this part, is a children's book that enabled people to 3D print the objects from the story if they had access to a 3D printer. Now, you've said previously that this was a homage to The Little Prince, which is a novella written in 1943, so I learned, by the exiled French writer, poet, and pioneering aviator Antoine de Saint-Exupéry. How did The Little Prince change you?
- Carla Diana:
- Oh, that's such a great question. What I loved about The Little Prince was, number one, the encouragement to dream and have a creative mind, even if you feel misunderstood, right? There's a lot in this book about talking to the grownups and the permission to never grow up, in terms of hanging on to a creative spirit. And the other thing that really influenced me was the way the book weaves drawing and the written word one into the other. I think that was a large part of the inspiration for the LEO the Maker Prince book, because I felt like, oh, let's weave three-dimensional objects into this story, this narrative, and get people excited about what the objects mean and why they exist or what the context of them is, but then have them be real things that people can hold and touch. I remember, as a child, reading about the baobab tree and then having this amazing illustration of the baobab tree, or the way that the little prince moves his chair to see the sunrise, just a little bit, on this little planet.
- And then to actually see it. I mean, it could have just been the written word. I guess maybe I was used to some format of a book that was either mostly pictures or mostly words, but the pictures were so meaningful to me.
- Brendan Jarvis:
- I hadn't heard of the book before, but when I was preparing for today, I had a deeper look into it and some of the history behind the author and what he was trying to convey through the book. And I've actually ended up buying it for my sons.
- Carla Diana:
- Oh, you'll have to let me know what you think of it.
- Brendan Jarvis:
- It's the second book I've actually bought as a result of doing the podcast. The first one was The Big Orange Splot. I'm not sure if you're familiar with that one.
- Carla Diana:
- I'm not, no.
- Brendan Jarvis:
- It's also one that I recommend if you've got children, because it explores that permission to be creative that seems to get beaten out of us at far too early an age. Speaking of being creative, clearly what you do at Cranbrook Academy of Art, with what was called the 4D Design programme and is now the Interaction Design programme, is hugely creative. I've seen some of the work that your students have produced in the past, and you are currently a designer-in-residence there as well. So it's very, very creative work. It's at the crossover of, I think you call it, form, code and electronics. It's very leading edge. What I was curious about, though, was that you've previously said, and I'll quote you one last time, that social commentary is something that just runs through everything we do; we are always trying to talk about what the technology implies, what the future of it might be. So where you currently sit now, with the students that you've got working with you, what are the most striking or surprising social commentaries that you're seeing reflected in their work currently?
- Carla Diana:
- I think there's a lot of backlash against mobile devices, and against looking at mobile devices as everything that technology is. And it's interesting to me to prod them a little bit to drill down into that, because I think that a lot of them just grew up with mobile devices, at least for most of their teenage and young adult life, whereas to me it just feels magical. I mean, I understand that's a whole three other podcasts, right, the pitfalls of it. But I think that they're looking a lot at gender and identity and how that's formed and influenced by everything that we interact with, and how, as designers, we can make a difference in terms of thinking about those things. I think that they are working a lot with gesture, which we don't see as much yet in everyday products. I have a student now, Jessica Harvey, who's working a lot with kinetic sculptures that respond to a particular way of approaching them, in terms of where the body is. And there's a lot around students who work with wearable electronics, thinking about how the form relates to the body.
- And
- Brendan Jarvis:
- I may have suggested that was the last time I would quote you, but this will be the absolute last time I'll quote you today, and I want to take you back now to 2017 when you were standing on stage at TEDx Brussels. You said the robots are here. We can have them taunt us, we can have them coach us, we can have them nag us, we can have them encourage us to be better, but it's up to us to know our limits. What did you mean when you said it's up to us to know our limits?
- Carla Diana:
- I had a lot of fun with that audience, because I was showing images of what robots could be, and I think I started with an elevator and then went on to show them a dog-walking robot and say, would you want this, what about this? And then I showed them this nanny robot, which was a composited picture of my child, who was two at the time, and this 2-year-old just alone in a room with a robot. And the audience really gasped. I had some people tweeting, like, what a terrible talk, what a terrible thing to show. Which I'm glad about, because that was exactly the point. It was meant as a provocation. And what I meant by that was that, certainly as designers, but even as consumers, we shouldn't just accept what is being offered to us point blank. We should have conversations and think and be critical and say, well, what does it mean to have a nanny robot? What human interaction is that replacing? And is that a value that we support, and why or why not?
- Brendan Jarvis:
- And it's funny to think that those same people who gasped at the idea of the nanny robot might not even think twice about leaving their 2-year-old in a room alone with a mobile device.
- Carla Diana:
- Right! Right! I mean, I think that's why, with my students, I have a hard time with the question about the mobile device. But that's sort of part of the core question: there are things like that that are actually happening right now. It's not as provocative or incendiary as that image that I put up on the screen, but yeah, we need to talk about it as a society and we need to be critical.
- Brendan Jarvis:
- Carla, this has been a really timely and thought provoking conversation. Thank you for so generously sharing your stories and insights with me today.
- Carla Diana:
- Oh, thank you so much. Your questions have been great, and I really appreciate being part of your podcast.
- Brendan Jarvis:
- My pleasure. Carla, if people want to get in touch with you, they want to follow along and be abreast of all the changes and contributions you've been making to the field of design, what's the best way for them to do that?
- Carla Diana:
- So the best way is, I've always got some contact information on my website, which is just carladiana.com, and then I communicate best through Instagram, so it's just at Carla Diana.
- Brendan Jarvis:
- And there's some really fun projects on there that people should check out that we didn't cover today, but definitely worth a look.
- Thank you, Carla, and to everyone who's tuned in, it's been wonderful having you here with us as well. Everything we've covered will be in the show notes, including where you can find Carla and all of the things that we've spoken about. If you've enjoyed the show and you want to hear more great conversations like this with world-class leaders in UX research, product management and design, don't forget to leave a review and subscribe so it turns up every two weeks. And please pass the podcast along to just one other person who you feel would get value out of these conversations at depth.
- If you want to reach out to me, you can find me on LinkedIn, just search for Brendan Jarvis, or there's a link to my profile at the bottom of the show notes or head on over to my website, which is thespaceinbetween.co.nz. That's thespaceinbetween.co.nz. And until next time, keep being brave.