Laura Faulkner
UX Is Hard and That's a Good Thing
In this episode of Brave UX, Laura Faulkner gives a passionate and practical overview of how to frame decision risk, develop better stakeholder relationships, and do research that matters.
Highlights include:
- Why is UX hard and why is that a good thing?
- How can you frame risk to make better design decisions?
- What is a UX research intake conversation and why do they matter?
- How can we improve our chances that stakeholders will take action?
- What is vital to get right in UX research and what isn't?
Who is Laura Faulkner, PhD?
Laura is the Head of Research at Rackspace Technology, where she leads the team responsible for helping the organisation to make better decisions through qualitative and quantitative research.
Starting out in the early 90s as a technical writer, Laura went on to build extensive experience as a researcher, design lead and program manager for the University of Texas at Austin’s Applied Research Laboratories and Institute for Advanced Technology.
Laura has been the International Conference Co-Chair of the User Experience Professionals Association and is the author of “Beyond the Five User Assumption”, which was named the “Best Paper of the Year” by Human Factors International in 2005. It is still regularly cited today.
Transcript
- Brendan Jarvis:
- Hello and welcome to another episode of Brave UX. I'm Brendan Jarvis, Managing Founder of The Space InBetween, the home of New Zealand's only specialist evaluative UX research practice and world-class UX lab, enabling brave teams across the globe to de-risk product design, and equally brave leaders to shape and scale design culture. Here on Brave UX, though, it's my job to help you to put the pieces of the product puzzle together. I do that by unpacking the stories, learnings, and expert advice of world-class UX, design and product management professionals. My guest today is Dr. Laura Faulkner. Laura is Head of Research at Rackspace Technology, the company that pioneered managed hosting and that is now the world's largest and leading provider of managed cloud services. At Rackspace, Laura leads the team responsible for helping the organization to make better decisions through qualitative and quantitative research, including large-scale market surveys, product research, and competitive intelligence.
- Starting out in the early nineties as a technical writer, Laura went on to build extensive experience as a researcher, design lead, and program manager for the University of Texas at Austin's Applied Research Laboratories and Institute for Advanced Technology. After leaving the university, Laura became the Director of UX Design and Research for Pearson Clinical Assessment. She also successfully ran her own UX strategy consultancy for 15 years. Laura has been the International Conference Co-Chair of the User Experience Professionals Association, where she won the International User Experience Service Award for fearlessly redesigning and leading the implementation that brought the conference into a new age. She's also the organizer of Hot Topics in UX: Finding Expert Answers, a meetup in Austin, Texas, and she's the author of Beyond the Five User Assumption, which was named the Best Paper of the Year by Human Factors International in 2005. It is still regularly cited today. Laura holds a Bachelor of Science in Anthropology from the University of Houston-Clear Lake and a PhD in experimental psychology from the University of Texas at Austin. She's described by her colleagues and contemporaries as compassionate, a wonderful advocate for UX research, tenacious, brilliant, and down to earth. I can't wait any longer for today's conversation. Laura, welcome to the show.
- Laura Faulkner:
- Thank you, glad to be here.
- Brendan Jarvis:
- It's great to have you here, Laura. And as I mentioned before, when we were chatting off air, I do tend to do a bit of research for these conversations, and in that research I discovered something that piqued my curiosity, which was that you once wrote a Monte Carlo simulation in MATLAB. For people that don't know what MATLAB is, what is MATLAB, and what on earth inspired you to do that with it?
- Laura Faulkner:
- [laugh]. So it was by necessity only. That is not my thing. Programming is not my thing, and even though I have a graduate minor in statistics, that's not really my thing either. I hire professional statisticians [laugh] instead of doing that myself, which is a very wise thing to do if you're not a specialist in that. So what possessed me to do that? MATLAB is a tool I was using to write a simulation. You can write test programs in it, and it has a lot of other really great capabilities, but it's more accessible than a lot of programming languages. And it's often used in laboratory work. So I do recommend going out and looking at that fabulous tool; some of my friends in user experience have lived and worked in it for many years. But what possessed me to do it was that I was seeking to understand, really seeking to answer, the five user assumption question.
- So it had gotten out there and into popular usage, especially when Jakob Nielsen and Rolf Molich first published a paper on it, and then Nielsen put it in the Usability Engineering book. And of course that book is part of what gave the field a defined identity and a name for what we do, even though a lot of people had been doing it. So this is kudos to all of my friends out there who were part of creating it as a profession, but the book elevated the awareness of the methods in it and collated it all in one place.
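For context, the model behind the five-user claim is Nielsen and Landauer's cumulative discovery curve, which assumes that each of the N problems in an interface is found by any given user with roughly the same probability λ (their data put the average around 0.31):

```latex
\mathrm{Found}(n) = N \left( 1 - (1 - \lambda)^{n} \right)
```

With λ ≈ 0.31, five users find about 85% of problems on average. The catch, as the paper discussed below shows, is that "on average" says nothing about which problems any particular sample of five will miss.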
- But the problem with that, and my paper, Beyond the Five User Assumption, explains this, is that it was a quick and dirty measure, which we do a lot in user experience, and we should, because we need quick business answers, we need to move forward quickly. And it's not classical research; it is delivering business value, which is a very different thing from statistical research. But the challenge was that people who did understand statistics would look into it and say, well, even the method used to derive that answer is not a good, solid method we can rely on. So I thought, okay, nobody had ever tested it directly from scratch. So I chose to create a full test of this from scratch. A colleague and I built a site that had defined user experience problems in it, from glaring problems, which five users do pick up, to moderately difficult-to-find problems, to very subtle problems that might take more users to discover.
- Just by chance, by accident, you might miss those subtle problems. And sometimes the subtle problems, while they're not the big glaring ones that we could go fix and get quick wins out of, might be the one that causes a data loss or causes a system to go down. And so in high-risk situations, like say you're testing something in an airline cockpit, a very subtle problem is often what does cause the major accident. So what I was trying to do was to help answer the question and put it to bed once and for all. Let's put some strong data behind it. And so I tested 60 users, which, when people hear that, they say, I can't test 60 users! I know [laugh], I tested 60 so you wouldn't have to. So I had a defined set of user experience problems.
- There may have been more problems built into the application than the ones we seeded, but at least we knew which ones were built in intentionally. We then tested 60 users on the whole thing and saw which problems they found and which ones they didn't. But I needed some way of knowing, well, what happens in a real lab when I have five people walk in? Which five people, and what am I gonna get, and what might I miss? So I had to be able to draw out all of the possible samples of five people. And while I was at it, I also wanted to know, what if I had random samples of, let's say this time I decided to test 10 people, and so these 10 people signed up for this study and I vetted them and they came in, but I didn't know about the other 50 people and what their experience might have been.
- So what would happen if I randomly had this group of 10 versus that group of 10? To test all possibilities, from bringing in one user to bringing in all 60 users, and every possible combination of users, could only be done by automation. That was never gonna happen by me manually looking at all 60 pieces of those data and figuring out, well, what if I got these? So the Monte Carlo simulation allowed me to pull every possible sample and to create the loops that would do that. Do I remember how to do that? No [laugh], sadly that program was lost to a data loss. We live and learn over 20 years how to protect things electronically. But having run it multiple times, and having had it checked by multiple folks at the time to see that it was running properly, that's where I was able to identify not just what you get.
- Because the original studies were all about what you get: if you just do five users, you find a lot of the problems, and those are the ones you need to fix, those are the glaring ones, go fix 'em. But what I wanted to answer was, what might you lose? And so by doing this, what I was able to show, and the paper shows four of the charts, for example, was what happened with any set of five or 10 or 15 or 20. And that lets you decide. My whole purpose was not to give you the answer, you need to go test this many people. It was to empower every user experience researcher with: what are you willing to risk? What are you willing to let go of? What is your risk tolerance for this particular project? I wanted to give decision support, because researchers are the ones who know. If we're running a high-risk product and there's something really, really bad that can happen if something gets missed, something that we didn't even know about perhaps, then you're gonna wanna test with some more.
- If something gets missed, something that we didn't even know about, perhaps then you're gonna wanna add, you're gonna wanna test that with some more. And then what I found though also was that a sweet spot and the sweet spot spot is really somewhere between 10 and 20 where you're gonna have a pretty strong idea that you get a good set of the problems and you're not gonna lose very much. Do I do that and does my team do that? No, most of the time we're testing six because that's the reason we've gotten into it, does do what we need to do. When we have a bigger problem where it, there's higher stakes or where we're getting a lot of different answers from people and we're not really getting to the same answers over and over, then we'll do 12 or 15 or if there's a whole lot of dissent, if the stakeholders are really arguing, that's another reason to add a few more if you can. So we don't ever do it to slow down projects, but we do it to find those things that we simply don't wanna miss.
- Brendan Jarvis:
- So it sounds like the context of the product, and the risk tolerance you need to consider for that product, is quite important in determining your decision as to how many users you should be running through those usability tests.
- Laura Faulkner:
- Absolutely. And so your risk tolerance is based on several different factors. Let's see if I can just list some. So one is the depth of stakeholder dissent. That's gonna be one where we do add. And that way we don't have to keep going back and telling the whole story about why six is fine and why six is valid. Now, I do have a way to tell that; if you wanna ask me again later, I'll circle back to it.
- Brendan Jarvis:
- Definitely.
- Laura Faulkner:
- One of the ways, we've talked about how I talk about those six participants. But when there's a lot of dissent, if it's not too hard, and it's not always very hard for us to add some participants, we go ahead and add them. And if it's gonna slow down the timeline, we do it in parallel, so we don't hold up anybody's timeline. So one factor is depth and breadth of stakeholder dissent. Another is the potential usability problems in there: what bad things can happen for the user? If there's a potential that, if they miss something, there's a data loss, or they make a really wrong, serious decision, maybe a wrong decision about how much infrastructure they choose, that's a big one, because fixing that can involve a lot of contracts and time and effort, and it can involve potential data loss for them.
- Or, if it's making a mistake on the level of rental car that your insurance company allows you to get, that's not really high stakes, and so we maybe don't need that. So the stakes, and the potential bad things that can happen for the user, that's really our biggest driver. And then some of it is simple user researcher gut instinct: we think that there's something in here that we're missing, or that design is missing, even if we can't say why from our expert mind. Like Malcolm Gladwell says about the expert mind in Blink: the expert mind can't always say why it knows something. So if there's something like that, and you have the luxury to add a few, and you can test them in parallel so you don't hold up anybody's timeline, add a few if it's gonna help you feel more confident in yourself and what you're presenting.
- So those are some of the factors and reasons why you might want to go ahead and add participants. But the charts in the paper will really show you the mess you can get [laugh] if you miss, and how much tighter your results can be when you add some. Statisticians will just say, oh, that's basic statistics stuff. But we're not talking about just basic statistics; we're talking about what matters to the user experience. And so having a few simple decision charts can help make all the difference: which one feels comfortable to you, and which is the right one for that study. So yeah, those are some of the factors that you're gonna base those adding-users decisions on.
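One way to turn that risk framing into a number, under the simplifying assumption that a given problem is detected independently by each user with probability p, is to ask how many users you need before the chance of every user missing it falls below your tolerance. This back-of-envelope helper is our illustration of that arithmetic, not a formula from the paper:

```python
import math

def min_users(p: float, tolerance: float = 0.05) -> int:
    """Smallest n such that a problem with per-user detectability p is
    missed by all n users with probability below tolerance, i.e. the
    smallest n with (1 - p) ** n < tolerance."""
    return math.ceil(math.log(tolerance) / math.log(1.0 - p))

for p in (0.50, 0.25, 0.10):  # glaring, moderate, subtle problems
    print(f"detectability {p:.2f}: need >= {min_users(p)} users "
          f"to miss it less than 5% of the time")
```

A glaring problem (p = 0.5) needs about five users; a subtle one (p = 0.1) needs nearly thirty, which is why higher-stakes products justify the larger samples described above.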
- Brendan Jarvis:
- And I will definitely be linking through to those charts and to the paper, because this is a discussion that, believe it or not, you published this in 2005, it's now 2021, we're still having as a community; it occupies so much of people's headspace. I really would love to help, as you said earlier, put this to bed once and for all, and also just give people, like you said, decision support, so that they know where to go to refer to get some help with determining how many users they need to put through their studies, because clearly there's no right answer...
- Laura Faulkner:
- And be confident, yes.
- Brendan Jarvis:
- [laugh]. Yes, a hundred percent. Now you touched on something else, Laura, that I wanted to ask you about there, which was when the stakes are quite high. And in your introduction I mentioned that you'd worked for the University of Texas and I believe the labs and the center that you were working with there had some defense contracts or some tie to defense. And it seems like the stakes might be quite high in a defense situation. What sorts of UX related projects were you involved with while you were at the University of Texas at Austin?
- Laura Faulkner:
- I'd say quite a few, but the bulk of the projects there were for joint forces. So I got to do work, it was my great privilege, with all of the forces in the US military, as well as the Royal Australian Air Force and the Royal Canadian Air Force. It was a significant privilege to get to do that, and for so many years, and I learned so much from that. And what I can share, that is common knowledge if you go look up how the projects are described, is that a significant amount of the work that our team and our division did at those times was around communications and digital communications. And so it was really about combining multiple communication sources so that you could understand what was happening and have situational awareness. A very common problem.
- And then I also got to work on some defensive measures, or really not so much the defensive measures themselves as the test simulation platforms. You need those to be usable as well, so that all of your systems can be tested for safety and efficacy and how well they work. So those were the two big, main things that I had the great privilege to work on. In terms of the stakes, a lot of times with the simulation ones, we didn't wanna miss something, and so it was important to make sure that we shook it all out for safety and efficacy. Was it really doing its job? And to make sure that we captured all of those potential use cases, or that the simulations that got set up for automated testing worked. It needed to be easy for the test developer.
- And we often drop out the person who's sitting in the work chair, who's doing the work, from that sort of thing. As user experience people, we do work on the front line with the person that's using it, who actually has the final technology in their hand. I happen to have a particular passion for the enterprise space, for the person who's sitting in the work chair. So, enterprise tools of any kind, which is one of the reasons I love working at Rackspace: this is a big enterprise, it's an enterprise platform. It's not about the end users of the internet; it's about the folks who are setting up environments and getting them to work, and for the hosting to work correctly, and for the management and the security of all of that to work. And so I just find it a particularly interesting problem set, to bless the day, the eight to 14 hour days, that people spend in their chairs on complex work tools that go across whole enterprise problems. And so really, defense was no particular mystery that way. Like anything else, it's really helping the folks who are sitting in the work chairs and trying to get something done in very big, complex spaces every day.
- Brendan Jarvis:
- A hundred percent. And it's interesting you say that it's like any other enterprise context, but I couldn't help but think that in defense, and there are some other industries, there is literally life at stake. And you talked about communication, and the role of UX in trying to improve that communication for the people that are in those seats. I wondered, how did you think about the gravity of what it is that you were working on as a UX professional in that context?
- Laura Faulkner:
- That's a very good question. So how did I think about it? I was keenly aware of it, and it's one of the reasons that I can say that I was privileged to work in that environment, because I found that, politics aside, folks in the military all across the world are really human beings who are about trying to make the world a safer place. And for all that might be talked about at larger levels, the individual human beings who came to work every day came to work with that mission in mind. And I was very surprised, because I'd never been exposed to military work before then, and I thought that it was going to scare me a lot more than it did.
- Brendan Jarvis:
- Tell me about that. What part of it did you think might scare you about it?
- Laura Faulkner:
- That I was gonna go in, quite honestly, with a bunch of warmongers, right? [laugh]. And I'm rather a peacenik, actually [laugh]. But what I didn't understand was that, for whatever mission it was, the individual human beings involved had a very earnest desire to serve the world and to serve humanity in their own ways. And nearly all of them, having gotten to do many user interviews over the years with people at multiple levels, all the way across, almost all of them, the very first thing out of their mouths when you asked why they came to work every day, they talked about safety and deterrence. That was really the driving mission. And so once I left that field and went out into industry, I still really look for people who are committed to a mission, of whatever kind it is. And it has to be a business, or an organization, that wants to do something more than make money [laugh]. And that's what attracts me, and that's what attracts a lot of user experience people [laugh], because we have our own personal sense of mission in the world. And so when somebody interviews me for a job, they better tell me a story that's about more than profit and brand. They better tell me something about what it means to them to come to work every day.
- Brendan Jarvis:
- Yeah, yeah. It's so important. It just reminds me, I had a conversation last week with Peter Marvel, very much on this topic, about how we as user experience professionals can better live up to the idea that we have of making the world a better place, and some of the choices that we need to make around that. Who we choose to work for is definitely one of those things.
- Laura Faulkner:
- [affirmative]. And so it was the mission that initially frightened me, but it was the people that kept me there, and there was more positive impact than I anticipated.
- Brendan Jarvis:
- Yeah. [affirmative], I really like how you described how you were able to hold those two things, and it wasn't what you expected when you went into it. You said you were a little bit scared, or hesitant, that it might not play out. I had been curious about the sort of ethical conversation, if that's the right term to use, that you might have been having in your head as you went into that and then went through that, and I think you articulated that really powerfully. For people that are listening today, Laura, you're Head of Research at Rackspace now, and I understand that your team's mission statement is to spark fast, confident decisions. I was really curious about that for a couple of reasons, the first of which is probably where I'll leave things here, cuz it'll turn into a very long question otherwise: you run an agency-style research model, an internal agency-style research model as far as I could tell, as in your researchers aren't embedded in product teams. What can you tell me about the mission and the way in which you work at Rackspace in research to deliver on that mission?
- Laura Faulkner:
- So, how we primarily work, you described it exactly right. We work primarily as an internal agency. We're not embedded, and there's several reasons for that. It's not the perfect model, and I'm not a big evangelist for "this is how you have to do it in your organization." So to everybody out there who's saying, whoa, whoa, whoa, that's not gonna work here, and embedded in product is better: this is for this use case, and it works for this use case. We are a very small team, so the model helps us to help more people and to scale more easily and quickly. It also has the benefit, and there's many ways to get this benefit, that we ourselves are never siloed. In some ways there are things that we know about across the company better than almost anybody else in the company, because we work for everybody and we'll answer anything for anybody.
- Oh, I like that. We work for everybody and we'll answer anything for anybody. [laugh]. Okay, good. So in terms of us being an agency model, how that works is that, one, we do preserve a significant part of our time for ad hoc work, because our mission is to spark fast, confident decisions, and somebody doesn't always have a question or need something to move forward in their decisions on a schedule. Now granted, in the product model you do want to stay, and we do stay, constantly abreast of product development, and we will insert ourselves [laugh] when it's time. And so when a new project is coming up, we do have a point of contact for that project who semi-embeds. So they're not there as a full-time person and they're not a full-time asset, which also means that they're fully utilized. My team is always fully utilized, and we don't have one person way over-tasked and one person way under-tasked at any given time, because we share this.
- But to keep things sane, and to keep us fully in the loop throughout the product process, we do have a point of contact for each focus area. And so they know that they can come to that person first, and that person, as well as having breadth of knowledge, will also be able to go deep on their product and their thing. So that's how we work on both of those ends. And we will do full iterative testing throughout, from inception and concept through to post-release benchmarking as well. But again, we co-assign to keep other people in the loop so that you don't have a single point of failure. Again, we're a small team, and we have had somebody go on medical leave. Because we're not wholly embedded only in the product team, and that point of contact is not the only point of contact for that team, somebody else knew about their project, and we were able to have a second person come in and reasonably seamlessly pick up the thread and keep things going, so we don't ever slow down.
- So really, I think those are the primary reasons that we developed this, which also answer how we do it. The primary reason was that there were a lot of people out there in pain who don't have an embedded researcher on their team. They were not that kind of team, they don't have the resources, but they need help, and so we wanna be able to say yes and help them. And then second is that we don't want to introduce single points of failure, so the product team is not ever at risk. So yeah, that's how we do that dance, and we're able to do it with a small, agile team that can really turn our hands to anything, which is also how we learned multiple methods and began to apply them across the business, to where now we have insight from the smallest UX test of something up to the largest market insight. A lot of fun for us.
- Brendan Jarvis:
- Yeah, it certainly sounds like it. The thing I was curious about in your mission statement was this somewhat apparent tension between fast decisions being made and the context of an agency model, which can sometimes have constraints on it in terms of the way it interacts with the rest of the organisation. So it was really great to hear that it sounds like you can have your cake and eat it too, if you think really cleverly about how you interact with the organisation.
- Laura Faulkner:
- Right? And it sounds all very fancy, but honestly, I led the development of this, and we as a team creatively created this because, one, we have a servant leadership attitude, and so when somebody needs something, if we can make a difference, we're going to. And second, because we're trying to solve real-world problems that were happening on the ground: schedules being held up, or supporting schedules and helping them happen; making it easy for stakeholders to consume what we did and what we had; and being able to be graceful about all of that. So we can answer, yes, we're here to help you when you need it, as well as keep an eye on more strategic questions and be discovering those ourselves, so that we have ready answers even before questions are asked. So yeah, we love it, but we did it because we were solving practical problems on the ground. Anybody can come up with a great model for your organization by being creative about solving problems on the ground.
- Brendan Jarvis:
- If you didn't have the constraints of time and budget and being able to find the right people, what, if anything, would you change about the current model that you've set up?
- Laura Faulkner:
- Oh, that's a really good question. So let me pause and think about that. I think we would do every single one of our micro-processes on every project. I've been developing this concept of micro-processes. So a micro-process would be an intake process. A lot of people have those; again, no big mystery behind any of this. It helps me to think about it, to put words and terms on it, like Brené Brown, she gives names to things. And so if you think about this intake process, even though it's something we do very simply and lightly, it's a 20-minute conversation. We have some questions we answer for ourselves, and in some ways we document it, but we don't let the process drive us.
- Brendan Jarvis:
- And when you talk about intake, Laura, from my interpretation of the research I did for today, it's a stakeholder conversation about what it is that you're there to do for them.
- Laura Faulkner:
- Correct, correct. But it's not just what we're here to do for them; it's what they're accountable for doing for the business. And that's part of what we want to get to, because they often know what they want to ask of us, but sometimes it's not the right thing and it's not really gonna serve them, so we need to get them to the larger conversation. And also, we are co-business people with them. We have a stake in this business too, and so we want to know: what is the business value that you're gonna deliver if we answer this question for you? And that helps us ask the right questions, and often get a whole lot more out of a study than they imagined, and than even we imagined if we'd just gone to answer the question that was on the surface. So we have this question that really takes them to larger conversations around their business goals, the business problems they're trying to solve, what drove this in the first place.
- Who's asking them to do this? Which part of the business? Is it the operational part of the business, and are they trying to solve for cost and efficiency? Or is it the marketing side of the business, and they're trying to solve for attention and message? And what good things will happen for the business if we get that? So that's taking them to that. But then we're also discovering: what are their constraints? What are their risks? If we don't ask them their constraints, we'll often give them a very good answer to something that is never gonna be doable. Even from a design perspective, if we know what their required tooling is, or that they have legacy that they have to fit into, of course we want to give them the ideal user experience answer, and we'll get that. But to give them something real that they can act on tomorrow, we also need to know who's gonna tell them no, and why.
- So we're gonna ask about constraints, and then we're gonna ask about what are the bad things that can happen. What are the risks, and how can we help mitigate those? So under the covers in our intake conversation there is actually a classical risk mitigation method. We are asking them: what bad things can happen? How likely are those to happen? What is the impact? What cascading bad things will happen if that risk gets through our net? And then, what can we do to mitigate that? If the bad thing does happen, how are we gonna fix it? It's really just classical risk mitigation planning and risk management planning, and we've just put that in a conversational form. But that often helps us find things, and ask questions of the users, that we wouldn't have thought to ask before. And then we ask about schedule and timeline, and then we also ask, what have we got on this spaceship that's good? Have we already done some of this research? Has somebody already solved this with a design that the engineers didn't act on yet? A lot of times somebody will ask us, we just recently had somebody come in and say, I need somebody to design a dashboard so that our customers can do this. It's like, did you know that there's already a dashboard [laugh]?
- So we take the stakeholder back into that dashboard conversation and say, well, here, look at this. Does this answer your question? Yes it does in this part, but no, not this. Okay, then we just need to add something to the dashboard. Or, oh, you need a dashboard like that, but for this other product. So now we know the scope and we're not duplicating work. So it's a very complex thing under the covers, but it's very simple on the top. That brings me back to my favorite user experience quote of all time, and I learned it seven days into my 25-year career. The quote is: it is a simple matter to make things complex. It is a complex matter to make things simple. And the source of the quote is a gentleman named Arthur Block, who was early in user experience, but then he pivoted to, I think, the legal field or something.
- So you won't find his name anywhere except where I've quoted this quote. But I did have some original materials from someone who knew him and captured that quote. So anyway, it's my favorite. It's still valid today, and it helps me remember that when it's hard to do user experience design or user experience research, that's okay. It's supposed to be hard, because it is hard to make things simple. That's why we have a job. It's why we have a profession. It's why we put this effort into it. So we're all about doing that hard work and that head work. And then also, at this point in my life, in my career, I'm about how do we make the practice of it easier and simpler, so that we can actually do it?
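The "classical risk mitigation method" hiding under that intake conversation can be made concrete. The following is a hypothetical sketch of a likelihood-times-impact risk register of the kind such a conversation populates; the fields and example entries are illustrative, not Rackspace's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One 'bad thing that can happen,' surfaced during intake."""
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (minor) .. 5 (catastrophic)
    mitigation: str   # what research/design can do up front
    recovery: str     # how we'd fix it if it happens anyway

    @property
    def exposure(self) -> int:
        # Classical likelihood-times-impact score used to rank risks.
        return self.likelihood * self.impact

# Hypothetical entries echoing the examples in the conversation.
risks = [
    Risk("Customer sizes the wrong infrastructure", 2, 5,
         "Test the sizing flow with extra participants",
         "Contract and migration rework"),
    Risk("Customer picks the wrong rental-car tier", 3, 1,
         "Standard six-user test", "Customer re-books"),
]

# Highest-exposure risks drive the decision to add participants.
for r in sorted(risks, key=lambda r: r.exposure, reverse=True):
    print(f"[exposure {r.exposure:2d}] {r.description} -> {r.mitigation}")
```

Ranking by exposure mirrors what Laura describes: the stakes and the potential bad things for the user are the biggest driver of how much extra research a project gets.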
- Brendan Jarvis:
- And it sounds like what you were describing there is that you have created a process that you can repeat, and that you've refined over time, to have those conversations with stakeholders. And it almost sounded like the conversation you might have with a doctor, where they're trying to diagnose, well, what's the real problem here? It's actually a form of research in and of itself. And it's almost like, if we're not having these conversations, and we're not putting the effort into structuring them in a way we can have them repeatedly and reliably, then we're actually in danger of UX research malpractice. We're doing ourselves and our organization a big disservice by not having structure around this conversation.
- Laura Faulkner:
- What a fantastic way to put that. I'm gonna quote you on that. That's awesome. UX research malpractice, yes. Because if we don't know these things coming in up front, then here's the bad things that can happen, so we'll do our own risk analysis here. One of the bad things that can happen is that we'll answer the wrong question. Who out there has not had this experience? All you UX researchers out there, you can raise your hands on this one. The stakeholders come in and they say, we need to know this or that. Is it A or B? And so you go and answer A or B for them, and you've done exactly what they asked. And then they say, well, that doesn't help us. But I did what you asked! So the question is, did we ask the right people?
- Maybe there was some subtlety in the question that they were asking, and they thought that they were telling you enough about who to test it with, and you thought you knew enough, but you really didn't. We really didn't understand what was the question behind their question. And so they think they know what question they want to answer, but maybe they don't, because that's not their job. Their job is to get the design out and to solve these problems: you're trying to solve engineering problems here, business problems there. Maybe they don't even understand what their own real question was until you brought them the data. But then guess who gets blamed? The researcher gets blamed for doing it wrong and doing it badly, but we did everything they asked. And so that's one thing that can happen: we didn't ask the right question.
- Or we didn't ask the right people, or there was really something else that stakeholder was trying to get at, and what's underneath that? Or really their boss is pressuring them about something, and they're just trying to get their boss to back off, or they're trying to show success in some way. And in all of that, if we've taken the time to discover it, then we can do the best we can to help them be successful, which ultimately helps the user be successful. So how this all came about is trying to solve that, and to solve those pain points, both for researchers and our stakeholders. Putting some discipline behind that let us do that. And I love to say this: it's disciplined process, held lightly. Because we've had people ask us, can't I just fill out a form? Do I have to have a meeting with you?
- Yes, you do. Because as researchers, we know that part of our magic is that follow-up question. And if you make your stakeholders fill out a form, then they think that they know how to answer it [laugh], but maybe they don't. But also, it can look like a barrier when you have a form between you and your stakeholders; it can look like a firewall: I have to answer all of these questions about risks and constraints and previous work done. So we actually hide that complexity from them in the conversation, and that's part of why we hold that process lightly. We don't imagine we need to answer all of those questions, even in that 20-minute conversation. We're not holding to "we have to fill out this form." We are using it as a framework to help us have the conversation in a semi-repeatable way. Not because we need to fill out our form and get all of this answered, no; it's because we need to discover this so that we can keep bad things from happening in the future. I don't always have to know the constraints and the risks. I don't even always have to know the timeline. Typically, I do need to know what business problem you're trying to solve, and who are the stakeholders that you're reporting to, that you're accountable to. Who's your boss? [laugh] And
- Brendan Jarvis:
- Yeah, that's such an important one to know, isn't it? Yeah.
- Laura Faulkner:
- Who's asking you to do this? So once I know those things, even if I don't get anything else, if I can get those two things, then we're still mitigating a lot of risk of the research plan going off the rails, or the research report, or how we analyze it, or who we talk to. Yeah, it just really helps us stay focused and get a whole lot more meaning out of the sessions. Oh my gosh, I love just getting a whole lot of meaning out of sessions.
- Brendan Jarvis:
- There's clearly a role that these conversations play in reducing risk. It really helps you, as you say, to identify what the constraints are, and also some of the political risks that may be at play, what's really driving them, to try and get to the bottom of things. But there are also some softer benefits, it seems, that you gather as a researcher by engaging stakeholders in this way. What have you found, in your 25-year career in UX, has been the main benefit of having conversations like these with people that are in other disciplines?
- Laura Faulkner:
- So that's a great question, and it really is the easiest one, because it does a couple of things. One, it does build relationship. It does build rapport, even when the questions are really hard and put them in uncomfortable positions. One of my favorite product partners of all time, he came to us faithfully: Laura, can we have that conversation that makes product managers cry? [laugh] Because
- Brendan Jarvis:
- He knew what he was walking into and
- Laura Faulkner:
- He knew I was gonna make him cry, because I'm gonna ask the hard questions: okay, yeah, I know you need this question answered, but what is the business problem you're trying to solve? What is the business value this will deliver? And sometimes he realized that he was asking something that was not gonna help him deliver the business value, or that somebody had asked him to do something that didn't even make sense. So sometimes this conversation would empower the product manager to be able to go back to their stakeholders and say, this is probably not even a good user experience direction to go. I just had this conversation, and that's not gonna deliver the business value we're trying to get. So, the conversation that makes product managers cry. One thing it does is hold their feet to the fire, and it hooks our work and their work to the big value that you're trying to get overall.
- Even if it's, we need this button, we need this Buy Now button: what are you trying to get with that Buy Now button? So when we pushed on that and asked what was underneath it: what is the ultimate business result? Or what is the bad thing that this is gonna resolve, from a bigger perspective? Not just, oh, well, we want it to be simple for users. Well, yeah, but what is that gonna get you, and what is that gonna get them? Then we had a real conversation, we could have a real study, and get meaningful information out of it. So that's one thing: it ties to the larger conversations and it makes your work more relevant. You're not just reporting on this one thing. Even if you're testing this button, guess what else you get to learn.
- So there's that piece, but then there's that other piece you talked about, the soft benefit, which is that it is relationship building. It is rapport building, even when it's hard, because this product manager knew, a few minutes into that first conversation, and definitely two minutes into the second one we had, that we are for him, we're on his side. We want him to be successful with his boss, as well as to serve our user population. So we're very passionate about our internal users, the users of our information; that's what our mission to spark fast, confident decisions is about. Yes, we're part of the larger mission of making things better and helping users breathe easier, of course, that's just who and what we are. But as a research team: who do we serve, and how can we help them, and how can we help them with that thing, ultimately, for the user? But it also showed us as experts. So that's the other piece: experts and co-business people. The fact that we are asking questions from a larger perspective, and helping put it in the context of the larger thing, builds our street cred as well, because we're not just order-takers who are making burgers the way you want them to.
- Brendan Jarvis:
- A hundred percent. Yeah, that was exactly the phrase that I was thinking of. And experts ask more questions than they give answers, quite often, as well. It sounds like what you're saying is the sort of difference in maturity that you reach as a research practitioner, or as a practice, where you go from doing things right to doing the right things, and doing both of them. And it's really clearly articulated in the way that you're interacting with your other stakeholders. Now, you mentioned business problems and also business goals, and I wanted to ask you about those as well, particularly in the context of planning research. And this might sound like a loaded question, cuz it is [laugh]: goals are kind of important, aren't they?
- Laura Faulkner:
- So in our research plans, we require ourselves, well, the intake conversation flows naturally into, and begins populating, a research plan template. Now again, we don't let the templates drive us. They are things for us to wield and to use, that facilitate us and organize what we pull in. And any good researcher out there is gonna know what a basic template looks like. If you're an expert, you don't even need to go look up the template. I can write a templated plan out of my head, and I can do a templated intake conversation out of my head, because I've been doing it so long. So, you practitioners out there, don't let process drive you. You want to use it and wield it to advantage. But when you become an expert, you don't really have to go back.
- My dissertation research even proved that when user experience researchers use very rigid tools and templates and that sort of thing, it can slow you down and you can actually miss some of your own expertise. So yeah, you use it as one of your many magic-box tools. So, getting back to business goals and answering your question about that: we require ourselves to document the business goal, and we document it as the very first thing in the plan. It's not the first thing we write, but it is the first thing in the documented plan. Even if we never look at the plan again, there's discipline in taking ourselves from our objectives and our research questions, here's the things that we wanna get answers to, not the questions we ask participants, but the objectives and questions that we want to answer for ourselves as researchers, as designers, as engineers, as a business, which is a big section in the document, two or three sections down.
- And that's all of the kitchen-sink questions that we hope to get answered in the study. But before we do anything else in that research plan, we take ourselves back up to: what are the business goals? So this is where we've asked, in the intake conversation, what is the business problem you're trying to solve? And so let's go back to that Buy Now question. So there's a big, complex product, and they wanna make it, one, easy for users, and also easy and efficient for the business. Because if you can't just go put in your order, and you have to go talk to somebody every time, then that's expensive and complex for the business as well. And also, mistakes can happen if you don't have defined inputs and all of that sort of thing.
- So there's a lot of great reasons to do this. But if we think about what are the real business goals: why do we even want to do this? We get customers right now, so why do we need this Buy Now button? And when we get to that, the business goal is, one, to make us accessible and friendly to buy from. That's a good reason, cause you want to attract business and not put up barriers. You don't want to distance buyers from getting what they want and need, when they know what they want and need. And so that's a really good reason, but why are we doing that from a business perspective? Oh, we want to invite more revenue. Great. So when we hook into that, imagine how much more important our research becomes about that Buy Now button, and who it applies to, and why.
- And then, from a business perspective, we don't want to spend a lot of support time on repeated purchases and things that are very easy and defined. We wanna be able to spend support time on the big, complex purchases that do need help and do need a conversation to get there. But what was imagined initially was that every customer, for every product, needed a Buy Now button. Turns out that wasn't true, because putting a Buy Now button on the big products became a barrier: customers can't get what they need, because they must have a conversation, multiple conversations, and a kind of support that lets them make the correct choice. That was the thing that we found out. We had these enterprise users coming to us and saying, I need your help.
- I need decision support that's not based on some decision tree that you thought about and pushed me through an interface; I need your intelligent input on that. And then you have the other kinds of users: okay, wait, I already know what I need, and this is a really simple one, or I've bought this before, over and over, so just let me go get it. So what we found out, as you say, wasn't a yes-or-no answer, slap a button on every page. It let us deliver a more finessed answer that actually delivered the goals, easier to buy from, greater revenue, less input, less friction, because we were doing it for the right products and the right customers. So if we had not asked the question about why do you want a Buy Now button, and what are the business goals, and gotten to those revenue and ease-of-doing-business questions, we would've been doing it for the wrong people at the wrong time, and we wouldn't have learned so much about our
- Brendan Jarvis:
- Customers. Oh, that's a really important point. Something that I hear from researchers that they're frustrated about almost more often than not, is when it comes to presenting their findings. So say you've had your intake conversation, you've gone and done the work, and you've come back to your stakeholders to show them what it is that you've learned and help them to make that decision. Sometimes the stakeholders listen, but they don't take action. What are some of the common reasons for that?
- Laura Faulkner:
- Very, very good question. It's one of my other significant areas of passion: that front-end conversation, and that last piece, delivering something that's gonna make a difference. Cause that's what we're here to do. We're here to have influence. And some of the reasons that they don't take action are embedded in that intake conversation. Stakeholders don't take your advice because you didn't answer the right question, not because it wasn't what they asked, but because we didn't discover what the bigger question was. Like I said, there's a lot embedded in that conversation, and maybe it really wasn't the right direction. But the bigger piece of it is that we often present things in the wrong order. A stakeholder comes in and they need to know: what can I act on? What's the bottom line?
- But as researchers, we want to show them our beautiful work. And so we want to take them chronologically through our beautiful work, the way that our research plan is set up. We want to say, you had this question. Okay, great, yes, you had this question. Now, the first thing they wanna know in the first five minutes is, what's the answer? Tell me the answer. But no, no, no, we're gonna make you wait. We can't tell you the answer yet. First we're going to tell you: we did a qualitative study, and it was a think-aloud walkthrough, and we did it with these participants that had these demographics, and we tested six people, and all of this sort of thing.
- Brendan Jarvis:
- Oh no, make it stop.
- Laura Faulkner:
- Yeah, exactly. Already you're in pain, right? And they're just going, what's the answer? What's the answer? Cause
- Brendan Jarvis:
- That's what's in it for me.
- Laura Faulkner:
- Exactly. That's our world. That is not their world. They don't care. Do we need to have and document that information? Yes, but you put that at the end of the presentation. So at the front, then, okay, you could say, well, we'll just jump straight to the findings. Now imagine that we're a little more advanced, a little further in our career. And I will tell you, the reason I know this is because this is exactly how I did it, because that's how I learned it, and because I experienced the pain of it. You've got them down the wrong path and they're just anxious. They're just trying to find the answer, and they need to know what to act on. They can't act on how many participants. They just can't.
- And what the method was? They don't care. So then there's your next level of maturity: okay, so we went to this page, and then they had trouble with this button, and they did this, and they fussed about this, and they did this. Still, it's, okay, what do I do? We're a step closer to the promised land there, but we still haven't begun the conversation in the stakeholder's world. So now let's think about, what's the stakeholder's world? They need to know, what do I need to go do immediately? And stakeholders do not just want data. This is the real world. They want recommendations for action, even if they don't take them. Tell 'em what you think. Give 'em your opinion about what actions they should take immediately. So take that Buy Now example, where we have this Buy Now button that we've put in there.
- If I go in and tell them that I tested 25 people, because I had a whole lot of different demographics so I wanted five for each, and I did this kind of test and that sort of thing, no; they just need to know, how come my Buy Now button is failing? So I'm gonna tell them how to fix it. Okay, you need to implement Buy Now in this product and this product, and you need to delete it in that one. And now here's the decision points that you need to make. And these top three things: if you go do these top three things first, it's gonna have the biggest impact on success. You're gonna have more click-throughs and more conversions if you go do these three things. And by the way, the next slide is: here are the findings that supported that.
- They abandoned halfway through this part, so if you were to make this part easier, and change this there, and explain that, put a label on that, then you're not gonna get abandons there. Or they didn't see the button in the first place, and so they didn't go to it, cuz it was in the wrong place or whatever. Or they had a question that you needed to answer before they were willing to click on it. So, here's the data that supported it. So give your opinion. What we do with our opinion is we call it a research point of view, and then we let that go. We don't care whether the stakeholder actually does the action we recommended. We are just trying to help make things fast for them. They can take that or leave that, but it gets them thinking about action, and it tells them our professional business point of view: based on the data, here's how we interpreted it and what we would do about it.
- If we were in your shoes. Now, we understand that you're the one on the hook for this; you go make your decision. And if you don't take our recommendations, here is the data, and here's what we predict could happen if you don't do what we recommended. You don't have to, and that's great, but here's the data, here's the bad things that could happen, so you wanna be prepared that these bad things might happen if you don't take the recommendations. And, oh, you wanna know exactly how we knew that users were having trouble with this? Well, here's what actually happened, here's what we observed: they went to this, they jumped over here, they didn't do this, they had a question about that, they got frightened at this point. That's when we get into the detailed data. But now imagine it's the day after the meeting. They were just in our presentation, and they're going to their team.
- They don't wanna go find page 16 in our report to be able to tell their team [laugh] what we need to fix. Put it up front, make it easy for them. Use experience design on your research report. Make it easy for them to find the answers. So you put your recommendations, the top three fixes, top five, we use three to five fixes, up front. Here's the ones that'll make the biggest difference for you, or here's the critical one you absolutely have to do, cuz it's just gonna fail [laugh] otherwise. That sort of thing. And then, oh, if somebody pushes back on you, here's the data that supported that, and they can drill down on the detail. And then you put your participant stuff at the end, and your method stuff. If they are increasingly questioning, well, I don't believe it, and who did you test, and how did you do that? Okay, we'll be transparent, it's all in there. But that's not the first thing most stakeholders ask [laugh].
- Brendan Jarvis:
- Got it. So what you need to do and why we think you need to do it. And then everything else.
- Laura Faulkner:
- Yes, that's it. Yeah.
- Brendan Jarvis:
- It's interesting that you said research should have a point of view, a perspective, an opinion, a recommendation in there, and that ultimately it's the stakeholder's decision as to what they choose to do with that. But that also sounds a little bit scary from a researcher's point of view, because if you give your recommendation and then it turns out not to have played out how you thought, what are the consequences of that, and how do you manage those, if you've had experience with that in the past?
- Laura Faulkner:
- Have I had experience with that? Yeah, sure [laugh]. So this is another way that you make yourself attractive and accountable. You have to put yourself in the scary position that the stakeholder is in of being responsible for the outcome. So yes, you are taking a risk and you might be wrong. It's gonna put your own feet to the fire: do I believe my own data? Do I believe the data that I got and what I recommend? And you might not always have a fix for it; maybe you don't have a particular point of view. But we often do, because we're UX people as well as researchers, so there's that piece. But here's the other piece, and I like to call this how to have a point of view without having an opinion: we let go of the decision that the stakeholder has to make. What I have found through hard experience of failure is that there is often a very good reason why a stakeholder is pushing back.
- It's like, we get hurt when our work is pushed aside and not taken, or somebody pushes back really hard, or they tell us we're just downright wrong. And yes, that is hurtful, but the more emotional somebody is about that pushback, the more likely there is some big business thing that they didn't even think to tell you: something scary that could happen that they're accountable for, or some constraint that they didn't even realize or think to tell you about. Typically I find that most human beings have positive intent, and there is some big thing back there that's pushing on that. And that can change the whole solution, not just the conversation but the solution. So I wanna give them the respect of being professionals in their own right, because I want them to respect me in the same way, and I gotta do it first.
- So that's the first part of it. And then the second part is how you mitigate for yourself the pain of somebody having said, no, I don't believe you, I think you're wrong. You take your results back, you put them in your archive, you hold them lightly and politely, and you wait and see: did the bad thing happen? If the bad thing happened, then guess what? You get to be Superman, who comes in to save the day, because you already have the answer. You already know why it went wrong. There was a huge project that I was on, a big national project, and we found a catastrophic problem. You wanted to improve click-throughs by 30%? You're gonna lose click-throughs, because there is a usability problem right here, in the way that this is designed.
- And they rushed it to market and said, well, we'll just find out about it then. Well, they very confidently checked their numbers 60 days later and not only didn't get the result of plus 30% click-throughs, they dropped 30%; they had 30% less. And so they were trying to diagnose: what's wrong? Is there something wrong with the programming? And somebody said, well, we should ask research, did they find something? Yes, we did. So we came in and politely said, yes, here's the data, here's the thing, try fixing that. It's a simple fix; let's see what happens. And not only did we fix the 30% loss, but we got the 30% gain that they wanted. So because we were willing to hold that, to give them the courtesy of making the decision, because they had other business pressures, and then to present the data back politely to them at the end, they were able to take it and it fixed the problem. So you never know when something that somebody said no to can have a positive impact in the future. But what was more important in that case was preserving the relationship. Because that
- Brendan Jarvis:
- Gives you... So you didn't go in there and say, I told you so, then? I guess that doesn't sound like the right way to do it.
- Laura Faulkner:
- No, no, not by any means, because they were making the best decision they knew how at that time, based on the pressures that they were experiencing. You've got to give them that benefit of the doubt; they weren't just being mean to you. Most of the time that is the case. So give 'em a break. They're hurting and they're experiencing a consequence. So what can you do to help them out at that moment? It's a new ball game today. What can you do to help them out in that game? Give them the data, tell 'em what you think. Yeah,
- Brendan Jarvis:
- You talked about the very human aspect of what the stakeholder might be going through there, and I want to take that thread and talk about us as people in UX and product now. We're often quite a passionate bunch. We care deeply about the work that we do. We care about the outcomes for the user, and we also care about the business. But we're also people, right? We're emotional. We like to do our best, but things in the business world aren't always entirely supportive of the ideals that we might hold for our chosen profession. There are often competing agendas, there are conflicting priorities, there are people that you just don't get on with. What approach or mindset have you held, Laura, that's helped you to navigate the complexities of the business environment while being in UX?
- Laura Faulkner:
- That's a very good question. So first, I do acknowledge that yes, I am a human being, an emotional, biological being. And by nature, if I feel physically, mentally or emotionally threatened in some way, my brain is gonna go on high alert and I'm gonna have a response to that. And I do want my work to matter; yes, I am extremely passionate about it. So the first thing is to recognize that there is nothing wrong with my internal reaction to something. That is just a fact: okay, this happened, I reacted, okay. So now I ask myself, what's informing that reaction? What am I telling myself about it? Am I telling myself that I screwed up, they're mean, they're bad, life's not fair, the system is wrong, our processes are set up wrong, whatever it is? I might get into blaming around all of that, but that doesn't really help me.
- It's okay; I give myself a little bit of time to just fuss about it, or maybe fuss to a colleague, but first I just acknowledge: okay, this happened and I feel upset, I feel angry, I feel sad, and I believe that I'm being disrespected, or I believe that they're bad or that they're wrong, or whatever that is. So there is that. In that moment, I take a breath. Ideally; I don't always do this, but when I do it well, I do it gracefully. That's step one. And then step two is: be curious. And this is the single biggest answer to everything. Why did they respond this way? Why did they say no? What's behind that? I mean, that is the biggest medicine for everything bad that happens in user experience: what if I just asked the question about what that is about?
- What are you worried about? What's the concern? Or even be curious about why they would have reacted that way. Is there something about how I presented this? Is there something about this or that? We had one recently that happened like that, where we had to get curious, and we actually did a formal root cause analysis because it was such a big one. Somebody very important had said, that's just bad data, we can't trust anything you do. I mean, you can imagine how horrifying that is for researchers; that undermines everything that we do. So we got curious, first about our own data and our own methods. We went back and looked: okay, what delivered this result that they pushed back on? But then we also got curious about them, and come to find out, they had just taken a very big gamble on a product direction and they were really worried about the success of it.
- And our data told them that customers were asking for something else. So it was a terribly frightening thing and they were extremely exposed. And us presenting that in a big forum, even though it was innocent and we didn't know what was happening there, that they were getting ready to take this big gamble, it was very exposing for them for our data to say something opposite. So we got curious about that, and we went back and discovered: okay, why did customers say this instead of that? Is that really a bad direction, a bad thing? No, it wasn't. It was actually a good thing that they were going in this direction, because they anticipated something that customers didn't. And so what that told us, for informed action, was: okay, for that product direction, they're gonna have to educate customers to bring them up to understand the problem. And so it turned out to be a really beautiful thing, but it was horrifying at the time. We were able to solve it because we were curious both about ourselves, about our data, and about them.
- Brendan Jarvis:
- How did you take that conversation forward with them after they shocked you with this allegation that your data was terrible and you couldn't be trusted? What did you do next with them?
- Laura Faulkner:
- So that's where you take it to that next level of: wow, first of all, we get why you were so upset. Because we were curious, we took the time to discover what was going on. It's like, dang, we would've been upset too, no wonder. So the first part is to acknowledge them as human beings. My goodness, imagine they went home and fussed to the people in their house that night, because these people said that this was totally wrong and were [laugh] doing this and they're
- Brendan Jarvis:
- Bad. Were you having that conversation with that person one on one, after that sort of public lashing that you received?
- Laura Faulkner:
- Yes. Thank you for bringing out that subtlety. Yes, when you've had a big thing like that happen, you do wanna take it to them personally. So of course it was going back, and I personally even spent some time over the weeks following that to reestablish safe relationships with them in other ways. When I saw a good presentation about something that they did: wow, that was great; wow, this was really good; I really appreciated that, thank you for describing it. And then also going to them with questions. It's like, I realize I don't think I fully understand this part of the product direction; what does this thing mean, what does that thing mean? So really giving those people the respect of being experts and going back to them with: educate me, tell me more.
- Because that, again, established that I respect them as professionals, as co-professionals and as human beings. And then connecting over small things: finding those common interests, just like you do when you first begin establishing rapport with somebody. You connect over the human things as well. And leaving the controversial question alone, just leaving it alone for a little while. We did our due diligence on what the data really meant and what that meant for that product direction, now that we understood why they were upset. And then finally, yes, you take the answer back to the individual first before you take it public, because otherwise it's just gonna look like you're trying to defend yourself and make them wrong. And that's not gonna help anybody either. Yeah, you might win for a moment, but you'll lose for a lifetime.
- Brendan Jarvis:
- Such an important point. There seem to be endless books, blog posts and, yes, podcasts like the one that people are listening to now. And sometimes it feels in the field that we are drinking from a fire hydrant of information about what we should do and how we should do it. In your 25 years of experience, what have you found that is absolutely vital to get right in the practice of UX and UX research? And what, if anything, can we just leave behind and not be worried about?
- Laura Faulkner:
- Oh dang, that's a really good question. I have to pause to think about that. So first and foremost, the thing to take, to keep, to not let go of is: what does your gut tell you? Our expert minds are being trained over and over again by experience. The very nature of our daily practice is a daily experiment, collecting data, and from that we develop our expert minds. And so the most important thing to do is to develop your gut by doing it over and over and learning from your successes and mistakes. What worked, what didn't; what worked, what didn't; what worked, what didn't. And that over-and-over thing really begins to train the expert mind. Go back and look at Malcolm Gladwell's Blink, and he will tell you that this is how those expert medical diagnosticians got there, trailing back through that same pattern.
- It came after experience, over and over again, of success, failure; success, failure; success, failure. I got it right, I got it wrong; I got it right, I got it wrong. And in our case: this made a difference, this made a difference, this was listened to, this wasn't. Those things, over and over again, are really how I've developed my entire practice. The documented micro-processes, like the intake or the plan, and various other things that I've written or am writing, all came from small experiments in all of those things. So that's the part you don't ever wanna give up: the daily willingness to keep going out there and doing what you do, and succeeding and failing, succeeding and failing. Because you will learn over time; excitement and pain are both very good teachers. So yeah, that's the thing you don't ever, ever wanna leave behind: the experience of your own practice.
- And then, what not to worry about is getting it exactly right. It always shocks me when I present a defined process or concept and somebody asks me, well, how do I do that? Give me the steps, the detailed steps. And it's not really about the detailed steps. I'm happy to document those, but it's really about the larger reason that we need to do steps of any kind; it's not about the specific things to do. So that's the thing to let go of. There is no one way to do the whole practice. Do you have to be in it every single step, and should you be, every time? We would love to, and we know we can make a difference on that, but does that mean it's gonna fail if we're not? Not necessarily; there are a lot of smart people out there.
- There's a reason that you don't necessarily need to be in it every step. Is it better overall if you are? Sure, we can add slightly higher quality. But that's the thing to let go of. One, there's no one way to do the whole practice, and then there's no one way to do a particular practice or process, right? So five users, 10 users, 15? No, there's not a right answer. There are answers and approaches that tend to work better than others. There are processes that tend to deliver success better and faster than other processes. But you don't want to let any one approach or answer dictate how you do things in ways that work. That's what
- Brendan Jarvis:
- I love it.
- Laura Faulkner:
- [laugh].
- Brendan Jarvis:
- Yeah, be curious and hold things lightly. Don't let the perfect get in the way of the good. Yes. I hope that takes a weight off the shoulders of people who are listening or watching. Yes. Laura, this has been such a fantastic conversation. You've given us lots of practical insights and plenty of things to think about and to apply back in our practice. Thank you for so generously sharing those with us today.
- Laura Faulkner:
- You are so welcome. It is my pleasure indeed. And thank you for what you're doing out there, getting this kind of thing out to people in the world. It matters, and it's part of what our profession is about: we boost each other up
- Brendan Jarvis:
- 100%. And you're most welcome. It's a lot of fun doing these interviews; I really enjoy it. Wonderful. Laura, we'll definitely link to your profiles and where people can find you in the show notes, so keep an eye out for those, people. And to you, the people that have tuned in: it's been great having you here. We'll put everything, as I've mentioned, in the show notes, including where you can find Laura, plus all of the resources that we've touched on. If you've enjoyed the show and you want to hear more great conversations like this with world-class leaders in UX, design and product management, don't forget to leave us a review, or a comment on YouTube, or an interaction of any kind; we'd love to hear from you. And subscribe as well; that would be really, really great. And until next time, keep being brave.