Bill Albert
Quantifying the User Experience
In this episode of Brave UX, Bill Albert reflects on what it takes to run effective UX research, why evaluating more than usability is important, and how UX leaders can unlock better budgets.
Highlights include:
- Are UX leaders who aren’t quantifying UX derelict in their duties?
- Why is UX sometimes the ambulance at the bottom of the cliff?
- How can UX leaders secure enough budget to enable UX research?
- Should UX research be concerned with measuring preferences?
- Who was Dr. Tom S. Tullis and what impact did he have on your life?
Who is Bill Albert, PhD?
Bill is the SVP, Global Head of UX & Customer Development at Mach49, a growth incubator and accelerator for Global 1000 businesses that has over $50B in market cap and counting.
Prior to joining Mach49, Bill was an Adjunct Professor in Human Factors at Bentley University and the Executive Director of the Bentley University User Experience Center for nearly 14 years.
Along with his good friend, the late Dr. Tom S. Tullis, Bill is the co-author of two books, Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics and Beyond the Usability Lab: Conducting Large-Scale Online User Experience Studies.
Since 2013, Bill has been the Co-Editor-in-Chief of the Journal of User Experience, a peer-reviewed, international, online publication that's dedicated to promoting and enhancing the practice, research and education of UX design and evaluation.
Transcript
- Brendan Jarvis:
- Hello, and welcome to another episode of Brave UX. I'm Brendan Jarvis, Managing Founder of The Space InBetween, the home of New Zealand's only specialist evaluative UX research practice and world-class UX lab, enabling brave teams across the globe to de-risk product design and equally brave leaders to shape and scale design culture. Here on Brave UX though, it's my job to help you to put the pieces of the product puzzle together. I do that by unpacking the stories, learnings, and expert advice of world-class UX, design and product management professionals. My guest today is Dr. William Albert, or Bill, as he prefers to be called. Bill is the SVP Global Head of UX and Customer Development at Mach49, a growth incubator and accelerator for Global 1000 businesses that has over $50 billion in market cap and counting. Prior to joining Mach49, Bill was an adjunct professor in human factors at Bentley University and the executive director of the Bentley University User Experience Center. For nearly 14 years at the User Experience Center, Bill and his team partnered with over 100 global companies to help drive innovation and develop competitive advantage through UX research, design and strategy.
- With over 20 years of experience in UX, working across industry, academia and consulting, Bill brings a unique and particularly quantifiable perspective to the field. Along with his good friend, the late Dr. Tom S. Tullis, Bill is the co-author of two books, Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics, and Beyond the Usability Lab: Conducting Large-Scale Online User Experience Studies. Can you tell that he thinks measuring UX is important? More on that soon. Since 2013, Bill has been the Co-Editor-in-Chief of the Journal of User Experience, a peer-reviewed international online publication associated with the User Experience Professionals Association that's dedicated to promoting and enhancing the practice of research and education into UX design and evaluation. Bill's own research papers have been published in other leading academic journals, such as the Journal of Information Design and the International Journal of Human Computer Interaction.
- And he's spoken at over 50 national and international conferences. And now he's here with me for a conversation on Brave UX. Bill, welcome to the show.
- Bill Albert:
- Yeah, thanks for having me.
- Brendan Jarvis:
- Yeah, it's really great to have you, Bill. And as I mentioned, I did enjoy watching your talks. You are someone who has a particularly, as I mentioned in your intro, quantifiable lens on the field, and you're also someone with a very strong academic background who's been very closely connected to educating the next generation of UX leaders, and also to the provision of consulting services through your work at Bentley. Now, you are also, as I mentioned, the co-editor-in-chief of the Journal of User Experience, and you've done that for nearly a decade. On top of all the other things that you've done, why have you dedicated so much time and energy to this?
- Bill Albert:
- It's a great question. I think, going back decades when I first started out, I always felt like I needed to have a foot in both academia and industry. I never felt a hundred percent comfortable being completely in one camp. I've been very motivated academically. I'm trained as a researcher, that's how my brain works. I'm constantly asking questions, trying to figure out how to answer those questions and to try to disseminate that research in the hopes that somebody finds it useful. But at the same time, I've always wanted to make sure that my work has value and I have my feet on the ground, and I love the energy of working in industry. So I guess I always wanted to be in both. And my position at Bentley kind of afforded me that opportunity, and what I'm doing right now with Mach49 does as well. And the journal was a big part of that too, to help really establish the UX field, to give it more rigor. When I first started getting into UX, in really the very late nineties, I just found it didn't have the scientific rigor that I was expecting, and I have wanted to push it however I could. And the journal was a great kind of vehicle for that.
- Brendan Jarvis:
- So given your near decade of experience as the editor of the journal, and the fact that you're also a published academic yourself, what questions have you observed the field asking itself over time, and how have they changed? Has there been any sort of observable trend, since you first started in that position to where we're at as a field now, in the types of questions that we're wrestling with?
- Bill Albert:
- I think, at least what we're seeing in the journal, and by the way, before I answer your question, just a heads up for those listeners: only very recently, only since last month, have we been the Journal of User Experience. Before that it was the Journal of Usability Studies, or JUS, since the founding in the early two thousands, before I became co-editor. So same journal, just a very recent name change. And that sort of takes me to the trend I'm seeing, especially in the journal: the submissions that we're getting are much broader, more holistic, more in line with user experience, or how we think of user experience, as opposed to early on being really about the mechanics of doing usability testing and the metrics associated with usability testing, which was very important and very useful at the time to get our field or our practice going. But I never saw it as kind of like, that's all we do is usability testing. Even in the late nineties, early two thousands, we were talking about user experience and trying to think about it more broadly. And I think that the journal has kind of tried to reflect that over time, but more recently we're just seeing a lot more innovative research coming our way.
- Brendan Jarvis:
- I know that in your own work, in particular to do with UX scorecards, you've gone beyond usability, and I would like to come to that soon. But I was also curious about whether, as the field has grown, you've observed any difference in the types of people who have been submitting papers over that decade?
- Bill Albert:
- Well, one obvious change has been where we get papers from. We are now getting papers from all over. I mean, we're getting papers from Fiji and from Finland and from Kenya, and from all over, especially Africa, which is so underrepresented, from South America, from every corner. It's not just been US and Western Europe based. So that's been huge for us. And the type of people, I think we're probably reaching a broader audience, or at least I hope we are. It really straddles this line between kind of academic and practitioner who's interested in research that's peer reviewed. And that's really the difference with us, is that it's going through an extensive review process. We only accept about a third of the papers, and we wanna make sure it's really sound research that folks can kind of rely on. The other interesting thing, and there's a funny kind of irony here, is that with the Journal of Usability Studies, we don't publish usability studies, because we want to make sure that we publish work that has broader implications.
- So if somebody does a usability study about Product X, if you're working on Product X, that's great and you love it and maybe it's very relevant, but everybody else probably doesn't care about it. So it has to have some kind of connection to a broader theme or issue, either from a methodological standpoint or from some other perspective that kind of generalizes to a larger population of readers. So we kind of stay away from usability studies per se, and we recommend to people, if you're doing research on a particular product, let's say it's in VR, then publish in a VR journal instead of ours.
- Brendan Jarvis:
- You mentioned it was calibrated towards academics, but also practitioners who are up for having their work peer reviewed. Are there any minimum sort of academic qualifications, or other industry-based qualifications or experience levels, that are required in order to submit a paper to the journal?
- Bill Albert:
- No, no, absolutely not. I mean, anyone is welcome to submit, but we do have a pretty formal set of submission guidelines that we ask all the authors to follow. So what we don't want is somebody just to take a 10 page paper from their undergrad time and just throw the logo on and submit it. It's gotta follow a certain standard, just like any submission to any other journal would, in terms of APA style. Interestingly, I believe the only required section we have is called Tips for Practitioners, because we wanna make sure it's going to connect to the practitioner as well. And so whoever it is has to be able to figure out, okay, this is great, but what does this really mean for that practitioner? So that's a really important aspect. But no, anyone's more than welcome to submit. Usually the turnaround time is a couple months, which is pretty standard for journals.
- Brendan Jarvis:
- Well, if you're listening people, then get your thoughts together and submit a paper to the journal. It would be good to see more of you publishing your work, or attempting to get your work published, through the journal. I'm sure that they would appreciate the submissions. Bill, you are someone, and you mentioned this earlier, who clearly believes in introducing or upholding certain standards and rigor in UX research. And you can see this through the papers you've published, the books you've written, your body of work, your 20 years in this field. In your experience of the field, and from your observations of peers and others in this wonderful field of ours, is there a need currently for more rigor in UX research?
- Bill Albert:
- First off, I think that we've come a long way. So when Tom Tullis and I first came up with this idea for a book about metrics, it was almost like people would look at you with this quizzical look: why? What do we measure in UX? How can you measure that? It's very subjective. And Tom and I really saw eye to eye that actually, this is something that not only can be measured, but it should be measured. Companies are making really big decisions. And we just felt like simply relying on some kind of purely qualitative insights or anecdotal evidence just wasn't enough. Companies were demanding more. And that sort of precipitated this idea around measurement. But to answer your question, I think that things have really moved toward that measurement direction for sure. There's multiple books on the topic now. I hear more and more companies asking for it.
- It's not a nice to have. Companies are demanding it, putting budget toward it. So there's always more that can be done around it, I believe. And I just wanna give a caveat for a second: as quant as my brain works, I really value qualitative UX research. I do more qualitative than I do quantitative. I think really the power is in a mixed methods approach, bringing in both qual and quant. Quant can only do so much. I can measure stuff, but if I don't know why, I'm at a disadvantage, and I need both qual and quant together. So that's sort of my focus, I guess.
- Brendan Jarvis:
- Someone who echoed those exact sentiments, who you may know, is Dr. Sam Ladner, who I had on the show a few months ago, probably earlier in the year actually. Oh gosh, this year's going so quickly. And she's very much behind this idea, she's written a book on it, in fact, that it's mixed methods, the integration of those perspectives, that actually enables us to calibrate a more effective view of what might actually be going on and make better decisions. So a hundred percent agree with that. I just wanna come to someone who you mentioned, and who I mentioned in your introduction, who I understand has been quite instrumental in your career and was a very dear friend of yours, and that's the late Tom S. Tullis. Who was Tom, and what impact did he have on your life?
- Bill Albert:
- Yeah, so I mean Tom started off, he did his PhD in basically human factors, I think it was engineering psychology, at Rice University. He went on to do a postdoc at Bell Labs and was always, kind of like myself, half academic, half practitioner. And I think that was sort of the reason why we connected. We actually first met, I remember this very clearly, at a CHI conference in The Hague, I think in the year 2000 or 2001, around that time. And I saw him give a talk and I was just blown away, because he had this ability to explain difficult concepts really simply. And so I went up and I talked to him and we exchanged information. He was working at Fidelity Investments, where he worked for more than 25 years, in Boston, and I'm in Boston.
- And I don't know, I just felt that spark of a connection, that we were seeing things in a similar way. I ended up getting a job at Fidelity and ended up working for him. And during that time, I remember we would have lunch together one day a week and just start to talk about different ideas. It was just like we were riffing off each other. What about this, and what about this? And we both were interested in questions and the ways to answer questions; the answers themselves kind of didn't matter so much. It was really like, well, what about this? Or what about this? How would you do that? And it was just such a lovely time. And as we were starting to get to know each other, I'm sort of doing my day job of a lot of usability testing, and we would sort of carve out time in our calendars to run little research studies together.
- And that was such a great time. We were both learning a lot. And I was aware of his background, and he had done a lot more than me in this field, especially in HCI, human factors. But we really connected on it. And then in the early two thousands, there was an interesting moment where he had decided to bring a group of usability experts into Fidelity to talk about measurement. And we had this one day meeting, and we had 20 people in a room from different companies, and everyone was sort of saying, oh, I measure things this way or that way. The only thing that was in common was the SUS, but everything else was kind of these homegrown methods. And from that, I was thinking, oh, that's so interesting that there isn't any standard. And sometime later, and I remember this very clearly, because sometimes those ideas come to you at really odd moments.
- I had just come back from grocery shopping and I was ready to take some groceries up to my home, and I thought, Tom and I should write a book about this. And then the vision occurred to me, and then I thought, wait, let me go put the frozen foods away and the stuff in the fridge. And I run over to my computer, I go onto Amazon thinking, I don't think that there's a book about this. And that was confirmed. There wasn't a book. And I think maybe the next day or two days later, I sort of had the courage to go to Tom and say, hey Tom, I had this crazy idea. Would you wanna write a book together on metrics? And without hesitation, without hesitation, he goes, yeah, sure, sounds good. And I'm like, wait, hold on, this is a big commitment.
- He's like, yeah, yeah, I know, but I think it would be great. And we sort of took it from there, and it became really a labor of love for many years. We published the first edition in 2006 or 2008, and then we did this other book you mentioned, Beyond the Usability Lab, with Donna Tedesco as well as Tom. And then we came back to a second and then the third edition that just came out in March. Very sadly, Tom passed away from Covid very early on in the pandemic. We were about two-thirds of the way through that third edition. I sort of took it the rest of the way, but it was really emotional. We really had bonded as friends over many years, having taught many classes together and traveled together. And we were very much joined at the hip in a lot of ways.
- Brendan Jarvis:
- He sounds like a really special person and clearly someone who together you were able to sharpen each other. I was curious to get your, as someone who knew Tom so well, I was curious to get your thoughts on if Tom was joining us in this conversation and he was here right now. And given what you know of Tom and his body of work, what would he want to say to the UX leaders that are listening to today's conversation?
- Bill Albert:
- That's a good question. I think that there were a couple things that were really driving forces in him. One is he deeply cared about mentorship. He really, really cared about helping the next generation. And he would want to encourage people that are established in the field to help the young professionals, people who are trying to get in, whether they're undergraduates or they're just starting a career, to do what you can to mentor. And he was a true teacher. And the second thing, I think he really cared about advancing the field and would encourage people to ask questions that they can try to answer, especially experimentally. He was always tinkering with things, physical things. He would go to the MIT flea market and buy old pieces of technology and tinker with things. And he loved experimentation. And he would say things like, I wonder if there's a difference between one button label versus Submit in terms of people clicking on it, or things about the size of the button. He was always coming up with these small little experimental questions that he wanted to answer. And so I think he would encourage people to set up quick little experiments to answer questions and then to share that with the broader community, just to help kind of raise the field.
- Brendan Jarvis:
- Bill, I believe Tom's daughter, and I'm not sure if you were involved as well, set something up. I believe that there's a scholarship or a memorial fund that's available through Bentley for people new to UX. Is there any info you could share about that?
- Bill Albert:
- Yeah, yeah. So a little while after Tom's passing, when I was working at Bentley University, we went to Fidelity, where he worked for I think 27 years. And we said, would you like to help start or seed a scholarship in his name for people who are new to the field, who are starting graduate school? And Fidelity was very generous, and they offered a nice sizable donation. And then we got additional donations on top of that to have even more funds available. And the first recipient, I believe, was just named in the last couple months for the class of 2022. And that's a scholarship that will go on in perpetuity in Tom's name through the Bentley UX program. So I'm really touched by that, so happy, because that was so near and dear to his heart, I know. And Bill Gribbons, who's the chair of the graduate program, was instrumental in setting that up. And when Bill and I went to Cheryl, and his wife, and his other daughter Virginia, and told them that we'd set up this scholarship, they were just in tears. It was a really lovely moment, 'cause it's something he would've really been touched by.
- Brendan Jarvis:
- It seems like an incredibly fitting way to remember Tom, given what you've just told me about who he was and what he believed in. Now I wanna turn our attention to your career. And I want to wind back the clock a little. We've talked about the early two thousands. I want to go back even a little bit, mm-hmm, further than that, to when you walked out of your education and completed your PhD, and I know you did a postdoc, but not that long after, you walked into what would've been the most exciting time, I imagine, for the worldwide web, which was the time that immediately preceded and also ran through the dot-com bubble bursting. And you started at Lycos, as far as I remember. So for people that don't know, that are younger than I am, Lycos was a competitor to Google. It was a search engine slash directory. You could probably shine a bit more of a light there, Bill. And the years you were there were between 1999 and 2002, and you were a senior user interface researcher. What was it like, if you cast your mind back to that time, walking through the doors at Lycos for the first time, for example, and then those years afterwards? What was it like being there during that time of the bubble and its subsequent bursting?
- Bill Albert:
- So yeah, it was certainly an interesting time, no doubt. So given that context: I had finished my PhD and then I had done a postdoc looking at navigation systems in cars. And that was really interesting. And it was kind of my first going from research in an area called spatial cognition into human factors in driving. And people were talking about navigation and information spaces as being kind of this way to look at the worldwide web. And that was in '99. And a friend of mine had gotten a job with Lycos, which was a portal. It was basically like Yahoo. And I joined in April 1999. And that month they had overtaken Yahoo in page views, and there was all this hoopla. It was one of the hottest, biggest web properties. They were buying up all these different companies left and right, basically just trying to buy more and more page views. And I didn't know anything about usability per se. My wife was kind of like, are you sure you know how to do this job? Do you even know about this? I'm like, yeah, I'll figure it out.
- I can BS for a little while. But the methodology was very familiar to me. So my first impression was that the people I was with were really smart. There were some great people there, many of whom went on to Google soon after. And the second thing that made an impression on me was that we were basically just running a lot of usability tests of different products and finding that they were crap. They just didn't make sense. And granted, expectations were a lot lower then, but still, we were like, really? Are you sure? And I was surprised, because this whole company was sort of built on this idea of delivering a great user experience. At least that's what they were saying. And we weren't seeing the same in our data in the lab. And I was like, how long can this go on? I mean, eventually people's expectations start to rise, and can you deliver on a better product, a better experience? And we were convinced, at least internally in our group, that, no, this is not good. But those were probably the two immediate impressions.
- Brendan Jarvis:
- And your group, if you recall, I know there's been a bit of water under the bridge since, how do you recall thinking about Google at the time? What was the view? What did you think about Google?
- Bill Albert:
- So we weren't worried about Google, we were focused on Yahoo. When Google came, I remember we did a competitive study, and we were interested only in the quality of the search results, nothing about the branding. So we would basically do these studies where we kind of almost white labeled it and would just say, okay, you're searching for a trip to Florida and here are three sets of results. And we knew it was Lycos, Yahoo, and Google, to see, just on the search results alone, are they better? And I believe we found that they were, but it was a long time ago, more than 20 years ago. But the thing that struck us was that with Google, for any of you who remember those portal pages, those portals, it was like everything and the kitchen sink on a single page and you just scrolled.
- It was just like the information density was so high, it was so hard. And here, who are these people, to have basically a search box and a link or two? And I don't know if it was cuz of that or not, but we used to have a saying: dare to be simple. Basically have the courage just to give what people want and no more. Instead we were trying to interest them in fly fishing and then recipes and all this stuff that people were not asking for. And so I think that was one of those things that kind of made an impression on us, like how can you get away with that [laugh], because it wasn't the world we were living in.
- Brendan Jarvis:
- There's what you say, and then there's what you do. And it almost seems, I know we're sort of reflecting on the history of the web here a little, but it seems at least that while it wasn't a hundred percent central to its business strategy, UX seemed to be much closer or much more strategically deployed by Google than it was by any of those other portals. The AltaVistas, the Lycoses, the Yahoos of the world. It's like they got it. And I don't know, there's probably people that have written books on this, but they seem to have demonstrated it pretty effectively, haven't they, over the past 20, 25 years or so. I would love to come, briefly, to your time post-Bentley, cuz you spent 14 years at Bentley. Clearly that was a place, from the outside looking in, that was quite dear to you, as the executive director of the UX Center there, but also as an adjunct professor. And now you are heading up this new role at Mach49 as the global head of UX and customer development. I don't imagine it was an easy decision, after 14 years or thereabouts, to leave Bentley.
- Bill Albert:
- It was definitely not. It was for really a couple reasons. One is that I was enjoying Bentley and I really cared deeply about the institution, and specifically our center, and especially about the graduate students that we were working with. And so it was difficult, but I felt like I needed to push myself a little bit more. Not to say I was comfortable with things, but at Bentley, most people in UX are working along a continuum, usually focused on products that either have already been designed, have been developed, are sort of in some process of being designed, or the idea's already been kind of formed and we're trying to optimize it. And that is incredibly important. But what Mach49 does is work with large companies who need to develop new ventures, new businesses. And what are those businesses based on?
- New products, new services. So where is the customer pain, and how do we design a business to remedy that pain? It was a way for me just to get so far upstream that instead of saying, wait, how did this product get to us? [laugh] Does this product even make sense? I could now work and help to define not just the product, but the business. And that just seemed so appealing to me and so exciting. And I was like, you know what? I think I'm ready. And the way I did it was, I'm kind of a fairly cautious, a little bit of a cautious person. So before jumping in full stop, I worked over the fall with Mach49 as a faculty advisor, just to get a sense of the people, their process. And then there was a point in time when they approached me and I said, listen, I'm loving it. I would love to join. They said, we'd love to have you. And then we made it kind of permanent. So I sort of took it kinda slow to make sure it was the right move, especially at this point in my career. I wanted to make sure that whatever the next chapter was, it was not going to be a mistake. And I'm just absolutely thrilled with how things are.
- Brendan Jarvis:
- What have been the pleasant surprises and what are the things that you wake up in the morning and you're like, oh man, this is such a great place to go to work. What are the challenges that you're really looking forward to?
- Bill Albert:
- I think that the exciting thing for me is to think about how we can deliver value to these new ventures. How we can leverage our user research methods to get to the right decisions and to help ideate and to help create new products. Because most of our methodology is around, we always are defaulting towards, usability testing, and that's important, but there's nothing to usability test at this point. We don't have a product. But how do we understand pain points, or the severity of those pain points? And just like Tom tinkering with different things, I'm always thinking about different metrics. What are some interesting metrics? Right now, for example, I'm playing around with using different versions of a constant sum. So if you say to somebody, you've got a hundred dollars to spend to solve this pain or these pains, how would you distribute that money, or those points or whatever you wanna call it? So we can get a more nuanced view of customer pain, so we can measure the severity of them, and what would happen if you start to fold in other pains that you're experiencing as part of your business.
- So how do those pains compete within a much broader sense of what's going on in your context, in your world? So stuff like that really, really excites me: to help set the strategic direction for the team, to be able to work with incredibly smart, ambitious, focused people who are really supportive of one another. I know that sounds kinda cheesy, but it really is true. I've just been super impressed, and I love working with global challenges. How do we, for example, talk to farmers in Indonesia, or convenience store managers who wanna set up EV charging stations? It just goes on and on, things like that. I get really jazzed about complex research challenges.
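[Editor's note: for readers curious about the constant-sum technique Bill describes, here is a minimal sketch of how such data might be analyzed. The pain points and point allocations below are invented for illustration; the idea is simply that each respondent splits a 100-point budget across candidate pains, and averaging each pain's normalized share yields a severity ranking.]

```python
# Constant-sum analysis sketch: each respondent distributes 100 points
# across candidate pain points; we normalize each response (guarding
# against totals that don't sum exactly to 100) and average the shares.

def constant_sum_scores(responses):
    """Return (pain_point, mean_share) pairs sorted by severity.

    responses: list of dicts mapping pain point -> points allocated.
    """
    totals = {}
    for resp in responses:
        budget = sum(resp.values())  # normalize so each respondent counts equally
        for pain, points in resp.items():
            totals[pain] = totals.get(pain, 0.0) + points / budget
    n = len(responses)
    scores = {pain: total / n for pain, total in totals.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical responses for illustration only.
responses = [
    {"slow checkout": 50, "confusing navigation": 30, "poor search": 20},
    {"slow checkout": 70, "confusing navigation": 10, "poor search": 20},
    {"slow checkout": 40, "confusing navigation": 40, "poor search": 20},
]

for pain, share in constant_sum_scores(responses):
    print(f"{pain}: {share:.0%}")
# → slow checkout: 53%, confusing navigation: 27%, poor search: 20%
```

Because every respondent spends the same fixed budget, the scores are directly comparable across people and force trade-offs, which is what gives the method its more nuanced view of severity than independent ratings.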
- Brendan Jarvis:
- I was going to say, it sounds like the Willy Wonka's chocolate factory equivalent for [laugh] UX research. Thinking about, you talked about the difference between evaluating an existing experience and trying to better understand customer pain points and problems, I was curious to know whether or not you've been using something like jobs to be done to try and get a dispassionate perspective on what those things are, and to quantify and prioritize them.
- Bill Albert:
- Yeah, we've done some work around jobs to be done for sure, and it depends on the project and the team. I'm trying to push my team, not surprisingly, to take that mixed methods approach: to understand the what and the why through qualitative, and the magnitude of the problem through quantitative. And I think that's really where we're at right now, bringing those two kinds of perspectives together.
- Brendan Jarvis:
- Yeah, quantifying the experience and using mixed methods. We've talked about this a number of times so far now; clearly a big part of your life's work has been helping people to do that. And while I suppose it's brought you joy, it seems to have also brought you a little bit of frustration. I'll quote you now. You've said: "You can't go on anecdotal evidence, which you mentioned earlier. You just can't go on a hunch. It's really risky to do that. This is probably the single biggest mistake that the companies we work with make — they come to us too late." That was while you were at Bentley. Now, there'll be many people listening to this episode who, if they're not at this moment, will have previously found themselves in a similar position: frustrated that they feel like UX is the ambulance at the bottom of the cliff, or some other analogy that's even less complimentary of this type of approach. Why do we find ourselves in this position more often than not?
- Bill Albert:
- Because it's really easy to wait till the end. There's no one telling you no. You've got great ideas, you feel like you know the customer's problems, so let's just start building it. Come on, grab a hammer and some nails and let's go out and start building the house. Hey, did you realize that the bathroom isn't connected to any room? People just get excited. Anyone who's done any kind of home improvement project knows that if we're doing a painting job, almost all the work is in the prep. The actual painting only comes at the very end; that's sort of the easy, fun part. When you put the paint on, it looks great. And we can just start painting over the old wallpaper — it's easy, it looks good, and we get that satisfaction right away. But you gotta lay that foundation. And with UX, that foundation is understanding. So, for example, benchmarking: sometimes companies will say, why should we evaluate something we know is going to change? Well, because maybe you wanna know whether you're actually making it better or not. I'm certainly not the first person to think of that; it's just so counterintuitive.
- Brendan Jarvis:
- Yeah, it is counterintuitive. But I also wonder whether we are fighting a little bit against human nature here. You know, you talked about the rush and the excitement to just get building: I've got this great idea, let's just do this. When you run an evaluation — probably more often than not, in my experience — it's not 100% flattering of what has actually been created. So I sometimes wonder how much of this is actually willful ignorance.
- Bill Albert:
- Certainly could be. I think a lot of people outside of UX, they know it, they understand it, they appreciate it, but when it comes down to it, are they going to give you the budget you need? Are they going to make a little room in their project timeline for you? Maybe not, right? There's a lot of lip service going on. One thing that I really believe is that with what we do, you add steps to save time. Brendan, if you come to Boston — it's a really confusing city to drive around — and you drive from the airport to my house, you're probably going to get lost. You could keep driving around and driving around, and you'll eventually get to my house. But you could also stop and ask somebody for directions, at least in the old days, and they could point you in the right direction, and you're going to save a lot of time. And that's the way with UX: we need to understand the users, the contexts, the problems, to get to a better place — a better product, a better experience — faster. And people don't get that, or at least they don't fully appreciate it. They see it as kinda this add-on extra thing that we need to do, or it's coming from high up, and so it's kinda a box to tick.
- Brendan Jarvis:
- You've run into this more than once, by the sound of it, and you've got 20-plus years of experience in the field. What's one area of pressure, or one point of leverage, for practitioners or UX leaders who feel like they're fighting an uphill battle here — fighting for room in that house, or in that palace, that people are building, just to do maybe some basic evaluation of what's actually going on, sooner rather than later? What can they do? Where should they focus their efforts?
- Bill Albert:
- I mean, I think for many practitioners it comes down to money and budget. And one argument that I've made — whether it's successful or not, I'm not sure — is to say to product owners or stakeholders: listen, how much is it worth knowing that we're going to deliver the best product, the best experience? How much of the budget would you be willing to set aside to make sure that you really have the thing you are intending to build or create? One percent? Half of one percent? Just give me half of one percent and we can guarantee that.
- Brendan Jarvis:
- Conversely, what is the cost of getting it wrong? What's the price of failure?
- Bill Albert:
- Extremely high, extremely high. Years and years ago at Fidelity, we had designed and built a product that ended up not testing well. Word kind of got around, and it was ready to launch, and they stopped the launch and basically went into a redesign. It was like a million dollars — it was expensive. I was like, wow. That was a real eye-opener for me, and hats off to them, because they were like, why would we knowingly launch something worse? We're all intelligent people; it's just that sunk cost fallacy, you know, you have to get away from it. It's hard. But there are companies like that, and sometimes they just don't wanna know. It's also looking at how other people are incentivized. Is it around creating a great product and demonstrating that, or is it around making sure you get it done by the end of June?
- Brendan Jarvis:
- Yeah, a fundamental difference, isn't it? And you talked about the sunk cost fallacy — that's right, it is such a powerful influence on behavior. It would be really easy, I suppose, and this happens quite a lot in UX circles, for us to look at the rest of the organization and go, oh, what are they doing? They don't get us, they can't see our value, they just need to take a step back and do some of this work. But how much of the responsibility, and perhaps the blame, for poor performance, or for the situation that we find ourselves in — and I know that we're going to be generalizing here — how much of this falls at the feet of UX leaders who haven't invested in understanding and building a quantifiable way of evaluating user experience, and communicating to leadership whether or not it's better or worse than the current state? Are these UX leaders derelict in their duties to their organizations, to the field, and to users by not doing that?
- Bill Albert:
- Oh, that's good. Derelict is a very strong word. I think that a lot of UX professionals would probably benefit from having some metrics available in their argument. But the same goes for having really effective video clips showing people struggling — those are really powerful too. Numbers are powerful, stories are powerful; bringing them together is even better. So I think it would be another kind of weapon in your toolkit, or whatever — something that would really help make that argument. But I don't blame UX people. They're doing whatever they can in sometimes difficult situations. It's hard, especially for junior folks, to be able to push back and demand more budget, more time, to have the data that teams need to make the right decisions.
- Brendan Jarvis:
- Is there a right way, or a right time, to introduce UX metrics into the design org, or into the wider organization, if they haven't really been part of the picture?
- Bill Albert:
- One of the very first quant studies I did, way back at Fidelity with [inaudible], was this. Every year we're kind of lining up projects: what do we wanna work on, our roadmap for the next 12 or 18 months? And we said, okay, why don't we do this — let's do a competitive study of Fidelity versus two competitors. Let's agree that these are the 20 tasks that are most common, and let's get some data to see where we do well, where we fall short, against whom, and why they beat us. And then let's use that as one of the drivers in coming up with our new slate of projects. And people said, yeah, we can set aside some budget — that seems like a smart idea. Who doesn't wanna know who's doing what, and why they're doing better than us on these key tasks?
- Or what do we have that we could actually promote, where we're doing well? So it was kind of an easy sell. We could sort of do it off on the side; no one was demanding it. And once we started that annual benchmarking, people started seeing the value of the metrics, and asking, hey, could you actually collect more data in your normal day-to-day project work? And we started bringing in other data sources too — voice of customer, analytics stuff from marketing. What we started doing was basically a big triangulation exercise: we have this data source and it's telling us this story, that data source is telling us another story, and we could start to see all these commonalities. It's highly, highly doubtful that a problem we're seeing in four different data sources doesn't exist. If we infer or see a problem in only one data source, then we might wanna learn more and try to validate it. But when we see it across multiple data sources, especially three or more, then we feel really confident. So that was another way of trying to see the big picture in a more quantifiable way.
- Brendan Jarvis:
- You're talking there about integrating insight and data from across the organization — across different departments, outside of the one you were working in — and bringing it together to try and get a more complete picture, or at least to identify areas of potential problems that needed further investigation. You've made that sound really easy, but I suspect it wasn't. What was the approach? Was this a command-and-control, we-have-a-mandate-to-make-this-happen type thing? Or was it soft power that you used? How did you actually make this happen?
- Bill Albert:
- Oh — well, first off, that was way back when, right? Okay. And the other power we had was interns, who could do a lot of the grunt work. So we literally would take a sample of a few thousand transcriptions from, let's say, phone calls coming in, and just categorize them: these are all the issues we're hearing around password resets, or around navigation, or whatever else. But yeah, it can be a lot of work. I don't know — it depends on the organization and the data that they have. But at least in theory, you should want to bring multiple data sources in to see a fuller, more complete picture. And the reason is, ultimately, it's about confidence. In measurement, it's about the confidence in making conclusions. What is the chance that I'm going to be wrong if I tell you that this thing is crap? I might be wrong; it might be good.
- Or the other problem: I tell you it's fine when it's really crap. Those are the two kinds of mistakes we can make as researchers — misses and false alarms, errors of omission and commission; they go by different terms, but it's the same idea. And what I'm trying to do is have as high a degree of confidence as possible in what I'm saying and what I report out, so stakeholders can make informed decisions. If I go in with one study with a small sample size, and we have this metric, and the stats can only tell me it was somewhere between really good and really bad, that's not going to help anybody. In fact, it makes me look a little silly. So at the end of the day, it's really about having confidence, through the data, in making the right decision.
- Brendan Jarvis:
- Well, let's talk about confidence — let's talk about sample sizes. This next question is potentially not surprising, but I feel that given your depth of knowledge in quantifying the user experience, you will hopefully be able — through your own skills, but perhaps channeling Tom as well — to make this complex concept, which actually seems really simple to be honest, really clear for people who are listening. One of the myths that exists out there, particularly as it pertains to usability testing, is the myth that five participants are enough to find 80% of the problems in a given experience, as is always talked about. But that's not actually what Jakob Nielsen was saying in his paper, was it? He wasn't advocating for that. What is the actual math? And when we talk about confidence, how does confidence relate to this myth of the five users?
- Bill Albert:
- So the math is actually really straightforward. It's 1 − (1 − p)^n, okay? And what that basically is, it goes to this idea of the probability of problem detection. Usability testing is different from other kinds of measuring, like measuring somebody's preferences with a question. With problem detection, we ask: what is the probability that, through any one person, we would detect that problem? And in that paper they talked about a p value — which is different from the statistical p value — a probability of problem detection of 0.3. So if a problem exists and I run 10 people through a usability test, on average three people are going to have that problem, and I'll be able to observe it. Now, n is the sample size. So a probability of 0.3 with five or six people gets us to that 80% — we can detect 80% of those problems with only five people.
- Brendan Jarvis:
- Is that—
- Bill Albert:
- Okay?
- Brendan Jarvis:
- 80% of the problems that only one in three people will experience.
- Bill Albert:
- So with five people, we have an 80% chance of seeing it, okay? And that's sort of the math behind it. But here's the rub: with some of the products we test, the problems are really obvious. Almost everybody trips over them, and we can see them, and there we might only need three people, because everybody's having the problem. For other products — imagine if Amazon came to you and said, we wanna do a usability test of our entire website and we want you to find all the usability problems. You'd laugh, right? Because the probability of problem detection is so incredibly small, versus maybe a low-fidelity prototype, where it's going to be much, much higher. So determining sample size is really important, and part of it deals with the fidelity of the thing that you're testing, and a whole host of other issues.
- Now, the reality, though — and I've seen this happen literally a million times, I think — is that when we do usability testing, if we test five or six people on day one and five or six on day two, people don't come for day two, or at least they're less likely to. And the reason is they feel it's kind of a repeat of day one. That's sort of the proof that day two is not providing that much additional value: the cumulative probability of problem detection is already high, and they're not learning anything new on the second day, after the sixth or seventh person. It's more just the designer, the product person, getting punched in the face over and over and over again. [laugh] Hard to take. [laugh]
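The detection math Bill walks through — 1 − (1 − p)^n, the chance that a problem with per-participant detection probability p is observed at least once across n sessions — can be sketched as follows. This is an editorial illustration of the formula, not of any specific study:

```python
def p_detect(p: float, n: int) -> float:
    """Probability that a problem seen by each participant with
    probability p is observed at least once across n participants."""
    return 1 - (1 - p) ** n

# Nielsen-style assumption: p = 0.3 per participant
assert round(p_detect(0.3, 5), 2) == 0.83   # five users catch ~83% of such problems
assert round(p_detect(0.3, 10), 2) == 0.97  # a second day of sessions adds little

# Obvious problems (high p) need very few participants...
assert p_detect(0.9, 3) > 0.99
# ...while subtle ones (low p) stay hidden even in larger samples
assert p_detect(0.05, 10) < 0.5
```

Note that the famous 80% figure only holds for problems at p ≈ 0.3; as Bill says, the right sample size depends on how detectable the problems actually are.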
- Brendan Jarvis:
- It can be pretty gnarly, can't it? Yeah. You're raising a really good point about when you split testing over multiple days, or when you have people who only attend one session: it's very easy for them to draw conclusions on the basis of an incomplete experience of the testing. And what I've found, at least — I don't know what the percentage would be — is that when I go back and re-watch sessions I've been moderating, I always draw out or find more subtle problems than I do in the room, or when I'm observing live. I was talking to Dr. Natalie Hanson, who's at ZS Associates, a few months ago, and the ladder of inference, I believe, is what came up. The point being — and I'll be interested in your perspective on this too, Bill — that if you are going to have people attending these sessions, it's not just enough to have them watch. It's actually that time with the pizza afterwards, where you start to integrate, talk about, and share openly: well, what was it that we actually saw? And try to get some alignment, or at least clarify any dark corners, before you leave and go off back to your day jobs.
- Bill Albert:
- Yeah. So I think I've sort of evolved on this. Back in the day, a long time ago, we were just happy for people to watch. And so there was this whole thing of [laugh], should we have the pizza? Or maybe they would like sandwiches instead? And let's make sure that—
- Brendan Jarvis:
- That was a crew reference. Yeah,
- Bill Albert:
- Yeah.
- The question is: what can we do to entice them so they can actually see this product in action? Back then, even if they were answering emails and catching bits of it, that was kind of enough, maybe. But as this has become more and more central, more and more critical to success — or at least as people have become more open to it — having them actively engaged, taking notes, debriefing, being responsible for something in each session, will make a big difference. We're all on our phones too; it's hard to compete against a phone. But anything we can do to start to change that balance and say, okay, for the next hour, you are going to get so much out of this, but you need to put in some focus. In fact, a number of years ago, a former student, Josh Rosenberg, and I developed a metric that we presented at a UXPA conference — I think we called it the team engagement model. It was basically a metric that measures how engaged teams are in watching usability tests. We came up with all these different data points people could use to track, over time, whether they're engaging their team. But it was that same idea: not only how can we engage them, but can we measure it?
- Brendan Jarvis:
- So did you look at the role of coffee in increasing engagement?
- Bill Albert:
- I don't know, I don't know. But coffee is always important.
- Brendan Jarvis:
- From what I could gather, most of your studies, as of late, have been large scale — and again, I don't have the complete picture here, so if I'm putting words in your mouth, let me know. But most of your studies at the tail end of your time at Bentley were large-scale, unmoderated, and usability-focused, but with a broader lens into other aspects of the UX. On those large-scale unmoderated studies, I've heard you say — and I'll quote you again now — "In the work we do, we aim for a margin of error of plus or minus 4%." What specifically does that margin of error relate to? As I've said, I assumed this was unmoderated usability testing. And why such large samples? I think you've talked in the past about 300 to 400 people on occasion participating in these studies.
- Bill Albert:
- Yeah. So I think what that's referring to — the large samples — again goes back to this idea of confidence in making decisions, and being able to, for example, tease apart more nuanced things. If I'm asking somebody, do you like the blue button or the red button, I need a lot of data to know if there's a real difference. And there's something that happens around a sample size of 400 or 500 people: it starts being diminishing returns. If we have a margin of error of plus or minus 4%, I could add another 500 people and maybe get down to 3%, something like that — it's just not worth it. But one of the biggest mistakes that UX researchers make is that they ask preference-based questions of small sample sizes. It's very easy to do. You've got 10 people and you ask about color, or whether this image resonates with you — a million different relevant questions. But there's no statistical power there to make any kind of reliable conclusion. It goes back to this idea that usability testing is really good at detecting problems, and we can do that reliably with small sample sizes, but we can't do anything preference-based that way. That's why the larger sample sizes are needed to make the right decision.
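The diminishing returns Bill mentions fall out of the standard margin-of-error formula for a proportion. A rough sketch, assuming the conservative worst case p = 0.5 and a 95% confidence level (z ≈ 1.96):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the confidence interval for a proportion:
    z * sqrt(p * (1 - p) / n). The worst case is p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(400), 3))   # 0.049 -> roughly ±5% at n = 400
print(round(margin_of_error(600), 3))   # 0.04  -> ±4%
print(round(margin_of_error(1000), 3))  # 0.031 -> hundreds more people buy ~1 point
```

Because the error shrinks with the square root of n, halving the margin requires quadrupling the sample — which is why stopping around 400 to 500 respondents is a sensible trade-off.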
- Brendan Jarvis:
- So when I heard you talk about preference — and I'm not sure exactly which talk it was that I was watching — it really struck me, in a good way, actually. It really challenged my thinking about what it is we're here to do in UX research. I'll quote you again: you said, "Measuring preferences is very, very important." When I heard that, it sounded to me more like something a market researcher would say. I thought that in the business of UX research, we were more about measuring behavior.
- Bill Albert:
- But preferences are driving my experience. I'm trying to look at the totality of the experience and what's driving it. With a color palette, there's no inherent right or wrong, but it's affecting my experience, and I wanna be able to create the best experience I can, so I need to be able to look at it. More recently, I've been trying to develop a scorecard around measuring emotion and emotional experience. It's not just about behavior; it's about the emotions people are feeling through an experience, looked at from different lenses.
- Brendan Jarvis:
- Is this using galvanic skin response, and looking at pupils, and seeing what's going on?
- Bill Albert:
- Yeah, I mean, that's part of it, but honestly, I think galvanic skin response, or GSR, is not very useful for most practitioners. We can't measure something called valence with it, which is the positive or negative aspect of emotion. I just know that you're aroused, but I don't know if it's good or bad. So for most of us, a lot of it comes down to self-report — asking people different questions — and potentially facial recognition software to look at things like smiles, but even that's complicated. In terms of measuring experience, the emotional piece is definitely the hardest nut to crack. On the behavioral piece, I think we know how to measure success and time and errors and all these things that are tied not to our emotion but to our behavior. The emotional piece is much tougher.
- Brendan Jarvis:
- Given what we've discussed today, what I'm about to ask you for my final question might seem a little strange. You see, I had a look at the dedication for your latest book, Measuring the User Experience, and it was to your mother, Sarah Elbert. Personally, I'm very close to my mother, so I'm curious: what was it that Sarah gave you that you wanted to acknowledge her in that way?
- Bill Albert:
- Oh, that's a really thoughtful question. Well, beyond being my mother and someone who I love very dearly, she's a really unique person. And as it relates to our conversation, she has no intuition regarding anything technological. She has said on many occasions, "I was born in the wrong century." [laugh] She has many famous quotes in her family; one is, "The sound on my internet doesn't work." She refused to have speakers next to her computer because they didn't look good. She's an artist, and it's all about the look — the speakers didn't look good, so she could never get sound. I am literally tech support, 24/7, for her. [laugh] [Brendan Jarvis: Me too.] And I think what I do, at least tangentially, is related to or informed by her: trying to design products that are simple, that require a minimum amount of intuition, and that are accessible, especially for older adults and other people with disabilities.
- It's so hugely important to design products that meet everybody's needs — products everybody can use. I've seen it on many occasions. Just one real example: for people on oxygen, the mechanics of turning those knobs is very difficult, and if you've got arthritis, you simply can't. I don't know why there isn't something better — maybe it does exist, but I just haven't seen it. There's so much opportunity to make people's lives better, to really benefit people directly, through better design, and design informed by research. So I think that's a big inspiration for me.
- Brendan Jarvis:
- Well, what a great place to leave our conversation, Bill. Thank you for such an interesting and wide-ranging conversation at depth today. I've really appreciated you taking the time to share your stories and insights with me.
- Bill Albert:
- It was great. It's always fun to talk about yourself, so thanks for the opportunity.
- Brendan Jarvis:
- Oh, you're most welcome. Bill, if people wanna find out more about you and your work at Mach49 and all the other things that you've been doing, what's the best way for them to do that?
- Bill Albert:
- It's really a bunch of different places. Look for me on LinkedIn, or mach49.com to see what our company's up to. Please check out the Journal of User Experience at uxpajournal.org. Check out the Bentley UX Center as well, bentley.edu/uxc. And finally, I guess, Twitter: I'm @UXMetrics.
- Brendan Jarvis:
- Great. Thanks, Bill. And to everyone who's tuned in, it's been great having you here as well. Everything we've covered, including where you can find Bill and all the organizations he's associated with, will be in the show notes. There'll also be full chapters for our conversation in the YouTube show notes, so check those out if you wanna hop to specific parts of today's conversation. If you enjoyed the show and you want to hear more great conversations like this with world-class leaders in UX, design, and product management, don't forget to leave a review — those are really helpful — and subscribe to the podcast so it arrives weekly in your podcast app. And tell someone else: if you feel there's someone who would benefit from these conversations about design and product at depth, share it with them. If you wanna reach out to me, you can find me on LinkedIn — just search Brendan Jarvis and go find me there — or there's a link to my LinkedIn profile at the bottom of the show notes. Or you can visit me at thespaceinbetween.co.nz. That's thespaceinbetween.co.nz. And until next time, keep being brave.