Usability Testing in Services: Delivering the Right Experience
Carlos Rodriguez | January 26, 2023
Read time: 5 minutes
Good services provide unique, efficient, and consistent user experiences. As designers, we want to prevent users from having to fill gaps themselves when this consistency breaks and the degree of reorientation is high. Usability evaluates efficiency, task quality, utility, and subjective user satisfaction, and focuses on a specific user-service interaction. However, service experiences entail much more than usability alone; they include all of the interaction’s cognitive, emotional, social, and physical elements. Service user experience goals include satisfaction, pleasure, reward, fun, and provocation. Therefore, service usability methods extend beyond the traditional confines of product and purely digital design and must incorporate the different actors, interactions, and nuances of a customer journey map.
A usability problem exists when aspects of the service system are unpleasant or inefficient, making it difficult or even impossible for the user to achieve their goals. The absence of usability limits or blocks the delivery of experiential value. Nonetheless, we should approach usability by identifying both the presence of value, e.g., good design features, and the absence of value, e.g., system failures.
When planning a usability project from the user experience perspective, we need to be clear about both the study’s usability goals and its user experience goals. They must be clearly stated and correctly operationalized.
When measuring service usability, it is essential to determine how we will use the UX data collected across the service life cycle. There are three ways to do so:
Formative usability: The goal is to make design improvements before the service’s release. The designer evaluates the service design periodically and identifies shortcomings, interaction flaws, and navigation difficulties before the experience design is finalized. Formative usability is an iterative corrective process applied while the service experience is being designed and whenever an opportunity for improvement has been identified.
Summative usability: The goal is to evaluate how well the service experience satisfies user objectives against previously defined criteria. This evaluation is done once the service experience design is complete and implies assessing service performance against itself or the competition.
Interpretative usability: The goal is to understand the “why” behind both useful and limited service features. Methods should provide insights beyond whether the user experience is satisfactory and help designers understand what makes the experience “satisfactory” or “frustrating.” The aim is to understand the situation, e.g., would this interface work in the users’ lives? See Figure 1.
Figure 1: Usability Evaluation Study Goals
User goals are related to two main areas: performance and satisfaction. Performance includes the degree to which users accomplish their goals, speed and time to complete tasks, efficiency, errors in executing tasks, and learnability. Satisfaction reflects the sequence of interactions, touchpoints, and moments of truth that connect the user with the delivery system’s components or actors. When assessing the overall user experience, self-reported as well as behavioral and physiological metrics are recommended.
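To make these two measurement areas concrete, the performance metrics above can be summarized in a few lines of Python. The session data are hypothetical, and the use of the System Usability Scale (SUS) as the satisfaction instrument is an illustrative assumption on my part, not something the article prescribes:

```python
# Sketch: summarizing common usability performance and satisfaction
# metrics for a set of test sessions. All data values are hypothetical.

from statistics import mean

# Each session records whether the task was completed, the time taken
# (in seconds), and the number of errors made along the way.
sessions = [
    {"completed": True,  "time_s": 142, "errors": 1},
    {"completed": True,  "time_s": 98,  "errors": 0},
    {"completed": False, "time_s": 210, "errors": 4},
    {"completed": True,  "time_s": 120, "errors": 2},
]

completion_rate = mean(1 if s["completed"] else 0 for s in sessions)
avg_time = mean(s["time_s"] for s in sessions)
avg_errors = mean(s["errors"] for s in sessions)

def sus_score(responses):
    """System Usability Scale: 10 items rated 1-5.
    Odd-numbered items (positively worded) contribute (r - 1);
    even-numbered items (negatively worded) contribute (5 - r);
    the sum is scaled to a 0-100 score."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

print(f"Completion rate: {completion_rate:.0%}")   # 75%
print(f"Mean time on task: {avg_time:.1f} s")      # 142.5 s
print(f"Mean errors per task: {avg_errors:.2f}")   # 1.75
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))   # 85.0
```

In practice these summaries would be broken down per task and per persona, but the point stands: performance metrics are cheap to compute once sessions are logged consistently.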
Usability methods fall into two families: analytical and empirical. Analytical methods inspect the whole system and define usability as a property of the system (not grounded in theory). In contrast, empirical methods define usability as a property of usage and focus on actual use, offering insight into underlying causes.
Analytical methods (focused on the negatives)
Analytical methods require design judgment, and users are not involved in the assessment. The approach entails a complete, detailed design of the service experience, including a description of the tasks to be analyzed, the specific actions required to complete them, and a clear definition of the “personas” that will experience the service. The study goal is summative.
Alternative methods include standards enforcement, cognitive walkthroughs, and heuristic evaluations. Heuristic evaluation is a usability engineering method for finding problems in a user interface service design and assessing whether the overall system follows established usability principles or heuristics (general rules of thumb). See Figure 2 for a list of heuristics (Nielsen and Mack, 1994). A disadvantage is that different evaluators may find different problems, and we may end up with contradictory evaluation data.
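The evaluator-disagreement problem becomes visible once findings are tabulated. Here is a minimal sketch of aggregating heuristic-evaluation findings from several evaluators; the evaluator names and severity ratings are hypothetical, though the 0-4 severity scale follows Nielsen's convention (0 = not a problem, 4 = usability catastrophe):

```python
# Sketch: aggregating heuristic-evaluation findings across evaluators.
# Evaluators and ratings below are hypothetical.

from collections import defaultdict

# Each finding: (evaluator, heuristic violated, severity 0-4).
findings = [
    ("eval_A", "Visibility of system status", 3),
    ("eval_B", "Visibility of system status", 2),
    ("eval_A", "User control and freedom",    4),
    ("eval_C", "User control and freedom",    1),
    ("eval_C", "Consistency and standards",   2),
]

by_heuristic = defaultdict(list)
for evaluator, heuristic, severity in findings:
    by_heuristic[heuristic].append(severity)

# A wide min-max spread for the same heuristic is the
# evaluator-disagreement problem noted above, made visible.
for heuristic, scores in by_heuristic.items():
    avg = sum(scores) / len(scores)
    print(f"{heuristic}: mean severity {avg:.1f} "
          f"(min {min(scores)}, max {max(scores)})")
```

Averaging severities across evaluators, as recommended in the heuristic-evaluation literature, smooths out individual bias; the min-max spread flags heuristics where evaluators contradict each other and discussion is needed.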
Cognitive walkthrough (how easy the system is to learn to use): This method involves creating a scenario, performing the walkthrough, and identifying the problems. It consists of having experts analyze the customer journey and detail how users will understand it.
Figure 2: Usability Heuristics for User Interface Design
Source: Nielsen, Jakob 2020. 10 Usability Heuristics for User Interface Design. NN/g Nielsen Norman Group. Retrieved from https://www.nngroup.com/articles/ten-usability-heuristics/
Empirical methods (focused on usage)
These methods focus on usage and learnability and require greater task detail. As such, they involve users. Some methodologies include usability testing, field studies, and click-through studies.
Usability testing serves a formative study goal. It is used in the early stages of the service experience design, typically in a simulation environment or laboratory. The method has specific objectives, e.g., how usable is the user interface, can users find detailed information, etc.
During usability evaluations, the think-aloud protocol (TAP) provides deep insight into the problems users encounter; it is very detail-oriented, requires expertise in cognitive analysis, and produces extensive data. More importantly, the method reveals users’ mental models, information processing, and affective and emotional states.
Field studies require testing the service design in the real world and are generally applied in the later stages of design (summative goal). To this end, we provide a functional prototype of the whole service experience system and allow users to experience it in a “natural” setting for a reasonable amount of time.
Daly, S. R., Yilmaz, S., Christian, J., Seifert, C., and Gonzalez, R. (2012). “Design Heuristics in Engineering Concept Generation.” Journal of Engineering Education, Vol. 101, pp. 601-629.
Goldstein, S., Johnston, R., Duffy, J. A., and Rao, J. (2002). “The Service Concept: The Missing Link in Service Design Research?” Journal of Operations Management, Vol. 20, pp. 121-134.
Nielsen, J. and Mack, R. L. (1994). Usability Inspection Methods: How to Conduct Heuristic Evaluation. John Wiley & Sons, New York, NY.
Reason, B., Løvlie, L., and Flu, M. B. (2016). Service Design for Business. Wiley, Hoboken, NJ.
Rubin, J. (1994). Handbook of Usability Testing. John Wiley & Sons, New York, NY.
About the Author
Carlos M. Rodriguez is an Associate Professor of Marketing and Quantitative Methods and Director of the Center for the Study of Innovation Management (CSIM) in the College of Business, Delaware State University, USA. His publications have appeared in the Journal of Business Research, Journal of Business to Business Marketing, Journal of International Marketing, International Marketing Review, Management Decision, International Journal of Business and Social Sciences, Journal of Business and Leadership, and Journal of Higher Education Research & Development, among others, as well as in several conference proceedings. He currently serves on the editorial boards of several journals. His research interests are in the areas of entrepreneurship and strategic capabilities, luxury branding and experiences, product design and new product development teams, and relationship marketing. He recently published the book Product Design and Innovation: Analytics for Decision Making, centered on the design techniques and methodologies vital to the product design process. He is engaged in several international educational, research, and academic projects, as well as international professional activities.