Dilemmas in HCI

The domain of Human-Computer Interaction is riddled with dilemmas that affect both users and designers. Some of these dilemmas are presented below.

Testing systems with users is necessary, but costly

Note: Users should not be tested; systems should be. Users help identify the usability problems a system has. All technology should be adapted to fit the users, not the other way round.

Every system serves a purpose, even if that purpose is “just” entertainment. Users use a system to achieve a certain goal. The purpose of usability testing is to identify problems that can stop users from reaching their goals while using the system. Carrying out the testing, however, can be demanding both financially and time-wise: organising testing sessions and analysing the results take time and effort, and participants may need to be reimbursed.

One of the things that makes testing costly is the number of participants. In the case of high-risk systems, evaluating with only a few people (e.g. the magic number five) is not enough (Schmettow, Vos, & Schraagen, 2013). A bigger sample size is needed to identify a sufficient percentage of the problems.
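To make the sampling question concrete, here is a minimal sketch using the classic geometric-series model of problem discovery, in which every problem is assumed to be found in any single session with the same probability p. The probabilities and the 95% target below are illustrative assumptions, not figures from Schmettow, Vos, and Schraagen (2013):

```python
# Geometric-series model of usability problem discovery (illustrative).
# Assumption: each problem is found in any single session with probability p,
# so the expected share of problems found after n sessions is 1 - (1 - p)**n.

import math

def share_found(n_sessions: int, p: float) -> float:
    """Expected proportion of problems discovered after n sessions."""
    return 1 - (1 - p) ** n_sessions

def sessions_needed(target: float, p: float) -> int:
    """Smallest n whose expected discovery share reaches `target`."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

# With easy-to-find problems (p = 0.31, a figure often quoted for average
# systems), five sessions already uncover about 84% of the problems ...
print(share_found(5, 0.31))         # ~0.84
# ... but if a high-risk system's problems are harder to spot (say p = 0.10),
# reaching 95% expected coverage takes far more sessions.
print(sessions_needed(0.95, 0.10))  # 29
```

This framing also yields a stopping rule: keep testing until the expected share of discovered problems reaches the level the system's risk demands.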

This dilemma occurs every time a new system (or a new feature of one) is introduced. One can save costs by not testing the system at all, but since some problems are impossible to predict theoretically, the result will be a dissatisfying system with low usability, even when everything has been designed according to usability guidelines. As a consequence, users may not adopt the system at all.

There are many ways to deal with this dilemma. First, one can use usability inspection as a cheaper method instead, or combine complementary evaluation methods, such as usability testing with document inspection (Schmettow, Back, & Scapin, 2015), to diversify the means of discovery and identify more problems with fewer people. Furthermore, approaching the process iteratively enables making adjustments to the testing process or the system design and, thus, facilitates gathering higher-quality information at lower cost. Once a sufficient percentage of problems has been identified, the testing can stop. Finally, it is important to remember that the improved designs resulting from testing make it more likely that users will actually use the system, so the investment pays off.

The strange relation between usability and beauty

Users perceive systems they consider high on aesthetics as having better usability than systems low on beauty. This is similar to findings from social psychology, where people assess the personality of others based on their physical appearance.

Many aesthetic aspects define whether a user interface is high on beauty: for example, unity (appearing as one), regularity (objects spaced equally and aligned with each other), or symmetry. Ngo, Teo, and Byrne (2003) found that the designs assessed as more beautiful were simply in accordance with those aesthetic principles or Gestalt laws (heuristics the human mind uses to deal with visual search by filling in the blanks) and, as a result, low on visual complexity. Low visual complexity, in turn, means that finding information is easier and faster, and, thus, the usability of the system is higher (Tuch, Presslaber, Stöcklin, Opwis, & Bargas-Avila, 2012).
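As a toy illustration of one such principle, the sketch below scores “regularity” by counting the distinct alignment edges a layout uses: the fewer distinct edges, the more elements share gridlines. This is a crude, made-up stand-in for the idea, not one of Ngo, Teo, and Byrne's actual measures:

```python
# Toy "regularity" indicator (illustrative, not a published metric):
# count the distinct left and top edges in a layout. Fewer distinct
# edges means more elements share gridlines, i.e. lower visual complexity.

from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) of a UI element

def alignment_points(boxes: List[Box]) -> int:
    """Number of distinct left edges plus distinct top edges."""
    lefts = {x for x, _, _, _ in boxes}
    tops = {y for _, y, _, _ in boxes}
    return len(lefts) + len(tops)

aligned   = [(10, 10, 100, 30), (10, 50, 100, 30), (10, 90, 100, 30)]
scattered = [(10, 10, 100, 30), (17, 52, 100, 30), (31, 95, 100, 30)]

print(alignment_points(aligned))    # 4: one shared column, three rows
print(alignment_points(scattered))  # 6: every element on its own gridlines
```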

Furthermore, forming an opinion on beauty and usability happens relatively fast, as shown by Tuch et al. (2012). This means that a badly designed interface is perceived as low on usability from the very first glance. Changing this opinion may prove difficult or even impossible if the user judges the usability to be so low that they do not even want to try out the system.

To mitigate this problem, systems should be designed according to the Gestalt laws and aesthetic principles. As a result, users will find them more aesthetically pleasing (beautiful), and the low visual complexity will ensure easier, faster search, meaning higher efficiency. This will raise both the usability of the system and the willingness to use it.

A legacy design always wins the first round

This dilemma indicates that, during the first round of testing, performance will be lower on new systems than on old ones, even if the new design is objectively better. This is due to the learning curve: users are at first unfamiliar with the new design, whereas the old one has become a habit, so they need time to learn how to use the new system.

As MacKenzie and Zhang (2003) showed, one usability test was not enough to demonstrate the superiority of their new keyboard design. In their case, at least 10 sessions were required before the new design reached the same performance as the old one. That moment is called the crossover point.

The learning curve occurs every time a new design is introduced to users who have had practice with a previous one and have, thus, formed habits around the old design. Therefore, no accurate conclusions about the new system being better or worse than the old one can be drawn after just one testing session. If one does assess the new design based on a single experiment only, performance will be lower than with the old design, and the new design may be discarded unfairly.

To assess the designs properly, one must run a longitudinal study that captures the users' performance improvement over time. Based on this longitudinal data, the learning curve can be plotted, a crossover point can be identified, and, by extrapolating the data, performance can be predicted for the longer run. In addition, with a longitudinal study, the individual differences in the users' initial performance are reduced.
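As a minimal sketch of that extrapolation, with made-up session times and assuming the common power law of practice, T(n) = a·n^(−b) (MacKenzie and Zhang's actual modelling may differ), one can fit the curve to the observed sessions and solve for the session at which the new design overtakes a stable legacy baseline:

```python
# Minimal sketch with invented numbers: fit the power law of practice
# T(n) = a * n**(-b) to session times for a new design, then find the
# session at which it crosses a flat, well-practised legacy baseline.

import math

# Hypothetical mean task times (seconds) for the new design, sessions 1..6.
sessions = [1, 2, 3, 4, 5, 6]
new_times = [40.0, 31.0, 26.5, 24.0, 22.3, 21.0]
legacy_time = 20.0  # the old design's stable, habitual performance

# Least-squares fit of log T = log a - b * log n.
xs = [math.log(n) for n in sessions]
ys = [math.log(t) for t in new_times]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
     sum((x - mx) ** 2 for x in xs)
a = math.exp(my + b * mx)

# Extrapolate: T(n) < legacy_time  <=>  n > (a / legacy_time) ** (1 / b).
crossover = (a / legacy_time) ** (1 / b)
print(f"T(n) = {a:.1f} * n^-{b:.2f}; crossover after session {math.ceil(crossover)}")
```

For these toy numbers the fit gives roughly T(n) ≈ 39.8·n^(−0.36), predicting a crossover around the seventh session, i.e. beyond what a single test would reveal.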

Users ignore efficient procedures

Old habits die hard. This is also true in HCI and is known as the active user paradox. The active user paradox states that users tend to ignore efficient procedures, prefer using familiar sub-optimal ones instead, and are rather reluctant to learn and change their behaviour.

(Fu & Gray, 2004) explain this as the preferred, generic procedure often being more interactive than the optimal procedure. The possibility of getting immediate concurrent feedback from the system reduces the cognitive workload of the user which is why they prefer using these less efficient procedures. (Carroll & Rosson, 1987)add that users also tend to apply previous knowledge to new situations which can produce errors when the new and old situations are actually not as similar as they appear.

Users are reluctant to learn the more efficient procedures and, as a result, their workflow is less efficient. Procedures take more time and require more interactions with the system to produce the end result.

The active user paradox can be tackled in multiple ways: for example, by making the learning process easy and rewarding, by reducing the connections to previous knowledge, by making the preferred general procedures more effective and efficient, by making everything achievable with the general procedures, by providing smart assistants that recommend the advanced procedures, or by making the general procedure more costly and, thus, forcing the user to choose the advanced one.

Better designs won’t cure learned helplessness

Learned helplessness is a term used to describe a situation where a person sees themselves as incapable of performing a certain action, even though they may physically and mentally be perfectly capable of doing so.

Learned helplessness appears when one has previously experienced a series of failures in the same domain, leading to a perceived lack of control as well as resistance to, and fear of, trying again. Learned helplessness and self-efficacy (one's view of oneself and one's own capabilities) are dynamic, meaning that they can change over time.

However, changing one's level of self-efficacy is not easy. Since self-efficacy is perceived and controlled internally by the user themselves, better designs can do little to improve it. To deal with this problem, designers need to focus on changing the user's perspective, so that the user starts believing more in their own capabilities. Results by Torkzadeh and Van Dyke (2002) indicate that training programs, such as computer courses, designed to change the user's self-image are an effective way to increase self-efficacy.

Other solutions include peer-to-peer or role-model learning, assistive functions, reducing the severity of consequences, making connections to real-world tasks, helpful guidance, rewards, and creating an overall positive attitude towards the process. In the end, however, it all comes down to the user's own willingness to change. Forcing someone to use a system can increase their resistance, and if one does not believe in one's capability to handle these situations, a better-designed system will not help either.

Making one design for a diverse user population

Users are distributed across a wide range of performance. While some have impairments and need accessibility features, and others are highly skilled and need large-scale customisability, even the members of the “normal” range of a user group vary greatly. This means that one design is often not enough to satisfy the needs of all users.

Jennings, Benyon, and Murray (1991) looked into the connection between different database interfaces and individual users' performance, personality, and cognition. They concluded that a single interface design is not enough to accommodate the wide range of users. Instead, at least two designs (with either aided or non-aided navigation) are needed to support users with both low and high spatial ability.

The diversity dilemma occurs whenever a larger user group or multiple groups are targeted: with more people, greater variation among users is introduced. Offering only a single elaborate design can hinder the performance of the user groups who cannot work with it (e.g., requiring spoken input from users who cannot speak).

While not the most preferred option, changing the user with assistive or persuasive technology is one way to solve this problem. Another possibility would be an adaptive system that customises itself according to the user's features; a fully adaptive system is not yet feasible, but adapting the content to the user already is, and that is one of the possible options.

The third and most preferred solution, however, is to use robust design: creating a system that yields more uniformly high performance across users by maximising the minimum performance. A robust system is designed to work equally well for people with advanced skills and for those who struggle with alternative systems, and is, therefore, a design fit for a diverse user population.
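Framed as a decision rule, “maximising the minimum performance” is a maximin choice. Here is a minimal sketch; the designs, user groups, and scores below are invented for illustration:

```python
# Minimal sketch with hypothetical scores: robust design choice as a
# maximin problem -- pick the design whose worst-off user group does best.

# Invented task-success scores (0-100) per user group, per design.
performance = {
    "design_A": {"novice": 40, "average": 85, "expert": 95},  # high peaks, low floor
    "design_B": {"novice": 70, "average": 78, "expert": 82},  # uniformly decent
}

def maximin_choice(perf: dict) -> str:
    """Return the design that maximises the minimum score across groups."""
    return max(perf, key=lambda design: min(perf[design].values()))

print(maximin_choice(performance))  # design_B: its worst group (70) beats A's (40)
```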

Familiarity is good for users, but bad for innovation

Familiarity describes an experience that one has encountered before. Due to having confronted the problem earlier, the user can recall their previous behaviour to deal with the situation again. Participating in these familiar events repeatedly causes the user to make mental connections and create a strong network of common situations and appropriate actions. 

At first glance, relying on these connections in design would mean that all new systems should be familiar to the user and look and feel the same. This, however, can be detrimental to innovation, which demands change and disruption. Making novel systems look similar to previous ones will indeed make them more intuitive to use, but it can keep users from discovering new functions, defeating the purpose of those functions and hindering innovation.

Van Hooij (2016) looked into this familiarity effect and found that while familiarity and previous experience do play an important role in the ease of adopting novel systems, they can be reduced to image schemas: concepts and interactions derived from sensorimotor experiences. The study showed that image schemas can be used as building blocks for designing intuitive systems that are independent of former computer knowledge.

Using image schemas enables creating novel, innovative systems that can look completely different from existing ones but are still intuitive to use because they tap into the users' subconscious sensorimotor mental models. The schemas are very basic, abstract, and shared by most people, yet they do not depend on prior technology and can be used in innovative ways.

When approaching close-to-real is a problem

In Human-Robot Interaction (HRI), the concept of the “Uncanny Valley” describes the situation in which imperfect human-likeness elicits feelings of dislike. Such robots are found creepy because abnormal features can indicate illness, which humans have evolved to stay away from; because they remind people of their own inevitable death; or because it is simply not clear whether they should be treated as tools or as social companions. This can lead to lower acceptance of human-like robots or to general opposition to adopting social robots in everyday life. That, of course, hinders innovation in technology in general, because the Uncanny Valley is also present in other fields, such as chatbots.

(Mathur & Reichling, 2016)conducted a study in which they used a variety of robot faces ranging from mechanical to human and found that there exists a point where robots are perceived as less likeable and less trustworthy – the Uncanny Valley. They also speculated that this might be the effect of category confusion in which the longer the user is fooled by their initial judgement, the greater the shock when they realise their mistake.

Furthermore, Haeske (2016) showed that the perception of a robot or an avatar as eerie happens within the first 100 milliseconds, and concluded that fast evaluation systems play an important role in the Uncanny Valley.

There is presently no great way to overcome the Uncanny Valley completely. One can currently design robots to look more machine-like and, thus, less eerie. Hopefully, one day we will have the technology needed to present human-like robots in an aesthetic and highly socially responsive way to make them more attractive as social companions.


Bibliography

Carroll, J. M., & Rosson, M. B. (1987). Paradox of the Active User. Interfacing Thought: Cognitive Aspects of Human-Computer Interaction, 80–111. Retrieved from http://dl.acm.org/citation.cfm?id=28446.28451

Jennings, F., Benyon, D., & Murray, D. (1991). Adapting systems to differences between individuals. Acta Psychologica, 78(1–3), 243–256. https://doi.org/10.1016/0001-6918(91)90013-P

Fu, W. T., & Gray, W. D. (2004). Resolving the paradox of the active user: Stable suboptimal performance in interactive tasks. Cognitive Science, 28(6), 901–935. https://doi.org/10.1016/j.cogsci.2004.03.005

Haeske, A. B. (2016). The Uncanny Valley: Involvement of fast and slow evaluation systems, (January), 1–49.

MacKenzie, I. S., & Zhang, S. X. (2003). The design and evaluation of a high-performance soft keyboard, (May), 25–31. https://doi.org/10.1145/302979.302983

Mathur, M. B., & Reichling, D. B. (2016). Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. Cognition, 146, 22–32. https://doi.org/10.1016/j.cognition.2015.09.008

Ngo, D. C. L., Teo, L. S., & Byrne, J. G. (2003). Modelling interface aesthetics. Information Sciences, 152(SUPPL), 25–46. https://doi.org/10.1016/S0020-0255(02)00404-8

Schmettow, M., Back, C., & Scapin, D. (2015). Optimizing Usability Studies by Complementary Evaluation Methods, (July), 110–119. https://doi.org/10.14236/ewic/hci2014.12

Schmettow, M., Vos, W., & Schraagen, J. M. (2013). With how many users should you test a medical infusion pump? Sampling strategies for usability tests on high-risk systems. Journal of Biomedical Informatics, 46(4), 626–641. https://doi.org/10.1016/j.jbi.2013.04.007

Torkzadeh, G., & Van Dyke, T. P. (2002). Effects of training on Internet self-efficacy and computer user attitudes. Computers in Human Behavior, 18(5), 479–494. https://doi.org/10.1016/S0747-5632(02)00010-9

Tuch, A. N., Presslaber, E. E., Stöcklin, M., Opwis, K., & Bargas-Avila, J. A. (2012). The role of visual complexity and prototypicality regarding first impression of websites: Working towards understanding aesthetic judgments. International Journal of Human-Computer Studies, 70(11), 794–811. https://doi.org/10.1016/j.ijhcs.2012.06.003

Van Hooij, E. R. (2016). Image schemas and intuition: The sweet spot for interface design?, (February).

