WS 03:

Design and specification methods for efficient user involvement

Date of Workshop

23 February 2022, 3 pm (CET)

Zoom link to the event:

https://fau.zoom.us/j/66955920889?pwd=R1hnSHlVd01WV2tzKzNmV0I4eGliZz09

Meeting ID:

669 5592 0889

Passcode:

983538

Organizers

If you are interested in participating, please contact one of the organizers directly:

About the authors

Helmut has worked for Siemens Technology (Princeton, NJ, USA) since 2007. He supports research and development projects in the areas of requirements engineering and user experience. His UX focus is on “Design for Efficiency”, “Design for Value”, and “Design for Explainability”. Helmut is Co-Chair of the conference “AI in HCI” (affiliated with the HCII conference).

Thomas Herrmann is a professor of Information and Technology Management at the University of Bochum, Germany. His current research interests include design methods for socio-technical systems in various areas such as healthcare, computer-supported collaboration, knowledge management, process management, smart factories, and the interaction between humans and AI.

Important Dates

  • 19 December 2021: Submission of proposals (extended abstracts of about 800 words, excluding references) via the conference tool
  • 5 January 2022: Review feedback
  • 22 January 2022: Submission of camera-ready version
  • 23 February 2022: Date of workshop

Summary

  • Workshop duration: 4.5 hours
  • Target audience: User experience directors, managers, architects / designers
  • Submission: We use the conference submission system ConfTool; submissions will be published in the WI22 workshop proceedings

    Motivation

    Efficiency of use has always been a topic of interest in human-computer interaction and usability, and many problems remain unsolved. Users are increasingly involved in semi-automated processes that are executed in parallel. Many users perceive current waiting times and system requests for user decisions as unnecessarily long (waiting times, inappropriate need for monitoring) or as altogether unnecessary (irrelevant decisions). Design decisions and the resulting system behavior lead to long waiting times and unnecessary decision requests. Such undesirable behavior negatively impacts both the enterprise’s need for high productivity and the user’s need for efficient task completion. From that perspective, it is beneficial to design for minimal mandatory user involvement while keeping the user sufficiently in control of the process and its outcome. Such a design should also account for the trend that users take part in several semi-automated processes or are required to use several systems in parallel.

    Literature (e.g., Rubin et al. 2008) and standards (e.g., ISO 9241-110, ISO/IEC 25062) have introduced “time on task” (ToT) as the main efficiency metric; a minimal sketch after the list below illustrates the metric and its blind spots. ToT has the following limitations:

    • It focuses on individual user tasks rather than on the value-creating outcome; an individual task may not produce such an outcome on its own.
    • It does not explicitly address the kind of user involvement during response times, e.g., whether users are able to switch to other tasks or have to pay attention continuously.
    • It cannot express the involvement principle “user intervention”, which may cause unpredictable or less predictable, alert-driven user actions.
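
    The following minimal sketch (in Python, with entirely hypothetical phase names and durations) illustrates the point: ToT collapses active interaction, waiting, and mandatory monitoring into one number, while a breakdown by type of involvement, which ToT cannot express, is computed alongside for contrast.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    kind: str       # "interaction", "waiting", or "monitoring"
    seconds: float

# Hypothetical phases of a single "order product" task, invented purely
# for illustration.
phases = [
    Phase("interaction", 20.0),   # user fills in the order form
    Phase("waiting", 45.0),       # system processes the order
    Phase("monitoring", 30.0),    # user must watch for a confirmation prompt
    Phase("interaction", 5.0),    # user confirms
]

# Classic time on task: one aggregate number over all phases.
time_on_task = sum(p.seconds for p in phases)

# A breakdown by type of involvement separates active interaction from
# waiting and mandatory monitoring -- exactly what ToT cannot express.
by_kind = {}
for p in phases:
    by_kind[p.kind] = by_kind.get(p.kind, 0.0) + p.seconds

print(f"Time on task: {time_on_task:.0f} s")
print(f"Breakdown by involvement type: {by_kind}")
```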

    Today’s design and specification techniques for user involvement in ecosystems have limitations (e.g., Diaper & Stanton 2003, Weyers et al. 2017) in addressing this changed user behavior and, in particular, in supporting effective design for efficient user involvement. A well-known time-oriented technique, the keystroke-level model (Card, Moran, & Newell 1983), allows the prediction of interaction times for expert users on well-studied interaction devices (e.g., keyboards, mice, touch screens). It also takes measured system reaction times into account. However, the technique requires a detailed user interface design and is only applicable if benchmark times have been captured in lab studies. It does not address user intervention or the performance of parallel tasks.
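
    As a rough illustration of how such a keystroke-level prediction works, the following sketch sums commonly cited operator benchmark times for a hypothetical “confirm order” step; the operator sequence, the benchmark values, and the 2-second system response time are illustrative assumptions, not results from a lab study.

```python
# Commonly cited keystroke-level model (KLM) operator times in seconds.
# The exact values depend on user skill and device, so treat them as
# rough defaults rather than authoritative constants.
KLM_TIMES = {
    "K": 0.28,  # press a key or button (average skilled typist)
    "P": 1.10,  # point at a target with a mouse
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def predict_time(operators, response_times=()):
    """Predicted expert execution time: the sum of the operator times
    plus any measured system response times (the KLM 'R' operator)."""
    return sum(KLM_TIMES[op] for op in operators) + sum(response_times)

# Hypothetical "confirm order" step: think (M), point at the button (P),
# click it (K), then wait 2 seconds for the system response.
sequence = ["M", "P", "K"]
print(f"Predicted interaction time: {predict_time(sequence, response_times=[2.0]):.2f} s")
```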

    We need new metrics and methods that can express, define, and specify user-involvement-related performance goals, so that these goals can be systematically considered in a design and development project from the very beginning (e.g., Degen 2019, Degen 2022), not only during testing. This approach is called here “design for efficiency”.

    The new methods of design for efficiency have to take into account that the ways users are involved have changed over the last thirty years:

    1. Users use several devices in different use contexts: Today, users work with several devices in changing use contexts (e.g., at home, in the office, in public spaces, on the road). They use the same devices for personal and professional tasks and perform the same task with several devices (device switching).
    2. User involvement includes a variety of interaction modes: The involvement of users does not only comprise fine-grained sequences of input and output, but also implicit interaction (Schmidt, 2000), phases of attentive monitoring or readiness for interaction (as in automated vehicles), and casual intervention (Schmidt & Herrmann 2017) for adapting an automated process. User intervention might be relevant when AI is used or when users delegate certain tasks to a socio-technical process and wait until the system completes the task. An example is an online order: ideally, the socio-technical system involves the user at the beginning and/or at the end, but in case of certain incidents or non-anticipated requirements of the user, intervention might take place in the middle of the task-handling process (Herrmann et al., 2018). Thus, user involvement covers a variety of levels of “keeping the human in the loop” (Johnson et al., 2017).
    3. Several user roles contribute to a value chain: Many socio-technical systems today involve more than one user role along a value chain (Herrmann et al., 2017). Such systems are often called “ecosystems”. The involvement of the different user roles depends on each other: for instance, one user role can start a task (e.g., a warehouse worker picks an ordered product from a shelf) only after another user role has completed a task (e.g., a customer has ordered the product). An important dependency between user roles is the performance expectation for status updates or alerts exchanged between different user groups.
    4. Performing several user tasks in parallel: While many users today perform a main task, they are interrupted by alerts from other user tasks running in the background or in parallel (e.g., an incoming message from another user or an alert from another system) (González & Mark, 2005). Users then need to get back into the original task. This creates special demands for situational awareness per user task.

    Considering these challenges with respect to user involvement, today’s design and specification methods are only partially capable of supporting “design for efficiency”.

    Goal

    This workshop aims to present the state-of-the-art and future requirements regarding design and specification techniques for “design for efficiency”.

    Every submission should address at least one of the following aspects:

    • Metrics to specify the efficiency or performance of user involvement
    • Measurement methods, techniques, and tools
    • Modeling of dependencies between the involvement of different user roles and user tasks along a value chain (for an ecosystem)
    • Consideration of various types of user involvement, such as fine-grained interaction or user intervention
    • Modeling of multiple user contexts
    • Modeling of changing user tasks and situational awareness

    Agenda

    • Welcome; introduction into the topic (30 min.)
    • Presentations including a discussion (120 min.)
    • Summary of presented methods; identification of remaining gaps (60 min.)

    An initial case will be roughly described and distributed beforehand to give the workshop participants a common reference that makes the contributions comparable.

    References

    • Card, S.K., Moran, T.P., Newell, A. (1983). The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates, Mahwah, NJ, USA.
    • Degen, H. (2019). Goals – Assumption – Interaction Steps (GAIS): Practical Method to Determine a Quantitative Efficiency Benchmark for UX Interaction Design Concepts. Engineering Psychology and Cognitive Ergonomics – 16th International Conference, EPCE 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26-31, 2019, Proceedings, 3–19. Springer. https://doi.org/10.1007/978-3-030-22507-0.
    • Degen, H. (2022). Each Click Counts – Experience Architecture and Design for Efficiency. Book expected to be published in 2022. Material is available online: www.designforefficiency.com.
    • Diaper, D., Stanton, N. (eds.) (2003). The Handbook of Task Analysis for Human-Computer Interaction. Lawrence Erlbaum Associate. Mahwah, NJ, USA.
    • González, V. M., & Mark, G. (2005). Managing Currents of Work: Multi-tasking among Multiple Collaborations. In H. Gellersen, K. Schmidt, M. Beaudouin-Lafon, & W. Mackay (Eds.), ECSCW 2005: Proceedings of the Ninth European Conference on Computer-Supported Cooperative Work (pp. 143–162). Springer.
    • Herrmann, T., Ackerman, M. S., Goggins, S. P., Stary, C., & Prilla, M. (2017). Designing Health Care That Works – Socio-technical Conclusions. In Designing Healthcare That Works: A Socio-technical Approach (pp. 187–203). Academic Press.
    • Herrmann, T., Schmidt, A., & Degeling, M. (2018). From Interaction to Intervention: An Approach for Keeping Humans in Control in the Context of Socio-technical Systems. Proceedings of the 4th International Workshop on Socio-Technical Perspective in IS Development (STPIS’18), co-located with the 30th International Conference on Advanced Information Systems Engineering (CAiSE 2018), Tallinn, Estonia, June 12, 2018. http://ceur-ws.org/Vol-2107/Paper8.pdf
    • ISO 9241-110:2020 (E). 2020. Ergonomics of human-system interaction — Part 110: Interaction principles. International Organization for Standardization. Geneva, CH.
    • ISO/IEC 25062:2006 (E). 2006. Software engineering — Software product Quality Requirements and Evaluation (SQuaRE) — Common Industry Format (CIF) for usability test reports. International Organization for Standardization. Geneva, CH.
    • Johnson, A. W., Duda, K. R., Sheridan, T. B., & Oman, C. M. (2017). A Closed-Loop Model of Operator Visual Attention, Situation Awareness, and Performance Across Automation Mode Transitions. Human Factors: The Journal of the Human Factors and Ergonomics Society, 59(2), 229–241. https://doi.org/10.1177/0018720816665759 
    • Rubin, J., Chisnell, D., Spool, J. (2008). Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. Wiley & Sons, Indianapolis, IN, USA.
    • Schmidt, A. (2000). Implicit human computer interaction through context. Personal technologies, 4(2–3), 191–199.
    • Schmidt, A., Herrmann, T. (2017). Intervention user interfaces: a new interaction paradigm for automated systems. Interactions, 24(5) (September–October 2017), 40–45. DOI: https://doi.org/10.1145/3121357.
    • Weyers, B., Bowen, J., Dix, A., Palanque, P. (eds.) (2017). The Handbook of Formal Methods in Human-Computer Interaction. Springer, Cham, Switzerland. DOI: http://doi.org/10.1007/978-3-319-51838-1.