Calm Computing

Your constantly-updated definition of Calm Computing and collection of videos and articles.

What is Calm Computing?

Calm computing is a design approach where digital tools and systems operate quietly in the background, supporting users without demanding their constant attention. Instead of interrupting, a calm computing design stays in your peripheral awareness and only calls you forward when necessary. As a UX (user experience) designer, you build calm computing products that support users’ lives gently and give them what they need without noise or distraction.

In this video, Alan Dix, Author of the bestselling book “Human-Computer Interaction” and Director of the Computational Foundry at Swansea University, shows you how many computers surround you in everyday life and why recognizing this hidden digital layer supports calm computing design.


How to Design Calm Computing, Step by Step and Best Practices

Here’s a practical process you can follow when you want to design a calm computing experience for devices that blend into users’ environments, such as smart-home devices and wearables like smartwatches.

1. Clarify User Needs and Context of Use

Before you design anything, you’ll need to deeply understand what the user really needs, and under what circumstances. So, ask yourself: “What’s the primary task the user wants to accomplish? When are they likely to perform it? In what environment?”

If the task doesn’t require full attention from users, for example, ambient monitoring, background tracking, or gentle reminders, you’ve got an opportunity for calm computing. Context matters: if users are often multitasking, distracted, or moving around, a design that demands full focus can disrupt them and, in some circumstances, might even irritate them. Many users won’t have the luxury of sitting on a comfortable seat with plenty of time to do what they like; you’ll need to build a picture of their contexts of use. So, start with empathy, observation, and mapping how the technology fits into the user’s life.

In this video, Alan Dix explains how considering users’ physical, social, and environmental context helps you design solutions that truly support their real tasks.


2. Keep Technology Unobtrusive: Minimize Attention Demand

Let the technology fade into the background until it needs to surface. Mark Weiser and John Seely Brown, pioneering thinkers in computer science and human-centered technology best known for their work on ubiquitous computing and calm technology, argued that calm technology should inform without demanding our focus or attention. To achieve that:

  • Show information in peripheral ways: use subtle visuals, gentle cues, or glanceable displays rather than modals, pop-ups, or loud alerts.

  • Reveal controls or details only when the user indicates interest; don’t clutter the surface with too many elements.

  • Use ambient feedback: Soft light changes, quiet visual indicators, gentle animation, or haptic feedback can signal information without interrupting users.
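To make the idea of “inform, don’t demand” concrete, here’s a minimal Python sketch that maps an event’s urgency to the least intrusive cue that still informs. The urgency levels and cue names (`status_glow`, `gentle_haptic`, `modal_alert`) are hypothetical placeholders for illustration, not a real device API:

```python
from enum import Enum

class Urgency(Enum):
    AMBIENT = 1    # glanceable status; never interrupts
    NOTICE = 2     # subtle cue the user may freely ignore
    CRITICAL = 3   # genuinely needs the user now; use sparingly

def choose_cue(urgency: Urgency) -> str:
    """Map an event's urgency to the least intrusive cue that still informs."""
    cues = {
        Urgency.AMBIENT: "status_glow",    # soft light change in the periphery
        Urgency.NOTICE: "gentle_haptic",   # a single quiet vibration
        Urgency.CRITICAL: "modal_alert",   # full attention, for rare emergencies
    }
    return cues[urgency]
```

The point of the mapping is the ordering: most events resolve to peripheral cues, and the full-attention path is reserved for genuinely critical moments.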

3. Design for Fluid Shifting between Periphery and Center of Attention

A hallmark of calm computing is smooth transitions between background awareness and active attention. Whenever something becomes relevant, the design should bring it forward gracefully; and, when it’s no longer needed, it should recede quietly and “melt” back into its place in the background until needed again. So:

  • Provide contextual cues that hint at something needing attention, such as a subtle color change, soft glow, or gentle vibration.

  • Let the user choose when to engage; don’t force their attention.

  • Ensure that when the user does engage, the experience is clear, direct, and purposeful so they can efficiently complete the task.
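The transitions above can be sketched as a tiny state machine. The states and method names below are illustrative assumptions, not taken from any real framework; the key design choice is that only the user’s own action moves the widget to the center of attention:

```python
class CalmWidget:
    """Sketch of the periphery <-> center attention model for one widget."""

    def __init__(self):
        self.state = "periphery"   # where the widget currently lives
        self.pending = False       # a cue is hinting, but not demanding

    def event_becomes_relevant(self):
        # Hint, don't seize focus: raise a cue but stay in the periphery.
        self.pending = True

    def user_engages(self):
        # Only a deliberate user action brings the widget to the center.
        if self.pending:
            self.state = "center"

    def task_complete(self):
        # Recede quietly once the interaction is done.
        self.state = "periphery"
        self.pending = False
```

Notice that `event_becomes_relevant` never changes `state` by itself; relevance produces a hint, and engagement remains the user’s choice.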

4. Use the Minimum Technology Necessary

An essential principle, one that contemporary calm-design thinkers have built on, is to design only what’s strictly needed to solve the user’s problem, and no more; don’t over-engineer. That means resisting a temptation many designers feel: to pile on features or “bells and whistles.” However much they seem to offer, extra features often add distraction, complexity, and cognitive burden. So go for simplicity, clarity, and restraint instead: the fewer moving parts, the easier it is for users to remain calm and in control.

5. Respect the User’s Attention and Their Broader Context

Technology should not only avoid overwhelming individual attention; it should respect the user’s social and environmental context, too. For example, calm tech that respects social norms and privacy shows that the brand understands boundaries, respects users as human beings, and wants them to feel good, safe, and in charge. So:

  • Respect privacy and data minimalism. Don’t require more data or permissions than are necessary.

  • Match communication style to social norms: If a loud alarm or intrusive notification would feel rude or jarring, choose subtler cues instead. It might take just one instance where a user feels frustrated at an app or device for them to wish they didn’t have it.

  • Let the design reflect the rhythms and constraints of real life. Think about when and where people will use the tool: at home, in public, while working, while relaxing. Your digital product or design solution needs to mirror those real-world “inputs” and backdrops in how it keeps the user experience as pleasant as possible and the user calm and feeling in control.
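As a rough illustration of that context-sensitive restraint, the sketch below decides whether a cue should surface at all given quiet hours and a social-context flag. The urgency scale, quiet-hour window, and `in_meeting` flag are all hypothetical assumptions:

```python
from datetime import time

def should_surface(urgency, now, quiet_start=time(22, 0),
                   quiet_end=time(7, 0), in_meeting=False):
    """Return True if a cue should surface now, False if it should be deferred."""
    in_quiet_hours = now >= quiet_start or now < quiet_end
    if urgency >= 3:                    # safety-critical: always surface
        return True
    if in_quiet_hours or in_meeting:    # defer and batch low-priority cues
        return False
    return urgency >= 1
```

The deferred cues don’t disappear; in a calm design they would wait quietly and surface later, at a moment that suits the user.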

In this video, Alan Dix shows you how ambient and intimacy-supporting technologies in the home must align with users’ emotions and social context to create respectful, calming experiences.


6. Design for Graceful Failure and Reliability

It’s an imperfect world, and a calm computing design remains calm even when things go wrong. According to calm-tech principles, technology should “work even when it fails.”

So, how do you resolve this apparent paradox in a design? Treat failure as a first-class scenario: default to a safe, non-intrusive, usable state. Avoid disruptive or confusing errors that would leave the user “dangling,” potentially in a state of fear. To illustrate, imagine a smart stove that shuts off automatically in the event something goes wrong. Provide fallback options or gentle degradation rather than abrupt interruption. That’s a philosophy you’ll want to “bake” into the prototypes you build and then stay open to tweaking out flaws as you progress to the next stage: testing.
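Here’s a hedged sketch of that safe-default philosophy: a hypothetical sensor reader degrades to a plausible fallback value flagged as stale, rather than surfacing a raw error to, say, a home-climate display. The plausibility range and fallback value are illustrative assumptions:

```python
def read_temperature(sensor_read, fallback=20.0):
    """Read a value, failing toward a calm, safe default instead of an error.

    `sensor_read` is any zero-argument callable; the plausibility range and
    fallback value are illustrative for a hypothetical home-climate display.
    """
    try:
        value = sensor_read()
        if value is None or not (-40.0 <= value <= 60.0):
            raise ValueError("implausible reading")
        return value, "live"
    except Exception:
        # Degrade gently: show a last-known/default value flagged as stale,
        # rather than an alarming error state.
        return fallback, "stale"
```

The caller can render the “stale” flag as a subtle visual change, so the user learns the reading is old without being startled.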

In this video, Alan Dix explains how iterative prototyping helps you uncover flaws early so your designs can handle failure reliably.


7. Test in Real Contexts and Iterate Based on Real User Behavior

Since calm computing deeply depends on context and human attention, you’ll want to test prototypes in real-life settings or realistic simulations. Observe how people use the tool over time. Do they find it calming, or do they ignore it entirely? Do they find its cues meaningful, or miss them altogether? Use those insights to iterate: refine cues, reduce noise, improve reliability, and fine-tune subtlety. You want to create a calm computing experience that shows you understand your users: one that anticipates their needs through a design that reflects genuine empathy for them.

Watch this video to see how empathic observation of real user behavior reveals design flaws and guides more calming, effective solutions.


Why Calm Computing Matters and Benefits When You Do It Right

To begin with, an effective way to understand calm computing may be to imagine the opposite. You might already know what notification overload sounds and feels like if you’ve ever heard someone’s phone beep and ding its way through your nerves until your head starts aching. Still, picture this: you’re at home, enjoying a quiet evening; now, consider how many smart devices you may have around you.

Once you’ve counted an exact number or come up with an approximate figure, consider how much of the benefit of having these devices comes from being able to forget that they’re even there. Chances are that your well-designed devices blend into the background and only “emerge” when you want them to or when something important happens and requires your input. That’s part of the “magic”: would you rather face a constant stream of updates and questions, feeling like the captain of a ship in a field of icebergs, when all you wanted was a quiet night in?

The benefits of calm computing design run as deep as the sense of comfort and safety users can feel in their home environment. Designing with calm computing yields several meaningful advantages for users, for product success, and for human well-being. They include:

Lower Cognitive Load and Reduced Distraction

By shifting most interactions to the periphery, calm computing prevents unnecessary interruptions, so users can stay focused on their main task or present activity without constant distraction. This reduces cognitive load, mental fatigue, and “tech stress.” In a world filled to the brim with notifications and interruptions, where the pace of life can feel too fast for many people, design for calm computing provides a kind of antidote.

Enhanced Attention and Productivity

When technology only draws attention when needed, users remain in control; they decide when to engage. That ability to ignore background tasks until they matter allows for deeper focus and more meaningful work. For tasks that require concentration, this can significantly boost productivity and satisfaction; it’s far better than treating everything as important and wearing the user out.

Better User Experience and Emotional Comfort

A calm design tends to feel more natural, respectful, and human-centered. When technology respects your attention and space, it feels like a helpful background companion rather than a demanding burden. That sense of calm can build trust, encourage long-term use, and foster loyalty. People return to tools that feel respectful of their time and mental space.

Supports Ambient and “Always-on” Contexts Gracefully

As devices embed themselves deeper in our environment, in smart homes, wearables, sensors, and the IoT (Internet of Things), calm computing becomes even more essential. In such contexts, constant interruptions are all the more impractical and undesirable. Calm computing enables devices to live in the background, quietly supporting daily life. That aligns with the broader paradigm of ubiquitous computing (ubicomp), where computers blend into surroundings rather than dominate them, and (like the person trying to enjoy a quiet evening in) you can focus on enjoying the moment.

A diagram of the Internet of Things (IoT) model: a central “Internet of Things” node connects to various domains, including homes and buildings, mobile devices, edge devices, data centers, cities, factories, transportation systems, and retail locations, illustrating how IoT devices and environments interconnect across sectors.

The Internet of Things (IoT) surrounds us and remains a powerful consideration for calm designs to help keep users appropriately informed and empowered.

© Interaction Design Foundation, CC BY-SA 4.0

Promotes Humane Design, Respecting Human Rhythms and Well-being

In contrast with attention-hungry designs tailored to maximize clicks, engagement, or time spent, calm computing puts humans, including their comfort, attention, and well-being, center-stage while it stays in the background and does what it needs to. It fosters a healthier relationship between people and technology, helping to avoid burnout, stress, and digital overload. As our lives become more saturated with devices and the line between the digital and physical worlds continues to blur, that priority will become ever more important.

Overall, a cardinal rule of designing for calm computing is to let technology inform rather than intrude; it’s how you give users space, respect their attention, and quietly support their lives. For technology to serve humans, it has to reduce stress, enhance focus, build trust, and prove it understands its “place” as a friend with only the user’s best interests in mind.

Human life is already full of stressors and uncertainties, one of the latter being the concern that technology in the home can spy on people and end up knowing too much about them. Remember that as you design to keep users in control and help them live better lives. Achieving calm requires care: clarity in cues, reliability under failure, respect for privacy, and deep attention to context. Embrace calm computing in your UX work and you can create tools that feel more human, more thoughtful, and more humane. Best of all, calmness can be “infectious,” worlds away from the alarmist caricature of flashing red lights, wailing sirens, and a panic-inducing robot voice blurting, “Alert! Error! Give instructions immediately!” in an endless cycle. So, design for calm and user empowerment; that’s why people want life-improving tech in their homes and around them in the first place.

Questions About Calm Computing?
We've Got Answers!

What are the main principles of calm computing?

Calm computing centers on creating technology that informs without overwhelming. It rests on three core principles: peripheral awareness, minimal disruption, and seamless integration into daily life. As a designer, you’ll aim to shift interactions from demanding attention to existing subtly in the background.

Calm technology should move easily between the user’s center of attention and the periphery, only surfacing when necessary. Calm computing isn’t about removing technology; it’s about making it fit naturally into human environments. And one way to understand more about users’ environments is through personas, research-based representations of real users which you can “plug into” the picture for better insights on how to design for real-world calm computing and more.

In this video, William Hudson, User Experience Strategist and Founder of Syntagm Ltd., explains how design maps connect personas, stories, and constraints to help you plan calm, seamlessly integrated interactions.


How does calm computing differ from invisible computing?

Calm computing and invisible computing may sound similar, but they serve different goals. Calm computing respects your attention by offering information quietly and only when necessary. Invisible computing, however, aims to make the technology itself disappear from your awareness.

In calm computing, technology still engages the user when needed; it just does so gently. Invisible computing, on the other hand, works silently behind the scenes, often without direct user interaction. For example, a smartwatch that gently vibrates to signal a meeting follows calm computing. A smart light that adjusts to the time of day without your input reflects invisible computing.

Use calm computing design when you want users to stay aware without being interrupted. Invisible computing works better for automation tasks where no feedback is needed.

Find a wealth of insights to help you meet your product’s users’ needs in our article Flow Design Processes - Focusing on the Users’ Needs.

How can I design a calm interface?

To design a calm interface, reduce visual noise, minimize user input, and present information only when needed. Begin by prioritizing what users must know and deliver that information subtly. Use whitespace, soft colors, and gentle animations to guide attention without demanding it.

Limit unnecessary alerts or pop-ups. Instead, consider passive feedback like changing icon states, ambient sounds, or soft vibrations. Also, let users control when and how they receive updates. Opt-in notifications and customizable settings help preserve user autonomy, and that’s one of the key principles of calm computing.

Test your interface by asking: Can the user understand what’s happening without focusing too much on the screen?

Explore how whitespace acts as a calming influence and more in your design solutions.

How should I handle notifications in calm computing?

In calm computing, handle notifications by prioritizing relevance, timing, and subtlety. Only surface alerts when they matter, and deliver them in a non-intrusive way. Avoid defaulting to loud sounds or persistent banners. Instead, use quiet cues like soft vibrations, ambient light changes, or icon badges.

Let users customize which notifications they receive and when. Group similar notifications or batch them at logical intervals. Context-aware notifications, triggered by location, time, or activity, further help deliver value without interruption.
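Batching at logical intervals can be sketched like this; the tuple shape of an event and the 30-minute bucketing window are illustrative assumptions:

```python
from collections import defaultdict

def batch_notifications(events, interval_minutes=30):
    """Group low-priority events into one digest per (time slot, category).

    Each event is a (timestamp_in_minutes, category, message) tuple.
    """
    buckets = defaultdict(list)
    for ts, category, message in events:
        slot = ts // interval_minutes      # which digest window this falls in
        buckets[(slot, category)].append(message)
    return dict(buckets)                   # one digest per bucket, not per event
```

The effect on the user: three messages in one half-hour window become a single quiet digest instead of three separate interruptions.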

Always ask: Does this notification help or hijack the user’s attention?

Discover How to Design Notifications for Better Mobile Interactions.

How does calm computing relate to IoT (Internet of Things) design?

Calm computing fits naturally into IoT design because it helps smart devices interact seamlessly with users, without constant attention or input. In IoT environments, dozens of devices can compete for attention. Calm computing ensures they cooperate quietly instead.

You can build IoT systems that communicate through ambient signals or automation. Many devices apply calm computing by offering value passively, only notifying users when intervention is truly needed.

Also, calm IoT devices respect privacy by minimizing unnecessary data prompts or visual clutter. By designing with discretion and subtlety, you can keep users informed without overwhelming them.

Get a greater grasp of how the Internet of Things works so you can create amazing designs that work with it and serve users well.

How does calm computing work in AR/VR or mixed reality experiences?

In AR/VR (Augmented Reality / Virtual Reality) and mixed reality (MR), calm computing helps avoid sensory overload by guiding attention gently and minimizing unnecessary input. Immersive environments can easily overwhelm users, so it’s important to reduce clutter and use cues sparingly.

Instead of flashing alerts or constant motion, use ambient changes, spatial audio, or subtle haptics to direct focus. For example, a VR training app might highlight tools only when a user looks at them, rather than showing everything at once. Give users control over what appears and when. Let them toggle layers of information or pause experiences when they need to. Calm computing in XR (extended reality) is about creating immersive yet thoughtful environments that support, not dominate, the user’s experience.

Access a treasure trove of helpful insights so you can create products that put the user comfortably in charge of amazing augmented reality experiences.

How does calm computing influence wearable technology design?

Calm computing plays a critical role in wearable technology by shaping how devices deliver information without constantly interrupting the user. Wearables like smartwatches and fitness trackers succeed when they offer relevant insights quietly and unobtrusively.

Use gentle haptic feedback, minimal displays, and context-aware notifications. For example, a Fitbit vibrates gently to prompt movement without using sound or flashing screens. That’s a perfect example of calm computing in action, delivering value while preserving focus.

Wearables support passive monitoring, too, like heart rate tracking or sleep analysis, without needing input. These features respect attention by working in the background and surfacing information only when it’s useful.

To follow calm computing, prioritize personalization, minimal interaction, and seamless integration into the user’s daily rhythm.

Explore the world of wearable computing for important and helpful points about designing experiences that users can feel are exceptional.

How do calm computing principles support accessibility and inclusion?

Calm computing supports accessibility by simplifying interactions and offering multi-sensory feedback. It helps users with cognitive, visual, or auditory disabilities by reducing complexity and offering subtle but clear cues, like vibrations, visual changes, or sound alternatives.

Inclusive design aligns with calm computing because both aim to make tech usable for all, regardless of ability. For instance, smart home devices that respond to voice and light cues offer support without demanding physical interaction, making them accessible to users with mobility disabilities.

You can apply calm principles by focusing on clarity, predictability, and customizable alerts that match different needs.

Understand accessibility better so you can design better experiences not just for users with disabilities but for all users.

What are the challenges when designing for calm computing?

Here are some calm computing design challenges to watch out for:

  • Risk of invisibility or being ignored: If the design is too subtle or passive, users might miss important information entirely. A peripherally visible indicator might not catch their attention when needed. If alerts or cues blend too well into the background, the system may fail its purpose. So, test and tune cues carefully: ensure they can rise into the center of attention when required, without being startling or intrusive.

  • Ambiguity and delayed feedback: As calm design often relies on subtle, minimal feedback, users may feel uncertain about what’s going on. For example, a soft glow or minor vibration may not communicate enough information on its own. Users may miss context or misunderstand what a cue means. Good documentation, clear visual language, or occasional explicit cues might be necessary to prevent confusion.

  • Overreliance on user attention patterns: In high-stakes scenarios, such as medical, safety-critical, or time-sensitive tasks, relying on subtle cues could be dangerous or irresponsible. In such cases, you might need more direct, deliberate interaction rather than calm background cues.

  • Potential for privacy and surveillance abuse (especially in ubiquitous computing contexts): Because calm computing often pairs with ubiquitous computing, embedding sensors and “invisible” technology throughout environments, it’s essential to guard against misuse. Ambient sensors, data collection, automated tracking, or behavioral monitoring can overstep privacy boundaries or feel invasive. If you design calm computing in a way that watches or tracks without transparency and consent, you risk eroding user trust and violating ethics and perhaps laws.

  • Complexity in balancing subtlety, clarity, and reliability: Designing something that’s both subtle and reliable is inherently tricky. Too subtle and it’s invisible; too loud and it breaks the calm. Moreover, things like sensor failures, network outages, or mis-configured ambient systems can undermine the calm, producing frustration. So, invest in edge cases, fallback mechanisms, failsafe behavior, and graceful degradation.

  • Technical constraints, like battery life or sensor accuracy, can also limit how “calm” a system behaves. So, work within those limits while still meeting user expectations for relevance, responsiveness, and trust.
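One common way to keep a noisy sensor from flickering an ambient cue (draining both attention and battery) is hysteresis: only change state after several consecutive readings agree. A minimal sketch, with illustrative thresholds:

```python
def debounced_states(readings, threshold, on_count=3, off_count=3):
    """Apply hysteresis: flip the cue only after several readings agree."""
    state = False
    above = below = 0
    states = []
    for r in readings:
        if r >= threshold:
            above, below = above + 1, 0
        else:
            below, above = below + 1, 0
        if not state and above >= on_count:
            state = True                 # turn the cue on, but only once sure
        elif state and below >= off_count:
            state = False                # and off again, just as cautiously
        states.append(state)
    return states
```

A single spurious reading never changes the cue, so the periphery stays steady and trustworthy.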

In this video, William Hudson explains how identifying constraints early helps you design systems that remain calm and reliable within technical limits.


What are some recent or highly cited articles about calm computing?

Weiser, M., and Brown, J. S. (1995). Designing calm technology. Xerox PARC.
This widely referenced white paper from Xerox PARC introduces the concept of “calm technology,” arguing that technology should inform without overwhelming users, allowing information to move fluidly between the center and periphery of attention. Using the “Dangling String” network visualization as an example, Weiser and Brown illustrate how unobtrusive, ambient feedback can make technology more human-centered. Though not formally published in a peer-reviewed journal, this document is foundational in ubiquitous computing and interaction design. It remains a key conceptual reference in designing user experiences that are subtle, supportive, and seamlessly integrated into daily life.

Tugui, A. (2004). Calm technologies in a multimedia world. Ubiquity.
This article explores how calm technology principles, first introduced by Weiser and Brown, can be applied in a multimedia-driven context. Tugui argues that even as digital environments become saturated with sensory input, systems should still aim to reduce cognitive overload and maintain user comfort. The paper emphasizes ambient, low-disruption design approaches that keep users informed without distraction.

Nilsson, T., Fischer, J. E., Crabtree, A., Goulden, M., Spence, J., and Costanza, E. (2020). Visions, values, and videos: Revisiting envisionings in service of UbiComp design for the home. arXiv Preprint.
In this preprint study, the authors use animated video scenarios to explore how people envision and react to different smart home technologies. One video reflects calm, ambient computing principles, while another emphasizes active user engagement. Based on focus group discussions with 28 participants, the paper reveals nuanced public attitudes toward domestic ubiquitous computing, highlighting preferences for calmness, concerns about detachment, and the desire for appropriate control. While not solely focused on “calm technology,” this paper offers valuable insight into how calm design ideals interact with everyday user values in home-based UX.

Case, A. (2015). Calm Technology: Principles and Patterns for Non-Intrusive Design. O’Reilly Media.

This book offers a practical introduction to the concept of calm technology, technology that remains unobtrusive and in the background unless needed. Drawing on the foundational ideas of Mark Weiser and John Seely Brown, Amber Case translates calm computing into actionable principles for designers and developers working in a world increasingly saturated with devices. The book outlines eight key principles of calm design, such as using peripheral attention and minimizing interruptions, and introduces “calm communication patterns” that leverage multiple sensory channels without overwhelming users. With examples and conceptual exercises, including scenarios like designing a calmer kitchen or alarm clock, this book helps product teams create human-centered, non-intrusive technology that integrates gracefully into everyday life.


Learn More About Calm Computing

Make learning as easy as watching Netflix: Learn more about Calm Computing by taking the online IxDF Course AI for Designers.

Why? Because design skills make you valuable. In any job. Any industry.

In This Course, You'll

  • Get excited about how quickly you can create meaningful results when you turn AI into your new superpower and combine it with timeless human-centered design skills. As an AI-powered designer, you’ll increase your efficiency and salary potential, and as a human-centered leader, you'll create ethical AI-powered products people love. When these abilities come together, you’ll solve real-world problems faster and make the world a better place. Studies show that AI-driven productivity accelerates growth and helps busy professionals like you achieve in months what once took decades. Now's the time to master the powerful combination that keeps you in demand.

  • Make yourself invaluable when you use AI to accelerate repetitive tasks while applying your irreplaceable human strengths like empathy, critical thinking, and ethical decision-making to guide every solution toward what users truly need. Leaders seek professionals with AI fluency because the people who thrive combine AI efficiency with deep human insight that machines can't replicate. You'll learn to automate and streamline repetitive tasks, make data-driven decisions, and optimize your workflow for innovation and cost savings. No matter your background, you'll easily master AI for Designers. With clear guidance and real-world examples, you'll apply your skills immediately.

  • Gain confidence and credibility as you master the art of text prompts for design. You'll learn how to deal with bias in AI and how to use AI tools to create intuitive user experiences that drive business success. You can fast-track your portfolio of AI-enhanced case studies with step-by-step guidance, practical tips for real-life projects, and downloadable templates. This course gives you the hands-on knowledge to guide AI with timeless human-centered design skills, and that's how you stay in demand in any industry!

It's Easy to Fast-Track Your Career with the World's Best Experts

Master complex skills effortlessly with proven best practices and toolkits directly from the world's top design experts. Meet your experts for this course:

  • Ioana Teleanu: Founder of UX Goodies and former Lead Product Designer (AI) at Miro.

  • Rafael Hernandez: Lead Product Designer at T. Rowe Price.

  • Jarvis Moore: Senior Design Lead, Design Systems at Microsoft.

  • Niwal Sheikh: Product Design Lead at Netflix.

  • Vitaly Friedman: Senior UX Consultant for the European Parliament, and Creative Lead, Smashing Magazine.

  • Pablo Stanley: Designer and CEO of Musho and Lummi.

Get an Industry-Recognized IxDF Course Certificate

Increase your credibility, salary potential and job opportunities by showing credible evidence of your skills.

IxDF Course Certificates set the industry gold standard. Add them to your LinkedIn profile, resumé, and job applications.


Be in distinguished company, alongside industry leaders who train their teams with the IxDF and trust IxDF Course Certificates.

Our clients: IBM, HP, Adobe, GE, Accenture, Allianz, Philips, Deezer, Capgemini, McAfee, SAP, Telenor, Cigna, British Parliament, State of New York

All Free IxDF Articles on Calm Computing


No-UI: How to Build Transparent Interaction


Here, we will explore and teach you about the incredible user experience opportunities which you can take advantage of when designing for interaction beyond the classical Graphical User Interface (GUI). Non-visual User Interaction (no-UI) is pioneered by the ground-breaking work of researchers who have realized that, in today’s world, we are surrounded by computers and applications that constantly require our attention: smartphones, tablets, laptops and smart-TVs competing for brief moments of our time to notify us about an event or to request our action. Staying abreast of developments will turbo-charge your skill set, so you can access users in more ingenious ways.

The bulk of these attention requests and actions take place through interaction with Graphical User Interfaces, peppered with a short audio or vibration cue here and there. However, rich user experiences are not only dependent on good visual design: they can take advantage of the context awareness, sensors and multimodal output capabilities of modern computing devices. To take advantage of non-visual interaction options, we need to design them carefully, considering the modern advances in software and hardware sensing, paired with Artificial Intelligence (AI), which continue to transform the way we interact with our computing devices. We’re gradually moving away from designing GUIs, which require the user’s full attention, and moving towards designing calmer, less obtrusive interaction, bringing human-computer interaction without graphics to the core of the user experience: Welcome to the world of no-UI.

In a world where we are surrounded by information and digital events, Mark Weiser, a visionary former researcher at Xerox PARC and widely considered the father of Ubiquitous Computing, believed that technology should empower the user in a calm and unobtrusive manner, by operating in the periphery of the user’s attention.

“The result of calm technology is to put us at home, in a familiar place. When our periphery is functioning well we are tuned into what is happening around us, and so also to what is going to happen, and what has just happened. This is a key property of information visualization techniques, like the cone tree, that are filled with detail yet engage our pre-attentive periphery so we are never surprised.”
– Mark Weiser & John Seely Brown, Xerox PARC

A Definition by Example

For many decades, Graphical User Interfaces (GUIs) have dominated the way we interact with computers, and they continue to be the primary way of interacting with our computing devices even as they evolve into radically different forms and become ever more ubiquitous. Advances such as multi-touch, gestural input and capacitive screens have moved interaction far beyond early examples of the ‘90s, especially in mobile, although many of the interaction design elements remain the same (e.g., icon-driven interfaces; long, short and double taps; etc.).

The very first GUI-driven ubiquitous computing devices, Xerox PARC’s PARCPad and Apple’s Newton, alongside GUIs in everyday modern devices such as the smart fridge and smart remote control. Visually, not much has changed!

The primary goal of GUIs was to present information in such a way as to be easily understandable and accessible to users, as well as to provide the visual controls and direct manipulation mechanisms through which a user could interact with this information and instruct the computer to carry out tasks. We are so accustomed to using GUIs that perhaps we take for granted the underlying principle by which GUIs are developed: It’s the computer’s job to present the data, interpret the user’s instructions and process the data. However, it’s still our job as humans to understand the information, invent sequences of commands through which it can be transformed or processed, and—finally—make sense of the end results of computation by matching these with their intended goals or the surrounding environment.

Let’s take an everyday scenario to illustrate this. Imagine you are on holiday in a new place and want to find a good restaurant to eat in whilst walking down the main street of the city you’re visiting. You bring up the TripAdvisor app on your mobile. You provide it with your location (or allow it to be discovered by GPS) and instruct the app that you are looking for restaurants. The app presents a list of results matching your criteria, together with some basic information about each result (e.g., their name, type, rating and distance from you). By scrolling through the list, you are able to find a restaurant that sounds good (e.g., “La Pasteria” might appeal to a lover of Italian food), isn’t too far to get to (this might depend on how much you like it and are willing or are able to walk) and which has a decent rating (#1 out of 20 is very good, but #50 out of 500 is still also pretty good if it’s not too far and is Italian).

A good GUI design will help you achieve your goals by facilitating (and minimizing) the entering of text and commands provided by you and by laying out the results in a way which you can easily understand on seeing them. However, the hard part—i.e., deciding which one is ultimately a good candidate—is a processing task performed exclusively by you. Only you know your individual preferences, mood, and abilities as a human, and also, to perhaps a lesser extent, those of your companions. Ever noticed how much time it usually takes to make such a choice (especially if it’s not only yourself who will be going – and if you’re all hungry)?

Are you hungry? How much time do you need to spend using an app to find a restaurant that sounds good? And how much more time will you spend if you get there and the restaurant is not what you expected?

Now imagine the same scenario without using a mobile app – instead, you’re walking down that street with a friend who lives in that city. As you walk along, dozens of options arise, but your friend will only initiate a conversation when you’re passing near a place she thinks you may like. So, she might proactively tell you the names of two or three restaurants only, but her advice is based on many more factors: places she has been to herself and has found to be good, experience from providing advice to other guests in the past and from taking their feedback, knowledge of how easy a restaurant is to get to, how busy it might get at the current time, how suited it might be for couples or large groups, etc. Effectively, your local friend has made a large number of observations and assumptions about you, added her own experience and knowledge and has narrowed the results down to just a few, thus doing the hard work for you. She has provided you with a “no-UI” experience: proactively initiating conversation about your goals, limiting interaction to a few natural questions and responses, factoring in a large number of observations and assumptions and presenting you with the results of hard and intensive computation. Now, the question is—can we replicate this experience when we design our applications and services? What technologies do we need so as to accomplish such a task?

The no-UI experience: curated knowledge at our periphery.

Three No-UI Interaction Building Blocks

You will have noticed from the previous example that no-UI interaction is heavily based on three basic building blocks:

  • Observations: the local friend has factored in a number of facts about yourself: whether you are dining alone or with a partner, your age and fitness level, the time of day and the distance of the hotel to other restaurants. These are facts that our mobile devices can “sense”. As a designer, you can leverage information provided via hardware sensors, data repositories internal or external to a device, or user profiling: for example, companionship via Bluetooth, location via GPS/networks and venue locations via databases, age and fitness via apps (e.g., Google Fit), time of the day via the clock. Don’t ask the user for information unless you can’t get it otherwise!

  • External knowledge: your friend also knows a lot of things: many tourists have given her feedback about some of the places she recommended, how much you might like a place depending on whether you are dining with a partner or group, how busy a place is likely to be, the quality of their food against their prices, her knowledge of the area and how complicated a route to a venue is, etc. As a designer, keep abreast of technological developments and be aware of techniques for extracting knowledge from external sources—e.g., semantically and emotionally analyzing comments and tips left at venues, knowing the daily spread of check-ins at venues, knowing the profiles of users who have visited a venue, etc. Various APIs from services such as FourSquare or Google+ are able to give us such knowledge, and there are ways of organizing it in a meaningful manner (e.g., ontologies).

  • Intelligence: Based on her observations and external knowledge, your friend has made a number of assumptions about you. Matching the observation to knowledge requires intelligence (and some creative thinking). This is the hardest part indeed – while the capture and organization of observation and knowledge is relatively easy, it needs prioritizing: for example, it’s no good recommending lunch at the most popular restaurant—which also happens to be very close to your location—if it’s closed at lunchtime. At other times, seemingly unimportant information might become crucial – it’s your partner’s birthday and her favourite food is Chinese; so, on that particular day—and only then—this becomes the number one criterion. Notice here that the criterion is not even about the user as an individual: We live in a world of complex relations with other humans and social rules, and capturing that context is not always easy, even for us as a species.

The critical element binding everything together here is intelligence. Without this step, a no-UI application is impossible. Intelligence determines not only what result you as a designer should present to the user, but also how you should present it.
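The three building blocks can be sketched in code. In this minimal, hypothetical Python example, the `Venue` fields stand in for observations (distance derived from GPS) and external knowledge (ratings, opening hours from a venue database), while the scoring function plays the part of the "intelligence" step. All names, fields and weights are illustrative assumptions, not any real recommendation engine.

```python
from dataclasses import dataclass

@dataclass
class Venue:
    name: str
    rating: float        # external knowledge: e.g. from a review API
    distance_km: float   # observation: derived from the user's GPS position
    open_now: bool       # external knowledge: venue database
    cuisine: str

def recommend(venues, preferred_cuisine, max_results=3):
    """Tiny 'intelligence' step: filter on hard constraints,
    then score each candidate against soft preferences."""
    candidates = [v for v in venues if v.open_now]   # closed venues are useless now
    def score(v):
        s = v.rating - 0.5 * v.distance_km           # nearer and better-rated wins
        if v.cuisine == preferred_cuisine:
            s += 2.0                                 # context boost, e.g. a birthday
        return s
    return sorted(candidates, key=score, reverse=True)[:max_results]

venues = [
    Venue("La Pasteria", 4.5, 0.4, True, "italian"),
    Venue("Golden Dragon", 4.8, 1.2, False, "chinese"),
    Venue("Cafe Nord", 4.0, 0.2, True, "french"),
]
picks = recommend(venues, preferred_cuisine="italian")
```

Note how the hard constraint (open now) removes the top-rated venue entirely, just as the local friend silently discards options she knows won’t work, before any ranking happens.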

“[…] deep learning [is] a process where the computer is taught to understand and solve a problem by itself, rather than having engineers code the solution. Deep learning is a complete game changer. It allowed AI to reach new heights previously thought to be decades away. Nowadays, computers can hear, see, read and understand humans better than ever before. This is opening a world of opportunities for AI-powered apps, toward which entrepreneurs are rushing.”
– Tony Aube, lead designer at OSMO

At the beginning of this article, we spoke about moving away from the GUI – this means progressively attempting to interact with our users via multimodal interfaces. Sounds, speech synthesis, vibrations and even text, as in the case of chatbots, are ways in which we can convey information in varying degrees of granularity. You should not assume that you know how best to deliver multimodal interaction just because you are accustomed to the beeps and buzzes from the apps you use every day. Instead, multimodal interactions are things you must carefully design with the app’s purpose in mind and by accounting for the user’s abilities and context. For instance, in our previous example on tourism, an intense vibration on your phone might mean you’ve walked past the restaurant you wanted to visit, so you should turn back. Shaking the phone or pressing a hardware volume button while it’s vibrating might signal the device to give spoken instructions (e.g., “It’s behind you and to the right.”). Are these interaction examples good or bad? This is something you have to find out for yourself, through experimentation and human-centred design.
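A dispatch of this kind can be sketched as a small mapping from events and coarse context to output channels. The event names, context flags and channel names below are illustrative assumptions mirroring the walking example above, not a real API; a real design would arrive at such a mapping through the experimentation just described.

```python
def choose_cue(event, context):
    """Map a navigation event plus coarse context to an output modality.
    All names here are hypothetical, for illustration only."""
    if event == "passed_destination":
        return "intense_vibration"         # peripheral nudge: turn back
    if event == "user_shook_device":
        if context.get("noisy"):
            return "on_screen_message"     # spoken output would be drowned out
        return "spoken_instruction"        # e.g. "It's behind you and to the right."
    return "silent_notification"           # default: stay in the periphery
```

The point of the sketch is the shape of the decision, not the specific rules: the default branch keeps the interaction calm, and escalation to foreground modalities happens only on explicit events or user requests.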

We also need a level of intelligence in order to interpret users’ physical and gestural interactions with the device (e.g., did the user shake the device with an intention to stop the current guidance, or was it an inadvertent action?). Additionally, we need intelligence to be able to determine an optimal way of presenting information (e.g., show a visual message instead of synthetic speech—if the user is in a very noisy environment). Also, finally, once we get the interaction during real-world use right (or wrong!), we should feed the outcomes back into our interaction models, helping the computer learn from the process of being used. This is the true meaning of intelligence – to be able to sense the world around us and learn from our interactions with it.

If it’s so Hard, Why Even Bother?

Humans have evolved to rely primarily on vision to perceive and understand the world around them (whether physical or digital). So, GUIs are not going to disappear anytime soon, particularly when the use case calls for significant amounts of information to be presented to the user. In the world of ubiquitous computing, the need for information is constant; even so, we should not forget that much of the information required by users is succinct: a recommendation for a good restaurant, somebody’s phone number, the weather forecast for this afternoon, for instance. Snippets of information like these can require complex systems to generate them; however, this complexity should not mean that the means to obtain them must also be complex. The balance of interaction needed to obtain a bit of information versus the amount of information should be—at the very least—neutral and optimally leaning towards less interaction, while at the same time driving information towards our periphery and not the centre of our attention. Mark Weiser (1997) called this concept “Calm Computing”. Uwe Hansmann et al. (2003) and Stefan Poslad (2009), authors of two key texts on Ubiquitous Computing, both insist: Human Computer Interaction must be “transparent” or “hidden”. Minimizing interaction through no-UI techniques prevents the danger of the user experience being more about the device or app than about navigating the complexities of everyday life.

For example, researchers Antonio Krüger et al. (2004) at Saarland University and Katharine Willis et al. (2009) at Weimar University have shown that constant interaction with mobile maps results in a number of cognitive difficulties for users, such as a diminished ability to build detailed mental models of their surroundings, a failure to notice important landmarks and a detraction from the pleasure of the experience of visiting a new place.

These are the dangers of UI interaction in mobile maps, as shown by Katharine Willis et al. (2009). Learning an area and its landmarks (a) using a mobile map (b), vs. using a paper map (c): mobile users tend to focus on the routes between landmarks, while using a paper map gives a better understanding of the whole area.

Examples of No-UI Interaction

For the reasons stated above, considerable research has gone into reducing the interaction to multimodal, no-UI methods on mobile devices, but there are also some examples of commercially available services which have been gaining popularity since 2015. An example of the latter is chatbots, which attempt to provide a virtual assistant type of experience (though, arguably, a text interface is still a GUI). AI-driven chatbots became a trend in 2016 with the emergence of new companies such as Pana (formerly Native, a travel booking agency) and the integration of bots in existing services, such as Facebook’s messenger (using Facebook’s own engine or third-party AI engines such as ChatFuel). Other companies have jumped on the bandwagon, too, for their own services—e.g., in 2016, FourSquare introduced a conversational bot that would replace its traditional search interface and provide recommendations by responding to users’ questions. The FourSquare app also proactively issues notifications based on your location, time of day and profile (e.g., “Are you near Trafalgar Square? You might like to try John’s Coffee for coffee.”).

FourSquare’s proactive notifications provide information relevant to the user’s location, without the user needing to interact with the mobile application.

Above is an example of interaction with the CNN chatbot via Facebook Messenger. Although it’s still a UI-based interaction method, the interface resembles (but isn’t truly) natural language, without traditional widgets, menus and options. Notice how ambiguity is handled in the third picture!

Other interesting no-UI examples are found in research. Steven Strachan et al., at the Hamilton Institute, demonstrated a concept in 2005 where navigation instructions were provided to users listening to music on their headphones—by altering the volume of music (lower means further away) and its direction using 3D audio to indicate the target bearing.

In another research study related to non-visual navigation, Andreas Komninos and some colleagues at Glasgow Caledonian University (Komninos et al. 2012) used 3D audio to provide a constant audio stream of a person’s footsteps (in contrast to music, this example uses audio that is natural to the urban environment) – the direction of the sound indicates the bearing to the nearest segment of the calculated route to the target, and its volume shows how far away from that segment a user is.

The SoNav prototype for navigation via 3D audio was created by Andreas Komninos et al. in 2012: The user simply selects a start and end point, while a route is automatically computed (a) – this is the only visual interaction element. From there on, the user hears a continuous sound from the direction of the nearest route segment, or audio beacons positioned at important landmarks (b). In an experiment, users started from the top left of the map and explored almost all of the area (GPS trace heatmap) covered by the route’s audio signal (grey-shaded area) to reach the target audio beacon (red-shaded area), each user taking a different route and freely exploring the city (c).
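The core mapping these systems share, bearing to sound direction and distance to volume, can be sketched as follows. Flat local-grid positions and a linear volume fall-off are simplifying assumptions for illustration; they are not the actual GpsTunes or SoNav implementations, which work with real geo-coordinates and audio rendering engines.

```python
import math

def audio_cue(user_pos, user_heading_deg, target_pos, max_range_m=200.0):
    """Map a navigation target to a 3D-audio cue: the relative angle says
    where to render the sound, the volume says how far away the target is.
    Positions are (x, y) metres on a local grid, x = east, y = north."""
    dx = target_pos[0] - user_pos[0]
    dy = target_pos[1] - user_pos[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360           # compass bearing, 0 = north
    relative = (bearing - user_heading_deg + 180) % 360 - 180  # -180..180, 0 = dead ahead
    volume = max(0.0, 1.0 - distance / max_range_m)            # quieter when further away
    return relative, volume
```

A user facing north with the target 100 m due north hears the sound straight ahead at half volume; as they drift off course, the sound swings to one side, and as they close in, it grows louder.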

David McGookin and Stephen Brewster (2012), from the University of Glasgow, also demonstrated a 3D-audio based system, using the sound of flowing water and the splashes of stones being thrown into it, to show how heavily users have been tweeting in an urban area (thus indicating the social “pulse” of the area). The water stream’s sound volume shows the temporal density of tweets, while individual stone splashes are rendered in 3D audio and represent the actual tweets being made near the user’s location (which fit a number of criteria). Other modalities also feature here, such as haptic feedback: advanced vibration patterns and waveforms that convey information to a user, allowing users to monitor the “state” of their device without looking at it. For example, Fabian Hemmert (2008), a researcher at Deutsche Telekom, developed a system in which a constant vibration presents the number of missed calls or incoming messages to the user—the vibration is almost imperceptible at first, but it rises in intensity and frequency as more “events” accumulate on the device. As a designer, you have to think twice before applying haptic feedback, as the user may not be interested in being disturbed by constant vibrations. It may be fine if a person has one app which uses rising intensity and frequency of vibration as feedback for missed calls. On the other hand, try to imagine a user who has five apps which all use vibration as feedback each time, for example, a new email, a “breaking news” alert or a new online message comes through. Would you be interested in using those apps yourself?
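Hemmert’s rising-vibration idea boils down to a mapping from accumulated events to vibration parameters. The sketch below illustrates the principle; the specific curve, caps and starting intensity are illustrative guesses, not the published design.

```python
def ambient_vibration(missed_events, max_events=10):
    """Map the number of unattended events to a vibration intensity (0..1)
    and a pulse frequency in Hz. Curve and caps are illustrative only."""
    level = min(missed_events, max_events) / max_events
    intensity = 0.05 + 0.95 * level      # almost imperceptible at first
    frequency_hz = 0.2 + 1.8 * level     # pulses come faster as events pile up
    return intensity, frequency_hz
```

The cap matters as much as the curve: a calm design saturates rather than escalating without bound, so the device never becomes the centre of attention no matter how many events pile up.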

More exotic ideas include the use of thermal interfaces for mobile devices: Graham Wilson et al. (2012), at the University of Glasgow, have shown how the use of heat-transmitting pads on a device can do the job of showing users the source (work or personal) and the importance of incoming messages.

In all the above examples, the no-UI approach is incomplete. Conversational bots have access to external knowledge and also use rather sophisticated AI (mostly to interpret your questions) but do not make direct observations about the user, using device sensors. It’s also the user who initiates the interaction, instead of the app taking a proactive approach. Users are also still faced with the burdensome task of providing information about their goals and desired tasks. In the research examples, sensors have the role of obtaining information about the user and also obtaining external knowledge, but the use of AI is rather limited. In our research examples, the use of a GUI is also part of the experience, as users need this in order to input some basic information (e.g., their navigation target) or to initiate the service, thus implicitly stating a current goal. Nevertheless, in these examples, we see how the no-UI approach works well in allowing users to shift their attention easily to monitoring the state or progress of an ongoing task, without really needing to interact with the GUI physically (as you would, for example, when using a simple map application, where you might occasionally bring the device out from your pocket so as to see where you are).

The Take Away

An effective no-UI approach is heavily based on the concept of context awareness, which includes the user’s goals and preferences, knowledge of the surrounding environment, social rules and device abilities for knowing how and when to deliver information in a non-visual way to users. The level of context awareness required for a complete no-UI service is difficult to obtain, but the examples above show where no-UI approaches are likely to work best: allowing the user to monitor the progress of ongoing tasks or get updates on important information as it emerges.

The key advantage of no-UI design here is that it eliminates the need for constant visual interaction with the device: taking it out of your pocket, waking it from stand-by mode, unlocking it, bringing the desired application to the foreground or expanding notifications, assessing all the information displayed and then making a decision.

In a world where we are surrounded by information and digital events, Mark Weiser foresaw the necessity for calm technology. As a designer, your task remains to harness and influence the developments in technology, deploying its capabilities with one thing in mind: to allow the user to keep calm and carry on (with the tasks at hand)!

References & Where to Learn More

Weiser, M., & Brown, J. S. (1997). “The coming age of calm technology”. In Beyond calculation (pp. 75-85). Springer New York.

Hansmann, U., Merk, L., Nicklous, M. S., & Stober, T. (2003). Pervasive computing: The mobile world. Springer Science & Business Media.

Krüger, A., Aslan, I., & Zimmer, H. (2004). “The effects of mobile pedestrian navigation systems on the concurrent acquisition of route and survey knowledge”. In International Conference on Mobile Human-Computer Interaction (pp. 446-450). Springer Berlin Heidelberg.

Strachan, S., Eslambolchilar, P., Murray-Smith, R., Hughes, S., & O'Modhrain, S. (2005, September). “GpsTunes: controlling navigation via audio feedback”. In Proceedings of the 7th international conference on Human computer interaction with mobile devices & services (pp. 275-278). ACM.

Hemmert, F. (2008). “Ambient Life: Permanent Tactile Life-like Actuation as a Status Display in Mobile Phones”. In Adjunct Proc. of the 21st annual ACM symposium on User Interface Software and Technology (UIST) Monterey, California, USA.

Poslad, S. (2009). Ubiquitous computing: smart devices, environments and interactions. John Wiley & Sons.

Willis, K. S., Hölscher, C., Wilbertz, G., & Li, C. (2009). “A comparison of spatial knowledge acquisition with maps and mobile maps”. Computers, Environment and Urban Systems, 33(2), 100-110.

McGookin, D., & Brewster, S. (2012, May). “PULSE: the design and evaluation of an auditory display to provide a social vibe”. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1263-1272). ACM.

Komninos, A., Barrie, P., Stefanis, V., & Plessas, A. (2012, September). “Urban exploration using audio scents”. In Proceedings of the 14th international conference on Human-computer interaction with mobile devices and services (pp. 349-358). ACM.

Wilson, G., Brewster, S., Halvey, M., & Hughes, S. (2012, September). “Thermal icons: evaluating structured thermal feedback for mobile interaction”. In Proceedings of the 14th international conference on Human-computer interaction with mobile devices and services (pp. 309-312). ACM.

Pana, the virtual chatbot travel agent: https://pana.com/

Seth Rosenberg, How to build bots for Messenger, 2016: https://developers.facebook.com/blog/post/2016/04/12/bots-for-messenger/

ChatFuel, an AI engine for ChatBots: https://chatfuel.com/

Vindu Goel, With New App, Foursquare Strives to Be ‘Magic’ in Your Pocket, 2013: http://bits.blogs.nytimes.com/2013/08/29/with-new-app-foursquare-strives-to-be-magic-in-your-pocket/?_r=1

