Designing for Humans or Robots


Imagine you are driving in an Alexa-enabled car. While traveling, you issue verbal commands to set your home temperature to 62 degrees and preheat your oven to 350 degrees. While you feel your instructions are clear, this can be a massive challenge from a design standpoint. Hopefully, the system understands and correctly translates your voice commands, considering no one wants a 350-degree home and an oven baking at a mere 62 degrees.

In a separate scenario, let’s say you use Google Assistant to send your robot vacuum out to clean your floors. The vacuum can be activated by voice or by text, and using the correct command phrasing becomes essential. If you ask Google to command the vacuum, it doesn’t respond. However, if you tell Google to do the same function, the vacuum is activated, and your floors are swept clean.
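Both scenarios come down to how the spoken command is parsed. As a thought experiment, here is a minimal sketch, not how Alexa or Google Assistant actually work, of why each spoken value must be bound to the device named alongside it. The utterance and pattern below are purely illustrative.

```python
import re

# Hypothetical illustration of parameter binding in a voice command.
# Each temperature is captured together with the device named alongside it,
# so a single utterance never swaps the thermostat and oven values.
# This is NOT how Alexa or Google Assistant parse commands; it only shows the idea.

UTTERANCE = "set my home temperature to 62 degrees and preheat my oven to 350 degrees"

# Capture (device, value) pairs together instead of collecting numbers in isolation.
PATTERN = re.compile(r"(home temperature|oven)\D*?(\d+)\s*degrees")

def parse_commands(utterance):
    commands = {}
    for device, value in PATTERN.findall(utterance):
        commands[device] = int(value)
    return commands

print(parse_commands(UTTERANCE))
# {'home temperature': 62, 'oven': 350} -- each value stays bound to its device
```

If the parser instead collected the numbers in isolation and assigned them to devices afterward, a small change in word order could leave you with a 350-degree home and a barely warm oven.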

In both cases, we design the user experience based on our human perceptions. But computers, robots, and logic systems might not respond how we want if we only design for human interaction, especially if one system is talking to another system.

We often cite Isaac Asimov's Three Laws of Robotics, which require robots to follow our instructions while preventing them from harming humans or themselves. However, these laws are not always feasible. Think of something like a self-driving car. What would happen if that driverless vehicle were about to crash into something or someone? It will likely harm either the driver or the other party. Does this suggest that we need more from a design standpoint? Should software and hardware companies ensure that voice and visual interactions are more human-like? How might that change when robots talk amongst themselves?

While these non-human interactions can involve computer networks, artificial intelligence (AI) systems, algorithms, and robotic systems, in this blog I will collectively refer to all of them as robots for brevity.

Every day, we interact with these robots to complete sophisticated tasks. To continually improve those interactions, we demand that solution designers provide a user experience and user interface that incorporate our human ways of thinking and performing. But as more machine learning (ML) and deep learning algorithms power these interactions, designers must consider how these networks, systems, and robots will want to interact with each other. And careful consideration should be given to how human developers might keep a hand on the wheel for command and control.

Should we design only human-oriented interactions so that humans maintain control? If ML/AI interfaces and robot application programming interfaces (APIs) require a different type of experience and ethical design, are humans the ones to do it? Could robots design their own interfaces?

‘Jobs to be Done’ Framework

Most human-robot interactions are designed to automate and perform menial or repetitive tasks: attach a widget, move a box, use rule-based algorithms to approve transactions. In essence, they are getting a job done.

When designing for automation, the first task is to look at the steps involved in completing the job. The Jobs to be Done theory (also referred to as Outcome-Driven Innovation) considers the perspective of the customer first and the product or service provider second. As author Tony Ulwick notes, we want a 12-inch hole in a piece of wood, not a 12-inch drill.

Once a design team has decided on the need, or the job to be completed, they can ask what measurable outcomes people are struggling to achieve. Take filling out your taxes, for example. Consider all the tasks: gathering financial information, sorting and assigning the numbers to the correct sections of the tax forms, inserting the data, verifying it, potentially adjusting it, finalizing the forms, signing them electronically, and finally e-filing with the appropriate government agency. A human designer must ensure an automated filing is simple to understand and includes human-based checks and balances, and that the step-by-step process mirrors the manual process of filing a tax return with pen and paper.
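To make that decomposition tangible, a design team might sketch the job as a simple map in code, assigning each step an owner and a human checkpoint so the automated flow keeps the pen-and-paper checks and balances. The step names, fields, and structure below are hypothetical and only illustrate the idea.

```python
from dataclasses import dataclass

@dataclass
class JobStep:
    name: str                # what the step accomplishes
    owner: str               # "robot" for automated, "human" for manual work
    needs_human_check: bool  # gate the workflow until a person signs off

# Hypothetical job map for the tax-filing example above
TAX_FILING_JOB = [
    JobStep("gather financial information", owner="robot", needs_human_check=False),
    JobStep("assign numbers to form sections", owner="robot", needs_human_check=True),
    JobStep("verify the entered data", owner="human", needs_human_check=True),
    JobStep("finalize and e-sign the forms", owner="human", needs_human_check=True),
    JobStep("e-file with the government agency", owner="robot", needs_human_check=True),
]

def run_job(steps):
    """Walk the job map, pausing wherever a human check is required."""
    for step in steps:
        print(f"{step.owner} performs: {step.name}")
        if step.needs_human_check:
            print("  -> waiting for human confirmation before continuing")

if __name__ == "__main__":
    run_job(TAX_FILING_JOB)
```

The point of a map like this is not the code itself but the conversation it forces: for every step, someone has to decide whether the robot or the human owns it, and where a person must stay in the loop.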

Additionally, Jobs to be Done theory breaks each job down by defining the function, the outcome, and the emotional context. In the tax-filing example, designers must ask, “When filing taxes using a computer or the Internet, how might people struggle to understand the nuances of an electronic transaction? And when they finish, will they feel a sense of competency and empowerment?”

Designing for Human-Robot Interaction (HRI)

If we apply this breakdown to manual tasks being assigned to robots, we can define how we want those robots to interact with us.

In a 2003 IEEE paper, M.A. Goodrich and D.R. Olsen argue that supporting human-robot interaction means designing human commands with technical considerations in mind. The paper outlines seven principles that are still adhered to today:

1 - Implicitly switch interfaces and autonomy modes - because sometimes you drive using cruise control, while other times you grab the wheel yourself.

2 - Let the robot use natural human cues - simple maps, sketches, or danger icons allow for quick and efficient understanding.

3 - Manipulate the world instead of the robot - using a touchscreen to instruct a robot to travel to a location on a map means the human is thinking about the destination, not about the robot.

4 - Manipulate the relationship between the robot and the world - instead of verbal instructions or secondary manual methods, let the human interact with a representation or model to control the robot, such as dragging an icon on a touchscreen to make the robot move.

5 - Let people manipulate presented information - the interface should support interaction with the information presented, such as getting a robot to maneuver around an obstacle.

6 - Externalize memory - over-the-shoulder camera work is difficult, and you can't navigate a robot and search for survivors simultaneously. But having the robot remember previous obstacles helps the human navigate (see the sketch after this list).

7 - Help people manage attention - only give warnings where warnings are merited.
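To make principle 6 concrete, here is a minimal sketch of what externalized memory could look like in code: the robot records obstacles it has already sensed in a shared map, so the operator does not have to hold that information in their head. The grid representation, class, and function names are hypothetical, for illustration only.

```python
class SharedObstacleMap:
    """A simple externalized memory: obstacles the robot has already seen."""

    def __init__(self):
        self.obstacles = set()  # (x, y) grid cells known to be blocked

    def remember(self, x, y):
        """Robot side: record an obstacle detected by its sensors."""
        self.obstacles.add((x, y))

    def is_blocked(self, x, y):
        """Operator side: consult the shared map instead of rewatching camera footage."""
        return (x, y) in self.obstacles

# The robot logs what it senses; the operator's interface reads the same map.
shared_map = SharedObstacleMap()
shared_map.remember(4, 7)   # robot encounters debris at cell (4, 7)
shared_map.remember(5, 7)

if shared_map.is_blocked(4, 7):
    print("Operator UI: route around known obstacle at (4, 7)")
```

The design choice here is simply that memory lives outside the operator's head and outside the live video feed, which is the heart of the principle.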

The IEEE paper ultimately puts the ownership of human-robot interaction in the hands of the human operator, despite our sensory limitations. Across these principles, the proposed user-experience solution is to shift the collision-avoidance function to machine intelligence coupled with extensive sensing.

“The operator will still maintain the higher-level control and direct the [robot] arm where to go; the [robot] arm will obey, short of collisions,” the paper concludes.

Making Human-Robot Interaction More Human

And when it comes to designing for other types of robots, should they have more or fewer human-like attributes?

In a Business Insider video interview posted in May 2022, famed NASA roboticist Ayanna Howard said that while robots do not need to look like humans, they might need to emulate some human behaviors so that people can understand their functions and relate to them.

“I always loved how they showcase robots in "Star Wars," because they had all of the types of interactions we expect between human and human, and they encode it in a robot form factor,” Howard said. “There is actually a number of studies that have looked at the form factor of robotics. Do you have to have a humanoid shape in order to enhance the interaction with people? Most studies show that you don't. It just has to be able to function and behave and move in our human environment.”

“What makes R2-D2 so endearing is a combination of the movement, so the movements that go on in terms of doing little dances, for example, turning in place; the other is the sounds that are used. They're very intentional. If you notice, they're typically high-pitched. We associate high-pitched sounds with certain demographics, like birds chirping and babies laughing. And so, what's going on is that it's evoking that response of the cuteness of the sounds coupled with the movement. It's not the form factor; it's the movements and the language that really taps into our human emotions.”

Back on this planet, some robotic designs ignore visible interfacing altogether. For example, nanorobotics designs allow for medical imaging inside your body. The robot performs its duty for a human but does not necessarily interface with one. However, it requires a complex set of tools and designs that may be subject to human bias.

Removing Bias

One of the more dangerous obstacles in human-robot interface design is our own bias. As humans, we have a tendency to rely on old designs and then find ways to streamline operations. This even applies to the tools we use to design these interfaces.

Historically, these biases have been expressed in unique lines of code that were later categorized and reused to streamline repetitive tasks and free up time for development and innovation. However, the issue might be that we see robot interfaces as too human. We love seeing and hearing robots interact with us and with each other. As a result, we are entertained by dancing robots from Boston Dynamics that have little short-term application.

If we realize there are multiple situations in which the job can be done, can that remove the bias? Should we be designing for experiences? Are we building products focused on services instead of aesthetics so that at any moment, you can use the user interface (UI) you created?

Another consideration in removing bias is that new generations are leaning toward new tools for designing human and robot interactions. We see this in the use of virtual reality.

UK-based Extend Robotics, for example, has developed a VR technology interface that allows people to maintain control over high-precision tasks carried out by robots at scale and distance.

To overcome our human-robot biases, should we consider building agnostic UX systems? Should we adopt these new design tools such as VR, voice commands, and mobile touch screens instead of relying on the traditional command-line interface?

So once again, I ask the original question: Are we designing interfaces for humans or robots? As we continue using robots to simplify our processes and enrich our lives, UX designers must think more about the desired experience outcomes than about the sales numbers. And once we overcome our own biases about what a human-robot interface should look like and how we should design its functionality, we can not only speed up how tasks get completed but also lay the groundwork for more innovation.

One thing is certain: as we design for the future, the enterprise networks that support these human-robot designs will need to be advanced, agile, and robust enough to manage these new interfaces while remaining user-friendly and consumer-centric.

About the Author
Office of the CTO

The Office of the CTO at Extreme Networks analyzes forthcoming inflection points and trends for a wide audience, serving as a relatable, trusted resource for future-facing, new ideas at the cutting edge of technology and networking.
