Interaction with Smart Products
The increasing shift from having a single computer to having several intelligent (also called smart) products embedded in everyday environments raises many new challenges and offers new opportunities. Because these products can communicate with each other and with the user, system designers can combine several smart products to fulfill a given task, supporting more complex behavior. It is also important to include the user naturally in such tasks, telling her what to do or using sensors to detect what she is doing. Nowadays, the behavior of smart environments is often determined dynamically, using task planners and reasoners over ontologies to conclude how a task could be performed. Nevertheless, those techniques mainly address the technical aspects of tasks, such as the order of the steps required to perform them. Due to this focus, the interaction with users is often disregarded, although it should be one of the most essential challenges to address. After all, the main objective of such environments is to help the user with the tasks she wants to fulfill, not to make users adapt to synthetic environments.
To create such naturally appearing behavior, interaction with users should be designed in an adaptive way. It cannot be the goal of programmers to manually define in advance all possible situations in which products approach their users. Of course, it might be necessary to define parts of the interaction explicitly, for example when the designer knows exactly that in a certain step the system needs to approach the user and tell her something specific. Nevertheless, in truly smart environments it must also be possible to define interaction on a more abstract layer. Designers should neither have to explicitly define how to ask users for missing values or how to tell the user that an action has to be performed by her, nor have to specify every possible impact of the environment or the user's situation on the workflow. For example, if the background noise increases, voice output may no longer be feasible, and while the user is driving a car, only important information should be displayed.
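Such context-dependent adaptation rules can be sketched in a few lines. This is a minimal illustration only; the context model, the noise threshold, and all names are assumptions, not part of the system described above.

```python
from dataclasses import dataclass

# Hypothetical context and message models; all names are illustrative only.
@dataclass
class Context:
    noise_level: float   # ambient noise, 0.0 (silent) to 1.0 (very loud)
    driving: bool        # whether the user is currently driving

@dataclass
class Message:
    text: str
    important: bool

NOISE_THRESHOLD = 0.7    # assumed cutoff above which voice output is useless

def choose_output(msg: Message, ctx: Context) -> str:
    """Pick an output channel (or suppress the message) based on context."""
    if ctx.driving and not msg.important:
        return "suppress"    # while driving, only important information is shown
    if ctx.noise_level > NOISE_THRESHOLD:
        return "display"     # too loud for voice output, fall back to a screen
    return "voice"

print(choose_output(Message("Low battery", important=True),
                    Context(noise_level=0.9, driving=True)))  # -> display
```

The point of the sketch is that such rules live outside the workflow definition: the designer never mentions noise or driving, yet every interaction is filtered through them at runtime.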
As the focus lies on non-expert end-users, the most important requirements for user acceptance of smart environments are natural forms of product-initiated interaction and the feeling of being in control of the system.
To provide more natural interaction, an initial classification of product-initiated interaction methods (called “Interaction Types”) is required. A critical warning, for example, has different requirements than a piece of weather information and thus has to be handled differently. These types can be used to describe interaction on an abstract level. They contain only basic information about the content to communicate, but their type can be used at runtime to decide how or when the content should be presented to the user, depending on context information such as background noise or the stress level of the user. Thus, complex adaptive behavior can easily be included in workflows, so that the actual interaction is generated at runtime in a more reasonable and natural way. As a result, a simple functional workflow is turned into an “interactionflow” with the aim of making interaction natural for the end-user and easy to design for the system engineer. To achieve this, it is not reasonable to constrain the definition of interaction to cases where designers define it explicitly for a given step in the workflow. Instead, when the smart environment detects, for example, missing values or capabilities, the user should be approached dynamically without this being designed explicitly in the workflow. The designer might only provide a meta-task, and if the product cannot find a way to perform it automatically, it informs the user to take action.
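The idea of abstract interaction types resolved at runtime can be sketched as follows. The concrete types, the stress threshold, and the delivery strategies are assumptions made for illustration; the paper itself only calls for such a classification without fixing one.

```python
from enum import Enum, auto
from dataclasses import dataclass

# Illustrative interaction types; the actual classification is an assumption here.
class InteractionType(Enum):
    CRITICAL_WARNING = auto()   # must reach the user immediately
    INFORMATION = auto()        # may be deferred or dropped
    VALUE_REQUEST = auto()      # system asks the user for a missing value

@dataclass
class Interaction:
    itype: InteractionType
    content: str

def schedule(interaction: Interaction, user_stress: float) -> str:
    """Decide at runtime how and when to deliver content, given its type and context."""
    if interaction.itype is InteractionType.CRITICAL_WARNING:
        return "deliver_now"        # warnings always interrupt the user
    if user_stress > 0.8:
        return "defer"              # assumed threshold: don't bother a stressed user
    if interaction.itype is InteractionType.VALUE_REQUEST:
        return "prompt_user"        # dynamically ask for the missing value
    return "deliver_when_idle"

print(schedule(Interaction(InteractionType.INFORMATION, "Weather: sunny"), 0.9))  # -> defer
```

Note that the workflow only declares the type and content; the decision of how and when to present it is made entirely by the runtime, which is what turns a functional workflow into an interactionflow.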
However, natural interaction alone is not sufficient to ensure the user’s acceptance of such systems. The environment might behave quite naturally, but does the user also feel in control of it? The feeling of control depends heavily on the design of the system. On the one hand, if the environment behaves highly dynamically and autonomously, the user will probably reject the system because she does not know what it does. If, on the other hand, the system is too static and the user tries to extend it, she might be overwhelmed by (incomprehensible) configuration dialogues for every possible combination of products and tasks. There is thus a tradeoff between dynamic behavior and control complexity. Automated learning and reasoning is therefore as important as being able to reconfigure or define workflows manually in an understandable way.
As real-time monitoring cannot be regarded as a serious means of letting end-users control their environment, we raise the question of how far they can really be involved in the management of tasks, in our case the workflows available in the environment. The scale ranges from no involvement over partial involvement to complete involvement. There is no involvement if the environment only supports a predefined, static set of workflows, or if new workflows can be learned automatically but the user cannot influence what is actually learned. Partial involvement describes the case where automatically learned workflows can be changed or redefined, while the system also allows the end-user to define and program whole workflows herself. Complete involvement requires end-users to be experts in defining workflows, because in this case the user has to do all the programming. Due to the focus on non-expert end-users, only partial involvement will be discussed further.
The first way to include users is to learn from the user’s interaction with the environment. This can only be made reliable and trustworthy by presenting the results to her in an appropriate way. This presentation is important to avoid wrong conclusions by automated reasoners, to give the user the possibility to reconfigure learned workflows, and to let her feel in control of what happens in the background. An important focus lies on the question of how such learned data can be presented to the user most intuitively, and how she can handle error cases in which the system has learned wrong rules.
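One conceivable presentation of learned data is to render each rule as a plain sentence the user can confirm or reject. The rule representation below is entirely hypothetical; it only illustrates the idea of making learned behavior inspectable.

```python
# Hypothetical representation of an automatically learned rule; all names illustrative.
learned_rule = {
    "trigger": {"sensor": "living_room_light", "state": "on"},
    "action": {"device": "coffee_machine", "command": "brew"},
    "confidence": 0.82,
}

def describe(rule: dict) -> str:
    """Render a learned rule as a sentence the end-user can confirm or reject."""
    t, a = rule["trigger"], rule["action"]
    return (f"When {t['sensor'].replace('_', ' ')} turns {t['state']}, "
            f"I will tell {a['device'].replace('_', ' ')} to {a['command']} "
            f"(learned with {rule['confidence']:.0%} confidence). Keep this rule?")

print(describe(learned_rule))
```

Exposing learned rules this way addresses both requirements at once: the user sees what happens in the background, and a wrongly learned rule can be rejected before it ever fires.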
The second way acknowledges that adaptive systems based on automatic conclusions are sometimes not sufficient. End-users might not only want to reconfigure their environment; they might also want to add new behavior tailored to their needs. Unlike programmers, end-users cannot be expected to use conventional workflow editors, which are complicated to handle and not well suited to the concrete situation of smart environments. A natural way must therefore be created that enables end-users to define tasks for their environment. These task descriptions can again make use of the interaction types, so that even end-users can define workflows that allow smart interactive environments to behave in a natural manner.