Holistic Human Machine Interface for Commercial Vehicles

Expectations of Future Functions for Commercial Vehicles

Future HMIs have to anticipate the change in the driving task towards full automation. Several intermediate steps will have to be managed by the HMI in the coming years. Monitoring the driving environment and taking over as a fallback for the dynamic driving task are new requirements for the HMI. SAE levels 2-4 [Fig. 1] open the door to new targets for HMI. Imagine the needs of a truck driving in first position or at the end of a platoon of trucks: what does a driver need who has to take over control in 3-2-1 seconds?

Fig. 1: SAE Summary of Levels of Driving Automation Standard J3016

Even in 2015, 90% of all accidents could be traced back to human mistakes behind the steering wheel, and society expects constant progress in reducing all accidents, including those involving drivers who still tend to operate other devices while driving.

Table 1: Risk Increases of Cell Phone Tasks [5]

Cell phone task                                             Risk of crash or near-crash event
Light vehicle: dialing                                      2.8 times as high as non-distracted driving
Light vehicle: talking/listening                            1.3 times as high as non-distracted driving
Light vehicle: reaching for electronic device               1.4 times as high as non-distracted driving
Heavy vehicle/truck: dialing                                5.9 times as high as non-distracted driving
Heavy vehicle/truck: talking/listening                      1.0 times as high as non-distracted driving
Heavy vehicle/truck: use of/reaching for electronic device  6.7 times as high as non-distracted driving
Heavy vehicle/truck: text messaging                         23.2 times as high as non-distracted driving

More and more assistance systems are available in the vehicle to support the driver. These systems operate on four levels:

Level       Systems
Inform      Navigation, ALC, Night Vision, Rear & Surround View Cameras
Warn        LDW, Collision Warning
Assist      LKA, Brake Assist
Intervene   Pre-crash system, ABS/EPS
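
This classification can be made explicit in software. The following minimal C++ sketch (all type and function names are hypothetical, not an existing automotive API) shows one way of tagging each assistance function with its escalation level:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical escalation levels for driver assistance functions,
// mirroring the Inform / Warn / Assist / Intervene classification above.
enum class AssistanceLevel { Inform, Warn, Assist, Intervene };

struct AssistanceFunction {
    std::string name;
    AssistanceLevel level;
};

int main() {
    // Example assignment of functions to levels, following the table above.
    std::vector<AssistanceFunction> functions = {
        {"Navigation",              AssistanceLevel::Inform},
        {"Lane Departure Warning",  AssistanceLevel::Warn},
        {"Lane Keeping Assist",     AssistanceLevel::Assist},
        {"Pre-crash braking",       AssistanceLevel::Intervene},
    };

    for (const auto& f : functions) {
        std::cout << f.name << " -> level "
                  << static_cast<int>(f.level) << '\n';
    }
    return 0;
}
```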

Taking the human state into account for safety systems, real-time, personal and intuitive feedback from the vehicle is expected. This means paying more attention to human factors in the decision loop. What is the workload of the driving task at the moment and in the time before? The system has to take into consideration the primary driving demand (highways, roundabouts, maneuvering), the secondary tasks (phoning, looking around or operating the radio), the environmental demand (traffic density, weather, temperature, noise, night), the driver's traits (gender, age, experience) and the driver's state (fatigue, drowsiness and stress).
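
As an illustration of how such factors might enter the decision loop, here is a minimal C++ sketch of a workload estimate; the inputs, weights and scales are illustrative assumptions, not an actual production model:

```cpp
#include <algorithm>
#include <iostream>

// Hypothetical inputs to a driver workload estimate, each normalized to [0, 1].
struct WorkloadInputs {
    double primary_demand;      // e.g. highway = low, roundabout/maneuvering = high
    double secondary_task;      // e.g. phoning, operating the radio
    double environment_demand;  // traffic density, weather, noise, night
    double driver_trait;        // e.g. low experience raises the estimate
    double driver_state;        // fatigue, drowsiness, stress
};

// Simple weighted sum as an illustrative fusion rule (weights are assumed).
double estimateWorkload(const WorkloadInputs& in) {
    double w = 0.35 * in.primary_demand
             + 0.20 * in.secondary_task
             + 0.20 * in.environment_demand
             + 0.10 * in.driver_trait
             + 0.15 * in.driver_state;
    return std::clamp(w, 0.0, 1.0);
}

int main() {
    // Example: maneuvering in dense night traffic while phoning, drowsy driver.
    WorkloadInputs in{0.9, 0.7, 0.8, 0.4, 0.6};
    std::cout << "Estimated workload: " << estimateWorkload(in) << '\n';
    return 0;
}
```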

The impact of driving style on fuel consumption is estimated at up to 20%, and fuel consumption is still the number one cost driver in truck operation. This creates the need to influence the driver's behavior towards fuel-efficient and environmentally friendly driving. The trend towards connected vehicles, where new vehicles and drivers become part of the internet, also has to be anticipated: cloud-based services and apps mean more functions that the driver must handle without distraction.

All these requirements, which are partly new and partly extensions of existing ones, have to be addressed by the holistic human machine interface for commercial vehicles.

Technologies to be Included in the Holistic HMI of Commercial Vehicles

Several technologies are available to fulfill these demands. Free programmable clusters (FPC) with partly personalized and adapted content can be used as flexible instrumentation. The possibility to change the content according to the situation makes FPCs the instrumentation of the future. With AR-HUD systems the driver is informed about the status and function of the assistance systems. As seen in Fig. 2, augmented reality head-up display systems can be used to inform the driver without distraction.

Fig. 2: Continental Simplify your Drive demonstrator AR-HUD functions

Enhanced perception of navigation through augmented reality presentation of the upcoming direction is also possible with an AR-HUD, as is increased safety through situation-specific warnings using functional lighting within the dashboard. Based on the experience within this framework, the driving situations that benefit most from a cooperative interaction between driver and vehicle were identified. As a result, two driving scenarios, "Overtaking on Highways" and "Turning Left on Urban and Country Roads with Oncoming Traffic", were used in the experiments in [3]. Both scenarios were compared in independent experiments with regard to alternative system variants. The results showed that a cooperative platform system built from several driver assistance systems is practical in the automobile and that the system's underlying concept is empirically accessible. Consequently, it is possible to successfully establish a system as an interaction partner. Particularly interesting from a psychological point of view are the results indicating that cooperative systems were beneficial in the examined cases, e.g., there were significantly fewer driving errors and fewer required mirror glances. The cooperative prototype was highly accepted and was experienced as trust-arousing; the behavior of the system was well understood by the test persons and was appreciated. Taking the results into consideration, the potential of the developed concept of cooperative interaction in automobiles can be assumed, and its use also led to measurable advantages.

An extended horizon enables driver and vehicle to adjust the driving style based on what lies beyond the visible horizon. Combining individual driver records and crowd-sourced information, several use cases come within reach (a brief code sketch follows the list):

• Curve speed information for a speed advisory system
• Slippery road alerts
• Construction assistant (temporary change of lanes)
• Slow vehicle traffic, end-of-traffic-jam warning
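
The sketch below illustrates, under assumed data structures and rules, how an extended-horizon advisory could pick the next relevant event beyond the visible horizon; all names, types and thresholds are hypothetical:

```cpp
#include <iostream>
#include <optional>
#include <vector>

// Hypothetical attribute beyond the visible horizon, e.g. from an electronic
// horizon provider combined with crowd-sourced reports.
struct HorizonEvent {
    enum class Type { Curve, SlipperyRoad, Construction, TrafficJamEnd } type;
    double distance_m;         // distance ahead of the vehicle
    double advised_speed_kph;  // only meaningful for Curve / Construction
};

// Pick the nearest event that requires a speed advisory (illustrative rule).
std::optional<HorizonEvent> nextAdvisory(const std::vector<HorizonEvent>& horizon,
                                         double current_speed_kph) {
    std::optional<HorizonEvent> best;
    for (const auto& e : horizon) {
        const bool needs_advice =
            (e.type == HorizonEvent::Type::Curve ||
             e.type == HorizonEvent::Type::Construction) &&
            e.advised_speed_kph < current_speed_kph;
        if (needs_advice && (!best || e.distance_m < best->distance_m)) {
            best = e;
        }
    }
    return best;
}

int main() {
    std::vector<HorizonEvent> horizon = {
        {HorizonEvent::Type::SlipperyRoad, 1200.0, 0.0},
        {HorizonEvent::Type::Curve,         800.0, 60.0},
    };
    if (auto adv = nextAdvisory(horizon, 80.0)) {
        std::cout << "Reduce to " << adv->advised_speed_kph
                  << " km/h in " << adv->distance_m << " m\n";
    }
    return 0;
}
```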

One of the key components of future HMI designs might be visual driver analysis for dynamic adaptation of the vehicle behavior. With a mono or multi-camera system it is possible to perform:

• Head pose measurement
• Gaze sector analysis
• Face recognition
• Traffic attentiveness estimation
• Drowsiness classification
• Eye gaze direction measurement
• Gaze interaction

Additional sensors in the steering wheel or wearable heart-rate sensors can be combined into a driver model. The driver model helps the holistic HMI to adapt information and warnings to the driver. It can also be used to optimize the assist and intervene functions of the vehicle.
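
The following C++ sketch shows, purely as an assumption-laden illustration, how camera observations and a heart-rate signal could be fused into such a driver model and how the result might scale warning timing:

```cpp
#include <iostream>

// Hypothetical camera-based observations (per the list above) plus a
// heart-rate signal from a steering-wheel or wearable sensor.
struct DriverObservation {
    double drowsiness;          // 0 = alert, 1 = asleep (camera classification)
    double gaze_on_road_ratio;  // share of time the gaze sector is "road ahead"
    double heart_rate_bpm;
};

struct DriverModel {
    double attentiveness = 1.0;  // 0 = inattentive, 1 = fully attentive

    // Illustrative update rule: drowsiness and off-road gaze lower
    // attentiveness; an unusually low heart rate lowers it further.
    void update(const DriverObservation& obs) {
        attentiveness = (1.0 - obs.drowsiness) * obs.gaze_on_road_ratio;
        if (obs.heart_rate_bpm < 55.0) attentiveness *= 0.8;
    }

    // The holistic HMI could advance warnings for an inattentive driver.
    double warningLeadTimeSeconds() const {
        return 1.5 + (1.0 - attentiveness) * 2.0;  // 1.5 s .. 3.5 s (assumed)
    }
};

int main() {
    DriverModel model;
    model.update({0.4, 0.6, 52.0});  // drowsy driver, gaze partly off the road
    std::cout << "Warning lead time: " << model.warningLeadTimeSeconds() << " s\n";
    return 0;
}
```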

System Architectures to Handle System Requirements in a Holistic HMI

While a modern vehicle's electric/electronic (EE) architecture is already complex, this level of complexity is driven further by the various worlds that all meet in the HMI domain as discussed. Each function needs an interface which offers choices, parameters and control options. As a result, different software systems either share the same display at times or use different displays in different situations, including the HUD or AFFP.

Taking a closer look at the different software systems, as in [1], these are even based on different operating systems (OSs). Some of these OSs are very rich and tailored towards handling large amounts of data, for instance GENIVI-compliant Linux systems. Other software demands strict real-time capabilities with minimal latency and maximum reliability. With the advent of consumer OSs (Linux, Android, Windows) in the vehicle, a whole new dimension of apps and dynamic innovation is now added.

Higher integration avoids ECU proliferation

A look back at the evolution of HMI electronics points a way out that has worked before: higher integration. This successful strategy has led to the current partitioning of the interior domain:

• Functions directly related to driving, which appear in the cluster instrument, are controlled by a dedicated ECU tailored to the specific needs of this category.
• Infotainment functions are consolidated in the radio unit.
• Logistics functions are handled on an additional display.
• Tachograph functions run on a separate device.

The differentiation between what is relevant for driving and what is not has begun to dissolve. With the number of external functions in the vehicle increasing, this type of static architecture with two ECUs is difficult to uphold. It would have to be expanded by another dedicated ECU, e.g., for the app world. Considering the growing capabilities of multi-core CPUs, higher integration is becoming a more economic option that offers several added benefits.

Safe system architecture

Instead of having several dedicated ECUs, integration solutions are based on a multi-core CPU. At the moment up to four cores are being used, but in the future even a many-core CPU could be an option. The computing power and infrastructure of the hardware can be controlled by a hypervisor. The hypervisor divides the CPU into several virtual machines running different OSs. The big benefit of this architecture lies in allowing mature, unmodified guest OSs as well as automotive-certified OSs and applications to run on a single piece of hardware without mutual interference. Even if one OS should fail, the OSs on the other virtual machines will continue to run unaffected.

Dividing the virtual machines into a trusted and an untrusted zone does not only ensure reliability, it is also a perfect way of handling the dynamics of consumer electronics: frequent updates and the installation of new apps, for instance, are perfectly permissible in the untrusted zone, while they are not in the trusted zone.
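
Purely as an illustration (the names, core counts and memory sizes are assumptions, not a specific hypervisor product's configuration format), such a partitioning into trusted and untrusted virtual machines could be described like this:

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical static description of the virtual machines a hypervisor
// creates on one multi-core CPU. Real hypervisors use their own
// configuration formats; this only illustrates the partitioning idea.
enum class TrustZone { Trusted, Untrusted };

struct VirtualMachine {
    const char* name;
    const char* guest_os;
    std::vector<uint8_t> cpu_cores;  // cores pinned to this virtual machine
    std::size_t ram_mb;
    TrustZone zone;
};

int main() {
    std::vector<VirtualMachine> partitions = {
        // Safety-relevant cluster functions on an automotive-certified RTOS.
        {"cluster",      "certified RTOS", {0},    512,  TrustZone::Trusted},
        // Rich infotainment stack, e.g. a GENIVI-compliant Linux.
        {"infotainment", "Linux",          {1, 2}, 2048, TrustZone::Untrusted},
        // App runtime that may receive frequent updates.
        {"apps",         "Android",        {3},    1024, TrustZone::Untrusted},
    };

    for (const auto& vm : partitions) {
        std::cout << vm.name << " runs " << vm.guest_os << " on "
                  << vm.cpu_cores.size() << " core(s), "
                  << (vm.zone == TrustZone::Trusted ? "trusted" : "untrusted")
                  << " zone\n";
    }
    return 0;
}
```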

      Splendid "Isolationism"

As the automotive industry is regulated by stringent safety standards, it is of paramount importance to certify safety-relevant OSs and applications and to re-use them. This applies to the hypervisor software as well. Hypervisor architectures, for instance, have already been successfully certified for mission-critical avionics (e.g., in the Airbus) and rail use. In contrast, it would be a rather futile effort to try to certify something as rich as Linux to an ASIL-B standard. While certifying around half a million lines of code in the case of Linux would indeed be a big task of unending frustration, the hypervisor's few thousand lines of code are a real asset.

The hypervisor strictly separates the virtual machines and their OSs from the hardware inputs and outputs. Any request of a virtual machine for hardware access has to be approved by the hypervisor. This applies to trusted automotive OSs and to partially trusted OSs as well. If several virtual machines share hardware, they have to ask permission to do so at the shared services. Even if the request is granted, it is the hypervisor which then accesses the source OS and provides the requested data, not the virtual machine itself. This architecture elegantly avoids problems like data races, which could result from Linux storing data in a memory region while another virtual machine accesses it. Untrusted external requests for data have to pass through the firewall of the security policy. Only after approval can an untrusted OS/application request data via the shared services.
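
The mediation described above could look roughly like the following sketch, in which every data request from a virtual machine passes the security policy before the shared service performs the access on its behalf; all names and rules are hypothetical:

```cpp
#include <iostream>
#include <optional>
#include <string>

enum class Trust { Trusted, Untrusted };

struct Request {
    Trust source;          // which zone the requesting virtual machine belongs to
    std::string resource;  // e.g. "vehicle_speed", "can_bus_write"
};

// Illustrative security policy: untrusted virtual machines may only read
// selected, non-safety-relevant data; writes and raw bus access are refused.
bool policyAllows(const Request& r) {
    if (r.source == Trust::Trusted) return true;
    return r.resource == "vehicle_speed" || r.resource == "outside_temperature";
}

// The shared service performs the access on behalf of the requester, so the
// guest OS never touches the hardware or another VM's memory directly.
std::optional<std::string> sharedServiceRead(const Request& r) {
    if (!policyAllows(r)) return std::nullopt;  // refused by the firewall
    return "value_of_" + r.resource;            // placeholder payload
}

int main() {
    auto ok      = sharedServiceRead({Trust::Untrusted, "vehicle_speed"});
    auto refused = sharedServiceRead({Trust::Untrusted, "can_bus_write"});
    std::cout << "vehicle_speed: " << (ok ? *ok : "denied") << '\n';
    std::cout << "can_bus_write: " << (refused ? *refused : "denied") << '\n';
    return 0;
}
```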

The clearly defined security policy of the firewall works both ways, though. It does provide security, but it also provides an opportunity for the big Android developer community to come up with innovative automotive apps. While it would be wise to open this door only to a certain extent, the security policy is an element of making the architecture future-proof, as the rules of security can be redefined if needed.

Holistic HMI based on hypervisor technology

Handling heterogeneous software worlds safely and securely is one main factor for higher integration in the cockpit. The driver is another. After all, it is the person behind the wheel who has to handle the multitude of systems and information sources on top of controlling the vehicle. There is no doubt: being "always on" provides more valuable information, better driver support, e.g. by dynamic navigation, plus additional services to increase the efficiency of driving. To make this new level of connectivity usable, the human machine interface (HMI) needs to adapt to the growing number of functions and services. One key modification is to look at the HMI as one consistent and comprehensive system instead of a set of individual displays.

Within such a holistic HMI, any information or message to the driver can be shown on any of the two or three displays in the cockpit: the cluster instrument, the head-up display (HUD) and the central information display (CID). The difference to current HMI strategies lies in the flexibility. If a driver is navigating through dense urban traffic, the information he needs is strongly filtered to bring the load down to an absolute minimum, which is displayed in the HUD, where the driver can perceive it while his or her eyes stay on the road. If the same driver is going down a motorway with little traffic, there is no reason why the number of an incoming call should not be depicted in the HUD or cluster instrument. However, if the driver starts a maneuver such as a lane change and the lane change assist function detects a vehicle coming up from behind at high speed, the available display space plus other communication channels need to be freed immediately for this highly safety-relevant information. This requires a holistic control architecture.

This kind of flexible HMI makes the best possible use of the available communication channels that connect a driver with the vehicle. Depending on the traffic situation, the HMI is constantly adapted to the driver's workload and condition. This cannot be done with a static HMI that allocates one display to one function. The list of potential problems with a static HMI architecture includes such details as the question of where a particular image should be rendered if it is to be shown on different displays in different situations. Another major benefit of a holistic HMI is a consistent look and feel across all functions and systems. It is much easier for a driver to control a multitude of functions if the principles of making an entry and confirming a choice are always the same.
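
A minimal sketch of such holistic arbitration, assuming a simple priority scheme in which safety-relevant messages preempt comfort content on any display (the display names, priority levels and messages are illustrative, not an actual product interface):

```cpp
#include <iostream>
#include <map>
#include <string>

enum class Display { HUD, Cluster, CID };
enum class Priority { Comfort = 0, Navigation = 1, Warning = 2, Safety = 3 };

struct Message {
    std::string text;
    Priority priority;
};

const char* displayName(Display d) {
    switch (d) {
        case Display::HUD:     return "HUD";
        case Display::Cluster: return "Cluster";
        case Display::CID:     return "CID";
    }
    return "?";
}

// One arbiter for all displays instead of one static function per display.
class HmiArbiter {
public:
    // Show the message only if nothing of higher priority occupies the display.
    void request(Display d, const Message& m) {
        auto it = current_.find(d);
        if (it == current_.end() || m.priority >= it->second.priority) {
            current_[d] = m;
        }
    }
    void print() const {
        for (const auto& [d, m] : current_) {
            std::cout << displayName(d) << ": " << m.text << '\n';
        }
    }
private:
    std::map<Display, Message> current_;
};

int main() {
    HmiArbiter hmi;
    // Relaxed motorway driving: an incoming call may appear in the HUD.
    hmi.request(Display::HUD, {"Incoming call: +49 ...", Priority::Comfort});
    // Lane change assist detects a fast vehicle from behind: the warning
    // immediately displaces the comfort content on the same display.
    hmi.request(Display::HUD, {"Vehicle approaching fast from behind!", Priority::Safety});
    hmi.print();
    return 0;
}
```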

Outlook

With the prospect of automated driving there is yet another rationale for a holistic HMI based on higher domain integration. It makes a big difference whether a driver immediately controls all vehicle functions or whether that driver is responsible for monitoring an ADAS function which controls longitudinal and lateral dynamics. The role change between these two types of workload also changes the scope of information a driver needs for the instantaneous task. This evolution of driving poses new challenges for the HMI. Assuming that the vehicle is capable of partially to highly automated driving, for instance, the HMI should support the driver in two phases in particular:

• The first is the transition between actively driving and merely monitoring during a phase of automated driving. It will be highly beneficial if a driver intuitively grasps what he or she is expected to do when a phase of automated driving comes to an end and the driver needs to take over immediate control of the vehicle again.
• The second is the phase of automated driving itself. While the vehicle navigates automatically, the available displays could be used to show any relevant content, as long as the holistic system architecture ensures that entertainment or office content is overridden immediately if the driver's attention is required (a state-machine sketch of this behavior follows below).
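
A hypothetical state-machine sketch of the takeover behavior described above, with modes, timings and console output that are purely illustrative:

```cpp
#include <iostream>

// Hypothetical HMI modes during partially/highly automated driving.
enum class HmiMode { ManualDriving, AutomatedDriving, TakeoverRequest };

class AutomationHmi {
public:
    void automationEngaged() { mode_ = HmiMode::AutomatedDriving; }

    // The vehicle announces that automated driving will end in `seconds`.
    void announceTakeover(double seconds) {
        if (mode_ != HmiMode::AutomatedDriving) return;  // nothing to hand over
        mode_ = HmiMode::TakeoverRequest;
        // Entertainment/office content is overridden immediately;
        // the remaining time is shown so the driver grasps what to do.
        std::cout << "Take over in " << seconds << " s - hands on the wheel\n";
    }

    // Driver confirms control, e.g. via steering input and hands-on detection.
    void driverTookOver() { mode_ = HmiMode::ManualDriving; }

private:
    HmiMode mode_ = HmiMode::ManualDriving;
};

int main() {
    AutomationHmi hmi;
    hmi.automationEngaged();     // displays may now show office/entertainment
    hmi.announceTakeover(10.0);  // transition phase: content is overridden
    hmi.driverTookOver();        // back to the manual-driving HMI layout
    return 0;
}
```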

Higher integration in the vehicle interior domain can help to avoid costly ECU proliferation. At the same time, an interior integration "box" with shared hardware opens up highly beneficial possibilities for changing from a static HMI to a future-proof holistic HMI which offers optimum driver support in all driving situations.

Every component of the HMI in commercial vehicles is changing: the environment we drive in, the vehicles we drive, the technology we build on, the functionality we use and also the driver's expectations. The driver will still be in the loop for the next decades, but human information processing capacity is limited. The capability of the driver to handle his tasks can be optimized by assistance systems and the appropriate human machine interfaces. User-centered design has to be proven by sophisticated simulations, as in [2], and by studies in real environments.

Literature

[1] Posch, T.: Separating software worlds in the dashboard, EE Times-India, 2014
[2] Kleer, M., et al.: Real-Time Humans in the Loop MBS Simulation in the Fraunhofer Robot-Based Driving Simulator, Archive of Mechanical Engineering, Vol. LXI, No. 2, 2014
[3] Biester, L.: Cooperative Automation in Automobiles, Dissertation, 2008
[4] Warhol, A.: Machines have less problems. I'd like to be a machine, wouldn't you?, Moderna Museet, Stockholm, Sweden, 1968
[5] Virginia Tech Transportation Institute: Insight into cell phone use and driving distraction, Blacksburg, Va., July 29, 2009

Author of the article

Dipl.-Inform. Arno C. Semmelroth,
Continental Automotive GmbH, Schwalbach am Taunus