Human Systems Integration


In December 2020, I graduated with a Bachelor of Science in Human Systems Integration (HSI), the first bachelor's degree of its kind to be awarded by Ohio State University. Though Ohio State offers an HSI track within its Master's in Industrial and Systems Engineering curriculum, no existing undergraduate curriculum focused on the subject.

What is Human Systems Integration?

HSI is traditionally understood as a multi-disciplinary field of study drawing elements from human factors engineering, system safety, cognitive psychology, and systems engineering. It has deep roots in the Department of Defense (DoD) acquisition process and is considered an integral part of a holistic approach to systems development. In this light, it is also understood as a process by which human capabilities and limitations are effectively integrated into system design. The Army, Navy, and Air Force recognize the importance of HSI and cite it as a key aspect of developing crucial capabilities.

For the purposes of my degree, HSI refers to the interdisciplinary field of study concerned with the behavior of humans and machines performing joint activity, including relationships such as the limitations of agents (physical, cognitive, or otherwise), the availability of system resources, resource dependencies between agents, and the delegation of authority and responsibility. These concerns go beyond traditional human factors to include elements of control theory, computer science, and resilience engineering, emphasizing the complexity of work and highlighting an interaction- and interdependence-focused approach to building advanced systems.

Traditional HSI and Human Factors Leave Something to Be Desired

The increasing popularity of HSI and related ideas is certainly a positive force supporting the development of more advanced technologies. That being said, the traditional human factors views associated with HSI lag far behind the state of the art, often treating the purpose of the endeavor as increasing automation at all costs in order to eliminate “human error”. This notion has been rebutted by Woods and Sarter, among several others. One such HSI tradition is that of the US Navy, whose Human Systems Integration program was initially founded to increase the capabilities of autonomous machines. Scholars have noted that this idea carries significant philosophical baggage, almost completely ignoring the complexities and interdependence inherent in multi-agent teamwork (Bradshaw et al., 2013; Bainbridge, 1983). There are numerous examples where this philosophy fell short of expectations, such as the 2017 collision of the USS John S. McCain. Clunky software controls left already understaffed and undertrained sailors unaware that their control configuration had been reversed: the steering mechanism had been set so that port steering was on the starboard side, and vice versa. This caused the ship to turn ninety degrees into a 30,000-ton oil tanker in the middle of one of the busiest waterways in the world.

The full implications of automating mission-critical technologies are often overlooked when approaches like function allocation or Sheridan and Verplank's (1978) levels of automation (LoA) are applied to designate tasks and allocate work (interestingly, Sheridan himself later criticized the use of LoA as a guiding framework for system design). While increased automation may be a desirable goal, it must be approached with great caution when designing for highly complex, risky, and uncertain work domains (Bainbridge, 1983; Vicente, 2003). Automation can lead to unexpected consequences and emergent interactions, especially in the presence of particularly challenging or unusual circumstances. Most automation and autonomous systems today are simply pre-programmed, leaving room for rule-based behavior to fall short of desired outcomes (Murphy & Woods, 2009).
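
To make that brittleness concrete, here is a toy, hypothetical sketch in Python (not drawn from any of the systems or papers cited here) of a pre-programmed rule table: it behaves sensibly within the conditions its designers anticipated and has no way to recognize when it is operating outside of them.

```python
# Toy illustration (hypothetical, not from any cited system): a pre-programmed
# rule table that covers the conditions its designers anticipated and nothing more.

def select_action(airspeed_kts: float, icing_detected: bool) -> str:
    """Pick a control action from a fixed, pre-programmed rule table."""
    if icing_detected and airspeed_kts < 180:
        return "increase_thrust"   # anticipated case: icing at low speed
    if airspeed_kts > 250:
        return "reduce_thrust"     # anticipated case: overspeed
    return "hold"                  # every other situation falls through here

# Inside the anticipated envelope, the rules do what the designers intended.
print(select_action(170, icing_detected=True))   # -> "increase_thrust"

# A combination no rule was written for (icing *and* overspeed) still produces
# an answer -- the first matching rule fires -- but nothing in the system flags
# that this situation was never actually designed for.
print(select_action(260, icing_detected=True))   # -> "reduce_thrust"
```

The point is not that any individual rule is wrong, but that a fixed rule set cannot tell the difference between a situation it was designed for and one it was not.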

Automation is by no means an inherently bad engineering choice; it often promises great gains in efficiency and increased autonomy for human agents. A perspective informed by cognitive systems engineering and resilience engineering aims to understand the complexities and pitfalls associated with automation in order to identify and mitigate risks, making the overall system better by pointing out its flaws from a novel perspective. I discuss this at length in Keller & Newton (2021), which focuses on Autonomous Flight Safety Systems (AFSS), an emerging technology in which software rules determine whether or not to terminate space launches for public safety.
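
As a rough illustration of what "software rules determining whether or not to terminate a launch" can look like, here is a deliberately simplified, hypothetical sketch of a single termination criterion; it is not the logic analyzed in that paper, only a stand-in for the idea of a pre-programmed safety rule.

```python
# Hypothetical sketch of a rule-based flight-termination check. Real AFSS logic
# is far richer; this only illustrates the idea of a pre-programmed safety rule.

def should_terminate(predicted_impact_km: float, safety_boundary_km: float) -> bool:
    """Terminate the flight if the predicted impact point violates the safety boundary."""
    return predicted_impact_km > safety_boundary_km

# Nominal flight: the predicted impact point stays inside the boundary.
print(should_terminate(predicted_impact_km=4.2, safety_boundary_km=10.0))   # False

# Off-nominal flight: the rule fires -- but only on the conditions its designers
# encoded, with no judgment about circumstances they did not foresee.
print(should_terminate(predicted_impact_km=12.7, safety_boundary_km=10.0))  # True
```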

Further Reading