
Achieving Trusted Autonomy for Aerospace Applications

Idea in Brief
  • Trusted autonomous systems must be repeatable and verifiable for the contexts in which they operate. Algorithms may not perform as intended once the context is broadened or changed.
  • The problem grows as autonomy levels increase, because models are updated regularly or adapt to their environment in real time.
  • To achieve trusted autonomy, technology leaders highlight the need for a strong safety culture, rigorous testing and increased transparency.
  • There are three capabilities that are fundamental to achieving trusted autonomy:
    • A virtual or hardware-in-the-loop testing environment,
    • A set of heuristics, scenarios, or rules to validate model decisions according to management's risk appetite, and
    • An automated system to validate model decisions within the virtual test environment.
  • Aligning your organisation to the Autonomy Readiness Level framework improves shared understanding of risk.

The Aerospace industry is entering a new period of transformation. Recent developments in AI and robotics are expanding the applications for autonomy within Aerospace. At the same time, governments are increasing their focus on regulating autonomous systems. Most recently, in October 2023, President Biden issued Executive Order 14110 to develop standards, tools and tests to help ensure that AI systems are safe, secure, and trustworthy.[1] To capture the opportunities for autonomous systems, technology leaders highlight the need for a strong safety culture, rigorous testing and increased transparency.[2]

Autonomous systems challenges for OEMs and Operators

Underpinning autonomous systems are the machine learning and artificial intelligence algorithms that are trained on vast amounts of data. The challenge for Aerospace is: how can safety-critical applications achieve assurance when AI/ML models are opaque and their training data sets are so large?

For the most safety-critical systems, the answer is straightforward: these systems undergo verification and validation in much the same way that lower-level automatic or autonomic systems are currently verified. However, the problem grows as autonomy levels increase, because models must be updated regularly or adapt to their environment in real time. According to TTTech Auto CEO Dirk Linzmeier, “the challenge is to ensure safety; ensure that even rare corner cases are handled in a safe manner.”[3]

Autonomous systems challenges for Regulators and Air Traffic Management

Air traffic managers and regulators must plan for new aircraft types and the complex interactions between varying levels of autonomy and human-in-the-loop or human-on-the-loop control strategies.

Furthermore, autonomy may increase traffic congestion, with UAS operating within and adjacent to controlled airspace. Larger volumes of UAS may, in turn, require air traffic management systems to develop autonomous systems to process the increased traffic. Such systems will have different requirements, such as traffic anomaly detection, pattern identification, and optimisation for conflict resolution; a minimal sketch of the first of these follows.
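As a hedged illustration, the sketch below flags UAS tracks whose cross-track deviation from a nominal corridor is a statistical outlier, using a robust median-based score. The track IDs, deviation values and threshold are invented for this example and do not represent any operational system.

```python
# Illustrative sketch only: flag UAS tracks whose cross-track deviation
# from a nominal corridor is anomalous, using a robust (median/MAD) score.
# All names, data and the threshold are assumptions for this example.
from statistics import median

def anomalous_tracks(deviations_m: dict[str, float], threshold: float = 3.5) -> list[str]:
    """Return track IDs whose deviation is an outlier by modified z-score."""
    values = list(deviations_m.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)  # median absolute deviation
    if mad == 0:
        return []
    # 0.6745 scales the MAD so the score is comparable to a standard z-score.
    return [tid for tid, d in deviations_m.items()
            if 0.6745 * abs(d - med) / mad > threshold]

# Hypothetical cross-track deviations (metres) for five UAS tracks.
tracks = {"UAS-01": 4.2, "UAS-02": 5.1, "UAS-03": 3.8, "UAS-04": 61.0, "UAS-05": 4.9}
print(anomalous_tracks(tracks))  # ['UAS-04']
```

A median-based score is used rather than a plain z-score because a single extreme track inflates the standard deviation enough to mask itself in small samples.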

“Trust is hard to earn and easy to lose.”

Andrea Kollmorgen, CEO of Simulytic

Developing an assurance framework for traffic management systems and regulators will involve new safety performance measures and testing for corner cases and “black swan” events. To test successfully at scale and at reasonable cost, traffic managers and regulators will need a prioritisation strategy to identify the most valuable or impactful test cases.
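One simple prioritisation strategy, sketched below, is to score each candidate scenario by likelihood-weighted severity blended with novelty, and run the highest-scoring cases first. The scenario fields, example values and weights are illustrative assumptions, not an established standard.

```python
# Illustrative sketch: rank candidate test scenarios by a risk-weighted score
# so the most impactful cases are executed first. Fields, values and weights
# are assumptions for this example.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    likelihood: float  # estimated probability of occurrence, 0..1
    severity: float    # consequence if mishandled, 0..1
    novelty: float     # distance from previously tested cases, 0..1

def score(s: Scenario, w_risk: float = 0.7, w_novelty: float = 0.3) -> float:
    """Blend expected risk (likelihood x severity) with novelty."""
    return w_risk * (s.likelihood * s.severity) + w_novelty * s.novelty

candidates = [
    Scenario("nominal cruise", likelihood=0.9, severity=0.1, novelty=0.05),
    Scenario("GPS loss in terminal area", likelihood=0.2, severity=0.9, novelty=0.6),
    Scenario("bird strike on approach", likelihood=0.05, severity=0.8, novelty=0.9),
]

for s in sorted(candidates, key=score, reverse=True):
    print(f"{score(s):.3f}  {s.name}")
```

In practice the weights would be calibrated against the organisation's risk appetite and the scores refreshed as test coverage grows.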

Trust, but verify.

Trusted autonomous systems must be repeatable and verifiable for all contexts in which they operate. Algorithms may perform well in limited contexts; however, they may not perform as intended once the context is broadened or changed. According to Shawn Kimmel of EY Parthenon, “traditional systems engineering techniques have been stretched to their limits when it comes to autonomous systems”.[4] This is because autonomous systems perform more complex and safety-critical tasks, which results in a far greater test space.
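To see why the test space grows so quickly, consider combining even a handful of independent scenario parameters. The parameters and value counts below are purely illustrative.

```python
# Illustrative back-of-envelope: the scenario space grows multiplicatively
# with each independent parameter. Parameter choices are assumptions.
from math import prod

scenario_parameters = {
    "weather": 6,           # e.g. clear, rain, fog, snow, wind shear, icing
    "traffic density": 5,
    "sensor degradation": 4,
    "link quality": 4,
    "airspace class": 7,
    "time of day": 3,
}

combinations = prod(scenario_parameters.values())
print(f"{combinations:,} distinct scenarios")  # 10,080 from just six parameters
```

Six modest parameters already yield over ten thousand distinct scenarios, before continuous variables such as wind speed or sensor noise are discretised at all.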

There are three key capabilities required to achieve Trusted Autonomy

There are three capabilities that are essential to achieving trusted autonomy (a minimal sketch of how they fit together follows the list):

  • A virtual or hardware-in-the-loop testing environment,
  • A set of heuristics, scenarios, or rules to validate model decisions according to management's risk appetite, and
  • An automated system to validate model decisions within the virtual test environment.
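
In the sketch below, replayed states stand in for the virtual test environment, a small rule set encodes management's risk appetite, and a harness automatically checks each model decision against the rules. Every interface, rule and threshold here is an assumption made for illustration, not a reference architecture.

```python
# Illustrative sketch of the three capabilities working together:
# (1) a virtual test environment, (2) heuristic rules encoding risk appetite,
# (3) an automated harness validating model decisions. All interfaces are
# assumptions for this example.
from typing import Callable

State = dict[str, float]
Decision = dict[str, float]
Rule = Callable[[State, Decision], bool]

# (2) Heuristic rules: each returns True if the decision is acceptable.
rules: dict[str, Rule] = {
    "min separation": lambda s, d: s["separation_m"] >= 150.0,
    "bank limit": lambda s, d: abs(d["bank_deg"]) <= 30.0,
    "climb within envelope": lambda s, d: -5.0 <= d["climb_mps"] <= 10.0,
}

def validate(state: State, decision: Decision) -> list[str]:
    """(3) Automated validation: return names of violated rules."""
    return [name for name, rule in rules.items() if not rule(state, decision)]

def toy_model(state: State) -> Decision:
    """Placeholder autonomy model under test."""
    return {"bank_deg": 45.0 if state["separation_m"] < 200 else 10.0,
            "climb_mps": 2.0}

# (1) A toy "virtual environment": replay recorded states through the model.
for state in [{"separation_m": 120.0}, {"separation_m": 500.0}]:
    decision = toy_model(state)
    violations = validate(state, decision)
    print(state, "->", violations or "PASS")
```

In practice the rule set would be derived from safety cases and regulatory requirements, and the environment would be a high-fidelity simulation or hardware-in-the-loop rig rather than replayed states.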

Align your organisation to an Autonomy Readiness Level framework to improve shared understanding of risk

A Technology Readiness Level (TRL) is an evidence-supported system for documenting the qualification, certification and flight-proven status of technology. To inform trusted autonomy readiness, in March 2023 NASA published the Space Trusted Autonomy Readiness (STAR) Levels:[5]

STAR Level | Assurance Component Description
9 | The autonomous system has been proven through successful mission operations for multiple programs.
8 | All necessary formal tests have been successfully passed, sufficient to qualify the autonomy for space use by the identified system in the anticipated operational conditions. Tests included stress testing to cover plausible off-nominal situations.
7 | The autonomy is tested for specific use case scenarios and may be wrapped with a run time assurance system that provides safety guarantees and backup control strategies.
6 | The autonomy code and interfaces have fully met qualification criteria. Testing has been conducted with both nominal and off-nominal inputs representative of expected conditions of use.
5 | The autonomy algorithms are tested to demonstrate proof of concept operations in real time with highly realistic inputs.
4 | The number of scenarios evaluated is relatively small but sufficient to characterise the algorithms’ reliability, safety and ethical use.
3 | Basic operation of the algorithms demonstrated.
2 | Performance measures and safety concerns are identified.
1 | Some risks or limitations are considered.
Source: NASA

These levels provide a common language and benchmark of readiness to inform risk assessments for organisations utilising autonomous systems.
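For organisations that want to track this benchmark in their own risk registers, the levels can be encoded directly, as in the hedged sketch below; the summary names paraphrase the NASA descriptions above, and the class design is our assumption.

```python
# Illustrative sketch: encode the NASA STAR levels for use in a risk register.
# The member names paraphrase the table above; the design is an assumption.
from enum import IntEnum

class STARLevel(IntEnum):
    RISKS_CONSIDERED = 1
    MEASURES_IDENTIFIED = 2
    BASIC_OPERATION = 3
    RELIABILITY_CHARACTERISED = 4
    REALTIME_PROOF_OF_CONCEPT = 5
    QUALIFICATION_MET = 6
    RUNTIME_ASSURED = 7
    FORMALLY_QUALIFIED = 8
    FLIGHT_PROVEN = 9

def meets_appetite(system_level: STARLevel, required: STARLevel) -> bool:
    """True if a system's assessed STAR level meets management's minimum."""
    return system_level >= required

print(meets_appetite(STARLevel.RUNTIME_ASSURED, STARLevel.FORMALLY_QUALIFIED))  # False
```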

Key Takeaways

  • Trust is key. Leading companies are building cultures of safety and risk management through transparent safety management systems.
  • Interoperability and virtual testing will become an imperative. Different systems need to interact effectively with one another and be tested together in virtual test environments. Digital twins are an important asset to enable testing.
  • Standards and verification systems such as the NASA STAR levels offer credibility as emerging technologies scale. Companies, air traffic managers and regulators that take a proactive approach to shaping and complying with standards will reduce risk.

How we can help

Sigma Strategy supports the development of regulatory and OEM test and evaluation tools within live, virtual and constructive (LVC) environments, including:

  • safety strategies,
  • safety performance benchmarking and indicators,
  • digital twins,
  • analysis of performance data to create test databases of operating cases and known risk events,
  • refining data for training of autonomous systems, and
  • creation of heuristics and rules for verification systems.

  1. US White House, ‘FACT SHEET: President Biden Issues Executive Order on Safe, Secure and Trustworthy Artificial Intelligence’, 30 October 2023 <https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/>.
  2. Alonso M, ‘Driving Trust: Paving the road for autonomous vehicles’, World Economic Forum, 6 January 2024 <https://www.weforum.org/agenda/2024/01/driving-trust-paving-the-road-for-autonomous-vehicles/>.
  3. Dirk Linzmeier quoted in Alonso M, ‘Driving Trust: Paving the road for autonomous vehicles’, World Economic Forum, 6 January 2024 <https://www.weforum.org/agenda/2024/01/driving-trust-paving-the-road-for-autonomous-vehicles/>.
  4. Kimmel S, ‘Improving trust in autonomous technology’, MIT Technology Review, February 2023 <https://www.technologyreview.com/2023/02/22/1066578/improving-trust-in-autonomous-technology/>.
  5. NASA, ‘Space Trusted Autonomy Readiness Levels’, March 2023 <https://ntrs.nasa.gov/citations/20220012680>.

Feature Image: An AI-generated impression in the style of Monet.