What you need to know about the new NIST Anti-Bias Proposal
Artificial Intelligence (AI) applications have the potential to revolutionize every aspect of our lives, including the air travel experience. Already, the airline industry relies on several AI applications to manage operations, customer communications, loyalty programs and retailing, and passenger identification.
AI is, by and large, the best way to manage and process vast volumes of data: it makes it easier to find the needle in the haystack and to apply precision targeting to thread that needle. When AI applications work well, they create a seamless flow, bringing all the pieces together and making the impersonal digital landscape feel personalized and tailor-made.
As someone who has tracked technology for decades in the airline space and beyond it, I’ve always fought against the trope of the rogue AI taking over the ship (HAL-oh), developing its own criteria for action, and doing harm. Simply put, AI is just a helpful tool—closer to introducing the first Singer sewing machine to garment making than to handing a computer the keys to the kingdom. Until a system achieves singularity (self-awareness and independent thought), we don’t have to worry about pulling it back from the brink of oblivion.
However, that does not mean that AI is necessarily harmless. The flaw in AI is in its maker, not in its makeup. We make AI what it is. We decide its construct. Because human beings are prone to bias, we risk introducing that bias to the AI systems as we create them.
Computers are inherently objective: they process information exactly as they are told to process it, and any error that exists is introduced by the code the programmer writes. With AI systems, we ask the computer to process abstract concepts—loyalty, risk, quality, appeal, enjoyment, goodness, effectiveness, value. These concepts are difficult enough for any one of us to define, and harder still for a group of people to agree on which qualities fit those definitions. We debate these abstract concepts every day; one might even say that the whole of human existence is an unending effort to reach uniform agreement on what they are.
But AI systems are asked to process these abstract concepts based on definitions created by their programmers, who, even with the best of intentions, may have blind spots about how the data teaches the AI those qualities. How we ask the AI to find similarities can reshape its results.
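To make that concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the two customers and both scoring functions are hypothetical, not drawn from any airline system or from the NIST report); the point is simply that two equally defensible definitions of "loyalty" rank the same customers in opposite order.

```python
# Hypothetical illustration: the same customers, two programmer-chosen
# definitions of "loyalty", two opposite answers. All data is invented.

customers = [
    {"name": "A", "trips_per_year": 40, "annual_spend": 4_000},   # frequent, low-fare flyer
    {"name": "B", "trips_per_year": 4,  "annual_spend": 12_000},  # rare, premium-cabin flyer
]

def loyalty_by_spend(c):
    # Definition 1: loyalty means revenue. Quietly favors high-income markets.
    return c["annual_spend"]

def loyalty_by_frequency(c):
    # Definition 2: loyalty means repeat behavior. Favors frequent economy flyers.
    return c["trips_per_year"]

for score in (loyalty_by_spend, loyalty_by_frequency):
    ranked = sorted(customers, key=score, reverse=True)
    print(score.__name__, "->", [c["name"] for c in ranked])

# loyalty_by_spend     -> ['B', 'A']
# loyalty_by_frequency -> ['A', 'B']
# The bias lives in the chosen definition, not in the machine that applies it.
```

A real loyalty model would be far more elaborate, but the pattern scales: whichever measurable qualities the builders choose as a proxy for the abstract concept become, in effect, the concept itself.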
For example, if I told a person (or a computer) who had never seen a rainbow that it is nothing but water droplets and light, I could hardly expect them to understand color, much less tone and hue. They would need to understand refraction, intensity, concentration and so much else besides. Isaac Newton already wrote Opticks, and I won’t presume to best him here. The point is that defining the nature and interpretation of color took hundreds of years of concerted human effort, and that effort is relevant to the AI discussion.
For airlines, which depend on appealing to a global marketplace and on keeping all their customers safe, comfortable and generally happy, any bias introduced into these systems has serious real-world repercussions: it can cause disruptions, drive away customers and cut off revenue streams.
A Proposal for Identifying and Managing Bias in Artificial Intelligence
This week, the National Institute of Standards and Technology (NIST) in the U.S. introduced a proposal for weeding out inherent bias in the creation of AI systems and opened it to public comment. Anyone at an airline responsible for selecting these systems or for applying them should look at it closely.
NIST has published A Proposal for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270) in an effort to address bias in AI. From the full NIST report:
“The International Organization for Standardization (ISO) defines bias in statistical terms: ‘the degree to which a reference value deviates from the truth’. This deviation from the truth can be either positive or negative, it can contribute to harmful or discriminatory outcomes or it can even be beneficial. From a societal perspective, bias is often connected to values and viewed through the dual lens of differential treatment or disparate impact, key legal terms related to direct and indirect discrimination, respectively.
“Not all types of bias are negative, and there are many ways to categorize or manage bias; this report focuses on biases present in AI systems that can lead to harmful societal outcomes. These harmful biases affect people’s lives in a variety of settings by causing disparate impact, and discriminatory or unjust outcomes. The presumption is that bias is present throughout AI systems; the challenge is identifying, measuring, and managing it. Current approaches tend to classify bias by type (i.e.: statistical, cognitive), or use case and industrial sector (i.e.: hiring, health care, etc.), and may not be able to provide the broad perspective required for effectively managing bias as the context-specific phenomenon it is. This document attempts to bridge that gap and proposes an approach for managing and reducing the impacts of harmful biases across contexts. The intention is to leverage key locations within stages of the AI lifecycle for optimally identifying and managing bias. As NIST develops a framework and standards in this area, the proposed approach is a starting point for community-based feedback and follow-on activities related to bias and its role in trustworthy AI.”
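To put the ISO’s statistical sense of bias, quoted above, in concrete terms, here is a small illustrative sketch with invented numbers: bias is simply the gap between what a system estimates, on average, and the reference value, and when that gap is larger for one group (here, one hypothetical route) than for another, you have the makings of the disparate impact the report describes.

```python
# Illustrative only: invented delay predictions vs. actual delays (minutes)
# for two hypothetical routes, showing bias in the ISO statistical sense:
# the degree to which an estimate deviates from the reference value.

predicted = {"route_1": [12, 15, 10, 14], "route_2": [30, 28, 35, 32]}
actual    = {"route_1": [11, 16, 10, 13], "route_2": [40, 38, 44, 41]}

for route in predicted:
    mean_pred = sum(predicted[route]) / len(predicted[route])
    mean_true = sum(actual[route]) / len(actual[route])
    bias = mean_pred - mean_true  # negative means systematic under-prediction
    print(f"{route}: bias = {bias:+.1f} min")

# route_1's predictions track the truth; route_2's are systematically low.
# A bias that falls unevenly across routes (or customer groups) is the kind
# of disparate impact the NIST proposal asks builders to look for.
```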
NIST hopes to open up a dialogue between the developer community and other stakeholders affected by AI systems on how to eliminate bias, both in how these systems are created and in how they are applied.
From this week’s NIST call for comments:
“Managing the risk of bias in AI is a critical part of developing trustworthy AI systems, but the path to achieving this remains unclear,” said NIST’s Reva Schwartz, one of the report’s authors. “We want to engage the community in developing voluntary, consensus-based standards for managing AI bias and reducing the risk of harmful outcomes that it can cause.”
“We want to bring together the community of AI developers of course, but we also want to involve psychologists, sociologists, legal experts and people from marginalized communities,” said NIST’s Elham Tabassi, a member of the National AI Research Resource Task Force. “We would like perspective from people whom AI affects, both from those who create AI systems and also those who are not directly involved in its creation.”
The NIST authors’ preparatory research involved a literature survey that included peer-reviewed journals, books and popular news media, as well as industry reports and presentations. It revealed that bias can creep into AI systems at all stages of their development, often in ways that differ depending on the purpose of the AI and the social context in which people use it.
“An AI tool is often developed for one purpose, but then it gets used in other very different contexts,” Schwartz said. “Many AI applications also have been insufficiently tested, or not tested at all in the context for which they are intended. All these factors can allow bias to go undetected.”
“We know that bias is prevalent throughout the AI lifecycle,” Schwartz said. “Not knowing where your model is biased, or presuming that there is no bias, would be dangerous. Determining methods for identifying and managing it is a vital next step.”
NIST is accepting comments on the document until Aug. 5, 2021, and the authors will use the public’s responses to help shape the agenda of several collaborative virtual events NIST will hold in coming months. This series of events is intended to engage the stakeholder community and allow them to provide feedback and recommendations for mitigating the risk of bias in AI.
The NIST report is not light reading, but it is enlightening, and it is critical reading for anyone who will create, apply or be affected by AI.
And—let’s face it—that’s all of us.