14 | DEFENSEACQUISITION | May-June 2021

A PROVING GROUND

by CAPT ANITA M. NAYLOR, USAF

From late 2019 into last summer, the Defense Advanced Research Projects Agency (DARPA) hosted events called AlphaDogfight Trials, where artificial intelligence (AI) pilots fought human pilots in a simulated dogfight. The best human pilot was a graduate of the Air Force Weapons School's F-16 Weapons Instructor Course with more than 2,000 hours in a fighter jet; as Air Force Magazine reported, he was a highly qualified adversary to the AI system. Yet, in the end, the AI won the simulated dogfight by a score of 5-0. The fighter pilot attributed his loss to the AI not following "training and thinking that is engrained in an Air Force pilot."

AI has many advantages for defense business processes; yet many people cling to the antiquated thinking that a human being can do business better. Even with some of the technologies now under development to aid contracting officers (COs), there is always the disclaimer that the technology will not eliminate the CO's authority to make the final decisions. But what if the disclaimer were not given? What if the CO could trust an AI's recommendation rather than saying, "I'm the CO, and my decision is final"?

This article first defines what AI is and how the different technologies known as "AI" can be classified. It then examines the technologies currently under development and the challenges of using AI to improve efficiency and effectiveness in business decisions.

Artificial Intelligence Versus Humans in the Business Battlefield


DEFINITIONS

The term AI can refer to a host of different technologies. According to the Summary of 2018 Department of Defense Artificial Intelligence Strategy, "AI refers to the ability of machines to perform tasks that normally require human intelligence—for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action—whether digitally or as the smart software behind autonomous physical systems." Examples include many of the actions within the contracting career field.

One type of AI is robotic process automation (RPA), which uses a "bot" (basically a computer program) to perform an easily defined task. The task is performed exactly as a human would have done it; but because a computer is doing the work, there is less chance of fatigue setting in from repetitious tasks, thereby reducing inefficiencies and accidents. Machine learning (ML) is much different from RPA because it is not programmed to perform a certain task; instead, it can program itself based on the data it is given.

An overview of AI and ML published in 2020 by the Joint Artificial Intelligence Center (JAIC) stated, "Machine Learning systems are different in that their 'knowledge' is not programmed by humans. Rather, their knowledge is learned from data: a machine learning algorithm runs on a training dataset and produces an AI model. To a large extent, machine learning systems program themselves." Humans set boundaries for the ML system and provide it data, but from that point on, the computer completes tasks on its own. For example, given the data in the Contractor Performance Assessment Reporting System and a set of constraints about what qualifies as a "good" and "bad" narrative, an ML system can provide recommendations on relevant past performance reviews.
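The past-performance example above can be sketched as a minimal bag-of-words classifier whose "knowledge" comes entirely from labeled examples rather than hand-written rules. The training narratives, labels, and word frequencies below are invented for illustration; a real system would learn from a large corpus of actual CPARS narratives.

```python
# Minimal sketch of learning "good"/"bad" narrative labels from data,
# using a Naive Bayes classifier with add-one smoothing (stdlib only).
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(labeled_narratives):
    """Count word frequencies per label; this is the knowledge learned from data."""
    word_counts = {"good": Counter(), "bad": Counter()}
    label_counts = Counter()
    for text, label in labeled_narratives:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the higher log posterior probability."""
    vocab = set(word_counts["good"]) | set(word_counts["bad"])
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in ("good", "bad"):
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in tokenize(text):
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented training data standing in for real CPARS narratives.
training = [
    ("contractor delivered on schedule with excellent quality", "good"),
    ("responsive management and satisfactory performance", "good"),
    ("late deliveries and poor quality control", "bad"),
    ("contractor was unable to meet the delivery schedule", "bad"),
]
word_counts, label_counts = train(training)
print(classify("excellent quality and on schedule", word_counts, label_counts))  # prints "good"
```

The point of the sketch is the one the JAIC overview makes: nothing in `classify` encodes what "good" means; that comes entirely from the training data the humans provide.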

DARPA categorizes AI into three different groups: handcrafted knowledge, statistical (also known as machine) learning, and contextual reasoning. RPA falls in the handcrafted knowledge category, which includes technologies used to assist in well-defined repetitive tasks. ML is the next stage or category of AI that "applies statistical and probabilistic methods to large data sets to create generalized representations that can be applied to future samples." The most advanced version of ML is deep learning, or neural networks. The final category of AI technology classified by DARPA is contextual reasoning, where computers are no longer bound by predefined tasks or databases but rather are able to reason and apply context to their decisions.

These definitions show just how far the contracting community has to go with AI development. It is still difficult to get commanders and directors to invest money and resources in RPA development, the most basic of all AI technologies. These definitions matter because, for those without a relevant background or interest, the term AI can seem overwhelming. These three categories help to facilitate understanding of the term and its related technologies. AI doesn't have to mean "the Matrix." It can be as easy as getting a computer to fetch screenshots for you.

As you read the rest of this article, envision the technology used in the competition against the CO as a very advanced ML system. It will still be an ML program requiring large amounts of data, but will be able to make nuanced recommendations based on that data.

A CHALLENGE IN BUSINESS DECISIONS

Although an interest in new technology development exists, and some projects are being developed, top-level support for its rapid development is absent, especially for using AI in business processes. One reason is the ingrained belief that a human can make a better business decision and that any AI development will lead to the eventual phasing out of the CO. But Michael Wooten, former administrator of the Office of Federal Procurement Policy, told the Federal News Network, "We don't want to replace workers. What we want to do is augment workers and relieve them of the burden of these step-by-step, tedious types of jobs. … This is not the big evil plan of Dr. Wooten to move workers off to the side." He said he thought of the leverage available from using AI to do "those mundane processes faster than we can, cheaper than we can and very regularly."

We need an intelligent effort to obtain support from federal employees before we will see mainstream AI adoption for business decisions; otherwise, each new implementation will fail because employees will not understand what it can and cannot do. Once employees start to understand the value of AI and stop fearing it, they will begin to ask their leadership for these tools. And, as in the case of the DARPA dogfight between AI and the instructor pilot, we can showcase the benefits within contracting by doing a "dogfight" between an AI system and a CO.

A "DOGFIGHT" TO SHOWCASE BENEFITS

The overarching idea for the dogfight is a comparative assessment of an AI system and a CO performing the same task. In the world of business decisions, few things are so clear-cut that an outcome can be easily related to a single business decision. Yet, there is one business decision made by COs prior to awarding any contract that definitely can be linked to poor performance post-award—the contractor responsibility determination (CRD). The CRD is explained in Federal Acquisition Regulation (FAR) 9.1 as a method for ensuring that a contractor has the production and managerial capabilities to perform the contract and a history of good past performance and ethics. The CRD is one of the ripest actions to test in an AI dogfight because of three main factors: (1) it has set standards that must be met; (2) it is a discretionary process; and (3) it happens for every purchase.

The hypothesis is that an AI system can more accurately and more quickly determine that a bidder is a responsible contractor. However, the AI is unable to replicate the CO’s discretion that is so highly valued in the CRD process. The best outcome of the dogfight would be a showcase of the capabilities of an AI system as well as the CO’s discretionary abilities.

A competition between a CO and AI on completing a CRD starts with both sides gathering information on a contractor and making a CRD. The AI system would gather this information based on programming, and the CO would use traditional methods.

For each correct determination by the AI or CO, that competitor would receive a “kill.” The opposite of a “kill” would occur when a competitor is “shadowed.” Normally “shadowed” means that radar was unable to detect the presence of an aircraft. But in the case of this proposed competition, it would mean that either the CO or AI made an incorrect determination. Points would be awarded based only on the accuracy and speed of the determination.
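The "kill"/"shadowed" scoring described above can be sketched as a simple scoring function. The point values, the speed-bonus formula, and the example timings are all invented for illustration; the article does not specify any actual scoring rules.

```python
# Hypothetical scoring for one round of the proposed CRD "dogfight":
# a correct determination earns a "kill" plus a speed bonus; an incorrect
# determination is "shadowed" and earns nothing.

def score_round(determination, ground_truth, seconds_taken,
                kill_points=100, speed_bonus_cap=50):
    """Return (outcome, points) for one contractor-responsibility call."""
    if determination == ground_truth:
        # Faster correct calls earn a larger bonus: lose one bonus point
        # per minute elapsed (an invented formula, for illustration only).
        bonus = max(0, speed_bonus_cap - int(seconds_taken // 60))
        return "kill", kill_points + bonus
    return "shadowed", 0

# Example: both competitors call the contractor responsible correctly,
# but the AI answers in 5 minutes and the CO in 2 hours.
print(score_round("responsible", "responsible", 5 * 60))    # ('kill', 145)
print(score_round("responsible", "responsible", 120 * 60))  # ('kill', 100)
```

A weighting like this captures the article's premise that points are awarded only for accuracy and speed, not for how the determination was reached.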


The CRD standards provided in the FAR allow for the AI and CO to have a set of rules that both must follow during the competition. A contractor must meet seven standards to be determined as responsible prior to being awarded a contract—in addition to other rules set forth on documentation and follow-on actions within FAR 9.1, “Responsibility of Contractors.” This guidance in the FAR is similar to the operational limits on the AI system in the simulated air-to-air dogfight competition. This guidance helps to ensure that the AI and CO are looking for the same things while doing research on a prospective contractor.

Another part of the guidance is FAR 9.1's instruction that a CO can find a contractor responsible only if there is evidence to support the finding, and that a lack of negative evidence does not constitute sufficient support for a determination of responsibility. This guidance tightens the guidelines even more for the AI system. If the task were only to look for negative information, the AI could be programmed to execute a search of negative words such as "bad," "poor," "unable," and other words and phrases with negative connotations. Then, in the absence of any negative words, the AI could quickly make a determination of responsibility. However, with the added guidance, the AI must have affirmative evidence that the contractor meets the seven standards, making the task more complicated for the AI system.
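The naive negative-word search described above can be sketched in a few lines. The word list and narratives are illustrative, not drawn from any real screening tool, and the sketch also shows the flaw the FAR guidance guards against: an empty record contains no negative words, yet it contains no evidence of responsibility either.

```python
# Sketch of a naive negative-keyword screen: flag a narrative only when
# a negative term appears. Word list and inputs are invented examples.

NEGATIVE_TERMS = {"bad", "poor", "unable", "late", "deficient", "terminated"}

def has_negative_evidence(narrative):
    """Return True if any negative term appears in the narrative."""
    words = set(narrative.lower().split())
    return bool(words & NEGATIVE_TERMS)

print(has_negative_evidence("contractor was late and unable to perform"))  # True
print(has_negative_evidence(""))  # False: no negatives, but no evidence of anything
```

Under FAR 9.1's evidence requirement, passing this screen is not enough; the system would also need affirmative findings against each of the seven responsibility standards.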

Even with the standards and rules set forth in the FAR, there is still plenty of discretion in the CRD process that helps to showcase any potential superior benefits a human brings to decision making over AI—such as intuition. A CO's discretion is so highly valued that very few protests related to a CRD are ever sustained, and those that are sustained must show that the determination was obviously biased and not based on the information gathered.

This discretion is how a CO can let intuition factor into their business decisions. An AI does not have the same ability to exercise immediate discretion because it would require new programming or training in order to employ outside factors. AI does not utilize intuition. A dogfight between AI and CO may show that the discretion the CO is able to exercise provides a more reliable and quicker outcome than is available from an AI system.

Because a CRD must take place for all federal contracts, it is an ideal “battleground” for an AI versus CO competition because any benefit from AI can be seen throughout every government purchase. If AI is shown to provide more accurate recommendations on a contractor’s responsibility than a CO, it could change contracting dramatically. Currently, very few contractors are found to be not responsible, which may be due to the CO’s inability to check every available data source in the limited time available to conduct a CRD. However, if more data can be analyzed and a recommendation provided to the CO, the CO may feel more comfortable exercising the right to find a contractor not responsible.

CONCLUSION AND RECOMMENDATIONS

AI is a part of the future of contracting. Nand Mulchandani, acting director of the JAIC, told Breaking Defense that "The organization that's going to be victorious in the future is going to be the organization [with] the maximum agility to bring out new capabilities and learn very, very quickly how to adapt." This means not only new capabilities in terms of traditional platforms, but also in business systems and processes. AI allows COs to be freed from the low-value work of gathering and trying to summarize volumes of information. With AI in place, COs can focus their resources on the critical thinking needed to make the business decision.

A dogfight between CO and AI will let COs see the value that AI can bring to their lives by helping to eliminate low-value work and increase the accuracy of their decisions. Many additional AI dogfight areas are possible within the defense acquisition process—from small business determinations to advanced proposal evaluations—but the CRD seems the ripest first choice.

NAYLOR is a contract manager at the Air Combat Command’s Acquisition Management and Integration Center at Langley Air Force Base in Hampton, Virginia. Previously, she was stationed at the Naval Postgraduate School.

The author can be contacted at [email protected].