Chinese scientists solve a ‘big tough challenge’ for the US Air Force

The US began testing AI in air combat before China did. While Chinese drones were still fighting each other in the sky in real time, US test pilots had already taken their dogfighting AI into the air.

Current AI technologies, such as deep reinforcement learning and large language models, function as a “black box”: tasks go in one end and results come out the other, with no insight for humans into the system’s inner workings.

However, aerial warfare is a matter of life and death. Pilots of the near future will have to collaborate closely with AI, at times even entrusting their lives to these sophisticated machines. In addition to undermining pilots’ confidence in the machines, the “black box” problem prevents meaningful communication between human and machine.

The new AI combat system, created by a team led by Zhang Dong, an associate professor at Northwestern Polytechnical University’s school of aeronautics, can use words, data and even charts to explain each instruction it sends to the flight controller.

Moreover, the AI can explain the tactical goal behind each directive, its importance in the current combat scenario, and the precise flight manoeuvres involved.
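To make that concrete, here is a minimal Python sketch of what such an explainable directive might look like as a data structure. The field names and example values are invented for illustration and are not taken from Zhang’s paper, which also describes output as data tables and charts.

```python
from dataclasses import dataclass

@dataclass
class ExplainedDirective:
    """Hypothetical record pairing a flight-control command with its rationale.

    Invented for illustration; not the structure used in Zhang's system.
    """
    command: str        # low-level instruction sent to the flight controller
    manoeuvre: str      # named manoeuvre the command belongs to
    tactical_goal: str  # what the AI is trying to achieve
    importance: float   # weight of the goal in the current scenario, 0 to 1

    def explain(self) -> str:
        return (f"Executing a {self.manoeuvre} ({self.command}) to "
                f"{self.tactical_goal}; importance {self.importance:.0%} "
                f"in the current engagement.")

directive = ExplainedDirective(
    command="pitch up 20 degrees, throttle 95%",
    manoeuvre="combat turn",
    tactical_goal="trade speed for position behind the enemy aircraft",
    importance=0.8,
)
print(directive.explain())
```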

Zhang’s group discovered that this technology creates a new avenue for communication between AI and human pilots.

Zhang’s team found that after only about 20,000 rounds of combat training, this type of AI, which can converse with people “from the heart”, achieved a nearly 100 per cent win rate. By comparison, the traditional “black box” AI attained only a 90 per cent win rate after 50,000 rounds and struggled to improve further.

In a peer-reviewed paper that was published on April 12 in the Chinese academic journal Acta Aeronautica et Astronautica Sinica, Zhang’s team stated that although they have only used the technology on ground simulators thus far, they plan to extend its use to more realistic air combat environments in the future.

Pilots in the US have previously expressed concern about the “black box” issue.

Colonel Dan Javorsek, a program manager at DARPA’s Strategic Technology Office, said in a 2021 interview with National Defense magazine: “The big tough challenge that I’m trying to address in my efforts here at DARPA is how to build and maintain the custody of trust in these systems that are traditionally thought of as black boxes that are unexplainable.”

To help pilots get over their fear of the “black box”, DARPA has adopted two tactics. One lets the AI initially handle simpler, lower-level tasks, such as automatically choosing the best weapon based on the attributes of a locked target, so that the pilot can fire with a single button push.
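As a toy illustration of that first tactic, automatic weapon selection might reduce to a simple rule over the locked target’s attributes. The logic, thresholds and weapon names below are invented; the article does not describe DARPA’s actual selection criteria.

```python
def select_weapon(target_range_km: float, heat_signature: float) -> str:
    """Toy weapon-selection rule driven by a locked target's attributes.

    Purely illustrative; not DARPA's real decision logic.
    """
    if target_range_km > 15:
        return "radar-guided missile"      # beyond visual range
    if heat_signature > 0.7:
        return "infrared-guided missile"   # strong heat source at short range
    return "gun"                           # close-in engagement

# With a rule like this running underneath, the pilot's job reduces to
# a single button push once the target is locked.
print(select_weapon(target_range_km=20.0, heat_signature=0.3))
```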

The other tactic involves senior officers physically boarding AI-piloted fighter aircraft to demonstrate their confidence and resolve.

“Not having it puts your security at risk. We absolutely must have it at this point,” US Air Force Secretary Frank Kendall told AP.

However, Zhang’s team’s paper says the Chinese military imposes stringent safety and reliability evaluations on AI, demanding that the technology be incorporated into fighter jets only after the mystery of the “black box” has been solved.

When applied to combat scenarios, deep reinforcement learning models frequently produce decisions that are more effective than human ones, but their decision-making process is difficult for humans to understand or to square with prior experience.
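For context, a deep reinforcement learning policy is essentially a neural network that maps the current state to an action, with no human-readable rationale attached. Below is a minimal sketch, assuming PyTorch and invented state and action sizes; it is a generic illustration of the technique, not the model from Zhang’s paper.

```python
import torch
import torch.nn as nn

# A deep RL policy at its core: a network mapping a combat state vector
# (positions, velocities, headings) to a manoeuvre choice. The sizes
# below are invented for illustration.
state_dim, n_manoeuvres = 12, 8

policy = nn.Sequential(
    nn.Linear(state_dim, 64),
    nn.ReLU(),
    nn.Linear(64, n_manoeuvres),
)

state = torch.randn(state_dim)          # stand-in for sensor readings
action = policy(state).argmax().item()  # a manoeuvre index, with no rationale
print(f"Chosen manoeuvre index: {action}")
```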

“It raises a trust issue with AI’s decisions,” Zhang and colleagues wrote.

“The key to using AI technology in air combat engineering is deciphering the ‘black box’ model, which will allow humans to understand the strategic decision-making process, grasp the drone’s manoeuvre intentions, and rely on its manoeuvre decisions. This is also the main goal of our research and development,” they said.

In their study, Zhang’s team provided numerous examples to demonstrate the power of this AI. In one round that ended in defeat, for example, the AI began by climbing and performing a cobra manoeuvre, then planned to engage the enemy aircraft with a series of combat turns, aileron rolls and loops, ending with evasive manoeuvres such as diving and levelling out.

An experienced pilot, however, could quickly spot the flaw in this combination of extreme manoeuvres: the AI’s repeated climbs, combat turns, aileron rolls and dives bled the drone’s speed during the battle, and it ultimately failed to shake off its adversary.

According to the paper, the AI was given the following human instruction: “This air battle loss is the result of consecutive radical manoeuvres that reduced speed; such decisions must be avoided in the future.”
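One plausible way such feedback could enter training, sketched here as a guess rather than the paper’s actual method, is as a reward penalty on consecutive energy-bleeding manoeuvres:

```python
# Hypothetical reward shaping inspired by the quoted instruction; the
# paper does not specify how such feedback enters training.
RADICAL = {"cobra", "combat_turn", "aileron_roll", "dive"}

def shaped_reward(base_reward: float, manoeuvre_history: list,
                  penalty: float = 0.5) -> float:
    """Penalise back-to-back radical manoeuvres that bleed airspeed."""
    last_two = manoeuvre_history[-2:]
    if len(last_two) == 2 and all(m in RADICAL for m in last_two):
        return base_reward - penalty
    return base_reward

print(shaped_reward(1.0, ["cobra", "combat_turn"]))   # 0.5: penalised
print(shaped_reward(1.0, ["level_flight", "loop"]))   # 1.0: unchanged
```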

A human pilot would typically employ tactics such as side-winding attacks to find effective positions from which to destroy enemy aircraft. In a subsequent round, the AI used large manoeuvres to bait the enemy, entered the side-winding phase early, and flew level in the final stage to mislead the enemy, before delivering a critical winning strike with sudden large manoeuvres.

After analysing the AI’s stated intentions, the researchers discovered a subtle manoeuvre that proved crucial during the impasse.

According to Zhang’s team, the AI “adopted a levelling out and circling tactic, preserving its speed and altitude while luring the enemy into executing radical direction changes, depleting their residual kinetic energy and paving the way for subsequent loop manoeuvres to deliver a counter-attack.”

US sanctions, however, do not appear to have significantly affected exchanges between Zhang’s team and their international counterparts. The team drew on cutting-edge algorithms that American scientists presented at international conferences, and disclosed their own novel frameworks and algorithms in the paper.

According to some military analysts, the Chinese military is more motivated than their US counterparts to create a guanxi, or link, between AI and human combatants.

In China’s J-20 stealth fighter, for example, one pilot is dedicated to communicating with AI-controlled unmanned wingmen, a feature the US F-22 and F-35 fighters do not currently have.

However, a Beijing-based physicist, who asked not to be named because of the sensitivity of the issue, said the new technology could make it harder to distinguish between humans and machines.

“Pandora’s box could open,” he commented.
