Does China Really Want a Two-Seater Version of Its J-20 Stealth Fighter?
Key Point: China no doubt wants AI to assist its human pilots. However, there are some things only a real human can do. An extra human in the warplane could help the fighter do even more.
Is there a chance that a two-seat fifth-generation stealth fighter might bring additional advantages to multi-role air combat? At the moment, the most honest answer is as ambiguous as it is true: maybe.
The question is taking on new relevance in light of recent reports that the Chinese PLA Air Force is engineering a two-seat variant of its J-20 stealth fighter.
The possibility raises an interesting two-part question.
First, it is well known that, despite rapid advances in AI and autonomy, human cognition and decision-making amid fast-evolving combat circumstances still offer a unique, indispensable set of attributes that mathematically oriented computer algorithms simply cannot replicate.
A second set of eyes and a second human decision-maker could bring clear added value: the human brain can quickly adapt to previously unknown variables, analyse strategic, conceptual and tactical dynamics in ways beyond the current reach of computers, and thereby ease the burden placed on a single pilot.
Computers can aggregate vast pools of data, quickly sift out the items of relevance and perform rapid, integrated analysis. These continuing technical advances bring unprecedented advantages, yet not without limitations. A second human brain could add more subjective forms of analysis and free the pilot to devote cognitive energy to other high-priority tasks.
The prevailing consensus is not that computers necessarily exceed humans but that they offer unprecedented, yet different, attributes. The optimal approach, therefore, is to “team” humans and computers through a man-machine interface, which brings previously unimagined advantages to combat. Manned-unmanned teaming yields capabilities that far exceed what either humans or machines can do alone.
All this being said, humans also introduce the possibility of “human error,” as computers are far less likely to miss critical procedural or analytical details. Yet even the most advanced algorithms are not perfect: they can be confused, or at times deliberately “spoofed,” by unknown variables or by information absent from their compiled database. Perhaps different kinds of sensor data could be compiled and analysed for a second crew member, who could support the pilot with additional decision-making. That would make particular sense given the expectation that algorithm-enabled sensors will increasingly gather and analyse vast amounts of information and perform ever more procedural functions.
Adversaries are already known to take specific measures to confuse or disrupt AI-empowered analysis by inserting unknown variables or obscurants into what sensors can discern. For example, an enemy might place a large piece of differently shaped wood or another improvised structure on top of an armoured vehicle so that AI-driven sensors struggle to identify the platform. Possibilities such as this are one reason computer algorithms have not yet reached a level of “reliability” beyond the capabilities of human perception.
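To make that spoofing risk concrete, consider a minimal, purely illustrative sketch in Python, not any fielded targeting system. Every name and number here is invented for illustration: a toy classifier matches an observed silhouette against a small signature database, and a deliberately chosen obscurant shifts the observed features toward the wrong signature, producing a confident misidentification that a human observer would likely catch.

```python
import numpy as np

# Hypothetical signature database: one feature vector per known platform
# silhouette. All values are invented for illustration; a real automatic
# target recognition system would use far richer sensor features.
SIGNATURES = {
    "tank":      np.array([0.9, 0.1, 0.8, 0.2]),
    "truck":     np.array([0.3, 0.7, 0.4, 0.6]),
    "artillery": np.array([0.6, 0.5, 0.9, 0.1]),
}

def classify(observed: np.ndarray, threshold: float = 0.95):
    """Match an observed feature vector against the signature database
    by cosine similarity; report 'unknown' below the confidence threshold."""
    best_label, best_score = "unknown", -1.0
    for label, signature in SIGNATURES.items():
        score = float(observed @ signature /
                      (np.linalg.norm(observed) * np.linalg.norm(signature)))
        if score > best_score:
            best_label, best_score = label, score
    if best_score < threshold:
        best_label = "unknown"
    return best_label, round(best_score, 3)

# A clean observation of a tank matches its stored signature confidently.
clean = np.array([0.88, 0.12, 0.79, 0.22])
print(classify(clean))      # -> ('tank', 1.0)

# An obscurant placed on the hull deliberately shifts the observed
# silhouette features toward the 'truck' signature: the classifier now
# confidently mislabels the tank, though a human would still recognise it.
obscured = clean + np.array([-0.5, 0.6, -0.4, 0.5])
print(classify(obscured))   # -> ('truck', 0.996)
```

The structural weakness this toy example illustrates is the article's point: a matching algorithm is only as reliable as the signature library behind it, and covering that gap is exactly the kind of judgement a second crew member could supply.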