There are real limits on the amount of AI acceptable to navy commanders.

With rising security challenges in the global commons, there is growing interest in
the subject of “intelligent” weapons systems. This is especially so in the
maritime realm, where recent studies have shown that precision-guided weaponry
and networked systems are likely to play an increasingly important role. Even
while accepting autonomous systems as the future of maritime warfare, however,
many find the subject of “intelligent weapon systems” to be deeply contentious.

A good point of departure for the discussion on autonomous combat systems is a
recent report in the Chinese media about the development of a family of cruise
missiles with artificial intelligence (AI) capabilities. In August this
year, a Chinese daily reported that China’s aerospace industry was developing
tactical missiles with inbuilt intelligence that would help them seek out targets in
combat. The “plug and play” approach, a Chinese aerospace executive
pointed out, could potentially enable China’s military commanders to launch
missiles tailor-made for specific combat conditions.

Oddly enough, no clarification was offered as to what “tailor-made cruise missiles
with high levels of artificial intelligence and automation” really meant. Apart
from reiterating China’s global leadership status in the field of artificial
intelligence, the Chinese source did not provide any insight into the specific
nature of the autonomous capability being developed.

The issue for many naval commanders is the dichotomy between the theoretical
definition of artificial intelligence and its popular interpretation. Technically, AI is
any onboard intelligence that enables machines in combat to execute routine
tasks, freeing humans to focus on more demanding and complex missions.
Modern-day combat requires war-fighters to operate with active assistance
from sensors and systems. In theory, AI provides the technology to augment human
analysis and decision-making by capturing knowledge that can be re-applied in
critical situations. It purports to change the human role from “in-the-loop”
controller to “on-the-loop” thinker who can focus on a more reflective
assessment of problems and strategies, guiding rather than being buried in
execution detail.

In practice, however, artificial intelligence is a term used for a combat system
that has the ability to take targeting decisions. This is more a question of “who
to target,” as opposed to “how to target,” a task that guided missiles have in
any case long performed with some precision. It’s worth emphasizing that
maritime forces remain skeptical of autonomous weapon systems with independent
targeting capability. In the nautical realm, the launch of a missile at an enemy
platform is an act of war. The decision to execute a missile launch is the
exclusive preserve of the command team (led by the ship’s
captain), which must independently assess the threat and act in
pursuit of war objectives.

Despite several advances that allow more precise targeting of platforms, the
logic of maritime operations hasn’t fundamentally changed. As a result, naval
missiles haven’t been invested with any serious intelligence to make command
decisions to target enemy units. While their ability to strike targets has been
radically enhanced — through the use of superior onboard gyros, computing
systems, and track radars — the basic mode of operation of cruise missiles
remains the same.

To be sure, artificial intelligence is considered indispensable in the development
of new-age naval weapons, in particular hypersonic missiles. After China’s
recent high-speed (over Mach 10), “extreme maneuvers” hypersonic tests, it is
amply clear that future combat missions will require a human-machine interface
on an unprecedented scale. It is little surprise, then, that four other Asian
states — Japan, India, South Korea, and Taiwan — have been developing
supersonic and hypersonic systems. Each of them
has expressed an aspiration for a sophisticated maritime force, with long-range
sensors, armor protection, precision weapons, and networking technologies. Yet
none has been developing naval missile systems with artificial intelligence.

A useful illustration of the predicament that AI poses for the naval community is
the U.S. Navy’s Long Range Anti-Ship Missile (LRASM). Often portrayed by senior officers as a
single-shot remedy for America’s surface-combat deficit at sea, the LRASM is a
replacement for the Harpoon missile (albeit a more powerful version) and a
supposedly “intelligent” missile system. Guided first by ship-borne equipment
and then by satellite, the projectile is jam-resistant and capable of operating
without the Global Positioning System. Flying through a series of
way-points, evading static threats, land features, and commercial shipping, the
LRASM can detect threats independently and navigate around
them.

The nature of the LRASM’s “intelligence,” however, tells a story. The missile is
smart enough to avoid the engagement zones of enemy warships that are not on its
target list, skipping way-points that lie within their weapons-engagement
range. With an inbuilt
capability to dive to sea-skimming altitude in its approach to the target
vessel, the missile can strike at an independently calculated “mean point of
impact.”
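
To make that way-point logic concrete, here is a minimal sketch in Python of how a route-planner might drop way-points that fall inside a non-target ship’s weapons-engagement zone. It is purely illustrative: the class names, the fixed-radius engagement zone, and the numbers are assumptions for exposition, not a description of the LRASM’s actual software.

```python
import math
from dataclasses import dataclass

@dataclass
class Waypoint:
    lat: float  # degrees
    lon: float  # degrees

@dataclass
class ShipContact:
    lat: float       # degrees
    lon: float       # degrees
    wez_km: float    # weapons-engagement-zone radius, km (illustrative)
    is_target: bool  # set by human operators, never by the missile itself

def distance_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points (haversine formula)."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

def filter_waypoints(route: list[Waypoint], contacts: list[ShipContact]) -> list[Waypoint]:
    """Keep only way-points outside the engagement zone of every non-target
    ship; target designation itself arrives from the command team."""
    return [
        wp for wp in route
        if not any(
            not c.is_target
            and distance_km(wp.lat, wp.lon, c.lat, c.lon) <= c.wez_km
            for c in contacts
        )
    ]

# Example: a three-point route past one non-target warship with a 50 km zone.
route = [Waypoint(10.0, 60.0), Waypoint(10.5, 61.0), Waypoint(11.0, 62.0)]
contacts = [ShipContact(10.5, 61.1, wez_km=50.0, is_target=False)]
print(filter_waypoints(route, contacts))  # the middle way-point, inside the zone, is dropped
```

A real planner would re-route around the zone rather than simply drop points; the sketch only captures the division of labor described here, in which the machine decides “how” while “who to target” stays with the command team.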

Notwithstanding its considerable computing and processing capabilities, the LRASM does
not select its target in flight. Human operators feed that information into the
missile, providing it with a continuous stream of data. In crime-investigation
lingo, the missile is not the mastermind of the encounter, only the assassin.
This also demonstrates the limits of its artificial intelligence: the
missile makes its own decisions only after it receives critical targeting
information from the command team. Despite its coordinated-attack capabilities,
the LRASM cannot be termed a fully autonomous weapon.

Understandably, the debate surrounding artificial intelligence and
autonomous naval platforms is a contentious one. AI might have the potential to
radically transform naval operations, but many maritime practitioners are
uncomfortable with its use in combat, particularly
the development of lethal autonomous weapons systems (LAWS). The ethical
dilemma arises from LAWS’ ability to kill people, and from
policymakers’ reservations about inanimate systems that can take decisions to
terminate human lives.

It is instructive that while the U.S. Defense Advanced Research Projects Agency
(DARPA) has, in recent years, developed programs that envisage the use of
LAWS, these apply only to Collaborative Operations in Denied Environment
(CODE), where autonomous aerial vehicles may target enemy platforms only in
situations where signal-jamming makes communication with human commanders
impossible.

Here too, there is a debate about the humanitarian implications, because
international humanitarian law — which governs attacks on humans in times of
war — has no specific provisions for such autonomy. The 1949 Geneva Conventions
on humane conduct in war require any attack to satisfy three
criteria: military necessity; discrimination between
combatants and non-combatants; and proportionality between the value of the
military objective and the potential for collateral damage. Evidently, these
are subjective judgments that no current AI system seems able to fully satisfy.

In the absence of consensus around “artificially intelligent” weapons, autonomous
naval combat systems are yet to find ready acceptance in the military. Navy
officials aren’t against using AI technologies to accelerate command-and-control
processes and human decision-making on naval platforms, but it is
unlikely they will easily acquiesce to weapon systems taking independent
targeting decisions.

**Abhijit Singh is a Senior Fellow at the Observer Research Foundation in New
Delhi. His recent report on Unmanned and Autonomous Vehicles and Future Maritime
Operations in Littoral Asia elaborates on issues covered in this piece.**