Algorithms are becoming increasingly advanced at an incredibly rapid pace, allowing computers to work through ever more complex problems and decisions. Furthermore, as machine learning advances, humans aren't the only ones crafting these recipes, so to speak: computers are now able to create their own algorithms. As countries and their governments begin to use algorithms in warfare, who, if anyone, has control over these autonomous programs? What does the existence of such algorithms mean, both for international affairs and for us as individuals?
What are Algorithms?
The dictionary definition of an algorithm is something along the lines of a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer. Algorithms can be as simple as a decision tree, or they can be complex enough to power a self-driving car.
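To make the simple end of that spectrum concrete, here is a minimal sketch in Python of decision-tree-style logic. The scenario and every condition in it are invented purely for illustration:

```python
def choose_delivery_method(weight_kg, is_fragile, reachable_by_road):
    """A toy decision tree: pick how to deliver a supply package."""
    if not reachable_by_road:
        return "drone"          # no road access, so fly it in
    if is_fragile:
        return "courier"        # fragile goods get human handling
    if weight_kg > 20:
        return "truck"          # heavy loads go by truck
    return "standard mail"

# Example: a light, non-fragile package headed somewhere without roads.
print(choose_delivery_method(weight_kg=2, is_fragile=False, reachable_by_road=False))
# -> "drone"
```

Every step is an explicit rule a computer can follow without human judgment; that, at its core, is all an algorithm is.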
How Algorithms Are Being Used in Warfare
The use of algorithms in warfare is not a concern for tomorrow, but something that is already happening. Here are just a few instances:
Autonomous Fighters
Autonomous fighters, more commonly known as drones, have been in use for years now. Rather than relying on humans in fighter jets, drones can fly into enemy territory, conduct surveillance, and attack enemy targets. Furthermore, because they do not carry humans, they can be smaller and stealthier. Currently, drones identify targets but only attack after being given the okay by their remote operators (this human-approval step is typical of most autonomous weapons today). However, there's nothing to say that this won't change.
It's worth noting, however, that drones aren't used strictly for fighting. They have proven quite helpful in disaster relief efforts. For example, drones helped identify areas of need in the Philippines after Typhoon Haiyan devastated its coastline, locate mines displaced by the 2014 Balkan floods, and survey the damage done by the 2015 earthquake in Nepal.
Today, several companies are building creative drones to actually deliver supplies to difficult-to-reach areas (whether due to geopolitics or just terrain). Otherlab is developing industrial-strength paper airplanes capable of carrying over two pounds of supplies, such as blood and vaccines. Windhorse Aerospace created Pouncer, whose wings are filled with food. Once the food has been removed, recipients can use the food’s protective covers as shelter, while the plywood frame can serve as firewood.
Predicting Future ISIS Activities
In 2014, Arizona State University researcher Paulo Shakarian created an algorithm designed to predict what ISIS will do in a given situation. The goal is to be proactive rather than reactive. While there is still more work to be done, American drone strikes have been effective enough that an elder tribesman in Afghanistan called them "the magic."
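This is not Shakarian's actual model, which draws on far richer event data and methods, but the core idea of learning a mapping from "situation" to "likely next action" can be sketched roughly as follows. The features, rows, and labels below are entirely invented:

```python
# Rough, illustrative sketch of situation -> predicted-action modeling.
from sklearn.tree import DecisionTreeClassifier

# Each row is a hypothetical situation:
# [lost_territory, under_airstrikes, low_on_funds]
situations = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 0],
    [0, 0, 1],
]
# What the group did next in each (made-up) historical situation.
actions = ["retreat", "car_bombings", "propaganda_push", "extortion"]

model = DecisionTreeClassifier().fit(situations, actions)

# Predicted next action for a new, unseen situation.
print(model.predict([[1, 1, 1]]))
```

The real value of such a model lies in the quality and breadth of the historical data behind it, not in the handful of lines needed to train it.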
Using Algorithms to Hunt ISIS
In April 2017, Deputy Defense Secretary Bob Work launched Project Maven, designed to integrate big data and machine learning across Department of Defense (DoD) operations.
Project Maven’s first goal? Hunt ISIS. Timeline? By the end of 2017.
Currently, thousands of military and civilian intelligence analysts are tasked with watching video recorded by drones flying over battlefields in order to identify unusual activity. Despite the size of that labor force, analysts are overwhelmed by the amount of video produced, and the DoD, rather than adding more staff, is striving to work smarter by teaching computers to do what humans currently do. Rather than having analysts watch the collected video and log items of interest in a spreadsheet, Project Maven hopes to use algorithms to identify actionable intelligence.
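What that first automated pass might look like is sketched below. This is not Project Maven's actual pipeline; the detector here is just a stand-in for a trained computer-vision model, and the "footage" is synthetic:

```python
# Illustrative sketch: scan footage frame by frame, flag frames a model
# considers interesting, and log them for a human analyst to review.
import csv

def detect_objects(frame):
    """Placeholder for a trained object detector; returns labels and a confidence."""
    return frame.get("labels", []), frame.get("confidence", 0.0)

def triage_footage(frames, threshold=0.8):
    flagged = []
    for timestamp, frame in frames:
        labels, confidence = detect_objects(frame)
        if labels and confidence >= threshold:
            flagged.append({
                "timestamp": timestamp,
                "labels": ";".join(labels),
                "confidence": confidence,
            })
    return flagged

# Toy "footage": in reality these would be decoded video frames.
footage = [
    (0.0, {"labels": [], "confidence": 0.1}),
    (4.5, {"labels": ["vehicle", "person"], "confidence": 0.92}),
]

with open("flagged_frames.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["timestamp", "labels", "confidence"])
    writer.writeheader()
    writer.writerows(triage_footage(footage))
```

The point is the division of labor: software does the exhaustive watching, and humans spend their time only on the moments the software flags.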
Suicide Prevention
Luckily, not all military use of algorithms is for destruction. The Army, for one, is refining an algorithm to help prevent suicide among current and former servicemembers.
In 2009, the US Army began the Study to Assess Risk and Resilience in Servicemembers (STARRS), using data it had collected from 2004 to 2009 on more than 1.6 million soldiers. In 2014, STARRS researchers published a paper describing the algorithm they designed to predict the risk of suicide among soldiers. The Army remains cautiously optimistic about the algorithm, and the results seen thus far are positive.
It’s worth noting, however, that such an endeavor could only take place in the military. Because the Army is so integrated into soldiers’ lives, it has employment, health, and other important data (including background information, such as education and family-related details) required for such a study.
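The STARRS models are far more sophisticated and carefully validated than anything that fits in a few lines, but the basic shape of a risk-scoring algorithm can be sketched like this. Every feature, row, and label here is invented for illustration:

```python
# Heavily simplified sketch of risk-score modeling on made-up data.
from sklearn.linear_model import LogisticRegression

# Hypothetical features per record:
# [prior_hospitalization, recent_demotion, number_of_deployments]
X = [
    [1, 0, 2],
    [0, 0, 1],
    [1, 1, 3],
    [0, 1, 0],
]
y = [1, 0, 1, 0]  # 1 = adverse outcome in the (invented) historical records

model = LogisticRegression().fit(X, y)

# Probability-style risk score for a new, hypothetical record.
print(model.predict_proba([[1, 1, 1]])[0][1])
```

The hard part is not the model but the data feeding it, which is exactly why this kind of study is feasible inside the Army and difficult almost anywhere else.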
The Dangers of Using Algorithms in War
As with any major advance that promises to shift the status quo drastically, many are raising questions regarding the moral, political, legal, and ethical implications of algorithmic use in warfare, especially in instances where human lives are at stake. The following section will cover several issues that have been raised, but it is by no means a comprehensive survey of the discussions that are currently happening.
The Legal Responsibility of Algorithm-Based Actions
Generally speaking, the legal issues surrounding algorithms hinge on who is held responsible for the actions an algorithm executes. When a person carries out an action, society can hold that person legally responsible for its consequences. With an algorithm, however, who holds responsibility? Is it the person who came up with the idea? The person who wrote and programmed the algorithm? The operator of the machine (be it a drone or a self-driving car)? The person (or country) that owns the weapon?
Diplomacy and Foreign Affairs
In the world of diplomacy and foreign affairs, where every issue is complex, multi-faceted, and full of areas where subjective judgment is needed, algorithmic use raises a host of delicate issues and problems.
For example, algorithmic calculations have been used to select drone targets, and other algorithms have been designed to identify threats of cyber attacks. Currently, such intelligence goes to a person (or group) who decides whether it is actionable and whether to act on it. However, some argue that algorithms should be allowed to act automatically when certain conditions are met, since this offers preemptive protection and faster response times. While this is an appealing notion, algorithms are only as good as the people who create them, and biases can creep into the code. In some cases, we may not even be aware of our biases and therefore remain unaware that we've hard-coded them into our software. Are we willing to risk mistakes that arise from technological reliance when what's at stake is peace in the global community?
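The design choice at issue can be boiled down to a single flag: does the system merely escalate to a human, or does it act on its own once a threshold is crossed? The sketch below is hypothetical, with invented names and thresholds:

```python
def handle_alert(threat_score, human_in_the_loop=True, threshold=0.9):
    """Toy decision gate for an automated alert."""
    if threat_score < threshold:
        return "log only"
    if human_in_the_loop:
        return "escalate to an analyst for a decision"
    # Fully automatic response: faster, but any bias or error baked into
    # the scoring model now acts without a human check.
    return "execute automated countermeasure"

print(handle_alert(0.95, human_in_the_loop=True))   # escalate to an analyst
print(handle_alert(0.95, human_in_the_loop=False))  # automated countermeasure
```

Flipping that one flag trades response time for oversight, which is precisely the trade-off being debated.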
Arms Race, Redux
As the power of artificial intelligence grows and its military applications become refined, an arms race based on technology is becoming a reality. What does this mean for our global society today, especially in light of what we learned from the first Cold War?
Furthermore, unlike the previous arms race, much of the artificial intelligence being developed can also serve commercial and other non-military applications. In many instances this is a good thing, but what are the ramifications of military-grade technologies becoming available to the general public? One specific example is spy-searching technology, which could be used to identify people at large. Is this safe, especially in the hands of predators and other criminals?
Conclusion
Big data, artificial intelligence, and algorithms are powerful tools that are already being used in warfare, and their ability to compensate for human limitations makes them increasingly attractive as world leaders and military officials strive to resolve international conflicts. However, such technology (algorithms in particular) is not the panacea that many make it out to be.
While algorithms are powerful in ways that humans cannot match, they are not human and therefore lack the ability to make the subjective judgments required in most matters of war and foreign policy. Furthermore, because they are distinctly non-human, many questions need to be answered before we can rely on them to execute actions that might result in the loss of human life.
In spite of these hard questions, we shouldn't abandon algorithms either. Algorithms, and the technologies built on them, don't have to result in mass destruction; they can certainly be used for good, such as drones for humanitarian relief. When debating the merits of algorithms, it's important to take a nuanced approach and remember that, at their core, algorithms are just tools that humans can use for both good and evil.