But what are the consequences for the societies that deploy these systems? What do autonomous fleets of combat machines do to the concentration of political power in a society? These are open questions. I think it's likely to strengthen the hand of authoritarian governments, who will be able to use these weapons in combination with ubiquitous surveillance to retain control even in the face of widespread popular dissent. Certainly, robotic weapons will never refuse an order to fire upon civilians. That's why it's important that checks and balances, as well as transparency, are built into drone use from the outset; otherwise, unseen and unaccountable concentrations of power will form. At present, none of those checks and balances are being implemented, and the targeting process is a black box - no transparency at all.
Let us talk about the social and political implications. I can think of several scenarios, some of which you also explore in Kill Decision: assassinations become easy; draconian security measures follow; counter-drone drones appear; attempts at technology restriction prove futile; surveillance systems can increase coverage and intensity within minutes. What else do you think will happen inside societies with massive drone usage?
Your premise covers a lot of the physical manifestations of drone proliferation, but consider the *effect* of such an environment on society: it would be corrosive to democratic institutions. Why would one-person-one-vote continue if the powerful (whether government, corporate, criminal, or anyone else) could cost-effectively and reliably use force against political opponents with little or no risk? Is there any time in all of human history when a people remained free, or when democratic forms persisted, if citizens could not credibly assert their rights? Power isn't shared unless it must be, and unchecked proliferation of autonomous combat drones throughout society could seriously shift the balance of political power.
It would not require Terminator-like weapon systems to tilt this balance. Cheap, error-prone - but numerous - autonomous weapons could achieve the same result.
Currently, the U.S. military claims that every drone strike has a human "in the loop" who makes the final kill decision (and this is supported by recent leaks about Obama's direct involvement). However, the intelligence process needed to designate targets is clearly on a path towards more and more automation and reliance on algorithms. Some systems, like so-called loitering munitions, already react automatically to target signatures, such as radar radiation or pre-programmed visuals of vehicles like tanks. How big is the pressure to speed up the "kill loop" by minimizing human influence?
"Lethal autonomy" is the term used to describe drones capable of making a kill decision without a human in the loop. I think it's likely that, for political reasons, humans will remain in the "kill decision" loop in NATO militaries in the near term (at least officially). However, there are powerful incentives pressing for lethal autonomy in drones. First, the sheer volume of sensor data that needs to be analyzed is choking decision-makers. Drone surveillance video coursing through modern military networks has already outstripped the capacity of humans to view it all. The U.S. drone fleet flew 71 hours in 2004. That climbed to 25,000 hours by 2009, and the Pentagon estimates their drones flew 300,000 hours in 2011. Likewise, drones are about to acquire many more eyes. The Gorgon Stare and ARGUS projects could give each drone up to 65 independently operated cameras, enabling surveillance of vast swaths of terrain - and creating yet more imagery for analysis. Thus, it will be drones that tell humans what to look at, not the other way around.