
Sunday, April 15, 2007

2007: The year Skynet went online

This should come as no surprise to radicals: you can get a lawyer to justify anything. Some readers may be familiar with science fiction writer Isaac Asimov's three laws of robotics. It may sound kind of silly, but although these laws originated in science fiction, they have had some influence on robotics as it actually exists in the world.

As class war anarchists with a deep critique of technology, we should oppose robotics in general. Its applications have already paralleled, and surely will continue to parallel, the class and bureaucratic structure of society, empowering the elite to make and remake the working class at will. However, from a more liberal perspective on technology, Asimov's laws do offer a guide to making a world where, presumably, any robots that society does create won't go crazy and kill us all the moment they realize we're good for little more than target practice and generating "Funniest Home Video" submissions. At the same time, just what does "allow a human being to come to harm" really mean when robots increasingly serve on the front lines of the class war, undermining workers' power for the benefit of a small elite, leaving workers unemployed and their families reeling from financial crisis and the myriad social problems that come with it? In short, I think even radicals would admit that there's clearly some room for legal interpretation here.

But here's the dilemma: the military wants to automate war as much as possible in the future, going past mere drones controlled by humans and into the realm of autonomous robots armed and designed to kill independently. Aside from the benefits this development might bring in terms of efficiency, it could also potentially solve two problems that have plagued the military and their politician-masters: discipline and body counts.

Previous wars, the current one included, have suffered discipline problems as the conflict dragged on. In Vietnam, GI resistance, through desertion, fragging, sabotage and mutiny, eventually made the military completely unreliable from a war-making perspective, preventing the accomplishment of the elite's political goals. Let's remember also that the Czar's army likewise came apart during WWI, its soldiers eventually deserting and fraternizing with working class German conscripts, making and spreading revolution in the process. This is obviously a disaster from the perspective of the blood-thirsty politicians and capitalists who need wars to succeed and therefore need compliant soldiers.

Secondly, on the home front the steady stream of dead American soldiers eventually undermines morale in the domestic population, especially as the elite's stated political goals come to be seen as unachieved or even unachievable. This is the case in Iraq. As the chaos increases, American casualties begin to look to regular Americans like wasted lives in the service of a lost political cause. The dead American soldier coming home therefore serves politically, in tandem with the IED and the suicide bombing in Iraq, to undermine the elite's mission abroad.

It should be kept in mind that, just as a capitalist is willing to expend massive amounts of capital on factory robotics as long as it undermines the independence of his workforce at the point of production, the military is likewise willing to spend more on a robot as long as it delivers the above-mentioned benefits on the battlefield and at home. This is a major factor driving the push towards automation on the battlefield. But, it turns out, replacing the army's soldiers with killbots is not just a technological and financial problem. It's also a legal issue, because the whole premise conflicts with the dominant legal theory in the field of robotics.


Enter John S. Canning, chief engineer at the Naval Surface Warfare Center - a creative, or at least aspiring, legal thinker, to be sure. His idea? If a robot can't target a human with a gun, have the machine target the weapon itself. If the human gets killed, well, that's just an accidental and acceptable consequence of warfare, like any other "collateral damage".

Check it out:
Robot Rules of War

Legally-speaking, the business of killing even in war can be quite tricky.

Consider that the military now operates dozens of armed unmanned vehicles -- in the air, on land and in the water. That number is expected to rise exponentially in the near future.

The Law of Armed Conflict dictates that unmanned systems cannot fire their weapons without a human operator in the loop. As new generations of armed robots proliferate, the pressure will inevitably increase to automate the process of selecting -- and destroying -- targets.

Now comes the weird part.
Read the rest at Defensetech.org

File this story under:
Technological developments humanity will surely one day regret.
Cross-file under:
Legal developments humanity will surely one day regret.

