AnimationNation Forum

Topic: Robot attacks human in Sweden..

GaryClair
Member # 3164
http://www.thelocal.se/19120.html
Robot attacked Swedish factory worker

A Swedish company has been fined 25,000 kronor ($3,000) after a malfunctioning robot attacked and almost killed one of its workers at a factory north of Stockholm.

Public prosecutor Leif Johansson mulled pressing charges against the firm but eventually opted to settle for a fine.

"I've never heard of a robot attacking somebody like this," he told news agency TT.

The incident took place in June 2007 at a factory in Bålsta, north of Stockholm, when the industrial worker was trying to carry out maintenance on a defective machine generally used to lift heavy rocks. Thinking he had cut off the power supply, the man approached the robot with no sense of trepidation.

But the robot suddenly came to life and grabbed a tight hold of the victim's head. The man succeeded in defending himself but not before suffering serious injuries.

"The man was very lucky. He broke four ribs and came close to losing his life," said Leif Johansson.

The matter was subject to an investigation by both the Swedish Work Environment Authority (Arbetsmiljöverket) and the police.

Prosecutor Johansson chastised the company for its inadequate safety procedures but he also placed part of the blame on the injured worker.

GaryClair
Member # 3164
http://www.foxnews.com/story/0,2933,532492,00.html



Biomass-Eating Military Robot Is a Vegetarian, Company Says
Thursday, July 16, 2009

[Image: EATR robots roam a barren landscape as an unmanned drone flies overhead in an artist's rendering.]

A steam-powered, biomass-eating military robot being designed for the Pentagon is a vegetarian, its maker says.

Robotic Technology Inc.'s Energetically Autonomous Tactical Robot — that's right, "EATR" — "can find, ingest, and extract energy from biomass in the environment (and other organically-based energy sources), as well as use conventional and alternative fuels (such as gasoline, heavy fuel, kerosene, diesel, propane, coal, cooking oil, and solar) when suitable," reads the company's Web site.

But, contrary to reports, including one that appeared on FOXNews.com, the EATR will not eat animal or human remains.

Dr. Bob Finkelstein, president of RTI and a cybernetics expert, said the EATR would be programmed to recognize specific fuel sources and avoid others.

“If it’s not on the menu, it’s not going to eat it,” Finkelstein said.

“There are certain signatures from different kinds of materials” that would distinguish vegetative biomass from other material.

RTI said Thursday in a press release:

"Despite the far-reaching reports that this includes “human bodies,” the public can be assured that the engine Cyclone (Cyclone Power Technologies Inc.) has developed to power the EATR runs on fuel no scarier than twigs, grass clippings and wood chips -- small, plant-based items for which RTI’s robotic technology is designed to forage. Desecration of the dead is a war crime under Article 15 of the Geneva Conventions, and is certainly not something sanctioned by DARPA, Cyclone or RTI."

EATR will be powered by the Waste Heat Engine developed by Cyclone, of Pompano Beach, Fla., which uses an "external combustion chamber" burning up fuel to heat up water in a closed loop, generating electricity.

The advantages to the military are that the robot would be extremely flexible in fuel sources and could roam on its own for months, even years, without having to be refueled or serviced.
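
To put a rough number on the foraging idea, here is a back-of-envelope Python calculation. The heating value of dry wood (about 16 MJ/kg) is a standard figure; the net fuel-to-electricity efficiency (5%) and the average electrical draw (100 W) are assumptions chosen only for illustration.

[code]
# Back-of-envelope estimate of daily biomass intake under assumed numbers.

WOOD_ENERGY_MJ_PER_KG = 16.0   # typical dry-basis heating value for wood
NET_EFFICIENCY = 0.05          # assumed fuel-to-electricity efficiency
AVERAGE_DRAW_W = 100.0         # assumed average electrical load

seconds_per_day = 24 * 3600
electric_mj_per_day = AVERAGE_DRAW_W * seconds_per_day / 1e6  # 8.64 MJ
fuel_mj_per_day = electric_mj_per_day / NET_EFFICIENCY        # 172.8 MJ
wood_kg_per_day = fuel_mj_per_day / WOOD_ENERGY_MJ_PER_KG     # ~10.8 kg

print(f"~{wood_kg_per_day:.1f} kg of wood chips per day under these assumptions")
[/code]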

Upon the EATR platform, the Pentagon could build all sorts of things — a transport, an ambulance, a communications center, even a mobile gunship.

In press materials, Robotic Technology presents EATR as an essentially benign artificial creature that fills its belly through "foraging," despite the obvious military purpose.

GaryClair
Member # 3164
http://news.cnet.com/8301-11386_3-10281328-76.html

Robotics engineer aims to give robots a humane touch
by Dara Kerr

[Image: Robotics engineer Ronald Arkin]

Can robots be more humane than humans in fighting wars? Robotics engineer Ronald Arkin of the Georgia Institute of Technology believes this is a not-too-distant possibility. He has just finished a three-year contract with the U.S. Army designing software to create ethical robots.

As robots are increasingly being used by the U.S. military, Arkin has devoted his life's work to configuring robots with a built-in "guilt system" that eventually could make them better at avoiding civilian casualties than human soldiers. These military robots would be embedded with internationally prescribed laws of war and rules of engagement, such as those in the Geneva Conventions.

Arkin talked with CNET News about how robots can be ethically programmed and some of the philosophical questions that come up when using machines in warfare. Below is an edited excerpt of our conversation.

Q: What made you first begin thinking about designing software to create ethical robots?
Arkin: I'd been working in robotics for almost 25 years and I noticed the successes that had been happening in the field. Progress had been steady and sure and it started to dawn on me that these systems are ready, willing, and able to begin going out into the battlefield on behalf of our soldiers. Then the question came up--what is the right ethical basis for these systems? How are we going to ensure that they could behave appropriately to the standards we set for our human war fighters?

In 2004, at the first international symposium on roboethics in Sanremo, Italy, we had speakers from the Vatican, the Geneva Conventions, the Pugwash Institute, and it became clear that this was a pressing problem. Trying to view myself as a responsible scientist, I felt it was important to do something about it and that got me embarked on this quest.

What do you mean by an ethical robot? How would a robot feel empathy?
Arkin: I didn't say it would feel empathy, I said ethical. Empathy is another issue and that is for a different domain. We are talking about battlefield robots in the work I am currently doing. That is not to say I'm not interested in those other questions and I hope to move my research, in the future, in that particular direction.

Right now, we are looking at designing systems that can comply with internationally prescribed laws of war and our own codes of conduct and rules of engagement. We've decided it is important to embed these systems with the moral emotion of guilt. We use this as a means of downgrading a robot's ability to engage targets if it is acting in ways which exceed the predicted battle damage in certain circumstances.
"Right now, we are looking at designing systems that can comply with internationally prescribed laws of war and our own codes of conduct and rules of engagement. We've decided it is important to embed in these systems with the moral emotion of guilt."
--Ronald Arkin

You've written about a built-in "guilt system." Is this what you're talking about?
Arkin: We have incorporated a component called an "ethical adaptor" by studying the models of guilt that human beings have and embedding those within a robotic system. The whole purpose of this is very focused and what makes it tractable is that we're dealing with something called "bounded morality," which is understanding the particular limit of the situation that the robot is to operate in. We have thresholds established for analogs of guilt that cause the robot to eventually refuse to use certain classes of weapons systems (or refuse to use weapons entirely) if it gets to a point where the predictions it's making are unacceptable by its own standards.
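
A minimal Python sketch of the mechanism as Arkin describes it: "guilt" accumulates when observed battle damage exceeds predictions, and crossing thresholds first restricts, then forbids, weapon use. The weapon classes, numbers, and thresholds below are hypothetical illustrations, not Arkin's actual design.

[code]
WEAPON_CLASSES = ["area_effect", "precision_guided", "small_arms"]

class EthicalAdaptor:
    def __init__(self, restrict_threshold=0.5, forbid_threshold=1.0):
        self.guilt = 0.0  # accumulated "guilt", starts at zero
        self.restrict_threshold = restrict_threshold
        self.forbid_threshold = forbid_threshold

    def record_engagement(self, predicted_damage, observed_damage):
        """Raise guilt when outcomes exceed the pre-engagement prediction."""
        overshoot = max(0.0, observed_damage - predicted_damage)
        self.guilt += overshoot

    def permitted_weapons(self):
        """Return the weapon classes still permitted at the current guilt level."""
        if self.guilt >= self.forbid_threshold:
            return []                  # refuse to use weapons entirely
        if self.guilt >= self.restrict_threshold:
            return ["small_arms"]      # drop the most destructive classes
        return list(WEAPON_CLASSES)

adaptor = EthicalAdaptor()
adaptor.record_engagement(predicted_damage=0.2, observed_damage=0.9)
print(adaptor.permitted_weapons())  # ['small_arms'] once guilt passes 0.5
[/code]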

You, the engineer, decide the ethics, right?
Arkin: We don't engineer the ethics; the ethics come from treaties that have been designed by lawyers and philosophers. These have been codified over thousands of years and now exist as international protocol. What we engineer is translating those laws and rules of engagement into actionable items that the robot can understand and work with.

[Image: Autonomous robots can be programmed with information that allows them to avoid areas where civilians may be, like cemeteries, hospitals, and apartment buildings.]
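
A minimal sketch of the kind of no-strike-zone check the caption describes, assuming civilian areas are stored as simple circular exclusion zones. The coordinates, radii, and zone names are hypothetical.

[code]
import math

EXCLUSION_ZONES = [
    # (name, center_x, center_y, radius) in arbitrary map units
    ("hospital",   120.0, 340.0, 50.0),
    ("cemetery",   400.0, 220.0, 30.0),
    ("apartments",  75.0,  80.0, 60.0),
]

def in_exclusion_zone(x, y):
    """Return the zone name if (x, y) falls inside any civilian area, else None."""
    for name, cx, cy, radius in EXCLUSION_ZONES:
        if math.hypot(x - cx, y - cy) <= radius:
            return name
    return None

def engagement_allowed(x, y):
    zone = in_exclusion_zone(x, y)
    if zone is not None:
        return False, f"target inside protected area: {zone}"
    return True, "no protected area at target location"

print(engagement_allowed(130.0, 350.0))
# (False, 'target inside protected area: hospital')
[/code]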


So, right now, you're working on software for the ethical robot and you have a contract with the U.S. Army, right?
Arkin: We actually just finished, as of (July 1), the three-year project we had for the U.S. Army, which was designing prototype software for the U.S. Army Research Office. This isn't software that is intended to go into the battlefield anytime soon--it is all proof of concept--we are striving to show that the systems can potentially function with an ethical basis. I believe our prototype design has demonstrated that.

Robot drones like land mine detectors are already used by the military, but are controlled by humans. How would an autonomous robot be different in the battlefield?
Arkin: Drones usually refer to unmanned aerial vehicles. Let me make sure we're talking about the same sort of thing--you're talking about ground vehicles for detecting improvised explosive devices?

Either one, either air or land.
Arkin: Well, they'd be used in different ways. There are already existing autonomous systems that are either in development or have been deployed by the military. It's all a question of how you define autonomy. The trip-wire for how we talk about autonomy, in this context, is whether an autonomous system (after detecting a target) can engage that particular target without asking for any further human intervention at that particular point.

There is still a human in the loop when we tell a squad of soldiers to go into a building and take it using whatever force is necessary. That is still a high-level command structure, but the soldiers have the ability to engage targets on their own. With the increased battlefield tempo, things are moving much faster than they did 40 or 100 years ago, and it becomes harder for humans to make intelligent, rational decisions. As such, it is my contention that these systems, ultimately, can lead to a reduction in non-combatant fatalities compared to human-level performance. That's not to say that I don't have the utmost respect for our war fighters in the battlefield, I most certainly do, and I'm committed to providing them with the best technical equipment in support of their efforts as well.

In your writing you say robots can be more humane than humans in the battlefield. Can you elaborate on this?
Arkin: Well, I say that's my thesis, it's not a conclusion at this point. I don't believe unmanned systems will be able to be perfectly ethical in the battlefield, but I am convinced (at this point in time) that they can potentially perform more ethically than human soldiers are capable of. I'm talking about wars 10 to 20 years down the field. Much more research and technology has to be developed for this vision to become a reality.
"So, if warfare is going to continue and if autonomous systems are ultimately going to be deployed, I believe it is crucial that we must have ethical systems in the battlefield."

But, I believe it's an important avenue of pursuit for military research. So, if warfare is going to continue and if autonomous systems are ultimately going to be deployed, I believe it is crucial that we must have ethical systems in the battlefield. I believe that we can engineer systems that perform better than humans--we already have robots that are stronger than people, faster than people, and if you look at computers like Deep Blue we have robots that can be smarter than people.

I'm not talking about replacing a human soldier in all aspects; I'm talking about using these systems in limited circumstances such as counter-sniper operations or taking buildings. Under those circumstances we can engineer enough morality into them that they may indeed do better than human beings can--that's the benchmark I'm working towards.

[Image: Pioneer robots in multi-robot teams used in previous research for a Defense Advanced Research Projects Agency project.]

OK, what kind of errors could a military robot make in the battlefield with regard to ethical dilemmas?
Arkin: Well, a lot of this has been sharpened by debates with my colleagues in philosophy, computer science, and Computer Professionals for Social Responsibility. There are a lot of things that could potentially go wrong. One of the big questions (much of this is derived from what's called "just war theory") is responsibility--if there is a war crime, someone must be to blame. We have worked hard within our system to make sure that responsibility attribution is as clear as possible using a component called the "responsibility advisor." To me, you can't say the robot did it; maybe it was the soldier who deployed it, the commanding officer, the manufacturer, the designer, the scientist (such as myself) who conceived of it, or the politicians that allowed this to be used. Somehow, responsibility must be attributed.
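
A minimal sketch of what a "responsibility advisor" might record, assuming it works as a sign-off ledger: deployment is blocked until every required role has explicitly accepted responsibility. The roles and data structure are hypothetical, not Arkin's published design.

[code]
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ResponsibilityRecord:
    mission_id: str
    acknowledgments: list = field(default_factory=list)

    def acknowledge(self, role, name, statement):
        """Record that a named person in a given role accepted responsibility."""
        self.acknowledgments.append({
            "role": role,
            "name": name,
            "statement": statement,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def is_deployable(self, required_roles):
        """Deployment is only permitted once every required role has signed."""
        signed = {a["role"] for a in self.acknowledgments}
        return signed >= set(required_roles)

record = ResponsibilityRecord("mission-042")
record.acknowledge("operator", "Sgt. A", "I authorize and monitor this system.")
record.acknowledge("commander", "Capt. B", "I accept command responsibility.")
print(record.is_deployable({"operator", "commander"}))  # True
[/code]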

Another aspect is that technological advancement in the battlefield may make us more likely to enter into war. To me it is not unique to robotics--whenever you create something that gives you any kind of advantage, whether it is gunpowder or a bow and arrow, the temptation to go off to war grows. Hopefully our nation has the wherewithal to be able to resist such temptation.

Some argue it can't be done right, period--it's just too hard for machines to discriminate. I would agree it's too hard to do it now. But with the advent of new sensors and network centric warfare where all this stuff is wired together along with the global information grid, I believe these systems will have more information available to them than any human soldier could possibly process and manage at a given point in time. Thus, they will be able to make better informed decisions.

The military is concerned with squad cohesion. What happens to the "band of brothers" effect if you have a robot working alongside a squad of human soldiers, especially if it's one that might report back on moral infractions it observes with other soldiers in the squad? My contention is that if a robot can take a bullet for me, stick its head around a corner for me and cover my back better than Joe can, then maybe that is a small risk to take. Secondarily, it can reduce the risk of human infractions in the battlefield by its mere presence.

The military may not be happy with a robot with the capability of refusing an order. So we have to design a system that can explain itself. With some reluctance, I have designed an override capability for the system. But the robot will still inform the user on all the potential ethical infractions that it believes it would be making, and thus force the responsibility on the human. Also, when that override is taken, the aberrant action could be sent immediately to command for after-action review.
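
A minimal sketch of the override flow as described: the system lists the infractions it predicts, and an override both shifts responsibility to the named operator and forwards the action to command for after-action review. Function names and message formats here are hypothetical.

[code]
def request_override(action, predicted_infractions, operator_id, send_to_command):
    """Return True if the operator explicitly accepts responsibility."""
    print(f"System refuses '{action}'. Predicted ethical infractions:")
    for infraction in predicted_infractions:
        print(f"  - {infraction}")
    answer = input("Override and accept responsibility? (yes/no): ").strip().lower()
    if answer == "yes":
        # The aberrant action is reported immediately for after-action review.
        send_to_command({
            "action": action,
            "infractions": predicted_infractions,
            "overridden_by": operator_id,
        })
        return True
    return False

# Example (interactive):
#   request_override("engage vehicle", ["predicted harm exceeds proportionality"],
#                    "operator-17", send_to_command=print)
[/code]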

There is a congressional mandate requiring that by 2010, one third of all operational deep-strike aircraft be unmanned and, by 2015, one third of all ground combat vehicles be unmanned. How soon could we see this autonomous robot software being used in the field?
Arkin: There is a distinction between unmanned systems and autonomous unmanned systems. That's the interesting thing about autonomy--it's kind of a slippery slope, decision making can be shared.

First, it can be under pure remote control by a human being. Next, there's mixed initiative where some of the decision making rests in the robot and some of it rests in the human being. Then there's semi-autonomy where the robot has certain functions and the human deals with it in a slightly different way. Finally, you can get more autonomous systems where the robot is tasked, it goes out and does its mission, and then returns (not unlike what you would expect from a pilot or a soldier).
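
Arkin's four levels of shared decision making could be captured as a simple enumeration. A hedged Python sketch follows, with level names paraphrasing his description rather than any standard taxonomy:

[code]
from enum import Enum

class AutonomyLevel(Enum):
    REMOTE_CONTROL   = 1  # a human makes every decision
    MIXED_INITIATIVE = 2  # decisions shared between robot and human
    SEMI_AUTONOMOUS  = 3  # the robot owns certain functions outright
    FULLY_AUTONOMOUS = 4  # tasked, executes the mission, and returns

def human_decision_share(level: AutonomyLevel) -> str:
    """Roughly, how much decision making remains with the human at each level."""
    return {
        AutonomyLevel.REMOTE_CONTROL:   "all",
        AutonomyLevel.MIXED_INITIATIVE: "most",
        AutonomyLevel.SEMI_AUTONOMOUS:  "some",
        AutonomyLevel.FULLY_AUTONOMOUS: "tasking only",
    }[level]

print(human_decision_share(AutonomyLevel.SEMI_AUTONOMOUS))  # some
[/code]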

The congressional mandate was marching orders for the Pentagon and the Pentagon took it very seriously. It's not an absolute requirement but significant progress is being made. Some of the systems are far more autonomous than others--for example, the PackBot in Iraq is not very autonomous at all. It is used for finding improvised explosive devices by the roadside; many of these have saved the lives of our soldiers by taking the explosion on their behalf.

You just came out with a book called "Governing Lethal Behavior in Autonomous Robots." Do you want to explain in a bit more detail what it's about?
Arkin: Basically the book covers the space of this three-year project that I just finished. It deals with the basis, motivation, underlying philosophy, and opinions people have based on a survey we did for the Army on the use of lethal autonomous robots in the battlefield. It provides the mathematical formalisms underlying the approach we take and deals with how it is to be represented internally within the robotic system. And, most significantly, it describes several scenarios in which I believe these systems can be used effectively with some preliminary prototype results showing the progress we made in this period.

GaryClair
Member # 3164
http://www.youtube.com/watch?v=B-K4pk25zD4
http://www.youtube.com/watch?v=kBrVRTVNph0

GaryClair
Member # 3164
http://www.wired.com/dangerroom/2009/07/air-force-plans-for-all-drone-future/

Air Force Plans for All-Drone Future
By David Axe July 17, 2009


Is the day of the hot-shot fighter jock nearly done?

An Air Force study, released without much fanfare on Wednesday, suggests that tomorrow’s dogfighters might not have pilots in the cockpit. The Unmanned Aircraft System Flight Plan, which sketches out possible drone development through the year 2047, comes with plenty of qualifiers. But it envisions a radical future. In an acronym-dense 82 pages, the Air Force explains how ever-larger and more sophisticated flying robots could eventually replace every type of manned aircraft in its inventory — everything from speedy, air-to-air fighters to lumbering bombers and tankers.

Emphasis on “might” and “could.” While revealing how robots can equal the capabilities of traditional planes, the Air Force is careful to emphasize that an all-bot air fleet is not inevitable. Rather, drones will represent “alternatives” to manned planes, in pretty much every mission category.

Some of the missions tapped for possible, future drones are currently considered sacrosanct for human pilots. Namely: dogfighting and nuclear bombing. Drones “are unlikely to replace the manned aircraft for air combat missions in the policy-relevant future,” Manjeet Singh Pardesi wrote in Air & Space Power Journal, just four years ago. Dogfighting was considered too fluid, too fast, for a drone’s narrow “situational awareness.” As for nuclear bombing: “Many aviators, in particular, believe that a ‘man in the loop’ should remain an integral part of the nuclear mission because of the psychological perception that there is a higher degree of accountability and moral certainty with a manned bomber,” Adam Lowther explained in Armed Forces Journal, in June.

Despite this, the Air Force identifies a future “MQ-Mc” Unmanned Aerial System for dogfighting, sometime after 2020. The MQ-Mc will also handle “strategic attack,” a.k.a. nuke bombing. Less controversial is the conjectural MQ-L, a huge drone that could fill in for today’s tankers and transports.

But just because a drone could replace a manned plane, doesn’t necessarily mean it definitely will. “We do not envision replacing all Air Force aircraft with UAS,” Col. Eric Mathewson told Danger Room by email. “We do plan on considering UAS as alternatives to traditionally manned aircraft across a broad spectrum of Air Force missions … but certainly not all.” In other words, in coming years drones might be able to do everything today’s manned planes can do — technically speaking. But the Air Force still might find good reasons — moral, financial or otherwise — to keep people in some cockpits.

The Flight Plan represents a new twist in a heated debate raging in Congress over the Pentagon’s 3,000-strong fighter force. The legislature is split over whether to fund more F-22 fighters — a move that could draw a veto from President Barack Obama. Secretary of Defense Robert Gates has long favored drone development over buying more manned fighters, and in May Joint Chiefs chair Admiral Mike Mullen predicted Gates’ position would win out, over the long term. “There are those that see [the F-35 Joint Strike Fighter] as the last manned fighter,” Mullen said. “I’m one that’s inclined to believe that.” General Atomics, which makes the popular Predator line of drones, underscored Mullen’s comment by unveiling its new, faster Predator C.

If Flight Plan proves an accurate predictor, it’s not just manned fighters (maybe) headed for extinction, but (maybe) nuclear bombers, transports, tankers … nearly all human-occupied military planes.

GaryClair
Member # 3164
Robot Scientist Makes Discovery
Eric Bland, Discovery News

April 2, 2009 -- The discovery of 12 new functions for genes in one of the most studied organisms in the world wouldn't be news, except that scientists didn't discover them. A robot named Adam designed and carried out the experiments, and discovered the new gene functions itself.

"Our goal is to make science more efficient," said Ross King, a professor of biology and computer science at the University of Wales and author of a new paper in this week's issue of Science detailing Adam's work.

"If we had computers designing and carrying out experiments we could get through many more experiments than we currently can," said King, adding "robots don't need to take holidays."

The 10-year-old Adam, which is housed at Aberystwyth University in the U.K., might replace humans eventually, but it doesn't look like one. From the outside, Adam is 45 cubic meters of elongated white plastic instruments.

GaryClair
Member # 3164
Scientists Worry Over Super-Smart AI
By Kevin Parrish

Will 2001: A Space Odyssey's HAL 9000 become a reality soon? No, but scientists fear that technology is heading that way.

This weekend John Markoff of The New York Times wrote an interesting article about machines, and how they may eventually outsmart man. His opening paragraph describes three scenarios that are already a reality: a robot that can open doors and track down an electrical outlet to recharge itself, machines that are very close to killing humans autonomously, and unstoppable computer viruses and worms that have reached a "cockroach" stage of machine intelligence.

The good news is that artificial intelligence hasn't reached the "HAL 9000" level of intellect; computers haven't become self-aware, nor will they form any kind of Skynet any time soon. However, many researchers have agreed that the killing robots, as previously mentioned, are in fact here, or will be here soon.

Additionally, progress has raised concerns that robots will take the place of human workers, and that humans will eventually be forced to live with machines that mimic human behaviors. There's also concern that criminals could take advantage of advancements in AI, using a "speech synthesis system" to impersonate another human, for example, or mining smart phones to uncover personal information.

Does that mean super-intelligent machines and artificial intelligence systems will eventually run amok? The researchers attending the conference (mostly) discounted the idea of a spontaneous intelligence stemming from the Internet, and of other highly centralized intelligences outside the web. However, Dr. Eric Horvitz said that computer scientists must respond to the notions nonetheless.

Charles
Administrator
Member # 7
I spoke with someone recently who used to work for Lockheed here in Burbank, and he mentioned that the fighter planes the US is developing are going to be pilotless. Fighter pilots as we know them will operate remotely. The article you posted above, Gary, confirms that.

CavePainter
IE # 297
Member # 2568
The remote pilotless fighter plane has been in the works for quite a while.... the only limitation on the performance and maneuverability of these craft for quite a few years now has been the physical G limitation of their human pilots. As soon as you get rid of the pilot, you can make lighter, smaller, more rugged, cheaper, inherently unstable craft (like the computer-guided X-29 from a few years back) that have exponentially higher performance. Unlike the human animal, they're EXPENDABLE.

In the future (apparently right now), the victor will be not the combatant with the strongest arms, but the one with the smartest brain and the best technology.

Someday war will be conducted by opponents sending a bunch of pilotless aircraft, tanks and robots to fight each other off in the middle of the desert. Everybody will just kick back in the bleachers at the edge of the battlefield, pop open a few beers, grab some hot dogs and watch the game....
[cheers]

Plai
IE # 298
Member # 3529
Where's Dr. Light when we need him? Hopefully he comes back in time to build Mega Man :]

--------------------
Plai.tv

EustaceScrubb
IE # 37
Member # 862
Robot Attack Insurance
Animation Nation © 1999-2012