As robots become more common in the workplace, the industry is trying to figure out how to integrate the technology with business — both safely and ethically. Those were some of the issues discussed and debated at the recent RoboBusiness conference in Silicon Valley.
The robots are coming.
Actually, they’re already here in greater numbers than ever before.
The RoboBusiness conference in Silicon Valley demonstrated how robots are increasingly finding work in factories and warehouses — without taking any breaks.
The startup Rover Robotics has created a platform that lets any company build robotic applications on top of its base for just $4,000. It's technology used by bomb squads and first responders, and the hope is to spread it to other verticals like agriculture and warehouses.
On the expo floor, as numerous types of robots whizz back and forth, it's apparent that the technology is getting cheaper, faster and smarter.
But on a scale of 1 to 10, with 10 being most concerned, where does Brian Green, Santa Clara University's director of technology ethics, rank his concern about the future of robotics and artificial intelligence?
Green tells an audience, “I’m about at a level of 8 or 9.”
"The world already has plenty of problems in it. And AI and robotics and machine learning technologies are going to take the problems we already have and amplify them," Green told me. "Is this dangerous? Is this risky? And the answer is, I think, clearly yes."
Green's department has developed an ethics curriculum for companies such as Google's lab for secretive projects, known as X.
He cites AI weapons systems as his biggest concern, saying that he can envision the technology getting away from humans.
"We feel like we want to have control, but at the same time we want to delegate lots of power to the technology. And so based on that, what is the balance going to be between that?" asks Green. "And I think if we want to maintain safety of the technology, we need to delegate less rather than more."
When technology gets out of control, that's when lawyers from a robotics and automation practice step in.
Lawyers from Faegre Baker Daniels LLP held a session where they discussed seeing everything from accidental plant shutdowns to employees getting their hands stuck in rollers.
But partner Brian Clifford says AI opens up a new legal frontier.
"When there is no human involved, and X was chosen over Y, how do you deal with culpability?" asks Clifford. "Is it the programmer of the machine that ultimately led to it, even though at some point it becomes its own learning mechanism, or not? But at the end of the day, we are all going to struggle with this."
Clifford says one continuing challenge is that the legal system is always playing catch-up with the pace of technology.