Berkeley robot learns via trial and error method, like human beings

UC Berkeley researchers have built an exciting new type of robot that learns how to do things much like a child does. Dubbed BRETT, short for Berkeley Robot for the Elimination of Tedious Tasks, it differs from similar machines of today in that it needs comparatively little pre-programming and can work outside controlled environments such as medical centers, factories or laboratories.

In other words, BRETT can adapt to new situations, thanks to advances made under Berkeley's People and Robots Initiative. The program works with a relatively nascent form of artificial intelligence called deep learning, which is loosely modeled on the neural circuitry the human brain uses to perceive and interact with everything around it.


BRETT can learn to perform tasks all by itself, through trial and error, just like human beings. Unless robots can figure out how to adapt, they cannot be properly integrated into our lives; all we will have are 'dumb machines' that need specific instructions on what task to do and how to do it. As technologies like Siri and Google's speech-to-text show, it's easier to apply deep learning to software.

Merging that science with motor skills is an entirely different challenge, because physical tasks involve more than the passive processing of images or audio. BRETT works without pre-programmed, labeled directions, worked examples of how to solve a problem and so on. Instead, the Berkeley researchers added a reward function to the robot's learning process and asked it to carry out several motor tasks.
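To give a rough picture of what a reward function might look like, here is a minimal Python sketch; it is not Berkeley's actual code, and the positions, names and distance-based scoring are illustrative assumptions. The idea is simply that a movement that leaves the gripper closer to the target earns a higher score.

```python
import numpy as np

# Hypothetical reward: score a movement by how close it leaves the gripper
# to the target (say, the thread of the bottle cap). Higher is better, so
# actions that bring the robot closer to finishing the task earn more.
def reward(gripper_pos: np.ndarray, target_pos: np.ndarray) -> float:
    return -float(np.linalg.norm(gripper_pos - target_pos))

target = np.array([0.50, 0.10, 0.30])     # illustrative 3-D target position
near_miss = np.array([0.48, 0.11, 0.31])
far_miss = np.array([0.20, 0.40, 0.10])

print(reward(near_miss, target))  # closer pose -> higher (less negative) score
print(reward(far_miss, target))   # farther pose -> lower score
```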


The assignments included stacking LEGO bricks, screwing the cap onto a bottle and fitting wheels onto a toy airplane. Actions that brought the robot closer to completing the job were given higher scores than those that did not. The deep learning technique builds 'neural nets' in which overlapping layers of artificial neurons process raw sensory data. The scores were fed to this neural net in the machine's 'brain'.
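For readers who want to picture what such a neural net does, the toy Python sketch below maps raw sensory input (a fake, flattened camera frame) to motor commands. The layer sizes, names and random weights are made up for illustration and do not describe BRETT's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy stand-in for the robot's 'neural net': a small multilayer perceptron
# that turns raw pixels into motor commands for the arm's joints.
def init_layer(n_in: int, n_out: int) -> tuple[np.ndarray, np.ndarray]:
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

W1, b1 = init_layer(64 * 64, 128)   # raw pixels -> hidden features
W2, b2 = init_layer(128, 7)         # hidden features -> 7 joint commands

def policy(pixels: np.ndarray) -> np.ndarray:
    hidden = np.maximum(0.0, pixels @ W1 + b1)   # ReLU feature layer
    return hidden @ W2 + b2                      # motor command output

frame = rng.random(64 * 64)          # a fake camera frame
print(policy(frame).shape)           # -> (7,), one command per joint
# During learning, the task scores would be used to adjust W1, b1, W2, b2
# so that highly scored movements become more likely.
```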

Based on how highly its various movements were scored, the robot could learn which ones were better for the task at hand. When given the coordinates for the beginning and end of a job, BRETT could finish a basic task within 10 minutes. But when the locations of the objects in the scene were not supplied and the robot had to learn vision and control simultaneously, it took about three hours.
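The trial-and-error idea itself can be sketched very simply: try small variations of a movement, score each one, and keep the best. The Python snippet below is a generic hill-climbing illustration under that assumption; it is far simpler than the policy-optimization methods the Berkeley researchers actually used, and the target pose and scoring are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.array([0.50, 0.10, 0.30])      # where the part needs to end up

def score(movement: np.ndarray) -> float:
    # Hypothetical stand-in for running a movement on the robot and scoring
    # the outcome: ending closer to the target yields a higher score.
    return -float(np.linalg.norm(movement - target))

# Trial and error: start from a rough guess, sample small random variations,
# and keep whichever variation scores highest.
best = np.zeros(3)
for trial in range(200):
    candidates = best + rng.normal(scale=0.05, size=(16, 3))
    scores = np.array([score(c) for c in candidates])
    if scores.max() > score(best):
        best = candidates[scores.argmax()]

print(best, score(best))   # the movement drifts toward the target pose
```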

One day, robots could learn to clean houses or do the laundry without human intervention. But that future is at least 10 years away, according to the team at Berkeley.

Video: BRETT the Robot learns to put things together on his own
