A programming method previously used in manufacturing can be applied to control swarms of robots, making robotics practical in areas where safety is a concern, says a new study.
A team of researchers from the University of Sheffield applied the novel method to experiments using up to 600 of their 900-strong robot swarm, one of the largest in the world.
Swarm robotics studies how large groups of robots can interact with each other in simple ways to solve relatively complex tasks cooperatively.
In the study, published recently in the journal Swarm Intelligence, the researchers applied supervisory control theory to a swarm of robots for the first time, reducing the need for human input and, therefore, error.
The researchers used a graphical tool to define the tasks they wanted the robots to achieve; a machine then automatically translated these definitions into programs for the robots.
The approach drew on a form of formal language, comparable to using the alphabet in English. It allowed the robots to use an alphabet of their own to construct words, with the ‘letters’ of these words relating to what the robots perceived and to the actions they chose to perform.
Supervisory control theory restricted the robots to choosing only those actions that would eventually result in valid ‘words’. Hence, the behaviour of the robots was guaranteed to meet the specification.
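The idea of a supervisor that admits only actions leading to valid ‘words’ can be illustrated with a small finite-state automaton. The following is a minimal sketch, not the researchers' actual tool: the states, events, and transitions are illustrative assumptions chosen for the example.

```python
# Minimal sketch of supervisory control: a finite-state automaton
# defines which 'letters' (events) may extend the robot's current
# string so that it remains the prefix of a valid 'word'.
# States and events here are hypothetical, not from the study.

SUPERVISOR = {
    # state: {permitted event: next state}
    "searching":   {"see_object": "approaching"},
    "approaching": {"reach_object": "grasping", "lose_object": "searching"},
    "grasping":    {"drop_object": "searching"},
}

def allowed_actions(state):
    """Events the supervisor permits in the given state."""
    return set(SUPERVISOR.get(state, {}))

def step(state, event):
    """Apply an event only if the supervisor allows it."""
    if event not in allowed_actions(state):
        raise ValueError(f"{event!r} not permitted in state {state!r}")
    return SUPERVISOR[state][event]

# A robot that consults the supervisor can only produce valid words:
state = "searching"
for event in ["see_object", "reach_object", "drop_object"]:
    state = step(state, event)
```

Because every robot picks its next action from `allowed_actions` alone, no sequence of events can ever leave the specified language, which is what guarantees the swarm's behaviour meets the specification.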
Previous research used “trial and error” methods to automatically program groups of robots, which can result in unpredictable, and undesirable, behaviour. Moreover, the resulting source code was time-consuming to maintain, which made it difficult to use in the real world.
However, the new research showed that the method could be used in situations where a team of robots must tackle a problem together, with each robot contributing a particular element, which could be hugely beneficial in a range of contexts, from manufacturing to agricultural environments.
“Our research poses an interesting question about how to engineer technologies we can trust — are machines more reliable programmers than humans after all? We, as humans, set the boundaries of what the robots can do so we can control their behaviour, but the programming can be done by the machine, which reduces human error,” said Dr Roderich Gross from the University of Sheffield.
Reducing human error in programming also has potentially significant financial implications.
The global cost of debugging software is estimated at $312 billion annually, and software developers spend, on average, 50 percent of their programming time finding and fixing bugs.