Artificial intelligence brings efficiency to processing pictures and sound

Artificial intelligence researchers have found a new way of approaching a visual reasoning principle called perceptual grouping. A robot learned to group its observations meaningfully without supervision, meaning it was not taught any grouping criteria separately.
“When a robot sees pictures, it learns not only to distinguish between the independent parts of the picture, but also to combine the parts that belong together into wholes and, if necessary, to fill in the missing parts of the picture. For example, a household robot learns to navigate around furniture and other obstacles and to distinguish which objects are situated behind others. The task set for the household robot could be to take hold of a mat whose edges are visible from under the edges of a sofa. To carry out the task, the robot has to learn that the two visible pieces of mat are part of one complete mat and that it is enough to take hold of the mat from one side of the sofa”, explains Antti Rasmus, who is carrying out the research for his doctoral dissertation.
Unsupervised grouping of observations has received little research attention so far, but it could be used, for example, in image processing to separate an image into layers and to select which layers form the final image. Among other things, this makes it easy to remove distracting elements from an image.
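As a rough, hypothetical illustration of what grouping observations into image layers can mean, the following Python sketch assigns every pixel to one of a few groups with an ordinary Gaussian mixture model and reads each group out as its own layer. It is not the researchers' model; the function name, feature choice and layer count are assumptions made purely for the example.

```python
# Minimal sketch: unsupervised grouping of pixels into "layers".
# This illustrates the general idea only and is NOT the Tagger model
# described in the article.
import numpy as np
from sklearn.mixture import GaussianMixture

def split_into_layers(image, n_layers=3):
    """Assign every pixel to one of `n_layers` groups and return one
    masked copy of the image per group."""
    h, w, c = image.shape
    # Per-pixel features: colour plus normalised (x, y) position, so that
    # nearby, similar-looking pixels tend to land in the same group.
    ys, xs = np.mgrid[0:h, 0:w]
    features = np.concatenate(
        [image.reshape(-1, c), xs.reshape(-1, 1) / w, ys.reshape(-1, 1) / h],
        axis=1,
    )
    labels = GaussianMixture(n_components=n_layers, random_state=0).fit_predict(features)
    labels = labels.reshape(h, w)
    # One layer per group: pixels outside the group are zeroed out.
    return [np.where((labels == k)[..., None], image, 0) for k in range(n_layers)]

# Usage (toy data): layers = split_into_layers(np.random.rand(64, 64, 3))
```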
“The property can be utilised in noisy situations too, when there is a need to concentrate on only one sound. In this case the robot is able to separate one audio signal from another”, observes Rasmus.
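The idea of picking out one sound from a noisy mixture can be illustrated, in a very simplified way, with independent component analysis, a standard technique that is unrelated to the model described here; the toy signals and mixing matrix below are invented for the example.

```python
# Toy illustration of separating mixed signals with ICA (not the authors' method).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
speech_like = np.sign(np.sin(2 * np.pi * 5 * t))   # toy "voice" signal
noise_like = rng.standard_normal(t.size)           # toy background noise
sources = np.c_[speech_like, noise_like]

# Two microphones record different mixtures of the two sources.
mixing = np.array([[1.0, 0.5],
                   [0.4, 1.0]])
recordings = sources @ mixing.T

# ICA recovers the underlying sources (up to scale and ordering) without
# ever being told what either source sounds like.
separated = FastICA(n_components=2, random_state=0).fit_transform(recordings)
```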
Deep neural networks, which have previously required large amounts of data, learn more efficiently with the new approach to visual reasoning. Each image contributes more information to the robot's learning task, so every individual image is used more effectively and fewer images are needed than before.
“Movement is also a strong clue to the robot about things that belong together, because parts that are linked to one another always move in the same direction. For example, the robot finds it easier to spot a dog behind a fence when the dog starts to move”, he adds.
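The motion cue can likewise be sketched in a few lines: pixels whose values change together between two consecutive frames are grouped as moving, the rest as static. The function, threshold and frame variables below are illustrative assumptions, not part of the published method.

```python
# Toy sketch of the motion cue: pixels that change between two frames are
# grouped together as "moving". Purely illustrative.
import numpy as np

def moving_mask(frame_prev, frame_next, threshold=0.05):
    """Return a boolean mask of pixels that changed between two frames."""
    change = np.abs(frame_next.astype(float) - frame_prev.astype(float))
    return change.mean(axis=-1) > threshold  # average over colour channels

# Usage (hypothetical `video` array of frames):
# mask = moving_mask(video[0], video[1])
# The True region marks, for instance, the dog that starts to move behind the fence.
```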
The research is being carried out by Antti Rasmus, Mathias Berglund and Tele Hotloo Hao from the Department of Computer Science and The Curious AI Company, Klaus Greff and Jürgen Schmidhuber from IDSIA, the Swiss research laboratory that specialises in artificial intelligence, and The Curious AI Company's CEO Harri Valpola. The research is part of Mr Rasmus' and Mr Berglund's doctoral research.
More information:
Antti Rasmus
Doctoral student
Aalto University, Department of Computer Science
antti.rasmus@aalto.fi
Article: Tagger