Invited Speakers

Invited speakers (abstracts and bios below)

  • Christian Balkenius, Can robots have empathy?
  • Caswell Barry, Plus ça change - homeostasis and visually driven transitions in place cells
  • Josh Bongard, From rigid to soft to biological robots
  • Benoit R. Cottereau, Emergence of motion and depth selectivity in primate visual cortex through experience-driven plasticity
  • Kerstin Dautenhahn, Interaction Studies with Social Robots
  • Heiko Hamann, Opportunities of Bio-hybrid Systems with Natural Plants: Shaping and Sensing
  • Sabine Hauert, Swarms for people
  • Mehdi Khamassi, Some applications of the model-based / model-free reinforcement learning framework to Neuroscience and Robotics
  • Jeffrey L. Krichmar, Neuromodulation and Behavioral Trade-Offs
  • AJung Moon, The road to designing interactive robots with ethics in mind
  • Andy Philippides, Ants and robots: Insect-inspired visual navigation
  • Tony J. Prescott, Understanding the layered architecture of the mammalian brain through robotics
  • Jenny C. A. Read, Stereoscopically sensitive behaviour without correspondence
  • Francesca Sargolini, Grid cells and spatial navigation
  • Denis Sheynikhovich, A panoramic visual representation in the parietal-medial temporal pathway and its role in spatial and non-spatial behaviors
  • Guy Theraulaz, The collective intelligence of superorganisms
  • Jochen Triesch, Self-calibration of active vision: from brains to robots
  • Elio Tuci, Heterogeneity in swarm robotics as a tool to generate desired collective responses
  • Stéphane Viollet, From insects to robots and vice versa

 

Abstracts

 


Christian Balkenius

Lund University Cognitive Science, Sweden

Can robots have empathy?

ABSTRACT:

It has proven difficult to program robots to follow ethical rules or to reason morally. I will suggest an alternative that grounds the robot's actions in emotional processes, in empathy, and in an understanding of others. This approach could potentially enable robots to learn social norms and moral behavior from the people around them. Central to the idea is the use of generative models to understand the goals of others, essentially interpreting their behavior through a model of oneself. Such models can be used to infer the goals of agents operating in a dynamic environment and to select appropriate actions towards others.
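As a loose illustration of this last idea, the sketch below infers which of two candidate goals an observed agent is pursuing by asking a self-model what it would do under each goal; the grid world, the greedy policy and all names are illustrative assumptions, not the speaker's actual model.

```python
import numpy as np

# Hypothetical toy setup: two candidate goals on a 5x5 grid.
GOALS = [(0, 4), (4, 0)]

def self_model_policy(pos, goal):
    """What *I* would do at `pos` if I wanted `goal`: step greedily toward it."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    dists = [abs(pos[0] + dr - goal[0]) + abs(pos[1] + dc - goal[1])
             for dr, dc in moves]
    probs = np.exp(-np.array(dists, dtype=float))  # softmax on progress to goal
    return moves, probs / probs.sum()

def infer_goal(trajectory):
    """Bayesian inverse planning: P(goal | observed moves) via the self-model."""
    log_post = np.zeros(len(GOALS))
    for pos, move in trajectory:
        for g, goal in enumerate(GOALS):
            moves, probs = self_model_policy(pos, goal)
            log_post[g] += np.log(probs[moves.index(move)])
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# An agent at (2, 2) that keeps moving toward column 4 is probably
# heading for goal (0, 4).
observed = [((2, 2), (0, 1)), ((2, 3), (0, 1))]
print(infer_goal(observed))  # posterior over the two candidate goals
```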

BIO:

Christian Balkenius is a professor of Cognitive Science at Lund University Cognitive Science (LUCS). His main research goal is to construct systems-level computational models of functional subsystems in the mammalian brain and their interaction, using artificial neural networks. His work focuses on various forms of cognitive processes, including sequential processing, categorisation, motivation and action selection, as well as spatial learning, conditioning and habituation. He has published some 200 research papers and articles on neural network based modelling of cognitive processes, robotics, vision and learning theory. Balkenius leads the Lund University Cognitive Robotics Lab and is the director of the graduate school within The Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS).


Caswell Barry

Cell and Developmental Biology Department, UCL London, UK

Plus ça change - homeostasis and visually driven transitions in place cells

The hippocampus occupies a central role in mammalian navigation and memory. Yet an understanding of the rules that govern the statistics and granularity of the spatial code, as well as its interactions with perceptual stimuli, is lacking. We analysed CA1 place cell activity recorded while rats foraged in different large-scale environments. We found that place cell activity was subject to an unexpected but precise homeostasis - the summary statistics of population-level firing being constant at all locations within and between environments. Using a virtual reconstruction of the largest environment, we showed that the rate of transition through this statistically-stable population matches the rate of change in the animals' visual scene. Thus place fields near boundaries were small but numerous, while in the environment interior they were larger but more dispersed. These results indicate that hippocampal spatial activity is governed by a small number of simple laws and in particular suggest the presence of an information-theoretic bound imposed by perception on the fidelity of the spatial memory system.


Josh Bongard

Department of Computer Science, University of Vermont, USA

From rigid to soft to biological robots

ABSTRACT:

Organisms and robots must find ways to return to a viable state when confronted with unexpected internal surprise, such as injury, or external surprise, such as a new environment. Rigid robots can only confront such challenges by adapting behaviorally. Soft robots have the added option of morphological adaptation: changing shape, material properties, topology, plurality, and/or mass. Finally, biological robots -- machines built completely from biological tissues -- inherit the protean nature of their donor organisms, providing them with forms of morphological and behavioral adaptation beyond even today's most morphologically plastic soft robots. In this talk I will review our recent efforts to create biological robots, and how their protean natures have led us to rethink how we approach soft robotics, embodied cognition, and intelligence in general.

BIO:

Josh Bongard is the Veinott Professor of Computer Science at the University of Vermont and director of the Morphology, Evolution & Cognition Laboratory. His work involves automated design and manufacture of soft-, evolved-, and crowdsourced robots, as well as computer-designed organisms: the so-called “xenobots”. A PECASE, TR35, and Cozzarelli Prize recipient, he is the co-author of the book How The Body Shapes the Way We Think, the instructor of a reddit-based evolutionary robotics MOOC, and director of the robotics outreach program Twitch Plays Robotics.


Benoit R. Cottereau

CerCo laboratory (CNRS UMR 5549), SV3M team, France

Emergence of motion and depth selectivity in primate visual cortex through experience-driven plasticity

Neural selectivity in primate visual cortex strongly reflects the statistics of our environment. Although various hypotheses have been proposed to account for this relationship, an explanation as to how the cortex might develop the computational architecture to support these encoding schemes remains elusive. In this talk, I will present recent results from my lab showing how visual experience can modify the way we process and perceive our surrounding space. I will describe a novel approach which combines spiking neural networks with a biologically plausible plasticity rule (spike-timing dependent plasticity or ‘STDP’) to model the emergence of motion and depth selectivity in primate visual cortex. I will compare the outputs of the model to electrophysiological and psychophysical data measured in macaques and humans. Finally, I will show how such an approach can be implemented in adaptive artificial systems.
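For readers unfamiliar with the rule named above, a minimal pair-based STDP weight update can be sketched as follows; the amplitudes and time constants are illustrative assumptions, not the parameters of the presented model.

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # exponential time constants in ms (assumed)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre (ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fires before post: causal pairing, potentiate
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:        # post fires before pre: anti-causal pairing, depress
        return -A_MINUS * np.exp(dt / TAU_MINUS)

# Repeated causal pairings strengthen a synapse; anti-causal ones weaken it.
w = 0.5
for t_pre, t_post in [(10, 15), (40, 45), (80, 75)]:
    w = float(np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0))
print(w)
```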


Kerstin Dautenhahn

Social and Intelligent Robotics Research Laboratory, University of Waterloo, Canada

Interaction Studies with Social Robots

ABSTRACT:

The talk will survey findings from some recent studies that my research group at the University of Waterloo has conducted over the past two years. This includes the challenge of how to continue research and enable students to graduate during Covid-19, when in-person experiments were prohibited, which required a rethinking of how we conduct research in this domain. I will further discuss application areas that we are exploring in terms of developing robots as socially assistive tools. These include robots that facilitate robot-assisted play for children with upper-limb challenges, as well as social robots for exploring bullying interventions for school-aged children.

BIO:

Kerstin Dautenhahn, IEEE Fellow, is Full Professor and Canada 150 Research Chair in Intelligent Robotics at the University of Waterloo in Ontario, Canada. She has a joint appointment with the Departments of Electrical and Computer Engineering and Systems Design Engineering and is cross-appointed with the David R. Cheriton School of Computer Science. In Waterloo she directs the Social and Intelligent Robotics Research Laboratory. Her research areas are social robotics, human-robot interaction, assistive robotics, and cognitive and developmental robotics. She has published more than 100 peer-reviewed journal articles (h-index 84) and frequently gives invited keynote presentations at international conferences. She holds several senior editorial roles in international journals.


Heiko Hamann

Department of Computer and Information Science, University of Konstanz, Germany

Opportunities of Bio-hybrid Systems with Natural Plants: Shaping and Sensing

Natural plants are still rather underrepresented in modern research on bio-hybrid systems. However, plants offer many interesting features, such as adaptive behavior, growth, and sensitive physiological systems. We have explored engineering methods to autonomously shape plants. One key challenge was to develop data-driven simulators of a plant's growth and motion. We can exploit natural adaptive behaviors to build, for example, systems that self-repair. In a more recent approach we use plants as sensors, which is called phytosensing. The idea is to correlate, for example, specific forms of air pollution with measurable effects in the plant. These ideas can hopefully help to shape and protect our future cities and prepare them for upcoming challenges.
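As a rough sketch of the phytosensing idea (calibrate a measurable plant response against a pollutant, then read the plant as a sensor), consider the toy regression below; the synthetic data and the linear relation are illustrative assumptions, not results from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: a pollutant level and the plant's
# measurable response (e.g. an electrophysiological signal).
pollution = rng.uniform(0, 100, size=200)
plant_signal = 0.8 * pollution + rng.normal(0, 5, size=200)

# Calibrate: least-squares regression of pollution on the plant signal.
A = np.vstack([plant_signal, np.ones_like(plant_signal)]).T
slope, intercept = np.linalg.lstsq(A, pollution, rcond=None)[0]

def read_plant(signal):
    """Estimate the pollutant level from a new plant measurement."""
    return slope * signal + intercept

print(read_plant(40.0))  # estimated pollution for a plant signal of 40
```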


Sabine Hauert

Bristol Robotics Laboratory, University of Bristol, UK

Swarms for people

As tiny robots become individually more sophisticated, and larger robots easier to mass produce, a breakdown of conventional disciplinary silos is enabling swarm engineering to be adopted across scales and applications, from nanomedicine to treat cancer, to cm-sized robots for large-scale environmental monitoring or sophisticated robots for intralogistics. This convergence of capabilities is facilitating the transfer of lessons learned from one scale to the other. Cm-sized robots that work in the 1000s may operate in a way similar to reaction-diffusion systems at the nanoscale, while sophisticated microrobots may have individual capabilities that allow them to achieve swarm behaviour reminiscent of larger robots with memory, computation, and communication. Although the physics of these systems are fundamentally different, much of their emergent swarm behaviours can be abstracted to their ability to move and react to their local environment. This presents an opportunity to build a unified framework for the engineering of swarms across scales that makes use of machine learning to automatically discover suitable agent designs and behaviours, digital twins to seamlessly move between the digital and physical world, and user studies to explore how to make swarms safe and trustworthy. Such a framework would push the envelope of swarm capabilities, towards making swarms for people.
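As a toy illustration of that abstraction (swarm behaviour emerging from agents that only move and react to their local environment), the sketch below shows aggregation arising from a purely local rule; the rule and parameters are assumptions made for illustration, not a framework from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(0, 10, size=(50, 2))   # 50 agents in a 10x10 arena

def step(pos, radius=2.0, speed=0.05):
    """Each agent drifts toward the centroid of its nearby neighbours only."""
    new = pos.copy()
    for i, p in enumerate(pos):
        neighbours = pos[np.linalg.norm(pos - p, axis=1) < radius]
        new[i] += speed * (neighbours.mean(axis=0) - p)  # purely local information
    return new

for _ in range(500):
    pos = step(pos)
print(pos.std(axis=0))  # the spread shrinks as the swarm aggregates
```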

BIO:

Sabine Hauert is Associate Professor (Reader) of Swarm Engineering at the University of Bristol. She leads a team of 20+ researchers working on making swarms for people, and across scales, from nanorobots for cancer treatment to larger robots for environmental monitoring or logistics (https://hauertlab.com/). Before joining the University of Bristol, Sabine engineered swarms of nanoparticles for cancer treatment at MIT and deployed swarms of flying robots at EPFL. Her recent work has been published in Science Robotics, Science Advances, Advanced Intelligent Systems, Nature Machine Intelligence, RA-L, and at ICRA/IROS. She is PI or Co-I on more than 20M GBP in grant funding and has served on national and international committees, including the UK Robotics Growth Partnership, the Royal Society Working Group on Machine Learning and Data Community of Interest, and several IEEE boards. She is President and Executive Trustee of the non-profits robohub.org and aihub.org, which connect the robotics and AI communities to the public. As an expert in science communication, she is often invited to speak with media and at conferences (over 50 invited talks).


Mehdi Khamassi

ISIR, Sorbonne University, France

Some applications of the model-based / model-free reinforcement learning framework to Neuroscience and Robotics

The model-free reinforcement learning (RL) framework, and in particular temporal-difference learning algorithms, has been successfully applied to Neuroscience for about 20 years. It can account for dopamine reward prediction error signals in simple Pavlovian and single-step decision-making tasks. However, more complex multi-step tasks employed both in Neuroscience and Robotics illustrate its computational limitations. In parallel, the last 10 years have seen a growing interest in computational models for the coordination of different types of learning algorithms, e.g. model-free and model-based RL. Such coordination makes it possible to explain more diverse behaviors and learning strategies in humans, monkeys and rodents. It also seems a promising way to endow robots (and more generally autonomous agents) with the ability to autonomously decide which learning strategy is appropriate in different encountered situations, while at the same time minimizing computation cost. In particular, I will show some results in robot navigation and human-robot interaction tasks where the robot learns (1) which strategy (either model-free or model-based) is the most efficient in each situation, and (2) which strategy has the lowest computation cost/time when both strategies offer the same performance in terms of reward obtained from the environment.

These robotic applications of a neuro-inspired framework for the coordination of model-based and model-free reinforcement learning provide new insights into the dynamics of learning in more realistic, noisy, embodied situations, which can bring feedback and novel hypotheses to Neuroscience and Psychology.
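A minimal sketch of the arbitration criterion described in point (2) above might look like the following: keep the cheap model-free expert unless the model-based expert clearly earns more reward. The interface, thresholds and update rule are illustrative assumptions, not the published algorithm.

```python
import random

class Expert:
    """A learning strategy with a running reward estimate and a compute cost."""
    def __init__(self, name, cost):
        self.name, self.cost = name, cost
        self.avg_reward = 0.0

    def update(self, reward, lr=0.1):
        self.avg_reward += lr * (reward - self.avg_reward)

def choose(mf, mb, margin=0.05):
    """Use the costly model-based expert only when it clearly pays off."""
    return mb if mb.avg_reward - mf.avg_reward > margin else mf

mf = Expert("model-free", cost=1.0)
mb = Expert("model-based", cost=10.0)
for _ in range(1000):
    expert = choose(mf, mb)
    reward = random.random()   # stand-in task: both strategies perform alike
    expert.update(reward)
print(choose(mf, mb).name)     # -> "model-free": same reward, lower cost wins
```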

BIO:


Mehdi Khamassi is a permanent research scientist employed by the CNRS and working at the Institute of Intelligent Systems and Robotics, Sorbonne Université, Paris, France. He has been trained with a double background in machine learning/robotics and experimental/computational neuroscience. His main research interests include decision-making, reinforcement learning, performance monitoring, meta-learning, and reward signals in social and non-social contexts.


Jeffrey L. Krichmar

Department of Cognitive Sciences, University of California, Irvine, USA

Neuromodulation and Behavioral Trade-Offs

ABSTRACT:

Biological organisms need to consider many trade-offs to survive. These trade-offs regulate basic needs, such as whether to forage for food, which might expose the animal to predators, or hide in one's home, which is safer but does not provide sustenance. Trade-offs also appear in cognitive functions, such as introverted or extroverted behavior. Remarkably, many of these trade-offs are regulated by chemicals in our brain and body, such as neuromodulators or hormones. Neuromodulators send broad signals to the brain that can dramatically change behaviors, moods, and decisions. The brain can control these modulatory and hormonal systems by setting a context or making an adjustment when there are prediction errors. The interaction between these neuromodulators can tip the balance from one behavior to another. In this talk, I will explore several of these behavioral trade-offs and the neuroscience behind them. I will go on to show how applying these concepts to robots and models results in behavior that is more interesting and more realistic.
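As a hedged sketch of that last mechanism, the toy below lets a single modulatory gain (a softmax inverse temperature) tip the forage/hide balance, with prediction errors adjusting the gain; the names and dynamics are illustrative assumptions, not the speaker's published models.

```python
import math
import random

values = {"forage": 0.0, "hide": 0.0}   # learned action values
beta = 1.0   # modulatory gain: high -> decisive/exploit, low -> exploratory

def select(values, beta):
    """Softmax action selection; beta plays the neuromodulator's role."""
    exps = {a: math.exp(beta * v) for a, v in values.items()}
    r = random.random() * sum(exps.values())
    for action, e in exps.items():
        r -= e
        if r <= 0:
            return action
    return action

for _ in range(500):
    action = select(values, beta)
    # Foraging pays more on average but is riskier than hiding (assumed).
    reward = random.gauss(0.6, 0.2) if action == "forage" else random.gauss(0.3, 0.05)
    err = reward - values[action]      # prediction error
    values[action] += 0.1 * err
    # Surprising outcomes lower the gain (explore); calm ones raise it.
    beta = min(5.0, max(0.2, beta + 0.05 * (0.1 - abs(err))))
print(values, round(beta, 2))
```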

BIO:

Jeffrey L. Krichmar received a B.S. in Computer Science in 1983 from the University of Massachusetts at Amherst, an M.S. in Computer Science from The George Washington University in 1991, and a Ph.D. in Computational Sciences and Informatics from George Mason University in 1997. He spent 15 years as a software engineer on projects ranging from the PATRIOT Missile System at the Raytheon Corporation to air traffic control for the Federal Systems Division of IBM. In 1997, he became an assistant professor at The Krasnow Institute for Advanced Study at George Mason University. From 1999 to 2007, he was a Senior Fellow in Theoretical Neurobiology at The Neurosciences Institute. He is currently a professor in the Department of Cognitive Sciences and the Department of Computer Science at the University of California, Irvine. His research interests include neurorobotics, embodied cognition, biologically plausible models of learning and memory, and the effect of neural architecture on neural function.

 


AJung Moon

Electrical and Computer Engineering, McGill University, Canada

The road to designing interactive robots with ethics in mind

ABSTRACT:

Interactive robots promise to address some of the world's toughest problems. However, unexpected social, ethical, and legal issues can arise as we design systems that interact with people. For instance, should a robot be designed to always yield to humans when limited resources are concerned? What if we know that humans will always yield when robots behave in certain ways? What norms should we follow to incorporate ethics into the design of interactive robots? More importantly, how can we even begin to integrate ethics into design when discussions about what is 'right' and 'wrong' always seem to lead to more questions than solutions? Building on the latest findings in human-robot interaction, this talk will present new ways to ground the discussions of ethics in technical design.

BIO:

AJung Moon is an experimental roboticist specializing in ethics and responsible design of interactive robots and AI systems. At McGill University, she directs the McGill Responsible Autonomy & Intelligent System Ethics (RAISE) lab, an interdisciplinary group that investigates what it means for engineers to design and deploy autonomous systems responsibly for a better, technological future. Prior to joining McGill, she served as Senior Advisor for the UN Secretary-General’s High-level Panel on Digital Cooperation, CEO of an AI ethics consultancy, and founder of the Open Roboethics Institute.


Andy Philippides

Centre for Computational Neuroscience and Robotics, University of Sussex, UK

Ants and robots: Insect-inspired visual navigation

ABSTRACT:

The use of visual information for navigation is a universal strategy for sighted animals, amongst whom desert ants are particular experts. Despite having brains of only a million neurons and low-resolution vision equivalent to a 1-kilobyte camera, desert ants learn long paths through complex terrain after only a single exposure to the training data. Such rapid learning with small brains is possible because learning is an active process scaffolded by specialised behaviours which have co-evolved with the ant's brain and sensory system to robustly solve the single task of navigation. In this talk, I will show how an agent – insect or robot – can robustly navigate without ever knowing where it is, without specifying when or what it should learn, and without being required to recognise specific objects or places. This leads to an algorithm in which visual information specifies actions, not locations, and in which route navigation is recast as a search for familiar views. I will show that this simplification allows the information needed to robustly navigate routes to be rapidly encoded by a single-layer neural network, making it plausible for small-brained animals and lightweight robots with all computation on-board.
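A minimal sketch of the view-familiarity idea is given below: store views seen along the route, then steer toward whichever current view is most familiar. This toy uses a nearest-view familiarity score rather than the single-layer network mentioned above, which is my own simplifying assumption.

```python
import numpy as np

def familiarity(view, memory):
    """Higher when `view` resembles some previously stored route view."""
    return -min(np.sum((view - m) ** 2) for m in memory)

def best_heading(panorama, memory):
    """`panorama[h]` is the (downsampled) view when facing heading h."""
    scores = [familiarity(v, memory) for v in panorama]
    return int(np.argmax(scores))  # steer toward the most familiar view

# Toy example: 8 headings with 16-pixel views; the route memory holds a
# slightly noisy copy of the view at heading 3, so heading 3 is recovered.
rng = np.random.default_rng(2)
panorama = rng.random((8, 16))
memory = [panorama[3] + 0.01 * rng.random(16)]
print(best_heading(panorama, memory))  # -> 3
```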

BIO:

I am Professor of Biorobotics at the University of Sussex where I co-direct the Centre for Computational Neuroscience and Robotics, an interdisciplinary research group at the interface of Artificial Intelligence and Neuroscience, and be.AI: the Leverhulme Doctoral Centre for Biomimetic Embodied AI. By considering intelligence as an active process in which adaptive behaviour emerges from the interaction of body, brain and environment, my research aims to both better understand intelligence and develop novel AI and biorobotic algorithms. This is exemplified by my work on robotic visual navigation and exploration inspired by the remarkable visual navigation and learning abilities of ants and bees.

 

 


Tony J. Prescott

Sheffield Robotics, University of Sheffield, UK

Understanding the layered architecture of the mammalian brain through robotics

ABSTRACT:

The functional organization of the mammalian brain is widely considered to form a layered control architecture. However, how this architecture contributes to adaptive behavior, how it emerged through evolution, and how it is constructed during development remain amongst the most challenging questions in science. Our research explores the role of layered brain architectures in two contexts: active touch sensing in mammals and the sense of self in humans. Recently, we have also applied the framework of constraint closure, viewed as a general characteristic of living systems, to the problem of brain organization. This analysis draws attention to the capacity of layered brain architectures to scaffold themselves across multiple timescales. This talk will explore how the approach of building control systems for autonomous robots, incorporating computational neuroscience models at different levels of abstraction, can help illuminate the organisation of the brain, viewed as a layered control architecture, whilst also creating opportunities for technological innovation.

 

BIO:

Tony Prescott (he/him) is a Professor of Cognitive Robotics at the University of Sheffield who develops robots that resemble animals, including humans. His goal is both to advance the understanding of biological life and to create useful new technologies such as assistive and educational robots. With his collaborators he has developed several animal-like robots, including the whiskered robots Scratchbot and Shrewbot, and the pet-like robot MiRo-e, which is currently in use for research and education and is being explored as a potential therapeutic tool for children with anxiety. He has published over 200 refereed articles and journal papers at the intersection of psychology, brain science and robotics.

 


Jenny C. A. Read

Biosciences Institute, Newcastle University, UK 

Stereoscopically sensitive behaviour without correspondence

Stereoscopic vision, or stereopsis, is generally assumed to require stereo correspondence, i.e. identifying which point in the left eye’s image corresponds to the same scene object as a given point in the right. However, the discovery of stereopsis in small-brained animals such as the praying mantis motivates us to think about what can be achieved with simpler forms of stereopsis. Insect stereopsis likely evolved to produce simple behaviour, such as orienting towards the closer of two objects or triggering a strike when prey comes within range, rather than to achieve a rich perception of scene depth. I will show that this sort of adaptive behaviour can be produced with very basic stereoscopic algorithms which make no attempt to achieve fusion or correspondence, or to produce even a coarse map of depth across the visual field. Such algorithms may be all that is required for insects, and may also prove useful in some autonomous applications.
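One way to make this concrete: rather than solving correspondence across the whole image, check whether anything correlates between the two eyes at the single disparity that corresponds to strike range. The sketch below is an illustrative assumption in that spirit, not the speaker's published algorithm; the threshold and disparity values are arbitrary.

```python
import numpy as np

STRIKE_DISPARITY = 4   # pixel shift corresponding to "prey within range" (assumed)

def strike_signal(left, right):
    """Binocular correlation after shifting by the strike disparity only."""
    shifted = np.roll(right, STRIKE_DISPARITY)
    l = left - left.mean()
    r = shifted - shifted.mean()
    return float(np.dot(l, r) / (np.linalg.norm(l) * np.linalg.norm(r) + 1e-9))

def should_strike(left, right, threshold=0.8):
    return strike_signal(left, right) > threshold

# A target at strike range appears shifted between the eyes; no depth map
# or point-by-point matching is ever computed.
rng = np.random.default_rng(3)
scene = rng.random(64)
left_eye = scene
right_eye = np.roll(scene, -STRIKE_DISPARITY)
print(should_strike(left_eye, right_eye))  # -> True
```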


Francesca Sargolini

Laboratoire de neurosciences cognitives de Marseille, Aix-Marseille University, France

Grid cells and spatial navigation

Grid cells are spatially selective neurons in the entorhinal and parahippocampal cortices that fire in a hexagonal grid-like pattern covering the entire surface explored by the animal. It has been suggested that their activity provides the brain with an invariant metric map that allows rats (and mammals more generally) to navigate in space using mainly self-motion. However, recent studies show that grid cell activity is also modulated by environmental (allocentric) cues, a result that is consistent with the fact that entorhinal and parahippocampal lesions impair different spatial abilities. In this talk I will present recent data showing that grid cells compute information necessary for both self-motion-based and allocentric navigation. Moreover, these data suggest that grid cells may be implicated in organizing experience according to a specific temporal structure. This function is a key aspect that may reconcile the two opposing views on the role of grid cells in spatial navigation.


Denis Sheynikhovich

Sorbonne Université, UPMC, Institut de la Vision, Paris, France

A panoramic visual representation in the parietal-medial temporal pathway and its role in spatial and non-spatial behaviors

While an important role of the primate hippocampus in visual perception is suggested by steadily mounting evidence, the processing of visual information on its way to memory structures and its contribution to the generation of visuospatial behaviors is poorly understood. Recent imaging studies suggest the existence of scene-sensitive areas in the dorsal visual stream that are likely to combine visual information from successive egocentric views, whereas behavioral evidence indicates the memory of surrounding visual space in extraretinal coordinates. The present work explores the idea that a high-resolution panoramic visual representation links visual perception and mnemonic functions during natural behavior. In a spiking neural network model of the parietal-medial temporal pathway of the dorsal visual stream it is shown that such a representation can mediate hippocampal involvement in a range of spatial and non-spatial tasks, accounting for the proposed role of this structure in scene perception, representation of physical and conceptual spaces, serial image memorisation and spatial reorientation. More generally, the model predicts a common role of view-based allocentric memory storage in spatial and visual behavior.


Guy Theraulaz

Research Center on Animal Cognition, CNRS, Université Paul Sabatier, France

The collective intelligence of superorganisms

In this talk I will review our current knowledge of the behavioral mechanisms underlying collective intelligence in animal societies such as social insect colonies, fish schools and flocks of birds. These mechanisms enable a group of individuals to coordinate their actions, solve a wide variety of problems and act as a single superorganism. Collective intelligence emerges from rudimentary and local interactions among individuals, during which they exchange information. Despite their simplicity, these social interactions allow groups of individuals to collectively process information and self-organize. To study collective intelligence phenomena in animal societies, we developed a general methodology that consists of monitoring and quantifying the behaviors of individuals and, in parallel, the collective organization and behaviors at the group level, and then connecting both levels by means of computational models. This approach, which tightly combines experiments with models, has revealed how individuals combine information about the behavior of their neighbors with environmental cues to coordinate their own actions and make collective decisions.

BIO:

Guy Theraulaz is a senior research fellow at the National Center for Scientific Research (CNRS) and an expert in the study of collective animal behaviors. He is also a leading researcher in the field of swarm intelligence, primarily studying social insects but also distributed algorithms, e.g. for collective robotics, directly inspired by nature. His research focuses on understanding a broad spectrum of collective behaviors in animal societies by quantifying and then modeling individual-level behaviors and interactions, thereby elucidating the mechanisms generating the emergent, group-level properties. He was one of the main figures in the development of quantitative social ethology and collective intelligence research in France. He has published many papers on nest construction in ant and wasp colonies, collective decision-making in ants and cockroaches, and collective motion in fish schools and pedestrian crowds. He has also coauthored five books, among them Swarm Intelligence: From Natural to Artificial Systems (Oxford University Press, 1999) and Self-Organization in Biological Systems (Princeton University Press, 2001), which are now considered reference textbooks.


Jochen Triesch

FIAS Frankfurt Institute for Advanced Studies, Germany

Self-calibration of active vision: from brains to robots

Biological vision systems learn to actively perceive the world with little to no external guidance. What principles drive this learning and how can it be emulated in robots? Active Efficient Coding is a theoretical framework that aims to explain such learning based on information theoretic principles. It posits that biological sensory systems strive to learn efficient representations of their sensory inputs while also adapting their behavior, in particular their eye movements, to optimize sensory coding. For example, Active Efficient Coding models have successfully explained the self-calibration of active stereo and motion vision, reproducing many biological findings and making novel predictions. At the same time, these models have been validated on biomimetic robots. Recently, the approach has also been applied to a completely different active perception system: echo location in bats. In this talk, I will review the fundamentals of Active Efficient Coding and describe recent developments and future challenges. In particular, I will discuss how far we still are from building biomimetic vision systems that autonomously learn to perceive the world.
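A minimal sketch of the Active Efficient Coding principle described above: encode both eyes with one shared code, so that aligned (redundant) input is cheapest to encode, and pick the eye movement that minimises the coding error. The shared-code coder and the toy input are illustrative assumptions, not the models discussed in the talk.

```python
import numpy as np

def coding_error(left, right):
    """One shared code for both eyes: redundant input reconstructs cheaply."""
    code = 0.5 * (left + right)                   # shared binocular code
    return float(np.sum((left - code) ** 2) + np.sum((right - code) ** 2))

def pick_vergence(left, right, candidates=range(-4, 5)):
    """Choose the vergence shift whose fused input is cheapest to encode."""
    return min(candidates, key=lambda s: coding_error(left, np.roll(right, s)))

# If the right image is the left image shifted by -3 pixels (a disparity),
# a counter-shift of +3 aligns the eyes and minimises the coding error:
# behaviour (vergence) is adapted to optimise sensory coding.
rng = np.random.default_rng(4)
left = rng.random(16)
right = np.roll(left, -3)
print(pick_vergence(left, right))  # -> 3
```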

 


 

Elio Tuci

Department of Computer Science, University of Namur, Belgium

Heterogeneity in swarm robotics as a tool to generate desired collective responses

Swarm robotics is a sub-domain of a larger research area dedicated to the design and control of multi-robot systems. Swarm robotics draws inspiration from the behaviour of social insects. In swarm robotics, each robot has its own control system, perception is based on sensors mounted directly on the robot's body, and communication between robots is relatively simple. So far, most of the research in swarm robotics has been devoted to so-called homogeneous robot swarms: swarms in which all robots are morphologically and functionally identical. A significant share of research has also been devoted to so-called heterogeneous robot swarms: swarms in which robots differ in their hardware, control software, or both. My analysis of the swarm robotics literature points to the fact that heterogeneity in robot swarms has not been fully exploited, despite the potential advantages it can offer. I illustrate the results of a series of studies that explore a novel perspective in which heterogeneity takes centre stage and becomes the main design variable on which one acts to obtain desired collective behaviours. The results of this work offer the swarm robotics community effective and innovative methods to design and control robots' collective responses.
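As a toy illustration of heterogeneity as a design variable, the sketch below mixes two robot types with different response thresholds; changing only the mixing ratio, not the controllers, reshapes the group-level response. The threshold model is an illustrative assumption, not one of the studies mentioned above.

```python
def collective_response(stimulus, n_eager, n_cautious,
                        eager_thr=0.2, cautious_thr=0.8):
    """Fraction of the swarm responding to a stimulus in [0, 1]."""
    responders = (n_eager if stimulus > eager_thr else 0) + (
        n_cautious if stimulus > cautious_thr else 0)
    return responders / (n_eager + n_cautious)

# Acting only on the swarm's composition tunes the collective response curve.
for n_eager in (10, 50, 90):
    curve = [collective_response(s, n_eager, 100 - n_eager)
             for s in (0.1, 0.5, 0.9)]
    print(n_eager, curve)
```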


 

Stéphane Viollet

Biorobotics, Aix Marseille University, France

From insects to robots and vice versa

The biorobotic approach is a meeting point where robotics and neuroscience are used to try to explain the behaviour of animals, especially winged insects (flies, bees, wasps...), and to model the processing of the sensory modalities at work in these outstanding animals. Neurophysiology is also used to better understand the sensorimotor reflexes at work in insects. The robots are a kind of embodiment of this insect-based knowledge, used to validate our models. Recent studies carried out at our laboratory have focused on foldable aerial robots inspired by birds and on ant-inspired navigation strategies by means of a celestial compass. Several questions will be addressed: could future robotic applications benefit greatly from skylight polarization? What will future morphing robots look like? Several bio-inspired visual sensors, as well as bio-inspired robots, will be presented in this talk.
