I shall present three examples of bio-inspired problems that are tackled by optimal control methods, and in particular by Pontryagin's maximum principle. The first is the problem of arm movements, which was the subject of my talk at the centennial commemoration of Lev Pontryagin. In this study we discovered a biomechanical principle (the inactivation principle) that explains the presence of periods of total muscle silence during human arm movements. The second subject is also an inverse optimal control problem: reconstructing the cost minimized by good aircraft pilots, with the aim of giving autonomy to HALE drones. Here we have a general abstract result for a certain class of inverse optimal control problems. The third subject is the validation of Jean Petitot's theory of the geometry of vision. Very special optimal control problems appear in this setting, which are in fact problems of sub-Riemannian geometry. Besides the optimal control questions, certain hypoelliptic diffusion operators appear naturally here; they can be considered the equivalent of the heat equation in Riemannian geometry. We present some very recent results on (human-like) image reconstruction using these operators.
12.30 Lunch Buffet at Neptune Palais
13.30 - Session 2: Motion Planning
Why do humans move the way they do? Direct and inverse optimal control
We address the question of motor planning in humans. Assuming that human movements are optimal, the problem is to identify the underlying cost function. Identifying this cost is important because it may unveil the variables represented by the brain during the planning of actions, and also for all applications in which generating human-like motion is desired: humanoid robotics, neural prosthetics, motor rehabilitation or character animation. Historically, motor control researchers tackled this issue with direct optimal control approaches: a cost function is designed a priori and then compared with experimental data. More recently, inverse optimal control approaches have been considered to objectively infer the cost function (or some of its properties) directly from experimental observations. I will first present results on the control of single-joint arm movements. Using Pontryagin's maximum principle and a non-smooth yet physically grounded cost function (thus a direct approach), we derived an inactivation principle, a singular prediction that we verified experimentally. The process can in fact be reversed (leading to an inverse approach) by showing that if some optimal trajectories contain such inactivation, then the cost function cannot be smooth with respect to the control variable. Therefore, the cost accounting for human arm movements must include a term similar to the absolute work of the torques, i.e. a measure of mechanical energy expenditure. Second, with the aim of identifying all the components of a possibly composite cost function, I will present a new experimental paradigm involving target redundancy, developed to aid discrimination between cost functions. Based on a family of candidate costs already proposed in the literature, we used state-of-the-art numerical methods to find out which linear combination best accounted for the observed arm trajectories.
Results show that the observed arm trajectories are most compatible with the goal of moving with both minimal mechanical energy expenditure and maximal joint smoothness.
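As a concrete illustration (in our own simplified notation, not the exact functional identified in the study), the nonsmooth term described above can be written, for a single-joint arm with angle \(\theta\) and net torque \(u\), as:

```latex
C(u) \;=\; \int_0^T \Big( \underbrace{|\,u(t)\,\dot\theta(t)\,|}_{\text{absolute work}} \;+\; \alpha\, u(t)^2 \Big)\, dt
```

The absolute-work term \(|u\,\dot\theta|\) is nondifferentiable at \(u = 0\); applying Pontryagin's maximum principle to such a cost admits optimal arcs along which \(u\) vanishes over a whole time interval (inactivation), something that cannot occur when the cost is smooth in \(u\).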
Optimal control problems for human locomotion: stability and robustness of minimizers
Recent papers have proposed models of human locomotion based on optimal control. In this paradigm, trajectories are assumed to be solutions of an optimal control problem whose cost has to be determined.
In the approach initiated and developed by Arechavaleta et al. (reference below), goal-oriented human locomotion is understood as the motion in 3-D space of both the position (x,y) and the orientation r of a person walking from an initial configuration (x_0,y_0,r_0) to a final configuration (x_1,y_1,r_1). The locomotor trajectories are assumed to be solutions of an optimal control problem with cost function l.
It is reasonable to assume that the highest-order time derivative of r appearing in the control system is the second derivative. The class L of admissible optimal costs can then be divided into two sub-classes, L_1 and L_2, depending on whether the highest derivative of the angle r explicitly appearing in the cost is the first or the second one.
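A typical control system and cost from this line of work can be sketched as follows (illustrative notation with forward speed normalized to 1; the exact model is in the cited literature):

```latex
\dot x = \cos r,\qquad \dot y = \sin r,\qquad \ddot r = u,
\qquad \min_{u}\ \int_0^T l\big(x,y,r,\dot r,u\big)\,dt ,
```

where costs in L_1 depend on the angle only through \(r\) and \(\dot r\), while costs in L_2 also involve \(\ddot r = u\) explicitly.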
We prove strong convergence results for the solutions of the optimal control problem, on the one hand under perturbations of the initial and final points (stability), and on the other hand under perturbations of the cost (robustness).
We also develop a test, to be carried out on experimental data, to determine to which of the two classes L_1 and L_2 the cost belongs.
(Joint work with F. Jean and P. Mason)
G. Arechavaleta, J.-P. Laumond, H. Hicheur, and A. Berthoz, "An optimality principle governing human walking", IEEE Transactions on Robotics, 2008.
Online Handwritten Signatures Modelling through the Minimum Jerk Principle
J. Canuto, PhD student at Institut Mines-Telecom, Telecom SudParis
Handwritten signatures result from voluntary and typically complex gestures of the human hand. The main use of handwritten signatures is document authentication, but less straightforward uses may include, for instance, the detection of neuromotor and motor-planning disorders (such as Parkinson's disease) or even daily stress measurement. Beyond these potential applications, however, modelling the gestures behind signatures is a challenging enough matter for scientific research in its own right. In this brief work, we focus on a physiologically grounded model, based on the Minimum Jerk criterion, applied to signatures represented as sequences of position coordinates acquired at a fixed sampling rate on a recording device such as a tablet or smartphone (online signatures).
In authentication tasks, the signature signal is often decomposed into elementary segments called "strokes", but no agreement can be found in the literature on the definition of this notion. Usually, a stroke corresponds to a portion of the signature between two minima or maxima. This notion can be loosely related to the signing gesture, but this relationship is rarely made explicit. We propose using the minimum jerk principle for defining strokes; to the authors' knowledge, this is its first use in signature representation.
We consider a complex trajectory, such as that of a signature, as a sequence of such polynomial strokes, yielding a piecewise-polynomial representation of the trajectory. We propose a novel iterative scheme for applying this minimum-jerk-based segmentation to any complex trajectory.
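As an illustration of the stroke model (a minimal sketch in our own notation; the iterative segmentation scheme itself is the authors'), the classical minimum-jerk profile between two rest points is a fifth-order polynomial (Hogan, 1984):

```python
import numpy as np

def min_jerk_stroke(x0, x1, n=100):
    """Minimum-jerk position profile between x0 and x1 with zero
    velocity and acceleration at both endpoints (Hogan, 1984)."""
    tau = np.linspace(0.0, 1.0, n)          # normalized time in [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # minimum-jerk shape
    return x0 + (x1 - x0) * s

def fit_error(segment):
    """RMS deviation of an observed 1-D segment from the minimum-jerk
    stroke sharing its endpoints (could serve to accept or split a
    candidate stroke during segmentation)."""
    seg = np.asarray(segment, dtype=float)
    model = min_jerk_stroke(seg[0], seg[-1], len(seg))
    return float(np.sqrt(np.mean((seg - model) ** 2)))
```

A piecewise representation then stores only the endpoints (and durations) of each stroke instead of every sample point.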
To evaluate the goodness of fit of the model we use the velocity SNR (signal-to-noise ratio), since velocity reconstruction is the main drawback previously pointed out for the Minimum Jerk model. Moreover, we compute the compression rate of the representation (the ratio between the number of resulting coefficients and the number of points in the original signature).
As pointed out in previous work, a 15 dB SNR is sufficient for movement-modelling applications. At that SNR level we observe a compression rate of 73.3%, which should decrease the computational burden in authentication tasks. Moreover, since the minimum jerk principle is directly related to the smoothness of the movement, there should be a correlation between the number of detected strokes and the effects of neuromotor and motor-planning disorders.
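The two evaluation measures can be sketched directly from their definitions above (our own minimal implementation, not the authors' evaluation code):

```python
import numpy as np

def velocity_snr_db(v_obs, v_model):
    """Velocity signal-to-noise ratio in dB between an observed velocity
    profile and its model reconstruction: 10*log10(P_signal / P_noise)."""
    v_obs = np.asarray(v_obs, dtype=float)
    noise = v_obs - np.asarray(v_model, dtype=float)
    return 10.0 * np.log10(np.sum(v_obs**2) / np.sum(noise**2))

def compression_rate(n_coeffs, n_points):
    """Compression rate as defined in the abstract: the ratio between the
    number of representation coefficients and the number of original points."""
    return n_coeffs / n_points
```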
(Joint work with B. Dorizzi and J. Montalvão)
N. Hogan, "An organization principle for a class of voluntary movements", Journal of Neuroscience, vol. 4, pp. 2745-2754, 1984.
C. O'Reilly and R. Plamondon, "Development of a sigma-lognormal representation for on-line signatures", Pattern Recognition, vol. 42, pp. 3324-3337, 2009.
C. Feng, A. Woch and R. Plamondon, "A comparative study of two velocity profile models for rapid stroke analysis", in Proceedings of the 16th Int. Conf. on Pattern Recognition, vol. 4, pp. 52-55, 2002.
How humans fly
T. Maillot and A. Ajami, PhD students at LSIS, Univ. Sud Toulon-Var
The general problem considered is the reconstruction of the cost in an optimal control problem from the observation of optimal trajectories. It is motivated by the following applied question concerning HALE drones: one would like them to decide on their trajectories by themselves, and to behave at least as well as a good human pilot. This applied question is very similar to the problem of determining what is minimized in human locomotion. These starting points motivate the particular classes of control systems and costs under consideration. To summarize, our conclusion is that, in general within these classes, three experiments visiting the same values of the control are needed to reconstruct the cost, whereas two experiments are in general not enough.
(Joint work with J.-P. Gauthier and U. Serres)
X. Halkias received her Master's and PhD from the Electrical Engineering Department of Columbia University, NY, where she was part of the Laboratory for the Recognition and Organization of Speech and Audio. Her doctoral research focused on advanced signal processing and audio analysis combined with machine learning techniques as applied to bioacoustics. She also fulfilled her breadth requirement in biomedical signal processing and computational neuroscience. She is currently a postdoctoral fellow and adjunct professor at the Univ. du Sud - Toulon, working on advanced machine learning, specifically deep architectures and their applications. Xanadu is also the co-founder of the company In Actu, LLC, based in the US, which houses her patent-pending medical invention for arresting the tremors of patients who suffer from movement disorders such as Parkinson's or Huntington's disease. Her talk will focus on the use of machine learning approaches for controlling and alleviating the tremors in these neurodegenerative diseases, based on her patent-pending invention (US Patent App. US 13/180,865, 7/2011), *Systems, Methods and Media for Finding and Matching Tremor Signals*. The concept applies the known principles of noise cancellation from advanced audio processing to biomedical signals, i.e. EMG signals. In addition to the algorithms used for the amplitude and temporal identification of the desired tremors, a machine learning approach is proposed to ensure customized delivery of the cancellation signal. Overall, a non-parametric approach may be suitable for capturing the evolution of the disease, which depends both on time and on the patient's actions. Finally, a proposed device is presented with an electromechanical output that may arrest the tremors of patients with movement disorders.
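The abstract does not disclose the cancellation algorithm itself; as a hedged illustration of the classical noise-cancellation principle it invokes, here is a minimal least-mean-squares (LMS) adaptive canceller (a textbook technique, not the patented method; all names and parameters are ours):

```python
import numpy as np

def lms_cancel(reference, primary, n_taps=8, mu=0.01):
    """Classical LMS adaptive noise cancellation: predict the tremor
    component of `primary` from the correlated `reference` signal
    (e.g. an EMG channel) and subtract it, returning the residual."""
    reference = np.asarray(reference, dtype=float)
    primary = np.asarray(primary, dtype=float)
    w = np.zeros(n_taps)                      # adaptive filter weights
    out = np.zeros(len(primary))
    for i in range(n_taps, len(primary)):
        x = reference[i - n_taps:i][::-1]     # most recent reference samples
        y = w @ x                             # predicted tremor component
        e = primary[i] - y                    # cancelled (residual) output
        w += 2 * mu * e * x                   # LMS weight update
        out[i] = e
    return out
```

With a periodic tremor-like reference, the residual power decays toward zero once the filter has converged.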
Towards a generic bio-inspired perceptive model in electronics, with a visual application
Patrick PIRIM, Brain Vision Systems
Since the first silicon implementation of this model in 1986, many applications have driven the evolution of the technique over time. Collaborative work with academics such as Alain Berthoz, Yves Burnod, Jean-Arcady Meyer and many others has helped the integration move in this direction. Today we can propose a generic perceptive model in a single silicon chip with different kinds of inputs, such as vision, sound and touch. After a brief history of these 26 years of development, we will present the current chip version, BIPS (Bio-Inspired Perception System), applied to the field of vision.
The perception model presented is divided into three modalities: global, dynamic and structural. For each modality, neural population activity, represented by a spatio-temporal histogram computation, gives the "what and where" situation. The addition of dynamic recruitment and lateral inhibition permits the self-organization of object descriptions.
Any application must perceive, understand and react in real time; the more you perceive, the easier it is to understand, so perception is mandatory. Examples will include an autonomous car driver and road-crossing control.
Looking ahead, we present the next evolution of this model towards a generic cortical column with scalable integration. Driven externally according to the application, it becomes a generic perception computer, and the prospects are vast.
When insects fly forward, the image of the ground sweeps backward across their ventral viewfield, forming an 'optic flow' that depends on both groundspeed and groundheight. To explain how these animals manage to avoid the ground and lateral surfaces, as well as to adjust their speed, using this visual motion cue, we suggest that insect navigation hinges on visual feedback loops we have called optic-flow regulators, which control the vertical lift, the side thrust or the forward thrust. To test these ideas, we built several aerial robots equipped with optic-flow regulators and tiny bio-inspired optic-flow sensors. These fly-by-sight microrobots can perform exacting tasks such as take-off, terrain following, level flight, speed control and landing. Our control scheme accounts for many hitherto unexplained findings published over the last 70 years on insects' visually guided performance.
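The ventral optic-flow regulator idea can be illustrated with a toy one-dimensional simulation (our own sketch with an assumed gain and set-point, not the authors' actual controller): height is adjusted so that the measured ventral optic flow omega = v/h is held at a set-point, which makes height automatically track groundspeed, producing terrain-following-like behaviour.

```python
def optic_flow_regulator(v_profile, h0=1.0, omega_set=1.0, k=2.0, dt=0.01):
    """Toy ventral optic-flow regulator: at each step, measure the optic
    flow omega = v/h and command a climb rate proportional to the flow
    error, so h settles at v/omega_set (hypothetical gain k)."""
    h = h0
    heights = []
    for v in v_profile:
        omega = v / h                            # measured ventral optic flow
        h += k * (omega - omega_set) * h * dt    # climb if flow is too high
        heights.append(h)
    return heights
```

At constant speed v the closed loop reduces to dh/dt = k (v - h), so height converges exponentially to v / omega_set.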
In addition, insects can accurately navigate and track sound sources in complex environments using efficient methods based on highly appropriate sensory-motor processing. A novel biomimetic acoustic sensor inspired by the cricket's auditory system has been developed and tested. The sound sensor consists of two tiny omnidirectional microphones weighing only 0.1 g, equipped with acoustic resonators and a biomimetic temporal processing system.
N. Franceschini, F. Ruffier, J. Serres (2007), "A bio-inspired flying robot sheds light on insect piloting abilities", Current Biology, 17(4):329-335.
G. Portelli, J. Serres, F. Ruffier, N. Franceschini (2010), "Modelling honeybee visual guidance in a 3-D environment", Journal of Physiology - Paris, 104(1-2):27-39.
F. L. Roubieu, J. Serres, N. Franceschini, F. Ruffier and S. Viollet (2012), "A fully autonomous hovercraft inspired by bees: wall following and speed control in straight and tapered corridors", Proceedings of IEEE ROBIO 2012 Conf., China.
F. Expert, F. Ruffier (2012), "Controlling docking, altitude and speed in a circular high-roofed tunnel thanks to the optic flow", Proceedings of IEEE IROS 2012 Conf., Portugal, pp. 1125-1132.
F. Ruffier, S. Benacchio, F. Expert, E. Ogam (2011), "A tiny directional sound sensor inspired by crickets designed for Micro-Air Vehicles", IEEE Sensors 2011 Conference, Limerick, Ireland, pp. 970-973.
Franck Ruffier received an engineering degree in 2000 and a PhD degree from INP-Grenoble in 2004 as well as a Habilitation (HDR in French) from Aix-Marseille University in 2013. His present position is CNRS Research Scientist and co-Head of the Biorobotics Lab at the Institute of Movement Science (ISM). His main interest areas are vision chips, bio-inspired optic flow processing and biomimetic sensory motor control laws for micro-aerial vehicles. Dr. Franck Ruffier is the senior representative of ISM.
15.00 - Session 5: Neuro-robotics
Discussant P. Gorce, Pr at USTV, Dir. Handibio
Role of rotational axes in the multisensory control of unconstrained 3D arm motions.
Controlling both daily motor activities and skilled athletic movements requires complex 3D rotational motions of the upper limbs across different ranges of angular acceleration, often in the absence of visual regulation. Most studies in motor control have systematically investigated movements in 2D, and very few have focused on movements in 3D. The third dimension adds new and unexpected complexities that may explain why existing 2D models fail to adequately predict, across different movement paradigms, the common features of unconstrained 3D movements of the upper limbs. Rules for the production and regulation of movements in 3D cannot be identified simply by adding another dimension to existing 2D models. Furthermore, a variety of intriguing and puzzling observations exist, related to large interindividual differences that challenge both our understanding of the rules of motor control and the heuristic value of existing 2D models. These interindividual differences appear when the task constraints become less strict (redundant), or when uncertainties increase in the sensory inputs, frames of reference or motor commands.
One assumption is that rotational axes may constitute a key organizational factor in motor control and provide a parsimonious basis for controlling the multiple degrees of freedom of the upper limbs during 3D rotational movements. Various authors have provided evidence that rotational axes are spatial invariants, specified in both the dynamics and the kinematics of arm movements. For instance, there is usually a separation between the axis of minimal inertia (e3), the shoulder-centre of mass axis (SH-CM) and the shoulder-elbow axis (SH-EL) of the upper limb, due to different arm configurations involving flexion-extension of the elbow. During cyclic external-internal rotations at the shoulder, the rotation axis of the arm may coincide with one of these axes. The choice of axis has implications for the amount of torque that must be produced, and may also affect the energy costs associated with the task.
We will present findings from a series of experimental studies conducted to deepen our understanding of the role of rotation axes during unconstrained 3D arm movements in tasks (pointing and reaching experiments, athletic gestures) with different objectives (such as precision, accuracy, velocity, distance or dynamic parameters). We will also present experimental findings from studies addressing the recurrent interindividual variability in motor control and multisensory integration. Finally, we discuss how the issues of rotational axes and cost-function identification may together improve our understanding of the rules of production and multisensory regulation of movement in 3D, and of interindividual variability in low- and high-demand tasks.
Gaze and Brain Computer Interface data fusion for wheelchair navigation.
H. A. Lamti, PhD student at HANDIBIO, Université du Sud Toulon-Var
For disabled or elderly persons with heavily reduced physical and/or mental abilities, steering powered wheelchairs by means of a conventional joystick is hardly possible. For these persons, special control devices (e.g. button pads or sip-and-puff devices) have been developed. Owing to the reduced command set such input devices provide, their use is time-consuming and tiring, leading to significantly reduced acceptance among the user group. Eye-gaze interaction can be a convenient and natural addition to human-computer interaction, intuitively substituting eye movements for conventional mouse control. Despite these efforts, the design of gaze-based systems has to account for the unintentional fixations and sporadic dwellings on objects that typically occur during visual search or when people are engaged in demanding mental activity: a perfect relation between gaze duration and user intention cannot be established, so it is beneficial to complement gaze with a more direct channel.
Advances in brain-computer interfaces (BCI) have opened new opportunities for providing input to a machine based only on cerebral activity, without the need for physical motor control. Through EEG technology, thoughts or intentions, as well as conscious and unconscious states, can be detected as specific EEG patterns. However, BCIs still suffer from major drawbacks: additional time is required to adjust the BCI itself, and it needs to be retrained before every use because EEG signals vary from one user to another. This can make use outside the laboratory impossible. Another problem is that BCIs can typically differentiate only between two and four commands, as they analyze whether an imagined movement is reflected in the right or left hemisphere of the cortex. Our contribution is a brain-eyes interface that can help a severely disabled person control a wheelchair and can assist them by measuring their psychophysiological status from brain signals, providing their brainwave profile, including emotions.
The system was tested on ten subjects, aged 23 to 35, who tried to command a Khepera robot using a gaze-based system and a combined one. The former was faster in terms of navigation time, yet many commands were misinterpreted. The efficiency of the combined system was much better than that of the gaze-based one (by over 60%). However, its navigation time is twice as long as the gaze-based one (an average of 20 s to reach the goal point). In conclusion, the combined system is more stable, because it produces fewer errors, but slower than a standard gaze-based system. Each system thus has advantages and drawbacks to be weighed, and the choice can be made by evaluating the project context: for internet navigation purposes, for example, speed would be very important, and a gaze-based system would be best placed. For severely disabled people, safety is crucial and speed is not the most important criterion, which leads us to opt for the combined system.
(Joint work with M. M. Ben Khelifa, P. Gorce and A. M. Alimi)