
10. 2021 Sense.nano - Neville Hogan

Session 1: Movement and Motion
NEVILLE HOGAN: Hello. My name is Neville Hogan and I am the director of the Newman Laboratory for Biomechanics and Human Rehabilitation in the MIT Mechanical Engineering Department. And I'd like to tell you today about some work we've been doing on trying to perform some surprisingly difficult measurements of human balance performance. So we've been looking at the measurement of human motor behavior in a particular context.
Part of the reason that we've been doing this is that I've had a long-standing interest in developing technology to help people recover after neurological or orthopedic injury. And one of the reasons that this is important is because here I can show you the demographics of the United States population in 1960 and again projected out to 2060. I think it's obvious what the major change is, and that is there are a lot more people who would normally be considered elderly.
Of course, this is a good thing. People are living longer and they have fuller lives as a result. At the same time, as you age, you become more vulnerable to conditions such as cerebrovascular accidents -- strokes -- and other things like that. Or even just normal aging. And that generates a need that I think we should address. And I've been interested in developing technology that I have come to call Therapeutic Mechatronics.
And here's an example of it. The idea is to take mechanical, electrical, and information technology and configure them in such a way as to help people with problems that they may have. What I'm showing you here is some work we've done in developing robotic tools to help people recover their upper extremity function after a stroke. And this actually is now a commercial product. And it's reasonably successful.
What we've been doing more recently is targeting balance disorders. And of course, the reason for this I think should be quite obvious. Elders disproportionately exhibit balance disorders. And even more important, falls that result from balance disorders can be fatal in some cases. For example, if an elderly woman should fall, she is likely to break her hip. If she breaks a hip, that may compromise her ability to move around. That may in turn compromise her cardiovascular conditioning. And that may in turn compromise her overall health. And you wind up with a downward spiral that can result in death.
This is not a trivial matter. So the goal of the work that we're doing, the broad goal, is to develop mechatronic technology to see if we can treat balance disorders. Or if not, maybe we can compensate for them in some way. Now, a key part of the story, though, is that we know from previous work that the key to success is to be able to understand -- quantitatively -- the dynamics and neural control of the thing that we're interested in helping. In this case, upright balance.
Well, the challenge here is how do you measure the dynamics of upright balance? Now, I'm sure we're all familiar with the fact that if we think at the quantum level, measurement fundamentally disrupts the thing that you're measuring. And that's, I believe, well established. However, we have the same problem when we look at measurements at a mesoscale level of centimeters to meters. That is, when you measure what's going on with a human, you're very likely to perturb it.
The key problem is that humans are supremely adaptive. Now, the conventional approach to identifying a dynamic system is to apply perturbations, observe the consequences, and then use some of the many mathematical tools that are available to figure out the details of the relation between the applied input perturbations and the observed outputs. The problem is that because of the supreme adaptability of humans, as you perturb them, you're going to change their balance strategy. And it's actually fairly obvious if you think about it.
If you walk up to a quietly standing human and you perturb them, they're likely to assume a slightly crouched posture. Well, that's a different balance strategy. That's not the one that we want to measure. So the question then is, is it possible to identify unperturbed balance dynamics in human subjects? And here I think the answer lies in another key observation about humans. And that is that the human neuromotor system is fundamentally noisy -- we can't actually stand still.
When you're standing upright, the center of pressure between your feet and the ground is actually meandering around, as though it was going through a random process. The process might not actually be random -- it may result from chaotic nonlinear dynamics -- but it doesn't matter. It looks like a random process. So can we capitalize on that neuromotor noise to help identify the dynamic system?
And the answer is yes. So the way we go about this is that we assume a model of what we know reasonably well. And in this case, that's the gravito-inertial dynamics of upright posture. We know the length and shape of the limbs, we can come up with good estimates of the mass distribution and inertial properties of the limbs, and we know where gravity points. So that part we've got pretty well under control.
What we then use is the intrinsic variation in the data to identify or infer the controller parameters. And we've shown, at least in simulation, that this can work very well. Of course, there's a snag. And that is that it depends upon the structure of the model that we use. But as I'll show you, it turns out we can address that problem based on analysis. So let me show you what we've done.
This here is an example of some recent work published by Kreg Gruben and his colleagues at the University of Wisconsin. The thing is that postural stability implies that there's got to be a correlated variation of the origin of the ground force vector and the orientation of the ground force vector. And that's depicted here in this panel. So if you take two adjacent time slices of the observed force vector, they will in general intersect at a point, which is called the intersection point.
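To make the geometry concrete, here is a minimal sketch of computing that intersection point: each time slice gives a line through the center of pressure along the measured force vector, and two such lines generically cross at one point. This is my own Python illustration with made-up numbers, not the authors' code.

```python
import numpy as np

def intersection_point(p1, f1, p2, f2):
    """Intersection of two force lines in the sagittal plane.

    Each line passes through a center of pressure p = (x, z) and
    points along the measured force f = (fx, fz).  Returns the
    (x, z) point where the two lines cross.
    """
    # Solve p1 + t1*f1 == p2 + t2*f2 for the scalars t1, t2.
    A = np.column_stack([f1, -f2])
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * f1

# Two adjacent "time slices": nearly vertical ground-reaction forces
# from slightly different centers of pressure (illustrative numbers).
ip = intersection_point(np.array([0.02, 0.0]), np.array([0.5, 10.0]),
                        np.array([-0.01, 0.0]), np.array([-0.3, 10.0]))
print(ip)  # the point lying on both force lines
```

In real data this computation would be repeated for every adjacent pair of force samples, producing a cloud of intersection points to analyze.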
Turns out that if you analyze the variation of that intersection point in the frequency domain, you get this very interesting pattern. So at very low frequencies, you find that the intersection point lies above the center of mass. And to some extent, that makes good mechanical sense. If you can imagine the intersection point above the center of mass, that's equivalent to suspending the body from that intersection point. And of course, that will then be intrinsically stable.
At the same time, what Gruben and colleagues have shown is that if you go up to higher frequencies -- here we're talking about 3 to 8 hertz -- the intersection point lies below the center of mass. And that's consistent with coordinating forces and moments about the joints to compensate for at least second-order motor behavior.
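The estimate behind this kind of frequency-resolved analysis can be sketched as a least-squares fit: each force line crosses horizontal position x(z) = cop + z*theta, where theta = fx/fz is the force tilt, and the intersection-point height is the z that minimizes the spread of x(z) across time. This is my own illustration of the principle; the actual analysis band-pass filters the signals into frequency bands before fitting.

```python
import numpy as np

def ip_height(cop, theta):
    """Least-squares intersection-point height.

    Each force line crosses horizontal position x(z) = cop + z*theta.
    Minimizing var_t[x(z)] over z gives the height where the lines
    come closest to a common crossing point.
    """
    cop = cop - cop.mean()
    theta = theta - theta.mean()
    return -np.dot(cop, theta) / np.dot(theta, theta)

# Toy check: lines that all pass exactly through (0, z0) satisfy
# cop = -z0 * theta, so the estimate should recover z0.
rng = np.random.default_rng(0)
theta = rng.standard_normal(200) * 0.01   # small force tilts
z0 = 1.2                                  # chosen crossing height
cop = -z0 * theta
print(ip_height(cop, theta))  # recovers 1.2
```

Repeating this fit on band-pass filtered cop and theta signals, band by band, yields the intersection-point height as a function of frequency.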
So we set out to see if we could in fact first of all describe this behavior. And secondly, use it to identify something about the neural control strategy that results in this behavior. And so the first thing is that we can establish the minimal model order just by looking at the data. It's very common in studies of human upright balance to assume that the body can be described as a single degree of freedom inverted pendulum. And that's not unreasonable.
However, it cannot describe the intersection point below the center of mass at high frequencies, which we find in these data. That's something that we've shown. It turns out that the simplest model that's competent to describe this behavior has to have two degrees of freedom: a two degree of freedom inverted pendulum, with the first link representing the legs between the ankle and the hip and the second link representing the body above the hip.
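As a rough illustration of such a model, here is a linearized two-link inverted pendulum with point masses at the link endpoints. The masses and lengths are placeholder values of my own, not the anthropometric estimates used in the study.

```python
import numpy as np

# Minimal two-link inverted pendulum, linearized about upright.
# Point masses at the link endpoints -- illustrative parameters only.
m1, m2 = 50.0, 30.0   # leg and trunk masses, kg (assumed)
l1, l2 = 0.9, 0.8     # ankle-to-hip and hip-to-head lengths, m (assumed)
g = 9.81

# Small-angle dynamics: M qdd = Kg q + Bj u, where q holds the
# absolute link angles from vertical and u = (ankle, hip) torques.
M = np.array([[(m1 + m2) * l1**2, m2 * l1 * l2],
              [m2 * l1 * l2,      m2 * l2**2]])
Kg = np.diag([(m1 + m2) * g * l1, m2 * g * l2])   # gravity "stiffness"
Bj = np.array([[1.0, -1.0],                        # hip torque reacts
               [0.0,  1.0]])                       # on the lower link

# State-space form xdot = A x + B u with x = (q, qdot).
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [Minv @ Kg,        np.zeros((2, 2))]])
B = np.vstack([np.zeros((2, 2)), Minv @ Bj])

# Upright balance is open-loop unstable: A has an eigenvalue with
# positive real part, which the controller must stabilize.
print(np.linalg.eigvals(A).real.max() > 0)  # True
```

The single-pendulum model drops the second row and column of these matrices; the extra hip degree of freedom is what allows the model to place the intersection point below the center of mass at high frequencies.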
If we do that, we can then use that as a model to describe the data and extract from the data the parameters of a controller. It turns out that a minimum effort control model is remarkably competent. So here on the left are the human data that I showed you a couple of slides ago, and here are the data from simulations that we have developed. And they're very similar. In fact, over on the right, I'm showing you the human and simulation data superimposed right on top of each other.
And not to put too fine a point on it, we nailed it. Basically, we've clearly been able to reproduce the experimental observations of human data. Now, there are a couple of important details here. As I mentioned before, we assumed a two degree of freedom inverted pendulum model. And I can justify that. But we also assumed a linearized model with full state feedback. That I probably need to explain.
So the basic assumption was that the controller was linear and based on position and velocity feedback from each joint -- the joints in particular here being the ankle and the hip. That controller is equivalent to assuming that we have an apparent multivariable stiffness and damping about each joint. And that's going to be true for any controller, no matter what its internal details are.
We also added motor noise to each of these degrees of freedom, because we need the noise to reproduce the stochasticity of the behavior. The key point, though, is that we used the linear quadratic regulator procedure to identify the controller gains. We used that for one key reason, and that is that we know that human upright behavior is stable. The linear quadratic regulator is guaranteed to give you a stable controller.
Secondly, it has some very important properties. One of them is that if you go to minimum effort, the minimum effort behavior is completely insensitive to the details of how you design the controller. So you wind up with the same basic behavior no matter what you put into the controller. In practice, what we did is we assumed a quadratic penalty trading off deviations of state against control effort.
We assumed an identity matrix for the state weighting matrix. And we wrote the control weighting matrix to separate out a parameter alpha, which penalizes the overall control effort, and a parameter beta, which relatively weights the ankle and the hip. We also added a third parameter, which sets the ratio of the noise at the ankle and the hip.
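A minimal sketch of this design procedure, under the assumption that the control weighting takes the form R = alpha * diag(1, beta) -- my reading of the description, not necessarily the authors' exact parameterization: build the linearized two-link pendulum, solve the LQR problem, and check that the closed loop is stable. The Riccati equation is solved here with a numpy-only Hamiltonian method rather than a library call.

```python
import numpy as np

# Rebuild the linearized two-link pendulum (illustrative parameters).
m1, m2, l1, l2, g = 50.0, 30.0, 0.9, 0.8, 9.81
M = np.array([[(m1 + m2) * l1**2, m2 * l1 * l2],
              [m2 * l1 * l2,      m2 * l2**2]])
Kg = np.diag([(m1 + m2) * g * l1, m2 * g * l2])
Bj = np.array([[1.0, -1.0], [0.0, 1.0]])   # (ankle, hip) torque map
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [Minv @ Kg,        np.zeros((2, 2))]])
B = np.vstack([np.zeros((2, 2)), Minv @ Bj])

def lqr_gain(A, B, Q, R):
    """Gain K = R^-1 B^T P, with P from the continuous Riccati
    equation via the stable invariant subspace of the Hamiltonian."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    Vs = V[:, w.real < 0]                  # the n stable eigenvectors
    P = np.real(Vs[n:, :] @ np.linalg.inv(Vs[:n, :]))
    return Rinv @ B.T @ P

# Q = I on the state; R = alpha * diag(1, beta) on the control, so
# alpha penalizes overall effort and beta weights hip vs. ankle.
alpha, beta = 1.0e4, 1.0
Q = np.eye(4)
R = alpha * np.diag([1.0, beta])
K = lqr_gain(A, B, Q, R)

# LQR guarantees a stable closed loop: all eigenvalues of A - B K
# have negative real parts even at this high effort penalty.
print(np.linalg.eigvals(A - B @ K).real.max() < 0)  # True
```

Sweeping alpha and beta in a loop, simulating the noisy closed loop, and recomputing the intersection-point curve for each setting is the kind of parameter study the talk describes.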
So we take those parameters and look at their effects. And the first result we see, over here in the top left, is that if we assume a relatively low value of alpha -- which means we're going to allow the system to use a lot of control effort -- we wind up with the intersection point above the center of mass at low frequencies and below at high frequencies. But the pattern of variation is very different from what you see experimentally.
On the other hand, if you penalize the control effort -- that is, use the least effort possible -- you get a very accurate description of the behavior. And going from an alpha of 10 to the 4 to 10 to the 6 makes essentially no difference. Then, assuming that value of alpha, we looked at variations of the parameter beta, the relative weighting of hip and ankle. And you see that makes a difference somewhere in this region -- the gray shaded region -- where the differences between the intersection point and the center-of-mass height are not statistically significant.
Here what we find is that the best fit is obtained with a ratio that uses the ankle more than the hip. Over at top right here, we've looked at the relative noise level. But there's an important detail here. And that is, because of the extremely long time delays in the human neuromotor control system -- they can be 100 milliseconds or maybe up to 300, depending on whose papers you read -- this is clearly not being achieved by real-time feedback control. The delays are too great.
However, the intrinsic properties of the muscles operate essentially instantaneously. So here what we've been able to show is that if we adjust the noise parameter -- which is a parameter of the muscle itself -- we can get a good fit at the high frequencies. So the basic story here is that the general trend of how the intersection point varies with frequency appears to reflect just biomechanics.
However, the specific details of how the intersection point varies with frequency clearly do reflect neural control priorities. And the important thing here is that minimal control effort yields the best fit. So I think we can learn quite a lot about human motor control just from observations of humans standing quietly.
And this might have potential clinical application. For example, the instrumentation needed to get these data is quite simple, and it could be made relatively cheap. It might be something that could be used in the doctor's office or a therapy clinic. Let me stop at that point and thank my sponsors -- many of them have supported this and other work. And thank you for your attention.