In the ACELab, we study what appear to be simple motor behaviours, like reaching to take a sip of tea from a mug on a messy desk. These actions seem simple in that we never have to think about them, but they only appear that way: exactly how the brain controls these movements remains a scientific mystery, and continues to elude even the most sophisticated robots.
At the heart of the ACELab research enterprise is the question: What can we learn about human thought by studying human movement? Under this general research question, we are pursuing the following specific research projects:
1. Factors influencing motor plan competition
When performing actions in complex, real-world environments (like a messy desk), we are faced with many objects that could be potential targets for action (e.g. mug of tea, stapler) and more objects that aren’t relevant but could still influence the movement (the desk, a computer monitor, or any object that gets in the way as an obstacle). Under one influential model (Cisek & Kalaska, 2010), potential targets compete for action selection. We have developed a paradigm to test motor plan competition in rapid human reach behaviour.
In this task, participants are required to quickly reach towards a touch screen where multiple potential targets (e.g. outlined circles) are displayed. Importantly, each target is equally likely to be selected as the final target of the action, but the participant must initiate the reach before knowing which target will be selected. At movement onset, one of the targets is filled in, and is thereby selected as the final target.
Analyzing the initial portion of the reach trajectory reveals how participants were influenced by the competing potential targets. The plot below shows a three-dimensional view of the experimental setup and the reach trajectories averaged across 22 subjects (the tubes around the trajectories represent the average standard error). When targets appear close together on one side of space (green and black traces), the hand travels straight to their location. However, when the targets are spaced further apart (red and blue traces), the hand initially follows a path that spatially averages between the target locations.
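The spatial-averaging signature described above can be quantified by comparing the direction of the first few trajectory samples with the directions to each potential target and to their midpoint. Below is a minimal sketch of that comparison, assuming 2-D trajectories sampled as (x, y) tuples; all function names here are illustrative, not the lab's actual analysis code:

```python
import math

def unit(dx, dy):
    """Normalize a 2-D vector to unit length."""
    norm = math.hypot(dx, dy)
    return (dx / norm, dy / norm)

def initial_direction(trajectory, n_samples=5):
    """Direction of travel over the first n_samples of a reach."""
    (x0, y0), (x1, y1) = trajectory[0], trajectory[n_samples]
    return unit(x1 - x0, y1 - y0)

def direction_to(start, target):
    """Direction from the start position to a target location."""
    return unit(target[0] - start[0], target[1] - start[1])

def angle_between(u, v):
    """Angle in degrees between two unit vectors."""
    dot = max(-1.0, min(1.0, u[0] * v[0] + u[1] * v[1]))
    return math.degrees(math.acos(dot))
```

A reach that launches toward the midpoint of two widely spaced targets will show a near-zero angle to the midpoint but a substantial, symmetric angle to either individual target – the averaging pattern seen in the red and blue traces.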
Using this paradigm, we have shown that hand trajectories reveal sensitivity to many of the factors that affect motor plan competition: the spatial position of potential targets, the recent trial history of spatial selection, the total number of potential targets, the luminance of potential targets and the format of target information. Most recently, we have shown that, while verbal report is susceptible to an illusion of object connectedness, rapid pointing is not.
Ongoing research is specifically interested in how reward influences target competition. Rewarding actions are known to show enhanced neural encoding, and we predict this will extend to reaching movements, with reaches drawn toward rewarding targets. More interestingly, we are studying how loss targets – i.e. ones that penalize you – affect movements. Intuitively, we think of losing as simply the opposite of winning, but there is evidence to suggest that the neural mechanisms underlying loss avoidance are very different from those governing reward seeking. We predict that comparing reward targets to loss targets may reveal this processing asymmetry in movement as well.
2. Neural signatures of target selection and obstacle avoidance
We are using motion tracking combined with EEG to observe how selecting an object for action changes visual processing at its location. Specifically, as depicted in the left figure above, when the black object is the target of action, a visual flash presented at its location elicits a visually evoked potential (VEP; see right figure) that is larger than the response to the same flash (over the black object) when the grey object is the target of action. We are using the same paradigm to examine how an object acting as an obstacle affects visual processing. Our prediction is that obstacle objects will show suppressed processing, resulting in smaller VEPs. We are also beginning to explore flickering objects, known to induce a steady-state visually evoked potential (SSVEP), as the target and obstacle objects in our task, as this may provide an even stronger signal of the predicted target enhancement and obstacle suppression.
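At its core, the VEP comparison amounts to averaging EEG epochs time-locked to the flash and comparing baseline-corrected peak amplitudes across conditions. Here is a toy sketch of that pipeline, assuming the signal is a plain list of samples and onsets are sample indices; the function names are our own invention, not from any EEG toolbox:

```python
def epoch_average(signal, onsets, pre=10, post=40):
    """Average signal epochs time-locked to event onsets (a simple ERP/VEP)."""
    epochs = [signal[t - pre:t + post] for t in onsets
              if t - pre >= 0 and t + post <= len(signal)]
    n = len(epochs)
    return [sum(samples) / n for samples in zip(*epochs)]

def peak_amplitude(erp, baseline_len=10):
    """Peak deflection relative to the pre-stimulus baseline."""
    baseline = sum(erp[:baseline_len]) / baseline_len
    return max(abs(v - baseline) for v in erp)
```

Running this separately on flashes from "object is target" and "object is non-target" trials would test the predicted enhancement; the obstacle prediction is simply a peak smaller than the non-target case.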
3. Economic decision making in action
Since the seminal work of Kahneman and Tversky, it is well established that humans are susceptible to a host of “irrational” biases when making decisions, especially ones with financial outcomes. One example is the so-called “free effect”, whereby people disproportionately choose, or like, items that are free (e.g. Shampanier, Mazar, & Ariely, 2007). This helps explain why someone might be willing to wait in line for hours to receive a free item worth only a few dollars.
Recently, the ACELab has been exploring how these economic biases influence reach movements. Starting with the free effect, we used a task where participants “bought” arbitrary shapes presented on a touch-table by reaching out and touching them. As the above figure (left) demonstrates, movements to choose the free item ($0.00) over a 5-cent item ($0.05) were straighter and faster than movements to choose a 15-cent item ($0.15) over a 20-cent item ($0.20) – this despite the fact that the net difference between choices was identical (5 cents). So, it appears that movements are sensitive to the same kind of irrationality that affects our economic decision making. However, specifically regarding the free effect, we have actually challenged the notion that it has anything to do with free. In a second experiment (right figure), we show results that appear identical to the first (left figure). But now, there are NO free objects – instead, the lowest-price choice is the 5-cent ($0.05) option. In this experiment, 5 cents becomes the new free, suggesting that the free effect exists only because, in our common consumer framework, free is the best option available, not because there is something particularly special about a zero-cost item.
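The “straighter and faster” comparison can be made concrete with two simple per-trial measures: a straightness ratio (path length divided by straight-line distance, with 1.0 being perfectly straight) and movement time. The sketch below assumes each trial is a list of (x, y) samples recorded at a known sampling rate; the names are illustrative, not the lab's actual code:

```python
import math

def path_length(traj):
    """Total distance travelled along the sampled trajectory."""
    return sum(math.dist(traj[i - 1], traj[i]) for i in range(1, len(traj)))

def straightness(traj):
    """Path length / straight-line distance; 1.0 means perfectly straight."""
    return path_length(traj) / math.dist(traj[0], traj[-1])

def movement_time(traj, sample_rate_hz):
    """Duration of the reach given the tracker's sampling rate."""
    return (len(traj) - 1) / sample_rate_hz
```

Comparing these measures between the $0.00-vs-$0.05 and $0.15-vs-$0.20 conditions is one way to operationalize the movement signature of the free effect.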
4. How hedonic preferences shape movements
Work in the ACELab demonstrates that a bias to move toward a certain object can be generated because that object is more likely to be selected for action or is rewarding (see Research project 1), because it is cued as the target for an action (see Research project 2), or because of its economic value (see Research project 3). In a recent line of work, we have started to explore whether movement biases can also be generated internally – that is, whether your personal preference for an object shapes how you move toward it. In preliminary work, we had participants reach toward one of two pictures of snack foods on a touch table (4 snacks total). They were told to choose the one they liked the most on every trial, and to provide incentive, we randomly selected two trials from the experiment and they received those snacks. We found profound effects of preference on movement – so much so that we were able to accurately predict how much participants liked each of the four items, just by looking at how their hand moved through space when choosing it. In the figure below, we show data from one participant who really liked chocolate bars, Dairy Milk most of all. This same participant didn’t like potato chips as much. On trials choosing the Dairy Milk over the potato chips (grey area and black lines), their movements were straight and fast. However, on trials choosing the Dairy Milk over the Oh Henry bar (black area, white lines), the reaches were much more curved and slower. Importantly, all of these data come from trials where the participant chose the Dairy Milk, so simply looking at what they chose is not enough to reveal this graded preference – instead, you have to look at how they moved.
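Predicting a preference ranking from movement can be as simple as averaging a per-trial curvature measure for each chosen item and sorting. The toy sketch below uses made-up curvature values; the real analysis works from full kinematic trajectories, and the assumption that straighter reaches signal stronger liking is the hypothesis under test, not a settled fact:

```python
def rank_by_straightness(curvature_by_item):
    """Rank items from most to least preferred, under the assumption that
    straighter (lower-curvature) reaches when choosing an item signal
    stronger liking for it."""
    mean = {item: sum(vals) / len(vals)
            for item, vals in curvature_by_item.items()}
    return sorted(mean, key=mean.get)
```

Applied to the participant described above, low average curvature on Dairy Milk trials and high curvature on potato chip trials would recover the liking order from movement alone.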
5. Developing outcome metrics for prosthetic limb patients
This project is done as part of the larger Bionic Limbs for Improved Natural Control (BLINC) group’s research aim of building more functional prosthetic devices.
The specific goal of this project was to begin developing a better outcome metric for prosthetic limb use. Currently, a variety of tests are used to assess how well a patient is able to use their prosthetic, but very few standardized tasks exist. Of those that do exist, very few are amenable to motion tracking and analysis. Since motion tracking can provide exquisite sensitivity to motor behaviour and impairment, we sought to modify an existing prosthetic limb task (known as the Box and Blocks task) so that we could use motion tracking to see how restricted limb articulation affects reaching and grasping movements (see picture on right for a model of our experimental setup). In our modified task, participants move one, two, or four blocks from one side of a box to the other, doing so either unimpeded or while wearing a wrist brace (to simulate restricted articulation). Our preliminary results suggest that the right task design will allow quantitative motion-tracking outcomes to be derived, with potential applications for actual prosthetic limb users.
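Two candidate motion-tracking outcomes for the modified Box and Blocks task are, per block transported, total movement time and peak hand speed. A minimal sketch, assuming timestamped 3-D position samples from the tracker (names and units are illustrative):

```python
import math

def peak_speed(times, positions):
    """Peak instantaneous speed from timestamped 3-D position samples."""
    return max(math.dist(positions[i - 1], positions[i]) / (times[i] - times[i - 1])
               for i in range(1, len(times)))

def transport_time(times):
    """Total duration of one block transport."""
    return times[-1] - times[0]
```

Comparing these measures between the unimpeded and wrist-braced conditions would quantify how restricted articulation degrades the reach-and-grasp cycle, which is the sensitivity a standardized outcome metric needs.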