Dec 4, 2017 - Even-Chen et al 2017: Augmenting intracortical brain-machine interface with neurally driven error detectors

Table of Contents

Augmenting intracortical brain-machine interface with neurally driven error detectors

Detecting errors (outcome-error or execution-error) while performing tasks via BMI, from the same cortical populations, can improve BMI performance through error prevention and decoder adaptation. Justin Sanchez was one of the earliest proponents of this idea and produced several theoretical works on it. Joseph Francis conducted studies on detecting reward signals in M1 and PMd in monkeys in 2015 and continues now. There have been multiple studies on detecting these error signals in humans, via ECoG or EEG. The detection of error signals or reward signals (which can be closely related) in motor and premotor areas has been rising in popularity owing to its implications for improving BMI.

It looks like Nir Even-Chen has gotten this very important first flag (with experimental paradigm pretty similar to what I wanted to do as well, so at least my hypothesis is validated).

Experiment

The monkey first performed arm reaches to move a cursor to different targets. This training data was used to fit either a ReFIT decoder (requiring a second decoder fitting) or just a regular FIT decoder. Both decoders are velocity Kalman filters that utilize intention estimation when fitting the training data.
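A single decode step of such a velocity Kalman filter can be sketched as follows. This is a generic numpy KF update, not the paper's exact FIT/ReFIT fitting procedure; the matrix names (A, W, C, Q) are my own notation for the state transition, process noise, observation, and observation noise models.

```python
import numpy as np

def kalman_decode_step(x, P, y, A, W, C, Q):
    """One decode step of a (simplified) velocity Kalman filter.

    x: current velocity estimate, P: its covariance,
    y: observed neural activity (e.g. threshold-crossing counts),
    A, W: state transition matrix and process noise covariance,
    C, Q: observation matrix and observation noise covariance.
    """
    # Predict the next state from the velocity dynamics model
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    # Correct the prediction using the innovation (observed minus predicted activity)
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + Q)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

Intention estimation enters at fitting time, not decode time: when refitting the model parameters from training data, the velocity targets are rotated toward the goal.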

During the BMI task, whenever the cursor overlaps a target, a 300ms hold period starts. The color of the cursor changes depending on whether the hovered target is correct. If this hold period is completed, that target is selected. Target selection is followed by a 600ms waiting period, after which a liquid reward and an auditory cue signal the outcome of the selection.

This target reach task is fashioned as a "typing task", i.e. the goal is to select specific sequences of targets, or letters.

Neural signals used were threshold-crossings.

Results

Trial-averaged PSTHs based on task outcome showed significant electrode-wise differences in the period [-300, 600] ms with respect to selection onset.

Online decoding of task outcome

This motivates decoding the task outcome using activities from different lengths of time within this window, on a trial-by-trial basis. The decoder used was a linear SVM on five principal components (PCs). There are multiple ways of performing the PCA dimensionality reduction:

  1. Use the first n BMI trials as training trials. The task difficulty can be varied to achieve a certain success rate. Get the trial-average for different task outcomes, perform PCA on it, and train the SVM using the top five PCs.

In subsequent trials, project the trials' activities in the same time window to the previously selected PCs, then run the SVM.

  2. Initialization of the decoder is the same as above. However, with every new trial, the error-decoder can be retrained. More trials then lead to a more accurate decoder. As the authors noted,

We found that decoding performance converged after a large quantity of approximately 2000 training trials, and that these decoders worked well across days.

  3. Initialize the outcome-decoder using previous days' data -- this was the approach taken during the online experiments.
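The first training scheme can be sketched roughly as follows, using numpy only. This is a hedged reconstruction: a nearest-class-mean classifier stands in for the paper's linear SVM, and all function names are my own.

```python
import numpy as np

def fit_outcome_decoder(X, labels, n_pcs=5):
    """Fit a trial-outcome decoder.

    X: (n_trials, n_features) activity in the analysis window,
    labels: 0 = success, 1 = error for each training trial.
    A nearest-class-mean rule in PC space stands in for the linear SVM.
    """
    mu = X.mean(axis=0)
    # PCA via SVD of the mean-centered training activity
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    basis = Vt[:n_pcs].T                    # top n_pcs principal components
    Z = (X - mu) @ basis
    means = {c: Z[labels == c].mean(axis=0) for c in np.unique(labels)}
    return mu, basis, means

def predict_outcome(x, mu, basis, means):
    # Project a new trial onto the previously selected PCs, then classify
    z = (x - mu) @ basis
    return min(means, key=lambda c: np.linalg.norm(z - means[c]))
```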

Online error-correction

Two methods of error-correction in the context of the experiment were implemented:

  1. Error auto-deletion: After detecting that an error has happened, the previously selected target or "letter" will be deleted and the previous target will be cued again.

  2. Error prevention: As the task-outcome can be decoded with decent accuracy before target selection is finalized, when an error outcome is detected, the required hold period is extended by 50ms, allowing the monkey more time to move the cursor. This is actually pretty clever.

They found that error prevention resulted in higher performance as measured by "bit-rate" for both monkeys.

Outcome error signal details

The first worry that comes to mind is whether these outcome error signals in fact encode the kinematic differences associated with different trial outcomes. These kinematic differences include cursor movements and arm movements (the monkey's arms were free to move).

Other confounding variables include (1) reward expectation (2) auditory feedback difference (3) colored cue differences.

To control for kinematic differences, three analyses were done:

  1. Linear regressions of the form $y_k = Ax_k + b$ were performed, where $x_k$ includes the cursor velocity or hand velocity, and $y_k$ represents the neural activity vector at time $k$. The residuals $y_k^{res} = y_k - Ax_k - b$ were then used to classify task-outcome, and this did not affect the accuracy very much.

This analysis makes sense, however, why do they only regress out either cursor or hand velocity, but not both at the same time??

  2. Used either the hand or cursor velocity to decode trial-outcome. The results were significantly better than chance but also significantly worse than those using the putative error signal.

  3. Because of the BMI paradigm, there is knowledge of the causal relationship between the neural activities and the cursor velocity, as defined by the matrix M that linearly maps the two in the Kalman filter equation.

From the fundamental theorem of linear algebra, we know that only the component of a vector lying in a matrix's row space contributes to the matrix-vector product. This means the cursor velocity results only from the projection of the neural activity vectors onto M's row space. Shenoy and colleagues term this the output-potent subspace of the neural population activity.

In contrast, the output-null subspace is orthogonal to the output-potent subspace, and is therefore the null space of M. Thus, if the error-signal is unrelated to the neural activities responsible for the decoded kinematics, we would expect it to lie in the output-null subspace. Quantitatively, this means the projection of the task-outcome signal onto the output-null subspace should explain the majority of its variance.

To extract the outcome-related signal, neural activities are first trial-averaged based on the task outcome (success or fail), then subtracted from each other. The row space and null space of M are found via SVD. The outcome-related matrix ($N \times T$, N = number of neurons, T = number of time bins) is then projected into these spaces. The variance of each projection is calculated by first subtracting the row means from each row and then taking the sum of squares of all elements in the matrix.

It turns out that the variance of the outcome-signal explained by the projection into the null space is much greater than that explained by the row space. A good result.
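The projection analysis can be sketched in numpy. Here D is the difference between the success- and fail-averaged activities (N×T) and M is the Kalman filter's neural-to-velocity mapping; the shapes and names are assumptions based on the description above.

```python
import numpy as np

def outcome_variance_split(D, M):
    """Variance of the outcome signal D (N x T) captured by the
    row space (output-potent) and null space (output-null) of M."""
    _, s, Vt = np.linalg.svd(M)            # full SVD gives all N right vectors
    r = int(np.sum(s > 1e-10))             # rank of M
    potent_basis = Vt[:r].T                # spans M's row space
    null_basis = Vt[r:].T                  # spans M's null space

    def proj_variance(basis):
        X = basis.T @ D                    # project the signal onto the subspace
        X = X - X.mean(axis=1, keepdims=True)   # subtract row means
        return float(np.sum(X ** 2))       # sum of squares of all elements

    return proj_variance(potent_basis), proj_variance(null_basis)
```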

To visualize the error-signal, the principal components of the "difference-mode" of the neural activities were plotted. The idea of applying "common-mode" and "difference-mode" to neural activities is similar to ANOVA quantifying the between-group and within-group variances. Common-mode neural activity is the trial average regardless of task outcome. Difference-mode is the difference between the trial-averaged success trials and failed trials.

To control for reward expectation, control experiments were run in which rewards were delivered on every trial regardless of success. It was found this did not make a significant difference to the decoding performance. Not sure how they look, but according to Ramakrishnan 2017, M1 and PMd neurons exhibit a decreased firing rate to lower-than-expected reward. This, combined with the similar decoding performance, is a good sign that this error-signal is different from reward expectation.

To control for auditory feedback, it was turned off; decoding performance did not decrease significantly.

To control for color cues, the color of the selected target stayed constant. This resulted in a significant, but minor (5%) performance decrease. It may be due to the monkey's increased uncertainty in the absence of color changes. Or maybe it is due to a change in execution-error -- the monkey expects the target to change, but it doesn't. More work needs to be done here.

It is very surprising that their monkeys performed so many trials under so many different conditions...in my experience they either freak out and perform terribly or just refuse, and we have to wait for them to get their shit together again.

Execution-Error

Execution-error differs from outcome-error in that the former is sensitive to the execution itself. In this context, execution-error implies an error-signal that varies with where the correct target is with respect to the selected target. In contrast, outcome-error simply reflects whether the selected target is correct.

I am not convinced that the authors' definition is correct here. Technically, outcome-error should be when the monkey selects the letter "A" but the letter "B" appears, and execution-error is when the monkey wants to move the cursor left, but the cursor goes in another direction.

Regardless, it was found that the direction of the correct target explained a small percentage (~15%) of the error-signal variance. Very promising!


This paper has basically validated and performed my planned (focus on past tense) first stage experiments using BMI-controlled cursor tasks. Another thing to note is that PMd shows significant outcome-modulation earlier than M1, fitting with the role of that area.

Next step would probably be online adaptation of the decoder informed by the error-signal. An important point yet to be addressed: since degrading and nonstationary signals motivate online adaptation of the kinematic decoder, the error-signal decoder would also require adaptation. This adaptation in the case of BMI-controlled typing is simple -- is the backspace being pressed? In other tasks, external knowledge still needs to be known...

Oct 14, 2017 - Chi-squared post-hoc test, and how trolly statistical testing is

Chi-squared test is any statistical hypothesis test wherein the sampling distribution of the test statistic is a chi-squared distribution when the null hypothesis is true. It is commonly used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in one or more categories.

The calculations for this test are fairly simple (good examples), and it is used to compare whether the proportions of a category in a group are significantly different from expected.


On first look, the difference between ANOVA and chi-squared is pretty straightforward. ANOVA tries to identify whether the variance observed in a dependent variable can be explained by different levels of categorical independent variables.

Do hours of sleep affect health? (One-way ANOVA)

Do gender and hours of sleep affect health? (Two-way ANOVA)

Using ANOVA, we would survey a bunch of people about their hours of sleep, gender, and health. Suppose we divide hours of sleep into 3 levels, and the health score varies from 1 to 10. In one-way ANOVA, for example, we would get the mean and standard deviation of the health scores for people with low, medium, or high hours of sleep. Depending on the overlap between those distributions, we can either reject or fail to reject the hypothesis that hours of sleep affect health.
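That comparison of group means against their spread is exactly the F statistic; here is a minimal numpy sketch for this hypothetical survey (in practice one would use MATLAB's anova1 or scipy.stats.f_oneway):

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic: between-group variance over within-group variance.

    groups: list of 1-D arrays, e.g. health scores for the
    low / medium / high hours-of-sleep groups.
    """
    k = len(groups)                          # number of levels
    n = sum(len(g) for g in groups)          # total sample size
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F means the group means differ by more than the within-group scatter would suggest.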

Chi-squared can also be used to answer whether there's a relationship between hours of sleep and health. We can build a contingency table where the columns are levels of sleep and the rows are levels of health score. The chi-squared test would then tell us whether the number of people in each cell deviates from what is expected.
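The omnibus test on such a table can be sketched directly from its definition (numpy only; scipy.stats.chi2_contingency does the same and also returns the p-value):

```python
import numpy as np

def chi_squared_contingency(M):
    """Chi-squared statistic and degrees of freedom for a contingency
    table M, e.g. rows = health level, columns = hours-of-sleep level."""
    M = np.asarray(M, dtype=float)
    n = M.sum()
    # Expected count under independence: (row sum) * (column sum) / n
    E = np.outer(M.sum(axis=1), M.sum(axis=0)) / n
    chi2 = ((M - E) ** 2 / E).sum()
    dof = (M.shape[0] - 1) * (M.shape[1] - 1)
    return chi2, dof
```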

Of course, we can also simply divide the health score into either high or low, and make a logistic regression where the independent variable is the hours of sleep.

ANOVA is probably the most appropriate test here. Different people would give different answers to the same question. The point is that simply saying "chi-squared test deals with count data" is right and wrong at the same time. But I digress.


So, the chi-squared test itself serves as an omnibus test like ANOVA, indicating whether the observed counts in the cells of the contingency table are significantly different from expected. But it does not tell us which cell. In ANOVA, there is post-hoc testing, and MATLAB handles it very easily by passing the result from anova1 or anovan into multcompare, which further handles multiple comparisons.

In comparison, post-hoc chi-squared test is not as common -- MATLAB does not have one and it is not well-documented in other packages like SPSS.

There are several methods documented. My favorite, and probably the most intuitive one, is the residual method of Beasley and Schumacker 1995. After the omnibus chi-squared test rejects the null hypothesis, the post-hoc steps are:

  1. Make the contingency table M as in any Chi-squared test.

  2. Get the expected value $E(i,j)$ for each cell. If $[i,j]$ indexes the table $M$, then $E(i,j) = \left(\sum_i M(i,j)\right)\left(\sum_j M(i,j)\right)/n$, where $n = \sum_{i,j} M(i,j)$.

  3. Obtain standardized residuals for each cell: $e(i,j) = \frac{M(i,j) - E(i,j)}{\sqrt{E(i,j)}}$. These values are equivalent to the square root of each cell's chi-squared value.

  4. The standardized residuals approximately follow a standard normal distribution, so we can obtain two-tailed or one-tailed p-values from them. Multiple comparison procedures can be applied as usual to the resulting p-values.
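Steps 2-4 can be sketched in a few lines of numpy; math.erfc supplies the two-tailed normal p-value without scipy. A useful sanity check: the squared residuals sum to the omnibus chi-squared statistic.

```python
import numpy as np
from math import erfc, sqrt

def posthoc_residuals(M):
    """Standardized residuals e(i,j) = (M(i,j) - E(i,j)) / sqrt(E(i,j))."""
    M = np.asarray(M, dtype=float)
    n = M.sum()
    E = np.outer(M.sum(axis=1), M.sum(axis=0)) / n   # expected counts
    return (M - E) / np.sqrt(E)

def two_tailed_p(e):
    """Two-tailed p-value of a standard normal residual."""
    return erfc(abs(e) / sqrt(2.0))
```

Apply your favorite multiple-comparison correction (e.g. Bonferroni over the number of cells) to the resulting p-values.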

MATLAB implementation here.

Oct 14, 2017 - More Jekyll and CSS

GitHub's gh-pages has updated to use rouge, a pure Ruby-based syntax highlighter. I have had enough of the build-failure emails caused by my use of the Python highlighter Pygments. Time to upgrade.

Installing Ruby
My Ubuntu machine was outdated, and according to GitHub, I should have Ruby 2.x.x rather than staying at my 1.9.

Installed the Ruby version manager -- rvm -- and followed the instructions on the GitHub page. Had a strange permission error [https://github.com/bundler/bundler/issues/5211] during bundle install; solved it finally by deleting the offending directory.

Basic syntax highlighting using the fenced-code blocks can be achieved following instructions here, however, enabling line numbers requires using the ```<language> tag, which is not consistent with the custom rouge syntax highlighting themes out of the box. Required a bunch of CSS stylings to get them to work. In the following steps /myjekyll represents the path to my jekyll site's root directory.

  1. All my css style sheets are in the directory /myjekyll/css. In my /myjekyll/_includes/header.html file, the syntax highlighting stylesheet is included as syntax.css. I did

    rougify style monokai > /myjekyll/css/syntax.css
    

This will give me good-looking fenced code blocks with the monokai theme, but the syntax highlighting and line numbers will be completely screwed up. Specifically, the text colors will be those of the default highlighting theme.

  2. In the new syntax.css stylesheet, we have the following:

    .highlight {
      color: #f8f8f2;
      background-color: #49483e;
    }
    

    We want the code blocks to be consistent with these colors. Inspecting the code block elements allows us to set the appropriate CSS properties. Notably, the highlighted code-block with line numbers is implemented as a table: one td for the line numbers, and one td for the code. I added the following to my /myjekyll/css/theme.css file (or whatever other stylesheet is included in your header)

    /* Syntax highlighting for monokai -- see syntax.css */
    .highlight pre,
    .highlight .table 
    {
        border: box;
        border-radius: 0px;
        border-collapse: collapse;
        padding-top: 0;
        padding-bottom: 0;
    }
    
    .highlight table td { 
        padding: 0px; 
        background-color: #49483e; 
        color: #f8f8f2;
        margin: 0px;
    }
    
    /* Border to the right of the line numbers */
    .highlight td.gutter {
        border-right: 1px solid white;
    }
    
    .highlight table pre { 
        margin: 0;
        background-color: #49483e; 
        color: #f8f8f2;
    }
    
    .highlight pre {
        background-color: #49483e; 
        color: #f8f8f2;
        border: 0px;    /* no border between cells! */
    }
    
    /* The code box extend the entire code block */
    .highlight table td.code {
        width: 100%;
    } 
    
  3. The fenced code-block by default wraps a long line instead of creating a horizontal scrollbar, unlike using the highlight tag. According to the internet, this can be done by adding to the style sheet:

    /* fence-block style highlighting */
    pre code {
        white-space: pre;
    }
    
  4. One last modification for in-line code highlighting:

    /* highlighter-rouge */
    code.highlighter-rouge {
        background-color: #f8f8f8;
        border: 1px solid #ddd;
        border-radius: 4px;
        padding: 0 0.5em;
        color: #d14;
    }
    
  5. A final thing that is broken about the

    {% highlight %}

    tag is that using it inside a list will break the list. In this post, all items after the list with line numbers have started their numbering all over again. According to the ticket here, there is no easy solution to this because it is related to the kramdown parser. Using different indentation (3 spaces or 4 spaces) in list items does not change this. Some imperfect solutions are suggested here and here. None fix the indentation problem. But, by placing

    {:start="3"}

    one line before my third list item allows the following items to have the correct numbering.

Aug 9, 2017 - BMI skill acquisition through stimulation

There is an interesting section on Approaches for BCI Learning in the review Brain computer interface learning for systems based on electrocorticography and intracortical microelectrode arrays. Specifically:

Since cortical stimulation can modulate cortical activity patterns (Hummel and Cohen, 2006; Harvey and Nudo, 2007), it is conceivable that cortical stimulation may be able to replace or supplement repetitive behavior training to induce changes in cortical activity and accelerate BCI learning (Soekadar et al., 2014). While this approach has not been well investigated for BCI learning, previous studies about neuroplasticity (Gage et al., 2005; Jackson et al., 2006) and rehabilitation using neurostimulation (Ziemann et al., 2002; Hummel et al., 2005; Hummel and Cohen, 2006; Harvey and Nudo, 2007; Perez and Cohen, 2009; Plow et al., 2009; Reis et al., 2009) can shed some light on the feasibility of this approach. At the macroscopic level, cortical areas can be stimulated non-invasively using transcranial magnetic or current stimulations. In the context of stroke rehabilitation, it has been suggested that such stimulation can enhance motor cortical excitability and change cortical connectivity (Hummel et al., 2005; Hummel and Cohen, 2006; Perez and Cohen, 2009). [...] A recent pilot study has shown that transcranial direct current stimulation induces event-related desynchronization associated with sensorimotor rhythm (Wei et al., 2013). This event-related desynchronization, along with motor imagery, was used to improve the performance of an EEG based BCI.

Nothing too surprising there; there has been quite some evidence that anodal tDCS has positive effects on motor and cognitive skill acquisition. I particularly like Soekadar 2014: tDCS was used to help subjects train to modulate sensorimotor rhythms (SMR, 8-15Hz). They hypothesized that M1 has a causal link to SMR modulation, so anodal tDCS was applied there. They found anodal stimulation resulted in better performance than sham and cathodal stimulation. I will not comment on whether this experiment alone establishes its conclusion that "M1 is a common substrate for acquisition of physical motor skills and learning to control brain oscillatory activity", but it certainly serves as evidence that tDCS may help in BMI control acquisition.

At the microscopic level, based on the concept of Hebbian or associative learning, motor cortical reorganization can be induced by coupling action potentials of one motor cortical neuron with electrical stimulation impulses of another motor cortical neuron (Jackson et al., 2006; Stevenson et al., 2012). Besides electromagnetic stimulation, optogenetics is another approach to stimulate cortical tissue.

They reference the Jackson, Mavoori, and Fetz 2006 paper Long-term motor cortex plasticity induced by an electronic neural implant. In this paper, stimulation was delivered on one electrode upon detection of an action potential on another. This was done for 17 pairs of electrodes over 8 to 9 sessions spread over 1 to 4 days. The stimulation current was just above the threshold current needed to elicit a muscle response (wrist). They first measured the muscle response torque vector elicited from stimulating the recording electrode, the stimulating electrode, and a control electrode. After conditioning, they showed that the response vector elicited from stimulating the recording electrode had shifted toward the response vector of the stimulating electrode. Meanwhile, the control electrode's response vector did not change significantly. This result seems consistent with the second neuron firing in sync with the first neuron. Further, varying the lag between recording and stimulation has an effect on this shift, consistent with spike-timing dependence. The authors then suggest that this can be a method to induce "artificial synapses". While the use of cortical stimulation to selectively strengthen specific neural pathways during rehab is not a brand new idea, the ability to create artificial connections is crazy cool for BMI.

In BMI, we know exactly (exaggeration) what signals will be mapped by a decoder to movement commands. This means if we use a random decoder assigning weights to the units recorded at the different electrodes, we can potentially apply stimulation at those electrodes according to our decoding weight matrix to enhance BMI skill acquisition.

A more recent study from Fetz's group, Paired stimulation for spike-timing-dependent plasticity in primate sensorimotor cortex, uses a slightly different approach to induce STDP. They first map out connections between neurons near implanted pairs of electrodes, judging from the measured LFP in response to stimulation. Neuron A is putatively pre-synaptic, Neuron B post-synaptic, and Neuron C has recurrent connections with both of them and serves as a control. During conditioning they stimulated A, followed by B. The results again measured the evoked EP for A->B, B->A, and all to C.

Results are mixed: 2 out of 15 pairs showed increased EP in the A->B direction. Network changes were seen -- neurons near the implanted area not involved in the paired stimulation also showed changes in their EP, and some showed depressed EP. Conditioning with spike-triggered stimulation in the 2006 study produced a larger proportion of positive plasticity effects than paired electrical stimulation, and the authors propose possible mechanisms:

  1. Stimulating at site A rather than using recorded trigger spikes would have activated a larger population of more diverse cell types, and consequently could recruit a broader range of plasticity mechanisms, such as anti-Hebbian effects.

  2. The triggering spikes in the 2006 study occurred in association with normal behavior, whereas the paired stimulation was delivered in a preprogrammed manner, independently of the modulation of local activity with movements or sleep spindles.

  3. Most importantly, the 2006 study measured plasticity effects in terms of behavior, rather than EP.

So, it looks like behavior is important, and spike-triggered stimulation may engage better plasticity mechanisms, I hypothesize. Ideally, the stimulation specificity afforded by optogenetics would be a better tool to study this effect.

And apparently someone in our group from 2010-2012 had a similar idea about using stimulation to assist in BMI skill acquisition, and the results were bad, resulting in mastering-out... So I better put this on the back burner.

Jul 27, 2017 - Assisted MCMC -- Learning from human preferences

The problem with using intention estimation in BMI is that the algorithm designers need to write down an objective function. While this is easy to do for a simple task, it is unclear how to do it, and not practical, for a general-purpose prosthesis.

Using a reinforcement-learning-based decoder that derives an error signal directly from cortical activity would be nice, and is in my opinion the holy grail of BMI decoding. However, deriving the proper error signal and constructing an error decoder presents the same problem -- the designers have to first establish what exactly constitutes an error condition.

A compromise is perhaps a human-assisted search of the decoding space. The possible decoding space is large, but as Google + Tri Alpha demonstrated, by guiding an MCMC search via human inspection of the results, good results are possible even in a problem as complicated as plasma confinement.

This general approach of learning from human preferences is also becoming a hot topic recently, as getting an objective function slightly wrong may result in completely unseen consequences (much worse than arrogantly assuming a monkey's goal in an experiment is to always get a grape).

Feb 24, 2017 - Using FemtoBeacon with ROS

FemtoDuino Offical Site

FemtoBeacon Specs

We bought the 5-coin + dongle version. Each chip has an onboard ATMEL SAM R21E (ATSAMR21E18A), which is Arduino compatible. The coins have an onboard precision altimeter and a 9-axis IMU (MPU-9250). The wireless communication between the coins and the dongle can be programmed to use any available 2.4GHz RF stack. FemtoDuino implements Atmel's LwMesh stack.

The design files for the boards and example programs are in the femtobeacon repository. The repository's README includes setup instructions to program the FemtoBeacon with bare-metal C using the Atmel Software Framework. Alex, the maker of FemtoBeacons, personally uses and suggests using the Arduino core and bootloader instead. Discussion.

I am using the Arduino core and bootloader for my development as well.

Machine Setup

Femtoduino has a list of instructions. There is also a list of instructions on Femtoduino's github repo.

Here I compile the relevant procedures I have taken, on Ubuntu 14.04.

  1. Download Arduino IDE, version 1.8.1 or higher.

  2. From Arduino IDE's board manager, install Arduino SAMD Boards (32-bits ARM Cortex-M0+) by Arduino. Femtoduino recommends version 1.6.11. I have tested 1.6.7 to 1.6.12; it doesn't seem to make much of a difference.

  3. Add the package URL (given in the ArduinoCore repo) to the Additional Board Manager URLs field in the Arduino IDE via File > Preferences (Settings tab).

    The Stable Release URL is:

    https://downloads.femtoduino.com/ArduinoCore-atsamd21e18a/package_atsamd21e18a-release-build_index.json.

    The hourly build URL is:

    https://downloads.femtoduino.com/ArduinoCore-atsamd21e18a/package_atsamd21e18a-hourly-build_index.json.

    I have found the hourly build works without compilation errors.

  4. This core is now available as a package in the Arduino IDE board manager. Use Arduino's Board Manager to install Atmel SAM D21/R21 core (ATSAMD21E18A/ATSAMR21E18A) by Femtoduino.

    At this point, with default installations, there should be a .arduino15/packages/femtoduino/hardware/samd/9.9.9-Hourly directory, which contains the Arduino core for our device. This directory should contain the files in the ArduinoCore-atsamd21e18a git repo. The example files for the RF-dongle and RF-coins are in the libraries/ folder of this directory.

  5. Install the FreeIMU libraries from Femtoduino's fork of FreeIMU-Updates. This is needed for the onboard MCUs to talk to the IMU chips.

    This is done by either forking or downloading the FemtoBeacon branch (important!) of FreeIMU-Updates. Make the libraries visible to Arduino -- two ways to do this:

    1. Copy all the folders, except the MotionDriver/ folder, in FreeIMU-Updates/libraries into the Arduino libraries folder. By default, the Arduino libraries folder is under ~/Arduino/libraries. See the Arduino library guide for more instructions.

    2. As I might make changes to the FreeIMU library code, it's easier to symbolically link the libraries to Arduino's library directory. Do this with:

      cd ~/Arduino/libraries

      cp -r --symbolic-link ~/PATH_TO/FreeIMU-Updates/libraries/* .

      Remember to delete the symbolic link to MotionDriver/ since we don't need it.

    In the FreeIMU/FreeIMU.h header file (from the folders copied previously), make sure the line #define MPU9250_5611 is the only uncommented line under 3rd party boards.

    In FreeIMU/FreeIMU.h, find the following section:

    //Magnetic declination angle for iCompass
    //#define MAG_DEC 4 //+4.0 degrees for Israel
    //#define MAG_DEC -13.1603  //degrees for Flushing, NY
    //#define MAG_DEC 0.21  //degrees for Toulouse, FRANCE
    //#define MAG_DEC 13.66 // degrees for Vallejo, CA
    //#define MAG_DEC 13.616 // degrees for San Francisco, CA
    #define MAG_DEC -9.6    // degrees for Durham, CA 
    //#define MAG_DEC 0
    

    and enter the magnetic declination angle for your location. Do this by going to NOAA, entering your zip code, and getting the declination result. In the result, an east declination angle is positive and a west declination angle is negative. This is needed for the magnetometer readings responsible for yaw calculation.

  6. Install the FemtoDuino port of the LwMesh library. This is needed to run the wireless protocols.

    Fork or download the at86rf233 branch of FemtoDuino's fork of library-atmel-lwm. Having the incorrect branch will break compilation.

    Move all the files in that repository into the Arduino library folder as at86rf233.

  7. Install the Femtoduino fork of RTCZero library, osculp32k branch. This is needed to use an external 32kHz clock onboard the chips (see compilation issue thread for discussion).

    Fork or download the osculp32k branch of Femtoduino's fork of RTCZero. Move it to the Arduino library folder as RTCZero.

Testing Coin and IMU

To check whether the machine setup was successful, in the Arduino IDE select the Board: "ATSAMR21E18A (Native USB Port)" option. If it's not available, steps 1-4 of the Machine Setup were probably done incorrectly.

Select the port corresponding to the connected Coin-chip.

Open FemtoBeacon_Rf_FreeIMU_raw.ino through File > Examples > FemtoBeacon_Rf.

Compile and upload. If the machine setup has gone correctly, you should be able to see IMU readings in the serial monitor. The serial plotter can also visualize the outputs.

Testing Dongle

Testing the Dongle requires some modification of FemtoBeacon_Rf_MESH_IMU_Dongle.ino (can also be accessed via Files > Examples > FemtoBeacon_Rf). This program makes the Dongle listen for wireless traffic and print it over Serial. If there's no traffic, nothing is done.

This is not very informative if we just want to see if the Dongle's working. So change the original handleNetworking() method:

void handleNetworking()
{
    SYS_TaskHandler();
}

to

unsigned long start = millis(); // Global variable

void handleNetworking()
{
    SYS_TaskHandler();
    if (millis() - start > 1000) {
        Serial.print("Node #");
        Serial.print(APP_ADDRESS);
        Serial.println(" handleNetworking()");
        start = millis();
    }
}

This way, even without wireless traffic, the dongle will print out "Node #1 handleNetworking()" every second in the serial monitor.

Testing Femtobeacon Networking

Upload one Femtobeacon coin with FemtoBeacon_Rf_MESH_IMU_Coin.ino and the dongle with FemtoBeacon_Rf_MESH_IMU_Dongle.ino.

Keep the dongle plugged into your computer with Arduino IDE running, and the other coin unconnected but powered through its USB connector.

In the Serial monitor, you should be able to see outputs such as

Node #1 handleNetworking()
Node #1 handleNetworking()
Node #1 handleNetworking()
Node #1 handleNetworking()
Node #1 handleNetworking()
Node #1 receiveMessage() from Node #2 = lqi: 156  rssi: -91  data:   154.23,   16.56,   34.45
Node #1 receiveMessage() from Node #2 = lqi: 172  rssi: -91  data:   153.93,   16.89,   34.93
Node #1 receiveMessage() from Node #2 = lqi: 220  rssi: -91  data:   153.66,   17.18,   35.32
Node #1 receiveMessage() from Node #2 = lqi: 160  rssi: -91  data:   153.39,   17.43,   35.61

where Node #1 represents the dongle and Node #2 represents the coin beacon.

Calibrating IMU

Follow Femtoduino's official instructions. Note that the calibration utility cal_gui.py is in the FreeIMU-Updates folder that was forked in Machine Setup step 5.

Download Processing and run the cube sketch to check the results.

Before starting to collect samples from the GUI, in Arduino's Serial Monitor send a few "q" commands (reset IMU) with some time in between, or an "r" command (reset quaternion matrix), for best results. See the post here

Common Errors

  1. If serial port permissions aren't set up correctly, we may see this error:

    Caused by: processing.app.SerialException: Error touching serial port '/dev/ttyACM0'..

    Solve by adding your user account to the dialout group:

    sudo usermod -a -G dialout yourUserName

  2. FreeIMU's variants of the getYawPitchRoll methods don't actually give the yaw, pitch, and roll one might expect. From the comments for the method:

    Returns the yaw pitch and roll angles, respectively defined as the angles in radians between
    the Earth North and the IMU X axis (yaw), the Earth ground plane and the IMU X axis (pitch)
    and the Earth ground plane and the IMU Y axis.
    
    @note This is not an Euler representation: the rotations aren't consecutive rotations but only
    angles from Earth and the IMU. For Euler representation Yaw, Pitch and Roll see FreeIMU::getEuler
    

    The one that I expected is given by the getEuler() method:

    Returns the Euler angles in degrees defined with the Aerospace sequence.
    See Sebastian O.H. Madgwick's report "An efficient orientation filter for
    inertial and inertial/magnetic sensor arrays", Chapter 2, Quaternion representation
    
  3. After the previous step, we should have reasonable yaw, pitch, and roll readings in degrees. However, the yaw reading may exhibit a huge drift/set-point behavior: while the beacon sits flat on a surface and is rotated about its z-axis, the yaw reading initially changes to some reasonable measurement, then drifts back to the same value.

    Since the magnetometer should be fairly stable unless the environment has a lot of changing magnetic fields, this is likely due to how the sensor-fusion algorithm updates the measurements. In FreeIMU.h, look for the lines:

    // Set filter type: 1 = Madgwick Gradient Descent, 0 - Madgwick implementation of Mahoney DCM
    // in Quaternion form, 3 = Madwick Original Paper AHRS, 4 - DCM Implementation
    #define MARG 3
    

    Of the possible methods, only method 3 gives me a yaw reading with a non-instant drift time...

Directories

  • Arduino/libraries: all the third-party libraries for lwmesh, RTCZero, and FreeIMU_Updates. FreeIMU_Updates is symlinked to the FreeIMU_Updates git repo.
  • .arduino/.../femtoduino../: Arduino board-manager installed cores.

References

Compilation issue

Magnetometer Reading Explanations

Complementary Filters

Dec 17, 2016 - Lebedev, Carmena et al 2005: Cortical ensemble adaptation to represent velocity of an artificial actuator controlled by a brain-machine interface

Cortical ensemble adaptation to represent velocity of an artificial actuator controlled by a brain-machine interface

Context
Follow-up to Carmena et al 2003, Learning to control a brain-machine interface for reaching and grasping by primates. In the experiments, the monkey first learned to do a target-acquisition task via joystick control (pole control), then brain control was used to control the cursor while the monkey was still allowed to move the pole (brain control with hand movement, BCWH), and finally the joystick was removed (brain control without hand movement, BCWOH). All bin sizes = 100 ms.

Individual cortical neurons do not have a one-to-one relationship to any single motor parameter, and their firing patterns exhibit considerable variability. Precise motor control is achieved through the action of large neuronal ensembles.

The key questions addressed was: How do the neuronal representations of the cursor movement and arm movement of the recorded ensembles change between pole-control, BCWH, and BCWOH?

Method

Implants in M1, PMd, SMA, S1, and PP.

  1. Tuning to velocity during pole and brain control. Constructed a linear regression model to predict neuronal firing rates from velocity:

     $$n(t+\tau) = a(\tau)V_x(t) + b(\tau)V_y(t) + c(\tau) + \epsilon(t,\tau)$$

     where $t$ is time, $n(t+\tau)$ is the neuronal firing rate at time $t+\tau$, and $\tau$ is a time lag. The square root of $R^2$ for this regression was termed the velocity tuning index (VTI) at $\tau$. A VTI curve is then constructed for $\tau$ ranging over $[-1, 1]$ seconds.

  2. The preferred direction of a neuron is determined as $PD(\tau) = \arctan(b(\tau)/a(\tau))$.

  3. To examine the change in PD between different tasks, a correspondence index is defined as

     $$C = \frac{90 - |\alpha - \beta|}{90}$$

     where $\alpha$ and $\beta$ are statistics calculated from the ensemble's PDs (measured in degrees) for the different tasks. Values of $C$ approaching zero mean no correspondence between the PDs; values approaching 1 mean the opposite.

  4. Shuffle test, to examine how correlations between neurons contribute to tuning properties: destroy correlations between neurons by shifting the spike trains of different neurons with respect to each other by a random interval ranging from 0 to 200 s. After shuffling, VTIs are recalculated from the regression models in (1). A higher unshuffled VTI would indicate that correlated firing between neurons improves the tuning characteristics of individual neurons.

  5. Straight-line analysis extracted times when trajectories were straight for at least 300 ms. VTIs were calculated for these times. Directional tuning depth is calculated as the difference in average firing rate between the directions of maximum and minimum firing, divided by the SD of the firing rate.

  6. Offline prediction of hand velocity is similar to the construction of the online decoder, with:

     $$V_x(t) = b + \sum_{\tau=-m}^{n} w(\tau)\,n(t+\tau) + \epsilon(t)$$

  7. Random neuron dropping: 10 min of neuronal population data fit the velocity prediction model. The model is then used to predict on a different 10 min period. A single neuron is randomly dropped from the population, and the model is retrained and retested. This process is repeated until no neurons remain. The entire procedure (dropping 1 neuron up to the entire population) is repeated 100 times to yield $R$ as a function of the number of neurons.
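The velocity-tuning regression in (1) is easy to sketch. Below is a minimal Python/NumPy version on synthetic data; the function name, data, and tuning parameters are my own, not the paper's:

```python
import numpy as np

def velocity_tuning_index(rate, vx, vy):
    """Regress firing rate on (Vx, Vy) with an intercept and return
    sqrt(R^2), i.e. the VTI at a single lag."""
    X = np.column_stack([vx, vy, np.ones_like(vx)])   # a(tau), b(tau), c(tau)
    coef, _, _, _ = np.linalg.lstsq(X, rate, rcond=None)
    pred = X @ coef
    ss_res = np.sum((rate - pred) ** 2)
    ss_tot = np.sum((rate - rate.mean()) ** 2)
    return np.sqrt(max(0.0, 1.0 - ss_res / ss_tot))

# Synthetic example: a neuron linearly tuned to Vx should score near 1.
rng = np.random.default_rng(0)
vx = rng.standard_normal(1000)
vy = rng.standard_normal(1000)
rate = 2.0 * vx + 0.5 + 0.1 * rng.standard_normal(1000)
print(velocity_tuning_index(rate, vx, vy))  # close to 1
```

To build a VTI curve, shift `rate` relative to the velocities by each lag in [-1, 1] s before calling the function; the PD in (2) falls out of the same fit as `arctan(coef[1] / coef[0])`.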

Results

  1. Hand and robot velocity in pole-control vs. BCWH -- More correlated during pole-control (expected), less during BCWH. Ranges of velocity similar. In BCWH, robot moved faster than hand. Ranges of robot velocity similar between BCWH and BCWOH.

  2. Neuronal tuning during different modes -- Velocity tuning defined as correlation between neuronal rate and velocity. Individual neurons exhibited diversity of tuning patterns:

    • Fig.3: Tuned to both pole-control and brain-control (shown in 3A). VTI higher before the instantaneous velocity measurement (IVM). VTIs for different modes significantly different (Kruskal-Wallis ANOVA, Tukey's multiple comparison). Highest VTI during pole-control. After transitioning to brain control, this neuron retained many features of its original velocity tuning, but also became less tuned to hand movements.

    • Fig.4: Tuned to hand movements. Changes in VTI between different modes are significant. Tuning present during pole-control and BCWH, but vanished in BCWOH. Highest VTI before IVM. Observed lag-dependent PD to both hand and robot velocity. Because of the strong dependency of tuning on the presence of hand movements, this neuron is likely to have been critically involved in generation of descending motor commands.

    • Fig. 5: Tuned to BCWOH.

    Pairwise comparison of pole-control with BCWH and BCWOH (within same sessions) showed in majority of neurons, peak VTI for hand decreased after transitioning to BCWH. In BCWH, the peak VTI for robot is significantly greater than peak VTI for hand for a majority of neurons. Peak VTI during BCWOH was greater than during pole control in 38% of neurons.

  3. Tuning patterns for ensembles

    • Fig.6: Ensemble VTI for the different control modes. In the majority of neurons, and in the average VTI, tuning to hand movement decreased from pole-control to BCWH. VTIs for robot movement are higher pre-IVM, because only bins preceding the IVM were used for online prediction, and the delay of the robot/cursor introduces a lag. VTI for hand movement peaked closer to the IVM than for robot movement. The average VTI is similar to the M1 VTI, owing to M1's stronger tuning.

      Occurrence of ensemble tuning to robot velocity in BCWOH is not surprising, since neural activity controlled the robot movement, but it is not simply a consequence of using the linear model. The shuffle test showed that average VTI decreased after the shuffling procedure, indicating a role of inter-neuronal correlation in cortical tuning. This corroborates Carmena 2003's finding that firing of individual neurons is correlated, and that the correlation increases in brain control.

    • Fig 7: Ensemble PD. Generally a neuron had no fixed PD; its PD rotated as the lag changed (may be due to representation of accelerative and decelerative forces a la Sergio & Kalaska). In BCWH, PDs for both hand and robot resemble those of pole-control. Correspondence of hand PD between pole-control and BCWH is highest at the IVM. For robot PD, correspondence is highest 100ms before the IVM, in agreement with the average velocity VTIs. PDs during BCWOH were confined to a narrower angular range and rotated less.

  4. Neuronal tuning for selected hand movements.

    • Fig 8: Comparing directional tuning to hand movements in pole-control and BCWH. Both directional tuning depth and VTI w/ respect to hand are less in BCWH. Differences are statistically significant.
  5. Offline predictions of hand velocity during pole and brain control. Three prediction curves were made: 1) train with pole-control, predict pole-control; 2) train with pole-control, predict BCWH; 3) train with brain-control, predict BCWH.

    • Fig 9: Prediction quality: (1)>(3)>(2). This decrease in prediction accuracy applies at all lags. Suggests the cortical representation of the robot was optimized at the expense of the representation of the animal's own limb.

Conclusion

  1. Principal finding: once cortical ensemble activity is switched to represent the movements of the artificial actuator, it is less representative of the movement of the animal's own limb, as evidenced by how tuning to hand velocity decreases from pole-control to BCWH.

    • Neuronal tuning to robot movements may be enhanced due to increased correlation between neurons (attention leads to synchrony 2).

    • Supports evidence that cortical motor areas include cognitive signals as well as limb movements. Limb representation is highly flexible and susceptible to illusions and mental imagery (rubber arm). Neuronal mechanisms underlying adaptive properties 1 may be responsible for cortical ensemble adaptation during the BMI operation.

  2. BMI design makes any modulation of neuronal activity translate into movement of the actuator. To interpret it as reflection of new representation of artificial actuators, uses the following evidence:

    • Nonrandomness of actuator movements and behavioral improvements. (Peaking of VTI pre-IVM is not necessarily sufficient, because that's built into the prediction algorithms.)

    • Increased correlation between neurons during brain control (via Carmena 2003).

    • Decreased tuning to hand velocity in BCWH suggests different representation.

  3. Neurons tuned to hand velocity in pole control code for robot velocity in brain control. How do those neurons' modulations not lead to hand movements in brain control? In other words, how does the monkey inhibit hand movements during brain control? Suggests activity at the level of the ensemble changes to accommodate it. The recorded population is a small part of the whole and may be preferentially assigned weights to control the actuator, while the others operate normally. This is the topic of Ganguly and Carmena, 2011.

  4. Neuronal ensemble represents robot velocities better during brain control supports optimal feedback control theory -- motor system as a stochastic feedback controller that optimizes only motor parameters necessary to achieve task goals (meh).

  5. Once the neuronal ensemble starts to control a BMI, imperfections in open-loop model may be compensated by neuronal adaptations to improve control, concurs with (Taylor 2002) using adaptive BMI. Justifies using fixed decoder, and closed-loop decoder calibration stage (CLDA, ReFIT, Schwartz algs).

Further Questions

  1. How exactly does the cortical plasticity process help? Very likely the features of adaptation depend on the requirements of BMI operations. If the task requires both limb movements and BMI operations, what will be the optimal neural system state? It's possible that certain adaptation states are only temporary, to maintain performance.

    • PDs of many neurons become more similar in BCWOH -- may be maladaptive since PD diversity may be needed to improve directional encoding.

    • Shuffling procedures show increased correlations accompanies increased individual neuronal tuning...seems to contradict the previous point.

    • Therefore, the neuronal ensemble controlling the BMI may have been optimizing its performance by trading off the magnitude of tuning and the diversity of PDs, or the increased correlation is only a temporary effect and will decrease with prolonged training (attention in the learning stage increases synchrony).

Probably led to Orsborn & Carmena 2014's task of pressing a button while doing BMI center-out.

1. Cortical neurons can encode movement directions regardless of muscle patterns, represent the target of movement and the movement of a hand-controlled visual marker toward it rather than the actual hand trajectory, encode multiple spatial variables, reflect the orientation of selective spatial attention, and represent misperceived movements of visual targets.

2. Murphy and Fetz, 1992, 1996a,b; Riehle et al., 1997, 2000; Hatsopoulos et al., 1998; Lebedev and Wise, 2000; Baker et al., 2001.

Sep 15, 2016 - Notes on comparison metrics

R-Squared, r, and Simple Linear Regression

Linear regression is perhaps the simplest and most common statistical model (very powerful nevertheless), and fits the model of the form:

y=Xβ+ϵ

This model makes some assumptions about the predictor variables, response variables, and their relationship:

  1. Weak exogeneity - predictor variables are assumed to be error-free.

  2. Linearity - response variable is a linear combination of the predictor variables.

  3. Constant variance - response variables have the same variance in their errors, regardless of the values of the predictor variables. This is most likely invalid in many applications, especially when the data is nonstationary.

  4. Independence of errors - assuming errors of response variables are uncorrelated with each other.

  5. Lack of multicollinearity - in the predictors. Regularization can be used to combat this. On the other hand, when there is a lot of data, the collinearity problem is not as significant.

The most common linear regression is fitted with the ordinary least-squares (OLS) method, where the sum of the squares of the differences between the observed responses in the dataset and those predicted by the model is minimized.

Two very common metrics associated with OLS linear regression are the coefficient of determination R2 and the correlation coefficient, specifically the Pearson correlation coefficient r.

The first measures how much variance is explained by the regression, and is calculated as 1 − SSres/SStot. The second measures the linear dependence between the response and predictor variables; r lies between -1 and 1, while R2 is at most 1 (and, as shown below, can be negative). Say we fit a linear regression and want to check how good that model is: we can get the predicted value y^=Xβ and check the R2 and r values between y and y^. When a linear regression is fit using OLS, R2=r2.

However, if two vectors are linearly related but not fit using OLS regression, then they can have high r but negative R2. In MATLAB:

>> a = rand(100,1);
>> b = rand(100,1);
>> corr(a,b)

ans =

    0.0537

>> compute_R_square(a,b)

ans =

   -0.7875

>> c = a + rand(100,1);
>> corr(a,c)

ans =

    0.7010

>> compute_R_square(a,c)

ans =

   -2.8239

Even though a and c above are clearly linearly related, their R2 value is negative. This is because the 1 − SSres/SStot definition assumes c is generated from an OLS regression.
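The same check can be reproduced in Python; `r_squared` below is my own stand-in for the `compute_R_square` helper used above. A deterministic example makes the point even more starkly: a vector and its reverse are perfectly (negatively) linearly related, yet R2 is negative:

```python
import numpy as np

def r_squared(y, y_pred):
    """Coefficient of determination, 1 - SS_res/SS_tot. Can go negative
    when y_pred was not produced by an OLS fit to y."""
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_rev = y[::-1]                     # perfect negative linear relationship
print(np.corrcoef(y, y_rev)[0, 1])  # -1.0: |r| is as high as it gets
print(r_squared(y, y_rev))          # -3.0: the mean of y "fits" better
```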

According to Wikipedia,

Important cases where the computational definition of R2 can yield negative values, depending on the definition used, arise where the predictions that are being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data, and where linear regression is conducted without including an intercept. Additionally, negative values of R2 may occur when fitting non-linear functions to data.[5] In cases where negative values arise, the mean of the data provides a better fit to the outcomes than do the fitted function values, according to this particular criterion

This point really tripped me up when I was checking my decoder's performance. Briefly, I fit a Wiener filter to training data, apply on testing data, and then obtain the R2 and r between the predicted and actual response of the testing data. I expected R2=r2. However, R2 was negative and its magnitude was not related to r at all -- precisely for this reason.

In the case of evaluating decoder performance then, we can instead:

  1. Fit a regression between y and ypred, then check the R2 of that regression. Note that when there is a lot of data, regardless of how weak the correlation between them is, the fit will be statistically significant. So while the value of R2 is useful, its p-value is not as useful.

  2. Correlation coefficient r. However it is susceptible to outliers when there is not much data. This has been my go-to metric for evaluating neural tuning (as accepted in the field) and decoder performance.

  3. Signal-to-noise ratio (SNR). As Li et al., 2009 pointed out, r is scale- and translation-invariant; this means the vectors [1,2,3,4,5] and [2,3,4,5,6] have r=1, which fails to capture the offset.

SNR is calculated as var/mse (often converted to dB), where var is the sample variance of y and mse is the mean squared error of ypred. This measures how well ypred 'tracks' y, as desired.
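A small Python sketch of this metric (the helper name is mine), showing how SNR penalizes the constant offset that r ignores:

```python
import numpy as np

def snr_db(y, y_pred):
    """SNR = var(y) / MSE(y, y_pred), converted to decibels."""
    var = np.var(y, ddof=1)            # sample variance of the actual signal
    mse = np.mean((y - y_pred) ** 2)   # mean squared error of the prediction
    return 10.0 * np.log10(var / mse)

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_off = y + 1.0                        # the offset example from Li et al.
print(np.corrcoef(y, y_off)[0, 1])     # 1.0 -- r is blind to the offset
print(snr_db(y, y_off))                # ~3.98 dB -- SNR is not
```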

May 7, 2016 - McNiel, [...] & Francis 2016: Classifier performance in primary somatosensory cortex towards implementation of a reinforcement learning based brain machine interface

Classifier performance in primary somatosensory cortex towards implementation of a reinforcement learning based brain machine interface, submitted to Southern Biomedical Engineering Conference, 2016.

In the theme of RL-based BMI decoders, this paper evaluates classifiers for identifying the reinforcing signal (or ERS, according to Millan). It evaluates the ability of several common classifiers to detect impending reward delivery within S1 during a grip-force match-to-sample task performed by monkeys.

Methods

  1. Monkey trained to grip with the right hand to hold and match a visually displayed target.

  2. PETHs from S1 generated by: 1) aligning to the timing of the visual cue denoting the impending result of the trial (reward delivered or withheld); 2) aligning to the timing of the trial outcome (reward delivery or withholding).

    PETH time extended 500ms after the stimulus of interest, using a 100ms bin width (so only 500ms of total signal: 5-bin vector samples).

  3. Dimension-reduction of PETH via PCA. Use only the first 2 PCs, feed into classifiers including:

    • Naive Bayes
    • Nearest Neighbor (k not specified; possibly just one neighbor)
    • Linear SVM
    • Adaboost
    • Quadratic Discriminant Analysis (QDA)
    • LDA

    Trained on 82 trials, tested on 55 trials. Goal is to determine if a trial is rewarding or non-rewarding.
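The whole pipeline (5-bin PETH vectors → first 2 PCs → classification) can be sketched with NumPy alone. Everything below — the synthetic "rewarding vs. non-rewarding" PETH shapes, the trial split, the 1-NN rule — is my own illustration, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 5-bin PETHs (100ms bins over 500ms): rewarding trials ramp up,
# non-rewarding trials stay flat.
def make_trials(n, ramp):
    return np.linspace(0.0, ramp, 5) + 0.3 * rng.standard_normal((n, 5))

X_train = np.vstack([make_trials(41, 2.0), make_trials(41, 0.0)])
y_train = np.array([1] * 41 + [0] * 41)   # 82 training trials
X_test = np.vstack([make_trials(28, 2.0), make_trials(27, 0.0)])
y_test = np.array([1] * 28 + [0] * 27)    # 55 test trials

# PCA via SVD on the training set; keep only the first 2 PCs.
mu = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
P = Vt[:2].T                              # 5 x 2 projection matrix
Z_train, Z_test = (X_train - mu) @ P, (X_test - mu) @ P

# 1-nearest-neighbor classification in PC space.
dists = np.linalg.norm(Z_test[:, None, :] - Z_train[None, :, :], axis=2)
y_hat = y_train[np.argmin(dists, axis=1)]
print("accuracy:", np.mean(y_hat == y_test))
```

Swapping the last step for any of the other classifiers in the list changes only the decision rule; the PETH → PCA front end stays the same.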

Results

  1. For all classifiers, post-cue accuracy was higher than post-reward accuracy. Consistent with previous work (Marsh 2015) showing that the presence of conditioned stimuli in reward prediction tasks shifts reward-modulated activity in the brain to the earliest stimulus predictive of impending reward delivery.

  2. NN performs the best out of all classifiers. But based on the sample size (55 trials) and the performance figures, not significantly better.

Takeaways

Results are good, but should have more controlled experiments to eliminate confounds...could be seeing an effect of attention level.

Also, still limited to discrete actions.

May 7, 2016 - Platt & Glimcher 1999: Neural correlates of decision variables in parietal cortex

Neural correlates of decision variables in parietal cortex

Motivation

Sensory-motor reflex has been tacitly accepted by many older physiologists as an appropriate model for describing the neural processes that underlie complex behavior. More recent findings (in the 1990s) indicate that, at least in some cases, the neural events that connect sensation and movement may involve processes other than classical reflexive mechanisms, and these data support theoretical approaches that have challenged reflex models directly.

Researchers outside physiology have long argued (since the 1960s) for richer models of the sensory-motor process. These models invoke the explicit representation of a class of decision variables, which carry information about the environment, are extracted in advance of response selection, aid in the interpretation or processing of sensory data, and are a prerequisite for rational decision making.

This paper describes a formal economic-mathematical approach to the physiological study of the sensory-motor process/decision-making, in the lateral intraparietal (LIP) area of the macaque brain.

Caveats

  1. Expectation of reward magnitude result in neural modulations in the LIP. LIP is known to translate visual signals into eye-movement commands.

  2. The same neurons are sensitive to expected gain and outcome probability of an action. Also correlates with subjective estimate of the gain associated with different actions.

  3. These indicate neural basis of reward expectation.

Decision-Making Framework

The rational decision maker makes decision based on two environmental variables: the gain expected to result from an action, and the probability that the expected gain will be realized. The decision maker aims to maximize the expected reward.

Neurobiological models of the processes that connect sensation and action almost never propose the explicit representation of decision variables by the nervous system. Authors propose a framework containing

  1. Current sensory data - reflect the observer's best estimate of the current state of the salient elements of the environment.

  2. Stored representation of environmental contingencies - represent the chooser's assumptions about current environmental contingencies, detailing how an action affects the chooser.

This is a Bayesian framework.

Sensory-Motor integration in LIP

In a previous experiment, the authors concluded that visual attention can be treated as conceptually separable from other elements of the decision-making process, and that activity in LIP does not participate in sensory-attentional processing but is correlated with either outcome contingencies, gain functions, decision outputs, or motor planning.

Experiments

  1. Cue saccade trials: a change in the color of a centrally located fixation stimulus instructed subjects to make one of two possible eye-movement responses in order to receive a juice reward. Before the color change, the rewarded movement was ambiguous. In successive trials, the volume of juice delivered for each instructed response (expected gain), or the probability that each possible response would be instructed (outcome probability), was varied.

    Goal is to test whether any LIP spike rates correlate to variations in these decision variables.

  2. Animals were rewarded for choosing either of two possible eye-movement responses. In subsequent trials, varied the gain that could be expected from each possible response. The frequency with which the animal chose each response is then an estimate of the subjective value of each option.

    Goal is to test whether any LIP activity variation correlates with the subjective estimate of decision variable.

Analysis

Experiment 1

For each condition: 1) keeping outcome probability fixed while varying expected gain, and 2) keeping expected gain fixed while varying outcome probability, a trial is divided into six 200ms epochs. The activity during these epochs is then used to calculate regressions against the following variables:

  1. Outcome probability.
  2. Expected gain.
  3. Type of movement made (up or down saccade).
  4. Amplitude of saccade.
  5. Average velocity of saccade.
  6. Latency of saccade.

Multiple regressions were done for each epoch, and the percentage of recorded neurons showing significant correlations between firing rate and the decision variables was calculated.

The average slope of the regression against either decision variable was taken as a measure of correlation. The same regression slopes were calculated for the type of movement made as well.

For both expected gain and outcome probability, the regression slopes were significantly greater than 0, and greater than those for type of movement, during the early and late visual intervals, decaying during the late-cue and movement intervals. The opposite pattern was seen for the type-of-movement regression slopes.
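A toy version of this kind of epoch-wise multiple regression, with NumPy on synthetic trials. The variable choices and effect sizes below are invented for illustration; only the regression structure mirrors the analysis described above:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 200

# Invented trial variables: expected gain (juice volume) and movement type.
gain = rng.uniform(0.1, 0.5, n_trials)      # expected gain per trial
movement = rng.integers(0, 2, n_trials)     # 0 = down saccade, 1 = up
# Firing rate in one 200ms epoch, built with known slopes (20, 40, 2).
rate = 20.0 + 40.0 * gain + 2.0 * movement + rng.standard_normal(n_trials)

# Multiple regression: rate ~ intercept + gain + movement.
X = np.column_stack([np.ones(n_trials), gain, movement.astype(float)])
beta, _, _, _ = np.linalg.lstsq(X, rate, rcond=None)
print(beta)  # recovers roughly [20, 40, 2]
```

The slope on `gain` (beta[1]) here plays the role of the per-neuron regression slope that the paper averages across the ensemble for each epoch.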

Experiment 2

Similar analysis was done to evaluate the neural correlates of the subjective estimate of reward. Results show activity level correlates with the perceived gain associated with the target location within the neuron's receptive field. The time evolution of the regression slopes shows a similar decay as before.

Consistent with Herrnstein's matching law for choice behavior, there was a linear relationship between the proportion of trials on which the animal chose the target inside the response field and the proportion of total juice available for gaze shifts to that target.

Animal's estimate of the relative value of the two choices is correlated with the activation of intraparietal neurons.

Discussions

  1. It would seem obvious that there are neural correlates for decision variables, rather than the traditional sensory-motor reflex decision-making framework. But this may be solely in the context of investigating sensory areas..probably have to read Sherrington to know how he reached that conclusion.

  2. The data in this paper can be hard to reconcile with traditional sensory-attentional models, which would attribute modulations in the activity of visually responsive neurons to changes in the behavioral relevance of visual stimuli, and would then argue that the authors' manipulations of expected gain and outcome probability merely altered the visual activity of intraparietal neurons. Explanations include:

    • Previous work show intraparietal neurons are insensitive to changes in the behavioral relevance of either presented stimulus when it is not the target of a saccade.

    • Experiment 2 showed that intraparietal neuron activity was modulated by changes in the estimated value of each response during the fixation intervals BEFORE the onset of response targets.

  3. Modulations by decision variables occur early in the trial; late in the trial, activity is more strongly modulated by the movement plan. Suggests intraparietal cortex lies close to the motor output stages of the sensory-decision-motor process. Prefrontal cortex perhaps has representation of decision variables throughout the trial.