
In the first talk of this afternoon session at the Winter Conference on Brain Research (WCBR), Jeff Beeler showed data that hyperdopaminergic dopamine transporter knock-down (DATkd) mice (which have increased dopamine levels) don't distinguish between two levers where one gives a food reward after a few presses and the other requires many presses. Wild-type mice, not surprisingly, prefer pressing the 'easy' lever. However, if the levers are switched, the DATkd mice show that they can in fact tell the easy and the difficult lever apart. Instead, dopamine seems to affect the mice's general motivation to press the levers. Interestingly, these mice were all tested in their home cages, and the food reward they obtain by pressing the levers is the only food they get. This difference in motivation is expressed in DATkd mice eating fewer meals per day, but these meals are larger than those of wild-type mice. These differences in meal patterns cancel each other out, such that total energy consumption is about equal between the two strains. In conclusion, the data presented here suggest a hypothesis according to which increased dopamine levels favor energy expenditure and shape how this expenditure is distributed. It is the latter point where value comes in, the factor with which dopamine had initially been associated. At the very end, he showed a Drosophila ADHD model: hyperdopaminergic fmn mutant flies.

Next up was Sean Ostlund, a former graduate student of my colleague and friend Bernard Balleine. He started his talk by showing examples of motivating cues influencing behavior. Sean uses instrumental conditioning to get animals to learn the incentive value of a cue: rats press levers for a food reward in the presence of a second cue (in addition to the lever). Because operant and classical processes are confounded during such learning, Sean also uses Pavlovian-instrumental transfer: first he pairs a cue (a tone) with a food reward and then tests how often the rat will press the lever in the presence of that Pavlovian cue. Dopamine receptor blockade abolishes this transfer effect. It gets a little more complicated when they use two cues (tone or light) with two rewards (sugar or food) that had previously been associated with the left or the right lever, respectively. Upon presentation of each cue, the animals then go to the appropriate lever. Blocking dopamine transmission using flupenthixol (a dopamine receptor antagonist) resulted in the rats pressing both levers very little, but still differentiating between them. So dopamine is involved in cue-dependent action invigoration in this paradigm, but not in action selection. Or, as Sean puts it, dopamine provides the 'push' but doesn't 'steer' the behavior, at least in these experiments.

The third speaker was the organizer of this session, Paul Phillips. He started by explaining the difference between model-based and model-free learning and went on to show an instance of model-free learning, the Rescorla-Wagner model. Dopaminergic neurons are known to fire according to the prediction error in this model: whenever the reward exceeds expectation, dopaminergic neurons fire more; when it falls short, they fire less. After these classical experiments, he went on to show operant experiments in which rats had to press one of two levers yielding either a large or a small reward. Pressing the lever for the large reward leads to more dopamine release than pressing for the small reward. In the next experiment, one lever provides water and the other food. Selective satiation (on either food or water) then biases lever-pressing towards the lever providing the non-devalued reward. Dopamine release follows the bias of the animal: the devalued lever leads to less release than the non-devalued one. In my eyes, he showed two different experiments (operant and classical) but tested the same thing, dopamine responses to cues: explicit cues in one case and the lever in the other. No surprise they don't find any difference. He then showed some more data on dopamine invigorating responding when the animals had access to only one of the levers, testing whether or not the animal would decline to press it. In his model, action selection happens in prefrontal areas, and the ventral tegmental area provides dopamine to the nucleus accumbens, to which both systems project. Dopamine there invigorates whatever action the selection system has chosen. This model acknowledges my comment above that dopamine is not directly involved in the operant (action-selection) process, but rather in the process of attributing value to external stimuli. In that way, this talk dovetailed nicely with Sean's.
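The prediction-error logic behind the Rescorla-Wagner model can be sketched in a few lines of Python. This is only a minimal illustration with arbitrary parameter values (learning rate, trial count), not anything taken from the talk:

```python
# Rescorla-Wagner update: a minimal sketch with hypothetical parameters.
# V is the learned associative strength (reward prediction) of a cue;
# each trial's prediction error (reward - V) updates V by a fraction
# alpha (the learning rate).

def rescorla_wagner(rewards, alpha=0.1, v0=0.0):
    """Return (prediction V, prediction error) after each trial."""
    v = v0
    history = []
    for r in rewards:
        delta = r - v       # prediction error: dopamine firing is thought
                            # to track this quantity (positive -> more firing)
        v += alpha * delta
        history.append((v, delta))
    return history

# Example: a cue reliably followed by reward (r = 1) on every trial.
trace = rescorla_wagner([1.0] * 20)
# Early trials: large positive error (reward better than predicted).
# Late trials: error approaches zero (reward fully predicted), matching
# the observation that dopaminergic firing to a fully predicted reward wanes.
```

On this account, the phasic dopamine signal corresponds to `delta`, not to `v` itself, which is why an unexpected reward drives strong firing while the same reward, once predicted, does not.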

The final speaker of this so far excellent session was Saleem Nicola. He started by showing that high-effort tasks are disrupted more by dopamine depletion than low-effort tasks. He went on to an experiment using an FR8 task (eight lever presses yield one food reward) in which he looks at behavior during the inter-press intervals in the presence and absence of dopamine antagonists. What he found was that intervals in which the animals move more than 4 cm away from the lever become longer after administration of a dopamine antagonist. He then looked at the interval from pressing the lever to moving towards the reward receptacle, and at the much longer interval between collecting the reward and the next lever press. The first interval was unaffected by dopamine antagonists; the second was increased. Two more experiments ruled out a simple dopamine-effort relation. From these experiments he deduced the hypothesis that dopamine receptor activation in the nucleus accumbens core is required for 'flexible approach', i.e., when different actions are required to reach the reward. Recording from neurons in the accumbens (onto which dopaminergic neurons project), he found that their firing encoded future movement latency: high-frequency firing preceded fast approach initiation, while low-frequency firing predicted slow (long-latency) approach. His conclusion was that dopamine invigorates reward-seeking by enabling accumbens neurons to encode reward prediction, and this encoding drives short-latency flexible approach behavior.
Posted on Wednesday 25 January 2012 - 02:38:15 comment: 0
