Publications by authors named "Vasilisa Skvortsova"

7 Publications


Cerebellar anaplastic astrocytoma in adult patients: 15 consecutive cases from a single institution and literature review.

J Clin Neurosci 2021 Sep 22;91:249-254. Epub 2021 Jul 22.

Burdenko Neurosurgery Center, Moscow, Russian Federation.

Adult cerebellar anaplastic astrocytomas (cAA) are rare entities, and their clinical and genetic features are still ill defined. Previously, malignant gliomas of the cerebellum (cAA and cerebellar glioblastomas, cGB) were reviewed together, which may have affected estimates of overall survival (OS) and progression-free survival (PFS). We present the characteristics of 15 adult patients with cAA and compare them with a series of 45 patients with supratentorial AA (sAA) in order to assess the effect of tumor location on OS and PFS. The mean age at cAA diagnosis was 39.3 years (range 19-72). A history of neurofibromatosis type I was noted in 1 patient (6.7%). An IDH-1 mutation was identified in 6/15 cases and a methylated MGMT promoter in 5/15 cases. Patients in the study and control groups were matched for age, sex, and IDH-1 mutation status. Patients in the study group tended to have longer overall survival (50 vs. 36.5 months), but the difference did not reach statistical significance. In both the cAA and sAA groups, the presence of an IDH-1 mutation remained a positive predictor of prolonged survival. The present study suggests that adult cAA constitute a group of gliomas with a relatively high rate of IDH-1 mutations and a prognosis similar to that of sAA. It is the first study to systematically compare cAA and sAA with respect to their genetic characteristics, and it suggests that both groups show a similar survival prognosis.
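
The group comparison summarized above (longer but not significantly different overall survival in the cAA group) is the kind of analysis usually carried out with Kaplan-Meier estimates and a log-rank test. The sketch below illustrates that general approach with the lifelines package; the simulated survival times, event indicators, and variable names are illustrative assumptions, not the study's data or code.

```python
# Minimal sketch of a Kaplan-Meier / log-rank survival comparison.
# All numbers and variable names are illustrative assumptions.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical overall-survival times (months) and event indicators
# (1 = death observed, 0 = censored) for the two groups.
os_caa = rng.exponential(scale=50.0, size=15)   # cerebellar AA group
os_saa = rng.exponential(scale=36.5, size=45)   # supratentorial AA group
events_caa = rng.integers(0, 2, size=15)
events_saa = rng.integers(0, 2, size=45)

# Kaplan-Meier estimate for one group.
km = KaplanMeierFitter()
km.fit(os_caa, event_observed=events_caa, label="cAA")
print("median OS (cAA):", km.median_survival_time_)

# Log-rank test for a difference in overall survival between groups.
result = logrank_test(os_caa, os_saa,
                      event_observed_A=events_caa,
                      event_observed_B=events_saa)
print("log-rank p-value:", result.p_value)
```
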
Source: http://dx.doi.org/10.1016/j.jocn.2021.07.010
September 2021

Obsessive-compulsive symptoms and information seeking during the Covid-19 pandemic.

Transl Psychiatry 2021 05 21;11(1):309. Epub 2021 May 21.

Max Planck UCL Centre for Computational Psychiatry and Ageing Research, London, UK.

Increased mental-health symptoms as a reaction to stressful life events, such as the Covid-19 pandemic, are common. Critically, successful adaptation helps to reduce such symptoms to baseline, preventing long-term psychiatric disorders. It is thus important to understand whether and which psychiatric symptoms show transient elevations, and which persist long-term and become chronically heightened. At particular risk for the latter trajectory are symptom dimensions directly affected by the pandemic, such as obsessive-compulsive (OC) symptoms. In this large-scale longitudinal study (N = 406), we assessed how OC, anxiety, and depression symptoms changed throughout the first pandemic wave in a sample of the general UK public. We further examined how these symptoms affected pandemic-related information seeking and adherence to governmental guidelines. We show that scores in all psychiatric domains were initially elevated but followed distinct longitudinal trajectories. Depression scores decreased and anxiety scores plateaued during the first pandemic wave, whereas OC symptoms increased further, even after the easing of Covid-19 restrictions. These OC symptoms were directly linked to Covid-related information seeking, which in turn gave rise to higher adherence to governmental guidelines. This increase in OC symptoms in a non-clinical sample shows that the domain is disproportionately affected by the pandemic. We discuss the long-term impact of the Covid-19 pandemic on public mental health, which calls for continued close observation of symptom development.
Source: http://dx.doi.org/10.1038/s41398-021-01410-x
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8138954
May 2021

A Causal Role for the Pedunculopontine Nucleus in Human Instrumental Learning.

Curr Biol 2021 03 21;31(5):943-954.e5. Epub 2020 Dec 21.

Motivation, Brain and Behavior (MBB) laboratory, Paris Brain Institute (ICM), Groupe Hospitalier Pitié-Salpêtrière, Paris 75013, France; INSERM Unit 1127, CNRS Unit 7225, Sorbonne Universités (SU), Paris 75005, France.

A critical mechanism for maximizing reward is instrumental learning. In standard instrumental learning models, action values are updated on the basis of reward prediction errors (RPEs), defined as the discrepancy between expectations and outcomes. A wealth of evidence across species and experimental techniques has established that RPEs are signaled by midbrain dopamine neurons. However, the way dopamine neurons receive information about reward outcomes remains poorly understood. Recent animal studies suggest that the pedunculopontine nucleus (PPN), a small brainstem structure traditionally considered a locomotor center, is sensitive to reward and sends excitatory projections to dopaminergic nuclei. Here, we examined the hypothesis that the PPN could contribute to reward learning in humans. To this end, we leveraged a clinical protocol that assessed the therapeutic impact of PPN deep-brain stimulation (DBS) in three patients with Parkinson's disease. PPN local field potentials (LFPs), recorded while patients performed an instrumental learning task, showed a specific response to reward outcomes in a low-frequency (alpha-beta) band. Moreover, PPN DBS selectively improved learning from rewards but not from punishments, a pattern typically observed following dopaminergic treatment. Computational analyses indicated that the effect of PPN DBS on instrumental learning was best captured by an increase in subjective reward sensitivity. Taken together, these results support a causal role for PPN-mediated reward signals in human instrumental learning.
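
As a reference point for the "standard instrumental learning model" the abstract mentions, the sketch below implements a generic Q-learning rule with a softmax policy and a subjective reward-sensitivity parameter (the quantity the computational analyses found to increase under PPN DBS). All parameter names and values are illustrative assumptions, not the authors' fitted model.

```python
# Generic RPE-based Q-learning agent with a reward-sensitivity parameter.
# Illustrative sketch only; parameters and task are assumptions.
import numpy as np

def softmax(q, beta):
    """Choice probabilities from action values (inverse temperature beta)."""
    z = beta * (q - q.max())
    p = np.exp(z)
    return p / p.sum()

def run_agent(rewards, alpha=0.3, beta=3.0, rho=1.0, seed=0):
    """Simulate one two-armed learning session.

    rewards : (n_trials, 2) array of outcomes for each option
    alpha   : learning rate
    beta    : softmax inverse temperature
    rho     : subjective reward sensitivity (scales the outcome); the paper's
              finding corresponds to rho increasing under PPN DBS
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                      # action values
    choices = np.empty(len(rewards), dtype=int)
    for t, r in enumerate(rewards):
        p = softmax(q, beta)
        c = rng.choice(2, p=p)           # sample an action
        rpe = rho * r[c] - q[c]          # reward prediction error
        q[c] += alpha * rpe              # value update
        choices[t] = c
    return choices, q

# Toy session: option 1 pays off more often than option 0.
outcomes = np.random.default_rng(1).binomial(1, [0.25, 0.75], size=(100, 2))
print(run_agent(outcomes.astype(float), rho=1.5)[1])
```
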
Source: http://dx.doi.org/10.1016/j.cub.2020.11.042
March 2021

Computational noise in reward-guided learning drives behavioral variability in volatile environments.

Nat Neurosci 2019 12 28;22(12):2066-2077. Epub 2019 Oct 28.

Laboratoire de Neurosciences Cognitives et Computationnelles, Inserm U960, Département d'Études Cognitives, École Normale Supérieure, PSL University, Paris, France.

When learning the value of actions in volatile environments, humans often make seemingly irrational decisions that fail to maximize expected value. We reasoned that these 'non-greedy' decisions, instead of reflecting information seeking during choice, may be caused by computational noise in the learning of action values. Here, using reinforcement learning models of behavior and multimodal neurophysiological data, we show that the majority of non-greedy decisions stem from this learning noise. The trial-to-trial variability of sequential learning steps and their impact on behavior could be predicted both by blood oxygen level-dependent responses to obtained rewards in the dorsal anterior cingulate cortex and by phasic pupillary dilation, suggestive of neuromodulatory fluctuations driven by the locus coeruleus-norepinephrine system. Together, these findings indicate that most behavioral variability, rather than reflecting human exploration, is due to the limited computational precision of reward-guided learning.
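
A minimal sketch of the "learning noise" idea, assuming a Weber-like scaling in which each value update is corrupted by zero-mean noise proportional to the size of the update; the parameter names and values are illustrative, not the authors' fitted model.

```python
# Toy illustration of computational noise in value learning.
import numpy as np

def noisy_update(q, reward, alpha=0.4, zeta=0.5, rng=None):
    """One noisy value update for the chosen option.

    q      : current value of the chosen option
    reward : observed outcome
    alpha  : learning rate
    zeta   : Weber fraction controlling the amount of learning noise
    """
    rng = rng or np.random.default_rng()
    delta = reward - q                       # prediction error
    exact_step = alpha * delta               # noiseless update
    noise_sd = zeta * abs(exact_step)        # noise scales with the update
    return q + exact_step + rng.normal(0.0, noise_sd)

# With noise, repeating the same learning step yields variable values,
# and hence variable ("non-greedy") choices downstream.
rng = np.random.default_rng(0)
print([round(noisy_update(0.2, 1.0, rng=rng), 3) for _ in range(5)])
```
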
Source: http://dx.doi.org/10.1038/s41593-019-0518-9
December 2019

How context alters value: The brain's valuation and affective regulation system link price cues to experienced taste pleasantness.

Sci Rep 2017 08 14;7(1):8098. Epub 2017 Aug 14.

INSERM, U960 Laboratoire de Neuroscience Cognitive, Economic Decision-Making Group, Ecole Normale Supérieure, 75005, Paris, France.

Informational cues such as the price of a wine can trigger expectations about its taste quality and thereby modulate the sensory experience at both the reported and neural levels. Yet it is unclear how the brain translates such expectations into sensory pleasantness. We used a whole-brain multilevel mediation approach with healthy participants who tasted identical wines cued with different prices while their brains were scanned using fMRI. We found that the brain's valuation system (BVS), in concert with the anterior prefrontal cortex, played a key role in implementing the effect of price cues on taste pleasantness ratings. The sensitivity of the BVS to monetary rewards outside the taste domain moderated the strength of these effects. These findings provide novel evidence for the fundamental role that neural pathways linked to motivation and affective regulation play in the effect of informational cues on sensory experiences.
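
For readers unfamiliar with mediation analysis, the sketch below shows the single-mediator logic (price cue -> brain response -> pleasantness rating) that the whole-brain multilevel approach generalizes; the simulated data, effect sizes, and variable names are purely illustrative assumptions, not the study's data or pipeline.

```python
# Single-mediator path analysis on simulated data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 200
price_cue = rng.integers(0, 2, n).astype(float)        # low vs. high price cue
brain = 0.8 * price_cue + rng.normal(0, 1, n)          # hypothetical BVS signal
pleasantness = 0.6 * brain + 0.1 * price_cue + rng.normal(0, 1, n)

def ols(y, X):
    """Ordinary least squares coefficients (with an intercept column)."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(brain, [price_cue])[1]                # path a: cue -> mediator
b = ols(pleasantness, [brain, price_cue])[1]  # path b: mediator -> outcome, cue controlled
c = ols(pleasantness, [price_cue])[1]         # total effect of the cue
print(f"indirect (a*b) = {a * b:.3f}, total = {c:.3f}")
```
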
Source: http://dx.doi.org/10.1038/s41598-017-08080-0
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5556089
August 2017

A Selective Role for Dopamine in Learning to Maximize Reward But Not to Minimize Effort: Evidence from Patients with Parkinson's Disease.

J Neurosci 2017 06 24;37(25):6087-6097. Epub 2017 May 24.

Motivation, Brain and Behavior Laboratory, Brain and Spine Institute, Paris, 75013, France.

Instrumental learning is a fundamental process through which agents optimize their choices, taking into account various dimensions of the available options, such as the possible reward or punishment outcomes and the costs associated with potential actions. Although the implication of dopamine in learning from choice outcomes is well established, less is known about its role in learning action costs such as effort. Here, we tested the ability of patients with Parkinson's disease (PD) to maximize monetary rewards and minimize physical efforts in a probabilistic instrumental learning task. The implication of dopamine was assessed by comparing performance ON and OFF prodopaminergic medication. In a first sample of PD patients (n = 15), we observed that reward learning, but not effort learning, was selectively impaired in the absence of treatment, with a significant interaction between learning condition (reward vs effort) and medication status (OFF vs ON). These results were replicated in a second, independent sample of PD patients (n = 20) using a simplified version of the task. According to Bayesian model selection, the best account of medication effects in both studies was a specific amplification of reward magnitude in a Q-learning algorithm. These results suggest that learning to avoid physical effort is independent of dopaminergic circuits and strengthen the general idea that dopaminergic signaling amplifies the effects of reward expectation or obtainment on instrumental behavior.

Theoretically, maximizing reward and minimizing effort could involve the same computations and therefore rely on the same brain circuits. Here, we tested whether dopamine, a key component of reward-related circuitry, is also implicated in effort learning. We found that patients suffering from dopamine depletion due to Parkinson's disease were selectively impaired in reward learning, but not effort learning. Moreover, anti-parkinsonian medication restored the ability to maximize reward but had no effect on effort minimization. This dissociation suggests that the brain has evolved separate, domain-specific systems for instrumental learning. These results help to disambiguate the motivational role of prodopaminergic medications: they amplify the impact of reward without affecting the integration of effort cost.
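
A minimal sketch of the "reward magnitude amplification" account selected by Bayesian model comparison: a Q-learning update in which medication scales reward outcomes but leaves effort costs untouched. The gain parameter and its value are illustrative assumptions, not the fitted model from the paper.

```python
# Toy Q-learning update where medication amplifies reward magnitude only.
def q_update(q, outcome, condition, alpha=0.3, reward_gain_on=1.5, on_meds=True):
    """Update the value of the chosen option.

    outcome   : reward magnitude (reward condition) or effort cost
                (effort condition), both given as positive numbers
    condition : "reward" or "effort"
    """
    if condition == "reward":
        gain = reward_gain_on if on_meds else 1.0
        target = gain * outcome          # medication amplifies reward magnitude
    else:
        target = -outcome                # effort enters as a cost, unaffected by medication
    return q + alpha * (target - q)

# Same outcome yields different updates ON vs OFF medication in the reward
# condition, but identical updates in the effort condition.
print(q_update(0.0, 1.0, "reward", on_meds=True), q_update(0.0, 1.0, "reward", on_meds=False))
print(q_update(0.0, 1.0, "effort", on_meds=True), q_update(0.0, 1.0, "effort", on_meds=False))
```
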
Source: http://dx.doi.org/10.1523/JNEUROSCI.2081-16.2017
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6596498
June 2017

Learning to minimize efforts versus maximizing rewards: computational principles and neural correlates.

J Neurosci 2014 Nov;34(47):15621-30

Motivation, Brain and Behavior Laboratory, Neuroimaging Research Center, Brain and Spine Institute, INSERM U975, CNRS UMR 7225, UPMC-P6 UMR S 1127, 7561 Paris Cedex 13, France.

The mechanisms of reward maximization have been extensively studied at both the computational and neural levels. By contrast, little is known about how the brain learns to choose the options that minimize action cost. In principle, the brain could have evolved a general mechanism that applies the same learning rule to the different dimensions of choice options. To test this hypothesis, we scanned healthy human volunteers while they performed a probabilistic instrumental learning task that varied in both the physical effort and the monetary outcome associated with choice options. Behavioral data showed that the same computational rule, using prediction errors to update expectations, could account for both reward maximization and effort minimization. However, these learning-related variables were encoded in partially dissociable brain areas. In line with previous findings, the ventromedial prefrontal cortex was found to positively represent expected and actual rewards, regardless of effort. A separate network, encompassing the anterior insula, the dorsal anterior cingulate, and the posterior parietal cortex, correlated positively with expected and actual efforts. These findings suggest that the same computational rule is applied by distinct brain systems, depending on the choice dimension, cost or benefit, that has to be learned.
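
A minimal sketch of the shared computational rule described above: the same prediction-error update applied separately to the reward and effort expectations of each option, with choices trading off the two learned dimensions. Names, outcome probabilities, and parameters are illustrative assumptions, not the study's task or fitted model.

```python
# One delta rule, two choice dimensions: reward and effort.
import numpy as np

def update(expectation, outcome, alpha=0.3):
    """Generic delta rule: move the expectation toward the observed outcome."""
    return expectation + alpha * (outcome - expectation)

rng = np.random.default_rng(0)
reward_exp = np.zeros(2)   # expected monetary reward per option
effort_exp = np.zeros(2)   # expected physical effort per option

for _ in range(100):
    # Net value combines the two learned dimensions (benefit minus cost).
    net = reward_exp - effort_exp
    p = np.exp(net) / np.exp(net).sum()          # softmax choice
    c = rng.choice(2, p=p)
    # Hypothetical task: option 1 is more rewarding but also more effortful.
    reward = float(rng.random() < (0.3, 0.8)[c])
    effort = float(rng.random() < (0.2, 0.6)[c])
    reward_exp[c] = update(reward_exp[c], reward)
    effort_exp[c] = update(effort_exp[c], effort)

print(reward_exp.round(2), effort_exp.round(2))
```
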
Source: http://dx.doi.org/10.1523/JNEUROSCI.1350-14.2014
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6608437
November 2014