Publications by authors named "Leila Azizi"

7 Publications


Prestimulus Alpha Oscillations and the Temporal Sequencing of Audiovisual Events.

J Cogn Neurosci 2017 Sep 11;29(9):1566-1582. Epub 2017 May 11.

Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin center, 91191 Gif/Yvette, France.

Perceiving the temporal order of sensory events typically depends on participants' attentional state, thus likely on the endogenous fluctuations of brain activity. Using magnetoencephalography, we sought to determine whether spontaneous brain oscillations could disambiguate the perceived order of auditory and visual events presented in close temporal proximity, that is, at the individual's perceptual order threshold (Point of Subjective Simultaneity [PSS]). Two neural responses were found to index an individual's temporal order perception when contrasting brain activity as a function of perceived order (i.e., perceiving the sound first vs. perceiving the visual event first) given the same physical audiovisual sequence. First, average differences in prestimulus auditory alpha power indicated perceiving the correct ordering of audiovisual events irrespective of which sensory modality came first: a relatively low alpha power indicated perceiving auditory or visual first as a function of the actual sequence order. Additionally, the relative changes in the amplitude of the auditory (but not visual) evoked responses were correlated with participants' correct performance. Crucially, the sign of the magnitude difference in prestimulus alpha power and evoked responses between perceived audiovisual orders correlated with an individual's PSS. Taken together, our results suggest that spontaneous oscillatory activity cannot disambiguate subjective temporal order without prior knowledge of the individual's bias toward perceiving one or the other sensory modality first. Altogether, our results suggest that, under high perceptual uncertainty, the magnitude of prestimulus alpha (de)synchronization indicates the amount of compensation needed to overcome an individual's prior in the serial ordering and temporal sequencing of information.
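The PSS mentioned in the abstract is conventionally estimated by fitting a psychometric function to temporal-order judgments across stimulus onset asynchronies (SOAs). The sketch below is a generic illustration with made-up data, not the authors' analysis: it grid-fits a cumulative logistic to the proportion of "visual first" reports and reads the PSS off the 50% point.

```python
import math

# Hypothetical temporal-order-judgment data (not from the paper).
# soa_ms < 0: audio leads; soa_ms > 0: visual leads.
soa_ms = [-120, -80, -40, 0, 40, 80, 120]
p_visual_first = [0.05, 0.12, 0.30, 0.55, 0.78, 0.93, 0.97]

def logistic(x, pss, slope):
    """Cumulative logistic psychometric function; PSS is its 50% point."""
    return 1.0 / (1.0 + math.exp(-(x - pss) / slope))

def fit_pss(soas, probs):
    """Coarse grid search for the (pss, slope) minimizing squared error."""
    best = (float("inf"), None, None)
    for pss in range(-100, 101):
        for slope in range(5, 101):
            err = sum((logistic(x, pss, slope) - p) ** 2
                      for x, p in zip(soas, probs))
            if err < best[0]:
                best = (err, pss, slope)
    return best[1], best[2]

pss, slope = fit_pss(soa_ms, p_visual_first)
print(f"Estimated PSS: {pss} ms")
```

With these toy data the fitted PSS comes out slightly negative, i.e., the sound must lead by a few tens of milliseconds for the observer to report simultaneity.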
http://dx.doi.org/10.1162/jocn_a_01145
September 2017

High-frequency neural activity predicts word parsing in ambiguous speech streams.

J Neurophysiol 2016 12 7;116(6):2497-2512. Epub 2016 Sep 7.

Cognitive Neuroimaging Unit, CEA DRF/I2BM, Institut National de la Santé et de la Recherche Médicale, Université Paris-Sud, Université Paris-Saclay, Gif/Yvette, France.

During speech listening, the brain parses a continuous acoustic stream of information into computational units (e.g., syllables or words) necessary for speech comprehension. Recent neuroscientific hypotheses have proposed that neural oscillations contribute to speech parsing, but whether they do so on the basis of acoustic cues (bottom-up acoustic parsing) or as a function of available linguistic representations (top-down linguistic parsing) is unknown. In this magnetoencephalography study, we contrasted acoustic and linguistic parsing using bistable speech sequences. While listening to the speech sequences, participants were asked to maintain one of the two possible speech percepts through volitional control. We predicted that the tracking of speech dynamics by neural oscillations would not only follow the acoustic properties but also shift in time according to the participant's conscious speech percept. Our results show that the latency of high-frequency activity (specifically, beta and gamma bands) varied as a function of the perceptual report. In contrast, the phase of low-frequency oscillations was not strongly affected by top-down control. Whereas changes in low-frequency neural oscillations were compatible with the encoding of prelexical segmentation cues, high-frequency activity specifically informed on an individual's conscious speech percept.
http://dx.doi.org/10.1152/jn.00074.2016
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5133297
December 2016

Disruption of hierarchical predictive coding during sleep.

Proc Natl Acad Sci U S A 2015 Mar 3;112(11):E1353-62. Epub 2015 Mar 3.

Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, U992, F-91191 Gif/Yvette, France; NeuroSpin Center, Institute of BioImaging, Commissariat à l'Energie Atomique, F-91191 Gif/Yvette, France; Department of Life Sciences, Université Paris 11, 91400 Orsay, France; and Chair of Experimental Cognitive Psychology, Collège de France, F-75005 Paris, France

When presented with an auditory sequence, the brain acts as a predictive-coding device that extracts regularities in the transition probabilities between sounds and detects unexpected deviations from these regularities. Does such prediction require conscious vigilance, or does it continue to unfold automatically in the sleeping brain? The mismatch negativity and P300 components of the auditory event-related potential, reflecting two steps of auditory novelty detection, have been inconsistently observed in the various sleep stages. To clarify whether these steps remain during sleep, we recorded simultaneous electroencephalographic and magnetoencephalographic signals during wakefulness and during sleep in normal subjects listening to a hierarchical auditory paradigm including short-term (local) and long-term (global) regularities. The global response, reflected in the P300, vanished during sleep, in line with the hypothesis that it is a correlate of high-level conscious error detection. The local mismatch response remained across all sleep stages (N1, N2, and REM sleep), but with an incomplete structure; compared with wakefulness, a specific peak reflecting prediction error vanished during sleep. Those results indicate that sleep leaves initial auditory processing and passive sensory response adaptation intact, but specifically disrupts both short-term and long-term auditory predictive coding.
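The "local-global" paradigm described above nests two levels of regularity: a rare fifth tone violates the short-term (local) rule within a sequence, while a rare sequence type violates the long-term (global) rule established over a block. The sketch below is an assumed reconstruction of the stimulus structure, not the authors' code; tone labels, block length, and deviant probability are illustrative.

```python
import random

def make_block(frequent, rare, n_trials=100, p_rare=0.2, seed=0):
    """Return a list of 5-tone sequences; ~p_rare of them follow `rare`."""
    rng = random.Random(seed)  # seeded for a reproducible block
    return [rare if rng.random() < p_rare else frequent
            for _ in range(n_trials)]

x, y = "A", "B"                      # two distinguishable tones
local_deviant  = [x, x, x, x, y]     # 5th tone differs: local deviance
local_standard = [x, x, x, x, x]     # no local change

# Block in which xxxxY is frequent: the locally *standard* xxxxx sequence
# becomes the rare, globally deviant event (the one indexed by the P300).
block = make_block(frequent=local_deviant, rare=local_standard)
n_global_deviants = sum(seq == local_standard for seq in block)
print(f"{n_global_deviants} global deviants out of {len(block)} trials")
```

The key design point is the dissociation: in this block a physically unchanging sequence (xxxxx) is the surprising event at the global level, so local and global novelty responses can be separated.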
http://dx.doi.org/10.1073/pnas.1501026112
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4371991
March 2015

Inhibition and structural changes of liver alkaline phosphatase by tramadol.

Drug Metab Lett 2014 ;8(2):129-34

BioResearch Lab, Faculty of Biological Sciences, Shahid Beheshti University, G.C., Iran.

Tramadol is a potent analgesic drug that interacts with mu-opioid receptors and has little effect on other opioid receptors. Unlike other opioids, it has no clinically significant effect on respiratory or cardiovascular parameters. Alkaline phosphatase is a hydrolase enzyme that prefers alkaline conditions and removes phosphate groups from different substrates. In this study, the interaction between tramadol and calf liver alkaline phosphatase was investigated. The results showed that tramadol can bind to alkaline phosphatase and inhibit the enzyme in an uncompetitive manner. Ki and IC50 values of tramadol were determined as about 91 and 92 μM, respectively. After enzyme purification, structural changes upon alkaline phosphatase-drug interaction were studied by circular dichroism and fluorescence measurements. These data revealed alterations in secondary-structure content, as well as conformational changes in the enzyme, when the drug bound to the enzyme-substrate complex.
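Uncompetitive inhibition, as reported here, means the inhibitor binds only the enzyme-substrate complex, giving the rate law v = Vmax·[S] / (Km + [S]·(1 + [I]/Ki)). The sketch below illustrates this model with the reported Ki of ~91 μM; Vmax, Km, and the substrate concentration are hypothetical values, not data from the paper.

```python
def uncompetitive_rate(s, inhibitor, vmax=1.0, km=50.0, ki=91.0):
    """Michaelis-Menten rate with an uncompetitive inhibitor.

    s, inhibitor, km, ki in uM; ki=91 uM as reported for tramadol,
    vmax and km are illustrative placeholders.
    """
    return vmax * s / (km + s * (1.0 + inhibitor / ki))

s = 200.0                            # hypothetical substrate conc., uM
v0 = uncompetitive_rate(s, 0.0)      # uninhibited rate
vi = uncompetitive_rate(s, 91.0)     # rate at [I] = Ki
print(f"rate drops from {v0:.3f} to {vi:.3f} at [I] = Ki")
```

A diagnostic of the uncompetitive mechanism follows directly from the formula: the inhibitor term multiplies [S], so both apparent Vmax and apparent Km decrease by the same factor (1 + [I]/Ki), producing parallel lines in a Lineweaver-Burk plot.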
http://dx.doi.org/10.2174/1872312808666140506093756
October 2015

Supramodal processing optimizes visual perceptual learning and plasticity.

Neuroimage 2014 Jun 22;93 Pt 1:32-46. Epub 2014 Feb 22.

CEA, DSV/I2BM, NeuroSpin Center, F-91191 Gif-sur-Yvette, France; INSERM, U992, Cognitive Neuroimaging Unit, F-91191 Gif-sur-Yvette, France; Univ Paris-Sud, Cognitive Neuroimaging Unit, F-91191 Gif-sur-Yvette, France. Electronic address:

Multisensory interactions are ubiquitous in cortex, and it has been suggested that sensory cortices may be supramodal, i.e., capable of functional selectivity irrespective of the sensory modality of inputs (Pascual-Leone and Hamilton, 2001; Renier et al., 2013; Ricciardi and Pietrini, 2011; Voss and Zatorre, 2012). Here, we asked whether learning to discriminate visual coherence could benefit from supramodal processing. To this end, three groups of participants were briefly trained to discriminate which of a red or green intermixed population of random-dot kinematograms (RDKs) was most coherent in a visual display while being recorded with magnetoencephalography (MEG). During training, participants heard no sound (V), congruent acoustic textures (AV), or auditory noise (AVn); importantly, congruent acoustic textures shared the temporal statistics - i.e., coherence - of the visual RDKs. After training, the AV group significantly outperformed participants trained in V and AVn, although they were not aware of their progress. In pre- and post-training blocks, all participants were tested without sound and with the same set of RDKs. When contrasting MEG data collected in these experimental blocks, selective differences were observed in the dynamic pattern and the cortical loci responsive to visual RDKs. First, and common to all three groups, vlPFC showed selectivity to the learned coherence levels, whereas selectivity in visual motion area hMT+ was only seen for the AV group. Second, and solely for the AV group, activity in multisensory cortices (mSTS, pSTS) correlated with post-training performance; additionally, the latencies of these effects suggested feedback from vlPFC to hMT+, possibly mediated by temporal cortices in the AV and AVn groups.
Altogether, we interpret our results in the context of the Reverse Hierarchy Theory of learning (Ahissar and Hochstein, 2004) in which supramodal processing optimizes visual perceptual learning by capitalizing on sensory-invariant representations - here, global coherence levels across sensory modalities.
http://dx.doi.org/10.1016/j.neuroimage.2014.02.017
June 2014

Time-resolved diffusing wave spectroscopy with a CCD camera.

Opt Express 2010 Aug;18(16):16289-301

Laboratoire de Physique des Lasers, Institut Galilée, Université Paris 13, 99 Ave JB Clément, F-93430 Villetaneuse, France.

We show how time-resolved measurements of the diffuse light transmitted through a thick scattering slab can be performed with a standard CCD camera, thanks to an interferometric protocol. Time-resolved correlations measured at a fixed photon transit time are also presented. The high number of pixels of the camera allows us to attain good sensitivity within a reasonably short acquisition time.
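The quantity a CCD-based diffusing-wave-spectroscopy measurement ultimately averages over pixels is an intensity correlation between speckle frames. The sketch below is a minimal, assumed illustration of that statistic (not the authors' interferometric pipeline): it computes the normalized correlation g2 = ⟨Ia·Ib⟩ / (⟨Ia⟩⟨Ib⟩) over toy pixel intensities.

```python
def g2(frame_a, frame_b):
    """Normalized intensity correlation over pixels: <Ia*Ib>/(<Ia><Ib>)."""
    n = len(frame_a)
    mean_a = sum(frame_a) / n
    mean_b = sum(frame_b) / n
    cross = sum(a * b for a, b in zip(frame_a, frame_b)) / n
    return cross / (mean_a * mean_b)

frame_t0 = [4.0, 1.0, 3.0, 2.0]                 # toy speckle intensities
identical = g2(frame_t0, frame_t0)              # correlated speckle: g2 > 1
shuffled  = g2(frame_t0, [2.0, 3.0, 1.0, 4.0])  # decorrelated: g2 near 1
print(f"g2 same frame = {identical:.2f}, decorrelated = {shuffled:.2f}")
```

The many-pixel average is what gives the camera its advantage over a single detector: each frame contributes thousands of independent speckles to the estimate of g2, so good statistics are reached in a short acquisition time.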
http://dx.doi.org/10.1364/OE.18.016289
August 2010

Ultimate spatial resolution with Diffuse Optical Tomography.

Opt Express 2009 Jul;17(14):12132-44

Laboratoire de Physique des Lasers, CNRS UMR 7538, Université Paris 13, 99 av J-B Clément, 93430 Villetaneuse, France.

We evaluate the ultimate transverse spatial resolution that can be expected in Diffuse Optical Tomography, in the configuration of projection imaging. We show how such a performance can be approached using time-resolved measurements and reasonable assumptions, in the context of a linearized diffusion model.
http://dx.doi.org/10.1364/oe.17.012132
July 2009