131
Scalability Analysis and Optimization for Large-Scale Deep Learning. Pumma, Sarunya. 03 February 2020.
Despite its growing importance, scalable deep learning (DL) remains a difficult challenge. Scalability of large-scale DL is constrained by many factors, including those deriving from data movement and data processing. DL frameworks rely on large volumes of data to be fed to the computation engines for processing. However, current hardware trends show that data movement is already one of the slowest components in modern high-performance computing systems, and this gap is only going to widen in the future. This includes data movement needed from the filesystem, within the network subsystem, and even within the node itself, all of which limit the scalability of DL frameworks on large systems. Even after data is moved to the computational units, managing this data is not easy. Modern DL frameworks use multiple components---such as graph scheduling, neural network training, gradient synchronization, and input pipeline processing---to process this data in an asynchronous, uncoordinated manner, which results in straggler processes and consequently computational imbalance, further limiting scalability. This thesis studies a subset of the large body of data movement and data processing challenges that exist in modern DL frameworks.
For the first study, we investigate file I/O constraints that limit the scalability of large-scale DL. We first analyze the Caffe DL framework with Lightning Memory-Mapped Database (LMDB), one of the most widely used file I/O subsystems in DL frameworks, to understand the causes of file I/O inefficiencies. Based on our analysis, we propose LMDBIO---an optimized I/O plugin for scalable DL that addresses the various shortcomings in existing file I/O for DL. Our experimental results show that LMDBIO significantly outperforms LMDB in all cases and improves overall application performance by up to 65-fold on 9,216 CPUs of the Blues and Bebop supercomputers at Argonne National Laboratory.
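To make the I/O pattern concrete: the sketch below is not LMDBIO, but a minimal illustration (assuming the py-lmdb Python binding) of a naive baseline in which every data-parallel reader opens the database independently and walks a cursor from the start to reach its own contiguous shard; the sharding arithmetic and function name are invented for the example.

```python
import lmdb

def read_shard(db_path, rank, world_size):
    """Read this rank's contiguous shard of an LMDB dataset.

    Deliberately naive: each reader opens the environment on its own and
    skips past records that belong to other ranks, so every process still
    touches the early part of the database -- the kind of redundant,
    serialized access an optimized I/O plugin would avoid.
    """
    records = []
    env = lmdb.open(db_path, readonly=True, lock=False)
    try:
        with env.begin(buffers=True) as txn:
            n = txn.stat()["entries"]
            start = rank * n // world_size
            stop = (rank + 1) * n // world_size
            for i, (key, value) in enumerate(txn.cursor()):
                if i >= stop:
                    break
                if i >= start:
                    records.append((bytes(key), bytes(value)))  # copy out of the txn
    finally:
        env.close()
    return records
```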
Our second study deals with the computational imbalance problem in data processing. For most DL systems, the simultaneous and asynchronous execution of multiple data-processing components on shared hardware resources causes these components to contend with one another, leading to severe computational imbalance and degraded scalability. We propose various novel optimizations that minimize resource contention and improve performance by up to 35% for training various neural networks on 24,576 GPUs of the Summit supercomputer at Oak Ridge National Laboratory---the world's largest supercomputer at the time of writing this thesis. / Doctor of Philosophy / Deep learning is a method for computers to automatically extract complex patterns and trends from large volumes of data. It is a popular methodology that we use every day when we talk to Apple Siri or Google Assistant or when we ride in self-driving cars, and that we saw at work when IBM Watson was crowned champion of Jeopardy! While deep learning is integrated into our everyday life, it is a complex problem that has attracted the attention of many researchers.
Executing deep learning is a highly computationally intensive problem. On traditional computers, such as a generic laptop or desktop machine, the computation for large deep learning problems can take years or decades to complete. Consequently, supercomputers, which are machines with massive computational capability, are leveraged for deep learning workloads. The world's fastest supercomputer today, for example, is capable of performing almost 200 quadrillion floating point operations every second. While that is impressive, for large problems, unfortunately, even the fastest supercomputers today are not fast enough. The problem is not that they do not have enough computational capability, but that deep learning problems inherently rely on a lot of data---the entire concept of deep learning centers around the fact that the computer would study a huge volume of data and draw trends from it. Moving and processing this data, unfortunately, is much slower than the computation itself and with the current hardware trends it is not expected to get much faster in the future.
This thesis aims at making deep learning executions on large supercomputers faster. Specifically, it looks at two pieces associated with managing data: (1) data reading---how to quickly read large amounts of data from storage, and (2) computational imbalance---how to ensure that the different processors on the supercomputer are not waiting for each other and thus wasting time. We first analyze each performance problem to identify the root cause of it. Then, based on the analysis, we propose several novel techniques to solve the problem. With our optimizations, we are able to significantly improve the performance of deep learning execution on a number of supercomputers, including Blues and Bebop at Argonne National Laboratory, and Summit---the world's fastest supercomputer---at Oak Ridge National Laboratory.
132
Disentangling neural heterogeneity in autism. Bertelsen, Natasha. 08 March 2024.
Two main theories of neural atypicality have been postulated in autism. One theory proposes that autism can be explained as the result of atypical patterns of hypo- and hyper-functional connectivity (FC) within and between brain areas. A complementary theory suggests that atypical functional communication in autism could result from an altered ratio between excitatory and inhibitory input (E:I imbalance). These theories have previously been explored as though they apply to all individuals with a behavioral diagnosis. However, given the multiscale heterogeneity characterizing autism, different subsets of individuals with autism may display different patterns of functional connectivity atypicalities and E:I imbalance. This thesis sets out to explore how neural atypicalities in connectivity and E:I imbalance might be differentially expressed in subsets of the autistic population.
To this end, two empirical investigations were conducted. First, the connectivity hypothesis was explored by investigating whether behaviorally-defined subtypes were associated with different patterns of FC atypicalities. Behaviorally-defined subtypes were obtained by stratifying autistic individuals based on the relative balance between the social communication (SC) and restricted and repetitive behaviors (RRB) core symptom domains. This approach yielded three behaviorally-based subtypes: SC>RRB, SC<RRB, and SC=RRB. The SC>RRB subtype displayed hypoconnectivity between somatomotor and perisylvian circuitry, while the SC=RRB subtype showed hypoconnectivity between somatomotor and visual association areas and hyperconnectivity between medial motor and anterior salience networks. Finally, these subtype-specific FC alterations were shown to be enriched for partially distinct genetic mechanisms, some of which related to excitatory-inhibitory neurons and astrocytes.
In a second study, the E:I imbalance hypothesis was explored by investigating whether autism subtypes could be identified based on an E:I-sensitive metric computed from electroencephalographic (EEG) data. Specifically, the Hurst exponent (H) – a metric that has been shown to be affected by changes in excitatory input – was computed on EEG time-series data obtained in two resting-state conditions, eyes open and eyes closed. H-based clustering revealed two E:I-based neurosubtypes across conditions with opposing patterns of E:I imbalance compared to neurotypical controls. Autism neurosubtype 1 showed on-average higher H values, while neurosubtype 2 displayed on-average lower H. These opposing E:I balance patterns were present globally across the brain, with the limited exception of an orthogonal, larger decrease in H in non-frontal electrodes in neurosubtype 2. Finally, investigation at the behavioral level identified distinct multivariate brain-behavior relationships between age, intelligence, autistic traits and H.
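The abstract does not commit to a particular estimator of H; as one common choice, a minimal detrended fluctuation analysis (DFA) sketch is shown below, with an arbitrary set of window scales and white noise standing in for an EEG channel.

```python
import numpy as np

def dfa_hurst(x, scales=(16, 32, 64, 128, 256)):
    """Estimate the Hurst exponent of a 1-D signal via detrended
    fluctuation analysis (DFA); the scale set is illustrative only."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                    # integrated "profile" signal
    flucts = []
    for n in scales:
        n_windows = len(y) // n
        segs = y[: n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        coeffs = np.polyfit(t, segs.T, deg=1)      # linear trend per window
        trend = np.outer(coeffs[0], t) + coeffs[1][:, None]
        flucts.append(np.sqrt(np.mean((segs - trend) ** 2)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), deg=1)
    return slope                                   # approx. H for fGn-like signals

# Sanity check: white noise should give a value close to 0.5.
rng = np.random.default_rng(0)
print(round(dfa_hurst(rng.standard_normal(10_000)), 2))
```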
Taken together, these empirical findings demonstrate that the two major theories of neural atypicality in autism – FC alteration and E:I imbalance – do not apply equally to all individuals with a behavioral diagnosis. Rather, different subtypes of autism exist that display contrasting patterns of neural atypicality compared to typically developing individuals. These contrasting patterns might be driven by differentially altered primary or compensatory E:I mechanisms shaping distinct atypical cortical organizations within the subtypes. The relationship between specific neural atypicalities and variability at the behavioral and genetic level is, however, subtle across the subtypes. This limited multiscale association could suggest that heterogeneity in autism is driven by the presence, within the larger population, of subtype-specific, mosaic-like patterns of atypicalities at the behavioral and biological level. Further research is required to thoroughly characterize how these levels map onto one another within the subtypes and to determine the pathophysiological mechanisms driving their development.
133
Scalable Data Management for Object-based Storage Systems. Wadhwa, Bharti. 19 August 2020.
Parallel I/O performance is crucial to sustaining scientific applications on large-scale High-Performance Computing (HPC) systems. Large-scale distributed storage systems, in particular object-based storage systems, face severe challenges in managing data efficiently. Inefficient data management leads to poor I/O and storage performance in HPC applications and scientific workflows. Some of the main challenges for efficient data management arise from poor resource allocation, load imbalance across object storage targets, and inflexible data sharing between applications in a workflow. In addition, it is challenging to shoehorn new interfaces into parallel I/O, such as taking advantage of multiple layers of storage and supporting analysis in the data path. Solving these challenges to improve the performance and efficiency of object-based storage systems is crucial, especially for the upcoming era of exascale systems.
This dissertation is focused on solving these major challenges in object-based storage systems by providing scalable data management strategies. In the first part of the dissertation (Chapter 3), we present a resource-contention-aware load balancing tool (iez) for large-scale distributed object-based storage systems. In Chapter 4, we extend iez to support Progressive File Layout for the Lustre object-based storage system. In the second part (Chapter 5), we present a technique to facilitate data sharing in scientific workflows using object-based storage, with our proposed tool, Workflow Data Communicator. In the last part of this dissertation, we present a solution for transparent data management in the multi-layer storage hierarchy of present and next-generation HPC systems. This dissertation shows that by intelligently employing scalable data management techniques, scientific applications' and workflows' flexibility and performance in object-based storage systems can be enhanced manyfold. Our proposed data management strategies can guide next-generation HPC storage systems' software design to efficiently support data for scientific applications and workflows. / Doctor of Philosophy / Large-scale object-based storage systems face severe challenges in managing data efficiently for HPC applications and workflows. These storage systems often manage and share data inflexibly, without considering the load imbalance and resource contention in the underlying multi-layer storage hierarchy. This dissertation first studies how resource contention and inflexible data sharing mechanisms impact HPC applications' storage and I/O performance, and then presents a series of efficient techniques, tools and algorithms to provide efficient and scalable data management for current and next-generation HPC storage systems.
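The abstract does not detail iez's placement algorithm; purely as a toy illustration of contention-aware placement across object storage targets (OSTs), a greedy least-loaded heuristic could look like the sketch below, where the stripe sizes, load units, and function name are all invented for the example.

```python
import heapq

def place_stripes(stripe_sizes, ost_loads):
    """Assign each stripe to the currently least-loaded OST (toy heuristic,
    not the algorithm implemented by iez)."""
    heap = [(load, ost) for ost, load in enumerate(ost_loads)]
    heapq.heapify(heap)
    placement = []
    for size in stripe_sizes:
        load, ost = heapq.heappop(heap)   # least-loaded target so far
        placement.append(ost)
        heapq.heappush(heap, (load + size, ost))
    return placement

# Six stripes onto three OSTs with uneven starting load.
print(place_stripes([4, 4, 2, 2, 1, 1], [0, 3, 5]))
```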
134
Time-Varying Frequency Selective IQ Imbalance Estimation and Compensation. Inti, Durga Laxmi Narayana Swamy. 14 June 2017.
Transceiver architectures based on the Direct Down-Conversion (DDC) principle are of interest for meeting the diverse needs of present and future wireless systems. DDC transceivers have a simple structure with fewer analog components and offer low-cost, flexible and multi-standard solutions. However, DDC transceivers suffer from certain circuit impairments that affect their performance in wide-band, high-data-rate and multi-user systems.
IQ imbalance (IQI) is one of the problems of DDC transceivers that limits their image rejection capabilities. Compensation techniques for frequency-independent IQI, which arises from gain and phase mismatches of the mixers in the I/Q paths of the transceiver, have been widely discussed in the literature. However, for wideband multi-channel transceivers, it is becoming increasingly important to address frequency-dependent IQI arising from mismatches in the analog I/Q lowpass filters.
A hardware-efficient and standard-independent digital estimation and compensation technique for frequency-dependent IQI is introduced, which is also capable of tracking time-varying changes in IQI. The technique is blind and adaptive in nature, based on second-order statistical properties of complex random signals such as properness/circularity.
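The thesis's own estimator is not reproduced here; as a minimal sketch of the circularity idea it builds on, the single-tap blind compensator below handles only the frequency-independent special case (the frequency-dependent case would replace the single coefficient with a short adaptive filter acting on the conjugate branch). The step size, imbalance values, and signal-model coefficients are illustrative assumptions.

```python
import numpy as np

def blind_iq_compensation(x, mu=1e-3):
    """Circularity-based blind compensation of frequency-independent IQ imbalance.

    Adapts a single complex coefficient so that the complementary
    autocorrelation E[y(n)^2] of the output is driven toward zero, i.e. the
    output is restored to a proper (circular) signal.
    """
    w = 0.0 + 0.0j
    y = np.empty_like(x)
    for n, xn in enumerate(x):
        yn = xn + w * np.conj(xn)   # subtract the image (conjugate) component
        w -= mu * yn * yn           # stochastic update toward E[y^2] = 0
        y[n] = yn
    return y

# Toy demonstration: impose a gain/phase mismatch on a proper complex signal,
# then compensate blindly (g, phi, and mu are arbitrary example values).
rng = np.random.default_rng(0)
s = (rng.standard_normal(50_000) + 1j * rng.standard_normal(50_000)) / np.sqrt(2)
g, phi = 1.05, np.deg2rad(3.0)
k1 = (1 + g * np.exp(1j * phi)) / 2       # simple imbalance model:
k2 = (1 - g * np.exp(-1j * phi)) / 2      # x = k1*s + k2*conj(s)
x = k1 * s + k2 * np.conj(s)
y = blind_iq_compensation(x)
print("|E[x^2]| before:", abs(np.mean(x * x)))
print("|E[y^2]| after :", abs(np.mean(y[-10_000:] ** 2)))
```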
A detailed performance analysis of the introduced technique is carried out through computer simulations for various real-time operating scenarios. A novel technique for finding the optimal number of taps required for the adaptive IQI compensation filter is proposed, and its performance is validated. In addition, a metric for measuring properness is developed and used for error-power and step-size analysis. / Master of Science / A wireless transceiver consists of two major building blocks, namely the RF front-end and the digital baseband. The front-end performs functions such as frequency conversion, filtering, and amplification. Imperfections introduced by deep-submicron fabrication lead to non-idealities in the front-end components, which limit their accuracy and affect the performance of the overall transceiver.
Complex (I/Q) mixing of baseband signals is preferred over real mixing because of its inherent bandwidth efficiency. The I/Q paths enabling this complex mixing in the front-end may not be exactly identical, disturbing the perfect orthogonality of the in-phase and quadrature components and leading to IQ imbalance. The resulting IQ imbalance creates an image of the signal at its mirror frequencies. Imbalances arising from the mixers lead to an image of constant strength, whereas I/Q low-pass filter mismatches lead to an image of varying strength across the Nyquist range. In addition, temperature effects cause IQ imbalance to vary slowly with time.
In this thesis, a hardware-efficient and standard-independent technique is introduced to compensate for performance-degrading IQ imbalance. The technique is blind and adaptive in nature and uses second-order statistical signal properties such as circularity or properness for IQ imbalance estimation.
The contribution of this work, which gives a key insight into the optimal number of taps required for the adaptive compensation filter, improves on the state-of-the-art technique. The performance of the technique is evaluated under various scenarios of interest, and a detailed analysis of the results is presented.
135
On the Use of Convolutional Neural Networks for Specific Emitter Identification. Wong, Lauren J. 12 June 2018.
Specific Emitter Identification (SEI) is the association of a received signal with an emitter, and is made possible by the unique and unintentional characteristics an emitter imparts onto each transmission, known as its radio frequency (RF) fingerprint. SEI systems are of vital importance to the military for applications such as early warning systems, emitter tracking, and emitter location. More recently, cognitive radio systems have started making use of SEI systems to enforce Dynamic Spectrum Access (DSA) rules. The use of pre-determined, expert-defined signal features to characterize the RF fingerprint of emitters of interest limits current state-of-the-art SEI systems in numerous ways. Recent work in RF Machine Learning (RFML) and Convolutional Neural Networks (CNNs) has shown the capability to perform signal processing tasks such as modulation classification without the need for pre-defined expert features. Given this success, the work presented in this thesis investigates the ability to use CNNs, in place of a traditional expert-defined feature extraction process, to improve upon traditional SEI systems by developing and analyzing two distinct approaches for performing SEI with CNNs. Neither approach assumes a priori knowledge of the emitters of interest. Further, both approaches use only raw IQ data as input and are designed to be easily tuned or modified for new operating environments. Results show that CNNs can be used both to estimate expert-defined features and to learn emitter-specific features that effectively identify emitters. / Master of Science / When a device sends a signal, it unintentionally modifies the signal due to small variations and imperfections in the device's hardware. These modifications, typically called the device's radio frequency (RF) fingerprint, are unique to each device and, generally, are independent of the data contained within the signal.
The goal of a Specific Emitter Identification (SEI) system is to use these RF fingerprints to match received signals to the devices, or emitters, which sent the given signals. SEI systems are often used for military applications, and, more recently, have been used to help make more efficient use of the highly congested RF spectrum.
Traditional state-of-the-art SEI systems detect the RF fingerprint embedded in each received signal by extracting one or more features from the signal. These features have been defined by experts in the field, and are determined ahead of time, in order to best capture the RF fingerprints of the emitters the system will likely encounter. However, this use of pre-determined expert features in traditional SEI systems limits the system in a variety of ways.
The work presented in this thesis investigates the ability to use Machine Learning (ML) techniques in place of the typically used expert-defined feature extraction processes, in order to improve upon traditional SEI systems. More specifically, in this thesis, two distinct approaches for performing SEI using Convolutional Neural Networks (CNNs) are developed and evaluated. These approaches are designed to require no prior knowledge of the emitters they may encounter and to be easily modified, unlike traditional SEI systems.
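To make the idea concrete, below is a minimal sketch of a CNN that consumes raw IQ samples as a two-channel 1-D input; the layer sizes, window length, and emitter count are arbitrary placeholders, not the architectures evaluated in the thesis.

```python
import torch
import torch.nn as nn

class IQEmitterCNN(nn.Module):
    """Illustrative CNN mapping raw IQ windows of shape (batch, 2, N)
    to one logit per candidate emitter."""
    def __init__(self, num_emitters: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),            # length-independent summary
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_emitters),
        )

    def forward(self, iq: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(iq))

# Shape check with random data standing in for captured IQ windows.
model = IQEmitterCNN(num_emitters=10)
print(model(torch.randn(4, 2, 1024)).shape)   # torch.Size([4, 10])
```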
136
Pracovní podmínky a podpora zdraví pracovníků v oblasti péče o seniory / Working Conditions and Health Promotion of Professionals in Elderly Care. Jirkovská, Blanka. January 2016.
The dissertation offers a sociological and socio-psychological view of the working conditions and health promotion of professionals working in the field of long-term care and examines the impact of these conditions on their well-being. Its overall objective is to map the current working conditions and work situation of this occupational group in the Czech Republic and to draw attention to an area that is not adequately reflected. A specific objective is to verify the Effort-Reward Imbalance model and the concept of well-being on the population studied. A secondary aim is to provide the results of the conducted applied research to the management of the participating organizations for practical purposes. The work is divided into a theoretical and an empirical part. The theoretical background characterizes the basic concepts: health promotion and long-term care. The central theoretical and methodological tools used in the applied research are then described: the socio-psychological concept of well-being and the Effort-Reward Imbalance (ERI) model. The model, introduced by the contemporary German sociologist J. Siegrist, is a key tool for mapping stress in the workplace. The empirical part of the dissertation describes the research, which was conducted in two parts during 2012-2014. In the first part, held in the form of focus...
137
Enhancing Neural Network Accuracy on Long-Tailed Datasets through Curriculum Learning and Data Sorting / Machine Learning, Neural Networks, CORAL Framework, Long-Tailed Data, Imbalance Metrics, Teacher-Student Models, Curriculum Learning, Training Schedules. Barreira, Daniel. January 2023.
In this paper, a study is conducted to investigate the use of Curriculum Learning as an approach to address accuracy issues in a neural network caused by training on a Long-Tailed dataset. The thesis problem is presented by a Swedish e-commerce company. They currently use a neural network that they have modified using a CORAL framework. This adaptation means that instead of a classic binary regression model, it is an ordinal regression model. The data used for training the model has a Long-Tail distribution, which leads to inaccuracies when predicting a price distribution for items that are part of the tail end of the data. The current method applied to remedy this problem is re-balancing in the form of down-sampling and up-sampling. A linear training scheme is introduced, increasing in increments of 10% while applying Curriculum Learning. As a method for sorting the data in an appropriate way, inspiration is drawn from Knowledge Distillation, specifically the Teacher-Student model approach. The teacher models are trained as specialists on three different subsets, and those models are then used as a basis for sorting the data before training the student model. During the training of the student model, the Curriculum Learning approach is used. The results show that for Imbalance Ratio, Kullback-Leibler divergence, Class Balance, and the Gini coefficient, the data is clearly less Long-Tailed after dividing the data into subsets. With the correct settings before training, there is also an improvement in the training speed of the student model compared to the base model. The accuracy of both the student model and the base model is comparable. There is a slight advantage for the base model when predicting items in the head part of the data, while the student model shows improvements for items that lie between the head and the tail. / In this thesis, a study is conducted to investigate the use of Curriculum Learning as a method for handling accuracy problems in a neural network that result from training on data with a Long-Tail distribution. The problem addressed in the thesis is provided by a Swedish e-commerce company. They currently use a neural network that has been modified using a CORAL framework. This adaptation means that instead of a classic binary regression model, it has an ordinal regression model. The data used to train the model has a Long-Tail distribution, which leads to problems when predicting a price distribution for items belonging to the tail of the data. The current method used to remedy this problem is re-balancing in the form of down-sampling and up-sampling. A linear training schedule is introduced, increasing in steps of 10% while Curriculum Learning is applied. The method for sorting the data appropriately is inspired by Knowledge Distillation, specifically the Teacher-Student model part. The teacher models are trained as specialists on three different subsets, and these models are then used as a basis for sorting the data before training the student model. During the training of the student model, Curriculum Learning is applied. The results show that, in terms of Imbalance Ratio, Kullback-Leibler divergence, Class Balance, and the Gini coefficient, the data is clearly less Long-Tailed after being divided into subsets. With the right settings before training, there is also an improvement in training speed for the student model compared to the base model. The accuracy of the student model and the base model is comparable. There is a slight advantage for the base model when predicting items in the head part of the data, while the student model shows improvements for items that lie between the head and the tail.
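As a rough sketch of the linear 10% pacing scheme described above, assuming the dataset has already been sorted from easy to hard (for example by teacher-model confidence), a stage-wise loader might look like the following; the function name and parameters are illustrative rather than taken from the thesis.

```python
from torch.utils.data import DataLoader, Subset

def curriculum_loaders(sorted_dataset, num_stages=10, batch_size=64):
    """Yield one DataLoader per curriculum stage.

    `sorted_dataset` is assumed to be ordered easy-to-hard; stage k exposes
    the first k/num_stages fraction of it, so with num_stages=10 the amount
    of visible data grows linearly in 10% steps.
    """
    n = len(sorted_dataset)
    for stage in range(1, num_stages + 1):
        cutoff = max(1, (n * stage) // num_stages)
        subset = Subset(sorted_dataset, range(cutoff))
        yield DataLoader(subset, batch_size=batch_size, shuffle=True)
```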
138
Contextual and Personal Factors Contributing to the Mental Health of Norwegian Professional Musicians. Gilberg, Asbjørn L. January 2014.
This master's thesis investigates the factors contributing to Norwegian professional musicians' psychological distress. Several researchers have pointed out that musicians seem to be a risk group with regard to mental health and work environment. In contrast, research on the explanatory variables of their mental health is scarce. Recently, a study indicated a high prevalence of psychological distress in Norwegian musicians. A qualitative study on Norwegian musicians reported a combination of family, social, and personal factors to be of particular importance for their mental health. The present study adds to the accumulated research base by conceptualizing the contributing factors of musicians' health in a job demands–resources framework, in which the total model as well as individual predictors are tested with a survey of 1,365 Norwegian professional musicians. Five out of ten hypotheses were supported using a hierarchical multiple regression procedure. Job demands and job control were positively related to psychological distress, whereas job-related social support, emotional stability and sense of mastery were negatively related to psychological distress. Work–nonwork interference, effort–reward imbalance and conscientiousness were not significantly related to the outcome. Unexpectedly, job control was positively related to psychological distress, which may have been influenced by the subjects' levels of personal resources. Overall, the main finding was that a combination of contextual and personal variables was most influential, but that the work environment concepts investigated were only weakly or non-significantly related to musicians' mental health. The strongest single contributors were emotional stability, sense of mastery and general social support, indicating that personal dispositions of emotionality, a strong sense of control over one's life, and perceived social support from family and friends are of high significance for Norwegian professional musicians' experience of anxiety and depression-like symptoms.
139
Modelling, estimation and compensation of imbalances in quadrature transceivers. De Witt, Josias Jacobus. 2011.
Thesis (PhD (Electrical and Electronic Engineering))--University of Stellenbosch, 2011. / ENGLISH ABSTRACT: The use of the quadrature mixing topology has been severely limited in the past due to its sensitivity towards mismatches between its signal paths. In recent years, researchers have suggested that digital techniques can be used to compensate for the impairments in the analogue quadrature mixing front-end. Most authors, however, focus on the modelling and compensation of frequency-independent imbalances, reasoning that this approach is sufficient for narrow-band signal operation. This common assumption is, however, becoming increasingly less applicable as the use of wider-bandwidth signals and multi-channel systems becomes more prevalent.

In this dissertation, baseband-equivalent distortion models are derived, which model frequency-independent as well as frequency-dependent contributions towards the imbalances of the front-end. Both lowpass and bandpass imbalances are modelled, which extends the current modelling approaches found in the literature. The resulting baseband models are shown to be capable of explaining the imbalance characteristics observed in practical quadrature mixing front-ends, where existing models fail to do so.

The developed imbalance models are then used to develop novel frequency-dependent imbalance extraction and compensation techniques, which directly extract the exact quadrature imbalances of the front-end using simple test tones. The imbalance extraction and compensation procedures are implemented in the digital baseband domain of the transceiver and do not require high computational complexity. The performance of these techniques is subsequently verified through simulations and a practical hardware implementation, yielding significant improvement in the image rejection capabilities of the quadrature mixing transceiver.

Finally, a novel, blind imbalance compensation technique is developed. This technique is aimed at extracting frequency-independent I/Q imbalances in systems employing digital modulation schemes. No test tones are employed, and the imbalances of the modulator and demodulator are extracted from the second-order statistics of the received signal. Simulations are presented to investigate the performance of these techniques under various operating conditions. / AFRIKAANSE OPSOMMING: The use of the quadrature mixing topology is severely limited by its sensitivity to imbalances that may exist between the two analogue signal paths. In recent years, researchers have begun to propose digital methods to compensate for these imbalances in the analogue domain. Most researchers, however, focus on frequency-independent imbalances, justifying this approach by reasoning that it is an acceptable assumption for a narrowband system. This common assumption is, however, becoming less accurate as wideband and multi-channel systems become the order of the day.

In this thesis, baseband-equivalent imbalance models are derived that aim to represent the effect of frequency-dependent and frequency-independent imbalances accurately. Both lowpass and bandpass imbalances are modelled, which is an extension of the current modelling approaches found in the literature. It is shown that the models of this thesis succeed in accurately capturing the characteristics of a real quadrature mixing system, something at which current models in the literature do not succeed.

The baseband-equivalent models are then used to develop new digital compensation methods that succeed in estimating the frequency-dependent imbalances of the quadrature mixing system and compensating for them in the digital part of the system. These compensation methods use simple test signals to estimate the imbalances. The performance of these compensation methods is then investigated by means of simulations and a practical hardware implementation. The results show that these methods succeed in bringing about a considerable improvement in the image rejection capabilities of the quadrature mixers.

Finally, a blind compensation method is also developed, aimed at frequency-independent imbalances in systems employing digital modulation schemes. For these methods, no test signals are needed to estimate the imbalances; they are extracted from the second-order statistics of the received signal. The performance of these techniques is further confirmed by means of simulations.
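As a complement to the abstract above, here is a minimal numerical sketch of the kind of widely linear, frequency-dependent baseband imbalance model it refers to, with the image rejection ratio evaluated across the band; the FIR coefficients are invented for illustration and are not values from the thesis.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
s = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)

# Widely linear baseband model of a quadrature front-end with
# frequency-dependent imbalance: x = g1 * s + g2 * conj(s), where g1 and g2
# are short FIR responses (illustrative coefficients only).
g1 = np.array([1.0, 0.05, -0.02])
g2 = np.array([0.03, -0.01, 0.005])
x = lfilter(g1, 1.0, s) + lfilter(g2, 1.0, np.conj(s))
print("improperness |E[x^2]|:", abs(np.mean(x * x)))   # nonzero due to imbalance

# Image rejection ratio versus frequency: |G1(f)|^2 / |G2(f)|^2.
G1 = np.fft.fft(g1, 512)
G2 = np.fft.fft(g2, 512)
irr_db = 10 * np.log10(np.abs(G1) ** 2 / np.abs(G2) ** 2)
print("IRR across the band: %.1f to %.1f dB" % (irr_db.min(), irr_db.max()))
```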
140
INTERVENTION TO EXTRASYNAPTIC GABAA RECEPTORS FOR SYMPTOM RELIEF IN MOUSE MODELS OF RETT SYNDROME. Zhong, Weiwei. 10 May 2017.
Rett Syndrome (RTT) is a neurodevelopmental disorder affecting 1 out of 10,000 females worldwide. Mutations of the X-linked MECP2 gene encoding methyl CpG binding protein 2 (MeCP2) account for >90% of RTT cases. People with RTT and mice with Mecp2 disruption show autonomic dysfunction, especially life-threatening breathing disorders, which involve defects in brainstem neurons for breathing control, including neurons in the locus coeruleus (LC). Accumulating evidence obtained from Mecp2−/Y mice suggests that imbalanced excitation/inhibition, or impaired synaptic communication in central neurons, plays a major role. LC neurons in Mecp2−/Y mice are hyperexcited, attributable to a deficiency in GABA synaptic inhibition. Several previous studies indicate that augmenting synaptic GABA receptors (GABARs) leads to a relief of RTT-like symptoms in mice. Extrasynaptic GABARs, located outside the synaptic cleft, have the capability to produce sustained inhibition and may be a potential therapeutic target for rebalancing excitation/inhibition in RTT. In contrast to the rich information on synaptic GABARs in RTT research, however, whether Mecp2 gene disruption affects extrasynaptic GABARs remains unclear. In this study, we show evidence that extrasynaptic GABAR-mediated tonic inhibition of LC neurons was enhanced in Mecp2−/Y mice, which seems attributable to augmented δ subunit expression. Low-dose exposure to THIP, an agonist specific to δ-subunit-containing extrasynaptic GABARs, extended the lifespan, alleviated breathing abnormalities, enhanced motor function, and improved social behaviors of Mecp2−/Y mice. Such beneficial effects were associated with stabilization of brainstem neuronal hyperexcitability, including neurons in the LC and the mesencephalic trigeminal V nucleus (Me5), and improvement of norepinephrine (NE) biosynthesis. Similar phenomena were found in the symptomatic Mecp2+/− (sMecp2+/−) female mouse model as well, in which THIP exposure alleviated the hyperexcitability of both LC and Me5 neurons to a level similar to that of their counterparts in Mecp2−/Y mice and improved breathing function. In identified LC neurons of sMecp2+/− mice, the hyperexcitability appeared to be determined by both MeCP2 expression and environmental cues. In conclusion, intervention targeting extrasynaptic GABAARs by chronic treatment with THIP might be a therapeutic approach to RTT-like symptoms in both Mecp2−/Y and Mecp2+/− mouse models, and perhaps in people with RTT as well.