91 |
Analysis and Applications of the Heterogeneous Multiscale Methods for Multiscale Elliptic and Hyperbolic Partial Differential Equations
Arjmand, Doghonay. January 2013
This thesis concerns the analysis and applications of the Heterogeneous Multiscale Methods (HMM) for multiscale elliptic and hyperbolic partial differential equations. We have gathered the main contributions in two papers. The first paper deals with the cell-boundary error that is present in multiscale algorithms for elliptic homogenization problems. Typical multiscale methods have two essential components: a macro model and a micro model. The micro model is used to upscale parameter values which are missing in the macro model. Solving the micro model, on the other hand, requires imposing boundary conditions on the boundary of the microscopic domain. Imposing a naive boundary condition leads to an $O(\varepsilon/\eta)$ error in the computation, where $\varepsilon$ is the size of the microscopic variations in the medium and $\eta$ is the size of the micro-domain. Until now, the strategies proposed improve the convergence rate to at best fourth order in $\varepsilon/\eta$; the removal of this error in multiscale algorithms thus remains an important open problem. In this paper, we present an approach based on a time-dependent model which is general in terms of dimension. With this approach we are able to obtain $O((\varepsilon/\eta)^q)$ and $O((\varepsilon/\eta)^q + \eta^p)$ convergence rates in periodic and locally periodic media, respectively, where $p,q$ can be chosen arbitrarily large. In the second paper, we analyze a multiscale method developed within the Heterogeneous Multiscale Methods (HMM) framework for the numerical approximation of wave propagation problems in periodic media. In particular, we are interested in long-time $O(\varepsilon^{-2})$ wave propagation. In the method, the microscopic model uses the macro solution as initial data. In short-time wave propagation problems, a linear interpolant of the macro variables can be used as initial data for the micro model.
However, in long-time multiscale wave problems the linear data does not suffice, and one has to use a third-degree interpolant of the coarse data to capture the $O(1)$ dispersive effects appearing in the long time. In this paper, we prove that by using initial data consistent with the current macro state, HMM captures these dispersive effects up to any desired order of accuracy in terms of $\varepsilon/\eta$. We use two new ideas, namely quasi-polynomial solutions of periodic problems and local time averages of solutions of periodic hyperbolic PDEs. As a byproduct, these ideas naturally reveal the role of consistency for the high-accuracy approximation of homogenized quantities. / <p>QC 20130926</p>
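For context, the standard periodic homogenization setting behind the cell-boundary error discussion can be sketched as follows (textbook notation; the symbols $a$, $\chi_j$, $A^0$ are conventional choices, not taken from the thesis):

```latex
% Micro-scale elliptic problem with rapidly oscillating coefficients:
\[
  -\nabla\cdot\bigl(a(x/\varepsilon)\,\nabla u^{\varepsilon}\bigr) = f
  \quad \text{in } \Omega .
\]
% The upscaled (homogenized) coefficient is obtained from cell problems
% posed on the unit cell Y:
\[
  A^{0}_{ij} = \int_{Y} a(y)\bigl(\delta_{ij} + \partial_{y_i}\chi_{j}(y)\bigr)\,dy,
  \qquad
  -\nabla_{y}\cdot\bigl(a(y)\,(e_{j} + \nabla_{y}\chi_{j})\bigr) = 0
  \ \text{in } Y .
\]
% Truncating the cell problem to a micro-domain of size \eta with artificial
% boundary conditions is what introduces the O(\varepsilon/\eta) cell-boundary
% error discussed in the abstract above.
```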
|
92 |
Gestion de la mobilité dans les réseaux femtocells / Mobility management in femtocells networks
Ben Cheikh, Ahlam. 12 December 2016
Femtocells are deployed via FAPs within the coverage of macrocells in order to offer users continuous service both indoors and outdoors. They are characterized by short range and low power, and can cover only a limited number of users. These characteristics make mobility management one of the most important challenges to solve. In this thesis, we propose new handover algorithms. First, we consider the mobile's direction as a key parameter for the handover decision. We propose a handover algorithm named OHMF based on optimizing the list of candidate FAPs while taking into account both the signal quality and the mobile's direction of movement. We then propose a direction prediction process based on linear regression: the idea is to predict the mobile's future position from its current and previous positions. This algorithm is called OHDP. Second, we address the mobility prediction problem in order to make the handover decision more rigorous. To this end, we use hidden Markov models as a predictor of the next FAP and propose a handover algorithm named OHMP. To adapt our solution to all the constraints of femtocell networks, we propose a handover algorithm called OHMP-CAC, which integrates a CAC suited to the studied network and service differentiation with and without QoS constraints. Performance studies based on simulations and real mobility traces were carried out to assess the effectiveness of our proposals. / Femtocell networks are deployed within the macrocell's coverage to provide extended services with better performance.
Femtocells have short-range, low-power transmissions, and each FAP supports only a small number of connected users. Owing to these inherent features, mobility management remains one of the most challenging issues in femtocell network deployment. In this thesis, we propose new handover algorithms adapted to the characteristics of femtocell networks. In the first part, we consider the direction of the mobile user as a key parameter for the handover decision. To do so, we propose a new handover algorithm called OHMF. Its main purpose is to optimize the list of candidate FAPs based on signal quality as well as the mobile's direction, in order to better choose the target FAP. After that, we propose an algorithm called OHDP based on direction prediction using linear regression. The idea is to predict the mobile's future position from its current and previous positions. In the second part, we focus on the mobility prediction problem to make more efficient handover decisions. We propose a novel handover decision algorithm called OHMP that uses an HMM as a predictor to accurately estimate the next FAP a mobile UE will visit, given its current and historical movement information. To adapt our solution to the characteristics of femtocell networks, we propose a handover algorithm called OHMP-CAC, which combines the HMM predictor with a proposed CAC and accounts for the available resources of the predicted FAP, the SINR, and the traffic type. To assess the efficiency of our proposals, all the underlying algorithms are evaluated through simulations and real mobility traces.
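The direction-prediction step described above (OHDP) can be sketched as a least-squares straight-line fit through the mobile's recent positions, extrapolated one step ahead. The function below is an illustrative reconstruction of the idea, not the thesis's actual formulation:

```python
def predict_next_position(track, dt=1.0):
    """Fit x(t) and y(t) with least-squares lines through recent samples
    and extrapolate one time step ahead.

    track : list of (x, y) positions, ordered oldest -> newest.
    Illustrative sketch only; OHDP's exact formulation is not reproduced here.
    """
    n = len(track)
    ts = list(range(n))
    t_mean = sum(ts) / n

    def fit_and_extrapolate(values):
        v_mean = sum(values) / n
        num = sum((t - t_mean) * (v - v_mean) for t, v in zip(ts, values))
        den = sum((t - t_mean) ** 2 for t in ts)
        slope = num / den
        # Line through (t_mean, v_mean) with the fitted slope, at t = last + dt.
        return v_mean + slope * (ts[-1] + dt - t_mean)

    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    return fit_and_extrapolate(xs), fit_and_extrapolate(ys)

# A user moving along a straight line is predicted to keep moving along it:
print(predict_next_position([(0, 0), (1, 2), (2, 4), (3, 6)]))  # -> (4.0, 8.0)
```

In a handover context, the predicted position would then be matched against candidate FAP coverage areas to prune the candidate list.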
|
93 |
DEEP ARCHITECTURES FOR SPATIO-TEMPORAL SEQUENCE RECOGNITION WITH APPLICATIONS IN AUTOMATIC SEIZURE DETECTION
Golmohammadi, Meysam. January 2021
Scalp electroencephalograms (EEGs) are used in a broad range of health care institutions to monitor and record electrical activity in the brain. EEGs are essential in the diagnosis of clinical conditions such as epilepsy, seizures, coma, encephalopathy, and brain death. Manual scanning and interpretation of EEGs is time-consuming since these recordings may last hours or days. It is also an expensive process, as it requires highly trained experts. Therefore, high-performance automated analysis of EEGs can reduce time to diagnosis and enhance real-time applications by identifying sections of the signal that need further review. Automatic analysis of clinical EEGs is a very difficult machine learning problem due to the low fidelity of a scalp EEG signal. Commercially available automated seizure detection systems suffer from unacceptably high false alarm rates. Many signal processing methods have been developed over the years, including time-frequency processing, wavelet analysis, and autoregressive spectral analysis. Though there has been significant progress in machine learning technology in recent years, the use of automated technology in clinical settings is limited, mainly due to unacceptably high false alarm rates. Further, state-of-the-art machine learning algorithms that employ high-dimensional models have not previously been utilized in EEG analysis because there has been a lack of large databases that accurately characterize clinical operating conditions.
Deep learning approaches can be viewed as a broad family of neural network algorithms that use many layers of nonlinear processing units to learn a mapping between inputs and outputs. Deep learning-based systems have generated significant improvements in performance for sequence recognition tasks on temporal signals such as speech, and for image analysis applications that can exploit spatial correlations and for which large amounts of training data exist. The primary goal of our proposed research is to develop deep learning-based architectures that capture spatial and temporal correlations in an EEG signal. We apply these architectures to the problem of automated seizure detection for adult EEGs. The main contribution of this work is the development of a high-performance automated EEG analysis system, based on principles of machine learning and big data, that approaches the levels of performance required for clinical acceptance of the technology.
In this work, we explore a combination of deep learning-based architectures. First, we present a hybrid architecture that integrates hidden Markov models (HMMs) for sequential decoding of EEG events with a deep learning-based postprocessing that incorporates temporal and spatial context. This system automatically processes EEG records and classifies three patterns of clinical interest in brain activity that might be useful in diagnosing brain disorders: spike and/or sharp waves, generalized periodic epileptiform discharges and periodic lateralized epileptiform discharges. It also classifies three patterns used to model the background EEG activity: eye movement, artifacts, and background. Our approach delivers a sensitivity above 90% while maintaining a specificity above 95%.
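The HMM stage's sequential decoding can be illustrated with a minimal log-domain Viterbi decoder. The two-state toy model below (background vs. spike-and-wave activity, with invented observation symbols and probabilities) is an illustration of the technique only, not the system's actual model:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for `obs`
    (log-domain Viterbi with backtracking)."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            best_prev, best_lp = max(
                ((p, V[t - 1][p] + math.log(trans_p[p][s])) for p in states),
                key=lambda kv: kv[1])
            V[t][s] = best_lp + math.log(emit_p[s][obs[t]])
            back[t][s] = best_prev
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 1 - 1, -1):
        if t > 0:
            path.append(back[t][path[-1]])
    return path[::-1]

# Toy two-state model: 'bckg' (background) vs 'spik' (spike-and-wave).
states = ("bckg", "spik")
start = {"bckg": 0.8, "spik": 0.2}
trans = {"bckg": {"bckg": 0.9, "spik": 0.1},
         "spik": {"bckg": 0.2, "spik": 0.8}}
emit = {"bckg": {"lo": 0.9, "hi": 0.1},
        "spik": {"lo": 0.2, "hi": 0.8}}
print(viterbi(["lo", "lo", "hi", "hi", "lo"], states, start, trans, emit))
# -> ['bckg', 'bckg', 'spik', 'spik', 'bckg']
```

The deep learning postprocessing described above would then refine such a decoded label sequence using temporal and spatial context.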
Next, we replace the HMM component of the system with a deep learning architecture that exploits spatial and temporal context. We study how effectively these architectures can model context. We introduce several architectures, including a novel hybrid system that integrates convolutional neural networks with recurrent neural networks to model both spatial relationships (e.g., cross-channel dependencies) and temporal dynamics (e.g., spikes). We also propose a topology-preserving architecture for spatio-temporal sequence recognition that uses raw data directly rather than low-level features. We show that this model learns representations directly from raw EEG data and does not need predefined features.
In this study, we use the Temple University EEG (TUEG) Corpus, supplemented with data from Duke University and Emory University, to evaluate the performance of these hybrid deep structures. We demonstrate that performance of a system trained only on Temple University Seizure Corpus (TUSZ) data transfers to a blind evaluation set consisting of the Duke University Seizure Corpus (DUSZ) and the Emory University Seizure Corpus (EUSZ). This type of generalization is very important since complex high-dimensional deep learning systems tend to overtrain.
We also investigate the robustness of this system to mismatched conditions (e.g., train on TUSZ, evaluate on EUSZ). We train a model on one of three available datasets and evaluate the trained model on the other two datasets. These datasets are recorded from different hospitals, using a variety of devices and electrodes, under different circumstances and annotated by different neurologists and experts. Therefore, these experiments help us to evaluate the impact of the dataset on our training process and validate our manual annotation process.
Further, we introduce methods to improve generalization and robustness. We analyze performance to gain additional insight into which aspects of the signal are modeled adequately and where the models fail. The best results for automatic seizure detection achieved in this study are a sensitivity of 45.59% with 12.24 false alarms (FAs) per 24 hours on TUSZ, 45.91% with 11.86 FAs on DUSZ, and 62.56% with 11.26 FAs on EUSZ. We demonstrate that the performance of the deep recurrent convolutional structure presented in this study is statistically comparable to human performance on the same dataset. / Electrical and Computer Engineering
|
94 |
Decision Making in Manufacturing Systems: An Integrated Throughput, Quality and Maintenance Model Using HMM
Shadid, Basel. 04 1900
<p>The decision making processes in today's manufacturing systems are very complex and challenging tasks. The flexibility desired in the functionality of a machine adds more components to it, and real-time monitoring and reporting generates large streams of data. The intelligent, real-time processing of this large collection of system data is at the core of manufacturing decision support tools. </p>
<p>This thesis outlines the use of Frequent Episodes in Event Sequences and Hidden Markov Modeling of throughput, quality and maintenance data to model the deterioration of performance in the components that make up the manufacturing system. The thesis also introduces the concept of decision points and outlines how to integrate the total cost function in a business model. </p>
This thesis deals with the following three topics:
<p>First, the component-based data structure of the manufacturing system is outlined, especially the throughput, quality and maintenance data. In this approach, the manufacturing system is considered as a group of components that interact with each other and with raw materials to produce the manufactured product. This interaction creates a considerable amount of data, which can be associated with the relevant components of the system. The relations between the manufacturing components are established on a physical and logical basis. The component properties are clearly defined in database tables created specifically for this application. The thesis also discusses web services in manufacturing systems and the portable technologies used in plant decision support tools. </p>
<p>Second, the thesis presents a novel application of Frequent Episodes in Event Sequences to identify patterns in the deterioration of a component's performance, using frequent episodes of operational failures, quality failures and maintenance activities. A Hidden Markov Model (HMM) is used to model each deterioration episode, estimating the states of performance and the transition rates between them. The thesis compares the results generated by this model to other existing models of component performance deterioration, emphasizing the benefits of the proposed model through the use of the plant data.</p>
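The episode-mining step can be illustrated with a minimal windowed count of a serial episode in an event sequence (in the spirit of classical frequent-episode discovery). The event codes, episode and window size below are invented for illustration:

```python
def serial_episode_frequency(events, episode, window):
    """Fraction of sliding windows of `window` consecutive events that
    contain the episode's event types in order (a serial episode).
    Simplified illustrative version; not the thesis's actual algorithm."""
    total = max(0, len(events) - window + 1)
    hits = 0
    for start in range(total):
        segment = events[start:start + window]
        i = 0  # index into the episode we are trying to match in order
        for e in segment:
            if i < len(episode) and e == episode[i]:
                i += 1
        if i == len(episode):
            hits += 1
    return hits / total if total else 0.0

# How often is an operational fault 'F' followed by a quality failure 'Q'
# within 4 consecutive events?
log = ["ok", "F", "ok", "Q", "ok", "ok", "F", "ok"]
print(serial_episode_frequency(log, ["F", "Q"], 4))  # -> 0.4
```

Episodes whose frequency exceeds a chosen threshold would then be treated as candidate deterioration patterns and handed to the HMM stage.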
<p>Finally, the thesis presents a methodology using HMM probability distributions and a Bayesian decision theory framework to provide a set of decisions and recommendations under data uncertainty. The results of this analysis are then integrated in the plant maintenance business model.</p> <p>It is worth mentioning that, to develop the techniques and validate the results in this research, a Manufacturing Execution System (MES) was developed to operate in an automotive engine plant. All the data and results in this research are based on the plant data. The MES developed in this research provided significant benefits in the plant and was adopted by many other GM plants around the world.</p> / Thesis / Doctor of Philosophy (PhD)
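The Bayesian decision step can be sketched as expected-cost minimization over a posterior state distribution. The states, actions and cost matrix below are hypothetical, invented purely to show the mechanics:

```python
def bayes_action(posterior, cost):
    """Pick the action with minimum expected cost, given a posterior over
    hidden component states (e.g., from HMM state estimation).

    posterior : {state: probability}
    cost      : {action: {state: cost of taking action in that state}}
    """
    def expected_cost(action):
        return sum(posterior[s] * cost[action][s] for s in posterior)
    return min(cost, key=expected_cost)

# Hypothetical posterior over a component's condition:
posterior = {"good": 0.6, "worn": 0.3, "failing": 0.1}
# Hypothetical costs: continuing to run a failing component is expensive.
cost = {
    "continue": {"good": 0.0, "worn": 4.0, "failing": 50.0},
    "repair":   {"good": 6.0, "worn": 2.0, "failing": 3.0},
}
print(bayes_action(posterior, cost))  # -> 'repair'
```

Here "continue" has expected cost 0.6·0 + 0.3·4 + 0.1·50 = 6.2 against 4.5 for "repair", so repair is recommended even though the component is most likely good.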
|
95 |
Contributions To Automatic Particle Identification In Electron Micrographs: Algorithms, Implementation, And Applications
Singh, Vivek. 01 January 2005
Three-dimensional reconstruction of large macromolecules such as viruses at resolutions below 8 Å to 10 Å requires a large set of projection images, and the particle identification step becomes a bottleneck. Several automatic and semi-automatic particle detection algorithms have been developed over the years. We present a general technique designed to automatically identify the projection images of particles. The method utilizes Markov random field modelling of the projected images and involves a preprocessing of electron micrographs followed by image segmentation and post-processing for boxing of the particle projections. Due to the typically extensive computational requirements for extracting hundreds of thousands of particle projections, parallel processing becomes essential. We present parallel algorithms and load balancing schemes for our algorithms. The lack of a standard benchmark for the relative performance analysis of particle identification algorithms has prompted us to develop a benchmark suite. Further, we present a collection of metrics for the relative performance analysis of particle identification algorithms on the micrograph images in the suite, and discuss the design of the benchmark suite.
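The Markov random field idea behind the segmentation stage can be illustrated with a toy Iterated Conditional Modes (ICM) sweep on a binary image: each pixel's label trades off agreement with the observed data against smoothness with its 4-neighbours. The energy terms and parameters are illustrative assumptions, not the algorithm actually used in this work:

```python
def icm_segment(noisy, beta=1.0, iters=5):
    """Binary MRF segmentation by Iterated Conditional Modes.
    Energy per pixel = data mismatch + beta * (# disagreeing 4-neighbours).
    Toy sketch only; not the thesis's actual model or parameters."""
    h, w = len(noisy), len(noisy[0])
    labels = [row[:] for row in noisy]          # initialise with the data
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                def energy(lab):
                    data_term = 0 if lab == noisy[y][x] else 1
                    smooth = sum(
                        1 for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= y + dy < h and 0 <= x + dx < w
                        and labels[y + dy][x + dx] != lab)
                    return data_term + beta * smooth
                labels[y][x] = min((0, 1), key=energy)
    return labels

# A 'particle' blob with one speck of noise; ICM removes the speck:
noisy = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [1, 0, 0, 0]]
print(icm_segment(noisy))
# -> [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
```

In a micrograph pipeline, the cleaned label map would then be passed to the boxing step to extract candidate particle projections.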
|
96 |
Nocturnal Bird Call Recognition System for Wind Farm Applications
Bastas, Selin A. 10 July 2012
No description available.
|
97 |
SEQUENCE CLASSIFICATION USING HIDDEN MARKOV MODELS
DESAI, PRANAY A. 13 July 2005
No description available.
|
98 |
Time-frequency and Hidden Markov Model Methods for Epileptic Seizure Detection
Zhu, Dongqing. 16 July 2009
No description available.
|
99 |
Recognition of off-line printed Arabic text using Hidden Markov Models.
Al-Muhtaseb, Husni A.; Mahmoud, Sabri A.; Qahwaji, Rami S.R. January 2008
yes / This paper describes a technique for the automatic recognition of off-line printed Arabic text using Hidden Markov Models. In this work, different sizes of overlapping and non-overlapping hierarchical windows are used to generate 16 features from each vertical sliding strip. Eight different Arabic fonts were used for testing (viz. Arial, Tahoma, Akhbar, Thuluth, Naskh, Simplified Arabic, Andalus, and Traditional Arabic). It was experimentally shown that different fonts achieve their highest recognition rates at different numbers of states (5 or 7) and codebook sizes (128 or 256).
Arabic text is cursive, and each character may have up to four different shapes based on its location in a word. This research work considered each shape as a different class, resulting in a total of 126 classes (compared to 28 Arabic letters). The achieved average recognition rates were between 98.08% and 99.89% for the eight experimental fonts.
The main contributions of this work are the novel hierarchical sliding window technique, which uses only 16 features for each sliding window; the treatment of each shape of an Arabic character as a separate class; bypassing the need for segmenting Arabic text; and the technique's applicability to other languages.
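The sliding-strip feature idea can be sketched as follows. Since the exact 16 hierarchical features are not specified here, the band layout and ink-density features below are illustrative assumptions:

```python
def strip_features(image, x0, width, bands=4, overlap=True):
    """Ink-density features from one vertical strip of a binary text-line
    image: the strip is cut into horizontal bands (optionally overlapping
    by half a band), with one density feature per band. A simplified
    stand-in for the hierarchical windows described above.

    image : list of rows, each a list of 0/1 pixels (1 = ink).
    """
    h = len(image)
    strip = [row[x0:x0 + width] for row in image]
    size = max(1, h // bands)                       # band height
    step = max(1, size // 2) if overlap else size   # overlap by half a band
    feats = []
    top = 0
    while top + size <= h:
        band = strip[top:top + size]
        ink = sum(sum(r) for r in band)
        feats.append(ink / (size * width))
        top += step
    return feats

# Toy 8x4 'character': ink in the top half of the strip only.
image = [[1, 1, 1, 1]] * 4 + [[0, 0, 0, 0]] * 4
print(strip_features(image, 0, 4))  # -> [1.0, 1.0, 1.0, 0.5, 0.0, 0.0, 0.0]
```

Sliding the strip across the text line, one such feature vector per position, yields the observation sequence consumed by the HMM, which is what removes the need for explicit character segmentation.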
|
100 |
Automatic Phoneme Recognition with Segmental Hidden Markov Models
Baghdasaryan, Areg Gagik. 10 March 2010
A speaker-independent continuous-speech phoneme recognition and segmentation system is presented. We discuss the training and recognition phases of the phoneme recognition system and give a detailed description of its integrated elements. The Hidden Markov Model (HMM) based phoneme models are trained using the Baum-Welch re-estimation procedure. Recognition and segmentation of the phonemes in continuous speech are performed by a Segmental Viterbi Search on a Segmental Ergodic HMM over the phoneme states.
We describe in detail the three phases of the joint phoneme recognition and segmentation system. First, we describe the extraction of the Mel-Frequency Cepstral Coefficients (MFCC) and the corresponding Delta and Delta Log Power coefficients. Second, we describe the operation of the Baum-Welch re-estimation procedure for training the phoneme HMM models, including the K-Means and Expectation-Maximization (EM) clustering algorithms used to initialize the Baum-Welch algorithm. Additionally, we describe the structural framework of the ergodic Segmental HMM and its recognition procedure for phoneme segmentation and recognition. We include test and simulation results for each of the individual systems integrated into the phoneme recognition system, and finally for the phoneme recognition/segmentation system as a whole. / Master of Science
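Delta coefficients of the kind mentioned in the first phase are conventionally computed by a regression over neighbouring frames. Below is a sketch with typical defaults (window of 2 frames and edge clamping are common conventions, not necessarily the parameters used in this work):

```python
def delta(frames, window=2):
    """Regression-based delta coefficients:
    d_t = sum_k k * (c_{t+k} - c_{t-k}) / (2 * sum_k k^2),
    with frame indices clamped at the sequence edges.

    frames : list of per-frame feature vectors (e.g., MFCCs).
    """
    n, dim = len(frames), len(frames[0])
    denom = 2 * sum(k * k for k in range(1, window + 1))
    out = []
    for t in range(n):
        vec = []
        for d in range(dim):
            num = sum(
                k * (frames[min(n - 1, t + k)][d] - frames[max(0, t - k)][d])
                for k in range(1, window + 1))
            vec.append(num / denom)
        out.append(vec)
    return out

# A linearly increasing feature gives a constant delta equal to the slope
# in the interior, damped at the edges by the clamping:
frames = [[0.0], [1.0], [2.0], [3.0], [4.0]]
print(delta(frames))  # -> [[0.5], [0.8], [1.0], [0.8], [0.5]]
```

Stacking the static MFCCs with their deltas (and second-order deltas) is the usual way the observation vectors for Baum-Welch training are assembled.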
|