541 |
Kriging-based black-box global optimization: analysis and new algorithms / Optimisation globale et processus gaussiens : analyse et nouveaux algorithmes. Mohammadi, Hossein, 11 April 2016
Efficient Global Optimization (EGO) is regarded as the state-of-the-art algorithm for the global optimization of costly black-box functions. Nevertheless, the method has some difficulties, such as ill-conditioning of the Gaussian process (GP) covariance matrix and slow convergence to the global optimum. Moreover, the choice of the GP parameters is critical, as it controls the functional family of surrogates used by EGO, and the effect of these parameters on EGO's performance needs further investigation. Finally, it is not clear that the way the GP is learned from data points in EGO is the most appropriate in the context of optimization. This work deals with the analysis and the treatment of these issues. First, this dissertation contributes to a better theoretical and practical understanding of the impact of regularization strategies on GPs, presents a new regularization approach based on distribution-wise GPs, and gives practical guidelines for choosing a regularization strategy in GP regression. Second, a new optimization algorithm is introduced that combines EGO with CMA-ES, a global yet convergent search method. The new algorithm, called EGO-CMA, uses EGO for early exploration and then CMA-ES for final convergence, and improves on the performance of both EGO and CMA-ES taken separately. Third, the effect of the GP parameters on EGO's performance is carefully analyzed, allowing a deeper understanding of how these parameters influence the EGO iterates. Finally, a new self-adaptive EGO is presented, introducing a novel approach in which the GP parameters are learned directly from their contribution to the optimization itself.
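As a concrete illustration of the EGO loop the abstract refers to, here is a minimal sketch in Python. It assumes an RBF kernel, a random candidate search, and a fixed nugget term `alpha` standing in for a covariance regularization strategy; it is not the thesis's actual implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(gpr, X_cand, f_min):
    """EI acquisition: how much each candidate is expected to improve on f_min."""
    mu, sd = gpr.predict(X_cand, return_std=True)
    sd = np.maximum(sd, 1e-12)
    z = (f_min - mu) / sd
    return (f_min - mu) * norm.cdf(z) + sd * norm.pdf(z)

def ego_step(f, X, y, bounds, n_cand=2000, rng=np.random.default_rng(0)):
    """One EGO iteration: fit the GP, maximize EI over random candidates, evaluate f."""
    # alpha is the nugget/jitter that regularizes the covariance matrix
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                   alpha=1e-8, normalize_y=True).fit(X, y)
    X_cand = rng.uniform(bounds[0], bounds[1], size=(n_cand, X.shape[1]))
    x_next = X_cand[np.argmax(expected_improvement(gpr, X_cand, y.min()))]
    return np.vstack([X, x_next]), np.append(y, f(x_next))

# toy usage on a 1-D function
f = lambda x: np.sin(3 * x[0]) + 0.5 * x[0] ** 2
X = np.array([[-1.5], [0.0], [1.5]]); y = np.array([f(x) for x in X])
for _ in range(10):
    X, y = ego_step(f, X, y, bounds=(-2.0, 2.0))
print("best found:", y.min())
```

A fixed jitter like `alpha` is only the simplest regularization of an ill-conditioned covariance matrix; the distribution-wise approach the thesis develops is a different strategy not shown here.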
|
542 |
Continuous reinforcement learning with incremental Gaussian mixture models / Aprendizagem por reforço contínua com modelos de mistura gaussianas incrementais. Pinto, Rafael Coimbra, January 2017
This thesis' original contribution is a novel algorithm that integrates a sample-efficient function approximator with reinforcement learning in continuous state spaces. The complete research includes the development of a scalable, online, and incremental algorithm capable of learning from a single pass through the data. This algorithm, called the Fast Incremental Gaussian Mixture Network (FIGMN), was first employed as a sample-efficient function approximator for the state space of continuous reinforcement learning tasks; combined with linear Q-learning, it achieves competitive performance. The same function approximator was then employed to model the joint space of states and Q-values, all in a single FIGMN, resulting in a concise and data-efficient algorithm, i.e., a reinforcement learning algorithm that learns from very few interactions with the environment. A single episode is enough to learn the investigated tasks in most trials. The results are analyzed to explain the properties of the obtained algorithm, and it is observed that the FIGMN function approximator brings some important advantages to reinforcement learning relative to conventional neural networks.
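The pairing described above, a Gaussian feature map feeding linear Q-learning, can be sketched as follows. The fixed RBF centers are a simplifying assumption standing in for the FIGMN's adaptively grown components; the real FIGMN creates and updates its components online.

```python
import numpy as np

class LinearQ:
    """Linear Q-learning over a fixed Gaussian feature map (a stand-in for the
    FIGMN's learned components; illustrative only)."""
    def __init__(self, centers, width, n_actions, alpha=0.05, gamma=0.99):
        self.c, self.s = centers, width           # (K, d) centers, scalar width
        self.W = np.zeros((n_actions, len(centers)))
        self.alpha, self.gamma = alpha, gamma

    def phi(self, state):
        d2 = np.sum((self.c - state) ** 2, axis=1)
        f = np.exp(-d2 / (2 * self.s ** 2))
        return f / (f.sum() + 1e-12)              # normalized responsibilities

    def q(self, state):
        return self.W @ self.phi(state)

    def update(self, s, a, r, s_next, done):
        target = r + (0.0 if done else self.gamma * self.q(s_next).max())
        td = target - self.q(s)[a]
        self.W[a] += self.alpha * td * self.phi(s)   # gradient step on the TD error
```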
|
543 |
An incremental Gaussian mixture network for data stream classification in non-stationary environments / Uma rede de mistura de gaussianas incrementais para classificação de fluxos contínuos de dados em cenários não estacionários. Diaz, Jorge Cristhian Chamby, January 2018
Data stream classification poses many challenges for the data mining community when the environment is non-stationary. The greatest challenge in learning classifiers from data streams is adapting to concept drifts, which occur as a result of changes in the underlying concepts. Two main ways to develop adaptive approaches are ensemble methods and incremental algorithms. Ensemble methods play an important role due to their modularity, which provides a natural way of adapting to change. Incremental algorithms are faster and have better anti-noise capacity than ensembles, but place more restrictions on the data streams. It is thus a challenge to combine the flexibility and adaptability of an ensemble classifier in the presence of concept drift with the simplicity of use of a single incrementally trained classifier. With this motivation, this dissertation proposes an incremental, online, probabilistic classification algorithm for tackling concept drift. The algorithm, called IGMN-NSE, is an adaptation of the IGMN algorithm. Its two main contributions relative to the IGMN are improved predictive power for classification tasks and adaptation mechanisms that achieve good performance in non-stationary environments. Extensive studies on both synthetic and real-world data demonstrate that the proposed algorithm can track changing environments very closely, regardless of the type of concept drift. A minimal sketch of the general idea, an incremental classifier whose statistics decay so it can follow drift, is given after this abstract.
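The sketch below uses one diagonal Gaussian per class with an exponential forgetting factor, so old statistics fade and the model tracks drifting concepts. This is an illustrative simplification, not the IGMN-NSE itself, which maintains full adaptive mixtures per class.

```python
import numpy as np

class DriftingGaussianClassifier:
    """Per-class diagonal Gaussians updated online with forgetting (illustrative)."""
    def __init__(self, n_classes, n_features, lam=0.99):
        self.lam = lam                             # forgetting factor
        self.n = np.full(n_classes, 1e-3)          # effective class counts
        self.mu = np.zeros((n_classes, n_features))
        self.var = np.ones((n_classes, n_features))

    def learn_one(self, x, y):
        self.n *= self.lam                         # decay all class statistics
        self.n[y] += 1.0
        d = x - self.mu[y]
        self.mu[y] += d / self.n[y]
        self.var[y] += (d * (x - self.mu[y]) - self.var[y]) / self.n[y]

    def predict_one(self, x):
        ll = (-0.5 * np.sum(np.log(self.var) + (x - self.mu) ** 2 / self.var, axis=1)
              + np.log(self.n / self.n.sum()))     # log prior from decayed counts
        return np.argmax(ll)
```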
|
544 |
Gaussian process tools for modelling stellar signals and studying exoplanets. Rajpaul, Vinesh Maguire, January 2017
The discovery of exoplanets represents one of the greatest scientific revolutions in history, and exoplanetary science has rapidly become uniquely positioned to address profound questions about the origins of life, and about humanity's place (and future) in the cosmos. Since the discovery of the first exoplanet over two decades ago, the radial velocity (RV) method has been one of the most productive techniques for discovering new planets. It has also become indispensable for characterising exoplanets detected via other techniques, notably transit photometry. Unfortunately, signals intrinsic to stars themselves - especially magnetic activity signals - can induce RV variations that drown out or even mimic planetary signals. Modelling and thus mitigating these signals is notoriously difficult, which represents a major obstacle to using next-generation instruments to detect lower-mass planets, planets with longer periods, and planets around more magnetically active stars. Enter Gaussian processes (GPs), which have a number of features that make them very well suited to the joint modelling of stochastic activity processes and dynamical (e.g. planetary) signals. In this thesis, I leverage GPs to enable the study of smaller planets around a wider variety of stars than has previously been possible. In particular, I develop a principled and sophisticated Bayesian framework, based on GPs, for modelling RV time series jointly with ancillary activity-sensitive proxies, thus allowing activity signals to be constrained and disentangled from genuine planetary signals. I show that my framework succeeds even in cases where existing techniques would fail to detect planets, e.g. the case of a weak planetary signal with period identical to its host star's rotation period. In a first application of the framework, I demonstrate that Alpha Centauri Bb - until 2016, thought to be the closest exoplanet to Earth, and also the lowest minimum-mass exoplanet around a Sun-like star - was, in fact, an astrophysical false positive. Next, I use the framework to re-characterise the well-studied Kepler-10 system, thereby resolving a mystery surrounding the mass of planet Kepler-10c. I also use the framework to help discover or characterise various exoplanets. Finally, the activity modelling framework aside, I also present in outline form a few promising applications of GPs in the context of modelling stellar signals and studying exoplanets, viz. GPs for (i) enhanced characterisation of stellar rotation; (ii) generating realistic synthetic observations, and modelling in a systematic way the effects of an observing window function; and (iii) ultra-precise extraction of RV shifts directly from observed spectra, without requiring template cross-correlation.
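As a small illustration of the basic building block used in this kind of work, the sketch below evaluates the marginal likelihood of a GP with a quasi-periodic kernel, a common choice for stellar activity signals. All hyperparameter values are illustrative; the thesis's joint RV-plus-proxies framework additionally couples several time series through a shared latent GP and its derivative, which is not shown here.

```python
import numpy as np

def qp_kernel(t1, t2, amp=1.0, l_evol=20.0, gamma=2.0, period=25.0):
    """Quasi-periodic covariance often used for stellar activity signals."""
    dt = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-dt**2 / (2 * l_evol**2)
                           - gamma * np.sin(np.pi * dt / period) ** 2)

def gp_log_likelihood(t, y, yerr, **kpar):
    """GP marginal log-likelihood via a Cholesky factorization."""
    K = qp_kernel(t, t, **kpar) + np.diag(yerr**2)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.sum(np.log(np.diag(L)))
            - 0.5 * len(t) * np.log(2 * np.pi))

# toy usage: 60 RV epochs with 0.1 m/s errors (all numbers invented)
t = np.sort(np.random.default_rng(0).uniform(0, 100, 60))
y = np.sin(2 * np.pi * t / 25.0) + 0.1 * np.random.default_rng(1).normal(size=60)
print(gp_log_likelihood(t, y, np.full(60, 0.1)))
```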
|
545 |
Effects of nickel and manganese on the embrittlement of low-copper pressure vessel steels. Zelenty, Jennifer Evelyn, January 2016
Solute clustering is known to play a significant role in the embrittlement of reactor pressure vessel (RPV) steels. When precipitates form they impede the movement of dislocations, causing an increase in hardness and a shift in the ductile-brittle transition temperature. Over time this can cause the steel to become brittle and more susceptible to fracture. Thus, understanding precipitate formation is of great importance to the nuclear industry. The first part of this thesis aims to isolate and better understand the thermal aging component of embrittlement in low-copper, model RPV steels. Currently, relatively little is known about the effects of Ni and Mn in a low-copper environment; it is therefore of interest to determine whether Ni and Mn form precipitates under these conditions. To this end, hardness measurements and atom probe tomography (APT) were utilized to link the mechanical properties to the microstructure. After 11,690 hours of thermal aging a statistically significant decrease in hardening was observed. Consistent with the hardness measurements, no precipitates were present within the matrix of the thermally aged RPV steels. The local chemistry method was then applied to investigate the very early stages of solute clustering. Association was found to be statistically significant in both the thermally aged and as-received model RPV steels, and no apparent trends in solute association between the as-received and thermally aged RPV steels were identified. Small, non-random clusters were observed at heterogeneous nucleation sites, such as carbide/matrix interfaces and grain boundaries, within the thermally aged material. The clusters found at the carbide/matrix interfaces were all rich in Mn and approximately 90-150 atoms in size. The clusters located along the observed low-angle grain boundary, however, were significantly larger (on the order of hundreds of atoms) and rich in Ni. Lastly, copper-rich precipitates (CRPs) and Mn- and Ni-rich precipitates (MNPs) were observed within the cementite phase of a high-copper and a low-copper RPV steel, respectively, following long-term thermal aging. APT was used to characterize these precipitates and obtain more detailed chemical information. The presence of such precipitates indicates that a range of precipitation can take place within the cementite phase of thermally aged RPV steels. The second part of this thesis aims to investigate the effects of ion irradiation on the microstructure of low-copper RPV steels via APT. These steels were ion irradiated with 6.4 MeV Fe³⁺ ions at a dose rate of 1.5 × 10⁻⁴ dpa/s at 290 °C. MNPs were observed in all five of the RPV steels analyzed. These precipitates were found to have nucleated within the matrix as well as at dislocations and grain boundaries. Using the maximum separation method these MNPs were extracted and characterized: precipitate composition, size, volume fraction, and number density were determined for each of the five samples, and several grain boundaries were characterized. Several emerging trends were observed within the samples: Ni content within the precipitates did not vary significantly once a threshold of 30-50% was reached; bulk Mn content appeared to dictate Si and Mn content within the precipitates; and samples low in bulk Ni content were characterized by a higher number density of smaller precipitates. Additionally, by regressing precipitate volume fraction against the interaction of Ni and Mn, a linear relationship was found to be statistically significant.
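The maximum separation method mentioned above is essentially fixed-radius linkage clustering of solute atom positions. A minimal sketch follows, with illustrative values for the separation distance and minimum cluster size rather than the thesis's actual parameters.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def max_separation_clusters(positions, d_max=0.5, n_min=10):
    """Group solute atoms closer than d_max (nm); keep clusters with >= n_min atoms.
    d_max and n_min are illustrative, not the thesis's fitted parameters."""
    tree = cKDTree(positions)
    pairs = np.array(list(tree.query_pairs(d_max)))   # index pairs within d_max
    if len(pairs) == 0:
        return []
    n = len(positions)
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    n_comp, labels = connected_components(adj, directed=False)
    clusters = [np.where(labels == c)[0] for c in range(n_comp)]
    return [c for c in clusters if len(c) >= n_min]   # drop singletons / small groups

# usage on synthetic solute positions (nm)
pos = np.random.default_rng(0).uniform(0, 50, size=(5000, 3))
print(len(max_separation_clusters(pos)), "clusters found")
```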
|
546 |
Simulação sequencial na interpolação dos dados de entrada ou saída do modelo de lixiviação do software ARAquá / Sequential simulation for interpolating the input or output data of the ARAquá software leaching model. Moraes, Diego Augusto de Campos [UNESP], 12 November 2015
The interface between simulators of pesticide environmental behavior and fate and geoprocessing software has been increasingly used in environmental risk assessment studies. In this context, geostatistics, which considers the spatial correlation and interpolation of a given phenomenon in nature, is of great importance. However, applying a geostatistical interpolation process to the input or the output data of a simulator can yield different results. The hypothesis of this work is therefore that using stochastic simulation techniques to interpolate the input data of the ARAquá software leaching model will produce a more critical groundwater contamination scenario than interpolating the output data of the same model. The aim of this work was thus to implement sequential simulation as the interpolation procedure for the ARAquá software input and output data, and to compare the results. The study was conducted for a sugarcane area with a simulated application of the herbicide Tebuthiuron in São Manuel, SP, Brazil. Two approaches were considered: Calculate Before - Interpolate After (CI) and Interpolate Before - Calculate After (IC), both with groundwater depths of 2 m and 1 m. For the CI approach, the ARAquá software, univariate variograms of the estimated concentrations, and Sequential Gaussian Simulation (SGS) were applied. For the IC approach, the Linear Model of Coregionalization (LMC) of the soil parameters and co-Sequential Gaussian Simulation (co-SGS) were applied, followed by the ARAquá software to obtain the simulated concentrations. The results showed that the IC approach produced the worst-case scenario for simulated Tebuthiuron concentrations in groundwater, and an acute risk to aquatic plants when considering the 1 m groundwater depth. Through the LMC analysis it was possible to identify that field capacity water content, organic ...
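A minimal sketch of the Sequential Gaussian Simulation step used in both approaches is shown below, assuming normal-score-transformed data, a simple-kriging mean of zero, and an exponential covariance. A production implementation would restrict the kriging system to a search neighborhood rather than conditioning on all previously simulated nodes.

```python
import numpy as np

def sgs_1d(xs, data_x, data_v, sill=1.0, rang=10.0, seed=0):
    """Sequential Gaussian Simulation on 1-D nodes xs, conditioned on (data_x, data_v).
    Assumes normal-score-transformed values and an exponential covariance model."""
    rng = np.random.default_rng(seed)
    cov = lambda h: sill * np.exp(-3.0 * np.abs(h) / rang)
    known_x, known_v = list(data_x), list(data_v)
    sim = np.empty(len(xs))
    for i in rng.permutation(len(xs)):           # random path over grid nodes
        kx, kv = np.array(known_x), np.array(known_v)
        C = cov(kx[:, None] - kx[None, :]) + 1e-8 * np.eye(len(kx))
        c0 = cov(kx - xs[i])
        w = np.linalg.solve(C, c0)               # simple-kriging weights (zero mean)
        mu = w @ kv
        var = max(sill - w @ c0, 1e-12)          # simple-kriging variance
        sim[i] = rng.normal(mu, np.sqrt(var))    # draw from the conditional Gaussian
        known_x.append(xs[i]); known_v.append(sim[i])
    return sim

# usage: simulate 50 nodes conditioned on two data points (illustrative values)
sim = sgs_1d(np.arange(0.0, 50.0), data_x=[5.0, 30.0], data_v=[0.8, -0.4])
```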
|
547 |
Assessment of the impact of the Três Irmãos reservoir on the potentiometric surface of the unconfined aquifer in the city of Pereira Barreto (SP): a numerical and geostatistical approach / Avaliação do impacto do reservatório de Três Irmãos sobre a superfície potenciométrica do aquífero livre na cidade de Pereira Barreto (SP): uma abordagem numérica e geoestatística. Leite, Claudio Benedito Baptista [UNESP], 18 May 2005
Funding: Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP); IPT. / Hydropower reservoirs cause significant environmental impacts in the context of hydrographic basins. During the filling of a reservoir, a transient flow system is induced in its surroundings: hydraulic heads are raised at the edges of the reservoir, reversing flow directions so that, temporarily, groundwater flows from the reservoir into the aquifer system. The outcome of this initial transient readjustment is a long-term, permanent change in the regional hydrogeological regime. Water levels rise and the hydraulic heads of the aquifer increase, while hydraulic gradients are flattened, raising and smoothing the potentiometric surface and reducing base discharge at the natural outlets relative to the original situation. This study develops procedures to assess the impacts of reservoir formation on the potentiometric surface of the unconfined aquifer, using numerical models together with a geostatistical treatment of the field data, by both kriging and stochastic simulation. The site chosen for this work was the city of Pereira Barreto (SP), on the margin of the Três Irmãos hydropower reservoir in the lower Tietê River, using data and information from the systematic monitoring of wells and piezometers carried out in the area from 1987 to 2001. From a methodological standpoint, this study demonstrated the viability and adequacy of geostatistics coupled with predictive mathematical modeling, at the local scale, for assessing the changes induced in the potentiometric surface of the unconfined aquifer after reservoir impoundment.
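For reference, the kriging side of such a geostatistical treatment can be sketched as ordinary kriging of observed heads onto grid nodes. The variogram model and its parameters below are illustrative assumptions, not values fitted from the Pereira Barreto data.

```python
import numpy as np

def ordinary_kriging(obs_xy, obs_h, grid_xy, sill=1.0, rang=500.0, nugget=0.01):
    """Ordinary kriging of hydraulic heads under an exponential semivariogram
    (sill/range/nugget values are illustrative)."""
    def gamma(h):  # semivariogram, gamma(0) = 0 by convention
        return np.where(h > 0, nugget + (sill - nugget) * (1 - np.exp(-3 * h / rang)), 0.0)
    n = len(obs_h)
    d = np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[-1, -1] = 0.0                       # Lagrange-multiplier row/column
    est = np.empty(len(grid_xy))
    for k, p in enumerate(grid_xy):
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(obs_xy - p, axis=1))
        w = np.linalg.solve(A, b)
        est[k] = w[:n] @ obs_h            # kriged head at grid node p
    return est

# usage on synthetic monitoring wells (coordinates in m, heads in m)
wells = np.array([[0.0, 0.0], [800.0, 200.0], [300.0, 900.0]])
heads = np.array([321.5, 319.8, 322.1])
grid = np.array([[400.0, 400.0], [600.0, 700.0]])
print(ordinary_kriging(wells, heads, grid))
```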
|
548 |
Savitzky-Golay Filters and Application to Image and Signal Denoising. Menon, Seeram V, January 2015
We explore the applicability of local polynomial approximation of signals for noise suppression. In the context of data regression, Savitzky and Golay showed that least-squares approximation of data with a polynomial of fixed order, together with a constant window length, is identical to convolution with a finite impulse response filter, whose characteristics depend entirely on two parameters, namely, the order and window length. Schafer’s recent article in IEEE Signal Processing Magazine provides a detailed account of one-dimensional Savitzky-Golay (SG) filters. Drawing motivation from this idea, we present an elaborate study of two-dimensional SG filters and employ them for image denoising by optimizing the filter response to minimize the mean-squared error (MSE) between the original image and the filtered output. The key contribution of this thesis is a method for optimal selection of order and window length of SG filters for denoising images. First, we apply the denoising technique for images contaminated by additive Gaussian noise. Owing to the absence of ground truth in practice, direct minimization of the MSE is infeasible. However, the classical work of C. Stein provides a statistical method to overcome the hurdle. Based on Stein’s lemma, an estimate of the MSE, namely Stein’s unbiased risk estimator (SURE), is derived, and the two critical parameters of the filter are optimized to minimize the cost. The performance of the technique improves when a regularization term, which penalizes fast variations in the estimate, is added to the optimization cost. In the next three chapters, we focus on non-Gaussian noise models.
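A minimal sketch of SURE-based parameter selection for the one-dimensional case follows (the 2-D image case uses the same logic with a 2-D kernel). Away from the boundaries, SG filtering is convolution with fixed coefficients, so the estimator's divergence is approximately N times the kernel's center tap; boundary handling is ignored in this sketch, and the noise level is assumed known.

```python
import numpy as np
from scipy.signal import savgol_filter, savgol_coeffs

def sure_sg(y, window, order, sigma):
    """Stein's unbiased estimate of the MSE for SG smoothing of y ~ N(x, sigma^2 I).
    Divergence term approximated as N * (center tap of the SG kernel)."""
    y_hat = savgol_filter(y, window, order)
    center_tap = savgol_coeffs(window, order)[window // 2]
    return np.mean((y - y_hat) ** 2) + 2 * sigma**2 * center_tap - sigma**2

# pick (window, order) by minimizing SURE on a noisy synthetic signal
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, 512))
sigma = 0.3
y = x + sigma * rng.normal(size=512)
best = min(((w, p) for w in range(5, 41, 2) for p in (2, 3, 4) if p < w),
           key=lambda wp: sure_sg(y, *wp, sigma))
print("SURE-optimal (window, order):", best)
```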
In Chapter 3, image degradation in the presence of a compound noise model, where images are corrupted by mixed Poisson-Gaussian noise, is addressed. Inspired by Hudson’s identity, an estimate of MSE, namely Poisson unbiased risk estimator (PURE), which is analogous to SURE, is developed. Combining both lemmas, Poisson-Gaussian unbiased risk estimator (PGURE) minimization is performed to obtain the optimal filter parameters. We also show that SG filtering provides better lowpass approximation for a multiresolution denoising framework.
In Chapter 4, we employ SG filters for reducing multiplicative noise in images. The standard SG filter frequency response can be controlled along horizontal or vertical directions. This limits its ability to capture oriented features and texture that lie at other angles. Here, we introduce the idea of steering the SG filter kernel and perform mean-squared error minimization based on the new concept of multiplicative noise unbiased risk estimation (MURE).
Finally, we propose a method to robustify SG filters, i.e., to make them robust to deviations from Gaussian noise statistics. SG filters work on the principle of least-squares error minimization and are hence compatible with maximum-likelihood (ML) estimation under Gaussian statistics. However, for heavy-tailed noise such as the Laplacian, where ML estimation requires mean-absolute-error minimization in lieu of MSE minimization, the standard SG filter's performance deteriorates. ℓ1 minimization is a challenge since there is no closed-form solution. We solve the problem by inducing the ℓ1-norm criterion using the iteratively reweighted least-squares (IRLS) method. At every iteration, we solve an ℓ2 problem, which is equivalent to optimizing a weighted SG filter; as the iterations progress, the solution converges to that corresponding to ℓ1 minimization. The results thus obtained are superior to those obtained using the standard SG filter.
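A sketch of the IRLS scheme for one window is shown below: each pass solves a weighted least-squares (ℓ2) problem, and the reweighting drives the solution toward the ℓ1 fit. The edge padding and iteration count are illustrative choices.

```python
import numpy as np

def sg_l1_window(y_win, order, iters=20, eps=1e-6):
    """l1 (least-absolute-deviations) local polynomial fit on one window via IRLS;
    returns the smoothed value at the window center."""
    m = len(y_win)
    t = np.arange(m) - m // 2
    V = np.vander(t, order + 1, increasing=True)   # polynomial design matrix
    w = np.ones(m)
    for _ in range(iters):
        W = np.diag(w)
        c = np.linalg.solve(V.T @ W @ V, V.T @ W @ y_win)  # each pass: an l2 problem
        r = y_win - V @ c
        w = 1.0 / np.maximum(np.abs(r), eps)       # reweight -> l1 in the limit
    return c[0]                                     # fitted value at t = 0

def robust_sg_filter(y, window, order):
    half = window // 2
    ypad = np.pad(y, half, mode="edge")             # simple boundary handling
    return np.array([sg_l1_window(ypad[i:i + window], order)
                     for i in range(len(y))])
```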
|
549 |
Cognitive Control Processes Underlying Continuous and Transient Monitoring Processes in Event-Based Prospective Memory. January 2015
Abstract: A converging operations approach using response-time distribution modeling was adopted to better characterize the cognitive control dynamics underlying ongoing-task cost and cue detection in event-based prospective memory (PM). In Experiment 1, individual-differences analyses revealed that working memory capacity uniquely predicted nonfocal cue detection, while proactive control and inhibition predicted variation in the ongoing-task cost of the ex-Gaussian parameter associated with continuous monitoring strategies (mu). In Experiments 2A and 2B, quasi-experimental techniques aimed at identifying the role of proactive control abilities in PM monitoring and cue detection suggested that low-ability participants may have PM deficits during demanding tasks due to inefficient monitoring strategies, but that emphasizing the importance of the intention can increase reliance on more efficacious monitoring strategies that boost performance (Experiment 2A). Furthermore, participants with high proactive control ability are able to efficiently regulate their monitoring strategies in scenarios that do not require costly monitoring for successful cue detection (Experiment 2B). In Experiments 3A and 3B, proactive control was found to benefit cue detection in interference-rich environments, but the neural correlates of cue detection and intention execution did not differ between proactive and reactive control. The results from the current set of studies highlight the importance of response-time distribution modeling in understanding PM cost. Additionally, these results have important implications for extant theories of PM and considerable applied ramifications concerning the cognitive control processes that should be targeted to improve PM abilities. Doctoral Dissertation, Psychology, 2015.
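The ex-Gaussian decomposition referred to above (mu and sigma for the Gaussian component, tau for the exponential tail) can be fitted by maximum likelihood in a few lines; the parameter values in this sketch are invented for illustration.

```python
import numpy as np
from scipy.stats import exponnorm

# simulate response times (ms): assumed ground truth mu=400, sigma=50, tau=150
rng = np.random.default_rng(1)
rts = rng.normal(400, 50, 2000) + rng.exponential(150, 2000)

K, loc, scale = exponnorm.fit(rts)       # MLE; exponnorm parameterizes tau as K*sigma
mu, sigma, tau = loc, scale, K * scale   # map back to ex-Gaussian parameters
print(f"mu={mu:.1f} ms, sigma={sigma:.1f} ms, tau={tau:.1f} ms")
```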
|
550 |
Adaptive Learning of Neural Activity During Deep Brain Stimulation. January 2015
Abstract: Parkinson's disease is a neurodegenerative condition diagnosed in patients with a clinical history and motor signs of tremor, rigidity, and bradykinesia; an estimated seven to ten million patients live with Parkinson's disease worldwide. Deep brain stimulation (DBS) provides substantial relief of the motor signs of Parkinson's disease. It is an advanced surgical technique used when drug therapy is no longer sufficient, and it alleviates the motor symptoms by targeting the subthalamic nucleus with high-frequency electrical stimulation.
This work proposes a behavior recognition model for patients with Parkinson's disease. In particular, an adaptive learning method is proposed to classify behavioral tasks of Parkinson's disease patients using local field potential (LFP) and electrocorticography (ECoG) signals collected during DBS implantation surgeries. Unique patterns exhibited between these signals in a matched feature space allow motor and language behavioral tasks to be distinguished. Unique features are first extracted from the deep brain signals in the time-frequency space using the matching pursuit decomposition algorithm. A Dirichlet process Gaussian mixture model then uses the extracted features to cluster the different behavioral signal patterns, without training or any prior information. The performance of the method is compared with that of other machine learning methods, and the advantages of each method are discussed under different conditions. Master's Thesis, Electrical Engineering, 2015.
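The clustering step described above, a Dirichlet process Gaussian mixture over matching-pursuit features, can be sketched with an off-the-shelf truncated variational implementation; the feature matrix here is a random stand-in for the real LFP/ECoG features.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# one row per trial of time-frequency atom parameters from a matching pursuit
# decomposition (random stand-in; real features would come from LFP/ECoG signals)
features = np.random.default_rng(0).normal(size=(200, 6))

dpgmm = BayesianGaussianMixture(
    n_components=20,                                  # truncation level
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(features)

labels = dpgmm.predict(features)   # each cluster = a putative behavioral task pattern
print("active clusters:", np.unique(labels).size)
```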
|