241

Visualizing Confusion Matrices for Multidimensional Signal Detection Correlational Methods and Semantic Cluster based Visualization in Virtual Environments

Zhou, Yue 03 September 2013 (has links)
No description available.
242

Advances in therapeutic risk management through signal detection and risk minimisation tool analyses

Nkeng, Lenhangmbong 02 1900 (has links)
Les quatre principales activités de la gestion de risque thérapeutique comportent l’identification, l’évaluation, la minimisation, et la communication du risque. Ce mémoire aborde les problématiques liées à l’identification et à la minimisation du risque par la réalisation de deux études dont les objectifs sont de: 1) Développer et valider un outil de « data mining » pour la détection des signaux à partir des banques de données de soins de santé du Québec; 2) Effectuer une revue systématique afin de caractériser les interventions de minimisation de risque (IMR) ayant été implantées. L’outil de détection de signaux repose sur la méthode analytique du quotient séquentiel de probabilité (MaxSPRT) en utilisant des données de médicaments délivrés et de soins médicaux recueillis dans une cohorte rétrospective de 87 389 personnes âgées vivant à domicile et membres du régime d’assurance maladie du Québec entre les années 2000 et 2009. Quatre associations « médicament-événement indésirable (EI) » connues et deux contrôles « négatifs » ont été utilisés. La revue systématique a été faite à partir d’une revue de la littérature ainsi que des sites web de six principales agences réglementaires. La nature des RMIs ont été décrites et des lacunes de leur implémentation ont été soulevées. La méthode analytique a mené à la détection de signaux dans l'une des quatre combinaisons médicament-EI. Les principales contributions sont: a) Le premier outil de détection de signaux à partir des banques de données administratives canadiennes; b) Contributions méthodologiques par la prise en compte de l'effet de déplétion des sujets à risque et le contrôle pour l'état de santé du patient. La revue a identifié 119 IMRs dans la littérature et 1,112 IMRs dans les sites web des agences réglementaires. La revue a démontré qu’il existe une augmentation des IMRs depuis l’introduction des guides réglementaires en 2005 mais leur efficacité demeure peu démontrée. / The four main components of therapeutic risk management (RM) consist of risk detection (identification), evaluation, minimisation, and communication. This thesis aims at addressing RM methodologies within the two realms of risk detection and risk minimisation, through the conduct of two distinct studies: i) The development and evaluation of a data mining tool to support signal detection using health care claims databases, and ii) A systematic review to characterise risk minimisation interventions (RMIs) implemented so far. The data mining tool is based on a Maximised Sequential Probability Ratio Test (MaxSPRT), using drug dispensing and medical claims data found in the Quebec health claims databases (RAMQ). It was developed and validated in a cohort of 87,389 community-dwelling elderly aged 66+, randomly sampled from all elderly drug plan members between 2000 and 2009. Four known drug-AE associations and two "negative" controls were used. The systematic review on RMIs is based on a literature search as well as a review of the websites of six main regulatory agencies. Types of RMIs have been summarized and implementation gaps identified. The data mining tool detected signals in one of four of the known drug-AE associations. Major contributions are: a) The first signal detection data mining tool applied to a Canadian claims database; b) Methodological improvements over published methods by considering the depletion of susceptibles effect and adjusting for overall health status to control for prescription channelling. 
The review yielded 119 distinct RMIs from the literature and 1,112 from the agency websites. It showed that the number of RMIs has increased since the introduction of regulatory guidances in 2005, but their effectiveness remains insufficiently examined.
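
For context, the Poisson-based MaxSPRT statistic that data mining tools of this kind typically build on (Kulldorff's formulation) can be sketched as follows. This is an illustrative outline only: the cohort construction, the health-status adjustment and the exact critical-value calibration described in the thesis are omitted, and all variable names and numbers below are assumed.

    import numpy as np

    def maxsprt_llr(observed, expected):
        """Kulldorff's MaxSPRT log-likelihood ratio for Poisson counts.

        observed: cumulative number of adverse events seen so far
        expected: cumulative number expected under the null (no excess risk)
        """
        if observed <= expected:
            return 0.0
        return (expected - observed) + observed * np.log(observed / expected)

    # Illustrative surveillance loop: flag a signal when the statistic exceeds a
    # critical value (a placeholder here; in practice it is computed exactly for
    # a chosen alpha and surveillance length).
    CRITICAL_VALUE = 3.0
    cum_obs, cum_exp = 0.0, 0.0
    for obs, exp in [(2, 1.1), (1, 0.9), (9, 1.2)]:   # hypothetical monthly counts
        cum_obs += obs
        cum_exp += exp
        if maxsprt_llr(cum_obs, cum_exp) > CRITICAL_VALUE:
            print("signal detected at cumulative count", cum_obs)
            break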
243

On the neuronal systems underlying perceptual decision-making and confidence in humans

Hebart, Martin 13 March 2014 (has links)
Die Fähigkeit, Zustände in der Außenwelt zu beurteilen und zu kategorisieren, wird unter dem Oberbegriff „perzeptuelles Entscheiden“ zusammengefasst. In der vorliegenden Arbeit wurde funktionelle Magnetresonanztomografie mit multivariater Musteranalyse verbunden, um offene Fragen zur perzeptuellen Entscheidungsfindung zu beantworten. In der ersten Studie (Hebart et al., 2012) wurde gezeigt, dass der visuelle und parietale Kortex eine Repräsentation abstrakter perzeptueller Entscheidungen aufweisen. Im frühen visuellen Kortex steigt die Menge entscheidungsspezifischer Information mit der Menge an verfügbarer visueller Bewegungsinformation, doch der linke posteriore parietale Kortex zeigt einen negativen Zusammenhang. Diese Ergebnisse zeigen, wo im Gehirn abstrakte Entscheidungen repräsentiert werden und deuten darauf hin, dass die gefundenen Hirnregionen unterschiedlich in den Entscheidungsprozess involviert sind, je nach Menge an verfügbarer sensorischer Information. In der zweiten Studie (Hebart et al., submitted) wurde gezeigt, dass sich eine Repräsentation der Entscheidungsvariable (EV) im fronto-parietalen Assoziationskortex finden lässt. Ferner weist die EV im rechten ventrolateralen präfrontalen Kortex (vlPFC) einen spezifischen Zusammenhang mit konfidenzbezogenen Hirnsignalen im ventralen Striatum auf. Die Ergebnisse deuten darauf hin, dass Konfidenz aus der EV im vlPFC berechnet wird. In der dritten Studie (Christophel et al., 2012) wurde gezeigt, dass der Kurzzeitgedächtnisinhalt im visuellen und posterioren parietalen Kortex, nicht jedoch im präfrontalen Kortex repräsentiert wird. Diese Ergebnisse lassen vermuten, dass der Gedächtnisinhalt in denselben Regionen enkodiert wird, die auch perzeptuelle Entscheidungen repräsentieren können. Zusammenfassend geben die hier errungenen Erkenntnisse Aufschluss über den neuronalen Code des perzeptuellen Entscheidens von Menschen und stellen ein vollständigeres Verständnis der zugrundeliegenden Prozesse in Aussicht. / Perceptual decision-making refers to the ability to arrive at categorical judgments about states of the outside world. Here we use functional magnetic resonance imaging and multivariate pattern analysis to identify decision-related brain regions and address a number of open issues in the field of perceptual decision-making. In the first study (Hebart et al., 2012), we demonstrated that perceptual decisions about motion direction are represented in both visual and parietal cortex, even when decoupled from motor plans. While in early visual cortex the amount of information about perceptual choices follows the amount of sensory evidence presented on the screen, the reverse pattern is observed in left posterior parietal cortex. These results reveal the brain regions involved when choices are encoded in an abstract format and suggest that these two brain regions are recruited differently depending on the amount of sensory evidence available. In the second study (Hebart et al., submitted), we show that the perceptual decision variable (DV) is represented throughout fronto-parietal association cortices. The DV in right ventrolateral prefrontal cortex covaries specifically with brain signals in the ventral striatum representing confidence, demonstrating a close link between the two variables. This suggests that confidence is calculated from the perceptual DV encoded in ventrolateral prefrontal cortex. 
In the third study (Christophel et al., 2012), using a visual short-term memory (VSTM) task, we demonstrate that the content of VSTM is represented in visual cortex and posterior parietal cortex, but not prefrontal cortex. These results constrain theories of VSTM and suggest that the memorized content is stored in regions shown to represent perceptual decisions. Together, these results shed light on the neuronal code underlying perceptual decision-making in humans and offer the prospect of a more complete understanding of these processes.
244

Iterative detection for wireless communications

Shaheem, Asri January 2008 (has links)
[Truncated abstract] The transmission of digital information over a wireless communication channel gives rise to a number of issues which can detract from the system performance. Propagation effects such as multipath fading and intersymbol interference (ISI) can result in significant performance degradation. Recent developments in the field of iterative detection have led to a number of powerful strategies that can be effective in mitigating the detrimental effects of wireless channels. In this thesis, iterative detection is considered for use in two distinct areas of wireless communications. The first considers the iterative decoding of concatenated block codes over slow flat fading wireless channels, while the second considers the problem of detection for a coded communications system transmitting over highly-dispersive frequency-selective wireless channels. The iterative decoding of concatenated codes over slow flat fading channels with coherent signalling requires knowledge of the fading amplitudes, known as the channel state information (CSI). The CSI is combined with statistical knowledge of the channel to form channel reliability metrics for use in the iterative decoding algorithm. When the CSI is unknown to the receiver, the existing literature suggests the use of simple approximations to the channel reliability metric. However, these works generally consider low rate concatenated codes with strong error correcting capabilities. In some situations, the error correcting capability of the channel code must be traded for other requirements, such as higher spectral efficiency, lower end-to-end latency and lower hardware cost. ... In particular, when the error correcting capability of the concatenated code is weak, the conventional metrics are observed to fail, whereas the proposed metrics are shown to perform well regardless of the error correcting capabilities of the code. The effects of ISI caused by a frequency-selective wireless channel environment can also be mitigated using iterative detection. When the channel can be viewed as a finite impulse response (FIR) filter, the state-of-the-art iterative receiver is the maximum a posteriori probability (MAP) based turbo equaliser. However, the complexity of this receiver's MAP equaliser increases exponentially with the length of the FIR channel. Consequently, this scheme is restricted for use in systems where the channel length is relatively short. In this thesis, the use of a channel shortening prefilter in conjunction with the MAP-based turbo equaliser is considered in order to allow its use with arbitrarily long channels. The prefilter shortens the effective channel, thereby reducing the number of equaliser states. A consequence of channel shortening is that residual ISI appears at the input to the turbo equaliser and the noise becomes coloured. In order to account for the ensuing performance loss, two simple enhancements to the scheme are proposed. The first is a feedback path which is used to cancel residual ISI, based on decisions from past iterations. The second is the use of a carefully selected value for the variance of the noise assumed by the MAP-based turbo equaliser. Simulations are performed over a number of highly dispersive channels and it is shown that the proposed enhancements result in considerable performance improvements. Moreover, these performance benefits are achieved with very little additional complexity with respect to the unmodified channel shortened turbo equaliser.
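
For reference, the conventional channel reliability metric mentioned above (for coherent BPSK over a slow flat fading channel with perfectly known CSI) can be sketched as below. The approximate metrics proposed in the thesis for unknown CSI are not reproduced here; the function name and the parameter values are illustrative assumptions.

    import numpy as np

    def bpsk_channel_llrs(y, fading_amp, es_n0_linear):
        """Channel reliability values (LLRs) for BPSK over a flat fading channel
        with perfectly known channel state information.

        y            : received matched-filter samples
        fading_amp   : per-symbol fading amplitudes (the CSI)
        es_n0_linear : symbol SNR Es/N0 on a linear scale
        """
        return 4.0 * es_n0_linear * fading_amp * y

    # Hypothetical usage: these LLRs would be fed to the iterative (turbo) decoder.
    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, 8)
    symbols = 1.0 - 2.0 * bits                    # BPSK mapping 0 -> +1, 1 -> -1
    a = rng.rayleigh(scale=1 / np.sqrt(2), size=8)  # slow flat Rayleigh fading amplitudes
    y = a * symbols + rng.normal(scale=0.5, size=8)
    llrs = bpsk_channel_llrs(y, a, es_n0_linear=2.0)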
245

Belief Propagation Based Signal Detection In Large-MIMO And UWB Systems

Som, Pritam 09 1900 (has links)
Large-dimensional communication systems are likely to play an important role in modern wireless communications, where dimensions can be in space, time, frequency and their combinations. Large dimensions can bring several advantages with respect to the performance of communication systems. Harnessing such large-dimension benefits in practice, however, is challenging. In particular, optimum signal detection gets prohibitively complex for large dimensions. Consequently, low-complexity detection techniques that scale well for large dimensions while achieving near-optimal performance are of interest. Belief Propagation (BP) is a technique that solves inference problems using graphical models. BP has been successfully employed in a variety of applications including computational biology, statistical signal/image processing, machine learning and artificial intelligence. BP is well suited in several communication problems as well; e.g., decoding of turbo codes and low-density parity check codes (LDPC), and multiuser detection. We propose a BP based algorithm for detection in large-dimension linear vector channels employing binary phase shift keying (BPSK) modulation, by adopting a Markov random field (MRF) graphical model of the system. The proposed approach is shown to achieve i) detection at low complexities that scale well for large dimensions, and ii) improved bit error performance for increased number of dimensions (a behavior we refer to as the 'large-system behavior'). As one application of the BP based approach, we demonstrate the effectiveness of the proposed BP algorithm for decoding non-orthogonal space-time block codes (STBC) from cyclic division algebras (CDA) having large dimensions. We further improve the performance of the proposed algorithm through damped belief propagation, where messages that are passed from one iteration to the next are formed as a weighted combination of messages from the current iteration and the previous iteration. Next, we extend the proposed BP approach to higher order modulation through a novel scheme of interference cancellation. This proposed scheme exhibits large system behavior in terms of bit error performance, while being scalable to large dimensions in terms of complexity. Finally, as another application of the BP based approach, we illustrate the adoption and performance of the proposed BP algorithm for low-complexity near-optimal equalization in severely delay-spread UWB MIMO-ISI channels that are characterized by large number (tens to hundreds) of multipath components.
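
The damping step described above can be sketched as follows. Only the weighted combination of messages is shown; the MRF message computation for the MIMO/UWB channel is abstracted behind an assumed callable, and the damping factor is an illustrative value.

    import numpy as np

    def damp_messages(new_msgs, old_msgs, alpha=0.3):
        """Damped message update: a weighted combination of the freshly computed
        messages and the messages from the previous BP iteration. alpha is an
        assumed damping factor (0 means no damping)."""
        return (1.0 - alpha) * new_msgs + alpha * old_msgs

    def run_damped_bp(compute_messages, init_msgs, n_iters=10, alpha=0.3):
        """Skeleton of a damped-BP loop. compute_messages is a placeholder for
        the MRF message-update rule, which depends on the channel model."""
        msgs = init_msgs
        for _ in range(n_iters):
            msgs = damp_messages(compute_messages(msgs), msgs, alpha)
        return msgs

Damping of this kind trades convergence speed for stability when the underlying graph has short cycles, which is the usual motivation in dense MIMO settings.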
246

Wavelet Based Algorithms For Spike Detection In Micro Electrode Array Recordings

Nabar, Nisseem S 06 1900 (has links)
In this work, the problem of detecting neuronal spikes or action potentials (APs) in noisy recordings from a Microelectrode Array (MEA) is investigated. In particular, the spike detection algorithms should be simple, with low computational complexity, so as to be amenable to real-time applications. The advantage of the MEA is that it allows collection of extracellular signals from either a single unit or multiple (45) units within a small area. The noisy MEA recordings then undergo basic filtering and digitization and are presented to a computer for further processing. The challenge lies in using this data for detection of spikes from neuronal firings and extracting spatio-temporal patterns from the spike train, which may allow control of a robotic limb or other neuroprosthetic device directly from the brain. The aim is to understand the spiking action of the neurons, and to use this knowledge to devise efficient algorithms for Brain Machine Interfaces (BMIs). An effective BMI will require a real-time, computationally efficient implementation which can be carried out on a DSP board or FPGA system. The aim is therefore to devise algorithms which can detect spikes and underlying spatio-temporal correlations, with computational and time complexities low enough to make a real-time implementation feasible on a specialized DSP chip or an FPGA device. The time-frequency localization, multiresolution representation and analysis properties of wavelets make them suitable for analysing sharp transients and spikes in signals and for distinguishing them from noise resembling a transient or spike. Three algorithms for the detection of spikes in low-SNR MEA neuronal recordings are proposed: 1. A wavelet denoising method based on the Discrete Wavelet Transform (DWT) to suppress the noise power in the MEA signal (i.e., improve the SNR), followed by standard thresholding techniques to detect the spikes in the denoised signal. 2. Directly thresholding the coefficients of the Stationary (Undecimated) Wavelet Transform (SWT) to detect the spikes. 3. Thresholding the output of a Teager Energy Operator (TEO) applied to the discrete wavelet decomposed signal, resulting in a multiresolution TEO framework. The performance of the three proposed wavelet based algorithms is evaluated in terms of the accuracy of spike detection, percentage of false positives and computational complexity for different wavelet families, in the presence of colored AR(5) (autoregressive model of order 5) and additive white Gaussian noise (AWGN). The performance is further evaluated for the chosen wavelet family under different levels of SNR in the presence of the colored AR(5) and AWGN noise. Chapter 1 gives an introduction to the concept behind Brain Machine Interfaces (BMIs), an overview of their history, the current state-of-the-art and the trends for the future. It also describes the working of Microelectrode Arrays (MEAs). The generation of a spike in a neuron, the proposed mechanism behind it and its modeling as an electrical circuit based on the Hodgkin-Huxley model are described. An overview of some of the algorithms that have been suggested for spike detection, whether in MEA recordings or electroencephalographic (EEG) signals, is given. Chapter 2 describes in brief the underlying ideas that lead to the Wavelet Transform paradigm. An introduction to the Fourier Transform, the Short Time Fourier Transform (STFT) and the time-frequency uncertainty principle is provided.
This is followed by a brief description of the Continuous Wavelet Transform and the multiresolution analysis (MRA) property of wavelets. The Discrete Wavelet Transform (DWT) and its filter bank implementation are described next. It is proposed to apply the wavelet denoising algorithm pioneered by Donoho to first denoise the MEA recordings, followed by a standard thresholding technique for spike detection. Chapter 3 deals with the use of the Stationary or Undecimated Wavelet Transform (SWT) for spike detection. It brings out the differences between the DWT and the SWT. A brief discussion of the analysis of non-stationary time series using the SWT is presented. An algorithm for spike detection based on directly thresholding the SWT coefficients, without any need to reconstruct the denoised signal and threshold it as in the first method, is presented. In Chapter 4 a spike detection method based on a multiresolution Teager Energy Operator is discussed. The Teager Energy Operator (TEO) picks up localized spikes in signal energy and is thus used directly for spike detection in many applications, including R-wave detection in ECG and the detection of various (alpha, beta) rhythms in EEG. Some basic properties of the TEO are discussed, followed by the need for a multiresolution approach to the TEO and the methods existing in the literature. The wavelet decomposition and the subsampled signal at each level naturally lend themselves to a multiresolution TEO framework, while significantly reducing the computational complexity owing to the subsampling at each level. A wavelet-TEO algorithm for spike detection with accuracy similar to the previous two algorithms is proposed. The method proposed here differs significantly from those in the literature since wavelets are used instead of time-domain processing. Chapter 5 describes the method of evaluation of the three algorithms proposed in the previous chapters. The spike templates are obtained from MEA recordings, resampled and normalized for use in spike trains simulated as Poisson processes. The noise is modeled as colored autoregressive noise of order 5, i.e. AR(5), as well as additive white Gaussian noise (AWGN). The noise in most human and animal MEA recordings conforms to the autoregressive model with orders of around 5, while the AWGN model is used in most spike detection methods in the literature. The performance of the three proposed wavelet based algorithms is measured in terms of the accuracy of spike detection, percentage of false positives and computational complexity for different wavelet families. The optimal wavelet for this purpose is then chosen from the family which gives the best results. Optimal levels of decomposition and threshold factors are also chosen while maintaining a balance between accuracy and false positives. The algorithms are then tested for performance under different levels of SNR with the noise modeled as AR(5) or AWGN. The proposed wavelet based algorithms exhibit a detection accuracy of approximately 90% at a low SNR of 2.35 dB with false positives below 5%. This constitutes a significant improvement over the results in the existing literature, which claim an accuracy of 80% with false positives of nearly 10%. As the SNR increases, the detection accuracy increases to close to 100% and the false alarm rate falls to zero. Chapter 6 summarizes the work. A comparison is made between the three proposed algorithms in terms of detection accuracy and false positives.
Directions in which future work may be carried out are suggested.
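
As an illustration of the third approach, the discrete Teager Energy Operator and a simple threshold rule might be sketched as below. In the multiresolution scheme of the thesis the operator is applied to the wavelet-decomposed signal; here it is shown on a single sequence, and the threshold factor is an assumed value rather than the one tuned in the thesis.

    import numpy as np

    def teager_energy(x):
        """Discrete Teager Energy Operator: psi[x](n) = x(n)^2 - x(n-1)*x(n+1).
        Emphasizes short, high-energy transients such as spikes."""
        psi = np.zeros_like(x, dtype=float)
        psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
        return psi

    def detect_spikes(x, k=4.0):
        """Threshold the TEO output at k times a robust noise estimate
        (median absolute deviation); returns candidate spike sample indices.
        k is an illustrative threshold factor."""
        psi = teager_energy(np.asarray(x, dtype=float))
        thr = k * np.median(np.abs(psi - np.median(psi))) / 0.6745
        return np.flatnonzero(psi > thr)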
247

Signal Processing for Spectroscopic Applications

Gudmundson, Erik January 2010 (has links)
Spectroscopic techniques allow for studies of materials and organisms on the atomic and molecular level. Examples of such techniques are nuclear magnetic resonance (NMR) spectroscopy—one of the principal techniques to obtain physical, chemical, electronic and structural information about molecules—and magnetic resonance imaging (MRI)—an important medical imaging technique for, e.g., visualization of the internal structure of the human body. The less well-known spectroscopic technique of nuclear quadrupole resonance (NQR) is related to NMR and MRI but with the difference that no external magnetic field is needed. NQR has found applications in, e.g., detection of explosives and narcotics. The first part of this thesis is focused on detection and identification of solid and liquid explosives using both NQR and NMR data. Methods allowing for uncertainties in the assumed signal amplitudes are proposed, as well as methods for estimation of model parameters that allow for non-uniform sampling of the data. The second part treats two medical applications. Firstly, new, fast methods for parameter estimation in MRI data are presented. MRI can be used for, e.g., the diagnosis of anomalies in the skin or in the brain. The presented methods allow for a significant decrease in computational complexity without loss in performance. Secondly, the estimation of blood flow velocity using medical ultrasound scanners is addressed. Information about anomalies in the blood flow dynamics is an important tool for the diagnosis of, for example, stenosis and atherosclerosis. The presented methods make no assumption on the sampling schemes, allowing for duplex mode transmissions where B-mode images are interleaved with the Doppler emissions.
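
The abstract does not spell out the estimators; as a generic illustration of parameter estimation that accommodates non-uniform sampling (using the damped-sinusoid model common in NMR/NQR work), a least-squares fit might look like the sketch below. The model, data, starting values and function name are assumptions, not the methods of the thesis.

    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical non-uniformly sampled, free-induction-decay-like data.
    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0.0, 1.0, 40))            # non-uniform sample times
    y = 2.0 * np.exp(-1.5 * t) * np.cos(2 * np.pi * 5.0 * t) + 0.05 * rng.normal(size=40)

    def fit_damped_sinusoid(t, y, f0, beta0):
        """Least-squares fit of y(t) ~ A*exp(-beta*t)*cos(2*pi*f*t + phi); the
        model is evaluated at the actual sample times, so no uniform grid is needed."""
        def residuals(p):
            amp, beta, f, phi = p
            return amp * np.exp(-beta * t) * np.cos(2 * np.pi * f * t + phi) - y
        p0 = np.array([np.max(np.abs(y)), beta0, f0, 0.0])
        return least_squares(residuals, p0).x

    amp, beta, f, phi = fit_damped_sinusoid(t, y, f0=4.5, beta0=1.0)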
249

Reconstrução de energia em calorímetros operando em alta luminosidade usando estimadores de máxima verossimilhança / Reconstruction of energy in calorimeters operating in high-luminosity environments using maximum likelihood estimators

Paschoalin, Thiago Campos 15 March 2016 (has links)
Esta dissertação apresenta técnicas de processamento de sinais a fim de realizar a Estimação da energia, utilizando calorimetria de altas energias. O CERN, um dos mais importantes centros de pesquisa de física de partículas, possui o acelerador de partículas LHC, onde está inserido o ATLAS. O TileCal, importante calorímetro integrante do ATLAS, possui diversos canais de leitura, operando com altas taxas de eventos. A reconstrução da energia das partículas que interagem com este calorímetro é realizada através da estimação da amplitude do sinal gerado nos canais do mesmo. Por este motivo, a modelagem correta do ruído é importante para se desenvolver técnicas de estimação eficientes. Com o aumento da luminosidade (número de partículas que incidem no detector por unidade de tempo) no TileCal, altera-se o modelo do ruído, o que faz com que as técnicas de estimação utilizadas anteriormente apresentem uma queda de desempenho. Com a modelagem deste novo ruído como sendo uma Distribuição Lognormal, torna possível o desenvolvimento de uma nova técnica de estimação utilizando Estimadores de Máxima Verossimilhança (do inglês Maximum Likelihood Estimator MLE), aprimorando a estimação dos parâmetros e levando à uma reconstrução da energia do sinal de forma mais correta. Uma nova forma de análise da qualidade da estimação é também apresentada, se mostrando bastante eficiente e útil em ambientes de alta luminosidade. A comparação entre o método utilizado pelo CERN e o novo método desenvolvido mostrou que a solução proposta é superior em desempenho, sendo adequado o seu uso no novo cenário de alta luminosidade no qual o TileCal estará sujeito a partir de 2018. / This dissertation presents signal processing techniques for signal detection and energy estimation in high-energy calorimetry.
CERN, one of the most important particle physics research centers, hosts the LHC accelerator, which houses the ATLAS experiment. The TileCal, an important calorimeter of ATLAS, comprises many parallel readout channels operating at high event rates. The energy of the particles that interact with this calorimeter is reconstructed by estimating the amplitude of the signal generated in its channels, so accurate noise modeling is important for developing efficient estimation techniques. With increased luminosity in the TileCal, the noise model changes, which leads to a performance drop in the estimation techniques used previously. Modelling this new noise as a lognormal distribution allows the development of a new estimation technique using Maximum Likelihood Estimation (MLE), improving parameter estimation and leading to a more accurate reconstruction of the signal energy. A new method to analyse the estimation quality is also presented, which proves effective and useful in high-luminosity conditions. The comparison between the method currently used at CERN and the new method shows that the proposed solution has superior performance and is suitable for the high-luminosity scenario in which the TileCal will operate from 2018.
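
A minimal sketch of one possible maximum-likelihood amplitude estimate under a lognormal noise model is given below. The pulse shape, the way the lognormal background enters the model and the fixed shape/scale parameters are all assumptions for illustration; they are not the estimator derived in the dissertation.

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import lognorm

    # Hypothetical normalized pulse shape (7 samples) and observed ADC samples.
    g = np.array([0.0, 0.07, 0.56, 1.0, 0.56, 0.22, 0.08])
    x = np.array([3.1, 4.0, 9.4, 14.8, 9.9, 6.0, 4.2])

    def neg_log_likelihood(amp, sigma=0.5, scale=3.0):
        """Negative log-likelihood of the amplitude, assuming the background
        (everything except amp*g) follows a lognormal distribution with the
        given (assumed) shape and scale parameters."""
        background = x - amp * g
        if np.any(background <= 0):
            return np.inf          # lognormal support is strictly positive
        return -np.sum(lognorm.logpdf(background, s=sigma, scale=scale))

    result = minimize_scalar(neg_log_likelihood, bounds=(0.0, 14.0), method="bounded")
    amp_mle = result.x             # maximum-likelihood amplitude estimate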
250

Detecção de sinais e estimação de energia para calorimetria de altas energias / Signal detection and energy estimation for high energy calorimetry

Peralva, Bernardo Sotto-Maior 07 May 2012 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Nesta dissertação, são apresentados métodos para detecção de sinais e estimação de energia para calorimetria de altas energias aplicados no calorímetro hadrônico (TileCal) do ATLAS. A energia depositada em cada célula do calorímetro é adquirida por dois canais eletrônicos de leitura e é estimada, separadamente, através da reconstrução da amplitude do pulso digitalizado amostrado a cada 25 ns. Este trabalho explora a aplicabilidade de uma aproximação do Filtro Casado no ambiente do TileCal para detectar sinais e estimar sua amplitude. Além disso, este trabalho explora o impacto na detecção de eventos válidos e estimação da amplitude quando somam-se os sinais referentes à mesma célula antes da aplicação do filtro. O método proposto é comparado com o Filtro Ótimo atualmente utilizado pelo TileCal para reconstrução de energia. Os resultados para dados simulados e de colisão mostram que, para condições em que a linha de base do sinal de entrada pode ser considerada estacionária, a técnica proposta apresenta uma melhor eficiência de detecção e estimação do que a alcançada pelo Filtro Ótimo empregada no TileCal. / The Tile Barrel Calorimeter (TileCal) is the central section of the hadronic calorimeter of ATLAS at LHC. The energy deposited in each cell of the calorimeter is read out by two electronic channels for redundancy and is estimated, per channel, by reconstructing the amplitude of the digitized signal pulse sampled every 25 ns. This work presents signal detection and energy estimation methods for high-energy calorimetry, applied to the TileCal environment. It investigates the applicability of a Matched Filter and, furthermore, it explores the impact of summing the signals belonging to the same cell before the estimation and detection procedures. The proposed method is compared to the Optimal Filter algorithm, which is currently used at TileCal for energy reconstruction. The results for simulated and collision data sets showed that, for conditions where the signal pedestal could be considered stationary, the proposed method achieves better detection and estimation efficiencies than the Optimal Filter technique employed in TileCal.
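
For illustration, the core of a matched-filter style amplitude estimate of the kind compared against the Optimal Filter above can be sketched as follows, including the summation of the two readout signals of a cell before estimation. The reference pulse, pedestal values and sample values are assumptions; the detection step and the treatment of non-stationary pedestals are omitted.

    import numpy as np

    # Hypothetical normalized reference pulse (7 samples at 25 ns spacing).
    ref_pulse = np.array([0.0, 0.07, 0.56, 1.0, 0.56, 0.22, 0.08])

    def matched_filter_amplitude(samples, pedestal):
        """Matched-filter style amplitude estimate: project the pedestal-subtracted
        samples onto the reference pulse (white, stationary background assumed)."""
        s = samples - pedestal
        return ref_pulse @ s / (ref_pulse @ ref_pulse)

    # Digitized samples from the two readout channels of one cell (illustrative).
    ch1 = np.array([50.2, 51.0, 57.3, 63.1, 57.0, 52.6, 50.9])
    ch2 = np.array([49.8, 50.7, 56.8, 62.5, 56.4, 52.1, 50.4])

    # Summing the two channel signals of the same cell before estimation,
    # as explored in the dissertation (pedestal value is illustrative).
    amp_cell = matched_filter_amplitude(ch1 + ch2, pedestal=100.0)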
