11

Verification of Pipelined Ciphers

Lam, Chiu Hong January 2009 (has links)
The purpose of this thesis is to explore the formal verification techniques of completion functions and equivalence checking by verifying two pipelined cryptographic circuits, the KASUMI and WG ciphers. Most current methods of communication involve either a personal computer or a mobile phone. To ensure that information is exchanged securely, encryption circuits transform it into an unintelligible form. To be highly secure, these circuits are generally designed to be hard to analyze, which in turn makes design errors hard to locate; cryptographic circuits therefore pose significant challenges for formal verification. Formal verification uses mathematics to formulate correctness criteria for designs, to develop mathematical models of designs, and to verify designs against their correctness criteria. The results of this work can extend the existing collection of verification methods as well as benefit the area of cryptography. In this thesis, we implemented the KASUMI cipher in VHDL and applied the optimization technique of pipelining to create three additional implementations of KASUMI, which we verified with completion functions and equivalence checking. During the verification of KASUMI, we developed a methodology, based on VHDL generic parameters, for handling the completion functions efficiently. We also implemented the WG cipher in VHDL and applied the optimization techniques of pipelining and hardware re-use to create an optimized implementation of WG, which we likewise verified with completion functions and equivalence checking. During the verification of WG, we developed a "skipping" methodology that decreases the number of verification obligations required to verify the correctness of a circuit, and a way of applying the completion-functions approach to a circuit that has been optimized with hardware re-use.
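The completion-functions idea can be illustrated on a toy pipeline. This is a generic Python sketch, not the thesis's VHDL KASUMI/WG circuits; the stage functions and key values are invented for illustration: a completion function applies the remaining pipeline logic to a latched intermediate value, and the verification obligation is that the completed value matches the unpipelined specification.

```python
def stage1(x, k):          # toy round logic, pipeline stage 1
    return (x + k) & 0xFF

def stage2(y, k):          # toy round logic, pipeline stage 2
    return (y ^ k) & 0xFF

def spec(x, k1, k2):       # unpipelined (combinational) specification
    return stage2(stage1(x, k1), k2)

def completion_stage2(y, k2):
    # Completion function for a value latched in the stage-1/stage-2
    # register: apply the remaining logic to get the architectural result.
    return stage2(y, k2)

def verify(k1=0x3C, k2=0xA5):
    # Verification obligation: completing any reachable register value
    # must equal the specification applied to the original input.
    for x in range(256):
        reg = stage1(x, k1)          # value latched after stage 1
        assert completion_stage2(reg, k2) == spec(x, k1, k2)
    return True
```

In the thesis this comparison is discharged by equivalence checking over the actual circuit; the exhaustive loop here stands in for that check on an 8-bit toy.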
12

Using System Dynamics to Research How an Enterprise's Beliefs Influence the Process of Organizational Change: A Case Study of General Electric Company

Yang, Min-Huei 21 August 2006 (has links)
As an enterprise starts to grow, it runs into bottlenecks and limits to growth. At that point, the enterprise launches a series of organizational-development activities to create better performance. Past studies focused mostly on the relationships among organizational structures, these activities, and organizational performance. They tried to find out how organizational change created remarkable performance, but missed an important factor behind it: the beliefs of the leader. We believe that a leader's beliefs influence the organizational structure and, in turn, determine the performance of the organization. Our research focused on how beliefs affect the organization, taking the General Electric (GE) company as an example. We explored the changes in GE's organizational structure and performance, and sought the key soft variables behind the organization's excellent performance. We adopted System Dynamics as the research method: we collected information about GE, analyzed it, and constructed a system dynamics model of GE. Using this model, we performed simulation, testing, and analysis, and finally proposed our conclusions.
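A system dynamics model of this kind reduces to stocks, flows, and feedback loops integrated over time. The following is a minimal hypothetical sketch, not the thesis's GE model: "belief" scales an investment flow into a "performance" stock, so a stronger belief shifts the feedback loop from decay to growth.

```python
def simulate(steps=50, dt=1.0, belief=0.8):
    # Stock: organizational performance (arbitrary units, starts at 1.0).
    performance = 1.0
    for _ in range(steps):
        # Flow in: investment in change activities, scaled by the
        # leader's belief (0..1); flow out: a fixed performance decay.
        investment = belief * 0.1 * performance
        decay = 0.05 * performance
        performance += dt * (investment - decay)   # Euler integration
    return performance
```

With these illustrative rates, belief above 0.5 produces net growth and belief below 0.5 produces decline, which is the qualitative claim such a model is built to exhibit.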
13

The Original View of Reed-Solomon Coding and the Welch-Berlekamp Decoding Algorithm

Mann, Sarah Edge January 2013 (has links)
Reed-Solomon codes are a class of maximum distance separable error correcting codes with known fast error correction algorithms. They have been widely used to assure data integrity for data stored on compact discs, DVDs, and in RAID storage systems, for digital communications channels such as DSL internet connections, and for deep space communications on the Voyager mission. The recent explosion of storage needs for "Big Data" has generated renewed interest in large storage systems with extended error correction capacity, and Reed-Solomon codes have been suggested as one potential solution. This dissertation reviews the theory of Reed-Solomon codes from the perspective taken in Reed and Solomon's original paper on them. It then derives the Welch-Berlekamp algorithm for solving certain polynomial equations and connects this algorithm to the problem of error correction. The discussion is mathematically rigorous and provides a complete and consistent account of the error correction process. Numerous algorithms for encoding, decoding, erasure recovery, error detection, and error correction are provided, and their computational cost is analyzed and discussed, allowing this dissertation to serve as a manual for engineers interested in implementing Reed-Solomon coding.
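In the "original view" the message defines a polynomial of degree less than k and the codeword is its evaluation at n distinct points; any k error-free points determine the message. The sketch below illustrates this with encoding and erasure recovery via Lagrange interpolation over the prime field GF(929) chosen for simplicity; it is not the dissertation's Welch-Berlekamp error-correction algorithm, and real deployments typically use GF(2^8).

```python
P = 929  # prime modulus, so arithmetic mod P forms a finite field

def encode(msg, n):
    # msg: k coefficients of m(x), low degree first; codeword = m(0..n-1).
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(n)]

def interpolate(points, k):
    # Recover the k message coefficients from any k error-free (x, y)
    # pairs by Lagrange interpolation (erasure recovery, not error
    # correction).
    coeffs = [0] * k
    for j, (xj, yj) in enumerate(points):
        num = [1]   # coefficients of the basis numerator, low degree first
        den = 1
        for m, (xm, _) in enumerate(points):
            if m == j:
                continue
            # multiply num by (x - xm)
            num = [(a - xm * b) % P for a, b in zip([0] + num, num + [0])]
            den = den * (xj - xm) % P
        inv = pow(den, P - 2, P)   # modular inverse via Fermat's little theorem
        for i in range(len(num)):
            coeffs[i] = (coeffs[i] + yj * num[i] * inv) % P
    return coeffs
```

Because a degree < k polynomial is determined by any k of its n evaluations, the code tolerates up to n - k erasures, which is the maximum-distance-separable property.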
14

Non-contact Estimation of Respiration and Heartbeat Rate using Ultra-Wideband Signals

Li, Chang 29 September 2008 (has links)
The use of ultra-wideband (UWB) signals holds great promise for remote monitoring of vital signs, with applications in medicine, first response, and security. Previous research has shown the feasibility of a UWB-based radar system for respiratory and heartbeat rate estimation, and simulation and experimental results have demonstrated respiration rate detection. However, past analyses are mostly based on the assumption of an ideal experimental environment; the accuracy of the estimation and the interfering factors of this technology had not been investigated. This thesis establishes an analytical framework for FFT-based signal processing algorithms that detect periodic bio-signals from a single target. Based on both simulation and experimental data, three basic challenges are identified: (1) small body movements during the measurement interval cause slow variations in the consecutive received waveforms that mask the signals of interest; (2) the relatively strong respiratory signal and its harmonics greatly impact the detection of the heartbeat rate; and (3) the non-stationary nature of bio-signals creates challenges for spectral analysis. Having identified these problems, adaptive signal processing techniques are developed that effectively mitigate them. Specifically, an ellipse-fitting algorithm is adopted to track and compensate for aperiodic large-scale body motion, and a wavelet-based filter is applied to attenuate the interference caused by respiratory harmonics so that the heartbeat frequency can be estimated accurately. Additionally, the spectrum estimation of non-stationary signals is examined using a different transform method. Results from simulation and experiments show that these techniques yield substantial improvement. Further, this thesis examines the possibility of multi-target detection with the same measurement setup: array processing techniques with subspace-based algorithms are applied to estimate the respiration rates of multiple targets, and a combination of array processing and single-target detection techniques is developed to extract the heartbeat rates. The performance is examined via simulation and experimental results, and the limitations of the current measurement setup are discussed. / Master of Science
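The core FFT-based step, picking the dominant spectral peak of a slow-time signal, can be sketched as follows. This is a generic illustration on a synthetic sinusoid, not the thesis's UWB processing chain; the sampling rate and 0.25 Hz respiration frequency are made-up values.

```python
import numpy as np

def dominant_freq(signal, fs):
    # Remove DC, apply a Hann window, and locate the strongest peak.
    x = signal - np.mean(signal)
    x = x * np.hanning(len(x))
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spec)]

fs = 20.0                            # 20 Hz slow-time sampling
t = np.arange(0, 30, 1 / fs)         # 30 s observation window
resp = np.sin(2 * np.pi * 0.25 * t)  # 0.25 Hz, i.e. 15 breaths/min
rate = dominant_freq(resp, fs)
```

The 30 s window gives a frequency resolution of about 0.033 Hz, which bounds how precisely the rate can be read off the peak; this resolution limit is one reason the thesis turns to more adaptive spectral methods for non-stationary signals.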
15

Automatic Phoneme Recognition with Segmental Hidden Markov Models

Baghdasaryan, Areg Gagik 10 March 2010 (has links)
A speaker-independent continuous-speech phoneme recognition and segmentation system is presented. We discuss the training and recognition phases of the phoneme recognition system and give a detailed description of its integrated elements. The Hidden Markov Model (HMM) based phoneme models are trained using the Baum-Welch re-estimation procedure. Recognition and segmentation of the phonemes in continuous speech are performed by a Segmental Viterbi Search on a Segmental Ergodic HMM over the phoneme states. We describe in detail the three phases of the joint phoneme recognition and segmentation system. First, the extraction of the Mel-Frequency Cepstral Coefficients (MFCC) and the corresponding Delta and Delta Log Power coefficients is described. Second, we describe the operation of the Baum-Welch re-estimation procedure for training the phoneme HMM models, including the K-Means and Expectation-Maximization (EM) clustering algorithms used to initialize the Baum-Welch algorithm. Additionally, we describe the structural framework of the ergodic Segmental HMM for phoneme segmentation and recognition, and its recognition procedure. We include test and simulation results for each of the individual systems integrated into the phoneme recognition system, and finally for the phoneme recognition/segmentation system as a whole. / Master of Science
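The Viterbi search at the heart of such a recognizer finds the most probable hidden state sequence for an observation sequence. The sketch below is a plain (non-segmental) Viterbi on a tiny two-state model with invented parameters; the state names and probabilities are illustrative, not the thesis's phoneme models.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    # Log-domain Viterbi: best hidden state path for the observations.
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
          for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states,
                       key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = (V[t - 1][prev] + math.log(trans_p[prev][s])
                       + math.log(emit_p[s][obs[t]]))
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])   # follow backpointers
    return path[::-1]

states = ('vowel', 'consonant')
start = {'vowel': 0.5, 'consonant': 0.5}
trans = {'vowel': {'vowel': 0.7, 'consonant': 0.3},
         'consonant': {'vowel': 0.4, 'consonant': 0.6}}
emit = {'vowel': {'low': 0.8, 'high': 0.2},
        'consonant': {'low': 0.1, 'high': 0.9}}
```

A segmental Viterbi search extends this by scoring variable-length segments per state rather than single frames, which is what yields joint recognition and segmentation.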
16

Iterative Decoding and Channel Estimation over Hidden Markov Fading Channels

Khan, Anwer Ali 24 May 2000 (has links)
Since the 1950s, hidden Markov models (HMMs) have seen widespread use in electrical engineering, foremost in speech processing, pattern recognition, artificial intelligence, queuing theory, and communications theory. Recent years, however, have witnessed a renaissance in the application of HMMs to the analysis and simulation of digital communication systems, with typical applications including signal estimation, frequency tracking, equalization, burst error characterization, and transmit power control. Of special significance to this thesis has been the use of HMMs to model the fading channels typical of wireless communications. This variegated use of HMMs is fueled by their ability to model time-varying systems with memory, to yield closed-form solutions to otherwise intractable analytic problems, and to facilitate simple hardware- and/or software-based implementations of simulation test-beds. The aim of this thesis is to employ and exploit hidden Markov fading models within an iterative (turbo) decoding framework. Of particular importance is the problem of channel estimation, which is vital for realizing the large coding gains inherent in turbo coded schemes. This thesis shows that a Markov fading channel (MFC) can be conceptualized as a trellis, and that the transmission of a sequence over an MFC can be viewed as a trellis encoding process much like convolutional encoding. It demonstrates that either maximum likelihood sequence estimation (MLSE) algorithms or maximum a posteriori (MAP) algorithms operating over the trellis defined by the MFC can be used for channel estimation. Furthermore, the thesis illustrates sequential and decision-directed techniques for using the aforementioned trellis-based channel estimators en masse with an iterative decoder. / Master of Science
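The simplest hidden Markov fading/burst-error model of this family is the two-state Gilbert-Elliott channel: a hidden good/bad state chain modulates the bit-error probability, producing the bursty errors that memoryless AWGN models cannot. The sketch below uses illustrative transition and error probabilities, not parameters from the thesis.

```python
import random

def gilbert_elliott(nbits, p_gb=0.05, p_bg=0.2,
                    e_good=0.001, e_bad=0.3, seed=1):
    # Hidden state chain: 'good' <-> 'bad'; each state has its own
    # bit-error probability, so errors cluster into bursts.
    rng = random.Random(seed)
    state = 'good'
    errors = []
    for _ in range(nbits):
        e = e_good if state == 'good' else e_bad
        errors.append(1 if rng.random() < e else 0)
        if state == 'good' and rng.random() < p_gb:
            state = 'bad'
        elif state == 'bad' and rng.random() < p_bg:
            state = 'good'
    return errors
```

With these rates the chain spends a fraction p_gb / (p_gb + p_bg) = 0.2 of its time in the bad state, giving an average bit-error rate near 0.06, concentrated in bursts; unrolling such a state chain over time is exactly the trellis view the thesis exploits.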
17

Robust Blind Spectral Estimation in the Presence of Impulsive Noise

Kees, Joel Thomas 07 March 2019 (has links)
Robust nonparametric spectral estimation means generating an accurate estimate of the Power Spectral Density (PSD) of a data set while minimizing the bias due to outliers. It is applied in electrical communications and digital signal processing when a PSD estimate of the electromagnetic spectrum is desired (often for signal detection) and the spectrum is contaminated by Impulsive Noise (IN). Power Line Communication (PLC) is one communication environment where IN is a concern, because power lines were not designed to carry communication signals. Many noise models are used to statistically describe different types of IN; one popular model, used for PLC and various other applications, is the Middleton Class A model, which is used extensively in this thesis. The performance of two nonparametric spectral estimation methods is analyzed in IN: the Welch method and the multitaper method. These estimators work well under the common assumption that the receiver noise is Additive White Gaussian Noise (AWGN), but their performance degrades when they are used for signal detection in IN environments. In this thesis, basic robust estimation theory is used to modify the Welch and multitaper methods to increase their robustness, and it is shown that the modified robust estimators improve signal detection in IN. / Master of Science / One application of blind spectral estimation is blind signal detection. Unlike a car radio, which is specifically designed to receive AM and FM radio waves, it is sometimes useful for a radio to detect the presence of transmitted signals whose characteristics are not known ahead of time; cognitive radio is one application where this capability is useful. Often signal detection is inhibited by Additive White Gaussian Noise (AWGN). This is analogous to trying to hear a friend speak (signal detection) in a room full of people talking (background AWGN). However, some noise environments are more impulsive in nature: in the same analogy, the background noise could be loud banging from machinery, less constant than the chatter of the crowd but much louder. When power lines are used as a medium for electromagnetic communication (instead of just delivering power), this is called Power Line Communication (PLC), and PLC is a good example of a system with an impulsive noise environment. In this thesis, methods used for blind spectral estimation are modified to work reliably (robustly) in impulsive noise environments.
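One simple way to robustify a Welch-style estimate, shown as an illustration here and not claimed to be the thesis's exact estimator, is to combine the per-segment periodograms with a median instead of a mean: a single impulsive segment then cannot drag the estimate upward.

```python
import numpy as np

def welch_psd(x, seg_len, robust=False):
    # 50%-overlapping segments, Hann window, per-segment periodograms.
    segs = [x[i:i + seg_len]
            for i in range(0, len(x) - seg_len + 1, seg_len // 2)]
    w = np.hanning(seg_len)
    pgrams = [np.abs(np.fft.rfft(s * w)) ** 2 / np.sum(w ** 2)
              for s in segs]
    # Classic Welch averages the periodograms; the robust variant
    # takes their median across segments instead.
    combine = np.median if robust else np.mean
    return combine(np.asarray(pgrams), axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
x[100] += 200.0                 # one strong impulse hits one segment
psd_mean = welch_psd(x, 256)
psd_med = welch_psd(x, 256, robust=True)
```

The impulse inflates the mean-combined estimate across all bins, while the median-combined estimate stays near the white-noise floor, which is the behavior that improves detection of weak signals in impulsive noise.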
18

Desenvolvimento de ferramenta de comparação de técnicas de processamento de sinais para determinar fadiga muscular por meio do sinal EMG / Toolkit development for comparing signal processing techniques to detect muscular fatigue by EMG signal

CAMPOS, Ramon de Freitas Elias 09 July 2012 (has links)
This study aimed to develop a computational tool for electromyographic (EMG) signal analysis using signal processing techniques to determine muscular fatigue. With the approval of the Ethics Committee of the Federal University of Goiás, signals were recorded from the dominant biceps brachii of 10 volunteers with no history of muscular disease. The protocol consisted of obtaining the maximal voluntary isometric contraction (MVIC) with the volunteer seated, feet on the floor, and forearm at 90 degrees, performing three maximal voluntary contractions against a rigid, fixed surface for five seconds each, with five minutes of rest between acquisitions. The MVIC value was obtained as the arithmetic mean of the three greatest values of each contraction. For static-contraction analysis, the volunteer sustained a load of 60% MVIC for 30 seconds, or for as long as they could. For dynamic-contraction analysis, an electrogoniometer attached to the forearm measured the joint angle while the volunteer carried a 60% MVIC load for 30 seconds, flexing from 70° to 130° and returning to 70°, each flexion performed in 1.5 seconds, or for as long as the volunteer could sustain it. In the time domain, root mean square (RMS) values and the continuous wavelet transform (CWT) were used to analyze the signal. In the frequency domain, the mean and median values of the fast Fourier transform (FFT), the Welch spectral estimator, the autoregressive moving average (ARMA) filter, and the analytic wavelet transform (AWT) were adopted. Linear regressions were obtained using a 250 ms window for all techniques; positive slopes in the time domain and negative slopes in the frequency domain indicate muscular fatigue. The wavelet transform at high scales showed better results than the standard techniques RMS and FFT. One-way ANOVA was adopted as the statistical method, with P < 0.05; only the dynamic contraction in the frequency domain yielded P < 0.05. Tukey's post hoc test was then applied to identify which pairs of techniques differed with variance greater than 5%. Suggested future work includes a wireless EMG acquisition system, improvement of the software for motor unit action potential (MUAP) detection, prosthesis control, and synchronization with other data collection systems.
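The time-domain indicator described above, windowed RMS followed by a linear-regression slope, can be sketched as follows. The synthetic "EMG" is just Gaussian noise with linearly growing amplitude, standing in for a fatiguing contraction; window length and rates are illustrative, not the study's data.

```python
import numpy as np

def rms_trend(signal, fs, win_ms=250):
    # Windowed RMS (250 ms windows, as in the study) followed by a
    # linear fit; a positive slope indicates fatigue in the time domain.
    win = int(fs * win_ms / 1000)
    rms = [np.sqrt(np.mean(signal[i:i + win] ** 2))
           for i in range(0, len(signal) - win + 1, win)]
    t = np.arange(len(rms)) * win_ms / 1000.0
    slope, _ = np.polyfit(t, rms, 1)
    return slope

fs = 1000.0                             # 1 kHz sampling (illustrative)
t = np.arange(0, 30, 1 / fs)            # 30 s contraction
rng = np.random.default_rng(0)
emg = (1 + 0.05 * t) * rng.standard_normal(t.size)  # growing amplitude
slope = rms_trend(emg, fs)
```

The frequency-domain indicators in the study work the same way with the opposite sign convention: the mean or median spectral frequency of each window is regressed against time, and a negative slope indicates fatigue.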
19

Training of Hidden Markov models as an instance of the expectation maximization algorithm

Majewsky, Stefan 22 August 2017 (has links)
In Natural Language Processing (NLP), speech and text are parsed and generated with language models and parser models, and translated with translation models. Each model contains a set of numerical parameters which are found by applying a suitable training algorithm to a set of training data. Many such training algorithms are instances of the Expectation-Maximization (EM) algorithm. In [BSV15], a generic EM algorithm for NLP is described. This work presents a particular speech model, the Hidden Markov model, and its standard training algorithm, the Baum-Welch algorithm. It is then shown that the Baum-Welch algorithm is an instance of the generic EM algorithm introduced by [BSV15], from which it follows that all statements about the generic EM algorithm also apply to the Baum-Welch algorithm, especially its correctness and convergence properties.
Contents: 1 Introduction (1.1 N-gram models; 1.2 Hidden Markov model); 2 Expectation-maximization algorithms (2.1 Preliminaries; 2.2 Algorithmic skeleton; 2.3 Corpus-based step mapping; 2.4 Simple counting step mapping; 2.5 Regular tree grammars; 2.6 Inside-outside step mapping; 2.7 Review); 3 The Hidden Markov model (3.1 Forward and backward algorithms; 3.2 The Baum-Welch algorithm; 3.3 Deriving the Baum-Welch algorithm: 3.3.1 Model parameter and countable events, 3.3.2 Tree-shaped hidden information, 3.3.3 Complete-data corpus, 3.3.4 Inside weights, 3.3.5 Outside weights, 3.3.6 Complete-data corpus (cont.), 3.3.7 Step mapping; 3.4 Review); Appendix A Elided proofs from Chapter 3 (A.1 Proof of Lemma 3.8; A.2 Proof of Lemma 3.9); B Formulary for Chapter 3; Bibliography
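The forward algorithm named in the outline computes the likelihood on which the Baum-Welch E-step is built. The sketch below shows it on a tiny two-state HMM with invented parameters; it is an illustration of the standard recursion, not code from this work.

```python
def forward(obs, states, start_p, trans_p, emit_p):
    # alpha[s] after step t equals P(o_1..o_t, q_t = s); summing the
    # final alphas gives the sequence likelihood P(o_1..o_T).
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[p] * trans_p[p][s]
                                       for p in states)
                 for s in states}
    return sum(alpha.values())

states = ('A', 'B')
start = {'A': 0.6, 'B': 0.4}
trans = {'A': {'A': 0.7, 'B': 0.3}, 'B': {'A': 0.4, 'B': 0.6}}
emit = {'A': {'x': 0.5, 'y': 0.5}, 'B': {'x': 0.1, 'y': 0.9}}
```

Baum-Welch combines these forward probabilities with the symmetric backward probabilities to form expected state and transition counts, which is exactly the E-step in the EM formulation the thesis analyzes.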
20

Mätosäkerhet vid kalibrering av referensutrustning för blodtrycksmätning : En modell för framtagning av mätosäkerhet för referensmanometer WA 767 / Measurement uncertainty for calibration of reference equipment for blood pressure measurement : A model for obtaining measurement uncertainty of reference manometer WA 767

Patzauer, Rebecka, Wessel, Elin January 2016 (has links)
The Department of Medical Technology at Akademiska sjukhuset has updated its calibration protocol for the Welch Allyn 767, which serves as the reference manometer when blood pressure meters are calibrated. According to ISO 9001 and ISO 13485, the protocol must state the measurement uncertainty at every calibration point, but the routines for this were undefined. A model for obtaining measurement uncertainty was designed using standardized methods from the "Guide to the expression of uncertainty in measurement" and adapted for use at the department. A calibration measurement method was defined and the model was used to calculate the measurement uncertainty of a reference manometer; with the defined method, the uncertainty was smaller than the ± 3 mmHg specified by Welch Allyn. Propagation of uncertainty from calibration to blood pressure measurement was also investigated: the measurement uncertainty increased at every step. The department should therefore introduce a protocol for how calibration is performed, thereby improving traceability.
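The GUM procedure referenced above combines uncorrelated standard uncertainties in quadrature and multiplies by a coverage factor k (k = 2 for roughly 95% coverage) to get the expanded uncertainty. The sketch below illustrates that arithmetic; the contribution values are invented, not the thesis's uncertainty budget.

```python
import math

def combined_uncertainty(contributions):
    # contributions: standard uncertainties u_i in the same unit
    # (e.g. mmHg), assumed uncorrelated; combine in quadrature.
    return math.sqrt(sum(u * u for u in contributions))

def expanded_uncertainty(contributions, k=2.0):
    # Expanded uncertainty U = k * u_c, k = 2 for ~95 % coverage.
    return k * combined_uncertainty(contributions)

# Hypothetical budget: resolution, repeatability, and drift terms (mmHg)
u = expanded_uncertainty([0.5, 0.8, 0.3])
```

The propagation effect the thesis observes follows directly from this rule: each stage from reference calibration to patient measurement adds its own terms under the square root, so the combined uncertainty can only grow along the traceability chain.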
