  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Sequential Detection of Misbehaving Relay in Cooperative Networks

Yi, Young-Ming 02 September 2012 (has links)
To combat channel fading, cooperative communication achieves spatial diversity for the transmission between source and destination through the help of a relay. However, if the relay behaves abnormally or maliciously and the destination is unaware of it, the diversity gain of the cooperative system is significantly reduced, degrading system performance. In this thesis, we consider a one-relay decode-and-forward cooperative network and assume that the relay may misbehave with a certain probability. If the relay is malicious, it garbles the transmitted signal, causing severe damage to the cooperative system. In this work, we discuss three kinds of malicious-behavior detection; specifically, we adopt sequential detection to determine the behavior of the relay. If tracing symbols are inserted into the source message, the destination detects malicious behavior after extracting the received tracing symbols: we apply a log-likelihood ratio test to these symbols and then decide on the behavior of the relay. If the source does not transmit tracing symbols, the destination detects misbehavior from the received data signal instead. Furthermore, we employ sequential detection to reduce detection time for given probabilities of false alarm and miss detection. Simulation results show that, for a given target error probability, the proposed methods effectively reduce the number of observations. In other words, the destination can effectively detect relay misbehavior and eliminate the damage caused by a malicious relay without requiring a large number of observations.
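The sequential detection used here is, at its core, Wald's sequential probability ratio test (SPRT): accumulate per-symbol log-likelihood ratios and stop as soon as the running sum crosses a decision threshold. The sketch below is a generic SPRT, not the thesis's exact detector; the thresholds use Wald's standard approximations in terms of the target false-alarm and miss-detection probabilities.

```python
import math

def sprt(llr_increments, p_fa=0.01, p_md=0.01):
    """Wald's sequential probability ratio test (SPRT).

    Accumulates per-symbol log-likelihood ratios and stops as soon as
    the running sum crosses either threshold, so the expected number of
    observations is smaller than for a fixed-sample test with the same
    false-alarm (p_fa) and miss-detection (p_md) probabilities.
    """
    upper = math.log((1.0 - p_md) / p_fa)   # decide H1: relay is malicious
    lower = math.log(p_md / (1.0 - p_fa))   # decide H0: relay is honest
    s = 0.0
    for n, llr in enumerate(llr_increments, start=1):
        s += llr
        if s >= upper:
            return "malicious", n
        if s <= lower:
            return "honest", n
    return "undecided", len(llr_increments)
```

With strongly informative increments the test terminates after only a handful of observations, which is exactly the observation-count saving the abstract describes.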
2

Maximum Likelihood Estimation of Hyperon Parameters in Python : Facilitating Novel Studies of Fundamental Symmetries with Modern Software Tools

Verbeek, Benjamin January 2021 (has links)
In this project, an algorithm has been implemented in Python to estimate the parameters describing the production and decay of a spin-1/2 baryon-antibaryon pair. This decay can give clues about a fundamental asymmetry between matter and antimatter. A model-independent formalism developed by the Uppsala hadron physics group, previously implemented in C++, has been shown to be a promising tool in the search for physics beyond the Standard Model (SM) of particle physics. The program developed in this work provides a more user-friendly alternative and is intended to motivate further use of the formalism through a more maintainable, customizable and readable implementation. The hope is that this will expedite future research on charge-parity (CP) violation and eventually lead to answers to questions such as why the universe consists of matter. A Monte Carlo integrator is used for normalization and a Python library for function minimization. The program returns an estimate of the physics parameters, including error estimates. Tests of statistical properties of the estimator, such as consistency and bias, have been performed. To speed up the implementation, the Just-In-Time compiler Numba has been employed, which resulted in a speedup by a factor of about 400 compared to plain Python code.
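The Numba-accelerated maximum-likelihood pattern described here can be sketched on a toy problem. The angular distribution f(x; α) = (1 + αx)/2 on [-1, 1] below is a stand-in for the Uppsala formalism (which is far richer), and the `@njit` decorator is optional, falling back to pure Python when Numba is absent:

```python
import numpy as np

try:
    from numba import njit          # optional: JIT-compile the hot loop
except ImportError:                 # pure-Python fallback if Numba is absent
    def njit(f):
        return f

@njit
def score(alpha, x):
    """Derivative of the log-likelihood for f(x; alpha) = (1 + alpha*x)/2."""
    s = 0.0
    for xi in x:
        s += xi / (1.0 + alpha * xi)
    return s

def fit_alpha(x, lo=-0.99, hi=0.99, tol=1e-8):
    """MLE of alpha by bisecting the score function (which is monotone)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if score(mid, x) > 0.0:     # log-likelihood still increasing
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Draw from (1 + 0.4*x)/2 on [-1, 1] by accept-reject sampling.
rng = np.random.default_rng(0)
alpha_true = 0.4
x = rng.uniform(-1.0, 1.0, 50_000)
u = rng.uniform(0.0, (1.0 + alpha_true) / 2.0, x.size)
x = x[u < (1.0 + alpha_true * x) / 2.0]
alpha_hat = fit_alpha(x)            # should land close to 0.4
```

The JIT pays off because the score evaluation is a tight numeric loop over tens of thousands of events, the same shape of workload as the thesis's likelihood evaluation.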
3

On a turbo decoder design for low power dissipation

Fei, Jia 21 July 2000 (has links)
A new coding scheme called "turbo coding" has generated tremendous interest in channel coding of digital communication systems due to its high error correcting capability. Two key innovations in turbo coding are parallel concatenated encoding and iterative decoding. A soft-in soft-out component decoder can be implemented using the maximum a posteriori (MAP) or the maximum likelihood (ML) decoding algorithm. While the MAP algorithm offers better performance than the ML algorithm, the computation is complex and not suitable for hardware implementation. The log-MAP algorithm, which performs the necessary computations in the logarithm domain, greatly reduces hardware complexity. With the proliferation of battery-powered devices, power dissipation, along with speed and area, is a major concern in VLSI design. In this thesis, we investigated a low-power design of a turbo decoder based on the log-MAP algorithm. Our turbo decoder has two component log-MAP decoders, which perform the decoding process alternately. Two major ideas for low-power design are employment of a variable number of iterations during the decoding process and shutdown of inactive component decoders. The number of iterations during decoding is determined dynamically according to the channel condition to save power. When a component decoder is inactive, the clocks and spurious inputs to the decoder are blocked to reduce power dissipation. We followed the standard cell design approach to design the proposed turbo decoder. The decoder was described in VHDL, and then synthesized to measure the performance of the circuit in area, speed and power. Our decoder achieves good performance in terms of bit error rate. The two proposed methods significantly reduce power dissipation and energy consumption. / Master of Science
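The core primitive that makes log-MAP hardware-friendly is the Jacobian logarithm ("max-star"), which computes ln(e^a + e^b) without leaving the logarithm domain; the max-log-MAP variant drops the correction term entirely. A minimal illustration:

```python
import math

def max_star(a, b):
    """Jacobian logarithm: ln(e^a + e^b), computed stably.

    log-MAP uses this exact form (the correction term is typically a
    small lookup table in hardware); max-log-MAP keeps only max(a, b),
    trading a small BER loss for simpler circuitry.
    """
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_approx(a, b):
    """max-log-MAP approximation: drop the correction term."""
    return max(a, b)
```

The correction term is bounded by ln 2 ≈ 0.693 (reached when a = b), which is why a coarse lookup table suffices in practice.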
4

Analytical Methods for the Performance Evaluation of Binary Linear Block Codes

Chaudhari, Pragat January 2000 (has links)
The modeling of the soft-output decoding of a binary linear block code using a Binary Phase Shift Keying (BPSK) modulation system (with reduced noise power) is the main focus of this work. With this model, it is possible to provide bit error performance approximations to help in the evaluation of the performance of binary linear block codes. As well, the model can be used in the design of communications systems which require knowledge of the characteristics of the channel, such as combined source-channel coding. Assuming an Additive White Gaussian Noise channel model, soft-output Log Likelihood Ratio (LLR) values are modeled to be Gaussian distributed. The bit error performance for a binary linear code over an AWGN channel can then be approximated using the Q-function that is used for BPSK systems. Simulation results are presented which show that the actual bit error performance of the code is very well approximated by the LLR approximation, especially for low signal-to-noise ratios (SNR). A new measure of the coding gain achievable through the use of a code is introduced by comparing the LLR variance to that of an equivalently scaled BPSK system. Furthermore, arguments are presented which show that the approximation requires fewer samples than conventional simulation methods to obtain the same confidence in the bit error probability value. This translates into fewer computations and therefore less time needed to obtain performance results. Other work was completed that uses a discrete Fourier transform technique to calculate the weight distribution of a linear code. The weight distribution of a code is defined by the number of codewords which have a certain number of ones in the codewords. For codeword lengths of small to moderate size, this method is faster than other methods and provides an easily implemented, methodical approach.
This technique has the added advantage over other techniques of being able to methodically calculate the number of codewords of a particular Hamming weight instead of calculating the entire weight distribution of the code.
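The Q-function machinery this abstract relies on is compact enough to sketch directly. Below, `ber_bpsk` is the textbook uncoded-BPSK error rate over AWGN, and `ber_from_llr` is the generic form of the approximation described above, under the abstract's assumption that the soft-output LLRs are Gaussian with mean `mu` and standard deviation `sigma` for the transmitted bit:

```python
import math

def qfunc(x):
    """Gaussian Q-function, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_bpsk(ebno_db):
    """Uncoded BPSK bit error rate over AWGN: Q(sqrt(2 * Eb/N0))."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return qfunc(math.sqrt(2.0 * ebno))

def ber_from_llr(mu, sigma):
    """BER approximation when soft-output LLRs are modeled as Gaussian:
    a bit error occurs when the LLR falls on the wrong side of zero,
    which happens with probability Q(mu / sigma)."""
    return qfunc(mu / sigma)
```

At Eb/N0 = 0 dB this gives the familiar uncoded BPSK figure of about 7.9e-2; the coded case simply substitutes the fitted LLR mean and variance.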
6

Integration of Hidden Markov Modelling and Bayesian Network for Fault Detection and Prediction of Complex Engineered Systems

Soleimani, Morteza, Campean, Felician, Neagu, Daniel 07 June 2021 (has links)
This paper presents a methodology for fault detection, fault prediction and fault isolation based on the integration of hidden Markov modelling (HMM) and Bayesian networks (BN). This addresses the nonlinear and non-Gaussian data characteristics to support fault detection and prediction, within an explainable hybrid framework that captures causality in the complex engineered system. The proposed methodology is based on the analysis of the pattern of similarity in the log-likelihood (LL) sequences against the training data for the mixture-of-Gaussians HMM (MoG-HMM). The BN model identifies the root cause of detected/predicted faults, using the information propagated from the HMM model as empirical evidence. The feasibility and effectiveness of the presented approach are discussed in conjunction with the application to a real-world case study of an automotive exhaust gas aftertreatment system. The paper details the implementation of the methodology to this case study, with data available from real-world usage of the system. The results show that the proposed methodology identifies the fault faster and attributes the fault to the correct root cause. While the proposed methodology is illustrated with an automotive case study, it applies much more widely to the fault detection and prediction problem of any similar complex engineered system.
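The detection principle (comparing log-likelihood sequences of new data against training-data likelihoods) can be illustrated with a minimal forward-algorithm sketch. This toy uses a single Gaussian per state rather than a full mixture, and the two-state model parameters are invented for illustration, not taken from the paper:

```python
import numpy as np

def gaussian_hmm_loglik(obs, pi, A, means, variances):
    """Log-likelihood of a 1-D observation sequence under a Gaussian HMM,
    computed with the scaled forward algorithm."""
    def emis(x):
        return np.exp(-0.5 * (x - means) ** 2 / variances) / np.sqrt(2.0 * np.pi * variances)

    alpha = pi * emis(obs[0])
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()                 # rescale to avoid underflow
    for x in obs[1:]:
        alpha = (alpha @ A) * emis(x)
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

# Hypothetical two-state model: "nominal" emissions near 0, "degraded" near 5.
pi = np.array([0.9, 0.1])
A = np.array([[0.95, 0.05], [0.10, 0.90]])
means, variances = np.array([0.0, 5.0]), np.array([1.0, 1.0])

healthy = np.zeros(50)                   # consistent with the nominal state
faulty = np.full(50, 10.0)               # far from both states
ll_h = gaussian_hmm_loglik(healthy, pi, A, means, variances)
ll_f = gaussian_hmm_loglik(faulty, pi, A, means, variances)
```

A fault is flagged when the log-likelihood of incoming data drops well below the range of log-likelihoods observed on training data, which is the pattern-of-similarity idea the paper builds on.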
7

Inferences on the power-law process with applications to repairable systems

Chumnaul, Jularat 13 December 2019 (has links)
System testing is very time-consuming and costly, especially for complex high-cost and high-reliability systems. For this reason, the number of failures needed for the developmental phase of system testing should in general be relatively small. To assess the reliability growth of a repairable system, the generalized confidence interval and the modified signed log-likelihood ratio test for the scale parameter of the power-law process are studied for incomplete failure data. Specifically, some recorded failure times in the early developmental phase of system testing cannot be observed; this circumstance is essential to establishing a warranty period or determining a maintenance phase for repairable systems. For the proposed generalized confidence interval, we have found that the method is essentially unbiased, as the coverage probabilities obtained from it are close to the nominal level of 0.95 for all levels of γ and β. When the proposed and existing methods are compared and validated in terms of average widths, the simulation results show that the proposed method is superior, producing shorter average widths when the predetermined number of failures is small. For the proposed modified signed log-likelihood ratio test, we have found that the test controls type I error well for complete failure data and has desirable power for all parameter configurations, even for small numbers of failures. For incomplete failure data, the proposed modified signed log-likelihood ratio test is preferable to the signed log-likelihood ratio test in most situations in terms of controlling type I error. Moreover, the proposed test also performs well when the missing ratio is up to 30% and n > 10. In terms of empirical power, the proposed modified signed log-likelihood ratio test is superior to the existing test in most situations.
In conclusion, the proposed methods (the generalized confidence interval and the modified signed log-likelihood ratio test) are practically useful for saving cost and time during the developmental phase of system testing, since only a small number of failures is required to test systems, and they yield precise results.
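For context, the power-law process has intensity λ(t) = (β/θ)(t/θ)^(β-1), and for complete, failure-truncated data its maximum likelihood estimates have a well-known closed form. The sketch below shows those textbook estimates only; it is not the thesis's generalized confidence interval or modified signed log-likelihood ratio test, which address the harder incomplete-data setting:

```python
import math

def powerlaw_mle(times):
    """Closed-form MLEs for the power-law (Crow-AMSAA) process with
    intensity lambda(t) = (beta/theta) * (t/theta)**(beta - 1),
    from failure-truncated data: observation stops at the n-th failure.

    beta_hat  = n / sum_{i<n} ln(t_n / t_i)
    theta_hat = t_n / n**(1/beta_hat)
    """
    n = len(times)
    tn = times[-1]
    beta = n / sum(math.log(tn / ti) for ti in times[:-1])
    theta = tn / n ** (1.0 / beta)
    return beta, theta
```

Values of β̂ below 1 indicate reliability growth (failures arriving ever more slowly), which is the quantity of interest during developmental testing.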
8

Bayesian Inference for Bivariate Conditional Copula Models with Continuous or Mixed Outcomes

Sabeti, Avideh 12 August 2013 (has links)
The main goal of this thesis is to develop Bayesian models for studying the influence of covariates on the dependence between random variables. Conditional copula models are flexible tools for modelling complex dependence structures. We construct Bayesian inference for the conditional copula model adapted to regression settings in which the bivariate outcome is continuous or mixed (binary and continuous) and the copula parameter varies with covariate values. The functional relationship between the copula parameter and the covariate is modelled using cubic splines. We also extend our work to additive models, which allow us to handle more than one covariate while keeping the computational burden within reasonable limits. We perform the proposed joint Bayesian inference via adaptive Markov chain Monte Carlo sampling. The deviance information criterion and the cross-validated marginal log-likelihood criterion are employed for three model selection problems: 1) choosing the copula family that best fits the data, 2) selecting the calibration function, i.e., checking whether a parametric form for the copula parameter is suitable, and 3) determining the number of independent variables in the additive model. The performance of the estimation and model selection techniques is investigated via simulations and demonstrated on two data sets: 1) Matched Multiple Birth and 2) Burn Injury. In the former, the interest is in the influence of gestational age and maternal age on twin birth weights, whereas in the latter we investigate how a patient's age affects the severity of burn injury and the probability of death.
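The conditional-copula likelihood has a simple shape: a copula density evaluated at the marginal probability transforms, with the copula parameter driven by a calibration function of the covariate. The sketch below uses the Clayton family and a polynomial stand-in for the thesis's cubic-spline calibration; it is the likelihood building block only, not the Bayesian MCMC machinery:

```python
import numpy as np

def clayton_logdensity(u, v, theta):
    """Log density of the Clayton copula (theta > 0) at points (u, v) in (0,1)^2."""
    return (np.log1p(theta)
            - (1.0 + theta) * (np.log(u) + np.log(v))
            - (2.0 + 1.0 / theta) * np.log(u ** -theta + v ** -theta - 1.0))

def conditional_loglik(coef, u, v, x):
    """Log-likelihood with a covariate-varying copula parameter:
    theta(x) = exp(eta(x)), where eta is a polynomial in the covariate x,
    standing in for the cubic-spline calibration function of the thesis.
    The exp link keeps theta in the Clayton family's valid range."""
    eta = np.polyval(coef, x)
    theta = np.exp(eta)
    return clayton_logdensity(u, v, theta).sum()
```

In the full model, `conditional_loglik` would be the data term inside an adaptive MCMC sampler over the calibration coefficients (plus marginal parameters), with DIC or cross-validated marginal log-likelihood comparing candidate copula families and calibration forms.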
10

Kritická analýza jazykových ideologií v českém veřejném diskurzu / Critical Analysis of Language Ideologies in Czech Public Discourse

Dufek, Ondřej January 2018 (has links)
The thesis deals with language ideologies in Czech public discourse. After introducing its topic, motivation and structure in the opening chapter, it devotes the second chapter to a thorough analysis of the research field of language ideologies. It presents various ways of defining them, two different approaches to them, and a few key features which characterize language ideologies. The relation of language ideologies to other related notions is outlined, and possibilities and ways of investigation are surveyed. Some remarks focus on existing lists or glossaries of language ideologies. The core of this chapter is an original, complex definition of language ideologies grounded in a critical reflection of approaches to date. The third chapter summarizes relevant existing findings and, on that basis, formulates the main aim of the thesis: to contribute to knowledge of the foundations and ways of conceptualizing language in Czech public discourse. The fourth chapter elaborates the methodological frame of the thesis. Critical discourse analysis is chosen as a basis: its fundamentals are summarized, the main critical comments are considered, and partial solutions are proposed through the use of corpus-linguistic tools. Another part of this chapter concerns keyness as one of the dominant principles used...
