331

Optimal Design and Inference for Correlated Bernoulli Variables using a Simplified Cox Model

Bruce, Daniel January 2008 (has links)
This thesis proposes a simplification of the model for dependent Bernoulli variables presented in Cox and Snell (1989). The simplified model, referred to as the simplified Cox model, is developed for identically distributed and dependent Bernoulli variables. Properties of the model are presented, including expressions for the log-likelihood function and the Fisher information. The special case of a bivariate symmetric model is studied in detail. For this particular model, it is found that the number of design points in a locally D-optimal design is determined by the log-odds ratio between the variables. Under mutual independence, both a general expression for the restrictions on the parameters and an analytical expression for locally D-optimal designs are derived. Focusing on the bivariate case, score tests and likelihood ratio tests are derived to test for independence. Numerical illustrations of these test statistics are presented in three examples. In connection with testing for independence, an E-optimal design for maximizing the local asymptotic power of the score test is proposed. The simplified Cox model is applied to a dental data set. Based on the estimates of the model, optimal designs are derived. The analysis shows that these optimal designs yield considerably more precise parameter estimates than the original design. The original design is also compared against the E-optimal design with respect to the power of the score test. For most alternative hypotheses, the E-optimal design provides larger power than the original design.
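For readers skimming the abstract, the bivariate symmetric case and the design criterion can be written compactly. The notation below is a standard parameterization of a pair of dependent Bernoulli variables and of local D-optimality; it is illustrative and not necessarily the exact notation used in the thesis.

    % Standard parameterization of a pair of dependent Bernoulli variables
    % (Y_1, Y_2) with joint probabilities p_{y_1 y_2} and common marginal p:
    \[
      \psi \;=\; \log\frac{p_{11}\,p_{00}}{p_{10}\,p_{01}},
      \qquad p_{10} = p_{01} \ \text{(symmetric case)},
      \qquad p_{11}+p_{10}+p_{01}+p_{00}=1 ,
    \]
    % with independence corresponding to \psi = 0.  A locally D-optimal
    % design \xi maximizes the determinant of the Fisher information
    % evaluated at a prior guess \theta_0 of the parameters:
    \[
      \xi^{*} \;=\; \arg\max_{\xi}\, \det M(\xi,\theta_0),
      \qquad M(\xi,\theta_0) \;=\; \sum_{i} w_i\, I(x_i,\theta_0).
    \]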
333

Performance Comparison Of Message Passing Decoding Algorithms For Binary And Non-binary Low Density Parity Check (ldpc) Codes

Uzunoglu, Cihan 01 December 2007 (has links) (PDF)
In this thesis, we investigate the basics of Low-Density Parity-Check (LDPC) codes over binary and non-binary alphabets. We especially focus on message passing decoding algorithms, which use different message definitions such as a posteriori probabilities, log-likelihood ratios, and Fourier transforms of probabilities. We present simulation results that compare the performance of small block length binary and non-binary LDPC codes with regular and irregular structures over the GF(2), GF(4), and GF(8) alphabets. By comparing LDPC codes with mean variable node degrees of 3, 2.8, and 2.6, we observe that non-binary alphabets improve performance provided the mean column weight is chosen carefully, since this weight governs the relative ordering of the GF(2), GF(4), and GF(8) performances.
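As background for the message definitions mentioned in the abstract, the check-node update of sum-product decoding has a compact form in each domain. The formulas below are standard textbook expressions (and ignore the edge-label permutations used in non-binary codes); they are not taken from the thesis itself.

    % Binary sum-product check-node update in the LLR domain ("tanh rule"):
    % the outgoing message on edge j combines the incoming LLRs L_i from all
    % other edges of the check node.
    \[
      \tanh\!\Big(\tfrac{1}{2}L_{\mathrm{out},j}\Big)
      \;=\; \prod_{i\neq j}\tanh\!\Big(\tfrac{1}{2}L_i\Big).
    \]
    % Over GF(q) the check-node operation is a convolution of probability
    % vectors, which becomes a component-wise product under the Fourier
    % transform on GF(q) (a Hadamard transform when q = 2^m):
    \[
      \mathbf{m}_{\mathrm{out},j}
      \;=\; \mathcal{F}^{-1}\!\Big(\,\prod_{i\neq j}\mathcal{F}(\mathbf{m}_i)\Big).
    \]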
334

On applications of puncturing in error-correction coding

Klinc, Demijan 05 April 2011 (has links)
This thesis investigates applications of puncturing in error-correction coding and physical layer security, with an emphasis on binary and non-binary LDPC codes. A theoretical framework for the analysis of punctured binary LDPC codes at short block lengths is developed, and a novel decoding scheme is designed that achieves considerably faster convergence than conventional approaches. Subsequently, optimized puncturing and shortening are studied for non-binary LDPC codes over binary-input channels. A framework for the analysis of punctured and shortened non-binary LDPC codes over the binary erasure channel (BEC) is developed, which enables the optimization of puncturing and shortening patterns. Insight from this analysis is used to develop well-performing algorithms for puncturing and shortening non-binary LDPC codes at finite block lengths. It is confirmed that symbol-wise puncturing generally performs poorly and that bit-wise punctured non-binary LDPC codes can significantly outperform their binary counterparts, making them an attractive solution for future communication systems, both for error correction and for distributed compression. Puncturing is also considered in the context of physical layer security. It is shown that puncturing can be used effectively for coding over the wiretap channel to hide the message bits from eavesdroppers, and it is shown how puncturing patterns can be optimized for enhanced secrecy. Asymptotic analysis confirms that eavesdroppers are forced to operate at bit error rates very close to 0.5, even if their signal is only slightly worse than that of the legitimate receivers. The proposed coding scheme is naturally applicable at finite block lengths and allows for efficient, almost-linear-time encoding. Finally, it is shown how error-correcting codes can be used to solve the open problem of compressing data encrypted with block ciphers such as AES. Coding schemes for multiple chaining modes are proposed, and it is verified that considerable compression gains are attainable for binary sources.
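To make the notion of puncturing concrete, the sketch below shows the usual way punctured LDPC codewords are handled: punctured positions are simply not transmitted, and the decoder initializes them as erasures (zero log-likelihood ratio) to be recovered through the parity checks. The BPSK/AWGN channel, function names, and numbers are illustrative assumptions, not the setup used in the thesis.

    import numpy as np

    def puncture(codeword, pattern):
        """Transmit only the positions where pattern == 1."""
        return codeword[pattern == 1]

    def channel_llrs(received, pattern, noise_var):
        """Rebuild a full-length LLR vector for the decoder.

        Transmitted bits get BPSK/AWGN LLRs; punctured bits get LLR = 0,
        i.e. they are treated as erasures that the LDPC decoder must
        recover through the parity checks.
        """
        llr = np.zeros(pattern.size)
        llr[pattern == 1] = 2.0 * received / noise_var
        return llr

    # Toy usage: a length-8 codeword with two punctured bits.
    rng = np.random.default_rng(0)
    codeword = rng.integers(0, 2, size=8)
    pattern = np.array([1, 1, 1, 0, 1, 1, 0, 1])   # 0 = punctured
    tx = 1.0 - 2.0 * puncture(codeword, pattern)    # BPSK mapping
    rx = tx + rng.normal(scale=0.5, size=tx.size)   # AWGN, variance 0.25
    llrs = channel_llrs(rx, pattern, noise_var=0.25)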
335

Monte Carlo methods for sampling high-dimensional binary vectors

Schäfer, Christian 14 November 2012 (has links) (PDF)
This thesis is concerned with Monte Carlo methods for sampling high-dimensional binary vectors from complex distributions of interest. If the state space is too large for exhaustive enumeration, these methods provide a means of estimating the expected value of some function of interest. Standard approaches are mostly based on random walk type Markov chain Monte Carlo, where the equilibrium distribution of the chain is the distribution of interest and its ergodic mean converges to the expected value. We propose a novel sampling algorithm based on sequential Monte Carlo methodology which copes well with multi-modal problems by virtue of an annealing schedule. The performance of the proposed sequential Monte Carlo sampler depends on the ability to sample proposals from auxiliary distributions which are, in a certain sense, close to the current distribution of interest. The core work of this thesis discusses strategies to construct parametric families for sampling binary vectors with dependencies. The usefulness of this approach is demonstrated in the context of Bayesian variable selection and combinatorial optimization of pseudo-Boolean objective functions.
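A minimal sketch of an annealed sequential Monte Carlo sampler of the kind described above is given below, using an independent product-Bernoulli proposal fitted to the current particle means. The thesis constructs richer parametric families that capture dependencies between components, so this is only a simplified illustration; all function and variable names are invented for the example.

    import numpy as np

    def smc_binary(log_target, dim, n_particles=1000, n_steps=20, seed=0):
        """Annealed SMC for an unnormalized distribution over {0,1}^dim.

        log_target(x) returns the unnormalized log-probability of a binary
        vector x.  The target is tempered as pi_t proportional to
        exp(t * log_target) with t rising from 0 to 1; particles are
        rejuvenated by independent Metropolis moves whose proposal is a
        product of Bernoullis fitted to the current particle means (a crude
        stand-in for dependency-aware parametric families).
        """
        rng = np.random.default_rng(seed)
        x = rng.integers(0, 2, size=(n_particles, dim))      # uniform init
        logw = np.zeros(n_particles)
        temps = np.linspace(0.0, 1.0, n_steps + 1)
        lt = np.array([log_target(xi) for xi in x])
        for t0, t1 in zip(temps[:-1], temps[1:]):
            logw += (t1 - t0) * lt                           # reweight
            w = np.exp(logw - logw.max()); w /= w.sum()
            idx = rng.choice(n_particles, n_particles, p=w)  # resample
            x, lt = x[idx], lt[idx]; logw[:] = 0.0
            p = np.clip(x.mean(axis=0), 0.01, 0.99)          # fit proposal
            log_q = lambda z: (z*np.log(p) + (1-z)*np.log(1-p)).sum(axis=1)
            for _ in range(2):                               # move step
                prop = (rng.random((n_particles, dim)) < p).astype(int)
                lp = np.array([log_target(z) for z in prop])
                log_acc = t1*(lp - lt) + log_q(x) - log_q(prop)
                accept = np.log(rng.random(n_particles)) < log_acc
                x[accept], lt[accept] = prop[accept], lp[accept]
        return x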
336

Root LDPC Codes for Non Ergodic Transmission Channels

Bhutto, Tarique Inayat January 2011 (has links)
A tremendous amount of research has been conducted in modern coding theory in the past few years, and much of the work has been devoted to developing new coding techniques. Low-density parity-check (LDPC) codes are a class of linear block error-correcting codes that provide near-capacity performance on a large collection of data transmission and storage channels, while the Root LDPC codes studied in this thesis admit implementable decoders with manageable complexity. Furthermore, work has been conducted to develop graphical methods for representing LDPC codes. This thesis implements one kind of LDPC code, the Root LDPC code, using an iterative method and calculates its threshold level for binary and non-binary Root LDPC codes. This threshold value can serve as a starting point for further study on this topic. We use C++ as the tool to simulate the code structure and parameters. The results show that the non-binary Root LDPC code provides a higher threshold value than the binary Root LDPC code.
337

Development of a prototype taint tracing tool for security and other purposes

Kargén, Ulf January 2012 (has links)
In recent years there has been an increasing interest in dynamic taint tracing of compiled software as a powerful analysis method for security and other purposes. Most existing approaches are highly application specific and tend to sacrifice precision in favor of performance. In this thesis project, a generic taint tracing tool has been developed that can deliver high-precision taint information. By allowing an arbitrary number of taint labels to be stored for every tainted byte, accurate taint propagation can be achieved for values that are derived from multiple input bytes. The tool has been developed for x86 Linux systems using the dynamic binary instrumentation framework Valgrind. The basic theory of taint tracing and multi-label taint propagation is discussed, as well as the main concepts of implementing a taint tracing tool using dynamic binary instrumentation. The impact of multi-label taint propagation on performance and precision is evaluated. While multi-label taint propagation has a considerable impact on performance, experiments carried out using the tool show that large amounts of taint information are lost with approximate methods that use only one label per tainted byte.
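The central data structure described above, a set of taint labels per tainted byte with propagation that unions the labels of an operation's inputs, can be modelled in a few lines. The sketch below is a simplified illustration of that idea, not the Valgrind-based implementation developed in the thesis.

    class TaintMap:
        """Multi-label taint tracking: each address maps to a set of labels."""

        def __init__(self):
            self.labels = {}                      # address -> set of labels

        def taint(self, addr, label):
            """Mark a byte as derived from the input byte identified by label."""
            self.labels.setdefault(addr, set()).add(label)

        def propagate(self, dst, *srcs):
            """dst := f(srcs): dst inherits the union of all source labels."""
            merged = set()
            for s in srcs:
                merged |= self.labels.get(s, set())
            if merged:
                self.labels[dst] = merged
            else:
                self.labels.pop(dst, None)        # untainted result clears dst

    # Toy usage: c = a + b, where a and b come from input bytes 0 and 7.
    tm = TaintMap()
    tm.taint(0x1000, label=0)                     # a tainted by input byte 0
    tm.taint(0x1008, label=7)                     # b tainted by input byte 7
    tm.propagate(0x1010, 0x1000, 0x1008)          # c depends on bytes {0, 7}
    print(tm.labels[0x1010])                      # {0, 7}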
338

Online, Submodular, and Polynomial Optimization with Discrete Structures / Algorithms Based on Discrete Structures for Online Optimization, Submodular Function Maximization, and Polynomial Function Optimization

Sakaue, Shinsaku 23 March 2020 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Informatics / 甲第22588号 / 情博第725号 / 新制||情||124 (University Library) / Department of Communications and Computer Engineering, Graduate School of Informatics, Kyoto University / Examining committee: Professor Shin-ichi Minato (chair), Professor Atsushi Igarashi, Professor Akihiro Yamamoto / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
339

Cyber-Physical Analysis and Hardening of Robotic Aerial Vehicle Controllers

Taegyu Kim (10716420) 06 May 2021 (has links)
Robotic aerial vehicles (RAVs) have been increasingly deployed in various areas (e.g., commercial, military, scientific, and entertainment). However, RAV security and safety issues can arise not only from the “cyber” domain (e.g., control software) or the “physical” domain (e.g., the vehicle control model) individually, but also from their interplay. Unfortunately, existing work has focused mainly on either “cyber-centric” or “control-centric” approaches, and such a single-domain focus can overlook the security threats caused by the interplay between the cyber and physical domains. In this thesis, we present cyber-physical analysis and hardening to secure RAV controllers. Through a combination of program analysis and vehicle control modeling, we first developed novel techniques to (1) connect the cyber and physical domains and then (2) analyze the individual domains and their interplay. Specifically, we describe how to detect bugs after RAV accidents using provenance (Mayday), how to proactively find bugs using fuzzing (RVFuzzer), and how to patch vulnerable firmware using binary patching (DisPatch). As a result, we have found 91 new bugs in modern RAV control programs; their developers have confirmed 32 cases and patched 11 cases.
340

Models for fitting correlated non-identical bernoulli random variables with applications to an airline data problem

Perez Romo Leroux, Andres January 2021 (has links)
Our research deals with the problem of devising models for fitting non-identical dependent Bernoulli variables and using these models to predict future Bernoulli trials. We focus on modelling and predicting random Bernoulli response variables which meet all of the following conditions: (1) each observed as well as future response corresponds to a Bernoulli trial; (2) the trials are non-identical, having possibly different probabilities of occurrence; (3) the trials are mutually correlated, with an underlying complex trial cluster correlation structure, also allowing for the possible partitioning of trials within clusters into groups, with within-cluster group-level correlation reflected in the correlation structure; and (4) the probability of occurrence and the correlation structure for both observed and future trials can depend on a set of observed covariates. A number of proposed approaches meeting some of the above conditions are present in the current literature. Our research expands on existing statistical and machine learning methods. We propose three extensions to existing models that make use of the above conditions. Each proposed method brings specific advantages for dealing with correlated binary data. The proposed models allow for within-cluster trial grouping to be reflected in the correlation structure. We partition sets of trials into groups that are either explicitly estimated or implicitly inferred. Explicit groups arise from the determination of common covariates; inferred groups arise via imposing mixture models. The main motivation of our research is in modelling and further understanding the potential of introducing binary trial group-level correlations. In a number of applications, it can be beneficial to use models that allow for these types of trial groupings, both for improved predictions and for a better understanding of the behavior of trials. The first model extension builds on the Multivariate Probit model. This model makes use of covariates and other information from former trials to determine explicit trial groupings and predict the occurrence of future trials. We call this the Explicit Groups model. The second model extension uses mixtures of univariate Probit models. This model predicts the occurrence of current trials using estimators of parameters supporting mixture models for the observed trials. We call this the Inferred Groups model. Our third method extends a gradient descent based boosting algorithm that allows for correlation of binary outcomes, called WL2Boost. We refer to our extension of this algorithm as GWL2Boost. Bernoulli trials are divided into observed and future trials, with all trials having associated known covariate information. We apply our methodology to the problem of predicting the set and total number of passengers who will not show up on commercial flights, using covariate information and past passenger data. The models and algorithms are evaluated with regard to their capacity to predict future Bernoulli responses. We compare the proposed models against a set of competing existing models and algorithms using available airline passenger no-show data. We show that our proposed algorithm extension GWL2Boost outperforms top existing algorithms and models that assume independence of binary outcomes in various prediction metrics. / Statistics
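As a concrete illustration of the multivariate-probit view underlying the Explicit Groups model, correlated non-identical Bernoulli trials can be simulated by thresholding a latent multivariate normal vector: the marginal probabilities come from the normal CDF of the latent means, while the latent correlation induces dependence between the binary outcomes. The sketch below uses assumed toy values and is not the airline model fitted in the thesis.

    import numpy as np
    from scipy.stats import norm

    def sample_mvprobit(mu, corr, n_samples, seed=0):
        """Draw correlated, non-identical Bernoulli vectors.

        Latent model: Z ~ N(mu, corr), Y_j = 1{Z_j > 0}, so marginally
        P(Y_j = 1) = Phi(mu_j), while corr controls the dependence
        between the binary outcomes.
        """
        rng = np.random.default_rng(seed)
        z = rng.multivariate_normal(mu, corr, size=n_samples)
        return (z > 0).astype(int)

    # Toy example: three trials with different marginal probabilities and
    # a common latent correlation of 0.5.
    mu = norm.ppf([0.2, 0.5, 0.8])          # marginals 0.2, 0.5, 0.8
    corr = np.full((3, 3), 0.5); np.fill_diagonal(corr, 1.0)
    y = sample_mvprobit(mu, corr, n_samples=100_000)
    print(y.mean(axis=0))                   # approximately [0.2, 0.5, 0.8]
    print(np.corrcoef(y, rowvar=False))     # positive pairwise correlation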
