11

Development and validation of open-source software for DNA mixture interpretation based on a quantitative continuous model / 定量的連続性モデルに基づくDNA混合試料解析用オープンソースソフトウェアの開発と検証

Manabe, Sho 26 March 2018
Kyoto University / 0048 / New-system course doctorate / Doctor of Medical Science / Degree No. 21024 (Kou) / Medical Science Doctorate No. 85 / New system || Medical Science || 6 (University Library) / Department of Medical Science, Graduate School of Medicine, Kyoto University / (Chief examiner) Professor Koji Kawakami, Professor Tomohiro Kuroda, Professor Satoshi Morita / Meets Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
12

Diagnostics after a Signal from Control Charts in a Normal Process

Lou, Jianying 03 October 2008
Control charts are fundamental SPC tools for process monitoring. When a control chart or combination of charts signals, knowing the change point, which distributional parameter changed, and/or the change size helps to identify the cause of the change and to remove it from the process or adjust the process back into control correctly and immediately. In this study, we propose using maximum likelihood (ML) estimation of the current process parameters, and their ML confidence intervals, after a signal to identify and estimate the changed parameters. The performance of this ML diagnostic procedure is evaluated for several different charts and chart combinations across the sample-size cases considered, and compared to traditional approaches to diagnostics. Neither the ML estimators nor the traditional estimators perform well for all patterns of shifts, but the ML estimator has the best overall performance. The ML confidence-interval diagnostics are better overall at determining which parameter has shifted than the traditional diagnostics based on which chart signals. The performance of the generalized likelihood ratio (GLR) chart in shift detection and in ML diagnostics is comparable to that of the best EWMA chart combination. Since the ML diagnostics follow more naturally from a GLR chart than from the traditional control charts, studies of the GLR chart for process monitoring can be deepened further in future work. / Ph. D.
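The abstract above describes ML change-point diagnostics without giving code. The following minimal Python sketch illustrates the underlying idea only: profile-likelihood estimation of the change point and the post-change mean and standard deviation for a normal process with known in-control parameters. The function name, the single-sustained-shift assumption, and the simulated data are hypothetical; this is not the author's procedure or confidence-interval construction.

```python
import numpy as np
from scipy.stats import norm

def ml_changepoint_diagnostics(x, mu0, sigma0, min_seg=3):
    """Profile-likelihood diagnostics after a control-chart signal.

    x          : observations up to and including the signalling sample
    mu0, sigma0: known in-control mean and standard deviation
    min_seg    : minimum post-change segment length, guarding against a
                 degenerate variance estimate
    Returns (tau_hat, mu1_hat, sigma1_hat): the estimated change point and
    the ML estimates of the post-change mean and standard deviation.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    best = (None, None, None, -np.inf)
    for tau in range(1, n - min_seg + 1):        # shift occurs after sample tau
        pre, post = x[:tau], x[tau:]
        mu1, s1 = post.mean(), post.std(ddof=0)  # ML estimates on post-change data
        if s1 <= 0:
            continue
        ll = norm.logpdf(pre, mu0, sigma0).sum() + norm.logpdf(post, mu1, s1).sum()
        if ll > best[3]:
            best = (tau, mu1, s1, ll)
    return best[:3]

# In-control N(0, 1) for 30 samples, then a sustained mean shift of +1.5
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 30), rng.normal(1.5, 1.0, 10)])
print(ml_changepoint_diagnostics(x, mu0=0.0, sigma0=1.0))
```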
13

Spectrum Sensing in Cognitive Radio Networks

Bokharaiee Najafee, Simin 07 1900
Given the ever-growing demand for radio spectrum, cognitive radio has recently emerged as an attractive wireless communication technology. This dissertation is concerned with developing spectrum sensing algorithms for cognitive radio networks in which a single cognitive radio (CR) or multiple CRs assist in detecting licensed primary bands occupied by one or more primary users. First, given that orthogonal frequency-division multiplexing (OFDM) is an important wideband transmission technique, detection of OFDM signals in low-signal-to-noise-ratio scenarios is studied. It is shown that the cyclic prefix correlation coefficient (CPCC)-based spectrum sensing algorithm, previously introduced as a simple and computationally efficient spectrum-sensing method for OFDM signals, is a special case of the constrained generalized likelihood ratio test (GLRT) in the absence of multipath. The performance of the CPCC-based algorithm degrades in a multipath scenario. However, by exploiting the inherent structure of OFDM signals and the multipath correlation within the GLRT framework, a simple, low-complexity algorithm called the multipath-based constrained GLRT (MP-based C-GLRT) algorithm is obtained. Further performance improvement is achieved by combining the CPCC- and MP-based C-GLRT algorithms. A simple GLRT-based detection algorithm is also developed for unsynchronized OFDM signals. In the next part of the dissertation, a cognitive radio network model with multiple CRs is considered in order to investigate the benefit of collaboration and diversity in improving the overall sensing performance. Specifically, the problem of decision fusion for cooperative spectrum sensing is studied when fading channels are present between the CRs and the fusion center (FC). Noncoherent transmission schemes with on-off keying (OOK) and binary frequency-shift keying (BFSK) are employed to transmit the binary decisions to the FC, and the aim is to maximize the achievable secondary throughput of the CR network. Finally, in order to reduce the transmission bandwidth required during the reporting phase of cooperative sensing, the last part of the dissertation examines nonorthogonal transmission of local decisions by means of on-off keying, and a novel decoding-based fusion rule that combines the hard decisions linearly is proposed and analyzed.
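To make the cyclic-prefix idea concrete, the toy Python sketch below computes a CPCC-type statistic and the classical energy statistic for a synthetic OFDM signal in noise. The parameter names and the assumptions (perfect symbol timing, a single receive antenna, no multipath) are illustrative only; this is not the constrained-GLRT derivation of the dissertation.

```python
import numpy as np

def cpcc_statistic(r, n_fft, n_cp, n_symbols):
    """Cyclic-prefix correlation coefficient over a block of OFDM symbols.

    r : complex baseband samples containing n_symbols OFDM symbols, each of
        length n_fft + n_cp, assumed symbol-synchronized.  Values near 0
        suggest noise only; values approaching SNR/(1+SNR) suggest OFDM.
    """
    num, den = 0.0 + 0.0j, 0.0
    sym_len = n_fft + n_cp
    for k in range(n_symbols):
        s = r[k * sym_len:(k + 1) * sym_len]
        cp, tail = s[:n_cp], s[n_fft:n_fft + n_cp]
        num += np.vdot(tail, cp)                  # correlate CP with symbol tail
        den += 0.5 * (np.sum(np.abs(cp) ** 2) + np.sum(np.abs(tail) ** 2))
    return np.abs(num) / den

def energy_statistic(r):
    """Classical energy detector: average received power."""
    return np.mean(np.abs(r) ** 2)

# Unit-power OFDM-like signal observed at roughly -5 dB SNR
rng = np.random.default_rng(0)
n_fft, n_cp, n_sym = 64, 16, 50
data = (rng.standard_normal((n_sym, n_fft)) + 1j * rng.standard_normal((n_sym, n_fft))) / np.sqrt(2)
time = np.fft.ifft(data, axis=1) * np.sqrt(n_fft)
tx = np.hstack([time[:, -n_cp:], time]).ravel()            # prepend cyclic prefixes
noise = (rng.standard_normal(tx.shape) + 1j * rng.standard_normal(tx.shape)) * np.sqrt(10 ** 0.5 / 2)
rx = tx + noise
print(cpcc_statistic(rx, n_fft, n_cp, n_sym), energy_statistic(rx))
```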
14

Statistical Analysis of Skew Normal Distribution and its Applications

Ngunkeng, Grace 01 August 2013
No description available.
15

Quantifying the strength of evidence in forensic fingerprints

Forbes, Peter G. M. January 2014
Part I presents a model for fingerprint matching using Bayesian alignment on unlabelled point sets. An efficient Monte Carlo algorithm is developed to calculate the marginal likelihood ratio between the hypothesis that an observed fingerprint and fingermark pair originate from the same finger and the hypothesis that they originate from different fingers. The model achieves good performance on the NIST-FBI fingerprint database of 258 matched fingerprint pairs, though the computed likelihood ratios are implausibly extreme due to oversimplification in our model. Part II moves to a more theoretical study of proper scoring rules. The chapters in this section are designed to be independent of each other. Chapter 9 uses proper scoring rules to calibrate the implausible likelihood ratios computed in Part I. Chapter 10 defines the class of compatible weighted proper scoring rules. Chapter 11 derives new results for the score matching estimator, which can quickly generate point estimates for a parametric model even when the normalization constant of the distribution is intractable. It is used to find an initial value for the iterative maximization procedure in §3.3. Appendix A describes a novel algorithm to efficiently sample from the posterior of a von Mises distribution. It is used within the fingerprint model sampling procedure described in §5.6. Appendix B includes various technical results which would otherwise disrupt the flow of the main dissertation.
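Part I's central quantity is a marginal likelihood ratio estimated by Monte Carlo. The sketch below illustrates that general idea on a deliberately simplified one-dimensional toy model with a shared latent source value, not on the unlabelled minutiae point-set model of the thesis; all names and numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def mc_likelihood_ratio(x, y, sigma=1.0, tau=3.0, n_mc=100_000, seed=0):
    """Monte Carlo estimate of a same-source vs different-source LR.

    Toy model: a latent source value z ~ N(0, tau^2); the print feature x and
    the mark feature y are independent N(z, sigma^2) observations.
    H_same: x and y share one latent z.  H_diff: independent latents.
    Returns the estimated likelihood ratio p(x, y | H_same) / p(x, y | H_diff).
    """
    rng = np.random.default_rng(seed)
    z = rng.normal(0.0, tau, n_mc)                     # draws from the latent prior
    joint_same = np.mean(norm.pdf(x, z, sigma) * norm.pdf(y, z, sigma))
    marg_x = np.mean(norm.pdf(x, z, sigma))            # p(x | H_diff)
    marg_y = np.mean(norm.pdf(y, z, sigma))            # p(y | H_diff)
    return joint_same / (marg_x * marg_y)

# Similar feature values give LR > 1; dissimilar values give LR < 1.
print(mc_likelihood_ratio(2.0, 2.3))    # same-source-looking pair
print(mc_likelihood_ratio(2.0, -4.0))   # different-source-looking pair
```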
16

Likelihood-Based Tests for Common and Idiosyncratic Unit Roots in the Exact Factor Model

Solberger, Martin January 2013
Dynamic panel data models are widely used by econometricians to study, over time, the economics of, for example, people, firms, regions, or countries by pooling information over the cross-section. Although much of the panel research concerns inference in stationary models, macroeconomic data such as GDP, prices, and interest rates are typically trending over time and require, in one way or another, a nonstationary analysis. In time series analysis it is well established that autoregressive unit roots give rise to stochastic trends, implying that random shocks to a dynamic process are persistent rather than transitory. Because the implications of, say, government policy actions are fundamentally different if shocks to the economy are lasting than if they are temporary, a vast number of univariate time series unit root tests are now available. Similarly, panel unit root tests have been designed to test for the presence of stochastic trends within a panel data set and for the degree to which they are shared by the panel individuals. Today, growing data sets certainly offer new possibilities for panel data analysis, but they also pose new problems concerning double-indexed limit theory, unobserved heterogeneity, and cross-sectional dependencies. For example, economic shocks, such as technological innovations, are often global and make national aggregates cross-country dependent and related through international business cycles. To allow for strong cross-sectional dependence, panel unit root tests often assume that the unobserved panel errors follow a dynamic factor model. The errors then contain one part that is shared by the panel individuals, a common component, and one part that is individual-specific, an idiosyncratic component. This is appealing from the perspective of economic theory, because unobserved heterogeneity may be driven by global common shocks, which are well captured by dynamic factor models. Yet only a handful of tests have been derived to test for unit roots in the common and in the idiosyncratic components separately. More importantly, likelihood-based methods, which are commonly used in classical factor analysis, have been ruled out for large dynamic factor models because of the considerable number of parameters. This thesis consists of four papers in which we consider the exact factor model, where the idiosyncratic components are mutually independent, so that any cross-sectional dependence arises through the common factors only. Within this framework we derive likelihood-based tests for common and idiosyncratic unit roots. In doing so we address an important issue for dynamic factor models, because likelihood-based tests, such as the Wald test, the likelihood ratio test, and the Lagrange multiplier test, are well known to be asymptotically most powerful against local alternatives. Our approach is specific-to-general: we start with restrictions on the parameter space that allow us to use explicit maximum likelihood estimators, and then relax some of the assumptions and consider a more general framework requiring numerical maximum likelihood estimation. By simulation we compare the size and power of our tests with those of some established panel unit root tests. The simulations suggest that the likelihood-based tests are locally powerful and in some cases more robust in terms of size. / Solving Macroeconomic Problems Using Non-Stationary Panel Data
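As a rough, hypothetical illustration of the ingredients named in the abstract, the Python sketch below writes down the Gaussian log-likelihood of a one-factor exact factor model (idiosyncratic errors mutually independent, common factor following an AR(1) process) via a Kalman filter, and forms a likelihood ratio statistic for a unit root in the common factor. All function names, parameterizations, and the simulated panel are made up, and the statistic's non-standard null distribution, which the thesis derives, is not addressed here.

```python
import numpy as np
from scipy.optimize import minimize

def exact_factor_loglik(params, y):
    """Gaussian log-likelihood of a one-factor exact factor model.

    y : (T, N) panel.  Model: y_t = lam * f_t + e_t with e_it ~ N(0, psi_i)
    mutually independent (the "exact" restriction), and a common factor
    f_t = rho * f_{t-1} + v_t with v_t ~ N(0, 1) fixed for identification.
    params = [lam_1..lam_N, log(psi_1)..log(psi_N), rho].
    """
    T, N = y.shape
    lam, psi, rho = params[:N], np.exp(params[N:2 * N]), params[-1]
    f, P = 0.0, 1.0 / max(1e-3, 1.0 - min(rho ** 2, 0.999))   # (near-)diffuse prior
    ll = 0.0
    for t in range(T):
        f_pred, P_pred = rho * f, rho ** 2 * P + 1.0           # prediction step
        S = P_pred * np.outer(lam, lam) + np.diag(psi)         # innovation covariance
        v = y[t] - lam * f_pred                                # innovation
        S_inv = np.linalg.inv(S)
        ll -= 0.5 * (N * np.log(2 * np.pi) + np.linalg.slogdet(S)[1] + v @ S_inv @ v)
        K = P_pred * (S_inv @ lam)                             # Kalman gain
        f, P = f_pred + K @ v, P_pred * (1.0 - lam @ K)        # update step
    return ll

def lr_common_unit_root(y):
    """LR statistic for H0: rho = 1 (a common stochastic trend) vs |rho| < 1."""
    T, N = y.shape
    start = np.concatenate([np.ones(N), np.zeros(N), [0.5]])
    bounds = [(None, None)] * (2 * N) + [(-0.999, 0.999)]
    free = minimize(lambda p: -exact_factor_loglik(p, y), start,
                    method="L-BFGS-B", bounds=bounds)
    null = minimize(lambda p: -exact_factor_loglik(np.append(p, 1.0), y),
                    start[:-1], method="L-BFGS-B")
    return 2.0 * (null.fun - free.fun)

# Tiny simulated panel whose common factor is a random walk
rng = np.random.default_rng(0)
T, N = 80, 4
f = np.cumsum(rng.standard_normal(T))
y = np.outer(f, rng.uniform(0.5, 1.5, N)) + rng.standard_normal((T, N))
print(lr_common_unit_root(y))
```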
17

Statistical Strategies for Efficient Signal Detection and Parameter Estimation in Wireless Sensor Networks

Ayeh, Eric 12 1900
This dissertation investigates data-reduction strategies from a signal processing perspective in centralized detection and estimation applications. First, it considers a deterministic source observed by a network of sensors and develops an analytical strategy for ranking sensor transmissions based on the magnitude of their test statistics. The benefit of the proposed strategy is that the decision to transmit or not to transmit observations to the fusion center can be made at the sensor level, resulting in significant savings in transmission costs. A sensor-network target-tracking application is simulated to demonstrate the benefits of the proposed strategy over the unconstrained-energy approach. Second, it considers the detection of random signals in noisy measurements and evaluates the performance of eigenvalue-based signal detectors. Because of their computational simplicity, robustness, and performance, these detectors have recently received a great deal of attention. When the observed random signal is correlated, several researchers claim that the performance of eigenvalue-based detectors exceeds that of the classical energy detector. Such claims, however, fail to consider that when the signal is correlated, the optimal detector is the estimator-correlator, not the energy detector. In this dissertation, through theoretical analyses and Monte Carlo simulations, eigenvalue-based detectors are shown to be suboptimal when compared to the energy detector and the estimator-correlator.
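A brief, hypothetical Python sketch of the two families of test statistics compared in the abstract: the energy detector and an eigenvalue-based detector built from the sample covariance matrix. It only shows how the statistics are formed; it does not implement the estimator-correlator, and the data and parameters are illustrative.

```python
import numpy as np

def detector_statistics(Y):
    """Energy and maximum-eigenvalue statistics for an (M, K) data matrix.

    Y : K snapshots from M sensors.  Returns (energy, max_eig_ratio), where
    energy is the average per-sensor power (classical energy detector) and
    max_eig_ratio is the largest eigenvalue of the sample covariance divided
    by its trace (a common eigenvalue-based statistic).
    """
    M, K = Y.shape
    R = Y @ Y.conj().T / K                     # sample covariance matrix
    eig = np.linalg.eigvalsh(R)                # real eigenvalues, ascending
    trace = float(np.trace(R).real)
    return trace / M, float(eig[-1].real) / trace

# Spatially correlated signal in noise versus noise only
rng = np.random.default_rng(2)
M, K = 4, 500
s = rng.standard_normal(K)                     # common signal waveform
h = rng.standard_normal((M, 1))                # per-sensor gains
noise = rng.standard_normal((M, K))
print(detector_statistics(h * s + noise))      # H1: correlated signal present
print(detector_statistics(noise))              # H0: noise only
```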
18

Análise de variância multivariada nas estimativas dos parâmetros do modelo log-logístico para susceptibilidade do capim-pé-de-galinha ao glyphosate / Multivariate analysis of variance on the estimates of the log-logistic model parameters for susceptibility of goosegrass (capim-pé-de-galinha) to glyphosate

Jotta, César Augusto Degiato 25 October 2016
The national agricultural scenario has become increasingly competitive over the years; maintaining productivity growth at a low operating cost and with a low environmental impact have been the three most relevant ingredients in the area. Productivity, in turn, is a function of several variables, and weed control is one of the variables to be considered. This work analyzes a data set from an experiment carried out in the Plant Production Department of ESALQ-USP, Piracicaba, SP. Four goosegrass (capim-pé-de-galinha) biotypes from three Brazilian states were evaluated at three morphological stages, with four replicates per biotype. The response variable was dry mass (g), and the regressor variable was the glyphosate dose, at concentrations ranging from 1/16 D to 16 D plus an untreated control, where D is 480 grams of glyphosate acid equivalent per hectare (g a.e. ha-1) for the 2-3 tiller stage, 720 g a.e. ha-1 for the 6-8 tiller stage, and 960 g a.e. ha-1 for the 10-12 tiller stage. The primary objective of the work was to evaluate whether, over the years, goosegrass populations have become resistant to the herbicide glyphosate, aiming at the detection of resistant biotypes. The experiment was installed in a completely randomized design and carried out at the three stages. For the data analysis, the nonlinear log-logistic model proposed by Knezevic and Ritz (2007) was used as the univariate method, and the maximum likelihood method was used to test the equality of the parameter e. The model converged for almost all replicates, but no systematic behavior was observed that would explain the non-convergence of any particular replicate. In a second step, the estimates of the three model parameters were taken as dependent variables in a multivariate analysis of variance. Since the three parameters were jointly significant according to the Pillai, Wilks, Roy, and Hotelling-Lawley tests, Tukey's test was performed for the same parameter e and compared with the first method. At the same significance level, this procedure was less able to identify differences between the parameter means of the grass varieties than the method proposed by Regazzi (2015).
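The thesis fits the nonlinear log-logistic model of Knezevic and Ritz (2007) to each replicate before taking the parameter estimates into a MANOVA. The sketch below is a hypothetical Python illustration of fitting a three-parameter log-logistic dose-response curve of that kind; the dry-mass numbers, starting values, and bounds are made up and do not come from the experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic3(dose, b, d, e):
    """Three-parameter log-logistic dose-response curve.

    d is the upper limit (untreated control response), e is the dose giving a
    50% response (ED50), and b controls the slope around e.  The form
    d / (1 + (dose/e)**b) maps dose = 0 to d, accommodating the control.
    """
    return d / (1.0 + (dose / e) ** b)

# Hypothetical dry-mass data (g) for one biotype over a glyphosate dose series
dose = np.array([0, 30, 60, 120, 240, 480, 960, 1920, 3840, 7680], float)  # g a.e. ha-1
mass = np.array([9.8, 9.5, 9.1, 8.2, 6.0, 3.1, 1.2, 0.5, 0.3, 0.2])

p0 = [1.0, mass.max(), 480.0]                       # starting values for b, d, e
params, _ = curve_fit(log_logistic3, dose, mass, p0=p0,
                      bounds=([0.1, 0.0, 1.0], [10.0, 50.0, 1e5]))
b_hat, d_hat, e_hat = params
print(f"b = {b_hat:.2f}, d = {d_hat:.2f} g, ED50 e = {e_hat:.0f} g a.e. ha-1")
```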
19

Comparison Between Confidence Intervals of Multiple Linear Regression Model with or without Constraints

Tao, Jinxin 27 April 2017
Regression analysis is one of the most widely applied statistical techniques. Statistical inference for a linear regression model with a monotone constraint has been discussed in earlier work. A natural question is how the results differ between the cases with and without the constraint. Although the comparison between confidence intervals of linear regression models with and without the restriction has been considered for one predictor variable, the corresponding discussion for multiple regression is still needed. In this thesis, I compare the confidence intervals of a multiple linear regression model with and without constraints.
20

Inference in Constrained Linear Regression

Chen, Xinyu 27 April 2017
Regression analysis constitutes an important part of statistical inference and has wide applications in many areas. In some applications, we strongly believe that the regression function changes monotonically with some or all of the predictor variables in a region of interest. Deriving inference under such constraints is an enormous task. In this work, the restricted prediction interval for the mean of the regression function is constructed when two predictors are present. I use a modified likelihood ratio test (LRT) to construct the prediction intervals.
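As a concrete, hypothetical illustration of inference under a monotonicity constraint with two predictors, the Python sketch below fits the regression with nonnegative slope constraints and forms a percentile-bootstrap interval for the mean response at a new point. The bootstrap is only a stand-in for illustration; the thesis constructs the interval through a modified likelihood ratio test, and all names and data here are made up.

```python
import numpy as np
from scipy.optimize import lsq_linear

def constrained_fit(X, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 with b1, b2 >= 0.

    Nonnegative slopes encode a regression function that is monotone
    increasing in both predictors; the intercept is left unconstrained.
    """
    A = np.column_stack([np.ones(len(y)), X])
    res = lsq_linear(A, y, bounds=([-np.inf, 0.0, 0.0], np.inf))
    return res.x

def bootstrap_mean_interval(X, y, x_new, n_boot=2000, level=0.95, seed=0):
    """Percentile-bootstrap interval for the mean response at x_new under the
    monotonicity constraints (a simple stand-in for the modified-LRT
    construction described in the abstract)."""
    rng = np.random.default_rng(seed)
    n, a_new = len(y), np.concatenate([[1.0], x_new])
    preds = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample observations
        preds[i] = a_new @ constrained_fit(X[idx], y[idx])
    alpha = (1.0 - level) / 2.0
    return tuple(np.quantile(preds, [alpha, 1.0 - alpha]))

# Simulated example with two predictors and an increasing true regression function
rng = np.random.default_rng(3)
X = rng.uniform(0.0, 5.0, (60, 2))
y = 1.0 + 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0.0, 0.5, 60)
print(bootstrap_mean_interval(X, y, x_new=np.array([2.5, 2.5])))
```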
