
Introducing a New Quantitative Measure of Railway Timetable Robustness Based on Critical Points

Andersson, Emma; Peterson, Anders; Törnquist Krasemann, Johanna. January 2013
The growing demand for railway capacity has led to high capacity consumption at times and a delay-sensitive network with insufficient robustness. The fundamental challenge is therefore to decide how to increase the robustness. To do so, there is a need for accurate measures that indicate whether a timetable is robust and where improvements should be made. Previously presented measures are useful when comparing different timetable candidates with respect to robustness, but less useful for deciding where and how robustness should be inserted. In this paper, we focus on points where trains enter a line, or where trains are being overtaken, since we have observed that these points are critical for the robustness. The concept of critical points can be used in the practical timetabling process to identify weaknesses in a timetable and to provide suggestions for improvements. In order to quantitatively assess how crucial a critical point may be, we have defined the measure RCP (Robustness in Critical Points). A high RCP value is preferred, and it reflects a situation in which train dispatchers will have higher prospects of handling a conflict effectively. The number of critical points, the location pattern and the RCP values constitute an absolute measure of the robustness of a certain train slot, as well as of a complete timetable. The concept of critical points and RCP can be seen as a contribution to the already defined robustness measures, which, combined, can be used as guidelines for timetable constructors.
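As a toy illustration of ranking critical points, assume RCP at each point is simply the sum of the runtime margin available to the entering (or overtaken) train and the headway buffer to the operating train; the paper's actual definition is more detailed, and the point names and numbers below are hypothetical.

```python
# Toy sketch of ranking critical points by RCP. The formula used here
# (runtime margin + headway buffer, in seconds) is an illustrative
# assumption; the paper's definition is more detailed. Point names and
# numbers are hypothetical.

def rcp(runtime_margin_s, headway_buffer_s):
    """Robustness in a Critical Point: higher values leave dispatchers
    more room to resolve a conflict (illustrative definition)."""
    return runtime_margin_s + headway_buffer_s

critical_points = {
    "station A (line entry)": rcp(runtime_margin_s=60, headway_buffer_s=120),
    "station B (overtaking)": rcp(runtime_margin_s=30, headway_buffer_s=45),
}

# The point with the lowest RCP marks where robustness should be inserted first.
weakest = min(critical_points, key=critical_points.get)
print(weakest, critical_points[weakest])  # station B (overtaking) 75
```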

Robustness measures for signal detection in non-stationary noise using differential geometric tools

Raux, Guillaume Julien. 25 April 2007
We propose the study of robustness measures for signal detection in non-stationary noise using differential geometric tools in conjunction with empirical distribution analysis. Our approach shows that the gradient can be viewed as a random variable and therefore used to generate sample densities, allowing one to draw conclusions regarding the robustness. As an example, one can apply the geometric methodology to the detection of time-varying deterministic signals in imperfectly known, dependent, non-stationary Gaussian noise. We also compare stationary to non-stationary noise and show that robustness is barely reduced by admitting non-stationarity. In addition, we show that robustness decreases with larger sample sizes, but this decrease converges for sample sizes greater than 14. We then move on to compare the effect on robustness for signal detection between non-Gaussian tail effects and residual dependency. The work focuses on robustness as applied to tail effects of the noise distribution, affecting discrete-time detection of signals in independent non-stationary noise. This approach makes use of the extension to the generalized Gaussian case, allowing a comparison in robustness between the Gaussian and Laplacian PDFs. The obtained results are contrasted with the influence of dependency on robustness for a fixed tail category, and conclusions are drawn regarding residual dependency versus tail uncertainty.

Statistical models for noise-robust speech recognition

van Dalen, Rogier Christiaan. January 2011
A standard way of improving the robustness of speech recognition systems to noise is model compensation. This replaces a speech recogniser's distributions over clean speech by ones over noise-corrupted speech. For each clean speech component, model compensation techniques usually approximate the corrupted speech distribution with a diagonal-covariance Gaussian distribution. This thesis looks into improving on this approximation in two ways: firstly, by estimating full-covariance Gaussian distributions; secondly, by approximating corrupted-speech likelihoods without any parameterised distribution. The first part of this work is about compensating for within-component feature correlations under noise. For this, the covariance matrices of the computed Gaussians should be full instead of diagonal. The estimation of off-diagonal covariance elements turns out to be sensitive to approximations. A popular approximation is the one that state-of-the-art compensation schemes, like VTS compensation, use for dynamic coefficients: the continuous-time approximation. Standard speech recognisers contain both per-time slice, static, coefficients, and dynamic coefficients, which represent signal changes over time, and are normally computed from a window of static coefficients. To remove the need for the continuous-time approximation, this thesis introduces a new technique. It first compensates a distribution over the window of statics, and then applies the same linear projection that extracts dynamic coefficients. It introduces a number of methods that address the correlation changes that occur in noise within this framework. The next problem is decoding speed with full covariances. This thesis re-analyses the previously-introduced predictive linear transformations, and shows how they can model feature correlations at low and tunable computational cost. The second part of this work removes the Gaussian assumption completely. 
It introduces a sampling method that, given speech and noise distributions and a mismatch function, in the limit calculates the corrupted speech likelihood exactly. For this, it transforms the integral in the likelihood expression, and then applies sequential importance resampling. Though it is too slow to use for recognition, it enables a more fine-grained assessment of compensation techniques, based on the KL divergence to the ideal compensation for one component. The KL divergence proves to predict the word error rate well. This technique also makes it possible to evaluate the impact of approximations that standard compensation schemes make.
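The component-level assessment described above compares a compensated distribution with the ideal one via KL divergence. As a minimal, generic illustration (a univariate stand-in, not the thesis's actual corrupted-speech model), the closed-form KL divergence between two Gaussians can be computed as:

```python
# KL divergence between two univariate Gaussians, closed form. A generic
# illustration of the kind of distance used to assess compensation
# quality; the thesis works with multivariate corrupted-speech models.
import math

def kl_gauss(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) )."""
    return 0.5 * (math.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

# Identical distributions diverge by zero; a mismatched compensation
# (wrong mean or variance) yields a positive divergence.
print(kl_gauss(0.0, 1.0, 0.0, 1.0))  # 0.0
print(kl_gauss(0.0, 1.0, 0.5, 2.0))  # > 0
```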

Model Updating Using a Quadratic Form

Tarazaga, Pablo Alberto. 23 August 2004
The research presented in this thesis addresses the problem of updating an analytical model using a parametric Reference Basis approach. In this method, some parameters are assumed to be accurate (e.g. natural frequencies, mode shapes and mass matrix), while others are adjusted so that the eigenvalue equation is satisfied. Updating is done with the use of principal submatrices, and the method seeks the best parameters multiplying these matrices. This is a departure from classical model reference, and is closer to the formulation of sensitivity methods. The submatrices allow updating of the stiffness matrix with certain freedom while preserving connectivity. A closed-form solution can be obtained in multiple ways; two different approaches, denoted the Quadratic Compression Method (QCM) and the Full Vector Method (FVM), are described in this thesis. It is shown that the QCM possesses superior robustness properties with respect to noise in the data. This fact, as well as the simplicity offered by QCM, is demonstrated theoretically and experimentally. Experiments are presented to show the advantage of the QCM in the updating process. / Master of Science
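The parametric idea can be sketched generically: write the stiffness matrix as the analytical K0 plus unknown coefficients times fixed principal submatrices, then choose the coefficients so the eigenvalue equation holds for the measured modes. The least-squares sketch below is an illustration of that formulation, not the QCM or FVM algorithms themselves; the 2-DOF system and submatrices are made up.

```python
# Generic least-squares sketch of submatrix-parametrized model updating:
# find coefficients a1, a2 so that (K0 + a1*S1 + a2*S2) v = lam * M v for
# measured eigenpairs (lam, v). Illustrative only, not QCM/FVM; the
# 2-DOF system is hypothetical.
import numpy as np

M = np.eye(2)                              # mass matrix (assumed accurate)
K0 = np.array([[2.0, -1.0], [-1.0, 1.0]])  # analytical stiffness
S1 = np.array([[1.0, -1.0], [-1.0, 1.0]])  # submatrix: coupling spring
S2 = np.array([[0.0, 0.0], [0.0, 1.0]])    # submatrix: ground spring

# Synthetic "measured" modes from a true structure K0 + 0.3*S1 + 0.5*S2.
K_true = K0 + 0.3 * S1 + 0.5 * S2
lams, vecs = np.linalg.eigh(K_true)

# The residual (K0 + a1*S1 + a2*S2 - lam*M) v = 0 is linear in (a1, a2):
# stack one block row per measured mode and solve by least squares.
A = np.vstack([np.column_stack([S1 @ v, S2 @ v]) for v in vecs.T])
b = np.concatenate([(lam * M - K0) @ v for lam, v in zip(lams, vecs.T)])
alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
print(alpha)  # recovers the true coefficients, ~ [0.3, 0.5]
```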

Familywise Robustness Criteria Revisited for Newer Multiple Testing Procedures

Miller, Charles W. January 2009
As the availability of large datasets becomes more prevalent, so does the need to discover significant findings among a large collection of hypotheses. Multiple testing procedures (MTPs) are used to control the familywise error rate (FWER), the chance of committing at least one Type I error when testing multiple hypotheses. When controlling the FWER, the power of an MTP to detect significant differences decreases as the number of hypotheses increases. Ideally, the same false null hypotheses would be discovered regardless of the family of hypotheses chosen to be tested. Holland and Cheung (2002) developed measures called familywise robustness criteria (FWRs) to study the effect of family size on the acceptance and rejection of a hypothesis. Their analysis focused on procedures that control the FWER and the false discovery rate (FDR). Newer MTPs have since been developed which control the generalized FWER (gFWER(k) or k-FWER) and the false discovery proportion (FDP), or tail probabilities for the proportion of false positives (TPPFP). This dissertation reviews these newer procedures and then discusses the effect of family size using the FWRs of Holland and Cheung. In the case where the test statistics are independent and the null hypotheses are all true, the Type R enlargement familywise robustness measure can be expressed as a ratio of expected numbers of Type I errors. In simulations where positive dependence among the test statistics was introduced, the expected number of Type I errors and the Type R enlargement FWR increased for step-up procedures at higher levels of correlation, but not for step-down or single-step procedures. / Statistics
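The independent, all-true-nulls setting mentioned above can be checked with a small simulation (a toy check, not the dissertation's simulation design): under Bonferroni, the expected number of Type I errors stays at the nominal level as the family grows, so the kind of ratio the Type R enlargement measure compares stays near 1.

```python
# Toy simulation of the independent, all-true-nulls case: estimate the
# expected number of Type I errors for Bonferroni at two family sizes.
# Illustrative only, not the dissertation's simulation design.
import numpy as np

rng = np.random.default_rng(0)
alpha, trials = 0.05, 20000

def expected_type1(m):
    """Mean count of p-values rejected at level alpha/m when all m nulls
    are true (p-values i.i.d. uniform)."""
    p = rng.uniform(size=(trials, m))
    return (p < alpha / m).sum(axis=1).mean()

# Bonferroni keeps E[# Type I errors] at alpha regardless of family size,
# so the enlargement ratio E_large / E_small stays near 1.
e_small, e_large = expected_type1(5), expected_type1(50)
print(e_small, e_large)  # both close to 0.05
```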

An interaction framework for multiagent systems

Miller, Matthew James. January 1900
Doctor of Philosophy / Department of Computing and Information Sciences / Scott DeLoach / A multiagent system is a system composed of multiple autonomous agents. Autonomous agents are given the right and the responsibility to make decisions based on their perceptions and goals. Agents are also constrained by their capabilities, the environment, and the system in which they reside. An agent within the system may need to coordinate with another agent, for example to share updates from sensor readings, communicate updated map information, or work on a cooperative task such as lifting an object. To coordinate, agents must be able to communicate with one another. To communicate, agents must have a communication medium, the conduit through which the information flows. Additionally, there must be a set of rules governing which agent talks at what time; this set of rules is called a communication protocol. To communicate effectively and efficiently, all agents participating in the communication must use compatible protocols. Robotic agents can be placed in diverse environments, and there are multiple avenues for communication failure. Current multiagent systems use fixed communication protocols to allow agents to interact with one another. Using fixed protocols in an error-prone environment can lead to a high rate of system failure. To address these issues, I propose that a formal framework for interaction be defined. The framework should allow agents to choose new interaction protocols when the protocol they are currently using fails. A formal framework allows automated tools to reason over the possible choices of interaction protocols, and the tools can enumerate the protocols that will allow the agent to achieve its desired goal.
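A minimal sketch of the fallback behavior argued for above: an agent tries interaction protocols in preference order and switches when the current one fails. The protocol names and failure model here are hypothetical, not part of the framework itself.

```python
# Hypothetical sketch of protocol fallback: the agent tries compatible
# interaction protocols in preference order and falls back to the next
# one when the current protocol fails. Protocol names are made up.

class ProtocolFailure(Exception):
    pass

def communicate(message, protocols, channel_ok):
    """Return the name of the first protocol that delivers the message."""
    for proto in protocols:
        try:
            if not channel_ok(proto):
                raise ProtocolFailure(proto)
            # ... the actual message exchange would happen here ...
            return proto
        except ProtocolFailure:
            continue  # reason over the remaining choices and fall back
    raise RuntimeError("no usable interaction protocol for " + repr(message))

# Simulate an environment where the preferred broadcast channel is down.
working = {"direct-tcp"}
used = communicate("map update", ["broadcast", "direct-tcp"], lambda p: p in working)
print(used)  # direct-tcp
```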

Tradeoff between robustness and elaboration in carotenoid networks produces cycles of avian color diversification

Badyaev, Alexander V.; Morrison, Erin S.; Belloni, Virginia; Sanderson, Michael J. January 2015
BACKGROUND: Resolution of the link between micro- and macroevolution calls for comparing both processes on the same deterministic landscape, such as genomic, metabolic or fitness networks. We apply this perspective to the evolution of carotenoid pigmentation that produces spectacular diversity in avian colors and show that basic structural properties of the underlying carotenoid metabolic network are reflected in global patterns of elaboration and diversification in color displays. Birds color themselves by consuming and metabolizing several dietary carotenoids from the environment. Such fundamental dependency on the most upstream external compounds should intrinsically constrain sustained evolutionary elongation of multi-step metabolic pathways needed for color elaboration unless the metabolic network gains robustness - the ability to synthesize the same carotenoid from an additional dietary starting point. RESULTS: We found that gains and losses of metabolic robustness were associated with evolutionary cycles of elaboration and stasis in expressed carotenoids in birds. Lack of metabolic robustness constrained lineages' metabolic explorations to the immediate biochemical vicinity of their ecologically distinct dietary carotenoids, whereas gains of robustness repeatedly resulted in sustained elongation of metabolic pathways on evolutionary time scales and corresponding color elaboration. CONCLUSIONS: The structural link between length and robustness in metabolic pathways may explain periodic convergence of phylogenetically distant and ecologically distinct species in expressed carotenoid pigmentation; account for stasis in carotenoid colors in some ecological lineages; and show how the connectivity of the underlying metabolic network provides a mechanistic link between microevolutionary elaboration and macroevolutionary diversification. REVIEWERS: This article was reviewed by Junhyong Kim, Eugene Koonin, and Fyodor Kondrashov.
For complete reports, see the Reviewers' reports section.
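The robustness notion above (the same carotenoid synthesizable from more than one dietary starting point) can be sketched as simple reachability in a pathway graph. The compound names and edges below are hypothetical, not the paper's network.

```python
# Toy sketch of metabolic robustness as described above: a carotenoid is
# "robust" if it is reachable from more than one dietary starting
# compound in the pathway graph. Compound names and edges are made up.
from collections import deque

pathways = {  # substrate -> products (hypothetical network)
    "dietary_A": ["c1"],
    "dietary_B": ["c2"],
    "c1": ["c3"],
    "c2": ["c3"],
    "c3": ["c4"],
}

def reachable(start, target):
    """Breadth-first search from a dietary compound to a target carotenoid."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in pathways.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def is_robust(target, dietary=("dietary_A", "dietary_B")):
    return sum(reachable(d, target) for d in dietary) >= 2

print(is_robust("c4"))  # True: reachable from both dietary compounds
print(is_robust("c1"))  # False: reachable only from dietary_A
```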

The robustness of confidence intervals for effect size in one-way designs with respect to departures from normality

Hembree, David. January 1900
Master of Science / Department of Statistics / Paul Nelson / Effect size is a concept that was developed to bridge the gap between practical and statistical significance. In the context of completely randomized one-way designs, the setting considered here, inference for effect size has only been developed under normality. This report is a simulation study investigating the robustness of nominal 0.95 confidence intervals for effect size with respect to departures from normality, in terms of their coverage rates and lengths. In addition to the normal distribution, data are generated from four non-normal distributions: logistic, double exponential, extreme value, and uniform. The report finds that the coverage rates under the logistic, double exponential, and extreme value distributions drop as effect size increases, while, as expected, the coverage rate under the normal distribution remains very steady at 0.95. Interestingly, the uniform distribution produced coverage rates higher than 0.95, which increased with effect size. Overall, in the scope of the settings considered, normal theory confidence intervals for effect size are robust for small effect size and not robust for large effect size. Since the magnitude of effect size is typically not known, researchers are advised to investigate the assumption of normality before constructing normal theory confidence intervals for effect size.
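A stripped-down version of this kind of robustness check can be simulated directly. The sketch below estimates the actual coverage of a nominal 0.95 interval under two data distributions; it uses a plain t-interval for a mean rather than the report's effect-size intervals, and the sample size and distributions are illustrative.

```python
# Toy coverage simulation in the spirit of the report: estimate actual
# coverage of a nominal 0.95 confidence interval under different data
# distributions. A plain t-interval for a mean stands in for the
# report's effect-size intervals; settings are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 20, 5000
t_crit = 2.093  # t quantile at 0.975 with df = 19

def coverage(sample_fn, true_mean):
    """Fraction of simulated intervals that contain the true mean."""
    hits = 0
    for _ in range(reps):
        x = sample_fn(n)
        half = t_crit * x.std(ddof=1) / np.sqrt(n)
        hits += abs(x.mean() - true_mean) <= half
    return hits / reps

cov_normal = coverage(lambda n: rng.normal(0.0, 1.0, n), 0.0)
cov_uniform = coverage(lambda n: rng.uniform(-1.0, 1.0, n), 0.0)
print(cov_normal, cov_uniform)  # both near 0.95 at this sample size
```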

Identification of functional RNA structures in sequence data

Pei, Shermin. January 2016
Thesis advisor: Michelle M. Meyer / Thesis advisor: Peter Clote / Structured RNAs have many biological functions ranging from catalysis of chemical reactions to gene regulation. Many of these homologous structured RNAs display most of their conservation at the secondary or tertiary structure level. As a result, strategies for natural structured RNA discovery rely heavily on identification of sequences sharing a common stable secondary structure. However, correctly identifying the functional elements of the structure continues to be challenging. In addition to studying natural RNAs, we improve our ability to distinguish functional elements by studying sequences derived from in vitro selection experiments to select structured RNAs that bind specific proteins. In this thesis, we seek to improve methods for distinguishing functional RNA structures from arbitrarily predicted structures in sequencing data. To do so, we developed novel algorithms that prioritize the structural properties of the RNA that are under selection. In order to identify natural structured ncRNAs, we bring concepts from evolutionary biology to bear on the de novo RNA discovery process. Since there is selective pressure to maintain the structure, we apply molecular evolution concepts such as neutrality to identify functional RNA structures. We hypothesize that alignments corresponding to structured RNAs should consist of neutral sequences. During the course of this work, we developed a novel measure of neutrality, the structure ensemble neutrality (SEN), which calculates neutrality by averaging the magnitude of structure retained over all single point mutations to a given sequence. In order to analyze in vitro selection data for RNA-protein binding motifs, we developed a novel framework that identifies enriched substructures in the sequence pool. 
Our method accounts for both sequence and structure components by abstracting the overall secondary structure into smaller substructures composed of a single base-pair stack. Unlike many current tools, our algorithm is designed to deal with the large data sets coming from high-throughput sequencing. In conclusion, our algorithms have similar performance to existing programs. However, unlike previous methods, our algorithms are designed to leverage the evolutionary selective pressures in order to emphasize functional structure conservation. / Thesis (PhD) — Boston College, 2016. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Biology.
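The SEN measure described above can be sketched generically: average, over all single point mutations of a sequence, how much of the reference structure is retained. The fold and similarity functions below are trivial placeholders so the sketch runs; a real implementation would plug in an RNA secondary-structure predictor.

```python
# Generic sketch of structure ensemble neutrality (SEN): average the
# structural similarity retained over all single point mutations. The
# fold() and similarity() used below are trivial placeholders, not real
# RNA structure prediction.

BASES = "ACGU"

def sen(seq, fold, similarity):
    """Mean similarity between the reference structure and the structure
    of every single-point mutant of seq."""
    ref = fold(seq)
    scores = []
    for i, base in enumerate(seq):
        for mut in BASES:
            if mut != base:
                mutant = seq[:i] + mut + seq[i + 1:]
                scores.append(similarity(ref, fold(mutant)))
    return sum(scores) / len(scores)

def identity_similarity(s1, s2):
    """Fraction of positions with the same dot-bracket character."""
    return sum(a == b for a, b in zip(s1, s2)) / len(s1)

# With a placeholder fold that ignores the sequence entirely, every
# mutant keeps the structure, so neutrality is maximal.
flat_fold = lambda s: "." * len(s)
print(sen("GGGAAACCC", flat_fold, identity_similarity))  # 1.0
```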

A Student's t filter for heavy tailed process and measurement noise

Roth, Michael; Ozkan, Emre; Gustafsson, Fredrik. January 2013
We consider the filtering problem in linear state space models with heavy-tailed process and measurement noise. Our work is based on the Student's t distribution, for which we give a number of useful results. The derived filtering algorithm is a generalization of the ubiquitous Kalman filter, and reduces to it as a special case. Both the Kalman filter and the new algorithm are compared on a challenging tracking example in which a maneuvering target is observed in clutter. / MC Impulse
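As context for the special case mentioned above, here is the scalar Kalman filter recursion that the Student's t filter generalizes. This is the standard textbook filter, not the paper's t-based algorithm, and the model parameters are illustrative.

```python
# The scalar Kalman filter that the Student's t filter reduces to when
# the noise is Gaussian. Standard textbook recursion; the parameter
# values are illustrative, not from the paper.

def kalman_step(x, P, y, a=1.0, c=1.0, Q=0.1, R=1.0):
    """One predict/update cycle for x_k = a*x_{k-1} + w, y_k = c*x_k + v,
    with process noise variance Q and measurement noise variance R."""
    # Predict
    x_pred = a * x
    P_pred = a * P * a + Q
    # Update
    S = c * P_pred * c + R          # innovation variance
    K = P_pred * c / S              # Kalman gain
    x_new = x_pred + K * (y - c * x_pred)
    P_new = (1.0 - K * c) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                     # prior mean and variance
x, P = kalman_step(x, P, y=1.0)     # assimilate one measurement
print(x, P)  # both approximately 0.5238
```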
