About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Nonlinear compensation and heterogeneous data modeling for robust speech recognition

Zhao, Yong 21 February 2013 (has links)
The goal of robust speech recognition is to maintain satisfactory recognition accuracy under mismatched operating conditions. This dissertation addresses the robustness issue from two directions. In the first part of the dissertation, we propose the Gauss-Newton method as a unified approach to estimating the noise parameters used in prevalent nonlinear compensation models, such as vector Taylor series (VTS), data-driven parallel model combination (DPMC), and the unscented transform (UT), for noise-robust speech recognition. While iterative estimation of noise means in a generalized EM framework is widely known, we demonstrate that such approaches are variants of the Gauss-Newton method. Furthermore, we propose a novel noise variance estimation algorithm that is consistent with the Gauss-Newton principle. The Gauss-Newton formulation reduces the noise estimation problem to determining the Jacobians of the corrupted speech parameters. For sampling-based compensation, we present two methods, sample Jacobian average (SJA) and cross-covariance (XCOV), to evaluate these Jacobians. The Gauss-Newton method is closely related to another noise estimation approach, which views model compensation from a generative perspective, giving rise to an EM-based algorithm analogous to ML estimation for factor analysis (EM-FA). We demonstrate a close connection between the two approaches: both belong to the family of gradient-based methods, but they differ in convergence rate. The convergence property can be crucial in applications where model compensation must be carried out frequently in changing noisy environments to retain the desired performance. Furthermore, several techniques are explored to further improve the nonlinear compensation approaches. To remove the need for clean speech data when training acoustic models, we integrate nonlinear compensation with adaptive training.
We also investigate fast VTS compensation to improve noise estimation efficiency, and combine VTS compensation with acoustic echo cancellation (AEC) to mitigate issues caused by interfering background speech. The proposed noise estimation algorithm is evaluated for various compensation models on three tasks. The first is to fit a GMM to artificially corrupted samples, the second is to perform speech recognition on the Aurora 2 database, and the third uses a speech corpus simulating meetings of multiple competing speakers. The significant performance improvements confirm the efficacy of the Gauss-Newton method in estimating the noise parameters of the nonlinear compensation models. The second part of the research is devoted to developing more effective models that take full advantage of heterogeneous speech data, which are typically collected from thousands of speakers in various environments via different transducers. The proposed synchronous HMM, in contrast to conventional HMMs, introduces an additional layer of substates between the HMM state and the Gaussian component variables. The substates can register long-span non-phonetic attributes, such as gender, speaker identity, and environmental condition, which are collectively called speech scenes in this study. The hierarchical modeling scheme allows an accurate description of the probability distribution of speech units in different speech scenes. To address the data sparsity problem in estimating the parameters of multiple speech scene sub-models, a decision-based clustering algorithm is presented to determine the set of speech scenes and to tie the substate parameters, achieving a good balance between modeling accuracy and robustness. In addition, by exploiting the synchronous relationship among the speech scene sub-models, we propose the multiplex Viterbi algorithm to efficiently decode the synchronous HMM within a search space of the same size as for the standard HMM.
The multiplex Viterbi algorithm can also be generalized to decode an ensemble of isomorphic HMM sets, a problem that often arises in multi-model systems. Experiments on the Aurora 2 task show that synchronous HMMs produce a significant improvement in recognition performance over the HMM baseline, at the expense of a moderate increase in memory requirements and computational complexity.
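The dissertation's unifying tool is the Gauss-Newton iteration. Its VTS/DPMC/UT Jacobian constructions are specific to the thesis, but the underlying update can be sketched generically; the residual, Jacobian, and toy exponential model below are illustrative stand-ins, not the thesis's corrupted-speech equations.

```python
import numpy as np

def gauss_newton(residual, jacobian, theta0, n_iter=20):
    """Generic Gauss-Newton iteration for nonlinear least squares.

    Minimizes ||r(theta)||^2 by linearizing r around the current
    iterate: theta <- theta - (J^T J)^{-1} J^T r.  This is the same
    update structure applied to noise-parameter estimation, with r
    and J built from the compensation model.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = residual(theta)
        J = jacobian(theta)
        # least-squares solve of J * step = r gives the GN step
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        theta = theta - step
    return theta

# Toy example: recover a = 0.5 from noiseless samples of y = exp(a * x).
x = np.linspace(0.0, 2.0, 30)
y = np.exp(0.5 * x)
res = lambda th: np.exp(th[0] * x) - y
jac = lambda th: (x * np.exp(th[0] * x))[:, None]
a_hat = gauss_newton(res, jac, [0.1])
```

The same skeleton accommodates the thesis's variance updates once the Jacobians of the corrupted-speech parameters are supplied in place of the toy `jac`.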
22

Programmation linéaire mixte robuste ; Application au dimensionnement d'un système hybride de production d'électricité. / Robust mixed-integer linear programming; Application to the design of a hybrid system for electricity production

Poirion, Pierre-Louis 17 December 2013 (has links)
In this thesis, we study robust optimization. More precisely, we focus on two-stage robust mixed-integer linear problems, i.e. problems in which the decision process is split into two stages: first, the optimal values of the "decision" variables are computed; then, once the uncertainty on the data is revealed, the values of the "recourse" variables are computed. In this thesis, we restrict ourselves to the case where the second-stage ("recourse") variables are continuous. In the first part of the thesis, we concentrate on the theoretical study of such problems. We begin by solving a simplified linear problem in which the uncertainty affects only the right-hand side of the constraints and is modeled by a particular polytope. We further assume that the problem satisfies a "full recourse" property, which guarantees that, whatever values the decision variables take, if those values are feasible then the problem always admits a feasible solution, regardless of the values taken by the uncertain parameters. We then present a method that transforms any robust program into an equivalent robust program whose associated deterministic problem satisfies the full recourse property. Before treating the general case, we first restrict ourselves to the case where the decision variables are integer, and we test our approach on a production problem. Then, after observing that the approach developed in the preceding chapters does not extend naturally to polytopes without 0-1 extreme points, we show how convexity properties of the problem can be used to solve the robust problem in the general case.
From this we derive complexity results for the second-stage problem and for the robust problem. In the remainder of this part, we try to make the best use of the probabilistic information available on the random data to assess the relevance of our uncertainty set. In the second part of the thesis, we study the design of a hybrid electricity production park. More precisely, we optimize a power plant consisting of wind turbines, solar panels, batteries, and a diesel generator, intended to meet a local demand for electrical energy. The goal is to determine the number of wind turbines, solar panels, and batteries to install in order to meet the demand at minimum cost. However, the problem data are highly uncertain: the energy produced by a wind turbine depends on the strength and direction of the wind, the energy produced by a solar panel depends on sunlight, and the electricity demand may be linked to the temperature or to other external parameters. To solve this problem, we first model the deterministic problem as a mixed-integer linear program. We then directly apply the approach of the first part to solve the associated robust problem, and we show that the associated second-stage problem can be solved in polynomial time by a dynamic programming algorithm. Finally, we give some generalizations and improvements for our problem. / Robust optimization is a recent approach to studying problems with uncertain data that does not rely on a precise prerequisite probability model but on mild assumptions about the uncertainties involved in the problem. We study a linear two-stage robust problem with mixed-integer first-stage variables and continuous second-stage variables. We consider column-wise uncertainty and focus on the case where the problem does not satisfy a "full recourse property", which cannot always be guaranteed for real problems. We also study the complexity of the robust problem, which is NP-hard, and prove that it becomes polynomially solvable when a parameter of the problem is fixed. We then apply this approach to the design of a stand-alone hybrid system composed of wind turbines, solar photovoltaic panels, and batteries. The aim is to determine the optimal number of photovoltaic panels, wind turbines, and batteries needed to serve a given demand while minimizing the total cost of investment and use. We also study properties of the second-stage problem, in particular showing that it can be solved in polynomial time using dynamic programming.
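The two-stage structure described above (a first-stage decision, then adversarial uncertainty, then continuous recourse) can be illustrated by brute force on a toy capacity-sizing instance. All numbers, names, and the vertex-enumeration shortcut below are invented for illustration; the shortcut relies on the worst case of a linear recourse being attained at a vertex of the uncertainty set, and it is not the thesis's algorithm.

```python
from scipy.optimize import linprog

def recourse_cost(x, demand):
    """Second-stage LP: buy extra capacity y >= demand - x at unit cost 2."""
    # min 2*y  subject to  y >= max(0, demand - x)
    res = linprog(c=[2.0], bounds=[(max(0.0, demand - x), None)])
    return res.fun

# First stage: install integer capacity x at unit cost 1.  The uncertain
# demand lies in the interval [3, 5]; for this linear recourse the worst
# case is attained at an endpoint (a vertex of the uncertainty set).
candidates = range(0, 7)        # small discrete first-stage set
vertices = [3.0, 5.0]

def total_worst_cost(x):
    return 1.0 * x + max(recourse_cost(x, d) for d in vertices)

best_x = min(candidates, key=total_worst_cost)
```

Here the worst-case total cost is `x + 2 * max(0, 5 - x)`, so installing capacity equal to the highest demand vertex is optimal; realistic instances replace the enumeration with the decomposition schemes the thesis develops.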
23

A Quick-and-Dirty Approach to Robustness in Linear Optimization

Karimi, Mehdi January 2012 (has links)
We introduce methods for dealing with linear programming (LP) problems with uncertain data, using the notion of weighted analytic centers. Our methods are based on close interaction with the decision maker (DM) and seek solutions that satisfy most of the DM's important criteria and goals. Starting from the drawbacks of existing methods for dealing with uncertainty in LP, we explain how our methods improve on most of them. We prove that, besides many practical advantages, our approach is theoretically as strong as robust optimization. Interactive cutting-plane algorithms are developed for concave and quasi-concave utility functions. We present some probabilistic bounds for feasibility and evaluate our approach through computational experiments.
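A sketch of the weighted-analytic-center computation the method builds on, assuming a plain damped-Newton scheme on the weighted log-barrier; the weights, the box example, and the solver details are illustrative, not the thesis's interactive cutting-plane algorithm.

```python
import numpy as np

def weighted_analytic_center(A, b, w, x0, n_iter=50):
    """Damped Newton iterations for the weighted analytic center of
    {x : Ax <= b}, i.e. the maximizer of sum_i w_i * log(b_i - a_i.x).
    Raising a constraint's weight pulls the center away from it, which
    is the lever an interactive method can hand to the decision maker."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        s = b - A @ x                        # slacks, must stay positive
        g = A.T @ (w / s)                    # gradient of the negated barrier
        H = A.T @ ((w / s**2)[:, None] * A)  # Hessian (positive definite)
        dx = np.linalg.solve(H, -g)
        t = 1.0                              # damp to keep strict feasibility
        while np.any(b - A @ (x + t * dx) <= 0.0):
            t *= 0.5
        x = x + t * dx
    return x

# The segment 0 <= x <= 1 written as two inequalities.  Equal weights
# give the midpoint; tripling the weight of x <= 1 drags the center to 1/4.
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, 0.0])
center  = weighted_analytic_center(A, b, np.array([1.0, 1.0]), [0.5])
center2 = weighted_analytic_center(A, b, np.array([3.0, 1.0]), [0.5])
```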
24

A comparative simulation study of robust estimators of standard errors /

Johnson, Natalie, January 2007 (has links) (PDF)
Project (M.S.)--Brigham Young University. Dept. of Statistics, 2007. / Includes bibliographical references (p. 57-59).
25

An optimization approach to plant-controller co-design /

Russell, Jared S. January 2009 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 2009. / Typescript. Includes bibliographical references (leaves 74-76).
26

Mathematical optimization techniques for cognitive radar networks

Rossetti, Gaia January 2018 (has links)
This thesis discusses mathematical optimization techniques for waveform design in cognitive radars. These techniques have been designed with an increasing level of sophistication, starting from a bistatic model (i.e. two transmitters and a single receiver) and ending with a cognitive network (i.e. multiple transmitting and multiple receiving radars). The environment under investigation always features strong signal-dependent clutter and noise. All algorithms are based on an iterative waveform-filter optimization. The waveform optimization is based on convex optimization techniques and the exploitation of initial radar waveforms characterized by desired auto and cross-correlation properties. Finally, robust optimization techniques are introduced to account for the assumptions made by cognitive radars on certain second order statistics such as the covariance matrix of the clutter. More specifically, initial optimization techniques were proposed for the case of bistatic radars. By maximizing the signal to interference and noise ratio (SINR) under certain constraints on the transmitted signals, it was possible to iteratively optimize both the orthogonal transmission waveforms and the receiver filter. Subsequently, the above work was extended to a convex optimization framework for a waveform design technique for bistatic radars where both radars transmit and receive to detect targets. The method exploited prior knowledge of the environment to maximize the accumulated target return signal power while keeping the disturbance power to unity at both radar receivers. The thesis further proposes convex optimization based waveform designs for multiple input multiple output (MIMO) based cognitive radars. All radars within the system are able to both transmit and receive signals for detecting targets. The proposed model investigated two complementary optimization techniques. 
The first aims at optimizing the signal to interference and noise ratio (SINR) of a specific radar while keeping the SINRs of the remaining radars at desired levels. The second optimizes the SINR of all radars using a max-min criterion. To account for possible mismatches between actual and estimated parameters, this thesis includes robust optimization techniques. Initially, the multistatic, signal-dependent model was tested against existing worst-case and probabilistic methods. These methods proved overly conservative and too generic for the considered signal-dependent clutter scenario. Therefore a new approach was derived in which uncertainty was assumed directly on the radar cross-section and Doppler parameters of the clutter. Approximations based on Taylor series were invoked to make the optimization problem convex and subsequently determine robust waveforms with specific SINR outage constraints. Finally, this thesis introduces robust optimization techniques for through-the-wall radars. These are also cognitive but rely on different optimization techniques than the ones previously discussed. By noticing the similarities between the minimum variance distortionless response (MVDR) problem and the matched-illumination one, this thesis introduces robust optimization techniques that account for uncertainty in environment-related parameters. Various performance analyses demonstrate the effectiveness of all the above algorithms in providing a significant increase in SINR in environments affected by very strong clutter and noise.
27

Robust optimization for discrete structures and non-linear impact of uncertainty

Espinoza García, Juan Carlos 28 September 2017 (has links)
The objective of this thesis is to propose efficient solutions to decision problems that affect citizens' lives and that rely on uncertain data. On the application side, we study two location problems with an impact on public space: the location of new housing, and the location of mobile vendors in urban areas. Location problems are not a new topic in the literature; however, for these two problems, which rely on choice models of consumer purchasing behavior, the uncertainty in the model creates a special case that allows us to extend the Robust Optimization literature. The contributions of this thesis apply to a variety of generic optimization problems. / We address decision problems under uncertain information with non-linear structures of parameter variation, and devise solution methods in the spirit of Bertsimas and Sim's Γ-robustness approach. Although the non-linear impact of uncertainty often introduces discrete structures into the problem, we provide, for tractability, the conditions under which the complexity class of the nominal model is preserved in the robust counterpart. We extend the Γ-robustness approach along three avenues. First, we propose a generic case of non-linear impact of parameter variation and model it with a piecewise-linear approximation of the impact function. We show that the subproblem of determining the worst-case variation can be dualized despite the discrete structure of the piecewise function. Next, we build a robust model for the location of new housing, where the non-linearity is introduced by a choice model, and propose a solution combining Γ-robustness with a scenario-based approach. We show that the subproblem is tractable and leads to a linear formulation of the robust problem.
Finally, we model the demand in a Location Problem through a Poisson Process inducing, when demands are uncertain, non-linear structures of parameter variation. We propose the concept of Nested Uncertainty Budgets to manage uncertainty in a tractable way through a hierarchical structure and, under this framework, obtain a subproblem that includes both continuous and discrete deviation variables.
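The Γ-robustness mechanism of Bertsimas and Sim that the thesis extends can be stated in a few lines for a single linear cost row; the numbers below are invented, and the sort-based shortcut for integer Γ is shown in place of the dual LP used in full robust counterparts.

```python
import numpy as np

def gamma_worst_case(a, d, x, gamma):
    """Worst-case value of a.x when up to `gamma` coefficients of `a`
    may each deviate from their nominal value by at most the matching
    entry of `d` (Bertsimas-Sim Gamma-robustness, integer gamma).
    The adversary spends its budget on the gamma largest d_i * |x_i|."""
    dev = np.sort(np.asarray(d) * np.abs(np.asarray(x)))[::-1]
    return float(np.dot(a, x) + dev[:gamma].sum())

# Nominal cost 1*2 + 2*1 + 3*1 = 7; possible deviations d_i * |x_i| are
# [2, 1, 3]; with budget gamma = 2 the adversary adds the two largest.
wc = gamma_worst_case(a=[1.0, 2.0, 3.0], d=[1.0, 1.0, 3.0],
                      x=[2.0, 1.0, 1.0], gamma=2)
```

The thesis's non-linear and nested-budget settings replace this inner maximization with the piecewise-linear and Poisson-driven subproblems discussed above.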
28

Context Informed Statistics in Two Cases: Age Standardization and Risk Minimization

Lin, Zihan 24 October 2018 (has links)
When faced with death counts stratified by age, analysts often calculate a crude mortality rate (CMR) as a single summary measure, obtained by simply dividing total death counts by total population counts. However, the crude mortality rate is not appropriate for comparing different populations, because of the strong impact of age on mortality and the possibility that the populations have different age structures. While a family of age-adjustment methods seeks to collapse age-specific mortality rates into a single measure free of the confounding effect of age structure, we focus on one of them, "direct age standardization," which summarizes and compares age-specific mortality rates by adopting a reference population. While qualitative insights related to age standardization are often discussed, we seek to approximate the age-standardized mortality rate of a population from the corresponding CMR and the 90th quantile of its population distribution. This approximation is most useful when age-specific mortality data are unavailable. In addition, we provide quantitative insights related to age standardization. We derive our model from mathematical insights drawn from the explication of exact calculations, and validate it using empirical data for a large number of countries under a large number of circumstances. We also extend the application of our approximation model to other age-standardized mortality indicators, such as cause-specific mortality rates and potential years of life lost. In the second part of the thesis, we consider the formulation of a general risk management procedure in which risk needs to be measured and then mitigated. The formulation admits an optimization representation and requires as input distributional information about the underlying risk factors.
Unfortunately, for most risk factors it is difficult to identify the distribution in full detail, and, more problematically, the risk management procedure can be prone to errors in the input distribution. In particular, one of the most important pieces of distributional information is the covariance, which captures the spread of and correlation among risk factors. We study the issue of covariance uncertainty in the problem of mitigating tail risk: by admitting an uncertainty set for the covariance of the risk factors, we propose a robust optimization model that minimizes risk under the worst-case scenario, which is especially valuable when data are insufficient and the number of risk factors is large. We then transform our model into a computationally solvable one and test it using real-world data.
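The crude and directly age-standardized rates contrasted above reduce to short formulas; a minimal sketch with hypothetical two-age-group data shows how age structure confounds the CMR while the standardized rate removes it.

```python
def crude_rate(deaths, population):
    """Crude mortality rate: total deaths over total population."""
    return sum(deaths) / sum(population)

def direct_standardized_rate(deaths, population, ref_population):
    """Direct age standardization: apply each age group's observed
    mortality rate to a common reference population, removing the
    confounding effect of differing age structures."""
    rates = [d / p for d, p in zip(deaths, population)]
    return sum(r * w for r, w in zip(rates, ref_population)) / sum(ref_population)

# Hypothetical two-age-group data (young, old): the study population is
# half old, while the reference population has a younger structure.
deaths     = [10, 90]
population = [1000, 1000]
reference  = [3000, 1000]

cmr  = crude_rate(deaths, population)                        # 0.05
asmr = direct_standardized_rate(deaths, population, reference)  # 0.03
```

The CMR of 0.05 overstates mortality relative to the younger reference structure; weighting the age-specific rates 0.01 and 0.09 by the reference population yields the standardized 0.03.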
29

Optimizing Surgical Scheduling Through Integer Programming and Robust Optimization

Geranmayeh, Shirin January 2015 (has links)
This thesis proposes and verifies a number of optimization models for re-designing a master surgery schedule so as to minimize peak inpatient load on the ward. All models include limitations on operating room and surgeon availability. Surgeons' preference for a consistent weekly schedule over a cycle is included. The uncertainty in patients' length of stay was incorporated using discrete probability distributions unique to each surgeon. Furthermore, robust optimization was used to protect against uncertainty in the number of inpatients a surgeon may send to the ward per block. Different scenarios were developed to explore the impact of varying operating room availability on each day of the week. The models were solved using CPLEX and verified with an Arena simulation model.
30

Optimization-based approaches to non-parametric extreme event estimation

Mottet, Clementine Delphine Sophie 09 October 2018 (has links)
Modeling extreme events is one of the central tasks in risk management and planning, as catastrophes and crises put human lives and financial assets at stake. A common approach to estimating the likelihood of extreme events, based on extreme value theory (EVT), studies the asymptotic behavior of the "tail" portion of the data and suggests suitable parametric distributions to fit, backed up by their limiting behavior as the data size or the excess threshold grows. We explore an alternative approach to estimating extreme events, inspired by recent advances in robust optimization. Our approach represents information about tail behavior as constraints and estimates a target extremal quantity of interest (e.g., the tail probability above a given high level) by solving an optimization problem for a conservative estimate, subject to constraints that encode the tail information and capture beliefs about the distributional shape of the tail. We first study programs where the feasible region is restricted to distribution functions with convex tail densities, a feature shared by all common parametric tail distributions. We then extend this work by generalizing the feasible region to distribution functions with monotone derivatives and bounded or infinite moments. In both cases, we study the statistical implications of the resulting optimization problems. By investigating their optimality structures, we also show how the worst-case tail generally behaves as a linear combination of polynomially decaying tails. Numerically, we reduce these optimization problems to tractable forms that admit solution schemes via linear-programming-based techniques.
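A finite-dimensional sketch of the idea: encode tail-shape beliefs (here a nonincreasing, convex density with known total tail mass) as linear constraints on a discretized density, then maximize the tail probability beyond a cutoff with an LP. The grid, mass, and cutoff below are invented for illustration; the thesis works with continuous densities on unbounded support rather than a truncated grid.

```python
import numpy as np
from scipy.optimize import linprog

n, h = 100, 0.1          # grid points and spacing beyond the tail onset
mass = 0.05              # assumed probability mass in the whole tail
k = 50                   # estimate the probability beyond grid point k

c = np.zeros(n)
c[k:] = -h               # maximize tail mass <=> minimize its negative

A_ub, b_ub = [], []
for i in range(n - 1):       # nonincreasing: p[i+1] - p[i] <= 0
    row = np.zeros(n); row[i + 1] = 1.0; row[i] = -1.0
    A_ub.append(row); b_ub.append(0.0)
for i in range(1, n - 1):    # convex: -p[i-1] + 2 p[i] - p[i+1] <= 0
    row = np.zeros(n); row[i - 1] = -1.0; row[i] = 2.0; row[i + 1] = -1.0
    A_ub.append(row); b_ub.append(0.0)

A_eq = [np.full(n, h)]       # total tail mass is fixed
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=np.array(A_eq), b_eq=[mass],
              bounds=[(0.0, None)] * n)
worst_tail_prob = -res.fun
```

On this instance the monotonicity constraint forces the worst case to the constant density, so the conservative estimate is `mass * (n - k) / n`; richer shape constraints, as in the thesis, tighten or relax that bound.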
