About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
481

A failure detection system design methodology

Chow, Edward Yik January 1981 (has links)
Thesis (Sc.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1981. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Includes bibliographical references. / by Edward Yik Chow. / Sc.D.
482

Essays on Approximation Algorithms for Robust Linear Optimization Problems

Lu, Brian Yin January 2016 (has links)
Solving optimization problems under uncertainty has been an important topic since the appearance of mathematical optimization in the mid-20th century. George Dantzig’s 1955 paper, “Linear Programming under Uncertainty”, is considered one of the ten most influential papers in Management Science [26]. The methodology introduced in Dantzig’s paper is called stochastic programming, since it assumes an underlying probability distribution of the uncertain input parameters. However, stochastic programming suffers from the “curse of dimensionality”, and knowing the exact distribution of the input parameters may not be realistic. Robust optimization, on the other hand, models the uncertainty using a deterministic uncertainty set; the goal is to optimize against the worst-case scenario from the uncertainty set. In recent years, many studies in robust optimization have been conducted, and we refer the reader to Ben-Tal and Nemirovski [4–6], El Ghaoui and Lebret [19], Bertsimas and Sim [15, 16], Goldfarb and Iyengar [23], and Bertsimas et al. [8] for a review of robust optimization. Computing an optimal adjustable (or dynamic) solution to a robust optimization problem is generally hard. This motivates us to study the hardness of approximation of the problem and to provide efficient approximation algorithms. In this dissertation, we consider adjustable robust linear optimization problems with packing and covering formulations and their approximation algorithms. In particular, we study the performance of static solutions and affine solutions as approximations for the adjustable robust problem. Chapters 2 and 3 consider the two-stage adjustable robust linear packing problem with uncertain second-stage constraint coefficients. For general convex, compact and down-monotone uncertainty sets, the problem is often intractable, since it requires computing a solution for all possible realizations of the uncertain parameters [22].
In particular, for a fairly general class of uncertainty sets, we show that the two-stage adjustable robust problem is NP-hard to approximate within a factor better than Ω(log n), where n is the number of columns of the uncertain coefficient matrix. On the other hand, a static solution is a single (here-and-now) solution that is feasible for all possible realizations of the uncertain parameters and can be computed efficiently. We study the performance of static solutions as an approximation for the adjustable robust problem and relate their optimality to a transformation of the uncertainty set. With this transformation, we show that for a fairly general class of uncertainty sets, the static solution is optimal for the adjustable robust problem. This is surprising, since the static solution is widely perceived as highly conservative. Moreover, when the static solution is not optimal, we provide an instance-based tight approximation bound that is related to a measure of non-convexity of the transformation of the uncertainty set. We also show that for two-stage problems, our bound is at least as good as (and in many cases significantly better than) the bound given by the symmetry of the uncertainty set [11, 12]. Moreover, our results can be generalized to the case where the objective coefficients and the right-hand side are also uncertain. In Chapter 3, we focus on two-stage problems with a family of column-wise and constraint-wise uncertainty sets, where any constraint describing the set involves entries of only a single column or a single row. This is a fairly general class of uncertainty sets for modeling constraint coefficient uncertainty. Moreover, it is the family of uncertainty sets that gives the preceding hardness result.
On the positive side, we show that a static solution is an O(\log n · \min(\log \Gamma, \log(m+n)))-approximation for the two-stage adjustable robust problem, where m and n denote the numbers of rows and columns of the constraint matrix and \Gamma is the maximum possible ratio of upper bounds of the uncertain constraint coefficients. Therefore, for constant \Gamma, the performance bound for static solutions surprisingly matches the hardness of approximation for the adjustable problem. Furthermore, in general the static solution provides nearly the best efficient approximation for the two-stage adjustable robust problem. In Chapter 4, we extend the result of Chapter 2 to a multi-stage adjustable robust linear optimization problem. In particular, we consider the case where the choice of the uncertain constraint coefficient matrix for each stage is independent of the others. In real-world applications, decision problems often involve multiple stages, and an iterative implementation of a two-stage solution may be suboptimal for the multi-stage problem. We consider the static solution for the adjustable robust problem and the transformation of the uncertainty sets introduced in Chapter 2, and we show that the static solution is optimal for the adjustable robust problem when the transformation of the uncertainty set for each stage is convex. Chapter 5 considers a two-stage adjustable robust linear covering problem with an uncertain right-hand-side parameter. As mentioned earlier, such problems are often intractable due to the astronomically many extreme points of the uncertainty set. We introduce a new approximation framework in which we consider a “simple” set that is “close” to the original uncertainty set, such that the adjustable robust problem can be solved efficiently over this extended set. We show that the approximation bound is related to a geometric factor that represents the Banach-Mazur distance between the two sets.
Using this framework, we provide approximation bounds that are better than the bounds given by an affine policy in [7] for a large class of interesting uncertainty sets. For instance, we obtain an m^{1/4}-approximation for the two-stage adjustable robust problem with a hypersphere uncertainty set, whereas the affine policy has an approximation ratio of O(\sqrt{m}). Moreover, our bound for the general p-norm ball is m^{\frac{p-1}{p^2}}, as opposed to the m^{1/p} given by an affine policy.
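The static-solution idea discussed above reduces, for box uncertainty in a packing LP with nonnegative variables, to a single deterministic LP over the worst-case coefficient matrix. A minimal numerical sketch; the instance, and the use of SciPy's `linprog`, are illustrative assumptions, not taken from the dissertation:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-variable packing instance (all numbers are illustrative).
c = np.array([1.0, 1.0])            # objective: maximize c @ x
A_nom = np.array([[1.0, 0.5],
                  [0.5, 1.0]])      # nominal constraint coefficients
A_dev = np.array([[0.5, 0.0],
                  [0.0, 0.5]])      # maximal upward deviations
b = np.array([1.0, 1.0])

# For a packing problem with x >= 0, the worst case over a box
# uncertainty set A_ij in [A_nom_ij, A_nom_ij + A_dev_ij] is attained
# at the upper bounds, so the static robust counterpart is one plain LP.
A_worst = A_nom + A_dev
res = linprog(-c, A_ub=A_worst, b_ub=b, bounds=[(0, None)] * 2)
x_static = res.x
print(x_static, -res.fun)   # static solution and its guaranteed value
```

The static solution is feasible for every realization in the box, which is exactly what makes it a "here-and-now" decision.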
483

Robust Process Monitoring for Continuous Pharmaceutical Manufacturing

Mariana Moreno (5930069) 03 January 2019 (has links)
Robust real-time process monitoring is a challenge for continuous pharmaceutical manufacturing. Sensors and models have been developed to make process monitoring more robust, but they still need to be integrated in real time to produce reliable estimates of the true state of the process. Dealing with random and gross errors in the process measurements in a systematic way is a potential solution. In this work, we present such a systematic framework, which for a given sensor network and given measurement uncertainties predicts the most likely state of the process. As a result, real-time process decisions, whether for process control, exceptional-events management or process optimization, can be based on the most reliable estimate of the process state.

Data reconciliation (DR) and gross error detection (GED) have been developed to accomplish robust process monitoring. DR and GED mitigate the effects of random measurement errors and non-random sensor malfunctions. This methodology has been used for decades in other industries (e.g., oil and gas), but it has yet to be applied in the pharmaceutical industry. Steady-state data reconciliation (SSDR) is the simplest form of DR and offers the benefit of short computation times, but it requires the sensor network to be redundant (i.e., the number of measurements has to be greater than the degrees of freedom).

In this dissertation, the SSDR framework is defined and implemented in two different continuous tableting lines: direct compression and dry granulation. Results for a continuous direct compression tableting line at two pilot-plant scales, with different equipment and sensor configurations, are reported. Results for the dry granulation continuous tableting line are also reported at pilot-plant scale in an end-to-end operation. New measurements for the dry granulation continuous tableting line are also proposed.

A comparison is made between the model-based approach (SSDR-M) and the purely data-driven approach (SSDR-D), the latter based on principal component constructions. If the process is linear or mildly nonlinear, SSDR-M and SSDR-D give comparable results for variable estimation and GED. The reconciled measurement values generated using SSDR-M satisfy the model equations and can be used together with the model to estimate unmeasured variables. In the presence of stronger nonlinearities, however, SSDR-M and SSDR-D will differ. SSDR successfully estimates the true state of the process in the presence of gross errors, as long as steady state is maintained and the redundancy requirement is met, and gross errors are detected whether SSDR-M or SSDR-D is used.
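For linear balance equations, the steady-state data reconciliation step described above is a weighted least-squares projection with a closed form. A minimal sketch on a made-up three-stream mass balance (all numbers are hypothetical, not from the pilot plants):

```python
import numpy as np

# Illustrative three-stream mass balance: f1 + f2 - f3 = 0 (A x = 0).
A = np.array([[1.0, 1.0, -1.0]])          # linear balance (model) equations
y = np.array([10.1, 5.2, 14.5])           # raw measurements (do not close)
Sigma = np.diag([0.1, 0.1, 0.1])          # measurement error covariance

# Weighted least squares: minimize (y - x)' Sigma^-1 (y - x)
# subject to A x = 0.  Closed form via Lagrange multipliers:
K = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T)
x_hat = y - (K @ (A @ y)).ravel()
print(x_hat)   # reconciled values satisfy the balance exactly
```

The residual `A @ y` (here 0.8) is distributed across the streams in proportion to their uncertainties, which is what makes redundancy (more measurements than degrees of freedom) essential.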
484

Um modelo de decisão para produção e comercialização de produtos agrícolas diversificáveis. / A decision model for production and commerce of diversifiable agricultural products.

Oliveira, Sydnei Marssal de 20 June 2012 (has links)
The rise of a large number of people in developing countries into the middle class at the beginning of the 21st century, combined with the political movement to shift the energy base toward biofuels, has been increasing pressure on agricultural commodity prices and presenting new opportunities and management scenarios for producers of these commodities, especially those that can be diversified into many subproducts serving different markets, such as food, chemicals, textiles and energy. In this new environment, producers can benefit by properly dividing production among the different subproducts, choosing the best moment for commercialization through inventories, and controlling their risk exposure through positions in the derivatives market. The current literature rarely addresses diversification and its impact on agricultural production and commercialization decisions, and this thesis therefore proposes a decision model, founded on portfolio selection theory, that determines the division of production among several subproducts, the proportions to be stored and the most suitable moment for commercialization, and finally the positions in futures contracts for hedging purposes. Additionally, this thesis proposes that the model be capable of dealing with uncertainty in parameters, especially parameters with a high impact on the results, such as expected future returns. As a third contribution, this work proposes a more sophisticated price-forecasting model applicable to agricultural commodities: a hybrid, hierarchical model composed of two stages, the first founded on the theory of stochastic processes and the Kalman filter, and the second, which refines the results of the first, based on neural networks in order to take exogenous variables into account. The hybrid price-forecasting model was tested with real data from the Brazilian and Indian sugar-ethanol markets, generating promising results, while the decision model for production, commercialization, storage and hedging parameters proved to be a useful decision-support tool after being tested with real data from the Brazilian sugar-ethanol market and the US corn, ethanol and biodiesel markets.
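The first stage of the hybrid forecasting model described above rests on stochastic processes and the Kalman filter. A minimal scalar sketch, assuming a random-walk (local level) price model; the noise variances and the synthetic price series are made up for illustration:

```python
import numpy as np

def kalman_level(prices, q=1e-4, r=1e-2):
    """Scalar Kalman filter for a random-walk (local level) price model.

    q: process noise variance, r: measurement noise variance --
    both are illustrative tuning values, not estimates from real data.
    """
    x, p = prices[0], 1.0          # initial state estimate and variance
    out = []
    for z in prices[1:]:
        p = p + q                  # predict: the level is a random walk
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the new observation
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
obs = 20.0 + 0.1 * rng.standard_normal(200)   # noisy constant "price"
est = kalman_level(obs)
print(est[-1])   # converges toward the underlying level
```

In the thesis's architecture a second, neural-network stage would then correct this filter's output using exogenous variables; here only the first stage is sketched.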
485

Low order modelling for flow simulation, estimation and control / Réduction de modèles pour la simulation, l'estimation et le côntrole d`écoulements

Lombardi, Edoardo 03 February 2010 (has links)
The aim is to develop and test computationally inexpensive tools for flow simulation, estimation and control. Proper orthogonal decomposition (POD) and a Galerkin projection of the equations onto the POD modes are used to build low-order models of the incompressible Navier-Stokes equations. In this work, the flow past a square cylinder is considered in two-dimensional and three-dimensional configurations, with blowing/suction actuators placed on the surface of the cylinder. Several calibration techniques are applied, providing stable and rather accurate models, even for three-dimensional wake flows with complicated vortical structures. A state estimation method, involving flow measurements, is then developed for unsteady flows. Multi-dynamic calibration and efficient sampling techniques are applied to build models that are robust to variations of the control parameters. A linear stability analysis using low-order models linearized around a controlled steady state is also initiated. The presented techniques are applied to flows past the square cylinder at Reynolds numbers between Re = 40 and Re = 300.
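The POD step described above amounts to a singular value decomposition of the mean-subtracted snapshot matrix. A toy sketch on synthetic "snapshots"; the data are fabricated stand-ins with two coherent structures, not the thesis's wake-flow fields:

```python
import numpy as np

# Synthetic snapshot matrix: each column is the "flow field" at one time
# instant (a toy stand-in for velocity snapshots of the cylinder wake).
rng = np.random.default_rng(1)
n_points, n_snap = 400, 60
t = np.linspace(0.0, 4.0 * np.pi, n_snap)
x = np.linspace(0.0, 1.0, n_points)[:, None]
snapshots = (np.sin(2 * np.pi * x) * np.cos(t)          # structure 1
             + 0.5 * np.cos(4 * np.pi * x) * np.sin(t)  # structure 2
             + 0.01 * rng.standard_normal((n_points, n_snap)))  # noise

# POD = SVD of the mean-subtracted snapshot matrix.
mean_flow = snapshots.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)
energy = np.cumsum(S**2) / np.sum(S**2)
modes = U[:, :2]                      # first two POD modes
print(energy[1])   # two modes capture nearly all fluctuation energy
```

A Galerkin projection of the Navier-Stokes equations onto `modes` would then yield the low-order ODE model that the calibration techniques refine.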
486

Robustní odhady v modelu CAPM / Robust estimators for CAPM

Steinhübelová, Monika January 2012 (has links)
The thesis describes the theory of the capital asset pricing model (CAPM) and the issue of robust estimation. Robust methods are an effective tool for achieving better estimates than the classical least squares method when the assumption of normally distributed errors fails or when outlying observations are present in the data. The theory of M-estimates is treated in detail and then applied, in the practical part of the thesis, to the multidimensional CAPM model; the theory of R- and L-estimates is explained in less detail. A simulation study compares simultaneous estimates in the multivariate model with estimates computed individually for each equation under the assumption that the equations are mutually independent.
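The M-estimation the thesis applies can be sketched, for a single CAPM-style regression, as Huber-weighted iteratively reweighted least squares. The data, the tuning constant and the helper function below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def huber_fit(x, y, k=1.345, n_iter=50):
    """Huber M-estimate of intercept and slope via iteratively
    reweighted least squares (IRLS).

    k = 1.345 gives roughly 95% efficiency under normal errors;
    the data below are made up to show the effect of one outlier.
    """
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]     # start from OLS
    for _ in range(n_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745  # MAD scale
        scale = max(scale, 1e-8)
        u = np.abs(r / scale)
        w = np.where(u <= k, 1.0, k / u)            # Huber weights
        W = np.sqrt(w)
        beta = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)[0]
    return beta

x = np.arange(1.0, 11.0)
y = 2.0 * x
y[-1] = 100.0                 # gross outlier; true slope is 2
beta = huber_fit(x, y)
print(beta)                   # slope pulled far less than the OLS fit
```

Ordinary least squares on these data gives a slope above 6; the Huber fit stays near the true slope because the outlier's weight shrinks as the scale estimate tightens around the inliers.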
487

Avaliação do efeito da rigidez estrutural sobre a dinâmica veicular. / Evaluation of the effect of structural stiffness on vehicle dynamics.

Antônio Carlos Botosso 27 May 2015 (has links)
This work evaluates the influence of structural stiffness on the dynamic behavior of a vehicle, based on a constant-radius maneuver in a quasi-static condition and on a sensitivity analysis performed through computational numerical simulation. Two models were developed to evaluate the lateral load transfer of the vehicle when subjected to lateral acceleration (the constant-radius maneuver): a complete vehicle model built in a multibody environment, with the body and the sub-chassis modeled as flexible bodies, and an analytical model that accounts for the torsional stiffness of the structure (obtained from a finite element model). With the analytical model meeting the level of correlation required for the purpose of this work, the variations in load transfer due to the torsional stiffness of the structure are discussed. Then, in order to determine which vehicle behaviors beyond lateral load transfer are affected by structural stiffness, the robust engineering method is used to identify the external conditions that produce different dynamic behavior as the structural stiffness varies. This study makes it possible to identify maneuvers or situations in which accounting for structural flexibility in a multibody model, or in a real physical condition, is relevant and can affect the safety, handling and comfort of the vehicle.
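For a torsionally rigid chassis, the quasi-static lateral load transfer in a constant-radius maneuver has a simple closed form, and the roll stiffnesses set the front/rear split; chassis compliance, the subject of the thesis, perturbs exactly this split. A sketch with made-up vehicle parameters:

```python
# Quasi-static lateral load transfer for a rigid chassis, plus the split
# between axles by roll stiffness.  All vehicle parameters are illustrative.
G = 9.81          # gravitational acceleration, m/s^2

def lateral_load_transfer(mass, a_y, cg_height, track):
    """Total lateral load transfer Delta_F = m * a_y * h / t (newtons)."""
    return mass * a_y * cg_height / track

def axle_split(total, k_roll_front, k_roll_rear):
    """Distribute the transfer between axles in proportion to roll
    stiffness -- the simplification a torsionally rigid body allows;
    structural (torsional) compliance alters this split."""
    k_total = k_roll_front + k_roll_rear
    return total * k_roll_front / k_total, total * k_roll_rear / k_total

total = lateral_load_transfer(mass=1500.0, a_y=0.8 * G,
                              cg_height=0.55, track=1.5)
front, rear = axle_split(total, k_roll_front=60000.0, k_roll_rear=40000.0)
print(total, front, rear)
```

A finite torsional stiffness between the axles acts like a spring in series with the roll stiffnesses, shifting `front`/`rear` away from this rigid-body split, which is what the analytical model in the thesis quantifies.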
488

Projeto de controlador robusto para rastreamento de tensão aplicado a um restaurador dinâmico de tensão (DVR). / Robust control design for voltage tracking loop of dynamic voltage restorers (DVR).

Bruno Augusto Ferrari 16 October 2015 (has links)
The dynamic voltage restorer (DVR) is a power-electronics-based solution for mitigating the effects of voltage sags and swells on sensitive loads. Its operation basically consists of injecting correction voltages in series with the grid in order to cancel the sag or swell in the voltage applied to the load. Typically, the controller structure used in a DVR is composed of an inner current loop and an outer voltage loop; usually a proportional or proportional-integral controller is used in the current loop and a resonant controller in the voltage loop. This work presents the design of a robust controller for the DVR voltage tracking loop that guarantees robust stability of the system with respect to load parameter variations. Moreover, the proposed controller ensures pre-defined values for the tracking error and for the rejection of the disturbance that distorted load currents cause in the voltage injected by the DVR. The robust voltage controller is synthesized with the H∞ mixed-sensitivity design method. All performance and robustness specifications are imposed as constraints on the frequency response of the closed-loop system (the sensitivity and complementary sensitivity functions). The performance of the proposed controller is verified, and the design methodology is validated, by simulations and experiments carried out on a low-power DVR prototype.
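The mixed-sensitivity constraints just described are imposed on the frequency responses of the sensitivity S and complementary sensitivity T. A toy check of these functions for a first-order plant with a PI controller; the plant and gains are illustrative, not the DVR design:

```python
import numpy as np

# Frequency-response check of the mixed-sensitivity quantities on a toy
# loop: first-order plant P(s) = 1/(s + 1) with a PI controller.  The
# gains are arbitrary illustrative values.
Kp, Ki = 2.0, 5.0

def loop(s):
    P = 1.0 / (s + 1.0)
    C = Kp + Ki / s
    return P * C

w = np.logspace(-2, 3, 400)          # frequency grid, rad/s
s = 1j * w
L = loop(s)
S = 1.0 / (1.0 + L)                  # sensitivity: tracking error
T = L / (1.0 + L)                    # complementary sensitivity
print(np.abs(S[0]), np.abs(T[-1]))   # small at low / high frequency
```

Because S + T = 1 at every frequency, pushing |S| down at low frequencies (good tracking and disturbance rejection) and |T| down at high frequencies (robustness to unmodeled dynamics) is exactly the trade-off the H∞ mixed-sensitivity weights encode.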
489

Robust portfolio management with multiple financial analysts

Lu, I-Chen (Jennifer) January 2015 (has links)
Portfolio selection theory, developed by Markowitz (1952), is one of the best-known and most widely applied methods for allocating funds among possible investment choices, where investment decision making is a trade-off between the expected return and the risk of the portfolio. Many portfolio selection models have been developed on the basis of Markowitz's theory. Most of them assume that complete investment information is available and that it can be accurately extracted from historical data. However, such complete information never exists in reality. There are many kinds of ambiguity and vagueness that cannot be captured in historical data but still need to be considered in portfolio selection. For example, to address the uncertainty caused by estimation errors, the robust counterpart approach of Ben-Tal and Nemirovski (1998) has been employed frequently in recent years. Robustification, however, often leads to a more conservative solution; as a consequence, one of the most common critiques of the robust counterpart approach is the excessively pessimistic character of the resulting robust asset allocation. This thesis develops new approaches that improve the performance of the robust counterpart approach by incorporating additional sources of investment information, so that the optimal portfolio can be more reliable and, at the same time, achieve a greater return.
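The Markowitz trade-off mentioned above has, in its minimum-variance special case, a closed-form solution. A two-asset sketch; the covariance matrix is a made-up example, not estimated from data:

```python
import numpy as np

# Global minimum-variance portfolio under the Markowitz model:
# w* = Sigma^-1 1 / (1' Sigma^-1 1).
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])     # illustrative covariance matrix
ones = np.ones(2)
w = np.linalg.solve(Sigma, ones)
w /= w.sum()                         # normalize: weights sum to one
var_mv = w @ Sigma @ w
var_eq = np.array([0.5, 0.5]) @ Sigma @ np.array([0.5, 0.5])
print(w, var_mv, var_eq)   # min-variance portfolio beats equal weights
```

Robust-counterpart formulations replace `Sigma` (and the expected returns) with uncertainty sets around such estimates, which is precisely where the conservatism criticized in the abstract enters.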
490

Evolution and the possibility of moral knowledge

Wittwer, Silvan January 2018 (has links)
This PhD thesis provides an extended evaluation of evolutionary debunking arguments in meta-ethics. Such arguments attempt to show that evolutionary theory, together with a commitment to robust moral objectivity, leads to moral scepticism: the implausible view that we lack moral knowledge or that our moral beliefs are never justified (e.g. Joyce 2006, Street 2005, Kahane 2011). To establish this, the arguments rely on certain epistemic principles, but most of the epistemic principles appealed to in the literature on evolutionary debunking arguments are imprecise, confused or simply implausible. My PhD aims to rectify that. Informed by debates in cutting-edge contemporary epistemology, Chapter 1 distinguishes three general, independently motivated principles that, combined with evolution, seem to render knowledge of robustly objective moral facts problematic. These epistemic principles state that (i) our often getting the facts right in a given domain requires explanation, and if we cannot provide one, our beliefs about that domain are unjustified; (ii) higher-order evidence of error undermines justification; and (iii) for our beliefs to be justified, our having them must be best explained by the facts they are about. Chapters 2-4 develop and critically assess evolutionary debunking arguments based on those principles, showing that only the one inspired by (iii) succeeds. Chapter 2 investigates the argument that evolution makes it impossible to explain why we often get the moral facts right. I argue that Justin Clarke-Doane's recent response (2014, 2015, 2016, 2017) works, yet neglects an issue about epistemic luck that spells trouble for robust moral objectivity. Chapter 3 discusses the argument that evolution provides higher-order evidence of error regarding belief in robustly objective moral facts.
I show that such an argument falls prey to Katia Vavova's (2014) self-defeat objection, even if evolutionary debunkers tweak their background view on the epistemic significance of higher-order evidence. Chapter 4 develops the argument that evolution, rather than robustly objective moral facts, best explains why we hold our moral beliefs. I offer a systematic, comprehensive defence of that argument against Andreas Mogensen's (2015) charge of explanatory levels confusion, Terrence Cuneo's (2007) companion in guilt strategy, and David Enoch's (2012, 2016) appeal to deliberative indispensability. Chapter 5 brings everything together. It investigates whether robust moral objectivity survives the worry about epistemic luck raised in Chapter 2 and the explanatory challenge developed in Chapter 4. Making progress, however, requires a better idea of how we form true, justified beliefs about and acquire knowledge of robustly objective moral facts. Since it offers the most popular and best-developed epistemology of robustly objective morality, my inquiry in Chapter 5 focuses on contemporary moral intuitionism: the view that moral intuitions can be the source of basic moral knowledge. I argue that its success is mixed. While moral intuitionism has the conceptual tools to tackle the problem of epistemic luck from Chapter 2, it cannot insulate knowledge of robustly objective moral facts against the sceptical worry raised by the evolutionary debunking argument developed in Chapter 4. Thus, evolutionary theory, together with a commitment to robust moral objectivity, does lead to a form of unacceptable moral scepticism.
