21

Approximate Dynamic Programming and Reinforcement Learning - Algorithms, Analysis and an Application

Lakshminarayanan, Chandrashekar January 2015 (has links) (PDF)
Problems involving optimal sequential decision making in uncertain dynamic systems arise in domains such as engineering, science and economics. Such problems can often be cast in the framework of Markov Decision Processes (MDPs). Solving an MDP requires computing the optimal value function and the optimal policy. The ideas of dynamic programming (DP) and the Bellman equation (BE) are at the heart of solution methods. The three important exact DP methods are value iteration, policy iteration and linear programming. The exact DP methods compute the optimal value function and the optimal policy. However, exact DP methods are inadequate in practice because the state space is often large, and one might have to resort to approximate methods that compute sub-optimal policies. Further, in certain cases the system observations are known only in the form of noisy samples, and we need to design algorithms that learn from these samples. In this thesis we study theoretical questions pertaining to approximate and learning algorithms, and also present an application of MDPs in the domain of crowdsourcing. Approximate Dynamic Programming (ADP) methods handle the issue of large state spaces by computing an approximate value function and/or a sub-optimal policy. In this thesis, we are concerned with conditions that result in provably good policies. Motivated by the limitations of the projected Bellman equation (PBE) in conventional linear algebra, we study the PBE in the (min, +) linear algebra. It is well known that deterministic optimal control problems with a cost/reward criterion are (min, +)/(max, +) linear, and ADP methods have been developed for such systems in the literature. However, it is straightforward to show that infinite horizon discounted reward/cost MDPs are neither (min, +) nor (max, +) linear. We develop novel ADP schemes, namely the Approximate Q Iteration (AQI) and Variational Approximate Q Iteration (VAQI), where the approximate solution is a (min, +) linear combination of a set of basis functions whose span constitutes a subsemimodule. We show that the new ADP methods are convergent and we present a bound on the performance of the sub-optimal policy. The Approximate Linear Program (ALP) makes use of linear function approximation (LFA) and offers theoretical performance guarantees. Nevertheless, the ALP is difficult to solve due to the presence of a large number of constraints, and in practice a reduced linear program (RLP) is solved instead. The RLP has a tractable number of constraints sampled from the original constraints of the ALP. Though the RLP is known to perform well in experiments, theoretical guarantees are available only for a specific RLP obtained under idealized assumptions. In this thesis, we generalize the RLP to define a generalized reduced linear program (GRLP), which has a tractable number of constraints that are obtained as positive linear combinations of the original constraints of the ALP. The main contribution here is the novel theoretical framework developed to obtain error bounds for any given GRLP. Reinforcement Learning (RL) algorithms can be viewed as sample-trajectory-based solution methods for solving MDPs. Typically, RL algorithms that make use of stochastic approximation (SA) are iterative schemes taking small steps towards the desired value at each iteration. Actor-critic algorithms form an important sub-class of RL algorithms, wherein the critic is responsible for policy evaluation and the actor is responsible for policy improvement. The actor and critic iterations have different step-size schedules; in particular, the step-sizes used by the actor updates generally have to be much smaller than those used by the critic updates. Such SA schemes that use different step-size schedules for different sets of iterates are known as multi-timescale stochastic approximation schemes. One of the most important conditions required to ensure the convergence of the iterates of a multi-timescale SA scheme is that the iterates need to be stable, i.e., they should be uniformly bounded almost surely. However, the conditions that imply the stability of the iterates in a multi-timescale SA scheme have not been well established. In this thesis, we provide verifiable conditions that imply stability of two-timescale stochastic approximation schemes. As an example, we also demonstrate that the stability of a widely used actor-critic RL algorithm follows from our analysis. Crowdsourcing ("the crowd") is a new mode of organizing work by dividing it into smaller chunks of tasks and outsourcing them to a large, distributed group of people in the form of an open call. Recently, crowdsourcing has become a major pool for human intelligence tasks (HITs) such as image labeling, form digitization, natural language processing, machine translation evaluation and user surveys. Large organizations/requesters are increasingly interested in crowdsourcing the HITs generated out of their internal requirements. Task starvation leads to huge variation in the completion times of tasks posted to the crowd. This is an issue for frequent requesters desiring predictability in completion times, specified in terms of the percentage of tasks completed within a stipulated amount of time. An important task attribute that affects the completion time of a task is its price. However, a pricing policy that does not take the dynamics of the crowd into account might fail to achieve the desired predictability in completion times. Here, we make use of the MDP framework to compute a pricing policy that achieves predictable completion times in simulations as well as real-world experiments.
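
To make the exact-DP baseline concrete, here is a minimal value-iteration sketch for a finite discounted MDP; the toy transition matrices, rewards, discount factor and tolerance are illustrative assumptions, not data from the thesis.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Iterate the Bellman optimality operator to its fixed point.
    P[a] is the |S| x |S| transition matrix of action a; R[a] its reward vector."""
    V = np.zeros(P[0].shape[0])
    while True:
        # Q[a, s]: value of taking action a in state s, then acting greedily
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=0)  # optimal values, greedy policy
        V = V_new

# toy 2-state, 2-action MDP (illustrative numbers only)
P = [np.array([[0.9, 0.1], [0.2, 0.8]]), np.array([[0.1, 0.9], [0.7, 0.3]])]
R = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
V_opt, policy = value_iteration(P, R)
```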
22

調整指數基金的最小成本模型 / Minimal Cost Index Fund Rebalance Problem

蘇代利 Unknown Date (has links)
通常已建立的指數基金,經過一段時間後其追蹤指數的效能已經無法滿足初期建購時的要求,此時管理者便面臨指數基金投資組合的調整問題。本論文融合建構指數基金的方法及最小化交易成本的概念,提出一個新的混合整數線性規劃模型以調整指數基金投資組合。模型亦考慮實務中交易成本、最小交易單位及批量、固定交易費用比率、以及資產總類數等限制。因此,模型包含整數變數及二元變數,求解也較為困難許多。本論文以啟發式演算法增進求解的效率,並以台灣50指數的相關資料做為實證研究的對象。 / After a period of time, an established index fund's index-tracking performance often no longer meets the requirements set when the fund was built, and the manager faces the problem of rebalancing the index fund portfolio. In this paper, we integrate index fund construction methods with the concept of minimizing transaction costs, and propose a new mixed integer linear programming model for rebalancing an index fund portfolio. The model also incorporates practical constraints such as transaction costs, minimum transaction units and lot sizes, fixed proportional transaction fee rates, and a cardinality constraint on the number of assets held. Consequently, the model contains integer and binary variables, which increase the computational complexity of its solution. Because of the difficulty of the MILP problem, a heuristic algorithm is developed to solve it more efficiently. Computational results are presented by applying the model to the Taiwan 50 index.
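
A minimal sketch of the kind of rebalancing MILP the abstract describes, written with PuLP; all prices, holdings, the fee rate, the weight tolerance eps, and the cardinality bound K are invented for illustration, and the thesis's actual model and heuristic are richer than this.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpInteger

n, K, B, eps = 4, 3, 1000.0, 0.05   # stocks, cardinality, budget, weight tolerance
w = [0.4, 0.3, 0.2, 0.1]            # target index weights
h = [5, 0, 8, 2]                    # current holdings (lots)
p = [50.0, 40.0, 20.0, 30.0]        # price per lot
fee = 0.001425                      # proportional fee rate (assumption)

m = LpProblem("rebalance", LpMinimize)
x = [LpVariable(f"x{i}", lowBound=0, cat=LpInteger) for i in range(n)]
buy = [LpVariable(f"b{i}", lowBound=0, cat=LpInteger) for i in range(n)]
sell = [LpVariable(f"s{i}", lowBound=0, cat=LpInteger) for i in range(n)]
z = [LpVariable(f"z{i}", cat=LpBinary) for i in range(n)]

m += lpSum(fee * p[i] * (buy[i] + sell[i]) for i in range(n))  # transaction cost
for i in range(n):
    m += x[i] == h[i] + buy[i] - sell[i]      # inventory balance
    m += p[i] * x[i] <= (w[i] + eps) * B      # stay near the index weight...
    m += p[i] * x[i] >= (w[i] - eps) * B * z[i]   # ...whenever the stock is held
    m += x[i] <= (B / p[i]) * z[i]            # can hold stock i only if selected
m += lpSum(z) <= K                            # cardinality constraint
m.solve()
```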
23

Robust-Intelligent Traffic Signal Control within a Vehicle-to-Infrastructure and Vehicle-to-Vehicle Communication Environment

He, Qing January 2010 (has links)
Modern traffic signal control systems have not changed significantly in the past 40-50 years. The most widely applied traffic signal control systems are still time-of-day, coordinated-actuated systems, since many existing advanced adaptive signal control systems are too complicated and opaque for most people. Recent advances in communications standards and technologies provide the basis for significant improvements in traffic signal control capabilities. In the United States, the IntelliDriveSM program (originally called Vehicle Infrastructure Integration - VII) has identified 5.9 GHz Dedicated Short Range Communications (DSRC) as the primary communications mode for vehicle-to-vehicle (v2v) and vehicle-to-infrastructure (v2i) safety applications, jointly denoted v2x. The ability for vehicles and the infrastructure to communicate information is a significant advance over the current system capability of point presence and passage detection used in traffic control systems. Given the enriched data from IntelliDriveSM, the problem of traffic control can be solved in an innovative, data-driven and mathematical way to produce robust and optimal outputs. In this doctoral research, three different problems within a v2x environment - "enhanced pseudo-lane-level vehicle positioning", "robust coordinated-actuated multiple priority control", and "multimodal platoon-based arterial traffic signal control" - are addressed with statistical techniques and mathematical programming. First, a pseudo-lane-level GPS positioning system is proposed based on an IntelliDriveSM v2x environment. GPS errors can be categorized into common-mode errors and noncommon-mode errors, where common-mode errors can be mitigated by differential GPS (DGPS) but noncommon-mode errors cannot. Common-mode GPS errors are cancelled using differential corrections broadcast from the road-side equipment (RSE). With v2i communication, a high-fidelity roadway layout map (called MAP in the SAE J2735 standard) and satellite pseudo-range corrections are broadcast by the RSE. To enhance and correct lane-level positioning of a vehicle, a statistical process control approach is used to detect significant vehicle driving events such as turning at an intersection or lane-changing. Whenever a turn event is detected, a mathematical program is solved to estimate and update the GPS noncommon-mode errors. Overall, the GPS errors are reduced by corrections to both common-mode and noncommon-mode errors. Second, an analytical mathematical model, a mixed-integer linear program (MILP), is developed to provide robust real-time multiple priority control, assuming penetration of IntelliDriveSM is limited to emergency vehicles and transit vehicles. This is believed to be the first mathematical formulation that accommodates advanced features of modern traffic controllers, such as green extension and vehicle actuations, to provide flexibility in the implementation of optimal signal plans. Signal coordination between adjacent signals is addressed by virtual coordination requests, which behave significantly differently from the current coordination control in a coordinated-actuated controller. The proposed new coordination method can handle both priority and coordination together to reduce and balance delays for buses and automobiles with real-time optimized solutions. The robust multiple priority control problem was simplified as a polynomial cut problem under some reasonable assumptions and applied at a real-world intersection at Southern Ave. & 67 Ave. in Phoenix, AZ on February 22, 2010 and March 10, 2010. The roadside equipment (RSE) was installed in the traffic signal control cabinet and connected with a live traffic signal controller via Ethernet. With the support of Maricopa County's Regional Emergency Action Coordinating (REACT) team, three REACT vehicles were equipped with onboard equipment (OBE). Different priority scenarios were tested, including concurrent requests, conflicting requests, and mixed requests. The experiments showed that the traffic controller was able to perform desirably under each scenario. Finally, a unified platoon-based mathematical formulation called PAMSCOD is presented to perform online arterial (network) traffic signal control while considering multiple travel modes in the IntelliDriveSM environment with high market penetration, including passenger vehicles. First, a hierarchical platoon recognition algorithm is proposed to identify platoons in real time. This algorithm can output the number of platoons approaching each intersection. Second, a mixed-integer linear program (MILP) is solved to determine the future optimal signal plans based on the real-time platoon data (and the platoon requests for service) and the current traffic controller status. Deviating from the traditional common network cycle length, PAMSCOD aims to provide multi-modal dynamical progression (MDP) on the arterial based on the real-time platoon information. The integer feasible solution region is enhanced in order to reduce solution times by assuming a first-come, first-served discipline for the platoon requests on the same approach. Microscopic online simulation in VISSIM shows that PAMSCOD can easily handle two traffic modes, buses and automobiles, jointly, and significantly reduces delays for both modes compared with SYNCHRO-optimized plans.
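
The dissertation does not spell out its statistical process control procedure here, so the following is only a generic sketch of the idea: a CUSUM on per-step heading increments that flags sustained turning; the slack k and threshold h are made-up tuning parameters, not values from this work.

```python
import numpy as np

def detect_turn_events(headings_deg, k=2.0, h=15.0):
    """Flag candidate turn/lane-change events with a two-sided CUSUM
    on heading increments (degrees per sample)."""
    d = np.degrees(np.diff(np.unwrap(np.radians(headings_deg))))
    g_pos = g_neg = 0.0
    events = []
    for t, step in enumerate(d):
        g_pos = max(0.0, g_pos + step - k)   # sustained clockwise drift
        g_neg = max(0.0, g_neg - step - k)   # sustained counter-clockwise drift
        if g_pos > h or g_neg > h:
            events.append(t)                 # candidate driving event
            g_pos = g_neg = 0.0
    return events

# a synthetic 90-degree turn embedded in straight driving
headings = [0.0] * 20 + list(np.linspace(0.0, 90.0, 19)) + [90.0] * 20
print(detect_turn_events(headings))
```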
24

Analysis of the performance of an optimization model for time-shiftable electrical load scheduling under uncertainty

Olabode, John A. 12 1900 (has links)
Approved for public release; distribution is unlimited / To ensure sufficient capacity to handle unexpected demands for electric power, decision makers often over-estimate expeditionary power requirements. As a result, we often use limited resources inefficiently, purchasing more generators and investing in more renewable energy sources than needed to run power systems on the battlefield. Improving the efficiency of expeditionary power units requires better management of load requirements on the power grids and, where possible, shifting those loads to a more economical time of day. We analyze the performance of a previously developed optimization model for scheduling time-shiftable electrical loads in an expeditionary power grid in two experiments. One experiment uses model data similar to the original baseline data, in which expected demand and expected renewable production remain constant throughout the day. The second experiment introduces unscheduled demand and realistic fluctuations in the power production and demand distributions that more closely reflect actual data. Our major findings show that the composition of the grid's power production affects which uncertain factors influence fuel consumption, and that uncertainty in the energy grid system does not always increase fuel consumption by a large amount. We also discover that the generators running the most do not always have the best load factor on the grid, even when optimally scheduled. / Lieutenant Commander, United States Navy
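
As a toy illustration of the kind of load shifting the model performs (not the model analyzed in the thesis), the PuLP sketch below schedules one contiguous time-shiftable load so that generator fuel use is minimized; demands, the solar profile, and the cost rate are all invented.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

T, dur, shift_kw = 24, 3, 40.0                 # hours, load duration, load size
base = [60.0] * 6 + [90.0] * 12 + [70.0] * 6   # kW base demand per hour
solar = [0.0] * 8 + [120.0] * 8 + [0.0] * 8    # kW renewable production
fuel_cost = 0.08                               # $ per kWh generated (assumption)

m = LpProblem("shiftable_load", LpMinimize)
start = [LpVariable(f"start{t}", cat=LpBinary) for t in range(T - dur + 1)]
gen = [LpVariable(f"gen{t}", lowBound=0) for t in range(T)]

m += lpSum(fuel_cost * gen[t] for t in range(T))   # minimize fuel burned
m += lpSum(start) == 1                             # the load runs exactly once
for t in range(T):
    # load is running at hour t if it started in the window [t-dur+1, t]
    running = lpSum(start[s] for s in range(max(0, t - dur + 1), min(t, T - dur) + 1))
    m += gen[t] + solar[t] >= base[t] + shift_kw * running
m.solve()  # the optimum places the load inside the midday solar surplus
```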
25

Documentação e testes da biblioteca genérica de álgebra linear Klein / Tests and documentation of the Klein library

Schmid, Rafael Freitas 12 December 2014 (has links)
Este trabalho descreve a Klein, uma biblioteca genérica para álgebra linear em C++. A Klein facilita o uso de matrizes e vetores, permitindo que o usuário programe de modo similar ao Matlab. Com ela podemos, por exemplo, implementar um passo do método de Newton para a função f através da expressão x = x - inv(jac(x)) * f(x), onde x é o vetor, jac a Jacobiana e inv a inversa. Além disso, por se tratar de uma biblioteca genérica, os tipos envolvidos nestas expressões podem ser escolhidos pelo programador. O trabalho também discute como a biblioteca é testada, tanto do ponto de vista de corretude quanto de desempenho. / We describe Klein, a generic library for linear algebra in C++. It simplifies the use of vectors and matrices and lets the user program as in Matlab. With Klein, one can for instance implement Newton's method as x = x - inv(jac(x)) * f(x), where x is a vector, jac is the Jacobian matrix, inv is the inverse operator, and f(x) is the function whose zero we want to find. Moreover, Klein is generic in the sense that it allows the use of arbitrary scalar types (float, double, intervals, rationals, etc.). We also explain how the library is tested, both for correctness and performance.
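
For comparison, the same Newton step in Python/NumPy (a sketch of ours, not code from Klein); np.linalg.solve replaces the explicit inverse, which is the numerically preferred way to evaluate inv(jac(x)) * f(x).

```python
import numpy as np

def newton_step(f, jac, x):
    """One Newton step: x <- x - inv(jac(x)) * f(x), computed via solve()."""
    return x - np.linalg.solve(jac(x), f(x))

# find a zero of f(x, y) = (x^2 + y^2 - 1, x - y)
f = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])

x = np.array([1.0, 0.5])
for _ in range(10):
    x = newton_step(f, jac, x)   # converges to (sqrt(1/2), sqrt(1/2))
```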
26

Análise termodinâmica de processos de reforma do metano e da síntese Fischer-Tropsch / Thermodynamic analysis of methane reforming processes and Fischer-Tropsch synthesis

Freitas, Antonio Carlos Daltro de, 1986- 20 August 2018 (has links)
Orientador: Reginaldo Guirardello / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Química / Resumo: As reações de reforma de hidrocarbonetos leves, especialmente o gás natural, são reações químicas de elevada importância e representam etapas chave para a produção em larga escala de hidrogênio, para uso em reações de hidrogenação ou em células a combustível, ou de gás de síntese, para aplicação na produção de amônia, metanol ou ainda para a síntese de Fischer-Tropsch (FT). A síntese de Fischer-Tropsch é o principal processo de conversão de hidrocarbonetos leves, como o metano, em hidrocarbonetos maiores, de maior valor agregado; assim, a determinação das condições termodinamicamente favoráveis para a operação deste tipo de processo se torna cada vez mais necessária. Dentro desse contexto, as reações de reforma a vapor, reforma oxidativa, reforma autotérmica, reforma seca, reforma seca autotérmica e reforma seca combinada com reforma a vapor foram termodinamicamente avaliadas com o objetivo de determinar as melhores condições de reação para a produção de gás de síntese e de hidrogênio. Posteriormente, o gás de síntese produzido foi utilizado para a produção de metanol, etanol e hidrocarbonetos lineares, sendo avaliadas as melhores estratégias para a produção de cada um desses compostos. Para isso foram utilizadas as metodologias de minimização da energia de Gibbs, a pressão e temperatura constantes, e de maximização da entropia, a pressão e entalpia constantes. Ambos os casos foram formulados como problemas de otimização na forma de programação não-linear e resolvidos com o solver CONOPT2 do software GAMS® 23.1. A partir dos resultados obtidos com a aplicação da metodologia de minimização da energia de Gibbs, verificou-se que todos os processos de reforma avaliados se mostraram favoráveis para a produção de hidrogênio e/ou de gás de síntese do ponto de vista termodinâmico. A reação de reforma a vapor se destacou para a produção de hidrogênio devido à elevada razão molar H2/CO obtida no produto. A reação de oxidação parcial mostrou bons resultados para a produção de gás de síntese, devido à razão molar H2/CO próxima de 2 no produto. A comparação com dados experimentais permitiu verificar que a metodologia de minimização da energia de Gibbs apresentou boa capacidade de predição e, pela comparação com dados simulados obtidos na literatura, pode-se verificar que a metodologia utilizada pelo presente trabalho está de acordo com os dados publicados. Com os resultados obtidos pela aplicação da metodologia de maximização da entropia, pôde-se verificar que as reações de reforma oxidativa, reforma autotérmica e reforma seca autotérmica apresentaram comportamento autotérmico, tanto para o uso de O2 como para o uso de ar como agente oxidante. O ar mostrou capacidade de diminuir a elevação da temperatura final do sistema, sendo seu uso promissor para evitar pontos quentes no reator. A comparação com dados de perfil térmico de reatores, para as reações de reforma oxidativa e reforma autotérmica, os únicos obtidos na literatura, demonstrou a boa capacidade de predição da metodologia de maximização da entropia para a determinação das temperaturas de equilíbrio das reações.
As análises realizadas pela aplicação da metodologia de minimização da energia de Gibbs para as reações de síntese de metanol, etanol e hidrocarbonetos lineares demonstraram a viabilidade da produção desses compostos. Todas as reações de síntese avaliadas apresentaram grande dependência da influência do catalisador (efeito cinético) para promover a produção dos produtos de interesse. Aplicando-se a metodologia de maximização da entropia, foi possível determinar que todas as reações de síntese apresentaram comportamento exotérmico. As metodologias empregadas, bem como o solver CONOPT2 aplicado no software GAMS® 23.1, se mostraram rápidas e eficazes para a solução dos problemas propostos, com baixos tempos computacionais para todos os casos analisados / Abstract: The reactions of reforming light hydrocarbons, especially natural gas, are chemical reactions of great importance and represent key steps for the large-scale production of hydrogen, for use in hydrogenation reactions or fuel cells, or of synthesis gas, for application in ammonia or methanol production or in Fischer-Tropsch (FT) synthesis. The Fischer-Tropsch synthesis is the main process for converting light hydrocarbons, such as methane, into heavier hydrocarbons of higher added value, so determining the thermodynamically favorable conditions for operating this type of process is required. Within this context, the reactions of steam reforming, oxidative reforming, autothermal reforming, dry reforming, dry autothermal reforming and dry reforming combined with steam reforming were thermodynamically evaluated to determine the best reaction conditions for the production of synthesis gas and hydrogen. For this, we used the methods of Gibbs energy minimization, at constant pressure and temperature, and entropy maximization, at constant pressure and enthalpy. Both cases were formulated as optimization problems in the form of non-linear programming and solved with the solver CONOPT2 in the software GAMS® 23.1. The results obtained by the Gibbs energy minimization method showed that all the reforming processes evaluated are able to produce hydrogen and syngas. The steam reforming reaction showed the greatest potential for hydrogen production, due to the high H2/CO molar ratio obtained in the product. The partial oxidation reaction showed good results for syngas production, due to an H2/CO molar ratio close to 2 in the product. The comparison with experimental data showed that the Gibbs energy minimization method has good predictive ability, and comparison with simulated data from the literature shows that the methodology used in this work agrees with published data for the same methodology. The results obtained using the entropy maximization methodology allowed us to verify that the partial oxidation, autothermal reforming and dry autothermal reforming reactions exhibited autothermal behavior, both when O2 and when air was used as the oxidizing agent. Air showed the ability to reduce the final temperature rise of the system, and its use is promising for avoiding hot spots in the reactor. A comparison with reactor thermal profile data for the partial oxidation and autothermal reforming reactions, the only such data found in the literature, demonstrated the good predictive ability of the entropy maximization methodology for determining the equilibrium temperatures of the reactions.
The analyses performed using the Gibbs energy minimization methodology for the synthesis reactions of methanol, ethanol and linear hydrocarbons demonstrated the feasibility of producing these compounds. All synthesis reactions evaluated were greatly dependent on the influence of the catalyst (kinetic effect) to promote the production of the products of interest. Through the entropy maximization method it was determined that all the synthesis reactions analyzed present exothermic behavior, but under the reaction conditions evaluated here these systems can be considered safe. The methodologies used, implemented in the software GAMS® 23.1 and solved with the solver CONOPT2, proved to be fast and effective for solving the proposed problems, with low computational times in all cases analyzed / Mestrado / Desenvolvimento de Processos Químicos / Mestre em Engenharia Química
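
The Gibbs minimization formulation is a constrained NLP: minimize total Gibbs energy subject to element balances. The SciPy sketch below illustrates that structure for an ideal-gas steam-reforming mixture; the mu0 values are placeholder standard chemical potentials, not data from the dissertation, which solved the problem in GAMS with CONOPT2.

```python
import numpy as np
from scipy.optimize import minimize

R, T, P = 8.314, 1000.0, 1.0                # J/(mol K), K, bar
species = ["CH4", "H2O", "CO", "CO2", "H2"]
mu0 = np.array([-40e3, -240e3, -170e3, -396e3, 0.0])  # placeholder mu0(T), J/mol
A = np.array([[1, 0, 1, 1, 0],              # C atoms per species
              [4, 2, 0, 0, 2],              # H atoms per species
              [0, 1, 1, 2, 0]], float)      # O atoms per species
n0 = np.array([1.0, 2.0, 0.0, 0.0, 0.0])    # feed: 1 CH4 + 2 H2O (mol)
b = A @ n0                                  # total moles of each element

def gibbs(n):
    n = np.maximum(n, 1e-12)                # keep the logarithms finite
    return float(np.sum(n * (mu0 / (R * T) + np.log(P * n / n.sum()))))

res = minimize(gibbs, n0 + 0.1,
               bounds=[(1e-12, None)] * len(species),
               constraints={"type": "eq", "fun": lambda n: A @ n - b})
print(dict(zip(species, res.x)))            # equilibrium composition (mol)
```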
27

On Fractional Realizations of Tournament Score Sequences

Murphy, Kaitlin S. 01 August 2019 (has links)
Contrary to popular belief, we can’t all be winners. Suppose 6 people compete in a chess tournament in which all pairs of players compete directly and no ties are allowed; i.e., 6 people compete in a ‘round robin tournament’. Each player is assigned a ‘score’, namely the number of games they won, and the ‘score sequence’ of the tournament is a list of the players’ scores. Determining whether a given potential score sequence actually is a score sequence proves to be difficult. For instance, (0, 0, 3, 3, 3, 6) is not feasible because two players cannot both have score 0. Neither is the sequence (1, 1, 1, 4, 4, 5), because the sum of the scores is 16, but only 15 games are played among 6 players. This so-called ‘tournament score sequence problem’ (TSSP) was solved in 1953 by the mathematical sociologist H. G. Landau. His work inspired the investigation of round robin tournaments as directed graphs. We study a modification in which the TSSP is cast as a system of inequalities whose solutions form a polytope in n-dimensional space. This relaxation allows us to investigate the possibility of fractional scores. If, in a ‘round-robin’-ish tournament, Players A and B play each other 3 times, and Player A wins 2 of the 3 games, we can record this interaction as a 2/3 score for Player A and a 1/3 score for Player B. This generalization greatly impacts the nature of possible score sequences. We will also entertain an interpretation of these fractional scores as probabilities predicting the outcome of a true round robin tournament. The intersection of digraph theory, polyhedral combinatorics, and linear programming is a relatively new branch of graph theory. These results pioneer research in this field.
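
Landau’s 1953 characterization makes the feasibility check easy to automate: a nondecreasing integer sequence is a tournament score sequence iff every prefix of length k sums to at least C(k,2), with equality for the whole sequence. A small sketch (the helper name is ours):

```python
from math import comb

def is_score_sequence(scores):
    """Landau's condition for round robin tournament score sequences."""
    s = sorted(scores)
    prefix = 0
    for k, v in enumerate(s, start=1):
        prefix += v
        if prefix < comb(k, 2):       # the first k players play C(k,2) games
            return False
    return prefix == comb(len(s), 2)  # all games accounted for

assert not is_score_sequence([0, 0, 3, 3, 3, 6])  # two players can't both be 0
assert not is_score_sequence([1, 1, 1, 4, 4, 5])  # scores sum to 16 > 15 games
assert is_score_sequence([2, 2, 2, 3, 3, 3])      # feasible by Landau's condition
```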
28

Une méthodologie générique de réparation multicritère pour l'optimisation sous incertitude : Application aux problèmes de planification et d'affectation / A generic multi-criteria repair/recovery framework for optimization under uncertainty : Application to planning and assignment problems

Khaled, Oumaima 19 June 2017 (has links)
Plusieurs problématiques de gestion d’opérations peuvent être formalisées avec un problème d’optimisation discret. Ces modèles d’optimisation sont traditionnellement développés sous l’hypothèse que les données d’entrée sont déterministes, non impactées par des changements inattendus ou des incertitudes. Au cours des dernières années, le besoin en modèles performants, incluant des outils efficaces et permettant de réagir de manière optimale aux imprévus (perturbations), n’a cessé de croître. En phase d’exécution d’un système, plusieurs événements imprévus (incertitudes) peuvent le perturber et le faire dévier de son parcours original, voire rendre son exécution impossible. Il est vrai que ces incertitudes peuvent être considérées de manière proactive par le biais d’une optimisation stochastique ou de modèles d’optimisation robustes. Mais même avec des solutions robustes, des événements inattendus peuvent encore se produire, nécessitant de revoir le plan robuste en cours d’exécution. Dans cette thèse, l’objectif est de prendre en compte ces incertitudes de manière réactive dans les modèles. Ainsi, une nouvelle méthodologie générique est proposée pour les problèmes d’optimisation de réparation / récupération. En considérant les solutions réparées / récupérées fournies par cette méthodologie appliquée à un plan initial en cours de mise en oeuvre, un décideur peut vouloir minimiser les coûts d’exploitation, mais aussi limiter les changements par rapport au plan initial. Le problème de réparation / récupération est formulé comme un problème d’optimisation multiobjectif, qui minimise des fonctions spécifiques relatives à divers critères de réparation (pilotés par les choix du décideur). / A wide variety of operations management problems can be formulated and solved as discrete optimization problems. Traditionally, these models have been developed and used under the assumption that the input data are known in advance, not subject to unexpected changes, nor impacted by uncertainty. In recent years, the need for improved models providing efficient tools for quickly and optimally reacting to the occurrence of unexpected events (disruptions) has become an increasingly important issue. In the execution phase, various unanticipated events can disrupt the system, make the plan deviate from its intended course, and even make it infeasible. Uncertainty can be taken into account in a proactive way with stochastic optimization or robust optimization models. However, even with robust solutions, unexpected events can still occur, requiring the robust plan under execution to be reconsidered. In this thesis, we are interested in coping with uncertainty in a reactive way. We propose a new generic methodology for repair/recovery optimization problems. When considering repair/recovery solutions for the initial plan under implementation, the decision-maker may want to minimize operating costs, but also limit the changes with respect to the initial plan. We formulate the repair/recovery problem as a multiobjective optimization problem minimizing specified functions for various repair criteria.
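
The thesis proposes its own generic multi-criteria repair framework; purely as an illustration of trading operating cost against deviation from the initial plan, here is a toy weighted-sum scalarization in PuLP (all data and the alpha weight are invented):

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

tasks, resources, alpha = range(3), range(2), 0.5
cost = [[4, 6], [3, 5], [7, 2]]   # operating cost of task i on resource j
init = [0, 0, 1]                  # resource used by each task in the initial plan

m = LpProblem("repair", LpMinimize)
x = [[LpVariable(f"x{i}{j}", cat=LpBinary) for j in resources] for i in tasks]
operating = lpSum(cost[i][j] * x[i][j] for i in tasks for j in resources)
deviation = lpSum(1 - x[i][init[i]] for i in tasks)   # tasks moved off the plan
m += alpha * operating + (1 - alpha) * deviation      # weighted-sum of criteria
for i in tasks:
    m += lpSum(x[i][j] for j in resources) == 1       # every task reassigned once
m.solve()
```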
29

Evaluating The Performance Of Animal Shelters: An Application Of Data Envelopment Analysis

Heyde, Brandy 01 January 2008 (has links)
The focus of this thesis is the application of data envelopment analysis to understand and evaluate the performance of diverse animal welfare organizations across the United States. The results include identification of the most efficient animal welfare organizations, at least among those that post statistics on their operations, and a discussion of various partnerships that may improve the performance of the more inefficient organizations. The Humane Society of the United States estimates that there are 4,000-6,000 independently run animal shelters across the United States, with an estimated 6-8 million companion animals entering them each year. Unfortunately, more than half of these animals are euthanized. The methods shared in this research illustrate how data envelopment analysis may help shelters improve these statistics through evaluation and cooperation. Data envelopment analysis (DEA) is based on the principle that the efficiency of an organization depends on its ability to transform its inputs into the desired outputs. The result of a DEA model is a single measure that summarizes the relative efficiency of each decision-making unit (DMU) when compared with similar organizations. The DEA linear program uses the most efficient animal shelters in the model to define an efficiency frontier that "envelops" the other DMUs. Individual efficiency scores are calculated by determining how close each DMU is to reaching the frontier. The results shared in this research focus on the performance of 15 animal shelters. A lack of standardized data on individual animal shelter performance limited the ability to review a larger number of shelters and provide more robust results. Various programs are in place within the United States to improve the collection and availability of individual shelter performance data. Specifically, the Asilomar Accords provide a strong framework for doing this and could significantly reduce euthanasia of companion animals if more shelters adopted the practice of collecting and reporting their data in this format. This research demonstrates that combining performance data with financial data within the data envelopment analysis technique can be powerful in helping shelters identify how to better deliver results. The addition of data from other organizations will make the results even more robust and useful for each shelter involved.
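
An input-oriented CCR envelopment LP, the classic DEA formulation, can be solved per shelter with scipy.optimize.linprog; the shelter inputs and outputs below are invented toy numbers, not the thesis's data for the 15 shelters.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X is m inputs x n DMUs; Y is s outputs x n DMUs.
    Variables: [theta, lambda_1, ..., lambda_n]; minimize theta."""
    m, n = X.shape
    s, _ = Y.shape
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.hstack([-X[:, [o]], X])        # sum_j lam_j x_ij <= theta x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # sum_j lam_j y_rj >= y_ro
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun  # efficiency score in (0, 1]

# toy data: inputs = (budget $k, staff), output = adoptions, 4 shelters
X = np.array([[100.0, 80.0, 120.0, 90.0], [10.0, 8.0, 15.0, 7.0]])
Y = np.array([[500.0, 450.0, 480.0, 470.0]])
scores = [dea_ccr_input(X, Y, j) for j in range(X.shape[1])]
```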
