31

Annual Exceedance Probability Analysis

Gardner, Masako Amai 14 July 2005 (has links) (PDF)
Annual Exceedance Probability (AEP) is the method used by the U.S. Army Corps of Engineers (USACE) to determine the probability of flooding caused by the failure of a levee or other flood control structure. This method shows the probability of flooding at only one particular location at a time. To overcome this limitation of AEP, a new method of studying flood probability, called an AEP map, was presented. By using hydrologic and hydraulic modeling software, an AEP map can be created to determine and visualize the spatial distribution of the probability of flooding. An AEP map represents a continuous solution of the probability of flooding and can be used to derive not only the limits of the typical 100-year inundation but those of any other return period, including the 20-year, 50-year, and 500-year storm flood. The AEP map can be more useful than traditional flood hazard maps, since it makes it possible to evaluate the probability of flooding at any location within the floodplain. In the process of creating an AEP map, it is necessary to run a number of simulations in order to accurately represent the probability distribution of flooding. The objective of this research is to demonstrate, given a desktop computer of today's capacity, the convergence of AEP maps after a reasonable number of simulations, so that users have guidelines for deciding how many simulations are necessary. The Virgin River, UT is the primary study area for this research, with the Gila River, AZ also used to support the results. This research demonstrates the convergence of AEP maps by illustrating the convergence of water surface elevations computed as part of the hydraulic simulation leading up to the floodplain delineation model. If the average water surface elevations converge, then the resulting floodplain delineations (AEP maps) should also converge. The results show that AEP maps do converge within a reasonable number of simulations. This research also shows the convergence of floodplain areas to demonstrate the convergence of AEP maps.
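A minimal sketch of the convergence check described above, with a made-up lognormal surrogate standing in for the hydraulic model (the thesis uses actual hydrologic/hydraulic simulations of the Virgin and Gila Rivers; all numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_wse(rng):
    # Placeholder for one hydraulic simulation: returns a water surface
    # elevation (WSE, in metres) at a single floodplain cell.
    return 1000.0 + rng.lognormal(mean=0.5, sigma=0.3)

wse = np.array([simulate_wse(rng) for _ in range(2000)])
running_mean = np.cumsum(wse) / np.arange(1, len(wse) + 1)

# Declare convergence when the running mean has moved less than 1 cm
# over the last 100 Monte Carlo runs.
drift = np.abs(running_mean[-100:] - running_mean[-1])
print("converged:", drift.max() < 0.01, "after", len(wse), "runs")
```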
32

Threat Assessment and Proactive Decision-Making for Crash Avoidance in Autonomous Vehicles

Khattar, Vanshaj 24 May 2021 (has links)
Threat assessment and reliable motion prediction of surrounding vehicles are among the major challenges in autonomous vehicles' safe decision-making. Predicting a threat in advance can give an autonomous vehicle enough time to avoid crashes or near-crash situations. Most vehicles on roads are human-driven, making it challenging to predict their intentions and movements due to the inherent uncertainty in their behaviors. Moreover, different driver behaviors pose different kinds of threats. Various driver behavior predictive models have been proposed in the literature for motion prediction. However, these models cannot be trusted entirely due to human drivers' highly uncertain nature. This thesis proposes a novel trust-based driver behavior prediction and stochastic reachable set threat assessment methodology for various dangerous situations on the road. This trust-based methodology allows autonomous vehicles to quantify the degree of trust in their predictions to generate the probabilistically safest trajectory. This approach can be instrumental in near-crash scenarios where no collision-free trajectory exists. Three different driving behaviors are considered: normal, aggressive, and drowsy. Hidden Markov Models are used for driver behavior prediction. A "trust" in the detected driver is established by combining four driving features: longitudinal acceleration, lateral acceleration, lane deviation, and velocity. A stochastic reachable set-based approach is used to model these three driving behaviors. Two measures of threat are proposed, Current Threat and Short-Term Prediction Threat, which quantify the present and future probability of a crash. The proposed threat assessment methodology resulted in a lower rate of false positives and false negatives. This probabilistic threat assessment methodology is used to address the second challenge in autonomous vehicle safety: crash avoidance decision-making. This thesis presents a fast, proactive decision-making methodology based on Stochastic Model Predictive Control (SMPC). A proactive decision-making approach exploits the surrounding human-driven vehicles' intent to assess the future threat, which helps generate a safe trajectory in advance, unlike reactive decision-making approaches that do not account for the surrounding vehicles' future intent. The crash avoidance problem is formulated as a chance-constrained optimization problem to account for uncertainty in the surrounding vehicles' motion. These chance constraints ensure a minimum probabilistic safety of the autonomous vehicle by keeping the probability of a crash below a predefined risk parameter. This thesis proposes a tractable and deterministic reformulation of these chance constraints using a convex hull formulation for fast real-time implementation. The controller's performance is studied for different risk parameters used in the chance-constraint formulation. Simulation results show that the proposed control methodology can avoid crashes in most hazardous situations on the road. / Master of Science / Unexpected situations frequently arise on the road and lead to crashes. An NHTSA study reported that around 94% of car crashes can be attributed to driver errors and misjudgments, such as drunk driving, fatigue, or reckless driving. Fully self-driving cars can significantly reduce the frequency of such accidents.
Testing of self-driving cars has recently begun on certain roads, and it is estimated that one in ten cars will be self-driving by the year 2030. This means that these self-driving cars will need to operate in human-driven environments and interact with human-driven vehicles. It is therefore crucial for autonomous vehicles to understand the way humans drive in order to avoid collisions and interact safely with human-driven vehicles. Detecting a threat in advance and generating a safe trajectory for crash avoidance are among the major challenges faced by autonomous vehicles. We have proposed a reliable decision-making algorithm for crash avoidance in autonomous vehicles. Our framework addresses two core challenges of crash avoidance decision-making: 1. The outside challenge: reliable motion prediction of surrounding vehicles to continuously assess the threat to the autonomous vehicle. 2. The inside challenge: generating a safe trajectory for the autonomous vehicle in case of a predicted future threat. The outside challenge is to predict the motion of surrounding vehicles. This requires building a reliable model through which the future evolution of their position states can be predicted. Building these models is not trivial, as the surrounding vehicles' motion depends on human driver intentions and behaviors, which are highly uncertain. Various driver behavior predictive models have been proposed in the literature. However, most do not quantify trust in their predictions. We have proposed a trust-based driver behavior prediction method which combines all sensor measurements to output the probability (trust value) of a certain driver being "drowsy", "aggressive", or "normal". This method allows the autonomous vehicle to choose how much to trust a particular prediction. Once a picture of the surrounding vehicles is established, we can generate safe trajectories in advance, which is the inside challenge. Most existing approaches use stochastic optimal control methods, which are computationally expensive and impractical for fast real-time decision-making in crash scenarios. We have proposed a fast, proactive decision-making algorithm to generate crash avoidance trajectories based on Stochastic Model Predictive Control (SMPC). We reformulate the SMPC probabilistic constraints as deterministic constraints using a convex hull formulation, allowing for faster real-time implementation. This deterministic SMPC implementation ensures in real time that the vehicle maintains a minimum probabilistic safety.
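For intuition on the chance-constraint reformulation mentioned above: the standard trick for Gaussian uncertainty is to tighten the constraint by a quantile of the predicted spread. The thesis itself uses a convex hull reformulation, which differs from this sketch, and the gap statistics below are invented:

```python
from scipy.stats import norm

# Chance constraint P(gap >= d_safe) >= 1 - eps on the stochastic gap to a
# lead vehicle becomes deterministic once the gap is (assumed) Gaussian:
#   mu_gap - norm.ppf(1 - eps) * sigma_gap >= d_safe
eps = 0.05                     # risk parameter: tolerate 5% violation probability
mu_gap, sigma_gap = 12.0, 2.5  # predicted mean / std of the gap in metres
d_safe = 5.0                   # minimum safe following distance

tightened = mu_gap - norm.ppf(1 - eps) * sigma_gap
print("constraint satisfied:", tightened >= d_safe)
```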
33

Hybrid Multi-Objective Optimization Models for Managing Pavement Assets

Wu, Zheng 14 February 2008 (has links)
Increasingly tight budgets, changes in government roles and functions, declines in staff resources, and demands for increased accountability in the transportation field have brought unprecedented challenges for state transportation officials at all management levels. Systematic methodologies are being developed to approach these challenges, both for effective management of a specific type of infrastructure (e.g., pavements and bridges) and for holistically managing all types of infrastructure assets. In particular, the intrinsic characteristics of the highway system make the use of multi-objective optimization techniques particularly attractive for managing highway assets. Recognizing the need for effective tradeoff tools and the limitations of state-of-practice analytical models and tools in highway asset management, the main objective of this dissertation was to develop a performance-based asset management framework that uses multi-objective optimization techniques and consists of stand-alone but logically interconnected optimization models for different management levels. Based on a critical review of popular multi-objective optimization techniques and their applications in highway asset management, a synergistic integration of complementary multi-criteria optimization techniques is recommended for the development of practical and efficient decision-supporting tools. Accordingly, the dissertation first proposes and implements a probabilistic multi-objective model for performance-based pavement preservation programming that uses the weighted-sum method and chance constraints. This model can handle multiple incommensurable and conflicting objectives while considering probabilistic constraints related to the available budget over the planning horizon, but it is found more suitable for problems with a small number of objective functions because of its computational intensity. To enhance this model, a hybrid model that requires less computing time and systematically captures the decision maker's preferences on multiple objectives is developed by combining the analytic hierarchy process and goal programming. This model is further extended to capture the relative importance within the optimization constraints, making it suitable for allocating funding across multiple districts in a decentralized state department of transportation. Finally, as a continuation of the above models for the succeeding management level, a project selection model capable of incorporating qualitative factors (e.g., equity, user satisfaction) into the decision-making is developed. This model combines k-means clustering, the analytic hierarchy process, and integer linear programming. All the models are logically interconnected in a comprehensive resource allocation framework. Their feasibility, practicality, and potential benefits are illustrated through various case studies, and recommendations for further developments are provided. / Ph. D.
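A brief sketch of how analytic hierarchy process (AHP) weights, as used in the hybrid model above, can be derived from a pairwise comparison matrix; the three objectives and the comparison values are hypothetical:

```python
import numpy as np

# Pairwise comparisons (Saaty 1-9 scale) for three hypothetical objectives,
# e.g. pavement condition vs. safety vs. cost. Values are illustrative only.
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(P)
k = eigvals.real.argmax()               # principal eigenvalue (Perron root)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                            # AHP priority weights

n = P.shape[0]
CI = (eigvals.real.max() - n) / (n - 1) # consistency index
CR = CI / 0.58                          # 0.58 = random index for n = 3
print(w, CR)                            # CR < 0.1 => acceptably consistent
```

These weights would then enter the weighted-sum or goal-programming objective as the decision maker's preferences.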
34

Enhancements in Markovian Dynamics

Ali Akbar Soltan, Reza 12 April 2012 (has links)
Many common statistical techniques for modeling multidimensional dynamic data sets can be seen as variants of one (or multiple) underlying linear/nonlinear model(s). These statistical techniques fall into two broad categories: supervised and unsupervised learning. The emphasis of this dissertation is on unsupervised learning under multiple generative models. For linear models, this has been achieved through collective observations and derivations made by previous authors over the last few decades. Factor analysis, polynomial chaos expansion, principal component analysis, Gaussian mixture clustering, vector quantization, and Kalman filter models can all be unified as variations of unsupervised learning under a single basic linear generative model. Hidden Markov modeling (HMM), however, is categorized as unsupervised learning under multiple linear/nonlinear generative models. This dissertation is primarily focused on hidden Markov models (HMMs). The first half of this dissertation studies enhancements to the theory of hidden Markov modeling, in three branches: 1) a robust, closed-form parameter estimation solution to the expectation maximization (EM) process of HMMs for the case of elliptically symmetric densities; 2) a two-step HMM, with a combined state sequence via an extended Viterbi algorithm for smoother state estimation; and 3) a duration-dependent HMM, for estimating the expected residency frequency in each state. The second half of the dissertation studies three novel applications of these methods: 1) the application of Markov switching models to bifurcation theory in nonlinear dynamics; 2) a game theory application of HMM, based on the fundamental theory of card counting, with an example on the game of Baccarat; and 3) trust modeling and the estimation of trustworthiness metrics in cyber security systems via Markov switching models. With the duration-dependent HMM, we achieved a better estimate of the expected duration of stay in each regime. With the robust, closed-form solution to the EM algorithm, we achieved robustness against outliers in the training data set as well as higher computational efficiency in the maximization step of the EM algorithm. By means of the two-step HMM, we achieved smoother probability estimation with higher likelihood than the standard HMM. / Ph. D.
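For reference, a compact NumPy implementation of the standard Viterbi decoder that the extended two-step algorithm above builds on (this is the textbook version, not the thesis's extension):

```python
import numpy as np

def viterbi(log_A, log_B_obs, log_pi):
    """Most-likely hidden state path of an HMM (log domain).
    log_A[i, j]    : log transition probability i -> j
    log_B_obs[t, i]: log emission probability of observation t under state i
    log_pi[i]      : log initial state probability
    """
    T, N = log_B_obs.shape
    delta = np.zeros((T, N))            # best log score ending in each state
    psi = np.zeros((T, N), dtype=int)   # backpointers
    delta[0] = log_pi + log_B_obs[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # N x N: prev state -> next
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B_obs[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):      # backtrack
        path[t] = psi[t + 1][path[t + 1]]
    return path
```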
35

Influence of meteorological network density on hydrological modeling using input from the Canadian Precipitation Analysis (CaPA)

Abbasnezhadi, Kian 31 March 2017 (has links)
The Canadian Precipitation Analysis (CaPA) system has been developed by Environment and Climate Change Canada (ECCC) to optimally combine different sources of information to estimate precipitation accumulation across Canada. The system combines observations from different networks of weather stations and radar measurements with the background information generated by ECCC's Regional Deterministic Prediction System (RDPS), derived from the Global Environmental Multiscale (GEM) model. The main scope of this study is to assess the importance of weather stations, when combined with the background information, for hydrological modeling. A new approach to meteorological network design, a stochastic hydro-geostatistical scheme that is particularly useful for augmenting data-sparse networks, is proposed and investigated. The approach stands out from similar approaches in that it includes a data assimilation component based on the paradigm of an Observing System Simulation Experiment (OSSE), a technique used to simulate data assimilation systems in order to evaluate the sensitivity of the analysis to a new observation network. The proposed OSSE-based algorithm develops gridded stochastic precipitation and temperature models to generate synthetic time series assumed to represent the 'reference' atmosphere over the basin. The precipitation realizations are used to simulate synthetic observations, associated with hypothetical station networks of various densities, and synthetic background data, which in turn are assimilated in CaPA to realize various pseudo-analyses. The reference atmosphere and the pseudo-analyses are then compared through hydrological modeling in WATFLOOD. By comparing the flow rates, the relative performance of each pseudo-analysis associated with a specific network density is assessed. The simulations show that as the network density increases, the accuracy of the hydrological signature of the CaPA precipitation products improves hyperbolically up to a certain limit, beyond which adding more stations to the network does not yield further accuracy. This study identifies an observation network density that can satisfy the hydrological criteria, as well as the threshold at which assimilated products outperform numerical weather prediction outputs. It also underlines the importance of augmenting observation networks in small river basins to better resolve mesoscale weather patterns and thus improve the predictive accuracy of streamflow simulation. / May 2017
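A small sketch of the hyperbolic density-skill relationship reported above, fitting s(x) = s_max * x / (k + x) to invented points to locate the density beyond which more stations add little:

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented data for illustration: hydrological skill (e.g. a Nash-Sutcliffe-
# style score) versus station density; the thesis derives such curves from
# OSSE pseudo-analyses run through WATFLOOD.
density = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)      # stations / unit area
skill   = np.array([0.35, 0.52, 0.66, 0.75, 0.80, 0.82, 0.83])

def hyperbolic(x, smax, k):
    return smax * x / (k + x)

(smax, k), _ = curve_fit(hyperbolic, density, skill)
# 95% of the asymptotic skill is reached at x = 19 * k: a candidate
# network-design threshold beyond which extra stations add little.
print("plateau skill %.2f, 95%% of plateau near density %.1f" % (smax, 19 * k))
```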
36

Uso do nariz eletrônico (e-nose) como instrumento de pré-classificação de óleos e gorduras residuais (OGR) destinados à produção de biodiesel / Use of the electronic nose (e-nose) as an instrument for pre-classification of waste cooking oil (WCO) destined to biodiesel production

Batista, Pollyanna Souza 22 June 2018 (has links)
Currently, the use of waste cooking oil (WCO) as a raw material for biodiesel production in Brazil represents less than 1% of the total. The main limitation is that, after the frying process, the oil can acquire characteristics that make it unsuitable for producing biofuel via the traditional production route. To make the reuse of WCO economically feasible, it is important to develop simple, low-cost methods capable of evaluating its potential as a raw material. In this context, this work aimed to evaluate the use of the electronic nose (e-nose) in the selection of WCO for biodiesel production, replacing conventional physico-chemical analysis methods. Thirty-six samples of WCO from domestic and commercial use were selected, and their physico-chemical characteristics were obtained by analysis of acidity level, peroxide level, density, and kinematic viscosity. Biodiesel was produced from the WCO by alkaline transesterification at 60 °C for 2 h, using ethanol at a WCO/alcohol molar ratio of 1/9 and potassium hydroxide (KOH) as the catalyst at 1% m/m. The biodiesel samples were characterized according to the specifications of the National Agency of Petroleum, Natural Gas and Biofuels (ANP) with respect to ester content, acidity level, density, and kinematic viscosity. The WCO samples were characterized in terms of their olfactory profile using the electronic nose, with the readings interpreted by a stochastic model and quadratic discriminant analysis. The model allowed a qualitative evaluation of the parameters of interest without the need for physico-chemical tests, with a precision of 80% to 92%. The results demonstrate that the e-nose is a promising tool for predicting biodiesel quality based on the olfactory profile of a WCO sample.
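A minimal sketch of the quadratic discriminant analysis step using scikit-learn; the sensor matrix here is random noise, so the printed accuracy will be near chance, unlike the 80% to 92% precision obtained on the real e-nose profiles:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# X: one row per WCO sample, columns = e-nose sensor responses (olfactory
# profile). y: quality class from the physico-chemical reference analyses.
# Both are synthetic stand-ins; channel count is a hypothetical choice.
rng = np.random.default_rng(0)
X = rng.normal(size=(36, 8))        # 36 samples, 8 hypothetical sensor channels
y = rng.integers(0, 2, size=36)     # 0 = unsuitable, 1 = suitable for biodiesel

qda = QuadraticDiscriminantAnalysis()
acc = cross_val_score(qda, X, y, cv=4).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```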
37

Integration of New Technologies into Existing Mature Process to Improve Efficiency and Reduce Energy Consumption

Ahmed, Sajjad 17 June 2009 (has links)
Optimal operation of plants is becoming more important due to increasing competition and small, changing profit margins for many products. One major reason has been the realization by industry that potentially large savings can be achieved by improving processes. Growth rates and profitability are much lower now, and international competition has increased greatly. The industry is faced with the need to manufacture quality products while minimizing production costs and complying with a variety of safety and environmental regulations. As industry is confronted with the challenge of moving toward a cleaner and more sustainable path of production, new technologies are needed to meet industrial requirements. In this research, a new methodology is proposed to integrate new technologies into existing processes. Research shows that new technologies must be carefully selected and adopted to match the complex requirements of an existing process. The proposed methodology is based on four major steps. If the improvement in the process is not sufficient to meet business needs, new technologies can be considered. Application of a new technology is always perceived as a potential threat; therefore, financial risk assessment and reliability risk analysis help alleviate the risk of investment. An industrial case study from the literature was selected to implement and validate the new methodology. The case study is a planning problem concerning the design of a fleet of generating stations owned and operated by the electric utility company Ontario Power Generation (OPG). The impact of new technology integration on the performance of a power grid consisting of a variety of power generation plants was evaluated. The reduction in carbon emissions is projected to be accomplished through a combination of fuel switching, fuel balancing, and switching to new technologies: carbon capture and sequestration. The fuel-balancing technique decreases carbon emissions by adjusting the operation of the fleet of existing electricity-generating stations; fuel-switching involves switching from carbon-intensive fuels to less carbon-intensive fuels, for instance from coal to natural gas; carbon capture and sequestration are applied to meet carbon emission reduction requirements. Existing power plants with existing technologies consist of fossil fuel stations, nuclear stations, hydroelectric stations, wind power stations, pulverized coal stations, and a natural gas combined cycle, while hypothesized power plants with new technologies include solar stations, wind power stations, pulverized coal stations, a natural gas combined cycle, and an integrated gasification combined cycle with and without capture and sequestration. The proposed methodology includes financial risk management in the framework of a two-stage stochastic program for energy planning under uncertainty in demand and fuel price. A deterministic mixed integer linear programming formulation is extended to a two-stage stochastic programming model in order to take into account random parameters that have discrete and finite probabilistic distributions. Thus, the expected value of the total cost of power generation is minimized while the carbon emission reduction objective is achieved. Furthermore, conditional value at risk (CVaR), a widely preferred risk measure in financial risk management, is incorporated within the framework of two-stage mixed integer programming.
The resulting mathematical formulation, called the mean-risk model, is applied to minimize the expected value. The process is formulated as a mixed integer linear programming model, implemented in GAMS (General Algebraic Modeling System), and solved using CPLEX, a commercial solver embedded in GAMS. The computational results demonstrate the effectiveness of the proposed methodology. The optimization model is applied to the existing Ontario Power Generation (OPG) fleet. Four demand-growth scenarios (1.0%, 5.0%, 10%, and 20%) are considered in addition to a base-load demand. A sensitivity analysis is carried out to investigate the effect of parameter uncertainties, such as uncertainty in coal and natural gas prices. The optimization results demonstrate how to achieve the carbon emission mitigation goal with and without new technologies, and how minimizing costs affects the configuration of the OPG fleet in terms of generation mix and capacity mix. The selected new technologies are assessed in order to determine the risks of investment. Electricity costs with new technologies are lower than with the existing technologies. A 60% CO2 reduction can be achieved at 20% growth in base-load demand with new technologies. The total cost of electricity increases as CO2 reduction or electricity demand increases. However, there is no significant change in the CO2 reduction cost when CO2 reduction increases with new technologies. The total cost of electricity increases when fuel prices increase. The total cost of electricity also increases with financial risk management, in order to lower the risk. Therefore, more electricity is produced so that the industry stays on the safe side.
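A compact illustration of the CVaR term in the mean-risk objective, computed over sampled scenario costs with the Rockafellar-Uryasev tail-average definition; the cost distribution and the risk weight lambda are invented:

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional value at risk: mean of the worst (1 - alpha) tail."""
    var = np.quantile(losses, alpha)   # value at risk at level alpha
    return losses[losses >= var].mean()

# Toy second-stage generation costs over equiprobable demand/fuel-price
# scenarios (illustrative numbers, not the OPG data).
rng = np.random.default_rng(7)
costs = 100 + 15 * rng.standard_normal(10_000)

lam = 0.5   # risk weight in the mean-risk objective E[cost] + lam * CVaR
print("E:", costs.mean(), "CVaR95:", cvar(costs),
      "mean-risk:", costs.mean() + lam * cvar(costs))
```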
39

[en] AN INTEGRATED MODEL FOR LOGISTICS NETWORK DESIGN OF FACILITY LOCATION, PRODUCTION, TRANSPORTATION AND INVENTORY DECISIONS / [pt] UM MODELO INTEGRADO PARA O PROJETO DE REDES LOGÍSTICAS COM DECISÕES DE LOCALIZAÇÃO DE INSTALAÇÕES, PRODUÇÃO, TRANSPORTE E ESTOQUES

MARCELO MACIEL MONTEIRO 12 July 2016 (has links)
[en] This thesis develops a mathematical formulation for an integrated and flexible logistics network design problem that includes choices of facility location, transportation, production, and inventories. The network design considers the selection of vendors, plants, warehouses, and transportation modes, with the assignment of products to manufacturing plants and warehouses, and also accounts for inventory holding and procurement costs in the logistics network. The resulting formulation is a Mixed Integer Non-Linear Programming (MINLP) model for a single period with stochastic demand. Since the problem is NP-hard, the Outer-Approximation algorithm was used to solve the proposed model, and it was tested on three different instance classes (scenarios).
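A toy MINLP in the same spirit (binary facility opening plus a nonlinear inventory term), solved with the Outer-Approximation strategy via Pyomo's MindtPy; the model, data, and solver choices (glpk and ipopt must be installed) are illustrative assumptions, not the thesis's formulation:

```python
import pyomo.environ as pyo

# Single hypothetical warehouse: open it (binary y) and choose a flow x,
# with a fixed opening cost, a linear transport cost, and a nonlinear
# (square-root) safety-stock holding cost. All data are made up.
m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(0, 100))              # product flow through the warehouse
m.y = pyo.Var(domain=pyo.Binary)            # 1 if the warehouse is opened
m.demand = pyo.Constraint(expr=m.x >= 40)   # demand must be served
m.link = pyo.Constraint(expr=m.x <= 100 * m.y)  # flow only if opened
m.cost = pyo.Objective(
    expr=500 * m.y + 2.0 * m.x + 30 * pyo.sqrt(1 + m.x),
    sense=pyo.minimize,
)

# Outer-Approximation alternates an NLP subproblem with an MILP master.
pyo.SolverFactory("mindtpy").solve(
    m, strategy="OA", mip_solver="glpk", nlp_solver="ipopt"
)
print(pyo.value(m.y), pyo.value(m.x), pyo.value(m.cost))
```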
40

Utilisation d'un panel SNPs très basse densité dans les populations en sélection de petits ruminants / Use of a very low density SNPs panel for small ruminant breeding programs

Raoul, Jérôme 28 November 2017 (has links)
Breeding programs aim to provide the industry with breeding stock of high genetic value. Knowledge of molecular markers of individuals' genomes and of causal mutations opens new possibilities for the design of breeding programs. Using deterministic and stochastic simulations, the technical and economic benefits of using a very-low-density molecular marker panel were assessed in sheep and goat breeding populations, with the following results: i) using such a panel to increase the number of paternal filiations, when pedigree recording is limited, is not always profitable; ii) the strategy for managing ovulation genes that maximizes the economic return of the breeding program was determined by optimization, and simple-to-implement strategies yielding returns close to the maximum were proposed; iii) at constant cost, a genomic selection program based on a very-low-density panel is more efficient than current programs based on progeny testing of males.
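A back-of-envelope reading of result (iii) via the breeder's equation, annual genetic gain = i * r * sigma_A / L: genomic selection can trade prediction accuracy (r) for a shorter generation interval (L). All numbers below are assumptions chosen only for illustration:

```python
def annual_gain(i, r, sigma_a, L):
    """Breeder's equation: selection intensity i, accuracy r,
    additive genetic SD sigma_a, generation interval L (years)."""
    return i * r * sigma_a / L

sigma_a = 1.0  # additive genetic SD, in trait units (assumed)

# Progeny testing: high accuracy but long interval (rams proven late).
progeny = annual_gain(i=1.4, r=0.90, sigma_a=sigma_a, L=5.5)
# Genomic selection: lower accuracy from a low-density panel, short interval.
genomic = annual_gain(i=1.4, r=0.60, sigma_a=sigma_a, L=3.0)

print(f"progeny testing: {progeny:.3f} / yr, genomic: {genomic:.3f} / yr")
```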
