511

Decisions under Risk, Uncertainty and Ambiguity: Theory and Experiments

Martinez-Correa, Jimmy 11 August 2012 (has links)
I combine theory, experiments, and econometrics to disentangle the subtleties and implications of the distinction between risk, uncertainty, and ambiguity. One general conclusion is that the elements of this methodological trilogy are not equally advanced. For example, new experimental tools must be developed to adequately test the predictions of theory. My dissertation is an example of this dynamic between theoretical and applied economics.
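
The risk/ambiguity distinction the abstract draws can be made concrete with a standard maxmin expected-utility example (a minimal sketch, not taken from the dissertation; the lottery payoffs, the prior set, and the CRRA utility are assumptions for illustration): under risk the probability is objectively known, while under ambiguity the decision maker evaluates the act against a whole set of priors.

```python
def crra_utility(x, r=0.5):
    """CRRA utility with relative risk aversion r (value assumed)."""
    return x ** (1 - r) / (1 - r)

win, lose = 100.0, 0.0

# Risk: the winning probability is objectively known.
p_known = 0.5
eu_risk = p_known * crra_utility(win) + (1 - p_known) * crra_utility(lose)

# Ambiguity: the probability is only known to lie in a set of priors; a
# maxmin expected-utility decision maker uses the worst-case prior.
prior_set = [0.3, 0.4, 0.5, 0.6, 0.7]
eu_ambiguous = min(p * crra_utility(win) + (1 - p) * crra_utility(lose)
                   for p in prior_set)

print(f"expected utility under risk: {eu_risk:.3f}")
print(f"maxmin expected utility:     {eu_ambiguous:.3f}")  # strictly lower
```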
512

Horizontal Mergers and Equilibria Comparison in Oligopoly

Akgün, Ugur 27 September 2004 (has links)
This dissertation consists of four pieces of independent work. In broad terms, the two general themes it addresses can be classified as horizontal mergers and the comparison of equilibrium market parameters under Cournot and Bertrand competition.

In my paper "Mergers with supply functions", I analyze the equilibrium effects of a merger in a homogeneous-good industry when firms compete by choosing supply schedules. Firms choose functions that relate their supply to the market price. Cournot competition, where firms commit to quantities, and Bertrand competition, where firms commit to prices, are special cases of supply function competition - they exogenously impose horizontal and vertical supply schedules on the firms. I consider an industry with a fixed capital stock. A merger creates not only a larger firm, but also one with a more efficient cost structure than any of its constituents. I find that any merger results in all firms lowering supply. The decrease in supply by non-participating firms makes any merger profitable. This differs from the effects of mergers under Cournot competition, where the non-participating firms expand their supply, reducing the profitability of a merger. Any merger in an industry with equally distributed capital stock leads to a welfare loss. Finally, a merger is more likely to be welfare enhancing if it increases the "symmetry" in the industry.

In my paper "Mergers under uncertainty", I consider the merger decisions of firms producing differentiated products in a market with idiosyncratic demand shocks. Firms make their merger decisions taking into account that future shocks will hit the industry. All firms share the same information about market conditions at any time; in particular, after the arrival of a shock all firms know all the relevant market parameters, so a merger does not produce any informational advantage. I explore the model under two competition modes. By comparing the incentives for merger with those in a benchmark deterministic market, I find that the uncertainty in the model increases the attractiveness of a merger under price competition, while under Cournot competition its effect on merger profitability is ambiguous. However, if there are incentives to merge in the deterministic case, then uncertainty increases the profitability of the merger.

In my short paper "Comparing Bertrand and Cournot prices: A case of substitutes and complements", I reassess the common wisdom according to which price competition is more competitive than quantity competition. Previous results supporting this view deal with either substitute-products markets or symmetric complement-products markets. I consider the case in which complements and substitutes coexist in a differentiated-products market.
I introduce a set of symmetry criteria for this market and show that when they are fulfilled, there exists a symmetric Bertrand equilibrium with a lower price than the price resulting from any symmetric Cournot equilibrium.

The last piece of work in the thesis, "Innovation in an Asymmetric Setting: Comparing Cournot and Bertrand Equilibria", is joint work with Ioana Chioveanu. This paper compares the outcomes and the dynamic efficiency of Cournot and Bertrand equilibria in a differentiated duopoly with substitute goods, where only one firm can reduce marginal cost before product market competition. We show that, with high substitutability and low innovation costs, R&D levels can be higher under price competition if spillovers are low, while output, consumer surplus, and total welfare can be larger under Cournot if spillovers are high. A new result of this article is that with process innovation, both consumers and producers can be better off under quantity competition. We provide well-defined numerical examples showing that these findings do not depend on the asymmetry of the model. The fact that innovation levels may be higher under Bertrand competition is consistent with the findings of Bester and Petrakis (1993). We identify a parameter equivalence that extends all their results from process to product innovation: in particular, incentives to improve quality may be larger under price competition if substitutability is high.
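
The price ranking at the heart of the third paper can be illustrated numerically (a minimal sketch, not from the thesis; the linear demand specification and all parameter values are assumptions): with symmetric linear demand for substitutes, the Bertrand price falls below the Cournot price.

```python
# Symmetric differentiated duopoly with linear inverse demand
#   p_i = alpha - beta*q_i - gamma*q_j,  0 < gamma < beta  (substitutes)
# and constant marginal cost c. Closed-form symmetric equilibria:
alpha, beta, gamma, c = 10.0, 1.0, 0.5, 1.0

# Cournot: firm i maximizes (p_i - c)*q_i over its quantity q_i.
q_cournot = (alpha - c) / (2 * beta + gamma)
p_cournot = alpha - (beta + gamma) * q_cournot

# Bertrand: invert demand to q_i = a - b*p_i + d*p_j, then maximize over p_i.
a = alpha / (beta + gamma)
b = beta / (beta**2 - gamma**2)
d = gamma / (beta**2 - gamma**2)
p_bertrand = (a + b * c) / (2 * b - d)

print(f"Cournot price:  {p_cournot:.3f}")   # 4.600
print(f"Bertrand price: {p_bertrand:.3f}")  # 4.000 < Cournot, the usual ranking
```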
513

Proactive management of uncertainty to improve scheduling robustness in process industries

Bonfill Teixidor, Anna 18 December 2006 (has links)
Dynamism, responsiveness, and flexibility are essential features in the development of today's society. Globalization trends and fast advances in communication and information technologies mean that everything evolves in a highly dynamic and uncertain environment. The uncertainty involved in a process system becomes a critical problem in decision making, as well as a recognized challenge in the area of Process Systems Engineering (PSE). In the context of scheduling, the decision-support models developed to date, as well as commercial advanced planning and scheduling systems, rely generally on estimated input information, implicitly assuming that a schedule will be executed without deviations. Reacting to the effects of uncertainty at execution time is common practice, but it is not always effective or even possible. The alternative is to address the uncertainty proactively, i.e., at the time of reasoning, exploiting the available knowledge in the modeling procedure itself.

In view of this situation, the following questions arise: what do we understand by uncertainty? How can uncertainty be considered within scheduling modeling systems? What is understood by schedule robustness and flexibility? How can schedule robustness be improved? What are the benefits? This thesis answers these questions in the context of operational analysis in PSE. Uncertainty is managed not from the traditional reactive viewpoint, but through the development of proactive decision-support systems aimed at identifying robust schedules that serve as useful guidance for the lower control level, as well as for dependent entities in a supply chain environment. A basis to formalize the concept of schedule robustness is established. Based on this formalism, variable operation times and equipment breakdowns are first considered as the main uncertainties in short-term production scheduling. The problem is initially modeled using stochastic programming, and a simulation-based stochastic optimization framework is finally developed, which captures the multiple sources of uncertainty, as well as rescheduling strategies, proactively. The procedure-oriented system developed in the context of production scheduling is next extended to include transport scheduling in multi-site systems with uncertain travel times. With this broader operational perspective, the coordination of production and transport activities, considered so far mainly in strategic and tactical analysis, is assessed. The final research point focuses on the effect of demand uncertainty on short-term scheduling decisions. The problem is analyzed from a risk management viewpoint, and alternative measures are assessed and compared to control the performance of the system in the uncertain environment.

Overall, this research work reveals the advantages of recognizing and modeling uncertainty, with the identification of more robust schedules able to adapt to a wide range of possible situations, rather than optimal schedules for a hypothetical scenario.
The management of uncertainty proposed from an operational perspective can be considered a first step towards its extension to the tactical and strategic levels of decision making. The proactive perspective of the problem results in a more realistic view of the process system, and it is a promising way to reduce the gap between theory and industrial practice. Besides, it provides valuable insight into the process and visibility for future activities, and it improves the efficiency of reactive techniques and of the overall system, all highly desirable features for staying viable in a global, competitive, and dynamic process environment.
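
The proactive idea of evaluating a schedule against sampled scenarios rather than nominal data can be sketched in a few lines (illustrative only; the jobs, due dates, and uniform noise model are assumptions, and the thesis's actual framework is a stochastic program with rescheduling strategies):

```python
import random

def expected_tardiness(sequence, nominal, due, n_runs=5000, spread=0.3, seed=1):
    """Monte Carlo estimate of expected total tardiness for a single-machine
    sequence with uncertain operation times (uniform +/- spread, an assumption)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        t, tardy = 0.0, 0.0
        for job in sequence:
            t += nominal[job] * rng.uniform(1 - spread, 1 + spread)
            tardy += max(0.0, t - due[job])
        total += tardy
    return total / n_runs

nominal = {"A": 4.0, "B": 2.0, "C": 6.0}
due     = {"A": 5.0, "B": 9.0, "C": 14.0}

# A proactive comparison: pick the sequence that performs best across the
# sampled scenarios, not just for the nominal (deterministic) data.
for seq in (("A", "B", "C"), ("B", "A", "C")):
    print(seq, round(expected_tardiness(seq, nominal, due), 3))
```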
514

Uncertainty and indistinguishability. Application to modelling with words.

Hernández Jiménez, Enric 12 January 2007 (has links)
The concept of equality is a fundamental notion in any theory since it is essential to discerning the objects it concerns, an ability which in turn is a requirement for any classification mechanism that might be defined. When all the properties involved are entirely precise, what we obtain is the classical equality, where two individuals are considered equal if and only if they share the same set of properties.
What happens, however, when imprecision arises, as in the case of properties that are fulfilled only up to a degree? Then, because certain individuals will be more similar than others, the need for a gradual notion of equality arises. These considerations show that certain contexts pervaded with uncertainty require a more flexible concept of equality that goes beyond the rigidity of the classical one. T-indistinguishability operators seem to be good candidates for this more flexible and general version of the concept of equality that we are searching for.

On the other hand, the Dempster-Shafer Theory of Evidence, as a framework for representing and managing general evidence, implicitly conveys a notion of indistinguishability between the elements of the domain of discourse based on their relative compatibility with the evidence at hand. In chapter two we are concerned with providing definitions for the T-indistinguishability operator associated with a given body of evidence.

In chapter three, after providing a comprehensive summary of the state of the art on measures of uncertainty, we tackle the problem of computing entropy when an indistinguishability relation has been defined over the elements of the domain. Entropy should then be measured not according to the occurrence of different events, but according to the variability perceived by an observer equipped with the indistinguishability abilities defined by the relation considered. This idea naturally leads to the introduction of the concept of observational entropy.

Real data is often pervaded with uncertainty, so devising techniques intended to induce knowledge in the presence of uncertainty seems entirely advisable. The paradigm of computing with words follows this line in order to provide a computation formalism based on linguistic labels, in contrast to traditional numerical methods. The use of linguistic labels enriches the understandability of the representation language, although it also requires adapting classical inductive learning procedures to cope with such labels.

In chapter four, a novel approach to building decision trees is introduced, addressing the case when uncertainty arises as a consequence of considering a more realistic setting in which the decision maker's discernment abilities are taken into account when computing a node's impurity measure. This novel paradigm results in what have been called "observational decision trees", since the main idea stems from the notion of observational entropy as a way to incorporate indistinguishability concerns. In addition, we present an algorithm intended to induce linguistic rules from data by properly managing the uncertainty present either in the set of describing labels or in the data itself. A formal comparison with standard algorithms is also provided.
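
A crisp simplification can convey the idea of observational entropy (a minimal sketch, and an assumption-laden one: the thesis works with fuzzy T-indistinguishability operators, whereas here indistinguishability is reduced to a hard partition of the outcomes):

```python
import math
from collections import Counter

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Outcomes with their probabilities (values assumed for illustration).
outcomes = {"cold": 0.25, "cool": 0.25, "warm": 0.25, "hot": 0.25}

# Classical entropy: every outcome is fully distinguishable.
h_classical = shannon_entropy(outcomes.values())

# "Observational" view (a crisp stand-in for a T-indistinguishability
# relation): an observer who cannot tell cold from cool, nor warm from hot,
# perceives merged classes, and entropy is computed over those classes.
indistinguishable = {"cold": "cold/cool", "cool": "cold/cool",
                     "warm": "warm/hot", "hot": "warm/hot"}
perceived = Counter()
for outcome, p in outcomes.items():
    perceived[indistinguishable[outcome]] += p
h_observational = shannon_entropy(perceived.values())

print(f"classical entropy:     {h_classical:.2f} bits")     # 2.00
print(f"observational entropy: {h_observational:.2f} bits") # 1.00, coarser observer
```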
515

A stochastic expansion-based approach for design under uncertainty

Walter, Miguel 12 February 2013 (has links)
An approach for robust design based on stochastic expansions is investigated. The research consists of two parts: 1) stochastic expansions for uncertainty propagation and 2) adaptive sampling for Pareto front approximation. For the first part, a strategy based on the generalized polynomial chaos (gPC) expansion method is developed. For the second part, in order to alleviate the computational cost of approximating the Pareto front, two strategies based on adaptive sampling for multi-objective problems are presented. The first is based on the two aforementioned methods, whereas the second considers, in addition, two levels of fidelity of the uncertainty propagation method.
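
A one-dimensional sketch of the gPC machinery (assuming a standard normal input and probabilists' Hermite polynomials; the test function exp(x) is chosen because its exact mean and variance are known, and none of this is code from the thesis):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def gpc_coefficients(f, order, n_quad=40):
    """Project f(X), X ~ N(0,1), onto probabilists' Hermite polynomials:
    c_k = E[f(X) He_k(X)] / k!, with the expectation computed by
    Gauss-Hermite quadrature."""
    x, w = He.hermegauss(n_quad)      # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)      # normalize to the standard normal density
    fx = f(x)
    basis = np.eye(order + 1)         # basis[k] selects He_k in hermeval
    return np.array([np.sum(w * fx * He.hermeval(x, basis[k])) / math.factorial(k)
                     for k in range(order + 1)])

# Propagate X ~ N(0,1) through f(x) = exp(x); the exact moments are known.
c = gpc_coefficients(np.exp, order=10)
mean = c[0]
var = sum(math.factorial(k) * c[k] ** 2 for k in range(1, len(c)))
print(f"gPC mean {mean:.5f} vs exact {np.exp(0.5):.5f}")
print(f"gPC var  {var:.5f} vs exact {np.exp(2) - np.exp(1):.5f}")
```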
516

Parameter Estimation and Uncertainty Analysis of Contaminant First Arrival Times at Household Drinking Water Wells

Kang, Mary January 2007 (has links)
Exposure assessment, which is an investigation of the extent of human exposure to a specific contaminant, must include estimates of the duration and frequency of exposure. For a groundwater system, the duration of exposure is controlled largely by the arrival time of the contaminant of concern at a drinking water well. This arrival time, which is normally estimated by using groundwater flow and transport models, can have a range of possible values due to the uncertainties that are typically present in real problems. Earlier arrival times generally represent low likelihood events, but play a crucial role in the decision-making process that must be conservative and precautionary, especially when evaluating the potential for adverse health impacts. Therefore, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times. To demonstrate an approach to quantify the uncertainty of arrival times, a real contaminant transport problem is used, involving TCE contamination due to releases from the Lockformer Company Facility in Lisle, Illinois. The approach used in this research consists of two major components: inverse modelling or parameter estimation, and uncertainty analysis. The parameter estimation process for this case study was selected based on insufficiencies in the model and observational data due to errors, biases, and limitations. Its purpose, to aid in characterising uncertainty, was also taken into account by including many possible variations in an attempt to minimize assumptions. A preliminary investigation was conducted using a well-accepted parameter estimation method, PEST, and the corresponding findings were used to define characteristics of the parameter estimation process applied to this case study. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and deadzones, were incorporated in the parameter estimation process to treat specific insufficiencies. The concept of equifinality was adopted and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. For each objective function, three procedures were implemented as a part of the parameter estimation approach for the given case study: a multistart procedure, a stochastic search using the Dynamically-Dimensioned Search (DDS), and a test for acceptance based on predefined physical criteria. The best performance in terms of the ability of parameter sets to satisfy the physical criteria was achieved using a Cauchy M-estimator that was modified for this study and designated the LRS1 M-estimator. Due to uncertainties, multiple parameter sets obtained with the LRS1 M-estimator, the L1-estimator, and the L2-estimator are recommended for use in uncertainty analysis. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets; in contrast, deadzones proved to produce negligible benefits. The characteristics of the parameter sets were examined in terms of frequency histograms and plots of parameter value versus objective function value to infer the nature of the likelihood distributions of parameters. The correlation structure was estimated using Pearson's product-moment correlation coefficient.
The parameters are generally distributed uniformly or appear random, with few correlations, in the parameter space that results after the implementation of the multistart procedure. The execution of the search procedure introduces many correlations and produces parameter distributions that appear to follow lognormal, normal, or uniform distributions. The application of the physical criteria refines the parameter characteristics in the parameter space resulting from the search procedure by reducing anomalies. The combined effect of optimization and the application of the physical criteria performs the function of behavioural thresholds by removing parameter sets with high objective function values. Uncertainty analysis is performed with parameter sets obtained through two different sampling methodologies: the Monte Carlo sampling methodology, which randomly and independently samples from user-defined distributions, and the physically-based DDS-AU (P-DDS-AU) sampling methodology, which was developed based on the multiple parameter sets acquired during the parameter estimation process. Monte Carlo samples are found to be inadequate for uncertainty analysis of this case study due to the method's inability to find parameter sets that meet the predefined physical criteria. Successful results are achieved using the P-DDS-AU sampling methodology, which inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For the P-DDS-AU samples, uncertainty representation is performed using four definitions based on pseudo-likelihoods: two based on the Nash and Sutcliffe efficiency criterion, and two based on inverse error or residual variance. The definitions contain shaping factors that strongly affect the resulting likelihood distribution, and some definitions are also affected by the objective function definition. Therefore, all variations are considered in the development of likelihood distribution envelopes, which are designed to maximize the amount of information available to decision-makers. The considerations that are important to the creation of an uncertainty envelope are outlined in this thesis. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria is recommended. The selection of likelihood and objective function definitions and their properties is made based on the needs of the problem; therefore, preliminary investigations should always be conducted to provide a basis for selecting appropriate methods and definitions. It is imperative to remember that the communication of assumptions and definitions used in both parameter estimation and uncertainty analysis is crucial in decision-making scenarios.
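
The role of robust estimators can be seen in a toy location problem (a sketch, not the thesis's calibration: the LRS1 M-estimator is a modification of the Cauchy form, and the standard Cauchy loss and scale constant used here are assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Loss functions on residuals: least squares (L2), least absolute deviations
# (L1), and a standard Cauchy M-estimator.
def l2(r):
    return np.sum(r ** 2)

def l1(r):
    return np.sum(np.abs(r))

def cauchy(r, c=2.385):
    return np.sum((c ** 2 / 2.0) * np.log1p((r / c) ** 2))

# Estimate a location parameter from observations containing one gross outlier.
data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 25.0])

for name, loss in [("L2", l2), ("L1", l1), ("Cauchy", cauchy)]:
    est = minimize_scalar(lambda m: loss(data - m), bounds=(0.0, 30.0),
                          method="bounded").x
    print(f"{name:6s} estimate: {est:.2f}")
# The L2 estimate is pulled toward the outlier (the sample mean, 12.5), while
# the robust L1 and Cauchy estimates stay near 10.
```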
517

Petroleum Refining and Petrochemical Industry Integration and Coordination under Uncertainty

Alqahtani, Khalid January 2009 (has links)
Petroleum refining and the petrochemical industry account for a major share of the world energy and industrial market. In many situations, they represent the economic backbone of industrial countries. Today, the volatile environment of the market and the continuous change in customer requirements put constant pressure on firms to seek opportunities that properly align and coordinate the different components of the industry. In particular, the coordination and integration of petroleum refining and the petrochemical industry is gaining a great deal of interest. However, previous research in the field either studied the two systems in isolation or assumed limited interactions between them. The aim of this thesis is to provide a framework for the planning, integration, and coordination of multisite refinery and petrochemical networks using proper deterministic, stochastic, and robust optimization techniques. The contributions of this dissertation fall into three categories: a) multisite refinery planning, b) petrochemical industry planning, and c) integration and coordination of multisite refinery and petrochemical networks. The first part of this thesis tackles the integration and coordination of a multisite refinery network. We first address the design and analysis of multisite integration and coordination strategies within a network of petroleum refineries through a mixed-integer linear programming (MILP) technique. The integrated network design specifically addresses intermediate material transfer between processing units at each site. The proposed model is then extended to account for model uncertainty by means of two-stage stochastic programming. The parameter uncertainty considered includes coefficients of the objective function and right-hand-side parameters of the inequality constraints. Robustness is analyzed in terms of both model robustness and solution robustness, where each measure is assigned a scaling factor to analyze the sensitivity of the refinery plan and the integration network to variations. The proposed technique makes use of the sample average approximation (SAA) method with statistical bounding techniques to give insight into the sample size required for an adequate approximation of the problem. The second part of the thesis addresses the strategic planning, design, and optimization of a network of petrochemical processes. We first set up and give an overview of the deterministic version of the petrochemical industry planning model adopted in this thesis. We then extend the model to address the strategic planning, design, and optimization of a network of petrochemical processes under uncertainty with robustness considerations. As in the previous part, robustness is analyzed in terms of both model robustness and solution robustness. The parameter uncertainty considered in this part includes process yields, raw material and product prices, and lower product market demand. The Expected Value of Perfect Information (EVPI) and the Value of the Stochastic Solution (VSS) are also investigated to numerically illustrate the value of including the randomness of the different model parameters. The final part of this dissertation addresses the integration between the multisite refinery system and the petrochemical industry. We first develop a framework for the design and analysis of possible integration and coordination strategies of multisite refinery and petrochemical networks to satisfy given petroleum and chemical product demand.
The main feature of the work is the development of a methodology for the simultaneous analysis of process network integration within a multisite refinery and petrochemical system. We then extend the petroleum refinery and petrochemical industry integration problem to consider different sources of uncertainty in the model parameters. The parameter uncertainty considered includes the imported crude oil price, refinery product prices, petrochemical product prices, refinery market demand, and petrochemical lower-level product demand. We apply the sample average approximation (SAA) method within an iterative scheme to generate the required scenarios and assess solution quality by measuring the optimality gap of the final solution.
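
A minimal illustration of the SAA idea on a stand-in two-stage problem (all costs and the demand distribution are assumptions; the actual models in the thesis are large MILPs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample average approximation (SAA) for a toy two-stage problem standing in
# for the refinery-petrochemical planning model: choose first-stage capacity x,
# then pay recourse costs once the uncertain demand is revealed.
build_cost, shortfall_cost, surplus_cost = 1.0, 4.0, 0.5

def saa_cost(x, demand_samples):
    shortfall = np.maximum(demand_samples - x, 0.0)
    surplus = np.maximum(x - demand_samples, 0.0)
    return build_cost * x + np.mean(shortfall_cost * shortfall
                                    + surplus_cost * surplus)

demand = rng.normal(100.0, 20.0, size=5000)   # one batch of scenarios
xs = np.linspace(50, 150, 401)
costs = [saa_cost(x, demand) for x in xs]
x_star = xs[int(np.argmin(costs))]
print(f"SAA-optimal capacity: {x_star:.2f}")
# Repeating this with independent scenario batches and comparing objective
# values yields the statistical optimality-gap bounds mentioned above.
```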
518

Uncertainty Analysis and the Identification of the Contaminant Transport and Source Parameters for a Computationally Intensive Groundwater Simulation

Yin, Yong January 2009 (has links)
Transport parameter estimation and contaminant source identification are critical steps in the development of a physically based groundwater contaminant transport model. Due to the irreversibility of the dispersion process, the calibration of a transport model of interest is inherently ill-posed, and very sensitive to the simplifications employed in the development of the lumped models. In this research, a methodology for the calibration of physically based, computationally intensive transport models was developed and applied to a case study, the Reich Farm Superfund site in Toms River, New Jersey. Using HydroGeoSphere, a physically based, transient, three-dimensional, computationally intensive groundwater flow model with spatially and temporally varying recharge was developed. Due to convergence issues when implementing the saturation-permeability curve (the van Genuchten equation) in large-scale models with coarse discretization, a novel flux-based method was developed to determine solutions for the unsaturated zone in soil-water-retention models. The parameters for the flow system were determined separately from the parameters for the contaminant transport model. The contaminant transport and source parameters were estimated using both approximately 15 years of TCE concentration data from continuous well records and data over a period of approximately 30 years from traditional monitoring wells, and compared using optimization with two heuristic search algorithms (DDS and MicroGA) and a gradient-based multi-start PEST. The calibration results indicate that, overall, multi-start PEST performs best in terms of the final best objective function values for an equal number of function evaluations. Multi-start PEST was also employed to identify contaminant transport and source parameters under different scenarios, including spatially and temporally varying recharge and averaged recharge. For the detailed, transient flow model with spatially and temporally varying recharge, the transverse dispersivity coefficients were estimated to be significantly smaller than those reported in the literature for the more traditional approach that uses steady-state flow with averaged, less physically based recharge values. Finally, based on Latin Hypercube sampling, a methodology for comprehensive uncertainty analysis, which accounts for multiple parameter sets and the associated correlations, was developed and applied to the case study.
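
A minimal Latin Hypercube sampler of the kind the final methodology builds on (a sketch under the usual definition; nothing here is specific to the case study):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Minimal Latin Hypercube sampler on [0, 1)^d: each dimension is split
    into n_samples equal strata, every stratum is hit exactly once, and the
    strata are paired across dimensions by independent random permutations."""
    jitter = rng.random((n_samples, n_dims))
    strata = np.column_stack([rng.permutation(n_samples) for _ in range(n_dims)])
    return (strata + jitter) / n_samples

rng = np.random.default_rng(42)
samples = latin_hypercube(10, 2, rng)
print(samples)
# Columns are then mapped through inverse CDFs (or parameter ranges) to obtain
# stratified samples of the transport and source parameters.
```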
519

Stability and Performance for Two Classes of Time-Varying Uncertain Plants

Vale, Julie January 2009 (has links)
In this thesis, we consider plants with uncertain parameters where those parameters may be time-varying; we show that, under reasonable assumptions, we can design a controller that stabilizes such systems while providing near-optimal performance in the face of persistent discontinuities in the time-varying parameters. We consider two classes of uncertainty. The first class is modeled via a (possibly infinite) set of linear time-invariant plants; the uncertain time variation consists of unpredictable (but sufficiently slow) switches between those plants. We consider standard LQR performance and, in the case of a finite set of plants, the more complicated problem of LQR step tracking. Our second class is a time-varying gain margin problem: we consider a reasonably general, uncertain, time-varying function at the input of an otherwise linear time-invariant nominal plant. In this second context, we consider the tracking problem wherein the signal to be tracked is modeled by a (stable) filter at the exogenous input, and we measure performance via a weighted sensitivity function. The controllers are periodic and mildly nonlinear, with the exception that the controller for the second class is linear.
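
For reference, the nominal LQR design that the performance measure is built around can be computed in a few lines (a sketch for a single LTI plant, here an assumed double integrator; the thesis's contribution, handling switches between uncertain plants, is not captured by this):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Standard continuous-time LQR for one nominal LTI plant (a double integrator).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weight (assumed)
R = np.array([[1.0]])  # input weight (assumed)

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal state feedback u = -Kx
closed_loop = A - B @ K
print("LQR gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(closed_loop))
# Both eigenvalues have negative real part; robustness across parameter
# switches is the extra property the thesis's periodic controllers provide.
```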
520

Three material decomposition in dual energy CT for brachytherapy using the iterative image reconstruction algorithm DIRA: Performance of the method for an anthropomorphic phantom

Westin, Robin January 2013 (has links)
Brachytherapy is radiation therapy performed by placing a radiation source near or inside a tumor. Doses calculated with the current water-based brachytherapy dose formalism (TG-43) and with new model-based dose calculation algorithms (MBDCAs) can differ by more than a factor of 10. There is a need for voxel-by-voxel cross-section assignment; ideally, both the tissue composition and the mass density of every voxel should be known for individual patients. A method for determining tissue composition via three-material decomposition (3MD) from dual-energy CT scans was developed at Linköping University. The method (named DIRA) is a model-based iterative reconstruction algorithm that utilizes two photon energies for image reconstruction and 3MD for quantitative tissue classification of the reconstructed volumetric dataset. This thesis investigated the accuracy of the 3MD method applied to prostate tissue in an anthropomorphic phantom when using two different approximations of soft tissues in DIRA. The distributions of CT numbers for soft tissues in a contemporary dual-energy CT scanner were also determined, and it was investigated whether these distributions can be used for tissue classification of soft tissues via thresholding. It was found that the relative errors of the mass energy absorption coefficient (MEAC) and the linear attenuation coefficient (LAC) of the approximated mixture, as functions of photon energy, were less than 6% in the energy region from 1 keV to 1 MeV. This shows that DIRA performed well for the selected anthropomorphic phantom and that it was relatively insensitive to the choice of base materials for the approximation of soft tissues. The distributions of CT numbers of liver, muscle, and kidney tissues overlapped; for example, a voxel containing muscle could be misclassified as liver in 42 cases out of 100. This suggests that pure thresholding is insufficient as a method for tissue classification of soft tissues and that more advanced methods should be used.
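
The 3MD step can be sketched as a small linear solve (assuming decomposition of a measured LAC into three base materials at two energies; all numbers below are illustrative placeholders, not values from DIRA or the thesis):

```python
import numpy as np

# Three-material decomposition (3MD) as a linear system: two measured linear
# attenuation coefficients (LACs), one per energy, plus the constraint that
# the three volume fractions sum to one.
#                   base1   base2   base3
mu_low  = np.array([0.227, 0.500, 0.210])   # LAC at the low energy  [1/cm]
mu_high = np.array([0.184, 0.300, 0.180])   # LAC at the high energy [1/cm]

# Voxel measurements consistent with a (0.6, 0.1, 0.3) mixture of the bases.
measured = np.array([0.2492, 0.1944, 1.0])

A = np.vstack([mu_low, mu_high, np.ones(3)])
fractions = np.linalg.solve(A, measured)
print("volume fractions:", np.round(fractions, 3))   # [0.6 0.1 0.3]
# DIRA iterates this per voxel: reconstruct at two energies, decompose each
# voxel into base materials, update the tissue model, and repeat.
```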
