411 |
Method to Detect and Measure Potential Market Power Caused by Transmission Network Congestions on Electricity Markets. Elfstadius, Martin; Gecer, Daniel. January 2008.
This thesis is based on studies of the deregulated electricity markets of the United States of America. The problem statement of the thesis evolved continuously throughout our initial period of research; focus was finally put on monitoring and detection of potential market power caused by congestion in the transmission network. The existence of market power is a serious concern in today’s electric energy markets, and a system that monitors the trading is needed; much research and many proposals on how to deal with this problem have been introduced over the years. We focus on some of these approaches and develop an approach of our own, which we call “Monopolistic Energy Calculation”. We adopt the idea of identifying participants with the ability to raise prices without losing market share, an ability that should not be present in a competitive market, and take it further by identifying participants with the ability to raise prices considerably without losing all market share. We propose a way to calculate the remaining market shares (Monopolistic Energy Levels) after a large price raise. These calculated levels of energy, which are deliverable only by a certain participant or by a certain group of participants, are caused by the active congestions in the network. The approach detects the amounts of these energy levels and the locations in the network at which they are present. It is a prospective method if used with a prediction of the following day’s demand, which is regularly available with high accuracy, and it can also be used for monitoring purposes to identify critical situations in real time. The method is implemented and two sets of simulations are performed in which we explain and evaluate the approach. The results are promising and the correlation between “Monopolistic Energy” and market power is confirmed. / This thesis is based on studies of the deregulated electricity markets in the USA. The problem statement was not fixed at the start of the work but developed during a longer initial phase of research; we finally established that detection of potential market power on the electricity market, caused by congestion in the transmission network, was of particular interest. A system that monitors trading and the occurrences of unfairness this causes is necessary, and much research has been done in this area in recent years. Based on this research we then developed a proposal of our own, which we call “Monopolistic Energy Calculations”. Certain earlier proposals for attacking the problem were of particular interest; one idea from these was to identify market participants with the ability to raise prices without losing market share, an undesirable property of participants when a competitive market is sought. We take this idea one step further by identifying market participants with the ability to raise prices significantly without losing all market share. Assuming inelastic demand, we propose a way to calculate the energy levels that can be delivered only by one or a few specific participants, as a direct consequence of the active congestions in the network, and a way to calculate the remaining market shares (Monopolistic Energy Levels) after a large price increase. Our method computes the amount of this energy and where in the network it occurs. The method can anticipate future problems if an estimate of the next day’s demand is used; such estimates are today made regularly with high accuracy. The method can also be used in real time to detect critical market situations. Simulations that explain and evaluate our solution are performed. The results are promising and the correlation between “Monopolistic Energy” and market power is confirmed.
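The core quantity behind the “Monopolistic Energy” idea, the share of an inelastic local demand that only one participant can serve once the import path is congested, can be shown with a deliberately small two-node example. The sketch below is an illustration only, not the thesis’s implementation; the network, line limit and demand figures are invented.

```python
# Toy illustration of "monopolistic energy": with inelastic demand at node 2
# and a limited transmission line from node 1, part of that demand can only
# be served by the local generator, no matter how high it prices its energy.
# All numbers are hypothetical and chosen only to make the arithmetic visible.

def monopolistic_energy(demand_mwh, import_limit_mwh, competing_supply_mwh):
    """Demand that remains deliverable only by the local participant after
    imports are capped by the congested line and other local supply
    (competing generators at the same node) is exhausted."""
    return max(0.0, demand_mwh - import_limit_mwh - competing_supply_mwh)

if __name__ == "__main__":
    demand_at_node2 = 500.0      # MWh, assumed inelastic
    line_limit = 300.0           # MWh deliverable over the congested corridor
    other_local_supply = 50.0    # MWh from other generators at node 2

    captive = monopolistic_energy(demand_at_node2, line_limit, other_local_supply)
    share = captive / demand_at_node2
    print(f"Monopolistic energy at node 2: {captive:.0f} MWh "
          f"({share:.0%} of local demand)")
    # The local generator keeps this market share even after a large price
    # raise, which is the signal the thesis associates with potential market power.
```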
|
412 |
Multi-objective Optimization of Plug-in Hybrid Electric Vehicle (PHEV) Powertrain Families considering Variable Drive Cycles and User Types over the Vehicle Lifecycle. Al Hanif, S. Ehtesham. 02 October 2015.
Plug-in Hybrid Electric Vehicle (PHEV) technology has the potential to reduce operational costs, greenhouse gas (GHG) emissions, and gasoline consumption in the transportation market. However, the net benefits of using a PHEV depend critically on several aspects, such as individual travel patterns, vehicle powertrain design and battery technology. To examine these effects, a multi-objective optimization model was developed that integrates vehicle physics simulations through a Matlab/Simulink model, battery durability, and Canadian driving survey data; all drivetrains are controlled implicitly by the ADVISOR powertrain simulation and analysis tool. The simulated model identifies Pareto-optimal vehicle powertrain configurations using a multi-objective Pareto-front-pursuing genetic algorithm, varying combinations of powertrain components and the allocation of vehicles to consumers to minimize operational cost and powertrain cost under various driving assumptions. A sensitivity analysis over the foremost cost parameters is included to determine the robustness of the optimized solution in the presence of uncertainty. A comparative study is also carried out between conventional vehicles (CVs), hybrid electric vehicles (HEVs) and PHEVs with equivalent optimized solutions, size and performance (similar to the Toyota Prius) under both urban and highway driving environments. In addition, a breakeven-point analysis indicates that PHEV lifecycle cost must fall within a few percent of that of CVs or HEVs to become both an environmentally friendly and a cost-effective transportation solution. Finally, PHEV classes (a platform with multiple powertrain architectures) are optimized taking into account consumer diversity over various classes of light-duty vehicle, to investigate consumer-appropriate architectures and manufacturer opportunities for vehicle fleet development using a simplified techno-financial analysis. / Graduate / 0540 / 0548 / ehtesham@uvic.ca
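As a rough illustration of how Pareto-optimal powertrain configurations can be identified when powertrain cost and operational cost conflict, the sketch below enumerates a small hypothetical design space and extracts its non-dominated set. The component options, cost models and prices are invented placeholders, not data from the thesis, ADVISOR or the Canadian driving survey.

```python
from itertools import product

# Hypothetical component options; a real study would draw costs from
# powertrain simulations and survey-based driving patterns.
battery_kwh_options = [4, 8, 12, 16, 20]
motor_kw_options = [30, 50, 70]

def powertrain_cost(battery_kwh, motor_kw):
    """Invented component cost model (USD)."""
    return 2000 + 350 * battery_kwh + 40 * motor_kw

def operational_cost(battery_kwh, motor_kw, daily_km=40, days=365 * 8):
    """Invented lifetime energy cost: electric range displaces gasoline."""
    electric_km = min(daily_km, battery_kwh * 5)          # ~0.2 kWh/km assumed
    gasoline_km = daily_km - electric_km
    cost_per_day = electric_km * 0.2 * 0.12 + gasoline_km * 0.07 * 1.4
    return cost_per_day * days

designs = []
for kwh, kw in product(battery_kwh_options, motor_kw_options):
    designs.append(((kwh, kw), powertrain_cost(kwh, kw),
                    operational_cost(kwh, kw)))

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in one."""
    return a[1] <= b[1] and a[2] <= b[2] and (a[1] < b[1] or a[2] < b[2])

pareto = [d for d in designs if not any(dominates(o, d) for o in designs)]
for (kwh, kw), cap, op in sorted(pareto):
    print(f"battery {kwh:>2} kWh, motor {kw:>2} kW: "
          f"powertrain ${cap:,.0f}, lifetime energy ${op:,.0f}")
```

A genetic algorithm replaces the brute-force enumeration when the design space and simulation cost grow, but the dominance test that defines the Pareto front is the same.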
|
413 |
Integrated network-based models for evaluating and optimizing the impact of electric vehicles on the transportation system. Zhang, Ti. 13 November 2012.
The adoption of plug-in electric vehicles (PEVs) requires models and algorithms that trace vehicle assignment in a transportation network containing PEVs, so that traffic patterns can be predicted more precisely and accurately. To attain this goal, this dissertation develops new formulations for modeling the travel behavior of electric vehicle drivers in a mixed-flow traffic network environment. Much of the work in this dissertation is motivated by the special features of PEVs (such as range limitation and the requirement of long electricity-recharging times) and by the lack of tools for understanding PEV drivers' travel behavior and for learning the impacts of charging infrastructure supply and policy on the network traffic pattern.
The essential issues addressed in this dissertation are: (1) modeling the spatial choice behavior of electric vehicle drivers and analyzing the impacts of electricity-charging speed and price; (2) modeling the temporal and spatial choice behavior of electric vehicle drivers and analyzing the impacts of electric vehicle range and penetration rate; and (3) designing optimal charging infrastructure investments and policy from the perspective of revenue management. Stochastic traffic assignment that can take charging cost and charging time into account is examined first. Further, a quasi-dynamic stochastic user equilibrium model for the combined choices of departure time, duration of stay and route, which integrates a nested-logit discrete choice model, is formulated as a variational inequality problem. An extension of this equilibrium model is a network design model that determines optimal charging infrastructure capacity and pricing; its objective is to maximize revenue subject to equilibrium constraints that explicitly consider the electric vehicle drivers' combined choice behavior.
The proposed models and algorithms are tested on small to middle-sized transportation networks, and extensive numerical experiments are conducted to assess the performance of the models. The research results contain the author's initial insights into network equilibrium models accounting for PEVs under different scenarios of charging infrastructure supply, electric vehicle characteristics and penetration rates. The analytical tools developed in this dissertation, and the resulting insights, offer an important first step for travel demand modeling and policy making that incorporate PEVs. / text
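A minimal sketch of the discrete-choice building block such equilibrium models rest on: a logit model in which a PEV driver chooses among routes whose generalized cost includes travel time, charging time and charging price. The routes and taste coefficients below are hypothetical, and a plain multinomial logit is used instead of the nested-logit structure of the dissertation.

```python
import math

# Hypothetical route alternatives for one origin-destination pair.
# Each entry: (travel time [min], charging time needed [min], charging price [$]).
routes = {
    "direct, no charge":    (35.0, 0.0, 0.0),
    "detour, fast charger": (45.0, 20.0, 6.0),
    "detour, slow charger": (50.0, 45.0, 3.0),
}

# Assumed taste coefficients (negative: a higher cost lowers utility).
beta_time, beta_charge_time, beta_price = -0.08, -0.05, -0.30

def route_probabilities(routes):
    """Multinomial logit choice probabilities over the route set."""
    utilities = {
        name: beta_time * t + beta_charge_time * ct + beta_price * p
        for name, (t, ct, p) in routes.items()
    }
    denom = sum(math.exp(u) for u in utilities.values())
    return {name: math.exp(u) / denom for name, u in utilities.items()}

for name, prob in route_probabilities(routes).items():
    print(f"{name:<22s} P = {prob:.2f}")
# In an equilibrium model these probabilities feed the assignment, so route
# costs and charging prices in turn depend on the resulting flows.
```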
|
414 |
Sensitivity Analyses in Empirical Studies Plagued with Missing Data. Liublinska, Viktoriia. 07 June 2014.
Analyses of data with missing values often require assumptions about missingness mechanisms that cannot be assessed empirically, highlighting the need for sensitivity analyses. However, universal recommendations for reporting missing data and conducting sensitivity analyses in empirical studies are scarce. Both steps are often neglected by practitioners due to the lack of clear guidelines for summarizing missing data and for systematically exploring alternative assumptions, as well as the typical attendant complexity of missing-not-at-random (MNAR) models. We propose graphical displays that help visualize and systematize the results of sensitivity analyses, building upon the idea of "tipping-point" analysis for experiments with dichotomous treatment. The resulting "enhanced tipping-point displays" (ETP) are convenient summaries of conclusions drawn under different modeling assumptions about the missingness mechanisms and are applicable to a broad range of outcome distributions. We also describe a systematic way of exploring MNAR models using ETP displays, based on a pattern-mixture factorization of the outcome distribution, and present a set of sensitivity parameters that arises naturally from such a factorization. The primary goal of the displays is to make formal sensitivity analyses more comprehensible to practitioners, thereby helping them assess the robustness of experiments' conclusions. We also present an example of a recent use of ETP displays in a medical device clinical trial, which helped lead to FDA approval. The last part of the dissertation demonstrates another method of sensitivity analysis in the same clinical trial. The trial is complicated by missingness in outcomes "due to death", and we address this issue by employing the Rubin Causal Model and principal stratification. We propose an improved method to estimate the joint posterior distribution of the estimands of interest using a Hamiltonian Monte Carlo algorithm and demonstrate its superiority over the standard Metropolis-Hastings algorithm for this problem. The proposed methods of sensitivity analysis provide a new collection of useful tools for the analysis of data sets plagued with missing values. / Statistics
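To make the tipping-point idea concrete, this sketch takes a hypothetical two-arm trial with a dichotomous outcome and missing responses in both arms, imputes the missing outcomes under a grid of assumed success rates, and marks where the sign of the estimated treatment effect tips. It is a bare-bones version of the idea, not the enhanced displays or pattern-mixture models developed in the dissertation.

```python
# Hypothetical trial: observed successes/failures and missing counts per arm.
observed = {
    "treatment": {"success": 48, "failure": 32, "missing": 20},
    "control":   {"success": 40, "failure": 40, "missing": 20},
}

def effect_estimate(p_miss_treat, p_miss_ctrl):
    """Difference in success proportions if missing outcomes in each arm
    were successes with the assumed probabilities (single imputation)."""
    rates = {}
    for arm, p_miss in (("treatment", p_miss_treat), ("control", p_miss_ctrl)):
        d = observed[arm]
        n = d["success"] + d["failure"] + d["missing"]
        successes = d["success"] + p_miss * d["missing"]
        rates[arm] = successes / n
    return rates["treatment"] - rates["control"]

# Scan assumptions about the missing outcomes on a coarse grid and show
# where the sign of the estimated effect tips from positive to negative.
grid = [i / 10 for i in range(11)]
for pt in grid:
    row = "".join("+" if effect_estimate(pt, pc) > 0 else "-" for pc in grid)
    print(f"assumed success rate among missing (treatment) = {pt:.1f}: {row}")
print("columns: assumed success rate among missing (control), 0.0 to 1.0")
```

Each cell of the printed grid corresponds to one joint assumption about the missing outcomes; an enhanced display would overlay significance contours and mark the region of assumptions judged plausible.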
|
415 |
Development of reliable pavement models. Aguiar Moya, José Pablo. 13 October 2011.
As the cost of designing and building new highway pavements increases and the number of new construction and major rehabilitation projects decreases, it becomes vital to ensure that a given pavement design performs as expected in the field. In other fields of civil engineering, reliability analysis has been used extensively to address such concerns. However, in the case of pavement structural design, the reliability component is usually neglected or overly simplified. To address this need, the current dissertation proposes a framework for estimating the reliability of a given pavement structure regardless of the pavement design or analysis procedure that is being used.
As part of the dissertation, the framework is applied with the Mechanistic-Empirical Pavement Design Guide (MEPDG), and failure is considered as a function of rutting of the hot-mix asphalt (HMA) layer. The proposed methodology consists of fitting a response surface, in place of the time-demanding implicit limit state functions used within the MEPDG, in combination with analytical second-moment techniques for estimating reliability (First-Order and Second-Order Reliability Methods, FORM and SORM) and simulation techniques (Monte Carlo and Latin Hypercube simulation).
In order to demonstrate the methodology, a three-layered pavement structure is selected, consisting of a hot-mix asphalt (HMA) surface, a base layer, and subgrade. Several pavement design variables are treated as random; these include HMA and base layer thicknesses, base and subgrade moduli, and HMA layer binder and air void content. Information on the variability and correlation between these variables is obtained from the Long-Term Pavement Performance (LTPP) program, and likely distributions, coefficients of variation, and correlations between the variables are estimated. Additionally, several scenarios are defined to account for climatic differences (cool, warm, and hot climatic regions), truck traffic distributions (mostly single-unit trucks versus mostly single-trailer trucks), and the thickness of the HMA layer (thick versus thin).
First- and second-order polynomial HMA rutting failure response surfaces with interaction terms are fit by running the MEPDG under a full factorial experimental design consisting of 3 levels of the aforementioned design variables. These response surfaces are then used to analyze the reliability of the given pavement structures under the different scenarios. Additionally, in order to check the accuracy of the proposed framework, direct simulation using the MEPDG was performed for the different scenarios. Very small differences were found between the estimates based on response surfaces and direct simulation using the MEPDG, confirming the accuracy of the proposed procedure.
Finally, a sensitivity analysis on the number of MEPDG runs required to fit the response surfaces was performed, and it was found that reducing the experimental design by one level still yields response surfaces that properly fit the MEPDG, ensuring the applicability of the method in practice. / text
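The response-surface idea in miniature: a quadratic surface is fitted to a small factorial design of runs of a slow "design model" (here a stand-in analytical rutting function, not the MEPDG), and reliability is then estimated by Monte Carlo simulation on the cheap surface. The input distributions, the rutting function and the failure threshold are assumptions made only for illustration.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def slow_rut_model(hma_thick_mm, base_mod_mpa, air_voids_pct):
    """Stand-in for a time-consuming mechanistic run (e.g. the MEPDG):
    predicted HMA rutting in mm. Purely illustrative functional form."""
    return 18.0 - 0.05 * hma_thick_mm - 0.015 * base_mod_mpa + 0.8 * air_voids_pct

# Three-level full factorial "experimental design" over the random variables.
levels = {
    "hma_thick_mm": [100, 150, 200],
    "base_mod_mpa": [150, 250, 350],
    "air_voids_pct": [4, 7, 10],
}
X = np.array(list(product(*levels.values())), dtype=float)
y = np.array([slow_rut_model(*row) for row in X])

def quadratic_features(X):
    """Design matrix with intercept, linear, squared and interaction terms."""
    cols = [np.ones(len(X))]
    n = X.shape[1]
    for i in range(n):
        cols.append(X[:, i])
    for i in range(n):
        for j in range(i, n):
            cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)

# Monte Carlo on the fitted response surface with assumed input distributions.
n_sim = 100_000
samples = np.column_stack([
    rng.normal(150, 10, n_sim),   # HMA thickness, mm
    rng.normal(250, 40, n_sim),   # base modulus, MPa
    rng.normal(7, 1.0, n_sim),    # air voids, %
])
rut_pred = quadratic_features(samples) @ coef
prob_failure = np.mean(rut_pred > 12.5)   # assumed rutting limit, mm
print(f"Estimated probability of rutting failure: {prob_failure:.3f}")
```

The expensive model is evaluated only at the factorial design points; every Monte Carlo draw afterwards costs a single matrix-vector product on the fitted surface.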
|
417 |
Multi-fidelity Gaussian process regression for computer experiments. Le Gratiet, Loic. 04 October 2013.
This work is on Gaussian-process-based approximation of a code which can be run at different levels of accuracy. The goal is to improve the predictions of a surrogate model of a complex computer code using fast approximations of it. A new formulation of a co-kriging-based method is proposed. In particular, this formulation allows for fast implementation and for closed-form expressions of the predictive mean and variance for universal co-kriging in the multi-fidelity framework, which is a breakthrough as it allows the practical application of such a method in real cases. Furthermore, fast cross-validation, sequential experimental design and sensitivity analysis methods have been extended to the multi-fidelity co-kriging framework. This thesis also deals with a conjecture about the dependence of the learning curve (i.e., the decay rate of the mean square error) on the smoothness of the underlying function. A proof in a fairly general situation (which includes the classical models of Gaussian-process-based metamodels with stationary covariance functions) has been obtained, whereas previous proofs hold only for degenerate kernels (i.e., when the process is in fact finite-dimensional). This result makes it possible to address rigorously practical questions such as the optimal allocation of the budget between different levels of code in the multi-fidelity framework.
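A compressed sketch of the recursive multi-fidelity idea: a Gaussian process is fitted to many cheap low-fidelity runs, and a second Gaussian process models the discrepancy between the few expensive runs and a scaled low-fidelity prediction. The toy code pair, kernel, fixed hyperparameters and fixed scaling factor rho are simplifying assumptions; the thesis estimates these quantities and gives closed-form co-kriging expressions.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_fit_predict(x_train, y_train, x_test, noise=1e-8):
    """Zero-mean GP regression; returns the posterior mean at x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    return rbf_kernel(x_test, x_train) @ alpha

# Cheap and expensive versions of the same "code" (Forrester-style toy pair).
def cheap(x):     return 0.5 * expensive(x) + 10 * (x - 0.5) - 5
def expensive(x): return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

x_lo = np.linspace(0, 1, 11)            # many cheap runs
x_hi = np.array([0.0, 0.4, 0.6, 1.0])   # few expensive runs
x_new = np.linspace(0, 1, 5)

rho = 2.0  # assumed fixed scaling between fidelities (estimated in practice)

# Level 1: GP on the cheap code.
mu_lo_at_hi = gp_fit_predict(x_lo, cheap(x_lo), x_hi)
mu_lo_at_new = gp_fit_predict(x_lo, cheap(x_lo), x_new)

# Level 2: GP on the discrepancy between expensive runs and rho * cheap GP.
delta = expensive(x_hi) - rho * mu_lo_at_hi
mu_delta_at_new = gp_fit_predict(x_hi, delta, x_new)

# Multi-fidelity prediction combines both levels.
mu_mf = rho * mu_lo_at_new + mu_delta_at_new
for x, m, truth in zip(x_new, mu_mf, expensive(x_new)):
    print(f"x = {x:.2f}  multi-fidelity mean = {m:7.3f}  expensive code = {truth:7.3f}")
```

In practice the scaling factor, the kernel hyperparameters and the predictive variance would all be estimated, which is exactly where the closed-form co-kriging formulas of the thesis come in.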
|
418 |
Parameter, State and Uncertainty Estimation for 3-dimensional Biological Ocean Models. Mattern, Jann Paul. 15 August 2012.
Realistic physical-biological ocean models pose challenges to statistical techniques due to their complexity, nonlinearity and high dimensionality. In this thesis, statistical data assimilation techniques for parameter and state estimation are adapted and applied to biological models. These methods rely on quantitative measures of agreement between models and observations; eight such measures are compared and a suitable multiscale measure is selected for data assimilation. Building on this, two data assimilation approaches, a particle filter and a computationally efficient emulator approach, are tested and contrasted, and it is shown that both are suitable for state and parameter estimation. The emulator is also used to analyze the sensitivity and uncertainty of a realistic biological model. Application of the statistical procedures yields insights into the model; for example, time-dependent parameter estimates are obtained that are consistent with biological seasonal cycles and improve model predictions, as evidenced by cross-validation experiments. Estimates of model sensitivity are high with respect to physical model inputs, e.g., river runoff.
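A compact bootstrap particle filter on a toy one-dimensional "biological" state (noisy logistic growth of biomass), with the growth-rate parameter appended to the state so that it is estimated jointly with the state. The model, noise levels and true parameter value are invented; the thesis applies such filters to a realistic three-dimensional physical-biological model.

```python
import numpy as np

rng = np.random.default_rng(1)

def step(biomass, growth_rate, dt=1.0, capacity=10.0):
    """One step of logistic growth; the toy stand-in for the ocean model."""
    return biomass + dt * growth_rate * biomass * (1 - biomass / capacity)

# Generate synthetic "observations" from a true run of the model.
true_r, obs_sd, n_steps = 0.3, 0.5, 30
x_true = [1.0]
for _ in range(n_steps):
    x_true.append(step(x_true[-1], true_r))
obs = np.array(x_true[1:]) + rng.normal(0, obs_sd, n_steps)

# Bootstrap particle filter with the parameter (growth rate) in the state.
n_particles = 2000
biomass = rng.uniform(0.5, 2.0, n_particles)
growth = rng.uniform(0.05, 0.6, n_particles)     # prior on the parameter

for y in obs:
    # Propagate: model step plus process noise (the parameter is also jittered).
    biomass = step(biomass, growth) + rng.normal(0, 0.05, n_particles)
    growth = np.clip(growth + rng.normal(0, 0.005, n_particles), 0.01, 1.0)
    # Weight by the likelihood of the observation and resample.
    w = np.exp(-0.5 * ((y - biomass) / obs_sd) ** 2)
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)
    biomass, growth = biomass[idx], growth[idx]

print(f"true growth rate 0.30, filtered estimate {growth.mean():.2f} "
      f"+/- {growth.std():.2f}")
```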
|
419 |
Improving microalgae biofuel production: an engineering management approach. Mathew, Domoyi Castro. 07 1900.
The use of microalgae culture to convert CO2 from power plant flue gases into biomass that is readily converted into biofuels offers new opportunities to enhance, complement or replace fossil fuel use. Apart from being renewable, microalgae can also utilise nutrients from a variety of wastewaters and can yield both liquid and gaseous biofuels. However, cultivation, integration of the production system with power plant waste flue gas, algae harvesting, and oil extraction from the biomass all pose challenges. Using SimaPro software, a Life Cycle Assessment (LCA) of the factors limiting the microalgae (Chlorella vulgaris) biofuel production process was performed to study the algae-based pathway for producing biofuels, with attention paid to material use, energy consumption and the environmental burdens associated with the production processes. The goal was to determine the weak spots within the production system and to identify changes to particular datasets that lead to lower material use, lower energy consumption and lower environmental impacts than the baseline microalgae biofuel production system. The analysis considered hypothetical transesterification and anaerobic digestion (AD) algae-to-biofuel processes. Life Cycle Inventory (LCI) characterisation results for the baseline biodiesel (BD) transesterification scenario indicate that heating the biomass to 90% dry weight basis (DWB) accounts for 64% of the total input energy, while electrical energy and fertilizer obligations represent 19% and 16% respectively. Life Cycle Impact Assessment (LCIA) results for the baseline BD production scenario also show a high proportional contribution of electricity and heat energy for most impact categories relative to other resources, attributed to the concentration/drying required to ease the downstream processes of lipid extraction and subsequent transesterification of the extracted lipids into BD. Four prospective alternative production scenarios were therefore characterised to evaluate how far they lower material use, energy consumption and environmental burdens relative to the standard algae biofuel production system. A 55.3% reduction in mineral use obligation was the most significant impact reduction, achieved by integrating 100% recycling of production harvest water into the AD production system. Recycling also reduced water demand from 3726 kg freshwater per kg BD to 591 kg freshwater per kg BD after accounting for evaporative losses and biomass drying in the BD transesterification process. The use of wastewater/seawater as an alternative growth medium for the BD production system indicated potential savings of 4.2 MJ (11.8%) in the electricity/heat obligation, a 10.7% reduction in climate change impact, and an 87% offset in mineral use relative to the baseline production system. Likewise, LCIA characterisation results comparing the baseline production scenarios with a set-up that includes co-product economic allocation show a surplus of 12 MJ (a 33% reduction) in the fossil fuel resource use impact category, a 52.7% reduction in mineral use impact and a 56.6% reduction in land use impact relative to the baseline BD production process model. These results show the importance of allocation choices when LCA is used as a decision-support tool. Overall, the process improvements needed to optimise economic viability also improve the life cycle environmental impacts and sustainability of the production systems. The results agree reasonably with Monte Carlo sensitivity analysis, with the scenario that exploits wastewater/seawater to culture the algae biomass offering the best outcome. This study excludes additional resources such as the production facility and its construction, feedstock processing logistics and transport infrastructure. Future LCA work will require extensive consideration of these additional resources, including facility size and construction, better engineering data for water transfer, combined heat and power plant efficiency estimates and the fate of long-term emissions such as organic nitrogen in the AD digestate. Conclusions were drawn and suggestions offered for further study.
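A small sketch of the contribution analysis behind statements such as "heating accounts for 64% of the input energy": a per-stage energy inventory is summed, each stage's share is reported, and a recycling scenario is compared with the baseline. The inventory values are placeholders chosen only to mirror the proportions quoted in the abstract, not data from the SimaPro model.

```python
# Hypothetical per-kilogram-biodiesel energy inventory (MJ), chosen only so the
# shares mirror the proportions quoted in the abstract (roughly 64% / 19% / 16%).
baseline_energy_mj = {
    "heat for drying biomass to 90% DWB": 64.0,
    "electricity (cultivation, harvest, extraction)": 19.0,
    "fertilizer production": 16.0,
}

def contribution_report(inventory):
    """Print each stage's absolute energy demand and its share of the total."""
    total = sum(inventory.values())
    for stage, mj in sorted(inventory.items(), key=lambda kv: -kv[1]):
        print(f"  {stage:<48s} {mj:6.1f} MJ  ({mj / total:5.1%})")
    print(f"  {'total':<48s} {total:6.1f} MJ")

print("Baseline BD transesterification scenario:")
contribution_report(baseline_energy_mj)

# Alternative scenario: assume harvest-water recycling cuts the fertilizer
# requirement because nutrients return with the water (an invented assumption).
recycling = dict(baseline_energy_mj)
recycling["fertilizer production"] *= 0.4
print("\nWith 100% harvest-water recycling (assumed nutrient return):")
contribution_report(recycling)
```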
|
420 |
Sensitivity and Uncertainty Analysis Methods: with Applications to a Road Traffic Emission Model / Känslighets- och osäkerhetsanalysmetoder: med tillämpningar på en emissionsmodell för vägtrafik. Eriksson, Olle. January 2007.
There is always a need to study the properties of complex input–output systems, properties that may be very difficult to determine. Two such properties are the output’s sensitivity to changes in the inputs and the output’s uncertainty if the inputs are uncertain. A system can be formulated as a model—a set of functions, equations and conditions that describe the system. We ultimately want to study and learn about the real system, but with a model that approximates the system well, we can study the model instead, which is usually easier. It is often easier to build a model as a set of combined sub-models, but good knowledge of each sub-model does not immediately lead to good knowledge of the entire model. Often, the most attractive approach to model studies is to write the model as computer software and study datasets generated by that software. Methods for sensitivity analysis (SA) and uncertainty analysis (UA) cannot be expected to be exactly the same for all models. In this thesis, we want to determine suitable SA and UA methods for a road traffic emission model, methods that can also be applied to any other model of similar structure. We examine parts of a well-known emission model and suggest a powerful data-generating tool. By studying generated datasets, we can examine properties in the model, suggest SA and UA methods and discuss the properties of these methods. We also present some of the results of applying the methods to the generated datasets. / There is always a need to study the properties of complex input–output systems, properties that can be very hard to determine. Two such properties are the output’s sensitivity to changes in the input values and the output’s uncertainty when the input values are uncertain. A system can be formulated as a model, a set of functions, equations and conditions that together resemble the system. We really want to study and learn about the actual system, but with a model that approximates the real system well we can study the model instead, which in most cases is simpler. It is usually easier to build a model as a set of combined sub-models, but good knowledge of each sub-model does not immediately lead to good knowledge of the whole model. The simplest approach to model studies is usually to study datasets generated by the model through a computer program. Methods for sensitivity analysis (SA) and uncertainty analysis (UA) cannot be expected to be the same for every model. In this thesis we study SA and UA methods and results for an emission model for road traffic, but the methods can also be used for other models of similar structure. We examine a well-known emission model and propose a powerful tool for generating data. By studying generated datasets we can examine properties of the model, propose SA and UA methods and discuss the properties of these methods. We also show some results from applying the methods to the generated datasets.
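A minimal sketch of the two analyses named in the title, applied to a toy emission model in which total emissions are the sum over vehicle classes of traffic volume times an emission factor: Monte Carlo propagation of input uncertainty, followed by a crude sensitivity ranking based on squared correlations. The model structure, input distributions and factor values are assumptions for illustration only, not the studied emission model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Toy emission model: total NOx = sum over vehicle classes of
# (vehicle kilometres travelled) * (emission factor). Inputs are uncertain.
inputs = {
    "car_vkt":   rng.normal(1.0e6, 1.0e5, n),   # vehicle km
    "car_ef":    rng.normal(0.4, 0.08, n),      # g NOx per km
    "truck_vkt": rng.normal(2.0e5, 4.0e4, n),
    "truck_ef":  rng.normal(3.0, 0.6, n),
}

def emission_model(x):
    return x["car_vkt"] * x["car_ef"] + x["truck_vkt"] * x["truck_ef"]

total = emission_model(inputs)

# Uncertainty analysis: summarize the distribution of the output.
q05, q50, q95 = np.percentile(total, [5, 50, 95]) / 1e6
print(f"total NOx [tonnes]: median {q50:.2f}, 90% interval ({q05:.2f}, {q95:.2f})")

# Sensitivity analysis: rank inputs by squared correlation with the output,
# a crude stand-in for first-order variance-based sensitivity indices.
for name, values in sorted(
        inputs.items(),
        key=lambda kv: -np.corrcoef(kv[1], total)[0, 1] ** 2):
    r2 = np.corrcoef(values, total)[0, 1] ** 2
    print(f"  {name:<10s} approx. share of output variance: {r2:.2f}")
```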
|