  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

On the Toll Setting Problem

Dewez, Sophie 08 June 2004 (has links)
In this thesis we study the problem of road taxation. The problem consists in setting tolls on the roads belonging to the government or to a private company so as to maximize revenue. An optimal taxation policy sets toll levels low enough to encourage the use of toll arcs, yet high enough to generate significant revenue. Since there are two levels of decision, the problem is formulated as a bilevel bilinear program.
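The bilevel structure can be illustrated with a deliberately tiny sketch (a hypothetical two-route network with made-up costs, not the thesis model): the leader sets a toll on the fast road, the follower takes whichever route is cheaper in total, and the leader earns revenue only when the tolled road is chosen.

```python
def follower_route(toll, c_toll_road=5.0, c_free_road=12.0):
    """Lower level: the user picks the route with the cheaper total cost."""
    return "toll" if c_toll_road + toll <= c_free_road else "free"

def leader_revenue(toll, **costs):
    """Upper level objective: the toll is collected only if the user takes it."""
    return toll if follower_route(toll, **costs) == "toll" else 0.0

def best_toll(grid):
    """Enumerate candidate tolls and keep the revenue-maximizing one."""
    return max(grid, key=leader_revenue)

if __name__ == "__main__":
    candidates = [t / 10 for t in range(0, 151)]   # tolls 0.0 .. 15.0
    t_star = best_toll(candidates)
    print(t_star, leader_revenue(t_star))
```

With these numbers the optimal toll is exactly the follower's cost difference (7.0): any higher and the user defects to the free road, driving revenue to zero — the "low enough yet high enough" tension the abstract describes.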
2

On ridge regression and least absolute shrinkage and selection operator

AlNasser, Hassan 30 August 2017 (has links)
This thesis focuses on ridge regression (RR) and the least absolute shrinkage and selection operator (lasso). Ridge properties are investigated in detail, including the bias, the variance and the mean squared error as functions of the tuning parameter. We also study the convexity of the trace of the mean squared error in terms of the tuning parameter. In addition, we examine some special properties of RR for factorial experiments. We review lasso properties alongside ridge properties because the two are closely related. Rather than shrinking the estimates toward zero as RR does, the lasso is able to provide a sparse solution, setting many coefficient estimates exactly to zero. Furthermore, we try a new approach to solving the lasso problem by formulating it as a bilevel problem and implementing a new algorithm to solve this bilevel program. / Graduate
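The contrast between the two estimators can be sketched numerically (a generic numpy implementation, not the thesis code; the thesis solves the lasso via a bilevel reformulation, whereas this sketch uses standard cyclic coordinate descent): ridge has a closed form, while the lasso's soft-thresholding step is what produces exact zeros.

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def soft_threshold(z, t):
    """Shrink toward zero; outputs exactly 0 when |z| <= t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r_j = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return beta
```

On data with a sparse true coefficient vector, `lasso_cd` sets the irrelevant coefficients exactly to zero, while `ridge` merely shrinks them — the qualitative difference the abstract highlights.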
3

On conformational sampling in fragment-based protein structure prediction

Kandathil, Shaun January 2017 (has links)
Fragment assembly methods represent the state of the art in computational protein structure prediction. However, one limitation of these methods, particularly for larger protein structures, is inadequate conformational sampling. This thesis describes studies aimed at uncovering potential causes of ineffective sampling, and the development of methods to try to address these problems. To identify behaviours that might lead to poor conformational sampling, we developed measures to study fragment-based sampling trajectories. Applying these measures to the Rosetta Abinitio and EdaFold methods showed similarities and differences in the ways that these methods make predictions, and pointed to common limitations. In both protocols, structural features such as alpha-helices were more frequently altered during the search, as compared with regions such as loops. Analyses of the fragment libraries used by these methods showed that fragments covering loop regions were less likely to possess native-like structural features, and this likely exacerbated the problems of inadequate sampling in these regions. Inadequate loop sampling leads to poor fold-level exploration within individual runs of methods such as Rosetta, and this necessitates the use of many independent runs. Guided by these findings, we developed new heuristic-based search algorithms. These algorithms were designed to facilitate the exploration of multiple energy basins within runs. Over many runs, the enhanced exploration in our protocols produced decoy sets with larger fractions of native-like solutions as compared to runs of Rosetta. Experiments with different fragment sets indicated that our methods could better translate increased fragment set quality into improvements in predictive accuracy distributions. These improvements depend most strongly on the ability of search algorithms to reliably generate native-like structures using a fragment set.
In contrast, inadequate retention of native-like decoys when associated with unfavourable score values appears to be less of an issue. This thesis shows that targeted developments in conformational sampling strategies can improve the accuracy and reliability of predictions. With effective conformational sampling methods, developments in methods for fragment set construction and other areas may more reliably enhance predictive ability.
4

Carbon Tax Based on the Emission Factor

Almutairi, Hossa 26 September 2013 (has links)
In response to growing concerns about the negative impact of GHG emissions, several jurisdictions such as the European Union have adopted a cap-and-trade policy to limit overall emissions levels. Alternatively, other countries including Argentina, Canada, the United Kingdom, and the United States have proposed an intensity-based cap-and-trade system that targets emission intensities, measured in emissions per dollar or per unit of output. Arguably, intensity regulations can accommodate future economic growth, reduce cost uncertainty, engage developing countries in international efforts to mitigate climate change, and provide incentives to improve energy efficiency and to use less carbon-intensive fuels. This work models and studies a carbon tax scheme where policy makers set a target emission factor, which is used as an intensity measure, for a specific industry and tax firms if they exceed that limit. The policy aims to promote energy efficiency, alleviate the impact on low emitters, and allow high emitters some flexibility to comply. We examine the effectiveness of the policy in reducing the emission factor due to manufacturing and transportation. The major objective of this research is to provide policy makers with a decision support tool that can aid in investigating the impact of an intensity-based carbon tax on regulated sectors and in finding the tax rate that achieves a target reduction. Therefore, we first propose a social-welfare-maximizing model that can serve as a tool to evaluate the economic and environmental impacts of the policy. We compare the outcomes of the intensity-based tax and other existing environmental policies, namely a carbon tax imposed on overall emissions, cap-and-trade systems, and mandatory caps, using case studies built within the context of the cement industry. The effectiveness of the policy is measured by achieving a balance between the target emission factor and social welfare.
To find the optimal tax rate that achieves a target reduction, we propose a bilevel programming model where, at the upper level, the government sets a target emission factor for the industry and taxes firms if they exceed that target, and at the lower level, the industry sets output levels that maximize social welfare. In the design of the policy, the government takes into account the decisions of the producers regarding fuel types and production quantities as well as the decisions of the market regarding demand. To evaluate the effectiveness of the policy, we build case studies in the context of the cement industry. The policy is found to be effective in reducing CO2 emissions by inducing a switch to a less carbon-intensive fuel with little impact on social welfare. To examine the effectiveness of the intensity-based carbon tax on reducing CO2 emissions from transportation, which is a major supply chain activity, we finally propose a bilevel program where at the upper level the government decides on the tax rate and at the lower level firms decide on the design of their supply chain and on truck types. The policy is found to be effective in inducing firms to reduce their emission factors and consequently the overall emissions.
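The leader-follower logic of the intensity-based tax can be sketched with a toy model (illustrative fuel numbers and a simplified objective, not the thesis model: here the regulator just searches for the smallest rate inducing compliance, rather than maximizing social welfare).

```python
FUELS = {             # fuel: (cost per unit output, emission factor)
    "coal": (10.0, 0.9),
    "gas":  (14.0, 0.4),
}
TARGET_EF = 0.5       # target emission factor set by the regulator

def firm_choice(tax_rate):
    """Lower level: pick the fuel minimizing cost plus tax on excess intensity."""
    def total_cost(fuel):
        cost, ef = FUELS[fuel]
        return cost + tax_rate * max(ef - TARGET_EF, 0.0)
    return min(FUELS, key=total_cost)

def minimal_compliant_tax(rates):
    """Upper level: smallest rate under which the chosen fuel meets the target."""
    for r in sorted(rates):
        if FUELS[firm_choice(r)][1] <= TARGET_EF:
            return r
    return None
```

Because only emissions above the target are taxed, a low emitter pays nothing — the "alleviate the impact on low emitters" feature — while the high emitter switches fuels once the rate closes the cost gap.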
6

Optimization Models and Algorithms for Vulnerability Analysis and Mitigation Planning of Pyro-Terrorism

Rashidi, Eghbal 12 August 2016 (has links)
This dissertation studies an important homeland security problem, with a focus on wildfire and pyro-terrorism management. We begin by studying the vulnerability of landscapes to pyro-terrorism, developing a maximal-covering-based optimization model to investigate the impact of a pyro-terror attack on landscapes as a function of the fires' ignition locations. We use three test-case landscapes for experimentation and compare the impact of a pyro-terror wildfire with the impacts of naturally caused wildfires with randomly located ignition points. Our results indicate that a pyro-terror attack has, on average, more than twice the impact on landscapes of wildfires with randomly located ignition points. In the next chapter, we develop a Stackelberg game model, a min-max network interdiction framework that identifies a fuel management schedule that, within a limited budget, maximally mitigates the impact of a pyro-terror attack. We develop a decomposition algorithm called MinMaxDA to solve the model for three test-case landscapes located in the Western U.S. Our results indicate that fuel management, even when conducted on a small scale (when 2% of a landscape is treated), can mitigate a pyro-terror attack by 14% on average, compared to doing nothing; fuel management plans with 5% and 10% budgets can reduce the damage by 27% and 43% on average. Finally, we extend our study to the problem of suppression response after a pyro-terror attack. We develop a max-min model to identify the vulnerability of initial attack resources when used to fight a pyro-terror attack. We use a test-case landscape for experimentation and develop a decomposition algorithm called the Bounded Decomposition Algorithm (BDA), since the model has a bilevel max-min structure with binary variables in the lower level and is therefore not solvable by conventional methods.
Our results indicate that although pyro-terror attacks with one ignition point can be controlled by an initial attack, attacks with two or more ignition points may not be. A faster response is also more promising in controlling pyro-terror fires.
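The worst-case ignition analysis above rests on a maximal covering structure: each candidate ignition point "covers" the landscape cells a fire starting there would burn, and the attacker picks a few points to maximize total coverage. A deliberately small sketch (hypothetical spread sets, not the dissertation's model or data) of the classic greedy heuristic for such problems:

```python
def greedy_max_cover(cover_sets, k):
    """Pick k ignition points greedily by marginal coverage gain."""
    covered, chosen = set(), []
    for _ in range(k):
        # point adding the most not-yet-covered cells
        best = max(cover_sets, key=lambda p: len(cover_sets[p] - covered))
        chosen.append(best)
        covered |= cover_sets[best]
    return chosen, covered

if __name__ == "__main__":
    spread = {                       # ignition point -> cells it burns
        "A": {1, 2, 3, 4},
        "B": {3, 4, 5},
        "C": {5, 6},
        "D": {7},
    }
    points, burned = greedy_max_cover(spread, 2)
    print(points, burned)
```

Greedy selection avoids the redundancy of overlapping fires (here it skips "B", whose cells are mostly already burned by "A") and carries the standard (1 - 1/e) approximation guarantee for maximal covering.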
7

Modified Pál Interpolation and Sampling Bilevel Signals with Finite Rate of Innovation

Ramesh, Gayatri 01 January 2013 (has links)
Sampling and interpolation are two important topics in signal processing. Signal processing is a vast field of study that deals with the analysis and manipulation of signals such as sound, images, sensor data and telecommunications, and it draws on mathematical theories such as approximation theory, analysis and wavelets. This dissertation is divided into two chapters: Modified Pál Interpolation, and Sampling Bilevel Signals with Finite Rate of Innovation. In the first chapter, we introduce a new interpolation process, the modified Pál interpolation, based on papers by Pál, Joó and Szabó, and we establish the existence and uniqueness of interpolation polynomials of modified Pál type. The paradigm of recovering signals with finite rate of innovation from their samples is a fairly recent field of study. In the second chapter, we show that causal bilevel signals with finite rate of innovation can be stably recovered from their samples provided that the sampling period is at or above the maximal local rate of innovation, and that the sampling kernel is causal and positive on the first sampling period. Numerical simulations are presented to discuss the recovery of bilevel causal signals in the presence of noise.
8

Computing Equivalent hydropower models in Sweden using inflow clustering

Lilja, Daniel January 2023 (has links)
To simulate a hydropower system, one can use what is known as a Detailed model. However, due to the complexity of river systems, this is often a computationally heavy task. Equivalent models, which aim to reproduce the results of a Detailed model, are used to reduce the computation time of these simulations significantly. This thesis attempts to compute Equivalent models for hydropower systems in Sweden by categorizing the inflow data using a spectral clustering method. Computing the Equivalent models also involves solving a bilevel optimization problem, which is done using a variant of the particle swarm optimization algorithm. Equivalent models are computed for all four electricity trading areas in Sweden, using solutions of a Detailed model that includes ten rivers. The Equivalent models are then evaluated on their similarity to the Detailed model in terms of power production and objective value. The results vary depending on the area and period, with the Equivalent models showing errors of 8% to 15% in relative power production difference. The results indicate that the inflow clustering procedure produces adequate Equivalent models in most cases.
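The spectral step described above can be sketched in a few lines of numpy (a generic two-way spectral clustering on synthetic inflow profiles, not the thesis code): build a Gaussian similarity graph over the profiles, form the graph Laplacian, and split by the sign of the Fiedler vector.

```python
import numpy as np

def spectral_bipartition(profiles, sigma=1.0):
    """Two-way spectral clustering of the rows of `profiles`."""
    # pairwise squared distances and Gaussian similarity graph
    d2 = ((profiles[:, None, :] - profiles[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    L = np.diag(d) - W                  # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    fiedler = vecs[:, 1]                # second-smallest eigenvector
    return (fiedler > 0).astype(int)    # sign split = two clusters

if __name__ == "__main__":
    # synthetic "dry" vs "wet" weekly inflow profiles (illustrative data)
    low = np.full((5, 4), 1.0) + 0.01 * np.arange(5)[:, None]
    high = np.full((5, 4), 5.0) + 0.01 * np.arange(5)[:, None]
    labels = spectral_bipartition(np.vstack([low, high]), sigma=2.0)
    print(labels)
```

With clearly separated inflow regimes, the similarity graph is nearly two disconnected components, so the Fiedler vector changes sign exactly at the regime boundary; the resulting categories could then feed the kind of per-cluster Equivalent model fitting the thesis describes.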
9

Optimization problems of electricity market under modern power grid

Lei, Ming 22 February 2016 (has links)
Electricity markets are becoming increasingly deregulated, and the development of the smart grid and the introduction of renewable energy are reshaping how energy markets are regulated. At the same time, the uncertainties of new energy sources and of market participants' bidding pose new challenges to power system operation and transmission system planning. These problems motivate us to study the spot price (also called the locational marginal price) of electricity markets, the strategic bidding of a wind power producer participating in the market as an independent power producer, transmission expansion planning considering wind power investment, and the maximum loadability of a power grid. The work on probabilistic spot pricing for a utility grid includes renewable wind generation in a deregulated environment, taking into account both the uncertainty of load forecasting and the randomness of wind speed. Based on the forecasted normally distributed load and Weibull-distributed wind speed, a probabilistic optimal power flow is formulated by including the spinning reserve cost associated with wind power plants and the emission cost in addition to the conventional thermal power plant cost model. Simulations show that the integration of wind power can effectively decrease the spot price, but it also increases the risk of overvoltage. Based on the concept of locational marginal pricing, which is determined by a market-clearing algorithm, further research is conducted on optimal offering strategies for wind power producers participating in a day-ahead market that employs a stochastic market-clearing algorithm. The proposed procedure to derive strategic offers relies on a stochastic bilevel model: the upper-level problem represents the profit maximization of the strategic wind power producer, while the lower-level one represents the market clearing and the corresponding price formation, aiming to co-optimize both energy and reserve.
Thirdly, to improve wind power integration, we propose a bilevel problem incorporating two-stage stochastic programming for transmission expansion planning to accommodate large-scale wind power investments in electricity markets. The model integrates co-optimization of energy and reserve to deal with the uncertainties of wind power production. In the upper-level problem, the independent system operator (ISO), modelling transmission investments under uncertainty, minimizes the transmission and wind power investment costs and the expected load shedding cost. The lower-level problem is a two-stage stochastic program for simultaneous energy scheduling and reserve dispatch. Case studies illustrate the effectiveness of the proposed model. The above market clearing and power system operation are based on the direct current optimal power flow (DC-OPF) model, a linear problem without reactive power constraints. The maximum loadability of a power system is a crucial index of voltage stability. The fourth work in this thesis proposes a Lagrange semidefinite programming (SDP) method to solve the non-linear, non-convex alternating current (AC) optimization problem of the maximum loadability of a security-constrained power system. Simulation results for the IEEE three-bus system and the IEEE 24-bus Reliability Test System (RTS) show that the proposed method obtains the global optimal solution of the maximum loadability problem. Lastly, we summarize the conclusions of these studies on optimization problems of the electric power market under the modern grid, including the influence of wind power integration on power system reliability, transmission expansion planning and the operation of electricity markets.
Meanwhile, we also present some open questions for related research, such as handling non-convex constraints in the lower-level problem of a bilevel program, and integrating the N-1 security criterion into transmission planning. / Graduate / lei.ming296@gmail.com
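One building block above is the Weibull model of wind-speed randomness. A small Monte Carlo sketch (illustrative turbine parameters, not the thesis model) of how Weibull-distributed speeds translate into uncertain wind power output through a piecewise power curve:

```python
import numpy as np

def turbine_power(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=2.0):
    """Piecewise power curve in MW: zero below cut-in and above cut-out,
    cubic ramp between cut-in and rated speed, flat at rated in between."""
    v = np.asarray(v, dtype=float)
    p = np.zeros_like(v)
    ramp = (v >= v_cut_in) & (v < v_rated)
    p[ramp] = p_rated * ((v[ramp] ** 3 - v_cut_in ** 3)
                         / (v_rated ** 3 - v_cut_in ** 3))
    p[(v >= v_rated) & (v <= v_cut_out)] = p_rated
    return p

def expected_wind_power(shape=2.0, scale=8.0, n=100_000, seed=1):
    """Monte Carlo estimate of mean output under Weibull wind speeds."""
    rng = np.random.default_rng(seed)
    v = scale * rng.weibull(shape, size=n)   # Weibull-distributed speeds
    return turbine_power(v).mean()
```

Sampled outputs like these are what enter a probabilistic optimal power flow as scenarios; the gap between expected output and rated capacity is the uncertainty that spinning reserve must cover.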
10

Effectiveness of continuous or bilevel positive airway pressure versus standard medical therapy for acute asthma

Hanekom, Silmara Guanaes 09 July 2008 (has links)
ABSTRACT: Patients with respiratory failure secondary to an acute asthma exacerbation (AAE) frequently present at emergency units, and some develop respiratory muscle fatigue. Current guidelines for managing an AAE center on pharmacological treatment and invasive mechanical ventilation. Noninvasive positive pressure ventilation (NPPV) has an established role in COPD exacerbations; the role it can play in an AAE remains unanswered, although it is frequently used in the clinical setting. Aims: The present study investigated whether the early use of NPPV, in the form of continuous positive airway pressure (CPAP) or bilevel positive pressure ventilation (BPPV), together with standard medical therapy in AAE can decrease the time of response to therapy compared to standard medical therapy alone. We further tested the effect of BPPV against CPAP. Methods: Asthmatic patients who presented with a severe AAE (PEFR % predicted < 60 %) at the emergency unit were randomized to standard medical therapy (ST), ST and CPAP, or ST and BPPV. Thirty patients fulfilled the inclusion criteria, and the groups presented similar baseline characteristics. The mean age for the group was 42.1 ± 12.6 years. Mean baseline PEFR % predicted was 35.2 ± 10.7 % (ST), 30.5 ± 11.7 % (ST + CPAP) and 33.5 ± 13.8 % (ST + BPPV). Results: Hourly improvement (Δ) in respiratory rate and sensation of breathlessness was significantly better in the BPPV intervention group. Improvement (Δ) from baseline to end of treatment in respiratory rate and sensation of breathlessness was significant for both CPAP and BPPV (p = 0.0463 and p = 0.0132 respectively) compared to ST alone. Lung function improved significantly in the CPAP intervention group, both hourly and from baseline to end of treatment (p = 0.0403 for PEFR and p = 0.0293 for PEFR % predicted), compared to ST + BPPV and ST alone.
The mean shift (Δ) in PEFR from baseline to 3 hours of treatment was 67.4, 123.5 and 86.8 L/min (p = 0.0445) for ST, ST + CPAP and ST + BPPV respectively, corresponding to improvements in lung function of 38.1, 80.8 and 51.7 %. Discussion: The effect of BPPV on the reduction of respiratory rate and sensation of breathlessness could be related to the inspiratory assistance provided by BPPV. The significant improvement in lung function in the CPAP group could be related to an intrinsic effect on the airway smooth muscle and/or on the airway smooth muscle load. Conclusion: The present results suggest that adding NPPV to standard treatment for an AAE not only improves clinical signs faster but also improves lung function faster. CPAP seems to have an intrinsic effect on the airway smooth muscle, rendering it more effective in ameliorating lung function.
