  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Global sensitivity analysis of reactor parameters / Bolade Adewale Adetula

Adetula, Bolade Adewale January 2011 (has links)
Calculations of reactor parameters of interest (such as neutron multiplication factors, decay heat, and reaction rates) are often based on models that depend on groupwise neutron cross sections. The uncertainties associated with these neutron cross sections propagate to the final calculated reactor parameters. There is a need to characterize this uncertainty and to apportion the uncertainty in a calculated reactor parameter among the different sources of uncertainty in the groupwise neutron cross sections; this procedure is known as sensitivity analysis. The focus of this study is the application of a modified global sensitivity analysis technique to calculations of reactor parameters that depend on groupwise neutron cross sections. Sensitivity analysis can help identify the important neutron cross sections for a particular model and helps in establishing best-estimate, optimized nuclear reactor physics models with reduced uncertainties. In this study, our approach is similar to the variance-based global sensitivity analysis technique, which is robust, has a wide range of applicability, and provides accurate sensitivity information for most models. However, this technique requires the input variables to be mutually independent. A modification of the technique that can handle input variables that are block-wise correlated and normally distributed is presented. The implementation of the modified technique involves the calculation of multi-dimensional integrals, which can be prohibitively expensive to compute. Numerical techniques specifically suited to the evaluation of multi-dimensional integrals, namely Monte Carlo, quasi-Monte Carlo, and sparse-grid methods, are used, and their efficiency is compared. The modified technique is illustrated and tested on a two-group cross-section-dependent problem. In all the cases considered, the results obtained with sparse grids achieved much better accuracy while using a significantly smaller number of samples. / Thesis (M.Sc. Engineering Sciences (Nuclear Engineering))--North-West University, Potchefstroom Campus, 2011.
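
As an illustrative aside (not the author's code), the variance-based idea underlying the abstract above can be sketched for the independent-input case: the first-order Sobol' index of each input is the fraction of the output variance it explains, estimated here by plain Monte Carlo with a pick-freeze scheme. The toy response function, input ranges, and sample size are hypothetical; the thesis itself modifies this technique for block-wise correlated, normally distributed cross sections and also uses quasi-Monte Carlo and sparse grids.

```python
# Illustrative sketch, not the author's code: first-order variance-based
# (Sobol') sensitivity indices for a toy "reactor parameter" that depends on
# three independent inputs, estimated by plain Monte Carlo with the
# Saltelli pick-freeze scheme. The response function and input ranges are
# hypothetical placeholders.
import numpy as np

def reactor_parameter(x):
    # Hypothetical smooth response standing in for a groupwise
    # cross-section-dependent quantity such as k_eff.
    return 1.0 + 0.8 * x[:, 0] - 0.3 * x[:, 1] * x[:, 2] + 0.1 * x[:, 2] ** 2

rng = np.random.default_rng(0)
n_samples, n_inputs = 100_000, 3

A = rng.uniform(0.0, 1.0, size=(n_samples, n_inputs))   # independent inputs assumed
B = rng.uniform(0.0, 1.0, size=(n_samples, n_inputs))
yA, yB = reactor_parameter(A), reactor_parameter(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(n_inputs):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                                  # resample only the i-th input
    yABi = reactor_parameter(ABi)
    S_i = np.mean(yB * (yABi - yA)) / var_y              # Saltelli first-order estimator
    print(f"first-order index S_{i + 1} ~ {S_i:.3f}")
```
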
212

Development of a capital investment framework for a gold mine / M. Clasen

Clasen, Mari January 2011 (has links)
This study was done against the backdrop that executives should carefully consider all options for managing difficult periods before letting employees go, especially if they will need to rehire shortly after the economic recovery. The study therefore investigated whether investing in the operational development of a plant can be used to improve feasibility, rather than making across-the-board labour cuts. Two South African mining operations were chosen for this study: two investment centres at AngloGold Ashanti, Mine X Ltd. and Mine Z Ltd. The project investigated was undertaken at Mine X to extract gold from the neighbouring Mine Z. Mine X will have access to the minerals 40 years ahead of Mine Z, owing to insufficient essential infrastructure at Mine Z. The estimated lifetime of the project is 18 years. The main objective of this study is to investigate the feasibility, from Mine X's point of view, of a deepening project that includes Mine Z. The most significant aspect is to determine which investment-timing decision will gain Mine X a feasible position in terms of economic growth. This is pursued through the following secondary objectives in making a capital investment decision: 1. To describe the nature and significance of investment decision making. 2. To identify appropriate capital investment evaluation techniques in conjunction with sensitivity analysis. 3. To apply these techniques and sensitivity analysis in order to decide on a possible, feasible investment opportunity at Mine X. 4. To develop a framework to identify the project's components and assess difficulties over Mine X's project life cycle. The feasibility study considers multiple scenarios and provides recommendations and a final report based on the most viable scenario. The following techniques were used to analyse the feasibility of the project: net present value, internal rate of return, and payback period. These techniques were analysed in three scenarios: 1. Mine X continues its current operations without any new projects. 2. The development project begins immediately. 3. The development of the project is delayed by six months. The study found that the net present value was positive, the internal rate of return exceeded the discount rate, and the payback period was shorter than the project's lifetime in all three scenarios. The highest net present value is obtained if the project starts immediately, whereas both the internal rate of return and the payback period indicated that a six-month delay is the most viable. After considering all the facts, the study concluded that, on the strength of the highest net present value, the best recommendation is to start the project immediately. The value of this study is that it is the first to investigate, in the South African mining industry, the trade-off between delaying and starting the investment project immediately. The study is also unique in considering how mining companies worldwide can achieve long-term success through development projects without losing key people to impulsive short-term downsizing decisions. / Thesis (M.Com. (Management Accountancy))--North-West University, Potchefstroom Campus, 2012.
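
For readers unfamiliar with the three appraisal techniques named in this abstract, the following is a minimal sketch of net present value, internal rate of return, and payback period applied to a single cash-flow series. The cash flows, discount rate, and project length are hypothetical placeholders, not Mine X's figures.

```python
# Illustrative sketch with hypothetical cash flows (not Mine X's figures):
# the three appraisal techniques used in the study applied to one scenario.
import numpy as np

def npv(rate, cash_flows):
    # cash_flows[0] is the initial outlay (negative); one entry per year.
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-8):
    # Bisection on the discount rate at which NPV crosses zero
    # (valid for a conventional outlay-then-inflows profile).
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if npv(mid, cash_flows) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def payback_period(cash_flows):
    # First year in which the cumulative cash flow recovers the outlay.
    cumulative = np.cumsum(cash_flows)
    year = int(np.argmax(cumulative >= 0))
    return year if cumulative[year] >= 0 else None

flows = [-500.0] + [70.0] * 18      # hypothetical 18-year project, millions of currency units
rate = 0.10                         # assumed discount rate
print(f"NPV at {rate:.0%}: {npv(rate, flows):.1f}")
print(f"IRR: {irr(flows):.2%}")
print(f"Payback period: {payback_period(flows)} years")
```
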
213

Decision support algorithms for power system and power electronic design

Heidari, Maziar 10 September 2010 (has links)
The thesis introduces an approach for obtaining higher-level decision-support information using electromagnetic transient (EMT) simulation programs. In this approach, a suite of higher-level driver programs (decision-support tools) controls the simulator to gain important information about the system being simulated. These tools conduct a sequence of simulation runs, in each of which the study parameters are carefully selected based on observations from the earlier runs in the sequence. In this research, two such tools have been developed in conjunction with the PSCAD/EMTDC electromagnetic transient simulation program. The first tool is an improved optimization algorithm, used for automatic optimization of system parameters to achieve a desired performance. This algorithm improves on the previously reported method of optimization-enabled electromagnetic transient simulation by using an enhanced gradient-based optimization algorithm with constraint-handling techniques. In addition, to handle design problems with more than one objective, the thesis proposes augmenting the optimization tool with the technique of Pareto optimality. A sequence of optimization runs is conducted to obtain the Pareto frontier, which quantifies the trade-offs between the design objectives; the frontier can be used by the designer in the decision-making process. The second tool developed in this research helps the designer study the effects of uncertainties in a design. Using a similar multiple-run approach, this sensitivity analysis tool provides surrogate models of the system: simple mathematical functions that represent different aspects of the system performance. These models allow the designer to analyze the effects of uncertainties on system performance without having to conduct further time-consuming EMT simulations. This research also proposes adding probabilistic analysis capabilities to the developed sensitivity analysis tool. Since probabilistic analysis of a system using conventional techniques (e.g. Monte Carlo simulations) normally requires a large number of EMT simulation runs, using surrogate models instead of the actual simulation runs yields significant savings in simulation time. A number of examples are used throughout the thesis to demonstrate the application and usefulness of the proposed tools.
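
The Pareto-frontier step described above can be illustrated with a toy stand-in for the simulator: a sequence of single-objective optimization runs with a swept weighting factor traces the trade-off curve between two competing design objectives. The objective functions below are hypothetical; in the thesis each run drives PSCAD/EMTDC EMT simulations rather than closed-form functions.

```python
# Illustrative sketch with toy objective functions (not the PSCAD/EMTDC tool):
# a Pareto frontier traced by a sequence of single-objective optimization runs
# with a swept weighting factor; each run warm-starts from the previous one.
import numpy as np
from scipy.optimize import minimize

def objective_1(x):
    # Hypothetical design objective (e.g. an overshoot-like measure).
    return (x[0] - 1.0) ** 2 + 0.5 * x[1] ** 2

def objective_2(x):
    # Hypothetical competing objective (e.g. a loss-like measure).
    return (x[0] + 1.0) ** 2 + 0.5 * (x[1] - 1.0) ** 2

frontier = []
for w in np.linspace(0.0, 1.0, 11):
    x0 = frontier[-1][2] if frontier else np.zeros(2)
    res = minimize(lambda x: w * objective_1(x) + (1.0 - w) * objective_2(x), x0)
    frontier.append((objective_1(res.x), objective_2(res.x), res.x))

for f1, f2, _ in frontier:
    print(f"objective 1 = {f1:.3f}, objective 2 = {f2:.3f}")
```
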
216

Multivariate Spatial Process Gradients with Environmental Applications

Terres, Maria Antonia January 2014 (has links)
Previous papers have elaborated formal gradient analysis for spatial processes, focusing on the distribution theory for directional derivatives associated with a response variable assumed to follow a Gaussian process model. In the current work, these ideas are extended to additionally accommodate one or more continuous covariate(s) whose directional derivatives are of interest and to relate the behavior of the directional derivatives of the response surface to those of the covariate surface(s). It is of interest to assess whether, in some sense, the gradients of the response follow those of the explanatory variable(s), thereby gaining insight into the local relationships between the variables. The joint Gaussian structure of the spatial random effects and associated directional derivatives allows for explicit distribution theory and, hence, kriging across the spatial region using multivariate normal theory. The gradient analysis is illustrated for bivariate and multivariate spatial models, non-Gaussian responses such as presence-absence and point patterns, and outlined for several additional spatial modeling frameworks that commonly arise in the literature. Working within a hierarchical modeling framework, posterior samples enable all gradient analyses to occur as post model fitting procedures. / Dissertation
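
A minimal sketch of the kind of gradient kriging described above, under simplifying assumptions (a mean-zero Gaussian process with a squared-exponential covariance, a hypothetical length scale, and synthetic data): because the process and its directional derivative are jointly Gaussian, the derivative at a new site can be predicted from observed values by standard multivariate-normal conditioning.

```python
# Illustrative sketch under simplifying assumptions (mean-zero GP, squared-
# exponential covariance, hypothetical length scale and synthetic data):
# kriging the directional derivative of a spatial surface from observed values,
# using the joint Gaussian structure of the process and its gradient.
import numpy as np

sigma2, ell = 1.0, 0.5                               # assumed GP variance and length scale

def cov(a, b):
    # Squared-exponential covariance between two sets of 2-D sites.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return sigma2 * np.exp(-0.5 * d2 / ell ** 2)

def cross_cov_grad(s0, sites, u):
    # Cov(D_u Y(s0), Y(s_i)) = u . d/ds0 K(s0, s_i)
    #                        = -K(s0, s_i) * u.(s0 - s_i) / ell^2
    diff = s0[None, :] - sites
    return -cov(s0[None, :], sites)[0] * (diff @ u) / ell ** 2

rng = np.random.default_rng(1)
sites = rng.uniform(0.0, 1.0, size=(30, 2))          # observed spatial locations
K = cov(sites, sites) + 1e-8 * np.eye(30)            # small jitter for numerical stability
y = rng.multivariate_normal(np.zeros(30), K)         # one synthetic realisation at the sites

s0 = np.array([0.5, 0.5])                            # prediction site
u = np.array([1.0, 0.0])                             # unit direction of interest

c = cross_cov_grad(s0, sites, u)
grad_mean = c @ np.linalg.solve(K, y)                # posterior (kriged) mean of D_u Y(s0)
print(f"kriged directional derivative at s0: {grad_mean:.3f}")
```
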
217

Evaluation and Optimization of a Force Field for Crystalline Forms of Mannitol and Sorbitol

Kendrick, John, Anwar, Jamshed, de Waard, H., Amani, A., Hinrichs, W.L.J., Frijlink, H.W. January 2010 (has links)
Two force fields, the GROMOS53A5/53A6 (united atom) and the AMBER95 (all atom) parameter sets, coupled with partial atomic charges derived from quantum mechanical calculations were evaluated for their ability to reproduce the known crystalline forms of the polyols mannitol and sorbitol. The force fields were evaluated using molecular dynamics simulations at 10 K (which is akin to potential energy minimization) with the simulation cell lengths and angles free to evolve. Both force fields performed relatively poorly, not being able to simultaneously reproduce all of the crystal structures within a 5% deviation level. The parameter sets were then systematically optimized using sensitivity analysis, and a revised AMBER95 set was found to reproduce the crystal structures with less than 5% deviation from experiment. The stability of the various crystalline forms for each of the parameter sets (original and revised) was then assessed in extended MD simulations at 298 K and 1 bar covering 1 ns simulation time. The AMBER95 parameter sets (original and revised) were found to be effective in reproducing the crystal structures in these more stringent tests. Remarkably, the performance of the original AMBER95 parameter set was found to be slightly better than that of the revised set in these simulations at 298 K. The results of this study suggest that, whenever feasible, one should include molecular simulations at elevated temperatures when optimizing parameters. / Dutch Top Institute Pharma
218

Enhanced Optimality Conditions and New Constraint Qualifications for Nonsmooth Optimization Problems

Zhang, Jin 12 December 2014 (has links)
The main purpose of this dissertation is to investigate necessary optimality conditions for a class of very general nonsmooth optimization problems called mathematical programs with geometric constraints (MPGC). The geometric constraint means that the image of a certain mapping is included in a nonempty, closed set. We first study the conventional nonlinear program with equality, inequality, and abstract set constraints as a special case of MPGC. We derive the enhanced Fritz John condition, from which we obtain the enhanced Karush-Kuhn-Tucker (KKT) condition and introduce the associated pseudonormality and quasinormality conditions. We prove that either pseudonormality or quasinormality with regularity implies the existence of a local error bound. We also give a tighter upper estimate for the Fréchet subdifferential and the limiting subdifferential of the value function in terms of quasinormal multipliers, which usually form a smaller set than the classical normal multipliers. We then consider a more general MPGC in which the image of a mapping from a Banach space is included in a nonempty, closed subset of a finite-dimensional space. We obtain the enhanced Fritz John necessary optimality conditions in terms of the approximate subdifferential. One of the technical difficulties in obtaining such a result in an infinite-dimensional space is that no compactness argument can be used to show the existence of local minimizers of a perturbed problem; we employ Ekeland's celebrated variational principle instead. We then apply our results to the study of exact penalties and sensitivity analysis. We also study a special class of MPGC, namely mathematical programs with equilibrium constraints (MPECs). We argue that the MPEC linear independence constraint qualification is not a constraint qualification for the strong (S-) stationarity condition when the objective function is nonsmooth. We derive the enhanced Fritz John Mordukhovich (M-) stationarity condition for MPECs; from it we introduce the associated MPEC generalized pseudonormality and quasinormality conditions and establish their relations to other widely used MPEC constraint qualifications. We give upper estimates for the subdifferential of the value function in terms of the enhanced M- and C-multipliers, respectively. In addition, we focus on some new constraint qualifications introduced for nonlinear extremum problems in the recent literature. We show that, if the constraint functions are continuously differentiable, the relaxed Mangasarian-Fromovitz constraint qualification (or, equivalently, the constant rank of the subspace component condition) implies the existence of local error bounds. We further extend this new result to MPECs. / Graduate / 0405
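
For orientation, the MPGC form described in the opening sentences of this abstract can be written schematically as follows (standard notation assumed; the precise spaces and regularity assumptions are given in the dissertation):

```latex
% Schematic MPGC form (standard notation assumed; the precise spaces and
% regularity assumptions are given in the dissertation).
\begin{align*}
\min_{x}\quad & f(x)\\
\text{s.t.}\quad & F(x) \in \Lambda,
\end{align*}
% with $\Lambda$ a nonempty closed set. The conventional nonlinear program with
% inequality, equality and abstract set constraints is recovered by taking
% $F(x) = (g(x), h(x), x)$ and $\Lambda = \mathbb{R}^m_- \times \{0\}^p \times C$.
```
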
219

Direct sensitivity techniques in regional air quality models: development and application

Zhang, Wenxian 12 January 2015 (has links)
Sensitivity analysis based on a chemical transport model (CTM) serves as an important approach towards better understanding the relationship between trace contaminant levels in the atmosphere and emissions, chemical, and physical processes. Previous studies on ozone control identified the high-order Decoupled Direct Method (HDDM) as an efficient tool for conducting sensitivity analysis. Given the growing recognition of the adverse health effects of fine particulate matter (i.e., particles with an aerodynamic diameter less than 2.5 micrometers, PM2.5), this dissertation presents the development of an HDDM sensitivity technique for particulate matter and its implementation in a widely used CTM, CMAQ. Compared to previous studies, two new features of the implementation are 1) inclusion of sensitivities of aerosol water content and activity coefficients, and 2) tracking of the chemical regimes of the embedded thermodynamic model. The new features provide more accurate sensitivities, especially for nitrate and ammonium. Results compare well with brute-force sensitivities and are shown to be more stable and computationally efficient. Next, this dissertation explores applications of HDDM. Source apportionment analysis for the Houston region in September 2006 indicates that nonlinear responses accounted for 3.5% to 33.7% of daily average PM2.5, and that PM2.5 formed rapidly at night, especially in the presence of abundant ozone and under stagnant conditions. Uncertainty analysis based on the HDDM found that, on average, uncertainties in the emission rates led to 36% uncertainty in simulated daily average PM2.5 and could explain much, but not all, of the difference between simulated and observed PM2.5 concentrations at two observation sites. HDDM is then applied to assess the impact of flare VOC emissions with temporally variable combustion efficiency. Detailed study of flare emissions using the 2006 Texas special inventory indicates that daily maximum 8-hour ozone at a monitoring site can increase by 2.9 ppb when combustion efficiency is significantly decreased. The last application in this dissertation integrates the reduced-form model into an electricity generation planning model, enabling representation of the geospatial dependence of air quality-related health costs in the optimization process to seek the least-cost plan for power generation. The integrated model can provide useful advice on selecting fuel types and locations for power plants.
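
The direct-sensitivity idea behind HDDM can be sketched, only to first order and for a toy one-species model rather than CMAQ: a sensitivity equation is integrated alongside the concentration equation, and the result can be checked against the brute-force finite-difference estimate that the abstract mentions as a comparison. All rate parameters below are hypothetical.

```python
# Illustrative sketch for a toy one-species model (not CMAQ/HDDM, and only to
# first order): the sensitivity equation is integrated alongside the
# concentration equation and then checked against a brute-force
# finite-difference estimate. P (emission rate) and k (loss rate) are
# hypothetical placeholders.
def integrate(P, k, dt=1.0, n_steps=240):
    # Concentration:            dC/dt = P - k*C
    # Direct sensitivity to P:  dS/dt = 1 - k*S   (S = dC/dP)
    C, S = 0.0, 0.0
    for _ in range(n_steps):
        C += dt * (P - k * C)
        S += dt * (1.0 - k * S)
    return C, S

P, k = 2.0, 0.1
C, S_direct = integrate(P, k)

# Brute-force comparison: perturb the emission rate and difference the outputs.
dP = 0.01 * P
C_pert, _ = integrate(P + dP, k)
S_brute = (C_pert - C) / dP

print(f"C = {C:.3f}, dC/dP direct = {S_direct:.3f}, brute force = {S_brute:.3f}")
```
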
220

Optimal Portfolio Execution Strategies: Uncertainty and Robustness

Moazeni, Somayeh 25 October 2011 (has links)
Optimal investment decisions often rely on assumptions about the models and their associated parameter values. Therefore, it is essential to assess the suitability of these assumptions and to understand the sensitivity of outcomes when they are altered. More importantly, appropriate approaches should be developed to achieve a robust decision. In this thesis, we carry out a sensitivity analysis on parameter values as well as model specification of an important problem in portfolio management, namely the optimal portfolio execution problem. We then propose more robust solution techniques and models to achieve greater reliability in the performance of an optimal execution strategy. The optimal portfolio execution problem yields an execution strategy to liquidate large blocks of assets over a given execution horizon so as to minimize the mean of the execution cost and the risk in execution. For large-volume trades, a major component of the execution cost comes from price impact. The optimal execution strategy then depends on the market price dynamics, the execution price model, the price impact model, and the choice of the risk measure. In this study, first, the sensitivity of the optimal execution strategy to estimation errors in the price impact parameters is analyzed when a deterministic strategy is sought to minimize the mean and variance of the execution cost. An upper bound on the size of change in the solution is provided, which indicates the factors contributing to the sensitivity of an optimal execution strategy. Our results show that the optimal execution strategy and the efficient frontier may be quite sensitive to perturbations in the price impact parameters. Motivated by our sensitivity results, a regularized robust optimization approach is devised when the price impact parameters belong to some uncertainty set. We first illustrate that classical robust optimization might be unstable to variation in the uncertainty set. To achieve greater stability, the proposed approach imposes a regularization constraint on the uncertainty set before it is used in the minimax optimization formulation. Improvement in the stability of the robust solution is discussed and some implications of the regularization for the robust solution are studied. Sensitivity of the optimal execution strategy to market price dynamics is then investigated. We provide arguments that jump diffusion models using compound Poisson processes naturally model the uncertain price impact of other large trades. Using stochastic dynamic programming, we derive analytical solutions for minimizing the expected execution cost under jump diffusion models and compare them with the optimal execution strategies obtained from a diffusion process. A jump diffusion model for the market price dynamics suggests the use of Conditional Value-at-Risk (CVaR) as the risk measure. Using Monte Carlo simulations, a smoothing technique, and a parametric representation of a stochastic strategy, we investigate an approach to minimize the mean and CVaR of the execution cost. The devised approach can further handle constraints using a smoothed exact penalty function.
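
As a hedged illustration of the quantities discussed above (not the thesis model), the mean and CVaR of the execution cost for a fixed liquidation schedule can be estimated by Monte Carlo under a simple linear temporary/permanent price-impact model with Gaussian price noise; all parameter values below are hypothetical.

```python
# Illustrative sketch with hypothetical parameter values (not the thesis model):
# Monte Carlo estimation of the mean and CVaR of the execution cost for a fixed
# liquidation schedule under a simple linear temporary/permanent price-impact
# model with Gaussian price noise.
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_periods = 20_000, 10
shares_total = 1.0e6
schedule = np.full(n_periods, shares_total / n_periods)   # equal-sized child orders

s0 = 50.0                       # arrival price
sigma = 0.02                    # per-period price volatility (assumed)
eta, gamma = 2.5e-6, 2.5e-7     # temporary / permanent impact coefficients (assumed)

costs = np.zeros(n_paths)
for p in range(n_paths):
    price, cost = s0, 0.0
    for n_k in schedule:
        exec_price = price - eta * n_k          # temporary impact on this child order
        cost += n_k * (s0 - exec_price)         # shortfall relative to the arrival price
        price += -gamma * n_k + sigma * rng.standard_normal()  # permanent impact + noise
    costs[p] = cost

alpha = 0.95
var_alpha = np.quantile(costs, alpha)
cvar = costs[costs >= var_alpha].mean()         # mean cost in the worst (1 - alpha) tail
print(f"mean cost = {costs.mean():,.0f}   CVaR at {alpha:.0%} = {cvar:,.0f}")
```
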
