361.
Evaluation and enhancement of the Phosphorus Index for the Mississippi Delta
Fernandez Martinez, Felipe, 10 May 2024
The Lower Mississippi Alluvial Basin (LMAB) faces significant environmental challenges due to phosphorus (P) runoff from agricultural lands, contributing to eutrophication and aquatic ecosystem degradation. Excess nutrient runoff, particularly P, threatens water quality and contributes to hypoxia in the Gulf of Mexico. The current Mississippi Phosphorus Index (P-Index), a tool for assessing P loss vulnerability from agricultural fields, has shown limitations in its applicability across the diverse conditions of the Mississippi Delta, a sub-region of the LMAB. This research presents a comprehensive revision of the P-Index by employing a suite of analytical techniques and diverse data sources, including geospatial analysis, rainfall simulations, and extensive data from soil tests, agricultural censuses, and expert evaluations. The aim was to enhance the model's sensitivity and accuracy in predicting P loss vulnerability, thereby enabling more precise nutrient management recommendations tailored to the Mississippi Delta's unique agricultural and environmental conditions. The study identified a critical lack of variability in the P-Index's recommendations for different agricultural scenarios within the region, highlighting its inadequacy in accurately reflecting the specific vulnerabilities to soil P loss. Through a detailed sensitivity analysis and recalibration of the model, incorporating updated parameters and data sources, significant improvements were achieved. The revised P-Index now better distinguishes between various agricultural practices set in the environmental conditions of the MS Delta, offering differentiated recommendations that align closely with the region's real-world complexities. Furthermore, the research underscores the necessity for ongoing investigations into the equivalencies between different soil test P methods (Lancaster and Mehlich-III) and the impact of P levels in irrigation water on nutrient cycling and loss. The recalibrated P-Index represents a significant step forward in regional nutrient management strategies, promising enhanced environmental protection and agricultural sustainability through more informed and targeted recommendations. This work emphasizes the critical need for adapting nutrient management tools like the P-Index to regional conditions, ensuring they accurately address the environmental challenges and agricultural practices specific to areas like the Mississippi Delta. Keywords: Nutrient management, Phosphorus Index, Mississippi Delta, Agricultural runoff, Soil test phosphorus, Environmental sustainability, Sensitivity analysis, Trend analysis.
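As an illustration of the sensitivity screening described above, the Python sketch below applies a one-at-a-time perturbation to a generic additive-multiplicative P-Index (weighted source factors multiplied by a sum of transport factors). The index form, factor names, weights, and baseline ratings are all hypothetical placeholders, not the Mississippi P-Index itself.

    # Illustrative one-at-a-time sensitivity screen for a generic P-Index.
    # The index form (weighted source factors x sum of transport factors)
    # and every weight/baseline below are assumptions, not the Mississippi model.

    def p_index(factors):
        source = (0.2 * factors["soil_test_p"]
                  + 0.3 * factors["fert_rate"]
                  + 0.5 * factors["manure_rate"])
        transport = (factors["erosion"]
                     + factors["runoff_class"]
                     + factors["distance_to_water"])
        return source * transport

    baseline = {
        "soil_test_p": 60.0,       # hypothetical soil test P rating
        "fert_rate": 40.0,         # hypothetical fertilizer application rating
        "manure_rate": 10.0,       # hypothetical manure application rating
        "erosion": 2.0,            # hypothetical transport ratings
        "runoff_class": 3.0,
        "distance_to_water": 1.0,
    }

    print(f"baseline index: {p_index(baseline):.1f}")

    # Perturb each input by +/-20% and record the spread in the index,
    # a simple screen for which inputs the recommendation is most sensitive to.
    for name in baseline:
        spread = []
        for scale in (0.8, 1.2):
            trial = dict(baseline, **{name: baseline[name] * scale})
            spread.append(p_index(trial))
        print(f"{name:18s} index range: {min(spread):6.1f} - {max(spread):6.1f}")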
362.
Reliability-Based Design Optimization of Nonlinear Beam-Columns
Li, Zhongwei, 30 April 2018
This dissertation addresses the ultimate strength analysis of nonlinear beam-columns under axial compression, the sensitivity of the ultimate strength, structural optimization and reliability analysis using ultimate strength analysis, and Reliability-Based Design Optimization (RBDO) of the nonlinear beam-columns. The ultimate strength analysis is based on nonlinear beam theory with material and geometric nonlinearities. A nonlinear constitutive law is developed for an elastic-perfectly-plastic beam cross-section consisting of a base plate and T-bar stiffener. The analysis method is validated using commercial nonlinear finite element analysis. A new direct solving method is developed, which combines the original governing equations with their derivatives with respect to a deformation metric and solves for the ultimate strength directly. Structural optimization and reliability analysis use a gradient-based algorithm and need accurate sensitivities of the ultimate strength to the design variables. The semi-analytic sensitivity of the ultimate strength is calculated from a linear set of analytical sensitivity equations which use the Jacobian matrix of the direct solving method. The derivatives of the structural residual equations in the sensitivity equation set are calculated using the complex step method. The semi-analytic sensitivity is more robust and efficient than finite difference sensitivity. The design variables are the cross-sectional geometric parameters. Random variables include material properties, geometric parameters, initial deflection and nondeterministic load. Failure probabilities calculated by ultimate strength reliability analysis are validated by Monte Carlo simulation. Double-loop RBDO minimizes structural weight under a reliability index constraint. The sensitivity of the reliability index with respect to the design variables is calculated from the gradient of the limit state function at the solution of the reliability analysis. By using the ultimate strength direct solving method, the semi-analytic sensitivity and a gradient-based optimization algorithm, the RBDO method is found to be robust and efficient for nonlinear beam-columns. The ultimate strength direct solving method, semi-analytic sensitivity, structural optimization, reliability analysis, and RBDO method can be applied to more complicated engineering structures including stiffened panels and aerospace/ocean structures. / Ph. D. / This dissertation presents a Reliability-Based Design Optimization (RBDO) procedure for nonlinear beam-columns. The beam-column cross-section has an asymmetric I-shape, and the nonlinear material model allows plastic deformation. Structural optimization minimizes the structural weight while maintaining an ultimate strength level, i.e. the maximum load the structure can carry. In reality, the geometric parameters and material properties of the beam-column vary from their design values, and these uncertain variations affect the strength of the structure. Structural reliability analysis accounts for these uncertainties in structural design; the reliability index is a measure of the structure's probability of failure given these uncertainties. RBDO minimizes the structural weight while maintaining the reliability level of the beam-column. A novel numerical method is presented which solves an explicit set of equations to obtain the maximum strength of the beam-column directly. By using this method, the RBDO procedure is found to be efficient and robust.
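The semi-analytic sensitivities above rely on complex-step differentiation of the residual equations. The Python sketch below shows the complex-step idea on a stand-alone test function (the beam-column residuals are not reproduced here): for an analytic function, f'(x) is approximately Im f(x + ih)/h, which avoids the subtractive cancellation that limits finite differences.

    import cmath

    # Complex-step differentiation: for analytic f, f'(x) ~ Im(f(x + i*h)) / h,
    # accurate near machine precision even for tiny h (no subtractive cancellation).
    def complex_step_derivative(f, x, h=1e-30):
        return (f(x + 1j * h)).imag / h

    def central_difference(f, x, h=1e-8):
        return ((f(x + h) - f(x - h)) / (2.0 * h)).real

    # Stand-in nonlinear function; the dissertation applies the same idea to the
    # beam-column residual equations, which are not reproduced here.
    f = lambda x: cmath.exp(x) / cmath.sqrt(cmath.sin(x) ** 3 + cmath.cos(x) ** 3)

    x0 = 1.5
    print("complex step :", complex_step_derivative(f, x0))
    print("central diff :", central_difference(f, x0))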
363.
Performance of reverse osmosis based desalination process using spiral wound membrane: Sensitivity study of operating parameters under variable seawater conditions
Aladhwani, S.H., Al-Obaidi, Mudhar A.A.R., Mujtaba, Iqbal, 28 March 2022
Reverse Osmosis (RO) processes account for 80% of the world's desalination capacity. Deployment of RO for seawater desalination is increasing rapidly because of its high efficiency in removing salts at a lower energy consumption than thermal desalination technologies such as MSF and MED. Among the different membrane types, the spiral wound membrane is one of the most widely used. However, there is no in-depth study of the performance of spiral wound membranes in terms of salt rejection, water quality, water recovery and specific energy consumption over a wide range of seawater salinity, temperature, feed flowrate and pressure using a high-fidelity yet realistic process model, which is therefore the focus of this study. The membrane is operated within the manufacturer's recommended conditions. The outcome of this research will help designers select the optimum RO network configuration for a large-scale desalination process.
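As a rough illustration of the kind of parameter sweep described above, the Python sketch below runs a lumped solution-diffusion RO model over a range of feed salinities and reports recovery, salt rejection and specific energy consumption. The one-line flux model and all coefficients are assumed placeholder values, not the high-fidelity spiral-wound model used in the study.

    # Toy lumped RO model: solution-diffusion water flux with a van't Hoff style
    # osmotic pressure. All coefficients (permeabilities, area, pump efficiency)
    # are illustrative assumptions, not the study's validated model parameters.

    A_W = 3.0e-7      # water permeability, m3/(m2 s bar)   (assumed)
    B_S = 2.0e-7      # salt permeability, m/s              (assumed)
    AREA = 37.0       # effective membrane area, m2         (assumed)
    ETA_PUMP = 0.8    # high-pressure pump efficiency       (assumed)

    def ro_performance(feed_salinity_g_l, feed_pressure_bar, feed_flow_m3_h, temp_c):
        # Rough osmotic pressure (bar) with a mild temperature correction.
        pi_feed = 0.78 * feed_salinity_g_l * (temp_c + 273.15) / 298.15
        jw = A_W * max(feed_pressure_bar - pi_feed, 0.0)   # water flux, m3/(m2 s)
        q_permeate = jw * AREA * 3600.0                    # m3/h
        recovery = min(q_permeate / feed_flow_m3_h, 0.99)
        # Permeate salinity from the salt-flux / water-flux ratio.
        c_permeate = B_S * feed_salinity_g_l / jw if jw > 0 else feed_salinity_g_l
        rejection = 1.0 - c_permeate / feed_salinity_g_l
        # Specific energy consumption of the high-pressure pump, kWh per m3 permeate.
        sec = feed_pressure_bar * 1e5 / 3.6e6 / (recovery * ETA_PUMP)
        return recovery, rejection, sec

    # One-at-a-time sweep of feed salinity at fixed pressure, flow and temperature.
    for salinity in (35.0, 40.0, 45.0):                    # g/L
        rec, rej, sec = ro_performance(salinity, 60.0, 3.0, 25.0)
        print(f"{salinity:4.0f} g/L  recovery {rec:5.1%}  rejection {rej:6.2%}  SEC {sec:4.2f} kWh/m3")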
364.
Analysis and Development of Control Methodologies for Semi-active Suspensions
Ghasemalizadeh, Omid, 14 November 2016
Semi-active suspensions have drawn particular attention due to their superior performance over other types of suspensions. One of their advantages is that their damping coefficient can be controlled without the need for any external source of power. In this study, a handful of control approaches are implemented on a car model using MATLAB/Simulink. The investigated control methodologies are skyhook, groundhook, hybrid skyhook-groundhook, Acceleration Driven Damper, Power Driven Damper, H∞ Robust Control, Fuzzy Logic Controller, and Inverse ANFIS. H∞ Robust Control is an advanced method that guarantees transient performance and rejects external disturbances. It is shown that H∞ control with the proposed modification has the best performance, although its relatively high computational cost could be considered a drawback. The proposed Inverse ANFIS controller combines the power of fuzzy systems with neural networks to significantly improve vehicle ride metrics.
In this study, a novel approach is introduced to analyze and fine-tune semi-active suspension control algorithms. In some cases, such as military trucks moving on off-road terrain, it is critical to keep the vehicle ride quality within an acceptable range. Semi-active suspensions provide more control over the ride metrics than passive suspensions while remaining more cost-effective than active suspensions. The proposed methodology investigates the skyhook-groundhook hybrid controller by conducting a sensitivity analysis of the controller performance to varying vehicle and road parameters. This approach uses sensitivity analysis and a one-at-a-time methodology to find and reach the optimum point of the vehicle suspension. Furthermore, real-time tuning of this controller is studied. Online tuning helps keep the ride quality of the vehicle close to its optimum point while the vehicle parameters are changing. A quarter-car model is used for all simulations and analyses. / Ph. D. / Passenger safety and comfort have always been two major concerns in designing and engineering vehicles. Suspensions play a vital role in this regard: they are there to ensure a smooth and comfortable ride. Many technologies have been developed to increase suspension performance and customize its functionality, but only a few developments have led to a new family of suspensions and opened a broad field in automotive engineering for researchers to apply their own twists and tweaks. One fascinating technology, developed a few decades ago, is the semi-active suspension. Its advantage over conventional suspensions is that its damping can be adjusted on the fly. This property can be combined with a control methodology to improve the ride experience further compared to conventional suspensions.
In this dissertation, some novel control methodologies are developed and compared with existing ones. The results are discussed separately for each controller.
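The hybrid skyhook-groundhook law investigated above is commonly written as a blend of a skyhook term (damping the sprung-mass velocity) and a groundhook term (damping the unsprung-mass velocity), clipped to the damper's feasible range. The Python sketch below shows this textbook form for a quarter-car; the gains, damping limits and blend factor are illustrative assumptions, not the tuned values from the dissertation.

    # Hybrid skyhook-groundhook semi-active damping law (textbook form) for a
    # quarter-car model. Gains, damping limits, and alpha are illustrative values.

    C_MIN, C_MAX = 300.0, 3000.0   # damper's feasible damping range, N s/m (assumed)
    C_SKY = 2500.0                 # skyhook gain (assumed)
    C_GND = 2500.0                 # groundhook gain (assumed)
    ALPHA = 0.6                    # 1.0 = pure skyhook (comfort), 0.0 = pure groundhook

    def hybrid_damper_force(v_sprung, v_unsprung):
        v_rel = v_sprung - v_unsprung           # damper relative velocity
        # Skyhook component: damp the body only when it helps (semi-active constraint).
        f_sky = C_SKY * v_sprung if v_sprung * v_rel > 0.0 else 0.0
        # Groundhook component: damp the wheel-hop motion when it helps.
        f_gnd = -C_GND * v_unsprung if -v_unsprung * v_rel > 0.0 else 0.0
        f_desired = ALPHA * f_sky + (1.0 - ALPHA) * f_gnd
        # The damper can only dissipate energy: clip to what C_MIN..C_MAX can deliver.
        if abs(v_rel) < 1e-9:
            return 0.0
        c_equiv = min(max(f_desired / v_rel, C_MIN), C_MAX)
        return c_equiv * v_rel

    # Example: body moving up faster than the wheel -> damper extends, force resists it.
    print(hybrid_damper_force(v_sprung=0.30, v_unsprung=0.10))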
365.
Addressing inequalities in eye health with subsidies and increased fees for General Ophthalmic Services in socio-economically deprived communities: A sensitivity analysis
Shickle, D., Todkill, D., Chisholm, Catharine M., Rughani, S., Griffin, M., Cassels-Brown, A., May, H., Slade, S.V., Davey, Christopher J., January 2015
Objectives:
Poor knowledge of eye health, concerns about the cost of spectacles, mistrust of optometrists and limited geographical access in socio-economically deprived areas are barriers to accessing regular eye examinations and result in low uptake and subsequent late presentation to ophthalmology clinics. Personal Medical Services (PMS) were introduced in the late 1990s to provide locally negotiated solutions to problems associated with inequalities in access to primary care. An equivalent approach to delivery of optometric services could address inequalities in the uptake of eye examinations.
Study design:
One-way and multiway sensitivity analyses.
Methods:
Variations in assumptions about equipment and accommodation costs, uptake and appointment length were included in the models. The thresholds for the sensitivity analyses were a cost per person tested below the GOS1 fee paid by the NHS, and break-even between income and expenditure, assuming no cross-subsidy from profits on sales of optical appliances.
Results:
Cost per test ranged from £24.01 to £64.80 and subsidy required varied from £14,490 to £108,046. Unused capacity utilised for local enhanced service schemes such as glaucoma referral refinement reduced the subsidy needed.
Conclusions:
In order to support the financial viability of primary eye care in socio-economically deprived communities, income is required from additional subsidies or from sources other than eye examinations, such as ophthalmic or other optometric community services. This would require a significant shift of activity from secondary to primary care locations. The subsidy required could also be justified by the utility gain from earlier detection of preventable sight loss. / Yorkshire Eye Research, NHS Leeds and RNIB
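The break-even arithmetic behind such a one-way sensitivity analysis is straightforward: cost per test falls as uptake rises, and the subsidy is the shortfall between that cost and the fee income. The Python sketch below reproduces this calculation with entirely assumed cost figures; the paper's actual inputs are not given in the abstract.

    # One-way sensitivity of cost per eye test and required subsidy (illustrative
    # figures only; the paper's actual costs, fee and uptake values are not in
    # the abstract).

    GOS1_FEE = 21.31               # assumed NHS sight-test fee per test, GBP
    FIXED_COSTS = 55_000.0         # assumed annual premises + equipment + staff, GBP
    COST_PER_TEST_VARIABLE = 3.0   # assumed consumables etc. per test, GBP

    def cost_and_subsidy(tests_per_year):
        cost_per_test = FIXED_COSTS / tests_per_year + COST_PER_TEST_VARIABLE
        # Break-even subsidy: the shortfall between cost and fee income, with no
        # cross-subsidy from spectacle sales.
        shortfall_per_test = max(cost_per_test - GOS1_FEE, 0.0)
        return cost_per_test, shortfall_per_test * tests_per_year

    # Vary uptake (tests delivered per year) one at a time, holding costs fixed.
    for tests in (1000, 2000, 3000, 4000):
        cpt, subsidy = cost_and_subsidy(tests)
        print(f"{tests:5d} tests/yr  cost per test GBP {cpt:6.2f}  subsidy GBP {subsidy:9,.0f}")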
366.
Error modeling of the carpal wrist
Saccoccio, Gregory Nicholas, 13 February 2009
In recent years, increased emphasis has been placed on the development of parallel-architecture mechanisms for use as robotic manipulators. Parallel robots offer the benefits of higher load-carrying capacity, greater positioning accuracy and lower weight when compared to serial devices. However, robotic wrist development has traditionally focused on serial mechanisms having a large, spherical workspace and simpler kinematic solutions. The Carpal wrist is a unique parallel mechanism consisting of a fixed base and a movable output plane connected via three serial kinematic chains. The forward and inverse kinematic problems of the Carpal wrist are solved in closed form, making the device suitable for use as a new type of robotic wrist. The closed-form solutions depend on the assumptions that the fixed and moving planes are symmetric about a mid-plane and that the three kinematic chains connecting the planes are identical. This thesis investigates the errors that result when those assumptions are violated by manufacturing and assembly errors. In the non-ideal model, pose error is found by iteratively solving a system of equations describing the output plane position and orientation and comparing the result with the ideal solution. The error model is a tool for predicting the effects of kinematic parameter errors on the positioning accuracy and reachable workspace of the Carpal wrist. In this work, a general error model is developed and validated for a range of parameter error values. Special-case results are presented for errors in the individual parameters. / Master of Science
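The error-modeling procedure described above (solve the position equations numerically with nominal parameters and again with small parameter errors, then compare the two poses) can be illustrated on a much simpler planar linkage. In the Python sketch below, the two-legged toy mechanism and the assumed assembly errors stand in for the Carpal wrist's actual kinematics, which are not reproduced here.

    # Toy illustration of the error-modeling procedure: solve the position
    # equations of a simple planar parallel linkage iteratively, once with ideal
    # parameters and once with small assembly errors, and report the pose error.
    import numpy as np
    from scipy.optimize import fsolve

    def platform_position(base_pts, leg_lengths, guess=(0.5, 1.0)):
        """End-point (x, y) where two legs of given length meet, found iteratively."""
        def residuals(p):
            x, y = p
            return [np.hypot(x - bx, y - by) - L
                    for (bx, by), L in zip(base_pts, leg_lengths)]
        return fsolve(residuals, guess)

    legs = [1.2, 1.2]                              # actuated leg lengths (m)
    ideal_base = [(0.0, 0.0), (1.0, 0.0)]          # symmetric, as the closed form assumes
    real_base  = [(0.002, -0.001), (1.003, 0.0)]   # hypothetical assembly errors (m)

    p_ideal = platform_position(ideal_base, legs)
    p_real = platform_position(real_base, legs)
    print("ideal pose   :", p_ideal)
    print("perturbed    :", p_real)
    print("pose error m :", np.linalg.norm(p_real - p_ideal))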
367.
Simulation and sensitivity analysis of spiral wound reverse osmosis process for the removal of dimethylphenol from wastewater using 2-D dynamic model
Al-Obaidi, Mudhar A.A.R., Kara-Zaitri, Chakib, Mujtaba, Iqbal, 05 May 2018
Reverse Osmosis (RO) processes are readily used for removing pollutants such as dimethylphenol from wastewater. A number of operating parameters must be controlled within the process constraints to achieve efficient removal of such pollutants. Understanding the process dynamics is essential and is a prerequisite for designing effective controllers for any process. In this work, a detailed distributed two-dimensional dynamic model (x and y dimensions and time) for a spiral-wound RO process is developed, extending the authors' previously published 2-D steady-state model. The model is used to capture the dynamics of the RO process for the removal of dimethylphenol from wastewater. The performance of the 2-D model is compared with that obtained using a 1-D dynamic model before the model is used to investigate the performance of the RO process over a range of operating conditions.
368.
Simulation and optimisation of a medium scale reverse osmosis brackish water desalination system under variable feed quality: Energy saving and maintenance opportunity
Al-Obaidi, Mudhar A.A.R., Alsarayreh, Alanood A., Bdour, A., Jassam, S.H., Rashid, F.L., Mujtaba, Iqbal, 13 July 2023
In this work, we considered model-based simulation and optimisation of a medium scale brackish water desalination process. The mathematical model is validated using actual multistage RO plant data from Al-Hashemite University (Jordan). Using the validated model, a sensitivity study of the effect of different operating parameters, such as pump pressure, brackish water flow rate and seasonal water temperature (covering the whole year), on performance indicators such as productivity, product salinity and specific energy consumption of the process is conducted. For a given feed flow rate and pump pressure, the winter season produces less freshwater than the summer, in line with the assumption that winter water demand is lower than summer demand.
With soaring energy prices globally, any opportunity to reduce energy use is not only desirable from the economic point of view but an absolute necessity to meet the net zero carbon emission pledge made by many nations, as most desalination plants worldwide use fossil fuel as their main source of energy. Therefore, the second part of this paper attempts to minimise the specific energy consumption of the RO system using a model-based optimisation technique. The study yielded not only a 19% reduction in specific energy consumption but also a 4.46% increase in productivity in a particular season of the year. For a fixed product demand, this opens the opportunity to schedule cleaning and maintenance of the RO process without requiring a full system shutdown.
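The optimisation step can be posed as a small constrained minimisation: choose pump pressure and feed flow to minimise specific energy consumption while still meeting the freshwater demand. The Python sketch below sets this up with scipy's SLSQP on a toy one-equation flux model; the model form, coefficients and demand figure are assumptions, not the validated multistage model of the Al-Hashemite plant.

    # Sketch of the energy-minimisation step: choose pump pressure and feed flow to
    # minimise specific energy consumption while meeting a freshwater demand.
    # The one-line flux model and all coefficients are placeholders.
    from scipy.optimize import minimize

    A_W, AREA, ETA = 8.0e-7, 150.0, 0.8     # permeability, area, pump efficiency (assumed)
    SALINITY, DEMAND = 2.0, 4.0             # brackish feed g/L, required product m3/h (assumed)

    def permeate_flow(pressure_bar, feed_flow_m3_h):
        pi_feed = 0.78 * SALINITY                        # rough osmotic pressure, bar
        q_p = A_W * max(pressure_bar - pi_feed, 0.0) * AREA * 3600.0
        return min(q_p, 0.9 * feed_flow_m3_h)            # cap recovery at 90 %

    def sec_kwh_m3(x):
        pressure, feed_flow = x
        q_p = permeate_flow(pressure, feed_flow)
        pump_kw = pressure * 1e5 * feed_flow / 3600.0 / 1000.0 / ETA
        return pump_kw / max(q_p, 1e-6)

    res = minimize(
        sec_kwh_m3,
        x0=[12.0, 8.0],                                  # initial pressure (bar), feed flow (m3/h)
        bounds=[(6.0, 16.0), (5.0, 12.0)],
        constraints=[{"type": "ineq",
                      "fun": lambda x: permeate_flow(*x) - DEMAND}],
        method="SLSQP",
    )
    print("optimal pressure, feed flow:", res.x)
    print("specific energy (kWh/m3)  :", res.fun)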
369.
Modeling Strategies for Computational Systems Biology
Simoni, Giulia, 20 March 2020
Mathematical models and their associated computer simulations are nowadays widely used in several research fields, such as the natural sciences, engineering, and the social sciences. In the context of systems biology, they provide a rigorous way to investigate how complex regulatory pathways are connected and how the disruption of these processes may contribute to the development of a disease, ultimately investigating the suitability of specific molecules as novel therapeutic targets. In the last decade, the launch of the precision medicine initiative has motivated the need to define innovative computational techniques that could be used for customizing therapies. In this context, the combination of mathematical models and computational strategies is an essential tool for biologists, who can analyze complex system pathways, as well as for the pharmaceutical industry, which is involved in promoting programs for drug discovery.
In this dissertation, we explore different modeling techniques that are used for the simulation and analysis of complex biological systems. We analyze the state of the art of simulation algorithms in both the stochastic and the deterministic frameworks. The same dichotomy has been studied in the context of sensitivity analysis, identifying the main pros and cons of the two approaches. Moreover, we studied the quantitative systems pharmacology (QSP) modeling approach, which elucidates the mechanism of action of a drug on the biological processes underlying a disease. Specifically, we present the definition, calibration and validation of a QSP model describing Gaucher disease type 1 (GD1), one of the most common rare lysosomal storage disorders. All of these techniques are finally combined to define a novel computational pipeline for patient stratification. Our approach uses modeling techniques, such as model simulation, sensitivity analysis and QSP modeling, in combination with experimental data to identify the key mechanisms responsible for the stratification. The pipeline has been applied to three test cases in different biological contexts: a whole-body model of dyslipidemia, the QSP model of GD1 and a QSP model of cardiac electrophysiology. In these test cases, the pipeline proved to be accurate and robust, allowing the interpretation of the mechanistic differences underlying the phenotype classification.
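The stochastic/deterministic dichotomy discussed above can be seen on a minimal birth-death model of mRNA production and degradation: Gillespie's direct method simulates individual reaction events, while the corresponding ODE gives the deterministic mean. The Python sketch below uses arbitrary illustrative rate constants and is not one of the models studied in the dissertation.

    # Stochastic vs deterministic simulation of a minimal birth-death process
    # (mRNA production at rate k, degradation at rate g*m). Rates are arbitrary.
    import random
    import math

    K_PROD, K_DEG = 10.0, 0.5          # production (molecules/time) and degradation rates

    def gillespie(t_end, m0=0, seed=1):
        random.seed(seed)
        t, m, trajectory = 0.0, m0, [(0.0, m0)]
        while t < t_end:
            a1, a2 = K_PROD, K_DEG * m          # reaction propensities
            a_total = a1 + a2
            t += random.expovariate(a_total)    # time to next reaction event
            if random.random() < a1 / a_total:  # choose which reaction fires
                m += 1
            else:
                m -= 1
            trajectory.append((t, m))
        return trajectory

    def ode_mean(t, m0=0):
        # Closed-form solution of dm/dt = K_PROD - K_DEG * m.
        return K_PROD / K_DEG + (m0 - K_PROD / K_DEG) * math.exp(-K_DEG * t)

    run = gillespie(t_end=20.0)
    print("stochastic copy number at t=20 :", run[-1][1])
    print("deterministic mean at t=20     :", round(ode_mean(20.0), 2))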
370.
Machine Learning-Driven Uncertainty Quantification and Parameter Analysis in Fire Risk Assessment for Nuclear Power Plants
Sahin, Elvan, 27 January 2025
Fire poses a critical risk to the safe operation of nuclear power plants (NPPs), with electrical cabinet and liquid spill fires being among the most frequent and challenging scenarios to address. Traditional fire risk assessment models often lack precision due to complex physics and inherent uncertainties, especially in predicting the heat release rate (HRR) — a key parameter for assessing fire severity. This dissertation presents an innovative framework that integrates machine learning (ML) models, particularly neural networks and tree-based algorithms, with uncertainty quantification (UQ) techniques to enhance fire modeling and risk assessment in NPPs. The framework is applied to electrical enclosure cabinets and spill fires that represent about 50% of challenging fire scenarios in NPPs. By leveraging extensive experimental datasets, this study develops ML models that capture the influence of critical fire parameters on HRR, enabling more accurate predictions of fire behavior.
Key features are evaluated to establish their influence on peak HRR. Advanced UQ tools, including Monte Carlo sampling and sensitivity analysis, are applied to quantify uncertainties and identify the parameters with the greatest impact on model output variability. The resulting ML-driven insights allow for a refined understanding of fire dynamics, guiding experimental planning and uncertainty reduction efforts. For electrical enclosure fires, the models highlight the importance of cable surface area, heat release rate per unit area (HRRPUA) of the cable, ignition source heat release rate, ventilation area, and cabinet volume in determining peak HRR. Sensitivity analysis revealed that HRRPUA is the most significant parameter. For spill fires, the models underscore the significance of substrate thermal conductivity and slope, ignition delay time, and fuel properties, showing that fuel amount and properties are key in fixed-quantity spills, while fuel discharge rate and properties are most influential in continuous spills. / Doctor of Philosophy /
This dissertation introduces a new approach that uses machine learning (ML) to analyze experimental fire data relevant to NPP scenarios, identifying key factors that influence fire behavior. The models help predict a fire's peak heat release rate (HRR), a measure of fire intensity, by learning from past data and capturing important patterns that traditional models can miss. Additionally, advanced uncertainty analysis methods are used to assess which factors contribute the most to the variability in fire outcomes, helping researchers to focus on the most impactful areas for fire prevention and control.
By applying this ML-based framework, the study aims to improve the accuracy of fire risk assessments, supporting safer NPP designs and more effective response strategies. The findings offer a pathway to more precise and reliable fire modeling, ultimately helping protect people, infrastructure, and the environment from fire-related hazards in nuclear facilities.
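The workflow described above (train an ML surrogate on fire data, propagate input uncertainty through it by Monte Carlo sampling, and rank the inputs by their contribution to output variability) can be sketched as follows in Python. The training data here are synthetic and the feature names are placeholders for the cabinet-fire inputs listed in the abstract, not the experimental datasets used in the dissertation.

    # Monte Carlo uncertainty propagation through an ML surrogate, with a simple
    # one-at-a-time sensitivity ranking. The "data" are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    FEATURES = ["cable_area_m2", "hrrpua_kw_m2", "ignition_hrr_kw", "vent_area_m2"]

    # Synthetic training data: peak HRR as a noisy function of the inputs.
    X = rng.uniform([0.5, 100, 5, 0.01], [5.0, 600, 80, 0.5], size=(500, 4))
    y = 0.4 * X[:, 0] * X[:, 1] + 2.0 * X[:, 2] + 150 * X[:, 3] + rng.normal(0, 20, 500)

    surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    # Monte Carlo propagation: sample the uncertain inputs, push them through the
    # surrogate, and look at the spread of predicted peak HRR.
    samples = rng.uniform([0.5, 100, 5, 0.01], [5.0, 600, 80, 0.5], size=(2000, 4))
    pred = surrogate.predict(samples)
    print(f"peak HRR: mean {pred.mean():.0f} kW, 95% interval "
          f"[{np.percentile(pred, 2.5):.0f}, {np.percentile(pred, 97.5):.0f}] kW")

    # One-at-a-time sensitivity: freeze all but one input at its median and see how
    # much output variation that single input still produces.
    medians = np.median(samples, axis=0)
    for j, name in enumerate(FEATURES):
        frozen = np.tile(medians, (2000, 1))
        frozen[:, j] = samples[:, j]
        print(f"{name:18s} output std {surrogate.predict(frozen).std():7.1f} kW")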