About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.

A Computer-Based Decision Tool for Prioritizing the Reduction of Airborne Chemical Emissions from Canadian Oil Refineries Using Estimated Health Impacts

Gower, Stephanie Karen January 2007 (has links)
Petroleum refineries emit a variety of airborne substances which may be harmful to human health. HEIDI II (Health Effects Indicators Decision Index II) is a computer-based decision analysis tool which assesses airborne emissions from Canada's oil refineries for reduction, based on ordinal ranking of estimated health impacts. The model was designed by a project team within NERAM (Network for Environmental Risk Assessment and Management) and assembled with significant stakeholder consultation. HEIDI II is publicly available as a deterministic Excel-based tool which ranks 31 air pollutants based on predicted disease incidence or estimated DALYs (disability-adjusted life years). The model includes calculations to account for average annual emissions, ambient concentrations, stack height, meteorology/dispersion, photodegradation, and the population distribution around each refinery. Different formulations of continuous dose-response functions were applied to nonthreshold-acting air toxics, threshold-acting air toxics, and nonthreshold-acting CACs (criteria air contaminants). An updated probabilistic version of HEIDI II was developed using Matlab code to account for parameter uncertainty and identify key leverage variables. Sensitivity analyses indicate that parameter uncertainty in the model variables for annual emissions and for concentration-response/toxicological slopes has the greatest leverage on predicted health impacts. Scenario analyses suggest that the geographic distribution of population density around a refinery site is an important predictor of total health impact. Several ranking metrics (predicted case incidence, simple DALY, and complex DALY) and ordinal ranking approaches (deterministic model, average from Monte Carlo simulation, test of stochastic dominance) were used to identify priority substances for reduction; the results were similar in each case.
The predicted impacts of primary and secondary particulate matter (PM) consistently outweighed those of the air toxics. Nickel, PAH (polycyclic aromatic hydrocarbons), BTEX (benzene, toluene, ethylbenzene and xylene), sulphuric acid, and vanadium were consistently identified as priority air toxics at refineries where their emissions were reported. For many substances, the rank order is indeterminate when parametric uncertainty and variability are considered.
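The ordinal-ranking idea in the abstract can be sketched in a few lines. The substances, emission figures, slopes, and lognormal spreads below are hypothetical (the real HEIDI II model also handles dispersion, stack height, photodegradation, and threshold toxics); the sketch only shows how a deterministic ranking and a Monte Carlo average ranking are compared:

```python
import random

def impact(emission, slope, population):
    # Simplified linear, nonthreshold health impact: cases/year taken as
    # proportional to emitted mass, concentration-response slope, and
    # exposed population. Illustrative only.
    return emission * slope * population

def rank_deterministic(substances):
    # Ordinal ranking, highest predicted impact first.
    return sorted(substances, key=lambda s: -impact(s["e"], s["slope"], s["pop"]))

def rank_monte_carlo(substances, n=2000, seed=1):
    # Average rank over draws with lognormal uncertainty on emissions and
    # slopes, the two leverage variables the abstract identifies.
    rng = random.Random(seed)
    totals = {s["name"]: 0 for s in substances}
    for _ in range(n):
        scored = sorted(
            substances,
            key=lambda s: -impact(s["e"] * rng.lognormvariate(0.0, 0.5),
                                  s["slope"] * rng.lognormvariate(0.0, 0.5),
                                  s["pop"]))
        for rank, s in enumerate(scored, start=1):
            totals[s["name"]] += rank
    return sorted(totals, key=totals.get)   # best (lowest mean rank) first

# Hypothetical inventory for one refinery: (tonnes/yr, slope, exposed people).
subs = [{"name": "PM2.5",   "e": 120.0, "slope": 1e-3, "pop": 5e5},
        {"name": "nickel",  "e": 0.4,   "slope": 5e-2, "pop": 5e5},
        {"name": "benzene", "e": 15.0,  "slope": 1e-4, "pop": 5e5}]
print(rank_deterministic(subs)[0]["name"])
print(rank_monte_carlo(subs))
```

With these invented inputs the two approaches agree, mirroring the abstract's finding that the deterministic and Monte Carlo rankings gave similar priority lists.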

Back-calculating emission rates for ammonia and particulate matter from area sources using dispersion modeling

Price, Jacqueline Elaine 15 November 2004 (has links)
Engineering directly impacts current and future regulatory policy decisions. The foundation of air pollution control and air pollution dispersion modeling lies in the math, chemistry, and physics of the environment. Regulatory decision making must therefore rely upon sound science and engineering as the core of appropriate policy making (objective analysis in lieu of subjective opinion). This research evaluated particulate matter and ammonia concentration data as well as two modeling methods, a backward Lagrangian stochastic model and a Gaussian plume dispersion model. The analysis assessed the uncertainty surrounding each sampling procedure in order to gain a better understanding of the uncertainty in the final emission rate calculation (a basis for federal regulation), and it assessed the differences between emission rates generated using the two dispersion models. First, this research evaluated the uncertainty encompassing the gravimetric sampling of particulate matter and the passive ammonia sampling technique at an animal feeding operation. Future research will further characterize the wind velocity profile and the vertical temperature gradient during the modeling period; this information will help quantify the uncertainty of the meteorological inputs to the dispersion model, which will aid in understanding the propagated uncertainty in the dispersion modeling outputs. Next, an evaluation of the emission rates generated by the Industrial Source Complex (Gaussian) model and the WindTrax (backward Lagrangian stochastic) model revealed that the concentrations each model calculates from its own average emission rate are extremely close in value. However, the average emission rates calculated by the two models vary by a factor of 10, which is troubling because current and future sources are regulated based on emission rate data from previous time periods.
Emission factors are published for the regulation of various sources, and these emission factors are derived from back-calculated model emission rates and site management practices. Thus, the factor-of-10 difference in emission rates could prove problematic for regulation if the model from which an emission rate is back-calculated is not also the model used to predict future downwind pollutant concentrations.
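The back-calculation the abstract describes can be illustrated with a textbook ground-level Gaussian plume (a simplification, not the actual Industrial Source Complex or WindTrax formulations, and with invented numbers): choose the emission rate that reproduces a measured downwind concentration.

```python
import math

def gaussian_plume_conc(Q, u, sigma_y, sigma_z, H=0.0):
    # Ground-level centreline concentration (g/m^3) downwind of a source
    # emitting Q (g/s) into wind speed u (m/s), with dispersion
    # coefficients sigma_y, sigma_z (m) and effective release height H (m).
    return (Q / (math.pi * u * sigma_y * sigma_z)) * math.exp(-H**2 / (2 * sigma_z**2))

def back_calculate_Q(C_measured, u, sigma_y, sigma_z, H=0.0):
    # Invert the plume equation: the emission rate that reproduces the
    # measured downwind concentration.
    return C_measured * math.pi * u * sigma_y * sigma_z * math.exp(H**2 / (2 * sigma_z**2))

# Round trip: forward-model a known rate, then recover it from the "measurement".
Q_true = 0.8   # g/s of ammonia (illustrative)
C = gaussian_plume_conc(Q_true, u=3.0, sigma_y=25.0, sigma_z=12.0)
print(back_calculate_Q(C, u=3.0, sigma_y=25.0, sigma_z=12.0))  # ~0.8
```

The round trip is exact only because the same model is used both ways; as the abstract notes, back-calculating with one model and predicting with another breaks this consistency.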

Probabilistic Risk Assessment of Special Protection Systems Operations and Design Refinement

Hsiao, Tsun-Yu 04 July 2008 (has links)
In order to prevent power system blackouts and enhance system reliability, various forms of special protection systems (SPS) and defense plans have been implemented by utilities around the world. One of the main concerns in the design of an SPS is to assure that the system meets its reliability specification requirements. The failure of an SPS to detect the defined conditions and carry out the required actions, or the taking of unnecessary actions, could lead to serious and costly consequences. Thus, a quantitative reliability assessment of SPS is important and necessary. Using single point values for the parameters when evaluating the reliability of an SPS might give incomplete information about system reliability, owing to uncertainty in the reliability model and input data. When a review study suggests that modifications of the existing scheme are necessary, sensitivity analysis techniques provide the tools to identify the components that have the most significant effects on the reliability of the SPS. In this dissertation, by incorporating interval theory, a risk reduction worth importance concept, and a probabilistic risk-based index, a procedure is proposed to conduct parameter uncertainty analysis, identify critical factors in the reliability model, perform probabilistic risk assessment (PRA), and determine a better option for the refinement of the studied SPS decision process logic module. One of the existing SPSs of the Taipower system is used to illustrate the practicability and appropriateness of the proposed design refinement procedure. With the advent of deregulation in the power industry, utilities have experienced great pressure to utilize their current facilities to the maximum level. SPSs are often considered a cost-effective way of achieving this goal. This dissertation also presents a framework for quantitative assessment of the benefits and risks of SPS implementation.
Changes in energy, spinning reserve and customer interruption costs resulting from SPS operations are evaluated and risks of SPS operations and system security are assessed. The proposed methodologies are useful for power system planners and operators to evaluate the value and effectiveness of SPS for the remedy of transmission congestion and reliability problems.
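A minimal sketch of the risk reduction worth (RRW) importance measure and an interval-style uncertainty treatment mentioned in the abstract, on a hypothetical SPS reliability model (not Taipower's actual scheme; the structure and failure probabilities are invented):

```python
import itertools

def sps_unavailability(q):
    # Toy SPS model: two redundant detection channels in parallel, in
    # series with a single action (tripping) module; q maps component
    # name to failure probability. Purely illustrative structure.
    return 1 - (1 - q["det1"] * q["det2"]) * (1 - q["act"])

def rrw(q, comp):
    # Risk Reduction Worth: base risk divided by the risk with the
    # component made perfect. The larger the ratio, the more refining
    # that component would pay off.
    perfect = dict(q)
    perfect[comp] = 0.0
    return sps_unavailability(q) / sps_unavailability(perfect)

def interval_rrw(bounds, comp):
    # Crude stand-in for interval theory: evaluate RRW at every corner
    # of the parameter box and return the enclosing [min, max] (the toy
    # model is monotone in each parameter, so corners bound the range).
    names = list(bounds)
    vals = [rrw(dict(zip(names, corner)), comp)
            for corner in itertools.product(*(bounds[n] for n in names))]
    return min(vals), max(vals)

q = {"det1": 0.01, "det2": 0.01, "act": 0.001}
print(rrw(q, "act"), rrw(q, "det1"))    # the single action module dominates
bounds = {"det1": (0.005, 0.02), "det2": (0.005, 0.02), "act": (5e-4, 2e-3)}
print(interval_rrw(bounds, "act"))      # RRW interval under parameter uncertainty
```

The wide RRW interval illustrates the abstract's point: a single point value can give incomplete information about which component most deserves design refinement.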

A methodology for the validated design space exploration of fuel cell powered unmanned aerial vehicles

Moffitt, Blake Almy 05 April 2010 (has links)
Unmanned Aerial Vehicles (UAVs) are the most dynamic growth sector of the aerospace industry today. The need to provide persistent intelligence, surveillance, and reconnaissance for military operations is driving the planned acquisition of over 5,000 UAVs over the next five years. The most pressing need is for quiet, small UAVs with endurance beyond what is possible with advanced batteries or small internal combustion propulsion systems. Fuel cell systems demonstrate high efficiency, high specific energy, low noise, low temperature operation, modularity, and rapid refuelability, making them a promising enabler of the small, quiet, and persistent UAVs that military planners are seeking. Despite the perceived benefits, the actual near-term performance of fuel cell powered UAVs is unknown. Until the auto industry began spending billions of dollars in research, fuel cell systems were too heavy for useful flight applications. However, the last decade has seen rapid development, with fuel cell gravimetric and volumetric power density nearly doubling every 2-3 years. As a result, a few design studies and demonstrator aircraft have appeared, but overall the design methodology and vehicles are still in their infancy. The design of fuel cell aircraft poses many challenges. Fuel cells differ fundamentally from combustion based propulsion in how they generate power and interact with other aircraft subsystems. As a result, traditional multidisciplinary analysis (MDA) codes are inappropriate. Building new MDAs is difficult since fuel cells are rapidly changing in design, and various competitive architectures exist for balance of plant, hydrogen storage, and all-electric aircraft subsystems. In addition, fuel cell design and performance data are closely protected, which makes validation difficult and uncertainty significant.
Finally, low specific power and high volumes compared to traditional combustion based propulsion result in more highly constrained design spaces that are problematic for design space exploration. To begin addressing the current gaps in fuel cell aircraft development, a methodology has been developed to explore and characterize the near-term performance of fuel cell powered UAVs. The first step of the methodology is the development of a valid MDA. This is accomplished by using propagated uncertainty estimates to guide the decomposition of an MDA into key contributing analyses (CAs) that can be individually refined and validated to increase the overall accuracy of the MDA. To assist in MDA development, a flexible framework for simultaneously solving the CAs is specified. This enables the MDA to be easily adapted to changes in technology and the changes in data that occur throughout a design process. Various CAs that model a polymer electrolyte membrane fuel cell (PEMFC) UAV are developed, validated, and shown to be in agreement with hardware-in-the-loop simulations of a fully developed fuel cell propulsion system. After creating a valid MDA, the final step of the methodology is the synthesis of the MDA with an uncertainty propagation analysis, an optimization routine, and a chance-constrained problem formulation. This synthesis allows an efficient calculation of the probabilistic constraint boundaries and Pareto frontiers that will govern the design space and influence design decisions relating to optimization and uncertainty mitigation. A key element of the methodology is uncertainty propagation. The methodology uses Systems Sensitivity Analysis (SSA) to estimate the uncertainty of key performance metrics due to uncertainties in design variables and in the accuracy of the CAs. A summary of SSA is given, along with key rules for properly decomposing an MDA for use with SSA.
Verification of SSA uncertainty estimates via Monte Carlo simulations is provided for both an example problem and a detailed MDA of a fuel cell UAV. Implementation of the methodology was performed on a small fuel cell UAV designed to carry a 2.2 kg payload with 24 hours of endurance. Uncertainty distributions for both the design variables and the CAs were estimated based on experimental results and were found to dominate the design space. To reduce uncertainty and test the flexibility of the MDA framework, CAs were replaced with either empirical or semi-empirical relationships during the optimization process. The final design was validated via a hardware-in-the-loop simulation. Finally, the fuel cell UAV probabilistic design space was studied. A graphical representation of the design space was generated and the optima due to deterministic and probabilistic constraints were identified. The methodology was used to identify Pareto frontiers of the design space, which were shown on contour plots. Unanticipated discontinuities of the Pareto fronts were observed as different constraints became active, providing useful information on which to base design and development decisions.
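The SSA-versus-Monte-Carlo verification step can be illustrated on a toy endurance metric. The function and every number below are assumptions for the sketch, not the thesis's MDA: first-order sensitivity propagation of input standard deviations, checked against brute-force sampling.

```python
import math
import random

def endurance_h(E_Wh, eta, P_W):
    # Hypothetical performance metric: endurance [h] = usable tank energy
    # [Wh] x net system efficiency / cruise electric power [W].
    return E_Wh * eta / P_W

def linear_propagation(f, x, sigma, h=1e-6):
    # First-order, sensitivity-based estimate:
    # var(y) ~ sum_i (df/dx_i * sigma_i)^2, derivatives by central differences.
    var = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h * x[i]
        xm[i] -= h * x[i]
        dfdx = (f(*xp) - f(*xm)) / (2 * h * x[i])
        var += (dfdx * sigma[i]) ** 2
    return math.sqrt(var)

def monte_carlo_sd(f, x, sigma, n=100_000, seed=0):
    # Brute-force check of the linear estimate, echoing the thesis's
    # verification of SSA estimates against Monte Carlo simulation.
    rng = random.Random(seed)
    ys = [f(*[rng.gauss(m, s) for m, s in zip(x, sigma)]) for _ in range(n)]
    mean = sum(ys) / n
    return math.sqrt(sum((y - mean) ** 2 for y in ys) / (n - 1))

x = [1500.0, 0.45, 28.0]   # Wh, -, W: illustrative small-UAV numbers
u = [75.0, 0.02, 1.5]      # roughly 5% standard deviation on each input
print(linear_propagation(endurance_h, x, u))   # ~2.07 h
print(monte_carlo_sd(endurance_h, x, u))       # close, since spreads are small
```

For mildly nonlinear functions and small input spreads the two agree closely; large spreads or strong nonlinearity would separate them, which is exactly what the verification step is for.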

Sensitivity Analysis of Interface Fatigue Crack Propagation in Elastic Composite Laminates

Figiel, Lukasz 14 November 2004 (has links) (PDF)
Composite laminates are an important subject of modern technology and engineering. The most common mode of failure in these materials is probably interlaminar fracture (delamination). Delamination growth under applied fatigue loads usually leads to loss of structural integrity of the composite laminate, and hence its catastrophic failure. It is known that several parameters can affect the fatigue fracture performance of laminates. These include the constituent material properties, composite geometry, fatigue load variables, and environmental factors. Knowledge of the effects of these parameters on fatigue delamination growth can lead to a better understanding of composite fatigue fracture behaviour. The effects of some of these parameters can be elucidated by undertaking appropriate sensitivity analysis combined with the finite element method (FEM) and related software. The purpose of this work was three-fold. The first goal was the elaboration and computational implementation of FEM-based numerical strategies for the sensitivity analysis of interface fatigue crack propagation in elastic composite laminates. The second goal was the numerical determination and investigation of displacement and stress fields near the crack tip, contact pressures along crack surfaces, the mixed mode angle, the energy release rate, and the number of cumulative fatigue cycles. The third aim was to use the developed strategies to evaluate numerically the sensitivity gradients of the total energy release rate and fatigue life with respect to design variables of the curved boron/epoxy-aluminium (B/Ep-Al) composite laminate in two different material configurations under cyclic shear of constant amplitude. This study provided novel strategies for undertaking sensitivity analysis of delamination growth under fatigue loads for elastic composite laminates using the package ANSYS.
The numerical results of the work shed more light on the mechanisms of interfacial crack propagation under cyclic shear in the case of a curved B/Ep-Al composite laminate. Moreover, the computed sensitivity gradients demonstrated the advantages of using sensitivity analysis to pinpoint directions for the optimisation of the fatigue fracture performance of elastic laminates. The strategies proposed in this work can be used to study the sensitivity of interface fatigue crack propagation in other elastic laminates, provided the crack propagates at the interface between elastic, isotropic components. The strategies can potentially be extended to composites with interfacial cracks propagating between two non-isotropic constituents under a constant amplitude fatigue load. Finally, they can also be used to undertake sensitivity analysis of composite fatigue life with respect to the variables of the fatigue load.
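The fatigue-life sensitivity gradients described above can be sketched with a Paris-law crack-growth model. The crack geometry, stress range, and material constants below are hypothetical, and the real work computes such gradients from FEM fields in ANSYS; the sketch only shows what a normalised sensitivity gradient of fatigue life looks like.

```python
import math

def paris_life(a0, af, C, m, dK_of_a, n_steps=10_000):
    # Cycles to grow a crack from a0 to af under the Paris law
    # da/dN = C * dK(a)^m, integrated with a midpoint rule.
    da = (af - a0) / n_steps
    N = 0.0
    for i in range(n_steps):
        a = a0 + (i + 0.5) * da
        N += da / (C * dK_of_a(a) ** m)
    return N

# Illustrative centre-crack-like stress intensity range, dK = S*sqrt(pi*a),
# with a hypothetical stress range S (units nominal throughout).
dK = lambda a, S=80.0: S * math.sqrt(math.pi * a)

N0 = paris_life(1e-3, 1e-2, 1e-12, 3.0, dK)

# Normalised sensitivity gradient (dN/N)/(dC/C) by finite differences,
# the kind of design-variable gradient the sensitivity analysis yields.
eps = 1e-3
Np = paris_life(1e-3, 1e-2, 1e-12 * (1 + eps), 3.0, dK)
sens = (Np - N0) / N0 / eps
print(sens)   # ~ -1: fatigue life is inversely proportional to C
```

Because N scales as 1/C in this model, the normalised gradient is close to -1 exactly; gradients with respect to geometric or load variables would have to be computed numerically, as in the thesis.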

Energy-efficiency improvement of air handling units: A survey and energy-efficiency improvement of air handling units

Mooshtak, Mohsen, Ekström, Fredrik January 2015 (has links)
This report examines the electricity consumption of the ventilation units in the R building at Mälardalen University and investigates whether replacing the fan and motor in a ventilation unit, in the event of high electricity consumption, is economically profitable. The R building was built in 1993 and has the oldest ventilation units of all the university's properties; a ventilation unit's service life can be estimated at 20-30 years. According to studies, there are major opportunities for energy efficiency measures in the residential and service sector, where ventilation units with outdated technology and components can be energy wasters with high electricity consumption. The work is limited to examining in detail, in terms of energy efficiency, the ventilation unit with the highest specific fan power (SFP); the efficiency measures are limited to the air handling unit's fans and motors. The survey comprised pressure and electrical measurements on the ventilation units in the R building. The electrical measurements were needed to calculate the current electricity consumption of the fans, which proved to be high for each fan. The pressure measurements were performed to obtain the pressure rise across the fans and to calculate the supply and exhaust air flows; the pressure rise and flow are needed when selecting a new fan and motor. The survey shows that the total SFP values are high compared with new construction, where Boverket's building regulations require an SFP value below 2.0 kW/(m3/s). The highest SFP value is 4.4 kW/(m3/s), and a life cycle cost (LCC) calculation was performed for that ventilation unit. The LCC calculation is based on a calculation period of 10 years, the period the existing fans and motors are judged to last. The discount rate is set at 9% and the energy price increase at 5%, giving a real discount rate of 4%; the electricity price is the current 0.80 SEK/kWh. The LCC calculation shows that it is not profitable to switch to new fans and motors, and the payback method gave a 12-year repayment period, which is long compared with the literature reviewed. A sensitivity analysis with a doubled electricity price showed that replacing the existing fans and motors would then become economically viable.
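The report's discounting arithmetic can be reproduced directly. The 9% discount rate, 5% energy price increase, 0.80 SEK/kWh price, 10-year period, and 12-year payback come from the abstract; the investment and annual-saving figures below are invented to make the example concrete.

```python
def real_rate(nominal, escalation):
    # Real discount rate when energy prices escalate: (1+r)/(1+e) - 1.
    # With 9% nominal and 5% escalation this is ~3.8%, rounded to the 4%
    # quoted in the report.
    return (1 + nominal) / (1 + escalation) - 1

def npv_savings(annual_saving, r, years):
    # Present value of a constant annual saving over the calculation period.
    return sum(annual_saving / (1 + r) ** t for t in range(1, years + 1))

def simple_payback(investment, annual_saving):
    return investment / annual_saving

r = real_rate(0.09, 0.05)
print(round(r, 3))
# Hypothetical numbers: 120 000 SEK fan/motor retrofit, 10 000 SEK/yr saving.
print(npv_savings(10_000, r, 10) < 120_000)   # savings fall short over 10 years
print(simple_payback(120_000, 10_000))        # 12-year payback, as reported
```

Doubling the annual saving (the doubled-electricity-price scenario) halves the payback to 6 years and pushes the NPV of savings above the investment, matching the sensitivity analysis's conclusion.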

Schemes and Strategies to Propagate and Analyze Uncertainties in Computational Fluid Dynamics Applications

Geraci, Gianluca 05 December 2013 (has links) (PDF)
In this manuscript, three main contributions are presented concerning the propagation and analysis of uncertainty in computational fluid dynamics (CFD) applications. First, two novel numerical schemes are proposed: one based on a collocation approach, and the other on a finite-volume-like representation in the stochastic space. In both approaches, the key element is the introduction of a non-linear multiresolution representation in the stochastic space. The aim is twofold: reducing the dimensionality of the discrete solution, and applying a time-dependent refinement/coarsening procedure in the combined physical/stochastic space. Finally, an innovative strategy based on variance-based analysis is proposed for handling problems with a moderately large number of uncertainties in the context of robust design optimization. To make this optimization strategy more robust, the common ANOVA-like approach is also extended to high-order central moments (up to fourth order). The new approach is more robust than the original variance-based one, since the analysis relies on new sensitivity indices associated with a more complete statistical description.
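A variance-based (ANOVA/Sobol-type) sensitivity index of the kind this strategy builds on can be estimated with a pick-freeze Monte Carlo scheme. The toy model below has known first-order indices S1 = 0.2 and S2 = 0.8; the thesis's extension to third- and fourth-order central moments is not reproduced here.

```python
import random

def model(x1, x2):
    # Toy model with known Sobol indices: Var(y) = (1 + 4)/12 for
    # x1, x2 ~ U(0,1), so S1 = 0.2 and S2 = 0.8.
    return x1 + 2.0 * x2

def first_order_index(i, n=100_000, seed=3):
    # Pick-freeze estimator: S_i = Cov(y, y') / Var(y), where y' re-samples
    # every input except x_i (which is "frozen" to the same value).
    rng = random.Random(seed)
    ys, yps = [], []
    for _ in range(n):
        x = [rng.random(), rng.random()]
        xp = [rng.random(), rng.random()]
        xp[i] = x[i]                     # freeze the studied input
        ys.append(model(*x))
        yps.append(model(*xp))
    my, myp = sum(ys) / n, sum(yps) / n
    cov = sum((a - my) * (b - myp) for a, b in zip(ys, yps)) / (n - 1)
    var = sum((a - my) ** 2 for a in ys) / (n - 1)
    return cov / var

print(first_order_index(0))  # ~0.2
print(first_order_index(1))  # ~0.8
```

The same sampling machinery can be reused for higher-moment indices; the estimators just replace the covariance with higher-order conditional statistics, which is the direction the thesis takes.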

Uncertainty and sensitivity analysis of Ignalina NPP probabilistic safety assessment model

Bucevičius, Nerijus 19 June 2008 (has links)
Uncertainty analysis of the results of technical system modelling is especially relevant when the operation of hazardous systems, the functioning of safety systems, accident scenarios, or other risk-related questions are modelled. In such cases, particularly in the field of reactor safety analysis, it is very important that the modelling results be robust. In this work, an uncertainty and sensitivity analysis of the Ignalina NPP probabilistic safety assessment model is performed using different statistical estimation methods in the SUSA software package. The results were compared with those obtained with the probabilistic modelling system Risk Spectrum PSA; the comparison showed that the different methods and software packages assess parameter significance identically. The statistical uncertainty and sensitivity analysis, based on Monte Carlo simulation, makes it possible to estimate the influence of parameters on the calculation results and to identify the modelling parameters with the largest impact on the result. Conclusions about the importance of the parameters and the sensitivity of the result are obtained using a linear approximation of the model under analysis.
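A SUSA-style screening loop can be sketched as simple random sampling of the uncertain parameters plus an input-output correlation as the importance measure; the toy top-event model and parameter ranges below are invented, not taken from the Ignalina PSA.

```python
import math
import random

def pearson(xs, ys):
    # Plain Pearson correlation between one input sample and the output.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def screen_parameters(model, dists, n=20_000, seed=7):
    # Monte Carlo screening: sample each parameter uniformly over its
    # range, run the model, then rank parameters by |correlation|.
    rng = random.Random(seed)
    samples = [[rng.uniform(lo, hi) for lo, hi in dists] for _ in range(n)]
    out = [model(s) for s in samples]
    return [pearson([s[i] for s in samples], out) for i in range(len(dists))]

# Toy PSA-like top-event frequency, dominated by the first two parameters.
freq = lambda p: p[0] * p[1] + 0.001 * p[2]
corr = screen_parameters(freq, [(1e-3, 1e-2), (0.1, 0.9), (0.0, 1.0)])
print([round(c, 2) for c in corr])   # first two large, third small
```

Correlation coefficients are the linear-approximation importance measure the abstract mentions; rank (Spearman) correlations are the usual SUSA refinement when the model is monotone but nonlinear.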

Simulation and reliability analysis of probabilistic dynamics

Eimontas, Tadas 16 August 2007 (has links)
Because of the rapid adoption of technology in recent decades, ever more complex systems are being built, whose safety assurance requires evaluating the reliability of hardware and software as well as human-operator actions. Current probabilistic safety analysis is not capable of estimating the reliability of such complex dynamic systems, in which interactions occur between hardware, software, and human actions. In the safety analysis of these systems the time factor is especially important, since it joins the evolution of deterministic physical variables with stochastic events. In this master's thesis, the simulation and reliability analysis of probabilistic dynamics are considered. The new approach of stimulus-based probabilistic dynamics, which so far lacks broad validation on practical models, is used for Monte Carlo simulation of the dynamic system. The developed methodology was applied to the safety analysis of a loss-of-coolant accident in a nuclear reactor.
Besides assessing the probability of system failure, a scenario analysis was carried out and the essential events were identified. The uncertainty and sensitivity analysis revealed that the failure probability had a wide distribution owing to the uncertainty of twelve simulation parameters. Four main parameters were identified whose uncertainty had the largest correlation with the uncertainty of the system failure. For a complete reliability analysis, the relations between the failure probability and the system characteristics were determined.
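The stimulus-based coupling of deterministic dynamics with stochastic event timing can be sketched as follows. The temperature ramp, setpoint, limit, and delay distribution are invented for illustration, not taken from the analysed plant model.

```python
import random

def simulate_once(rng, dt=0.5):
    # Deterministic dynamics: coolant temperature ramps at 2 K/s after the
    # initiating event. A protection "stimulus" is issued when T crosses
    # 450 K; actuation then follows after a random (exponential, mean 20 s)
    # delay, the stochastic element coupled to the deterministic evolution.
    T, t = 300.0, 0.0
    ramp, setpoint, limit = 2.0, 450.0, 600.0
    actuation_time = None
    while T < limit:
        if actuation_time is None and T >= setpoint:
            actuation_time = t + rng.expovariate(1 / 20.0)
        if actuation_time is not None and t >= actuation_time:
            return False              # tripped in time: no failure
        T += ramp * dt
        t += dt
    return True                       # limit reached first: failure

def failure_probability(n=5000, seed=11):
    rng = random.Random(seed)
    return sum(simulate_once(rng) for _ in range(n)) / n

p = failure_probability()
# Analytically, failure requires delay > 75 s, i.e. exp(-75/20), about 0.024.
print(p)
```

Repeating such runs while also sampling the twelve uncertain parameters is what produces the wide failure-probability distribution and the parameter correlations reported above.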

Incorporating distributed generation into distribution network planning : the challenges and opportunities for distribution network operators

Wang, David Tse-Chi January 2010 (has links)
Diversification of the energy mix is one of the main challenges in the energy agenda of governments worldwide. Technology advances together with environmental concerns have paved the way for the increasing integration of Distributed Generation (DG) seen over recent years. Combined heat and power and renewable technologies are being encouraged and their penetration in distribution networks is increasing. This scenario presents Distribution Network Operators (DNOs) with several technical challenges in properly accommodating DG developments. However, depending on various factors, such as location, size, technology and robustness of the network, DG might also be beneficial to DNOs. In this thesis, the impact of DG on network planning is analysed and the implications for DNOs of incorporating DG within the network planning process are identified. In the first part, various impacts of DG on the network, such as network thermal capacity release, security of supply, and voltage, are quantified through network planning by using a modified successive elimination method and voltage sensitivity analysis. The results would potentially assist DNOs in assessing the possibilities and the effort required to utilise privately-owned DG to improve network efficiency and save investment. The quantified values would also act as a fundamental element in deriving effective distribution network charging schemes. In the second part, a novel balanced genetic algorithm is introduced as an efficient means of tackling the problem of optimum network planning considering future uncertainties. The approach is used to analyse the possibilities, potential benefits and challenges of strategic network planning that considers the presence of DG in the future when the characteristics of DG are uncertain.
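A genetic algorithm for reinforcement selection under scenario uncertainty can be sketched in miniature. This is a plain GA, not the thesis's balanced GA, and the candidate costs, capacities, and DG-dependent demand scenarios are all invented.

```python
import random

# Hypothetical planning problem: choose which of 6 candidate reinforcements
# to build. Each has a build cost and adds capacity; DG output is uncertain,
# so the residual demand the network must carry varies by scenario.
COSTS = [5, 8, 3, 10, 4, 6]
CAPS = [20, 35, 12, 50, 15, 25]
SCENARIOS = [(60, 0.5), (90, 0.3), (120, 0.2)]   # (residual demand, probability)
BASE_CAPACITY = 40
PENALTY = 2.0                                    # cost per unit unserved demand

def fitness(plan):
    # Total cost = build cost + expected penalty for capacity shortfall;
    # lower is better.
    cap = BASE_CAPACITY + sum(c for built, c in zip(plan, CAPS) if built)
    cost = sum(c for built, c in zip(plan, COSTS) if built)
    shortfall = sum(p * max(0, d - cap) for d, p in SCENARIOS)
    return cost + PENALTY * shortfall

def ga(pop_size=40, gens=60, seed=5):
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in COSTS] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 2), key=fitness)   # tournament of two
            p2 = min(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, len(COSTS))          # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                      # bit-flip mutation
                i = rng.randrange(len(COSTS))
                child[i] = not child[i]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = ga()
print(best, fitness(best))
```

Encoding each candidate plan as a bit string and folding the scenario expectation into the fitness is the standard way a GA handles planning under uncertainty; the balanced GA of the thesis adds mechanisms beyond this sketch.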
