  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
541

The Integral Equation Method for Sensitivity Analysis (La Méthode des Équations Intégrales pour des Analyses de Sensitivité).

Zribi, Habib 21 December 2005 (has links) (PDF)
In this thesis, we use the integral equation method to carry out sensitivity analyses of solutions or spectra of the conductivity equation with respect to geometric variations or variations of the equation's parameters. In particular, we consider the conductivity problem in high-contrast media, the problem of boundary perturbation of a conductivity inclusion, the eigenvalue problem for the Laplacian in perturbed domains, and the problem of gap opening in the spectrum of photonic crystals.
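For orientation, a minimal sketch of the kind of conductivity transmission problem referred to above, in notation assumed here rather than taken from the thesis (background domain Ω of unit conductivity containing an inclusion D of conductivity k):

```latex
% Conductivity equation with an inclusion D \subset \Omega of conductivity k
% (assumed illustrative setting; \chi_D is the characteristic function of D).
\begin{aligned}
  \nabla \cdot \Big( \big(1 + (k-1)\,\chi_D\big)\, \nabla u \Big) &= 0 && \text{in } \Omega,\\
  \frac{\partial u}{\partial \nu} &= g && \text{on } \partial\Omega,
  \qquad \int_{\partial\Omega} u \, \mathrm{d}\sigma = 0 .
\end{aligned}
```

The sensitivity analyses then quantify how u, or the eigenvalues in the Laplacian case, changes when the boundary of D is slightly perturbed or when the contrast k degenerates to 0 or infinity.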
542

Quantified PIRT and uncertainty quantification for computer code validation

Luo, Hu 05 December 2013 (has links)
This study is intended to investigate and propose a systematic method of uncertainty quantification for computer code validation applications. Uncertainty quantification has gained more and more attention in recent years. The U.S. Nuclear Regulatory Commission (NRC) requires the use of realistic best estimate (BE) computer codes following the rigorous Code Scaling, Applicability and Uncertainty (CSAU) methodology. In CSAU, the Phenomena Identification and Ranking Table (PIRT) was developed to identify important code uncertainty contributors. To support and examine the traditional PIRT with quantified judgments, this study proposes a novel approach, the Quantified PIRT (QPIRT), to identify important code models and parameters for uncertainty quantification. Dimensional analysis of the code field equations, using code simulation results to evaluate the resulting dimensionless (π) groups, serves as the foundation for QPIRT. Uncertainty quantification using the DAKOTA code is proposed in this study based on a sampling approach. Nonparametric statistical theory identifies the fixed number of code runs needed to assure 95 percent probability at 95 percent confidence in the code uncertainty intervals. / Graduation date: 2013 / Access restricted to the OSU Community, at author's request, from Dec. 5, 2012 - Dec. 5, 2013
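The "fixed number of code runs" for a 95/95 statement comes from Wilks' nonparametric tolerance-limit result; the abstract does not spell out the formula, so the sketch below assumes the common first-order, one-sided form, in which the largest of n sampled code outputs bounds the 95th percentile with 95 percent confidence.

```python
import math

def wilks_runs(coverage=0.95, confidence=0.95):
    """Smallest n such that the maximum of n random code runs is an upper
    tolerance limit covering `coverage` of the output population with the
    given `confidence` (first-order, one-sided Wilks criterion:
    1 - coverage**n >= confidence)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

print(wilks_runs())  # 59 code runs for the one-sided 95/95 case
```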
543

Back-calculating emission rates for ammonia and particulate matter from area sources using dispersion modeling

Price, Jacqueline Elaine 15 November 2004 (has links)
Engineering directly impacts current and future regulatory policy decisions. The foundation of air pollution control and air pollution dispersion modeling lies in the math, chemistry, and physics of the environment. Therefore, regulatory decision making must rely upon sound science and engineering as the core of appropriate policy making (objective analysis in lieu of subjective opinion). This research evaluated particulate matter and ammonia concentration data as well as two modeling methods, a backward Lagrangian stochastic model and a Gaussian plume dispersion model. The analysis assessed the uncertainty surrounding each sampling procedure in order to gain a better understanding of the uncertainty in the final emission rate calculation (a basis for federal regulation), and it assessed the differences between emission rates generated using the two dispersion models. First, this research evaluated the uncertainty encompassing the gravimetric sampling of particulate matter and the passive ammonia sampling technique at an animal feeding operation. Future research will further characterize the wind velocity profile as well as the vertical temperature gradient during the modeling time period. This information will help quantify the uncertainty of the meteorological inputs to the dispersion model, which will aid in understanding the propagated uncertainty in the dispersion modeling outputs. Next, an evaluation of the emission rates generated by both the Industrial Source Complex (Gaussian) model and the WindTrax (backward Lagrangian stochastic) model revealed that the concentrations calculated by each model from its own average emission rate are extremely close in value. However, the average emission rates calculated by the two models vary by a factor of 10. This is extremely troubling. In conclusion, current and future sources are regulated based on emission rate data from previous time periods. Emission factors are published for the regulation of various sources, and these emission factors are derived from back-calculated model emission rates and site management practices. Thus, this factor-of-10 difference in emission rates could prove troubling for regulation if the model from which an emission rate is back-calculated is not also the model used to predict future downwind pollutant concentrations.
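As a hedged illustration of the back-calculation idea, and not of the ISC or WindTrax models actually compared in the study, the sketch below evaluates a simple Gaussian plume kernel for a unit emission rate and recovers the emission rate as the ratio of the measured concentration to that unit-rate prediction; the receptor geometry and dispersion coefficients are placeholder assumptions.

```python
import math

def plume_conc_per_unit_q(y_m, z_m, h_m, u_ms, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (g/m^3) per 1 g/s emitted.
    sigma_y and sigma_z are assumed dispersion coefficients; in practice they
    depend on downwind distance and atmospheric stability."""
    lateral = math.exp(-y_m**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z_m - h_m)**2 / (2.0 * sigma_z**2)) +
                math.exp(-(z_m + h_m)**2 / (2.0 * sigma_z**2)))
    return lateral * vertical / (2.0 * math.pi * u_ms * sigma_y * sigma_z)

measured = 4.0e-5  # g/m^3 at a downwind sampler (illustrative value)
chi_unit = plume_conc_per_unit_q(y_m=0.0, z_m=1.5, h_m=3.0,
                                 u_ms=2.5, sigma_y=20.0, sigma_z=10.0)
q_back = measured / chi_unit   # g/s; concentration scales linearly with emission rate
print(round(q_back, 2))
```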
544

Advances in groundwater protection strategy using vulnerability mapping and hydrogeological GIS databases

Gogu, Radu Constantin 05 January 2001 (has links)
Groundwater vulnerability maps are useful for environmental planning and decision-making. They are usually produced by applying vulnerability assessment methods based on overlay and index techniques. On the basis of a review of vulnerability assessment and mapping methods, new research challenges in aquifer vulnerability assessment are identified. Operations such as parameter quantification, vulnerability index computation, and the final classification have an empirical character that also affects the final product: the vulnerability map. Consequently, the validity of the resulting vulnerability maps must be evaluated with respect to the objectives of the survey and the specific characteristics of each studied zone. Analysing their uncertainty can provide the basis for their validation. Uncertainty can be investigated through sensitivity analysis or through comparisons between vulnerability maps created using different methods. Both of these strategies are developed in this study and illustrated by applications to practical case studies of vulnerability mapping. Applying the EPIK parametric method, a vulnerability assessment has been made for a small karstic groundwater system in southern Belgium. The aquifer consists of a karstified limestone of Devonian age. A map of the intrinsic vulnerability of the aquifer shows three vulnerability areas. A parameter-balance study and a sensitivity analysis were performed to evaluate the influence of single parameters on aquifer vulnerability assessment using the EPIK method. This approach provides a methodology for the evaluation of vulnerability mapping and for more reliable interpretation of vulnerability indices for karst groundwater resources. Five different methods for assessing intrinsic vulnerability were tested on a case study to compare their results. The test area is a slightly karstified basin located in the Condroz region (Belgium). The basin covers about 65 km² and the karstic aquifer provides a daily water supply of about 28000 m³ from drainage galleries. Several measurement campaigns, consisting of morpho-structural observations, shallow geophysics, and pumping and tracer tests, have provided useful data. The tested methods were EPIK (Doerfliger and Zwahlen, 1997), DRASTIC (Aller et al., 1987), the German method (von Hoyer & Söfner, 1998), GOD (Foster, 1987), and ISIS (Civita and De Regibus, 1995). DRASTIC and GOD represent classic approaches to vulnerability assessment. ISIS is a development based on the DRASTIC, SINTACS (Civita, 1994), and GOD methods, in which more importance is given to recharge. EPIK was developed specifically for karstic geological contexts, while the German method was developed in Germany for a broad range of geological contexts. The compared results are presented and discussed. Despite the fact that the EPIK method can better outline karstic features, about 92% of the studied area is assessed by this technique as having low vulnerability. In contrast, the other four methods assign extended zones of high or moderate vulnerability. The analysis also suggests that reducing the number of considered parameters is not ideal when adaptation to various geological contexts is needed. The reliability and validity of groundwater analysis strongly depend on the availability of large volumes of high-quality data.
Putting all data into a coherent and logical structure supported by a computing environment helps ensure their validity and availability, and provides a powerful tool for hydrogeological studies. A hydrogeological GIS database offering facilities for groundwater vulnerability analysis and hydrogeological modelling has been designed in Belgium for the Walloon Region. Data from five river basins, chosen for their contrasting settings, and the applications developed on them now allow further advances. Although the basic concept of the database is the commonly accepted georelational model developed in the 1970s, the database design presents a distinctive character. There is a growing interest in the potential for integrating GIS technology and groundwater simulation models. A loose-coupling tool was therefore created between the spatial database schema and the groundwater numerical modelling interface GMS (Groundwater Modelling System). Following temporal and spatial queries, the hydrogeological data stored in the database can easily be used within different groundwater numerical models. This development can also provide a solid basis for integrating physical processes into the quantification of the vulnerability method parameters. The fundamental aim of this work was to help improve aquifer protection strategies using vulnerability mapping and GIS. The results offer the theoretical and practical basis for developing a strategy to protect groundwater resources.
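Most of the overlay-and-index methods compared above (DRASTIC, GOD, ISIS, and related schemes) reduce to a weighted sum of parameter ratings per map cell. The sketch below shows a DRASTIC-style index with the standard published weights; the cell ratings are invented for illustration, and perturbing one rating at a time is the kind of sensitivity analysis discussed in the text.

```python
# DRASTIC-style intrinsic vulnerability index: sum of weight * rating per cell.
# Weights follow the published DRASTIC scheme (Aller et al., 1987);
# the ratings (1 = low contribution, 10 = high) are illustrative only.
WEIGHTS = {"D_depth_to_water": 5, "R_net_recharge": 4, "A_aquifer_media": 3,
           "S_soil_media": 2, "T_topography": 1, "I_vadose_zone": 5,
           "C_hydraulic_conductivity": 3}

def vulnerability_index(ratings):
    return sum(WEIGHTS[p] * ratings[p] for p in WEIGHTS)

cell = {"D_depth_to_water": 7, "R_net_recharge": 6, "A_aquifer_media": 8,
        "S_soil_media": 4, "T_topography": 9, "I_vadose_zone": 8,
        "C_hydraulic_conductivity": 6}
base = vulnerability_index(cell)
# One-at-a-time sensitivity: change of the index when a single rating moves by +1.
deltas = {p: vulnerability_index({**cell, p: cell[p] + 1}) - base for p in WEIGHTS}
print(base, deltas)   # higher index -> higher vulnerability class on the map
```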
545

Design of highly distributed biofuel production systems

Luo, Dexin 01 November 2011 (has links)
This thesis develops quantitative methods for the evaluation and design of large-scale biofuel production systems, with a particular focus on bioreactor-based fuel systems. In Chapter 2, a lifecycle assessment (LCA) method is integrated with chemical process modeling to select, from different process designs, the one that maximizes the energy efficiency and minimizes the environmental impact of a production system. An algae-based ethanol production technology, which is in the process of commercialization, is used as a case study. Motivated by this case study, Chapter 3 studies the selection of process designs and production capacity of highly distributed bioreactor-based fuel systems from an economic perspective. Nonlinear optimization models based on net present value maximization are developed to select the optimal capacities of production equipment for both integrated and distributed-centralized process designs on symmetric production layouts. Global sensitivity analysis based on Monte Carlo estimates is performed to show the impact of different parameters on the optimal capacity decision and the corresponding net present value. Conditional Value at Risk optimization is used to compare the optimal capacity for a risk-neutral planner versus a risk-averse decision maker. Chapter 4 studies mobile distributed processing in the biofuel industry as a vehicle routing problem, and production equipment location with an underlying pipeline network as a facility location problem, with a focus on general production costs. Formulations and algorithms are developed to explore how fixed costs and concavity in the production cost increase the theoretical complexity of these problems.
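As a hedged sketch of the risk-neutral versus risk-averse comparison in Chapter 3 (the toy cash-flow model and every number below are assumptions, not the thesis's model), a candidate production capacity can be scored either by its mean Monte Carlo NPV or by the Conditional Value at Risk of the same sample:

```python
import random

def cvar(samples, alpha=0.95):
    """Conditional Value at Risk: mean of the worst (1 - alpha) share of outcomes."""
    worst = sorted(samples)[: max(1, int(round((1.0 - alpha) * len(samples))))]
    return sum(worst) / len(worst)

def npv_samples(capacity, n=10_000, rate=0.10, years=10, seed=0):
    """Toy NPV model: uncertain annual margin scales with capacity; capex shows
    assumed economies of scale. Purely illustrative."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        margin = rng.gauss(120.0, 60.0) * capacity   # k$/yr, assumed
        capex = 800.0 * capacity ** 0.8               # k$, assumed
        out.append(sum(margin / (1 + rate) ** t for t in range(1, years + 1)) - capex)
    return out

for cap in (1.0, 2.0, 4.0):
    s = npv_samples(cap)
    print(cap, round(sum(s) / len(s)), round(cvar(s)))
# A risk-neutral planner picks the capacity with the highest mean NPV;
# a risk-averse planner may prefer the capacity with the highest CVaR.
```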
546

Enabling methods for the design and optimization of detection architectures

Payan, Alexia Paule Marie-Renee 08 April 2013 (has links)
The surveillance of geographic borders and critical infrastructures using limited sensor capability has always been a challenging task in many homeland security applications. While geographic borders may be very long and may pass through isolated areas, critical assets may be large and numerous and may be located in highly populated areas. As a result, it is virtually impossible to secure each and every mile of border around the country, and each and every critical infrastructure inside the country. Most often, a compromise must be made between the percentage of border or critical assets covered by surveillance systems and the induced cost. Although threats to homeland security can take many forms, those involving illegal penetration of the air, land, and maritime domains under the cover of day-to-day activities have been identified as being of particular interest. For instance, the proliferation of drug smuggling, illegal immigration, international organized crime, resource exploitation, and, more recently, modern piracy requires strengthened land border and maritime domain awareness in increasingly complex and challenging national security environments. The complexity and challenges associated with this mission and with the protection of the homeland explain why a methodology enabling the design and optimization of distributed detection system architectures, able to provide accurate scanning of the air, land, and maritime domains in a specific geographic and climatic environment, is a central concern for the defense and protection community. This thesis proposes a methodology aimed at addressing these gaps and challenges. The methodology first reformulates the problem in clear terms so as to facilitate the subsequent modeling and simulation of potential operational scenarios. The needs and challenges involved in the proposed study are investigated, and a detailed description of a multidisciplinary strategy for the design and optimization of detection architectures in terms of detection performance and cost is provided. This entails the creation of a framework for the modeling and simulation of notional scenarios, as well as the development of improved methods for accurate optimization of detection architectures. More precisely, this thesis describes a new approach to determining detection architectures able to provide effective coverage of a given geographical environment at a minimum cost, by optimizing the appropriate number, types, and locations of surveillance and detection systems. The objective of the optimization is twofold. First, given the topography of the terrain under study, several promising locations are determined for each sensor system based on the percentage of terrain it covers. Second, architectures of sensor systems able to effectively cover large percentages of the terrain at minimal cost are determined by optimizing the number, types, and locations of each detection system in the architecture. To do so, a modified Genetic Algorithm and a modified Particle Swarm Optimization are investigated and their ability to provide consistent results is compared. Ultimately, the modified Particle Swarm Optimization algorithm is used to obtain a Pareto frontier of detection architectures able to satisfy varying customer preferences on coverage performance and related cost.
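The final step described above, extracting a Pareto frontier of architectures that trade coverage against cost, can be illustrated with a minimal non-dominated filter; the candidate architectures and their scores below are placeholders, and the actual optimizer in the thesis is a modified Particle Swarm Optimization rather than this exhaustive check.

```python
def pareto_front(candidates):
    """Keep candidates for which no other candidate has coverage at least as
    high and cost at least as low. Each candidate is (name, coverage, cost)."""
    front = []
    for name, cov, cost in candidates:
        dominated = any(c2 >= cov and k2 <= cost and (c2, k2) != (cov, cost)
                        for _, c2, k2 in candidates)
        if not dominated:
            front.append((name, cov, cost))
    return sorted(front, key=lambda t: t[2])

# Placeholder sensor architectures: (id, fraction of terrain covered, cost in $M)
archs = [("A", 0.62, 3.1), ("B", 0.71, 4.0), ("C", 0.68, 4.4),
         ("D", 0.83, 6.5), ("E", 0.79, 6.9)]
print(pareto_front(archs))   # C and E are dominated and drop off the frontier
```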
547

Uncertainty in Regional Air Quality Modeling

Digar, Antara 05 September 2012 (has links)
Effective pollution mitigation is the key to successful air quality management. Although states invest millions of dollars to predict future air quality, the regulatory modeling and analysis process used to inform pollution control strategy remains uncertain. Traditionally, deterministic 'bright-line' tests are applied to evaluate the sufficiency of a control strategy to attain an air quality standard. A critical part of regulatory attainment demonstration is the prediction of future pollutant levels using photochemical air quality models. However, because models are uncertain, they yield a false sense of precision, as if the pollutant response to emission controls were perfectly known, and may eventually mislead the selection of control policies. These uncertainties in turn affect the health impact assessment of air pollution control strategies. This thesis goes beyond the conventional practice of deterministic attainment demonstration and presents novel approaches that yield probabilistic representations of pollutant response to emission controls by accounting for uncertainties in regional air quality planning. Computationally efficient methods are developed and validated to characterize uncertainty in the prediction of secondary pollutant (ozone and particulate matter) sensitivities to precursor emissions in the presence of uncertainties in model assumptions and input parameters. We also introduce impact factors that enable identification of model inputs and scenarios that strongly influence pollutant concentrations and sensitivity to precursor emissions. We demonstrate how these probabilistic approaches could be applied to determine the likelihood that a control measure will yield regulatory attainment, or could be extended to evaluate the probabilistic health benefits of emission controls, considering uncertainties in both air quality models and epidemiological concentration–response relationships. Finally, ground-level observations of pollutant (ozone) and precursor concentrations (oxides of nitrogen) have been used to adjust probabilistic estimates of pollutant sensitivities based on the performance of simulations in reliably reproducing ambient measurements. Various observational metrics have been explored for a better scientific understanding of how sensitivity estimates vary with measurement constraints. Future work could extend these methods to incorporate additional modeling uncertainties and alternative observational metrics, and explore the responsiveness of future air quality to projected trends in emissions and climate change.
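A minimal sketch of the probabilistic attainment idea (all distributions and numbers are invented for illustration, and the real work uses photochemical model sensitivities rather than this one-line response model): sample the uncertain ozone response to a NOx cut and estimate the likelihood that the future design value falls below the standard.

```python
import random

rng = random.Random(42)

STANDARD = 75.0   # ppb, assumed attainment threshold
BASE_DV = 82.0    # ppb, baseline design value (illustrative)
CUT = 0.30        # proposed fractional NOx emission reduction

n, attain = 20_000, 0
for _ in range(n):
    # Uncertain first-order ozone sensitivity to a full NOx cut (ppb per unit
    # fractional reduction); the lognormal spread stands in for uncertainty in
    # model assumptions and inputs.
    sensitivity = rng.lognormvariate(3.3, 0.35)
    if BASE_DV - CUT * sensitivity <= STANDARD:
        attain += 1
print(attain / n)   # estimated probability that the control measure yields attainment
```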
548

CDMA Channel Selection Using Switched Capacitor Technique

Nejadmalayeri, Amir Hossein January 2001 (has links)
CDMA channel selection requires sharp as well as wideband filtering. SAW filters, which have been used for this purpose, are only available in the IF range; in direct-conversion receivers the filtering has to be done at low frequencies. A switched-capacitor technique has been employed to design a low-power, highly selective low-pass channel-select filter for CDMA wireless receivers. The chosen topology ensures low sensitivity of the filter response. The circuit has been designed in a mixed-mode 0.18 µm CMOS technology operating from a single 1.8 V supply, with a current consumption of less than 10 mA.
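For context, the switched-capacitor principle replaces resistors with capacitors toggled at a clock rate f_s, emulating R_eq = 1/(f_s·C); filter time constants then depend on capacitor ratios and the clock rather than absolute RC products, which is what makes the response insensitive to process spread. The component values below are illustrative assumptions, not taken from the thesis.

```python
import math

f_s = 19.2e6    # switching clock, Hz (assumed)
C_sw = 1e-12    # switched capacitor, F (assumed)
C_int = 10e-12  # integrating capacitor, F (assumed)

R_eq = 1.0 / (f_s * C_sw)                      # emulated resistance, ~52 kOhm
f_pole = 1.0 / (2.0 * math.pi * R_eq * C_int)  # first-order pole of the RC stage
print(round(R_eq), round(f_pole))              # pole frequency = f_s * C_sw / (2*pi*C_int)
```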
550

A Computer-Based Decision Tool for Prioritizing the Reduction of Airborne Chemical Emissions from Canadian Oil Refineries Using Estimated Health Impacts

Gower, Stephanie Karen January 2007 (has links)
Petroleum refineries emit a variety of airborne substances which may be harmful to human health. HEIDI II (Health Effects Indicators Decision Index II) is a computer-based decision analysis tool which assesses airborne emissions from Canada's oil refineries for reduction, based on ordinal ranking of estimated health impacts. The model was designed by a project team within NERAM (Network for Environmental Risk Assessment and Management) and assembled with significant stakeholder consultation. HEIDI II is publicly available as a deterministic Excel-based tool which ranks 31 air pollutants based on predicted disease incidence or estimated DALYs (disability-adjusted life years). The model includes calculations to account for average annual emissions, ambient concentrations, stack height, meteorology/dispersion, photodegradation, and the population distribution around each refinery. Different formulations of continuous dose-response functions were applied to nonthreshold-acting air toxics, threshold-acting air toxics, and nonthreshold-acting CACs (criteria air contaminants). An updated probabilistic version of HEIDI II was developed using Matlab code to account for parameter uncertainty and identify key leverage variables. Sensitivity analyses indicate that parameter uncertainty in the model variables for annual emissions and for concentration-response/toxicological slopes has the greatest leverage on predicted health impacts. Scenario analyses suggest that the geographic distribution of population density around a refinery site is an important predictor of total health impact. Several ranking metrics (predicted case incidence, simple DALY, and complex DALY) and ordinal ranking approaches (deterministic model, average from Monte Carlo simulation, test of stochastic dominance) were used to identify priority substances for reduction; the results were similar in each case. The predicted impacts of primary and secondary particulate matter (PM) consistently outweighed those of the air toxics. Nickel, PAHs (polycyclic aromatic hydrocarbons), BTEX (benzene, toluene, ethylbenzene and xylene), sulphuric acid, and vanadium were consistently identified as priority air toxics at refineries where their emissions were reported. For many substances, the difference in rank order is indeterminate when parametric uncertainty and variability are considered.
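As a hedged sketch of the ranking logic (the impact chain and every number below are placeholders, not HEIDI II's actual parameters or results): each substance's burden is approximated as emissions times an intake/dispersion factor times a DALY-per-unit-intake slope, and repeating the ranking over Monte Carlo draws of the uncertain inputs shows how stable the rank order is.

```python
import random

rng = random.Random(7)

# Placeholder per-substance inputs:
# (annual emissions in t/yr, intake/dispersion factor, DALYs per unit intake)
SUBSTANCES = {
    "primary_PM": (120.0, 2.0e-6, 8.0),
    "nickel":     (0.8,   2.0e-6, 30.0),
    "benzene":    (15.0,  2.0e-6, 3.0),
}

def one_draw():
    """One Monte Carlo draw of estimated DALYs per substance, applying a rough
    lognormal uncertainty factor to each input (illustrative only)."""
    scores = {}
    for name, (emis, intake, slope) in SUBSTANCES.items():
        scores[name] = (emis * rng.lognormvariate(0.0, 0.4)
                        * intake * rng.lognormvariate(0.0, 0.4)
                        * slope * rng.lognormvariate(0.0, 0.4))
    return scores

n = 5_000
first_place = {name: 0 for name in SUBSTANCES}
for _ in range(n):
    draw = one_draw()
    first_place[max(draw, key=draw.get)] += 1
print({k: round(v / n, 3) for k, v in first_place.items()})
# Fraction of draws in which each substance ranks first; a stable ordering
# across draws corresponds to the 'similar results in each case' noted above.
```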
