211

KARTOTRAK, integrated software solution for contaminated site characterization

Wagner, Laurent 03 November 2015 (has links) (PDF)
Kartotrak software allows optimal waste classification and avoids unnecessary remediation. It has been designed for those involved in environmental site characterization projects (site owners, safety authorities or contractors) who need to locate and estimate contaminated soil volumes confidently.
212

Développement de modèles prédictifs de la toxicocinétique de substances organiques / Development of predictive models for the toxicokinetics of organic substances

Peyret, Thomas 02 1900 (has links)
Physiologically-based pharmacokinetic (PBPK) models simulate the internal dose metrics of chemicals based on species-specific and chemical-specific parameters. Existing quantitative structure-property relationships (QSPRs) allow estimation of the chemical-specific parameters (partition coefficients (PCs) and metabolic constants), but their applicability is limited by their lack of consideration of variability in input parameters and by their restricted application domain (i.e., substances containing CH3, CH2, CH, C, C=C, H, Cl, F, Br, a benzene ring and H on the benzene ring). The objective of this study was to develop new knowledge and tools to increase the applicability domain of QSPR-PBPK models for predicting the inhalation toxicokinetics of organic compounds in humans. First, a unified mechanistic algorithm was developed from existing models to predict macro-level (tissue and blood) and micro-level (cell and biological fluid) PCs of 142 drugs and environmental pollutants on the basis of tissue and blood composition along with physicochemical properties. The resulting algorithm was applied to compute the tissue:blood, tissue:plasma and tissue:air PCs in rat muscle (n = 174), liver (n = 139) and adipose tissue (n = 141) for acidic, neutral, zwitterionic and basic drugs, as well as for ketones, acetate esters, alcohols, ethers, and aliphatic and aromatic hydrocarbons. Then, a quantitative property-property relationship (QPPR) model was developed for the in vivo rat intrinsic clearance (CLint, calculated as the ratio of the in vivo Vmax (μmol/h/kg body weight of rat) to the Km (μM)) of CYP2E1 substrates (n = 26) as a function of the n-octanol:water PC, the blood:water PC, and the ionization potential. The QPPR predictions, expressed as the lower and upper bounds of the 95% confidence interval of the mean, were then integrated within a human PBPK model. Subsequently, the PC algorithm and the QPPR for CLint were integrated along with QSPR models for the hemoglobin:water and oil:air PCs to simulate the inhalation pharmacokinetics and cellular dosimetry of volatile organic compounds (VOCs) (benzene, 1,2-dichloroethane, dichloromethane, m-xylene, toluene, styrene, 1,1,1-trichloroethane and 1,2,4-trimethylbenzene) using a PBPK model for rats. Finally, the variability in the tissue- and blood-composition parameters of the PC algorithm for rat tissue:air and human blood:air PCs was characterized by performing Markov chain Monte Carlo (MCMC) simulations. The resulting distributions were used to conduct Monte Carlo simulations predicting tissue:blood and blood:air PCs for VOCs. The distributions of PCs, along with distributions of physiological parameters and CYP2E1 content, were then incorporated within a PBPK model to characterize the human variability of the blood toxicokinetics of four VOCs (benzene, chloroform, styrene and trichloroethylene) using Monte Carlo simulations. Overall, the quantitative approaches for PCs and CLint implemented in this study allow the use of generic molecular descriptors rather than specific molecular fragments to predict the pharmacokinetics of organic substances in humans. In this process, the current study has, for the first time, characterized the variability of the biological input parameters of the PC algorithms, expanding the ability of PBPK models to predict population distributions of the internal dose metrics of organic substances prior to testing in animals or humans.
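The final step described above, Monte Carlo propagation of partition-coefficient and clearance distributions through a PBPK model, can be conveyed with a minimal sketch. The one-compartment steady-state model, the lognormal distributions and every numeric value below are hypothetical placeholders, not the thesis's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000

# Hypothetical lognormal distributions for a blood:air partition
# coefficient (Pba) and intrinsic clearance (CLint); the thesis derives
# such distributions from MCMC-calibrated tissue-composition algorithms.
Pba = rng.lognormal(mean=np.log(18.0), sigma=0.25, size=n_draws)
CLint = rng.lognormal(mean=np.log(5.0), sigma=0.40, size=n_draws)  # L/h

# Toy steady-state venous blood concentration for a constant inhaled
# exposure, using a single well-stirred liver compartment.
Q_alv = 300.0   # alveolar ventilation, L/h (illustrative)
Q_liv = 90.0    # liver blood flow, L/h (illustrative)
C_inh = 1.0     # inhaled concentration, arbitrary units

# Hepatic clearance from the well-stirred model.
CL_h = Q_liv * CLint / (Q_liv + CLint)
# Steady-state arterial concentration balancing uptake and clearance.
C_art = Q_alv * C_inh / (Q_alv / Pba + CL_h)

# Population percentiles of the internal dose metric.
p5, p50, p95 = np.percentile(C_art, [5, 50, 95])
print(f"C_art: median={p50:.2f}, 90% interval=({p5:.2f}, {p95:.2f})")
```

Replacing point estimates of Pba and CLint with sampled distributions is what turns a deterministic PBPK prediction into a population distribution of internal dose.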
213

Conceptual design of wastewater treatment plants using multiple objectives

Flores Alsina, Xavier 28 April 2008 (has links)
The implementation of EU Directive 91/271/EEC concerning urban wastewater treatment promoted the construction of new facilities and the introduction of nutrient removal technologies in areas designated as sensitive. The need to build at a rapid pace imposed economically driven approaches to the design of the new infrastructures and the retrofit of the existing ones. These studies relied exclusively on heuristic knowledge and numerical correlations generated from simplified activated sludge models. Hence, some of the resulting wastewater treatment plants (WWTPs) were characterized by a lack of robustness and flexibility, poor controller performance, frequent microbiology-related solids separation problems in the secondary settler, high operating and maintenance costs and/or partial nutrient removal, which left their performance far from optimal. Most of these problems arose from inadequate design, making the scientific community aware of the crucial importance of the conceptual design stage. Traditional design approaches should therefore evolve into more complex assessment methods that conduct integrated assessments taking into account a multiplicity of objectives, hence ensuring correct plant performance. Despite the importance of this stage, only a few methods in the literature have addressed the systematic evaluation of conceptual WWTP design alternatives using multiple objectives; yet the decisions made during this stage are of paramount importance in determining the future plant structure and operation. The main objective of this thesis is the development of a systematic conceptual design method for WWTPs using multiple objectives, which supports decision making when selecting the most desirable option amongst several generated alternatives. This research work contributes a modular and evolutionary approach combining techniques from different disciplines: a hierarchical decision approach, multicriteria decision analysis, preliminary multiobjective optimization using sensitivity functions, knowledge extraction and data mining techniques, multivariate statistical techniques, and uncertainty analysis using Monte Carlo simulations. This is accomplished by dividing the design method into four blocks: (1) hierarchical generation and multicriteria evaluation of the design alternatives, (2) analysis of critical decisions, (3) multivariate analysis and, finally, (4) uncertainty analysis. The first block supports the conceptual design of WWTPs by combining a hierarchical decision approach with multicriteria analysis. The hierarchical decision approach breaks the conceptual design down into a number of issues that are easier to analyze and evaluate, while the multicriteria analysis allows the inclusion of different objectives at the same time. Hence, the number of alternatives to evaluate is reduced, while the future WWTP design and operation is influenced by environmental, technical, economic and legal aspects. The inclusion of a sensitivity analysis also facilitates the study of how the ranking of the generated alternatives varies with the relative importance of the objectives. The second block, analysis of critical decisions, applies sensitivity analysis, preliminary multiobjective optimization and knowledge extraction to assist the designer in selecting the best option amongst the most promising alternatives, i.e. options with a similar overall degree of satisfaction of the design objectives but with completely different implications for the future plant design and operation. This analysis provides a wider picture of the possible design space and allows the identification of desirable (or undesirable) WWTP design directions in advance. The third block involves the application of multivariate statistical techniques to mine the complex multicriteria matrices obtained during the evaluation of WWTP alternatives. Specifically, the techniques used in this research work are i) cluster analysis, ii) principal component/factor analysis and iii) discriminant analysis. As a result, the information needed for effective evaluation of WWTP alternatives becomes significantly more accessible, yielding more knowledge than current evaluation methods and enhancing the comprehension of the whole evaluation process. In the fourth and last block, uncertainty analysis of the different alternatives is applied. The objective of this block is to study how decision making changes when uncertainty in the model parameters used to analyze the WWTP alternatives is or is not included. The input uncertainty in the model parameters is characterized by probability distributions, and Monte Carlo simulations are run to see how these uncertainties propagate through the model and affect the different outcomes. Thus, it is possible to study the variation in the overall degree of satisfaction of the design objectives, the contributions of the different objectives to the overall variance, and the influence of the relative importance of the design objectives on the selection of alternatives. In comparison with traditional approaches, the conceptual design method developed in this thesis addresses design/redesign problems with respect to multiple objectives and multiple performance measures. It also includes a more reliable decision procedure that shows in a systematic, objective and transparent fashion why a certain alternative is selected over the others. The decision procedure provides the designer/decision maker with the alternative that best fulfils the defined objectives, showing its main advantages and weaknesses and the correlations between alternatives and evaluation criteria, while dealing with the uncertainty prevailing in some of the model parameters used during the analysis. A number of case studies are used to demonstrate the capabilities of the conceptual design method: selection of the biological nitrogen removal process (case study #1), optimization of the setpoints of two control loops (case study #2), redesign to achieve simultaneous organic carbon, nitrogen and phosphorus removal (case study #3), and evaluation of plant-wide control strategies (case studies #4 and #5).
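A minimal sketch of the multicriteria evaluation (block 1) combined with Monte Carlo uncertainty analysis (block 4); the alternatives, scores, weights and noise level below are invented for illustration, not taken from the thesis's case studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores (0-1, higher is better) of three WWTP design
# alternatives against four objectives; values are illustrative only.
alternatives = ["A1", "A2", "A3"]
scores = np.array([
    [0.80, 0.55, 0.60, 0.70],   # environmental, economic, technical, legal
    [0.65, 0.75, 0.70, 0.60],
    [0.55, 0.85, 0.50, 0.80],
])
weights = np.array([0.4, 0.3, 0.2, 0.1])  # relative importance of objectives

# Deterministic multicriteria evaluation (weighted sum).
print(dict(zip(alternatives, scores @ weights)))

# Uncertainty analysis: perturb the scores with Monte Carlo noise and
# count how often each alternative ranks first.
n_runs = 20_000
noisy = scores + rng.normal(0.0, 0.08, size=(n_runs, *scores.shape))
winners = np.argmax(np.clip(noisy, 0, 1) @ weights, axis=1)
for i, name in enumerate(alternatives):
    print(f"{name} preferred in {np.mean(winners == i):.1%} of runs")
```

Varying `weights` in the same loop reproduces the weight-sensitivity analysis of block 1.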
214

Investigation of Heat Transfer Rates Around the Aerodynamic Cavities on a Flat Plate at Hypersonic Mach Numbers

Philip, Sarah Jobin January 2011 (has links) (PDF)
Aerodynamic cavities are common features on hypersonic vehicles, arising from both large- and small-scale features such as surface defects, pitting and gaps in joints. In the hypersonic regime, the presence of such cavities alters the flow considerably, and heating rates adjacent to the discontinuities can be greatly enhanced by the diversion of the flow. Since the 1960s, a great deal of theoretical and experimental research has been carried out on cavity flow physics and heating; however, most studies have characterized the effects downstream of and within the cavity. In the present study, a series of experiments was carried out in a shock tunnel to investigate the heating characteristics upstream and on the lateral sides of the cavity. Heat flux was measured using indigenously developed high-resistance platinum thin-film gauges. In contrast to the conventionally used low-resistance gauges, the high-resistance gauges showed good response to the extremely low heat flux values on a flat plate with a sharp leading edge. Heat flux measurements made on a flat plate with a sharp leading edge using these gauges show good agreement with the theoretical relation of Crabtree et al. Flow visualization of the cavity model using a high-speed camera revealed shock structures similar to those reported for supersonic cavity flow. This also indicates that, in spite of the fluctuating shear layer (the main feature of hypersonic flow over a cavity), reasonable studies can be done within the short test time of a shock tunnel. Numerical simulations solving the Navier-Stokes equations with the commercial CFD package FLUENT 13.0.0 have been performed to complement the experimental studies.
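Thin-film gauges infer surface heat flux from the measured surface temperature history; a standard way to do this for a semi-infinite substrate is the Cook-Felderman discretization, sketched below. The thermal product, test time and heat flux level are illustrative values, not the thesis's measured data:

```python
import numpy as np

def cook_felderman(t, T, beta):
    """Surface heat flux from a temperature history T(t) on a semi-infinite
    substrate, via the Cook-Felderman discretization.
    beta = sqrt(rho*c*k) is the substrate thermal product."""
    q = np.zeros_like(T)
    for n in range(1, len(t)):
        dT = np.diff(T[: n + 1])
        denom = np.sqrt(t[n] - t[1 : n + 1]) + np.sqrt(t[n] - t[:n])
        q[n] = 2.0 * beta / np.sqrt(np.pi) * np.sum(dT / denom)
    return q

# Synthetic check: a step heat flux gives T ~ sqrt(t); the scheme should
# recover the step level.
beta = 1500.0                  # J/(m^2 K s^0.5), illustrative substrate value
t = np.linspace(0, 1e-3, 200)  # 1 ms test time, typical of shock tunnels
q_true = 5e4                   # 50 kW/m^2 constant heat flux
T = 2 * q_true * np.sqrt(t / np.pi) / beta
print(cook_felderman(t, T, beta)[-5:])  # approaches ~5e4
```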
215

Uncertainty Analysis of Microwave Based Rainfall Estimates over a River Basin Using TRMM Orbital Data Products

Indu, J January 2014 (has links) (PDF)
Error characteristics associated with satellite-derived precipitation products are important for atmospheric and hydrological model data assimilation, forecasting, and climate diagnostic applications. This information also aids in refining the physical assumptions within algorithms by identifying geographical regions and seasons where the existing algorithm physics may be incorrect or incomplete. Examination of relative errors between independent estimates derived from satellite microwave data is particularly important over regions with limited surface-based equipment for measuring rain rate, such as the global oceans and tropical continents. In this context, analysis of microwave-based satellite datasets from the Tropical Rainfall Measuring Mission (TRMM) not only provides information regarding the inherent uncertainty within the current TRMM products, but also serves as an opportunity to prototype error characterization methodologies for the TRMM follow-on program, the Global Precipitation Measurement (GPM) mission. Most TRMM uncertainty evaluation studies focus on the accuracy of rainfall accumulated over time (e.g., a season or year); evaluation of instantaneous rainfall intensities from TRMM orbital data products is relatively rare. These instantaneous products can potentially cause large uncertainties during real-time flood forecasting studies at the watershed scale. This is especially so over land regions, where the highly varying land surface emissivity offers a myriad of complications, hindering accurate rainfall estimation. The error components of orbital data products also tend to interact nonlinearly with hydrologic modeling uncertainty. Keeping these in mind, the present thesis develops an uncertainty analysis using instantaneous satellite orbital data products (the latest version 7 of 1B11, 2A25, 2A23, 2B31 and 2A12) derived from the passive and active microwave sensors onboard the TRMM satellite, namely the TRMM Microwave Imager (TMI) and the precipitation radar (PR). The study utilizes 11 years of orbital data, from 2002 to 2012, over the Indian subcontinent and examines the influence of various error sources on the convective and stratiform precipitation types. Two approaches are taken to examine uncertainty: the first analyzes the independent error contribution of each orbital data product, while the second examines their combined effect. Following the first approach, the analysis conducted over the land regions of the Mahanadi basin, India, investigates three sources of uncertainty in detail: 1) errors due to improper delineation of the rainfall signature within the microwave footprint (rain/no-rain classification), 2) uncertainty introduced by the transfer function linking rainfall with the TMI low-frequency channels, and 3) sampling errors owing to the narrow swath and infrequent visits of the TRMM sensors. The second approach evaluates the performance of the rainfall estimates from each of these orbital data products by accumulating them within a spatial domain and applying error decomposition methodologies. Microwave radiometers have taken unprecedented satellite images of the earth's weather, proving to be a valuable tool for quantitative estimation of precipitation from space. However, as mentioned earlier, with the widespread acceptance of microwave-based precipitation products it has also been recognized that they contain large uncertainties.
One such source of uncertainty is improper detection of the rainfall signature within radiometer footprints. To date, the most advanced passive microwave retrieval algorithms make use of databases constructed from cloud or numerical weather model simulations that associate calculated microwave brightness temperatures with physically plausible sample rain events. Delineation of the rainfall signature from microwave footprints, also known as rain/no-rain classification (RNC), is an essential step without which the succeeding retrieval technique (using the database) is easily corrupted. Although tremendous advances have been made to take RNC algorithms from simple empirical relations formulated for computational expedience to elaborate, computationally intensive schemes that effectively discriminate rainfall, a number of challenges remain. Most algorithms developed globally for land, ocean and coastal regions may not perform well for regional catchments of small areal extent. Motivated by this fact, the present work develops a regional rainfall detection algorithm based on the scattering index methodology for the land regions of the study area (a sketch of this methodology follows below). The performance of this algorithm, developed using the low-frequency channels (19 GHz and 22 GHz), is statistically tested for individual case study events during the 2011 and 2012 Indian summer monsoon months. Contingency table statistics and a performance diagram show superior performance of the algorithm over the land regions of the study area, with accurate rain detection observed in 95% of the case studies. An important limitation of this approach, however, is the comparatively poor detection of low-intensity stratiform rainfall. The second source of uncertainty addressed by the present thesis involves prediction of overland rainfall using the TMI low-frequency channels. Land, being a radiometrically warm and highly variable background, offers a myriad of complications for overland rain retrieval using a microwave radiometer (like TMI). Hence, the land rainfall algorithms of TRMM TMI have traditionally incorporated empirical relations between microwave brightness temperature (Tb) and rain rate, rather than relying on physically based radiative transfer modeling of rainfall (as implemented in the TMI ocean algorithm). In the present study, a sensitivity analysis is conducted using the Spearman rank correlation coefficient as the indicator, to identify the combination of TMI low-frequency channels most sensitive to the near-surface rain rate (NSR) from PR. Results indicate that the TMI channel combinations not only contain information about rainfall where liquid water drops are the dominant hydrometeors, but also aid in surface noise reduction over a predominantly vegetated land surface background. Further, the variations of the rainfall signature in these channel combinations have seldom been assessed properly, owing to their inherent uncertainties and highly nonlinear relationship with rainfall. Copula theory is a powerful tool for characterizing the dependency between complex hydrological variables, as well as for uncertainty modeling by ensemble generation. Hence, this work proposes a regional model using Archimedean copulas to study the dependency of TMI channel combinations on precipitation over the land regions of the Mahanadi basin, India, using version 7 orbital data from TMI and PR.
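As a rough illustration of the scattering-index detection step referenced above, the sketch below uses the global land formulation of Grody (1991), in which the 19- and 22-GHz channels predict the 85-GHz brightness temperature expected in the absence of scattering. The thesis fits its own regional coefficients, so the coefficients and threshold shown here are only indicative:

```python
import numpy as np

def scattering_index_land(tb19v, tb22v, tb85v):
    """Global land scattering index (Grody, 1991): F predicts the 85-GHz
    brightness temperature expected without scattering; rain depresses the
    observed TB85V below F. Regional algorithms refit these coefficients."""
    f = 451.9 - 0.44 * tb19v - 1.775 * tb22v + 0.00574 * tb22v**2
    return f - tb85v

# Illustrative footprints (brightness temperatures in kelvin).
tb19v = np.array([283.0, 279.0, 270.0])
tb22v = np.array([285.0, 282.0, 275.0])
tb85v = np.array([284.0, 266.0, 231.0])

si = scattering_index_land(tb19v, tb22v, tb85v)
raining = si > 10.0   # a commonly used threshold; tuned regionally
for s, r in zip(si, raining):
    print(f"SI = {s:6.1f} K -> {'rain' if r else 'no rain'}")
```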
Studies conducted for different rainfall regimes over the study area show the suitability of the Clayton and Gumbel copulas for modeling the convective and stratiform rainfall types for the majority of the intraseasonal months. Further, large ensembles of TMI Tb (from the most sensitive TMI channel combination) were generated conditional on various quantiles (25th, 50th, 75th, 95th) of both convective and stratiform rainfall. Comparatively greater ambiguity was observed in modeling extreme values of the convective rain type. Finally, the efficiency of the proposed model was tested by comparing its results with the traditionally employed linear and quadratic models; the results reveal the superior performance of the proposed copula-based technique. Another persistent source of uncertainty inherent in low-earth-orbiting satellites like TRMM arises from sampling errors of non-negligible proportion, owing to the narrow swath of the satellite sensors coupled with a lack of continuous coverage due to infrequent satellite visits. This study investigates the sampling uncertainty of seasonal rainfall estimates from PR, based on 11 years of the PR 2A25 data product over the Indian subcontinent. A statistical bootstrap technique is employed to estimate the relative sampling errors using the PR data themselves. Results verify the power-law scaling of relative sampling errors with respect to the space-time scale of measurement. Sampling uncertainty estimates for mean seasonal rainfall were found to exhibit seasonal variations. As a practical demonstration of the bootstrap technique, PR relative sampling errors over the subtropical Mahanadi river basin, India, were examined. Results revealed relative sampling errors of <30% (for 2° grids), <35% (for 1° grids), <40% (for 0.5° grids) and <50% (for 0.25° grids). With respect to rainfall type, the overall sampling uncertainty was found to be dominated by the sampling uncertainty of stratiform rainfall over the basin. To study the effect of sampling design on the relative sampling uncertainty, the study compares the resulting error estimates with those obtained from Latin hypercube sampling. Based on this study, it may be concluded that the bootstrap approach can be successfully used to ascertain the relative sampling errors of TRMM-like satellites over gauged or ungauged basins lacking in-situ validation data.
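The bootstrap estimate of relative sampling error admits a compact sketch: resample the available overpasses with replacement and measure the spread of the resampled seasonal means. The synthetic overpass data below merely stand in for the PR 2A25 observations:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical PR overpass-mean rain rates (mm/h) for one grid box over a
# season; in the thesis these come from 11 years of 2A25 orbital data.
overpass_rain = rng.gamma(shape=0.4, scale=5.0, size=90)

def bootstrap_relative_error(samples, n_boot=5000):
    """Relative sampling error of the seasonal mean, estimated by
    resampling the overpasses with replacement (bootstrap)."""
    n = len(samples)
    means = np.array([
        rng.choice(samples, size=n, replace=True).mean()
        for _ in range(n_boot)
    ])
    return means.std() / samples.mean()

err = bootstrap_relative_error(overpass_rain)
print(f"relative sampling error of seasonal mean: {err:.1%}")
```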
One of the important goals of the TRMM Ground Validation Program has been to estimate the random and systematic uncertainty associated with TRMM rainfall estimates. Disentangling the uncertainty in seasonal rainfall offered by the independent observations of TMI and PR makes it possible to identify errors and inconsistencies in the measurements of these instruments. Motivated by this, the present work examines the spatial error structure of daily precipitation derived from the version 7 TRMM instantaneous orbital data products through comparison with the APHRODITE data over a subtropical region, namely the Mahanadi river basin of the Indian subcontinent, for six seasons of rainfall from June 2002 to September 2007. The instantaneous products examined include the TMI and PR data products 2A12, 2A25 and 2B31 (combined data from PR and TMI). The spatial distribution of uncertainty in these data products was quantified using performance metrics derived from the contingency table. For seasonal daily precipitation over 1°x1° grids, the 2A12 data product showed greater skill in detecting and quantifying the volume of rainfall than the 2A25 and 2B31 data products. Error characterization using various error models revealed that the random errors from multiplicative error models were homoscedastic and better represented the rainfall estimates from the 2A12 algorithm. An error decomposition technique, performed to disentangle systematic and random errors, showed that the multiplicative error model representing rainfall from the 2A12 algorithm captured a greater percentage of the systematic error than those for the 2A25 or 2B31 algorithms. Results indicate that even though the radiometer-derived 2A12 product is known to suffer from many sources of uncertainty, the spatial and temporal analysis over the case study region shows that the 2A12 rainfall estimates are in very good agreement with the reference estimates for the data period considered. These findings document that proper characterization of the error structure of TMI and PR has wide implications for decision making, prior to incorporating the resulting orbital products into basin-scale hydrologic modeling. The upcoming GPM missions envision a constellation of microwave sensors that can provide instantaneous products with relatively negligible sampling error at daily or longer time scales. Owing to its simplicity and physical approach, this study offers an ideal basis for future improvements in uncertainty modeling of precipitation.
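A minimal sketch of a multiplicative error model and the systematic/random decomposition mentioned above, applied to synthetic satellite/reference pairs (the MSE shares reported are approximate because the cross term is ignored):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic matched pairs of reference (e.g., APHRODITE) and satellite
# (e.g., 2A12) daily rain over a grid box; values are illustrative only.
ref = rng.gamma(2.0, 4.0, size=500)
sat = 1.3 * ref**0.85 * rng.lognormal(0.0, 0.3, size=500)

# Multiplicative error model: log(sat) = log(a) + b*log(ref) + eps.
# The fitted part is the systematic error; the residual is random error.
b, log_a = np.polyfit(np.log(ref), np.log(sat), 1)
fitted = np.exp(log_a) * ref**b
systematic = fitted - ref
random_err = sat - fitted

total_mse = np.mean((sat - ref) ** 2)
print(f"systematic share of MSE: {np.mean(systematic**2) / total_mse:.1%}")
print(f"random share of MSE:     {np.mean(random_err**2) / total_mse:.1%}")
```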
216

GPU-enhanced power flow analysis / Calcul de Flux de Puissance amélioré grâce aux Processeurs Graphiques

Marin, Manuel 11 December 2015 (has links)
This thesis addresses the use of Graphics Processing Units (GPUs) to improve the Power Flow (PF) analysis of modern power systems. Currently, GPUs are challenged by applications exhibiting an irregular computational pattern, as is the case for most known methods of PF analysis. At the same time, PF analysis needs to be improved in order to cope with the new requirements of efficiency and accuracy arising from the Smart Grid concept. The relevance of GPU-enhanced PF analysis is twofold: on one hand, it expands the application domain of GPUs to a new class of problems; on the other hand, it consistently increases the computational capacity available for power system operation and design. The present work attempts to achieve this in two complementary ways: (i) by developing novel GPU programming strategies for available PF algorithms, and (ii) by proposing novel PF analysis methods that can exploit the numerous features of GPU architectures. Specific contributions on GPU computing include: (i) a comparison of two programming paradigms, namely regularity and load-balancing, for implementing the so-called treefix operations; (ii) a study of the impact of the representation format on performance and accuracy for fuzzy interval algebraic operations; and (iii) the use of architecture-specific design as a novel strategy to improve the performance scalability of applications. Contributions on PF analysis include: (i) the design and evaluation of a novel method for uncertainty assessment, based on the fuzzy interval approach; and (ii) the development of an intrinsically parallel method for PF analysis, which is not limited by Amdahl's law.
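The representation-format question for fuzzy interval arithmetic, contribution (ii), can be illustrated independently of the GPU: a fuzzy number stored as interval bounds at a set of alpha-cuts supports either a lower/upper or a midpoint/radius layout. The sketch below shows interval addition in both formats; the triangular fuzzy numbers are arbitrary examples, not the thesis's benchmark data:

```python
import numpy as np

# A fuzzy number can be represented by its interval bounds at a set of
# alpha-cuts. Two candidate storage formats are lower/upper bounds and
# midpoint/radius; both are sketched here for interval addition.
alphas = np.linspace(0.0, 1.0, 5)

def tri_cuts(a, m, b):
    """Alpha-cuts of a triangular fuzzy number (a, m, b), lower/upper format."""
    return np.stack([a + alphas * (m - a), b - alphas * (b - m)], axis=1)

def to_mid_rad(lu):
    """Convert lower/upper bounds to midpoint/radius format."""
    return np.stack([(lu[:, 0] + lu[:, 1]) / 2,
                     (lu[:, 1] - lu[:, 0]) / 2], axis=1)

x = tri_cuts(1.0, 2.0, 3.0)   # shape (n_alpha, 2)
y = tri_cuts(4.0, 5.0, 7.0)

add_lu = x + y                          # lower/upper: add bound-wise
add_mr = to_mid_rad(x) + to_mid_rad(y)  # mid adds, radius adds

print(add_lu[0], add_mr[0])   # support [5, 10] -> midpoint 7.5, radius 2.5
```

The two layouts are mathematically equivalent for addition; they differ in memory access pattern and rounding behaviour, which is what matters on a GPU.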
217

Modeling sea-level rise uncertainties for coastal defence adaptation using belief functions / Utilisation des fonctions de croyance pour la modélisation des incertitudes dans les projections de l'élévation du niveau marin pour l'adaptation côtière

Ben Abdallah, Nadia 12 March 2014 (has links)
Coastal adaptation is imperative to deal with the rise of the global sea level caused by ongoing global warming. However, when defining adaptation actions, coastal engineers encounter substantial uncertainties in the assessment of future hazards and risks. These uncertainties may stem from limited knowledge (e.g., about the magnitude of future sea-level rise) or from the natural variability of some quantities (e.g., extreme sea conditions). Proper consideration of these uncertainties is of principal concern for efficient design and adaptation. The objective of this work is to propose a methodology for uncertainty analysis based on the theory of belief functions, an uncertainty formalism that offers greater flexibility than probability for handling both aleatory and epistemic uncertainties. In particular, it allows experts' incomplete knowledge (quantiles, intervals, etc.) to be represented more faithfully, and multi-source evidence to be combined taking into account its dependences and reliabilities. Statistical evidence can be modeled by likelihood-based belief functions, which are simply the translation of certain inference principles into evidential terms. By exploiting the mathematical equivalence between belief functions and random intervals, uncertainty can be propagated through models by Monte Carlo simulations. We use this method to quantify the uncertainty in projections of the elevation of the global sea level by 2100 and evaluate its impact on coastal risk indicators used in coastal design. The sea-level rise projections are derived from physical modelling, expert elicitation, and historical sea-level measurements. Then, within a methodologically oriented case study, we assess the impact of climate change on extreme sea conditions and evaluate the reinforcement of a typical coastal defence asset needed to maintain its functional performance.
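A minimal sketch of the belief-function propagation described above: a consonant belief function on sea-level rise is treated as a random interval, combined with an aleatory surge variable, and read off as belief/plausibility bounds on exceedance of a crest level. The focal sets, masses, surge distribution and crest height below are all invented for illustration, not the thesis's elicited values:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# Expert knowledge on sea-level rise by 2100 (metres), encoded as nested
# intervals (focal sets) with masses -- a simple consonant belief function.
focal_sets = np.array([[0.4, 0.7], [0.3, 0.9], [0.2, 1.2]])
masses = np.array([0.5, 0.3, 0.2])

# Sample a random interval for SLR, and a random (aleatory) surge level.
idx = rng.choice(len(focal_sets), size=n, p=masses)
slr_lo, slr_hi = focal_sets[idx, 0], focal_sets[idx, 1]
surge = rng.gumbel(loc=2.0, scale=0.3, size=n)   # extreme sea level, m

# Interval propagation through a monotone model: total level = surge + SLR.
total_lo, total_hi = surge + slr_lo, surge + slr_hi

# Belief and plausibility that the design crest level (3.2 m) is exceeded.
crest = 3.2
bel = np.mean(total_lo > crest)  # exceedance certain over the whole interval
pl = np.mean(total_hi > crest)   # exceedance possible for some interval value
print(f"Bel(exceed) = {bel:.3f}, Pl(exceed) = {pl:.3f}")
```

The gap between Bel and Pl reflects the epistemic part of the uncertainty, which a single probability would hide.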
218

KARTOTRAK, integrated software solution for contaminated site characterization: presentation of 3D geomodeling software, held at IAMG 2015 in Freiberg

Wagner, Laurent 03 November 2015 (has links)
Kartotrak software allows optimal waste classification and avoids unnecessary remediation. It has been designed for those involved in environmental site characterization projects (site owners, safety authorities or contractors) who need to locate and estimate contaminated soil volumes confidently.
219

Approche bayésienne de l'évaluation de l'incertitude de mesure : application aux comparaisons interlaboratoires / Bayesian approach for the evaluation of measurement uncertainty applied to interlaboratory comparisons

Demeyer, Séverine 04 March 2011 (has links)
Structural equation modelling is a widespread approach in a variety of domains, and it is applied here for the first time in metrology, to the treatment of interlaboratory comparison data. Structural equation models with latent variables (SEMs) are multivariate models used to model causality relationships among observed variables (the data). It is assumed that the data can be grouped into disjoint blocks, each describing a latent concept modelled by a latent variable. The correlation structure of the observed variables is thereby summarized in the correlation structure of the latent variables. A Bayesian approach to SEMs is proposed, based on the analysis of the correlation matrix of the latent variables and using parameter expansion to overcome the scale indeterminacy of the latent variables and to improve the convergence of the Gibbs sampler. The power of the structural approach permits a rich and flexible modelling of measurement bias, which improves the reliability of the consensus value and of its associated uncertainty in a fully Bayesian framework. Under additional hypotheses, the approach innovatively allows the contributions of the bias variables to the laboratories' biases to be computed. More generally, a Bayesian framework is proposed for improving the quality of measurements. The interest of structural modelling of measurement bias is illustrated on environmental interlaboratory comparisons.
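The thesis's SEM treatment is considerably richer, but the Bayesian estimation of a consensus value by Gibbs sampling can be conveyed with a much simpler random-effects model. The laboratory values, stated uncertainties and priors below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Interlaboratory data: reported values and stated standard uncertainties.
x = np.array([10.3, 10.1, 10.6, 9.9, 10.4])
u = np.array([0.20, 0.15, 0.25, 0.20, 0.30])

# Random-effects model: x_i ~ N(mu + b_i, u_i^2), bias b_i ~ N(0, tau^2).
# Basic Gibbs sampler with a flat prior on mu and a vague inverse-gamma
# prior on tau^2 (a deliberately simplified stand-in for the SEM approach).
n_iter, burn = 20_000, 5_000
mu, tau2 = x.mean(), 0.1
mu_draws = []
for it in range(n_iter):
    # b_i | rest: normal, precision 1/u_i^2 + 1/tau^2.
    prec = 1 / u**2 + 1 / tau2
    b = rng.normal((x - mu) / u**2 / prec, np.sqrt(1 / prec))
    # mu | rest: normal (flat prior), weighted mean of bias-corrected values.
    w = 1 / u**2
    mu = rng.normal(np.sum(w * (x - b)) / w.sum(), np.sqrt(1 / w.sum()))
    # tau^2 | rest: inverse-gamma(a0 + n/2, b0 + sum(b^2)/2).
    a0, b0 = 1.0, 0.01
    tau2 = 1 / rng.gamma(a0 + len(x) / 2, 1 / (b0 + np.sum(b**2) / 2))
    if it >= burn:
        mu_draws.append(mu)

mu_draws = np.asarray(mu_draws)
print(f"consensus value: {mu_draws.mean():.3f} +/- {mu_draws.std():.3f}")
```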
220

Komplexní teoretická analýza metody sloupku pro zjišťování zbytkových napětí / Comprehensive Theoretical Analysis of the Ring-Core Method for Residual Stress Determination

Civín, Adam January 2012 (has links)
A comprehensive analysis of the ring-core method used for the determination of residual stresses in mechanical components is presented in this thesis. The principles, advantages, disadvantages and applicability of this semi-destructive experimental method are discussed, and the ring-core method is compared with the more frequently used hole-drilling method. All aspects of the ring-core method are analyzed by the finite element method. FE simulations, performed on a universal numerical model, verified the principles of the integral method and of the incremental strain method. The FE simulations also provided basic information for the uncertainty analysis, which significantly affects the accuracy of residual stress measurement. The main goal of the present work is to create a global overview of all aspects of the ring-core method, elaborated in a clear and comprehensive form.
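The integral method verified by the FE simulations can be sketched for a single strain component: the relieved strains measured after successive milling increments are related to the per-increment residual stresses by a lower-triangular calibration matrix obtained from FE analysis. The matrix entries and strain readings below are invented placeholders, not calibrated ring-core coefficients:

```python
import numpy as np

# Integral-method sketch: the strain relieved after milling increment j
# reflects the stresses in all increments i <= j through a lower-triangular
# calibration matrix A (computed by FE analysis of the ring-core geometry).
A = np.array([
    [-0.40,  0.00,  0.00,  0.00],
    [-0.55, -0.35,  0.00,  0.00],
    [-0.60, -0.50, -0.30,  0.00],
    [-0.62, -0.55, -0.45, -0.25],
]) * 1e-6   # strain per MPa, illustrative values only

# Cumulative relieved strains measured after each depth increment.
eps_measured = np.array([-42.0, -85.0, -118.0, -140.0]) * 1e-6

# Residual stress in each depth increment (MPa): solve A @ sigma = eps.
sigma = np.linalg.solve(A, eps_measured)
print(np.round(sigma, 1))
```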
