121

Monitoring stokové sítě ve městě Brně / Monitoring of the sewerage system in the city of Brno

Dvorský, Petr January 2019 (has links)
This thesis deals with the monitoring of the sewerage system in the city of Brno. The research part gives an overview of the measuring equipment most commonly used for sewer network monitoring – level meters, flow meters and automatic samplers. The practical part is devoted to the monitoring of the Brno sewerage system, carried out for a partial evaluation of the benefits of the project „Reconstruction and completion of sewerage in Brno", realized in 2012–2014. The measuring campaign took place from June to October 2017. In the course of the work, the measuring equipment was installed and the measured data were collected. After the campaign was finished, the data were evaluated from several aspects – the number of overflows, the quantity of overflowing water, the dilution ratio and the quantity of discharged pollution. A hydraulic and structural assessment of the sewer network pipelines and overflow chambers was then carried out, together with an assessment of the quantity of rainwater. The results of the evaluation were compared with the results of previous monitoring campaigns and with data obtained before the project was realized. On this basis, the benefit of the project was determined.
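For illustration, the overflow count and spilled volume mentioned above can be computed directly from a logged discharge time series. A minimal sketch, assuming a pandas series of overflow discharge at a fixed time step (the series layout and time step are hypothetical, not taken from the thesis):

```python
import pandas as pd

def overflow_stats(q_overflow: pd.Series, step_s: float = 60.0):
    """Count overflow events and total spilled volume from a discharge
    time series (m^3/s) sampled every `step_s` seconds.

    An 'event' is a contiguous run of non-zero overflow discharge.
    """
    active = q_overflow > 0
    # A new event starts wherever the series switches from inactive to active.
    n_events = int((active & ~active.shift(fill_value=False)).sum())
    volume_m3 = float(q_overflow[active].sum() * step_s)
    return n_events, volume_m3

# Toy series: two events (3 + 2 active minutes), 2.7 m^3/s-min total.
q = pd.Series([0, 0.4, 0.7, 0.2, 0, 0, 0.9, 0.5, 0, 0])
print(overflow_stats(q))  # (2, 162.0)
```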
122

Integrální vzorkovač sedimentů / Integral sediment sampler

Zouhar, Josef January 2009 (has links)
This thesis deals with the analysis of the flow inside an integral SPM sampler. The flow is described using both a single-phase approach and a multiphase (water–solid particles) approach, and the results of computational modeling are compared with those of experimental modeling. An experimental box was constructed and its design is described. The computational and experimental modeling methods used are presented, and an optimal computational model is recommended. The topic of this thesis was chosen in cooperation with the Czech Geological Survey within the frame of the MŽP project „SP/1b7/156/07".
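As context for the water–solid particle modeling, the Stokes settling velocity gives the characteristic speed at which small particles sink inside such a sampler. A short sketch of this standard relation (a generic illustration with assumed particle properties, not code from the thesis):

```python
def stokes_settling_velocity(d_m, rho_p=2650.0, rho_f=1000.0,
                             g=9.81, mu=1.0e-3):
    """Terminal settling velocity (m/s) of a small sphere in water,
    valid for particle Reynolds numbers well below 1:
        v_s = (rho_p - rho_f) * g * d^2 / (18 * mu)
    Defaults: quartz particle in water at ~20 degrees C (assumed).
    """
    return (rho_p - rho_f) * g * d_m ** 2 / (18.0 * mu)

# A 50-micron quartz grain settles at roughly 2.2 mm/s in water.
print(stokes_settling_velocity(50e-6))  # ~0.00225 m/s
```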
123

Predikce průměrných hodinových koncentrací přízemního ozonu z měření pasivními dozimetry / Prediction of mean hourly values of surface ozone concentrations from passive sampler measurements

Sinkulová, Michaela January 2020 (has links)
According to current knowledge, ground-level ozone is the air pollutant that contributes most to damage to ecosystems. To calculate the key indicators of potential ecosystem damage, such as the exposure index AOT40 and the stomatal flux, it is important to know the hourly ozone concentrations, which are the input data for both calculations. For environmental studies, O3 concentrations are usually not measured continuously but with passive (diffusive) samplers, which are exposed for a longer period (usually 1 week to 1 month) and thus yield the average concentration over that period. The aim of this diploma thesis is the prediction of hourly ground-level ozone concentrations from diffusive sampler measurements carried out in the period 2006–2010 in the Jizerské hory mountains. Monitoring always took place for 2 weeks during the vegetation season (April–October) at localities at various altitudes (714–1,000 m above sea level). Ogawa diffusive samplers were used. From these average concentrations and meteorological data, hourly values of ground-level ozone concentrations were calculated according to a model from a published study, and these were compared with measurements from an...
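The AOT40 index mentioned above accumulates the excess of hourly ozone concentrations over a 40 ppb threshold during daylight hours of the growing season. A minimal sketch of the usual definition (the daylight window 08:00–20:00 follows common convention and is not taken from the thesis):

```python
import pandas as pd

def aot40(hourly_ppb: pd.Series) -> float:
    """AOT40 in ppb*h: sum of (c - 40) over daylight hours (08-20)
    for all hourly ozone concentrations c exceeding 40 ppb.
    `hourly_ppb` must carry a DatetimeIndex of hourly values.
    """
    daylight = hourly_ppb.between_time("08:00", "19:59")
    excess = (daylight - 40.0).clip(lower=0.0)
    return float(excess.sum())

# Toy example: three daylight hours at 55, 38 and 70 ppb.
idx = pd.date_range("2007-06-01 10:00", periods=3, freq="h")
print(aot40(pd.Series([55.0, 38.0, 70.0], index=idx)))  # 45.0
```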
124

A study of flow fields during filling of a sampler

Zhang, Zhi January 2009 (has links)
In the steel industry, more and more attention has recently been paid to decreasing the number and size of non-metallic inclusions in the final products. Therefore, greater efforts are being made to monitor inclusion size distributions during the metallurgical process, especially during secondary steelmaking. Liquid sampling is one of the commonly applied methods for monitoring the inclusion size distribution in ladles, for example, during secondary steelmaking. A crucial point here is that the steel sampler should be filled and solidified without changing the inclusion characteristics that exist at steelmaking temperatures. In order to preserve the original inclusion sizes and distributions in the extracted samples, it is important to avoid their collision and coagulation inside the sampler during filling. Therefore, one of the first steps is to investigate the flow pattern inside the sampler during filling, in order to gain more in-depth knowledge of the sampling process and make sure that this influence is minimized. The main objective of this work is a fundamental study of the above-mentioned sampler filling process. A production sampler employed in industry was scaled up according to Froude number similarity for the experimental study. Particle Image Velocimetry (PIV) was used to capture the flow field and calculate the velocity vectors throughout the experiments. In addition, a mathematical model was developed for an in-depth investigation of the flow pattern inside the sampler during its filling. Two turbulence models were applied in the numerical study, the realizable k-ε model and the Wilcox k-ω model, and the predictions were compared to the experimental results obtained by the PIV measurements. The comparison showed fairly good agreement between the PIV measurements and the calculations with the Wilcox k-ω model. Thus, it is concluded that the Wilcox k-ω model can be used in the future to predict the filling of steel samplers.
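Froude similarity, used above for the scale-up, keeps Fr = v/√(gL) equal between model and prototype, so velocities scale with the square root of the length ratio and volumetric flow rates with that ratio to the power 5/2. A small sketch (the scale factor and flow values are assumed examples, not the ones used in the thesis):

```python
import math

def froude_scaled(v_proto, q_proto, scale):
    """Scale prototype velocity (m/s) and flow rate (m^3/s) to a
    geometrically scaled model under Froude similarity:
        v_model = v_proto * sqrt(scale)
        Q_model = Q_proto * scale**2.5
    where scale = L_model / L_prototype.
    """
    return v_proto * math.sqrt(scale), q_proto * scale ** 2.5

# A model 4x larger than the production sampler (scale = 4):
v_m, q_m = froude_scaled(v_proto=1.0, q_proto=1.0e-4, scale=4.0)
print(v_m, q_m)  # 2.0 m/s, 3.2e-3 m^3/s
```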
125

Analytical determination of emerging contaminants by using a new graphene-based enrichment material for solid-phase extraction and passive sampling

Liu, Yang 24 March 2020 (has links)
Emerging contaminants represent newly identified organic chemical pollutants that are not yet covered by routine monitoring and regulatory programs. Current research on these contaminants is greatly hindered by the shortage of analytical methods, due to the complex matrices, extremely low concentrations and their "emerging" nature. In this study, innovative analytical and monitoring methods were developed and validated for the determination of emerging pollutants in water (including pharmaceuticals and personal care products, pesticides and artificial sweeteners), based on a graphene-silica composite used as the solid-phase extraction (SPE) sorbent and as the receiving phase in a passive sampler. Graphene, a new allotropic member of the carbon family, is considered a promising candidate for a sorption material with high loading capacity because of its ultra-high specific surface area and large delocalized π-electron-rich structure. The composite employed in this work was synthesized by using a cross-linking agent to covalently combine the carboxylic acid groups of graphene oxide with the amino groups of modified silica gel. The graphene-silica composite was then obtained after hydrothermal treatment in a microwave autoclave, as demonstrated by X-ray diffraction (XRD). The analytical procedure entails SPE followed by high performance liquid chromatography coupled to tandem mass spectrometry (HPLC-MS/MS). Several crucial parameters were optimized to improve the recovery of the analytes, including the amount of sorbent, the graphene oxide/amino-silica ratio and the pH value of the water samples. The best recoveries were achieved with 100 mg of 10 % (w/w) graphene-silica composite and were over 70 % for all analytes except four artificial sweeteners, ranitidine and triclosan. Compared with its commercial counterpart Oasis HLB, variation of the water sample pH has less effect on the recoveries, making the graphene composite a potential receiving phase for a monitoring tool. The batch-to-batch reproducibility was verified on six independently packed SPE cartridges with graphene-silica composites from two synthesis batches, showing relative standard deviations (RSDs) in the range of 8.3 % to 19.1 %, except for ibuprofen and saccharin. The cartridges proved to be reusable for at least 10 consecutive extractions, with RSD < 14.9 %, except for ibuprofen and diclofenac. The Chemcatcher® passive sampler is frequently used for monitoring polar organic chemicals in surface water. Its uptake kinetics must be quantified in order to calculate time-weighted average (TWA) concentrations. A series of calibration experiments was conducted in beaker renewal experiments as well as in a flow-through system, with styrene-divinylbenzene (SDB-XC) disks and the graphene-silica composite as the receiving phase. The results of the beaker renewal experiments showed that the uptake kinetics of the accumulated compounds remained linear for 2 weeks with all Chemcatcher® configurations. The innovative configuration, using graphene-silica composite powder placed between two PES membranes, was able to accumulate eleven of the selected compounds with uptake rates (Rs) from 0.01 L/day (acesulfame K and sucralose) to 0.08 L/day (clothianidin), while its commercial counterpart, SDB-XC disks with polyethersulfone (PES) membranes, accumulated seven substances with Rs from 0.02 L/day (sucralose and clothianidin) to 0.15 L/day (carbamazepine).
In the flow-through system, when Chemcatchers® were equipped with SDB-XC disks without PES membranes, the linear uptake range for the majority of compounds was only one week, except for atrazine. The Rs of the accumulated compounds ranged from 0.16 L/day (chloramphenicol) to 1.04 L/day (metoprolol), higher than for the same substances in the beaker renewal experiments, in which the Rs of chloramphenicol and metoprolol were 0.09 L/day and 0.56 L/day, respectively. However, when PES membranes were employed, the uptake kinetics in both calibration designs were comparable: the Rs of the accumulated compounds ranged from 0.035 L/day (sucralose) to 0.17 L/day (carbamazepine) for the configuration with SDB-XC disks covered by PES membranes, and from 0.01 L/day (gemfibrozil) to 0.08 L/day (clothianidin) for the configuration with the graphene-silica composite. Moreover, the uptake remained linear for two weeks. The developed Chemcatcher® method was successfully applied in real surface waters. 1H-benzotriazole, tolyltriazole and caffeine were the main contaminants in the Elbe River and the Saidenbach drinking water reservoir. The results of the summer and autumn monitoring periods were not significantly different.
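In the linear (kinetic) uptake regime described above, the TWA water concentration follows from the mass accumulated in the receiving phase and the uptake rate. A minimal sketch of this standard passive sampling relation (the example numbers are illustrative, reusing the carbamazepine Rs quoted above):

```python
def twa_concentration(mass_ng, rs_l_per_day, days):
    """Time-weighted average water concentration (ng/L) in the linear
    (kinetic) uptake regime of a passive sampler:
        C_TWA = m / (Rs * t)
    m  : analyte mass accumulated in the receiving phase (ng)
    Rs : uptake (sampling) rate (L/day)
    t  : exposure time (days)
    """
    return mass_ng / (rs_l_per_day * days)

# 21 ng of carbamazepine after 14 days at Rs = 0.15 L/day -> 10 ng/L.
print(twa_concentration(21.0, 0.15, 14.0))  # 10.0
```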
126

Design, Evaluation, and Particle Size Characterization of an In-Duct Flat Media Particle Loading Test System for Nuclear-Grade ASME AG-1 HEPA Filters

Wong, Matthew Christopher 06 May 2017 (has links)
The design and performance evaluation of in-duct, isokinetic samplers capable of testing flat-sheet, nuclear-grade High Efficiency Particulate Air (HEPA) filter media simultaneously with a radial filter testing system is discussed in this study. Evaluations within this study utilize challenge aerosols of varying particle diameters and masses, such as hydrated alumina, Arizona test dust, and flame-generated acetylene soot. The accumulated mass and pressure drop for each in-duct sampler are correlated to the full-scale radial filter accumulated mass, from initial loading up to a pressure drop of 10 in w.c. SEM imaging of samples at 25%, 50%, 75% and 100% loading verifies the particle sizes measured by the instrumentation used, revealing filter clogging resulting from particle impaction and interception. The U.S. Department of Energy requires prototype nuclear-grade HEPA filters to be qualified under ASME AG-1 standards. The data obtained can be used to determine baseline performance characteristics of pleated radial filter media for increased loading integrity and lifecycle endurance.
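Isokinetic sampling, as used for the in-duct samplers above, requires the velocity entering the sampler nozzle to match the duct velocity, which fixes the nozzle diameter for a given sample flow rate. A short sketch of that standard sizing relation (the flow values are assumed examples, not figures from this study):

```python
import math

def isokinetic_nozzle_diameter(q_sample_m3s, v_duct_ms):
    """Nozzle diameter (m) such that the inlet velocity equals the
    duct velocity:  v = Q / A  =>  d = sqrt(4 Q / (pi v)).
    """
    return math.sqrt(4.0 * q_sample_m3s / (math.pi * v_duct_ms))

# 30 L/min of sample flow in a 10 m/s duct needs a ~8 mm nozzle.
d = isokinetic_nozzle_diameter(30e-3 / 60.0, 10.0)
print(f"{d * 1e3:.1f} mm")  # ~8.0 mm
```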
127

Incorporation of Genetic Marker Information in Estimating Model Parameters for Complex Traits with Data From Large Complex Pedigrees

Luo, Yuqun 20 December 2002 (has links)
No description available.
128

Implementing a Systematic Gibbs Sampler Method to Explore Probability Bias in AI Agents

Bisht, Charu January 2024 (has links)
In an era increasingly shaped by artificial intelligence (AI), the necessity for unbiased decision-making from AI systems intensifies. Various psychological theories make the inherent biases in human decision-making evident. Prospect Theory, prominent among them, uses a probability weighting function (PWF) to gain insight into human decision processes. This observation prompts an intriguing question: can this framework be extended to comprehend AI decision-making? This study employs a systematic Gibbs sampler method to measure the probability weighting function of an AI agent and validates this methodology against a dataset comprising 1 million distinct AI decision strategies. Its application is then exemplified on Recurrent Neural Networks (RNNs) and Artificial Neural Networks (ANNs). This allows us to discern the nuanced shapes of the PWFs inherent in ANNs and RNNs, thereby facilitating informed speculation on the potential presence of "probability bias" within AI. In conclusion, this research serves as a foundational step in the exploration of "probability bias" in AI decision-making. The demonstrated reliability of the systematic Gibbs sampler method contributes to ongoing research, primarily by enabling the extraction of PWFs. The emphasis here lies in laying the groundwork: obtaining the PWFs from AI decision processes. The subsequent phases, involving in-depth understanding and deductive conclusions about the implications of these PWFs, fall outside the scope of this study. With the ability to discern the shapes of PWFs for AI, this research paves the way for future investigations and various tests to unravel the deeper meaning of probability bias in AI decision-making.
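For reference, a widely used one-parameter PWF is the Tversky-Kahneman (1992) form; the abstract does not specify which parameterization the thesis fits, so this particular curve is an illustrative assumption:

```python
def tk_weighting(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman (1992) probability weighting function:
        w(p) = p^g / (p^g + (1 - p)^g)^(1/g)
    For g < 1 it overweights small probabilities and underweights
    large ones, giving the classic inverse-S shape.
    """
    num = p ** gamma
    return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(p, round(tk_weighting(p), 3))
# w(0.01) ~ 0.055 > 0.01 (overweighting); w(0.99) ~ 0.912 < 0.99
```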
129

Sélection bayésienne de variables et méthodes de type Parallel Tempering avec et sans vraisemblance / Bayesian variable selection and Parallel Tempering methods with and without likelihood

Baragatti, Meïli 10 November 2011 (has links)
This thesis is divided into two main parts. In the first part, we propose a Bayesian variable selection method for probit mixed models. The objective is to select a few relevant variables among tens of thousands while taking into account the design of a study, and in particular the fact that several datasets are merged together. The probit mixed model used is considered as part of a larger hierarchical Bayesian model, and the dataset is introduced as a random effect. The proposed method extends the work of Lee et al. (2003). The first step is to specify the model and the prior distributions; in particular, we use the g-prior of Zellner (1986) for the fixed regression coefficients. In a second step, we use a Metropolis-within-Gibbs algorithm combined with the grouping (or blocking) technique of Liu (1994). This choice has both theoretical and practical advantages. The method developed is applied to merged microarray datasets of patients with breast cancer. However, this method has a limit: the covariance matrix involved in the g-prior must not be singular. There are two standard cases in which it is singular: when the number of observations is lower than the number of variables, or when some variables are linear combinations of others. In such situations we propose to modify the g-prior by introducing a ridge parameter, together with a simple way to choose the associated hyper-parameters. The resulting prior is a compromise between the classical g-prior and a prior assuming independence of the regression coefficients, and is close to a prior previously proposed by Gupta and Ibrahim (2007). In the second part, we develop two new population-based MCMC methods. For complex models with many parameters whose likelihood can nevertheless be computed, the Equi-Energy Sampler (EES) of Kou et al. (2006) appears to be more efficient than the classical Parallel Tempering (PT) algorithm introduced by Geyer (1991). However, the EES is difficult to use in combination with a Gibbs sampler and requires substantial storage. We propose an algorithm combining PT with the principle of exchange moves between chains with similar energy levels, in the spirit of the EES. This adaptation, called Parallel Tempering with Equi-Energy Moves (PTEEM), keeps the original idea that gives the EES its strength while ensuring good theoretical properties and easy use in combination with a Gibbs sampler. Finally, in some complex models the inference can be difficult because computing the likelihood of the data is too costly, or even impossible. Many likelihood-free methods (or Approximate Bayesian Computation methods) have been developed. By analogy with Parallel Tempering, we propose a method called ABC-Parallel Tempering (Likelihood-Free Parallel Tempering), based on MCMC theory and on a population of chains, which allows exchanges between them.
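For context, the exchange move that PTEEM adapts is the standard Parallel Tempering swap: two chains at inverse temperatures β_i and β_j propose to exchange states, accepted with a Metropolis probability that depends on the energy difference. A minimal sketch of that classical move (a generic illustration, not the PTEEM implementation, which additionally restricts exchanges to chains in the same energy ring):

```python
import math
import random

def pt_swap(states, energies, betas, i, j):
    """Propose swapping the states of chains i and j in Parallel
    Tempering. For targets pi_b(x) ~ exp(-b * E(x)), the acceptance
    probability is
        a = min(1, exp((beta_i - beta_j) * (E_i - E_j)))
    Mutates `states`/`energies` in place; returns True if accepted.
    """
    log_a = (betas[i] - betas[j]) * (energies[i] - energies[j])
    if random.random() < math.exp(min(0.0, log_a)):
        states[i], states[j] = states[j], states[i]
        energies[i], energies[j] = energies[j], energies[i]
        return True
    return False

# Toy usage: 4 chains on a 1-D energy E(x) = x^2 / 2.
betas = [1.0, 0.7, 0.4, 0.1]
states = [0.1, 0.5, -1.2, 2.0]
energies = [x * x / 2 for x in states]
print(pt_swap(states, energies, betas, 2, 3))
```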
130

Essays in Total Factor Productivity measurement

Severgnini, Battista 16 August 2010 (has links)
This dissertation consists of theoretical and empirical contributions to the study of Total Factor Productivity (TFP) measurement. The first chapter surveys the literature on the most widely used techniques for measuring TFP and discusses the limits of these frameworks. The second chapter considers data generated from a Real Business Cycle model and studies the quantitative extent of measurement error in the Solow residual as a measure of TFP growth when the capital stock is measured with error and when capacity utilization and depreciation are endogenous. Furthermore, it proposes two alternative measurements of TFP growth which do not require capital stocks. The third chapter proposes a new methodology based on state-space models in a Bayesian framework. Applying the Kalman filter to artificial data, it proposes a computation of the initial condition for productivity growth based on the properties of the Malmquist index. The fourth chapter introduces a new approach for identifying possible spillovers emanating from new technologies on productivity, combining a counterfactual decomposition derived from the main properties of the Malmquist index with the econometric technique introduced by Machado and Mata (2005).
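The Solow residual discussed in the second chapter is the part of output growth not explained by share-weighted input growth. A minimal sketch of the standard growth accounting formula (the Cobb-Douglas capital share and the data are illustrative assumptions):

```python
import math

def solow_residual_growth(y, k, l, alpha=0.33):
    """TFP growth between two periods under Cobb-Douglas
    Y = A * K^alpha * L^(1-alpha):
        dln A = dln Y - alpha * dln K - (1 - alpha) * dln L
    y, k, l are (previous, current) pairs of output, capital, labor.
    """
    dln = lambda pair: math.log(pair[1] / pair[0])
    return dln(y) - alpha * dln(k) - (1.0 - alpha) * dln(l)

# Output +3%, capital +2%, labor +1% -> TFP growth ~ 1.6%.
g = solow_residual_growth((100, 103), (300, 306), (50, 50.5))
print(f"{g:.4f}")  # ~0.0164
```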
