311

The Effects of Water Quantification on Tribal Economies: Evidence from the Western U.S.

Deol, Suhina January 2017 (has links)
This paper examines economic factors and water rights quantification across the economies of 95 Native American reservations in the western United States (U.S.). The study addresses the issue in two parts: (1) the characteristics of reservations that quantify their water rights compared with those that do not, and (2) the effects of water rights quantification on reservation economic characteristics. Data were compiled from the U.S. Census Bureau, the USDA, water specialists, court decrees, news articles, and scholarly papers. Results show that tribes that operate casinos and earn higher revenues from agricultural goods are more likely to have quantified their water rights. Tribes with quantified water rights also had higher income levels. This study can help tribes design sustainable water management policies and economies on tribal reservations.
312

Properties and Classification of Separable Hamiltonians Admitting Third-Order Integrals of Motion

Gravel, Simon January 2003 (has links)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
313

Past Participle in Contemporary Spanish

Mrkvová, Jana January 2013 (has links)
The topic of this thesis is the past participle and its use in the contemporary Spanish language system. The thesis is both theoretical and empirical in character. The theoretical part delineates the Spanish past participle in the context of non-finite verb forms and explores its general morphological and syntactic-semantic features. The empirical part consists of two case studies of selected past participle constructions, dar por + past participle and una vez + past participle, based on language research in the parallel corpus InterCorp and the monolingual corpus CREA. The aim of the research is to identify the particular past participles forming the core of the above-mentioned constructions and, by way of contrastive analysis, to describe how these constructions can be expressed in Czech. The individual findings are quantified and documented with examples.
314

Inverse Problem and Uncertainty Quantification: Application to Compressible Gas Dynamics

Birolleau, Alexandre 30 April 2014 (has links)
This thesis deals with uncertainty propagation and the resolution of inverse problems, together with their acceleration via Polynomial Chaos. The aim is to present a state of the art and a numerical analysis of this stochastic spectral method, in order to understand its pros and cons when tackling the probabilistic study of hydrodynamical instabilities in Richtmyer-Meshkov shock tube experiments. The first chapter is introductory and explains the stakes of accurately accounting for uncertainties in compressible gas dynamics simulations. The second chapter is both an illustrated state of the art on generalized Polynomial Chaos and a full numerical analysis of the method, keeping in mind the final application to hydrodynamical problems developing shocks and discontinuous solutions. In this chapter we introduce a new method, iterative generalized Polynomial Chaos, which improves on generalized Polynomial Chaos, especially for non-smooth solutions; the chapter also contains a complete numerical analysis of this new method. Chapter three corresponds to a recently accepted publication in Communications in Computational Physics. It surveys stochastic inverse problems and introduces Bayesian inference, and it shows how the Bayesian inference can be accelerated using the iterative generalized Polynomial Chaos described in the previous chapter; theoretical convergence of the acceleration is established and illustrated on several test cases. The last chapter applies the material of the two preceding chapters to a complex and ambitious compressible gas dynamics problem, a physically unstable flow in a Richtmyer-Meshkov shock tube configuration, together with a deepened study of the physico-numerical phenomena at stake. Finally, the appendix presents some additional research paths briefly explored during this thesis.
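As an illustrative aside (not taken from the thesis), the core of a Polynomial Chaos expansion — projecting a model output onto orthogonal polynomials of a standard random input — can be sketched in a few lines. Here the "model" is simply u(ξ) = exp(ξ) with ξ ~ N(0,1), a stand-in for a real hydrodynamics code:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Gauss quadrature for the probabilists' Hermite weight exp(-x^2/2)
nodes, weights = hermegauss(40)
weights = weights / np.sqrt(2.0 * np.pi)   # normalize against the N(0,1) density

f = np.exp                                 # stand-in "model": u(xi) = exp(xi), xi ~ N(0,1)
order = 8
coeffs = np.zeros(order + 1)
for k in range(order + 1):
    e_k = np.zeros(order + 1)
    e_k[k] = 1.0
    He_k = hermeval(nodes, e_k)            # He_k evaluated at the quadrature nodes
    # Projection: c_k = E[f(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!
    coeffs[k] = np.dot(weights, f(nodes) * He_k) / math.factorial(k)

def surrogate(xi):
    """Evaluate the truncated PC expansion sum_k c_k He_k(xi)."""
    return hermeval(xi, coeffs)

pc_mean = coeffs[0]                        # the PC mean is c_0; exactly sqrt(e) here
```

Once the coefficients are in hand, the surrogate replaces the expensive model in sampling-based uncertainty propagation; the slow convergence of such truncated expansions on discontinuous solutions is precisely what motivates the iterative variant described above.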
315

A Multi-Method Approach for the Quantification of Surface Amine Groups on Silica Nanoparticles

Sun, Ying 29 July 2019 (has links)
As nanomaterials continue to garner interest across a wide range of industries and scientific fields, commercial suppliers have met growing consumer demand by readily offering custom particles with size, shape, and surface functionality made to order. By circumventing the challenging and complex synthesis of functionalized nanoparticles, these businesses seek to provide greater access for the experimentation and application of these nanoscale platforms. In many cases, amine functional groups are covalently attached as a surface coating on a nanoparticle to provide a starting point for chemical derivatization and, commonly, conjugation of biomolecules in medical science applications. Successful conjugation can improve the compatibility, interfacing, and activity of therapeutic and diagnostic nanomedicines. Amines are among the most popular reactive groups used in bioconjugation pathways owing to the many high-yield alkylation and acylation reactions they are involved in. For the design of functionalized nanomaterials with precisely tuned surface chemical properties, it is important to develop techniques and methods that can accurately and reproducibly characterize these materials. Quantification of surface functional groups is crucial, as these groups not only allow conjugation of chemical species but also influence the surface charge, and therefore the aggregation behavior, of nanomaterials. The loss of colloidal stability of functionalized nanomaterials often corresponds to a significant, if not complete, loss of functionality. Thus, we sought to develop multiple characterization approaches for the quantification of surface amine groups. Silica nanoparticles were selected as a model nanomaterial because they are widely used and commercially available, and their surface chemistry has been investigated for decades. Various commercial batches of silica nanoparticles were procured, with sizes ranging from 20 to 120 nm.
Two colorimetric assays were developed and adapted for their ease of use, sensitivity, and convenience. In addition, a fluorine labelling technique was developed, which enabled analysis by quantitative solid-state 19F NMR and X-ray photoelectron spectroscopy (XPS). XPS provided data on surface chemical composition at a depth of ≈10 nm, which allowed us to determine coupling efficiencies of the fluorine labelling technique and to evaluate the reactivity of the two assays. The ensemble of surface-specific quantification techniques was used to evaluate multiple commercial batches of aminated silica and to investigate batch-to-batch variability and the influence of particle size on the degree of functionalization. In addition, the resulting measurements of surface amine content were compared and validated against an independent method based on quantitative solution 1H NMR, developed to determine total functional group content. This allowed us to assess the accessibility and reactivity of the amine groups present in our silica particles. Overall, the objective of this study was to develop a multi-method approach for the quantification of amine functional groups on silica nanoparticles. At the same time, we hoped to set a precedent for the development and application of multiple characterization techniques, with an emphasis on comparing them on the basis of reproducibility, sensitivity, and mutual validation.
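As a hedged aside (not part of the thesis), the arithmetic linking a bulk amine content to a surface density illustrates why particle size matters in such comparisons. The sketch below assumes smooth, non-porous, monodisperse spheres and a nominal silica density of 2.2 g/cm³; all numbers are illustrative:

```python
N_A = 6.022e23          # Avogadro's number, 1/mol

def amine_surface_density(amine_umol_per_g, diameter_nm, density_g_cm3=2.2):
    """Convert a bulk amine content (umol per gram of particles) into an
    approximate surface density (groups per nm^2), assuming smooth,
    non-porous, monodisperse spheres."""
    d_m = diameter_nm * 1e-9
    rho_g_m3 = density_g_cm3 * 1e6
    # Specific surface area of uniform spheres: S = 6 / (rho * d)  [m^2/g]
    ssa_m2_per_g = 6.0 / (rho_g_m3 * d_m)
    groups_per_g = amine_umol_per_g * 1e-6 * N_A
    return groups_per_g / (ssa_m2_per_g * 1e18)   # 1 m^2 = 1e18 nm^2

# e.g. 100 umol/g on hypothetical 100 nm particles -> roughly 2 groups per nm^2
density = amine_surface_density(100.0, 100.0)
```

The same bulk loading thus corresponds to very different surface densities at 20 nm and 120 nm, which is one reason surface-specific techniques and a total-content method need to be compared.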
316

Statistical Analysis and Bayesian Methods for Fatigue Life Prediction and Inverse Problems in Linear Time Dependent PDEs with Uncertainties

Sawlan, Zaid A 10 November 2018 (has links)
This work employs statistical and Bayesian techniques to analyze mathematical forward models with several sources of uncertainty. The forward models usually arise from phenomenological and physical phenomena and are expressed through regression-based models or partial differential equations (PDEs) associated with uncertain parameters and input data. One of the critical challenges in real-world applications is to quantify the uncertainties of the unknown parameters using observations. For this purpose, methods based on the likelihood function and Bayesian techniques constitute the two main statistical inference approaches considered here. Two problems are studied in this thesis: the prediction of the fatigue life of metallic specimens, and inverse problems in linear PDEs. Both problems require the inference of unknown parameters given certain measurements. We first estimate the parameters by means of the maximum likelihood approach. Next, we seek a more comprehensive Bayesian inference using analytical asymptotic approximations or computational techniques. In fatigue life prediction, there are several plausible probabilistic stress-lifetime (S-N) models. These models are calibrated against uniaxial fatigue experiments. To generate accurate fatigue life predictions, competing S-N models are ranked according to several classical information-based measures. A different set of predictive information criteria is then used to compare the candidate Bayesian models. Moreover, we propose a spatial stochastic model to generalize S-N models to fatigue crack initiation in general geometries. The model is based on a spatial Poisson process with an intensity function that combines the S-N curves with an averaged effective stress computed from the solution of the linear elasticity equations.
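As an illustrative sketch (not the author's code or data), maximum likelihood calibration of a simple lognormal, Basquin-type S-N model reduces to least squares on log-transformed data. The synthetic dataset and parameter values below are assumptions for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic uniaxial fatigue data from an assumed Basquin-type model:
#   log N = a + b * log S + eps,  eps ~ N(0, sigma^2)
a_true, b_true, sigma_true = 20.0, -3.0, 0.3
S = rng.uniform(200.0, 600.0, size=80)      # stress amplitudes (arbitrary units)
logN = a_true + b_true * np.log(S) + rng.normal(0.0, sigma_true, size=80)

# With Gaussian noise, the MLE of (a, b) is ordinary least squares,
# and the MLE of sigma^2 is the mean squared residual.
X = np.column_stack([np.ones_like(S), np.log(S)])
beta, *_ = np.linalg.lstsq(X, logN, rcond=None)
a_hat, b_hat = beta
resid = logN - X @ beta
sigma_mle = np.sqrt(np.mean(resid**2))
```

Model ranking as described in the abstract would then compare the maximized likelihoods of several such candidate models via information criteria (e.g. AIC-type penalties) rather than picking one a priori.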
317

Improved quantification under dataset shift / Quantificação em problemas com mudança de domínio

Vaz, Afonso Fernandes 17 May 2018 (has links)
Several machine learning applications use classifiers as a way of quantifying the prevalence of positive class labels in a target dataset, a task named quantification. For instance, a naive way of determining the proportion of positive reviews about a given product on Facebook, where no labeled reviews are available, is to (i) train a classifier on Google Shopping reviews to predict whether a user likes a product given its review, and then (ii) apply this classifier to Facebook posts about that product. Unfortunately, it is well known that such a two-step approach, named Classify and Count, fails because of dataset shift, and thus several improvements have recently been proposed under an assumption named prior shift. However, these methods explore the relationship between the covariates and the response only via classifiers, and none of them take advantage of the fact that one often has access to a few labeled samples in the target set. Moreover, the literature lacks approaches that can handle a target population that varies with another covariate; for instance: how can one accurately estimate how the proportion of new posts or new webpages in favor of a political candidate varies in time? We propose novel methods that fill these important gaps and compare them using both real and artificial datasets. Finally, we provide a theoretical analysis of the methods.
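As a hedged illustration of the failure the abstract describes (not code from the thesis), the naive Classify and Count estimate can be corrected under prior shift using the classifier's true and false positive rates, a standard adjustment in the quantification literature:

```python
import numpy as np

def classify_and_count(scores, threshold=0.5):
    """Naive quantifier: fraction of target examples the classifier labels positive."""
    return float(np.mean(np.asarray(scores) >= threshold))

def adjusted_count(cc, tpr, fpr):
    """Correct Classify and Count under prior shift using the classifier's
    true/false positive rates estimated on labeled source data:
        cc = tpr * p + fpr * (1 - p)  =>  p = (cc - fpr) / (tpr - fpr)"""
    p = (cc - fpr) / (tpr - fpr)
    return float(np.clip(p, 0.0, 1.0))

# Toy check: an imperfect classifier with known tpr/fpr on a shifted target set.
tpr, fpr, true_p = 0.80, 0.10, 0.3
cc = tpr * true_p + fpr * (1 - true_p)   # expected Classify-and-Count output (0.31)
p_hat = adjusted_count(cc, tpr, fpr)     # recovers the true prevalence, 0.3
```

The naive estimate (0.31) is biased away from the true prevalence whenever tpr < 1 or fpr > 0; the adjustment inverts that bias under the prior shift assumption, but still relies entirely on the classifier, which is the limitation the thesis targets.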
318

Quantification of Methane Sources in Siberia Using Meso-Scale Atmospheric Inversions

Berchet, Antoine 19 December 2014 (has links)
Anthropogenic and natural methane emissions in Siberia contribute significantly to the global methane budget, but their magnitude is uncertain (3–11% of global emissions). To the south, anthropogenic emissions are related to big urban centres. To the north, oil and gas extraction in West Siberia is responsible for conspicuous point sources. These regions are also covered by large natural wetlands emitting methane during the snow-free season, roughly from May to September. Regional atmospheric inversions at the meso-scale provide a means of improving our knowledge of all emission processes, but inversions suffer from uncertainties in the assimilated observations, in the atmospheric transport model, and in the emission magnitude and distribution. I develop a new inversion method based on marginalization of the error statistics in order to account for these uncertainties. I test this method on a case study and document its robustness. I then apply it to Siberia. Using measurements of atmospheric methane concentrations gathered at Siberian surface observation sites, I estimate a regional methane budget in Siberia of 5–28 TgCH4 per year (1–5% of global emissions), a 50% reduction in uncertainty compared with previous studies in the region. With the new method, I can also detect emission patterns over areas of a few thousand km2 and emission variability at a resolution of 2–4 weeks.
319

Quantitative Reconstruction of SPECT: Evaluation of Corrections

Silva, Ana Maria Marques da 23 October 1998 (has links)
The goal of this work is to evaluate the influence of scatter and attenuation correction methods on quantitative SPECT reconstruction. The study was based on several Monte Carlo simulations, with special emphasis on the mathematical cardiac-torso (MCAT) phantom. Iterative ML-EM reconstruction with a projector-backprojector modified by the attenuation map was used. To evaluate scatter correction, energy spectra were simulated for SPECT imaging, including multiple orders of Compton-scattered photons. The dual energy window method proposed by Jaszczak was applied, owing to its simplicity, and scatter-corrected images were compared with primary-photon images. The choice of the scattering and photopeak windows and the dependence of the scatter factor k on the activity distribution of the object were also analysed. Two approaches were adopted for obtaining the maps for attenuation correction: estimating a uniform attenuation map directly from the emission data, without transmission imaging, and blurring non-uniform attenuation maps reconstructed from transmission data. The estimation of attenuation maps directly from the emission sinograms was based on the consistency conditions of the attenuated Radon transform. In this case, the effects of different counting rates and various initial attenuation coefficients on the corrected images were studied. The non-uniform attenuation maps were blurred with Gaussian kernels of different variances and applied in further corrections, and their effects on quantitation were examined. Analysis of the energy spectra emitted from the MCAT phantom showed that scattered photons cannot be satisfactorily excluded, even when narrow acquisition windows are placed over the photopeak. Regarding the Jaszczak correction, results showed that the choice of the photopeak and scattering windows is crucial, and confirmed that the value of k is highly dependent on the imaged object. Given an initial estimate of the attenuation map with a constant coefficient, using the consistency conditions to estimate the uniform map consistent with the emission data of the simulated MCAT phantom always resulted in the same shape, for any set of initial parameters. Although the error falls with increasing counts, even high counts could not determine the best attenuation coefficient. This is due to scattered photons, which alter the solution of the consistency conditions and reduce the size of the estimated maps. Results indicated that scatter correction is the most important factor in quantitative SPECT reconstruction. Furthermore, no significant differences in quantitation were observed when using the blurred non-uniform attenuation maps in attenuation correction, while corrections with uniform maps proved less efficient.
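As an illustrative aside (not from the thesis), the dual-energy-window correction amounts to subtracting a scaled scatter-window image from the photopeak image. The toy numbers below assume the idealized case where the lower window contains exactly 1/k times the scatter present in the photopeak:

```python
import numpy as np

def jaszczak_scatter_correction(photopeak, scatter_window, k=0.5):
    """Dual-energy-window scatter correction: subtract a scaled image
    acquired in a lower (Compton) window from the photopeak image.
    The factor k is object-dependent, a key finding of the study above."""
    corrected = photopeak - k * scatter_window
    return np.clip(corrected, 0.0, None)   # counts cannot be negative

# Toy projection bins: primary + scatter in the photopeak window,
# with a scatter estimate taken from the window just below it.
primary = np.array([100.0, 80.0, 60.0])
scatter_in_peak = np.array([20.0, 16.0, 12.0])
photopeak = primary + scatter_in_peak
scatter_window = scatter_in_peak / 0.5     # idealized: holds 2x the in-peak scatter
estimate = jaszczak_scatter_correction(photopeak, scatter_window, k=0.5)
```

In practice the correction is applied projection by projection before (or during) ML-EM reconstruction, and a mismatched k, as the study shows, leaves residual scatter or over-subtracts counts.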
320

Uncertainty Quantification and Assimilation for Efficient Coastal Ocean Forecasting

Siripatana, Adil 21 April 2019 (has links)
Bayesian inference is commonly used to quantify and reduce modeling uncertainties in coastal ocean models by computing the posterior probability distribution function (pdf) of some uncertain quantities to be estimated, conditioned on available observations. The posterior can be computed either directly, using a Markov Chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation (DA) approach. The advantage of data assimilation schemes over MCMC-type methods arises from their ability to algorithmically accommodate a large number of uncertain quantities without a significant increase in computational requirements. However, this approach generally yields only approximate estimates, often due to restrictive Gaussian prior and noise assumptions. This thesis aims to develop, implement, and test novel efficient Bayesian inference techniques to quantify and reduce modeling and parameter uncertainties of coastal ocean models. Both state and parameter estimation are addressed within the framework of a state-of-the-art coastal ocean model, the Advanced Circulation (ADCIRC) model. The first part of the thesis proposes efficient Bayesian inference techniques for uncertainty quantification (UQ) and state-parameter estimation. Based on a realistic framework of observation system simulation experiments (OSSEs), an ensemble Kalman filter (EnKF) is first evaluated against a Polynomial Chaos (PC)-surrogate MCMC method under identical scenarios. After demonstrating the relevance of the EnKF for parameter estimation, an iterative EnKF is introduced and validated for the estimation of a spatially varying field of Manning's n coefficients. A Karhunen-Loève (KL) expansion is also tested for dimensionality reduction and conditioning of the parameter search space.
To further enhance the performance of PC-MCMC for estimating spatially varying parameters, a coordinate transformation of a Gaussian process with a parameterized prior covariance function is next incorporated into the Bayesian inference framework to account for the uncertainty in the covariance model hyperparameters. The second part of the thesis focuses on the use of UQ and DA on adaptive mesh models. We develop new approaches combining the EnKF and multiresolution analysis, and demonstrate a significant reduction in the cost of data assimilation compared to the traditional EnKF implemented on a non-adaptive mesh.
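As a hedged sketch (not the ADCIRC setup itself), a single stochastic EnKF analysis step for a scalar parameter can be written in a few lines. The linear observation operator, the Manning-like parameter values, and the noise levels below are illustrative assumptions standing in for a real forward model run:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_analysis(ensemble, obs, obs_std, h):
    """One stochastic EnKF analysis step for a 1-D parameter ensemble
    with a scalar observation operator h(x) and perturbed observations."""
    hx = h(ensemble)
    P_xy = np.cov(ensemble, hx)[0, 1]          # state/obs cross-covariance
    P_yy = np.var(hx, ddof=1) + obs_std**2     # innovation covariance
    K = P_xy / P_yy                            # Kalman gain
    perturbed = obs + rng.normal(0.0, obs_std, ensemble.shape)
    return ensemble + K * (perturbed - hx)

# Toy twin experiment: recover a Manning-like parameter from noisy
# observations of a linear "model output" y = 2x.
truth = 0.03
h = lambda x: 2.0 * x
ens = rng.normal(0.05, 0.02, 200)              # prior ensemble, deliberately biased
for _ in range(5):
    y_obs = h(truth) + rng.normal(0.0, 0.002)
    ens = enkf_analysis(ens, y_obs, 0.002, h)
post_mean = float(ens.mean())
```

The iterative EnKF mentioned above repeats such analysis steps with rerun forward models, which is what makes parameter estimation tractable when each "h" is an expensive circulation simulation.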
