  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
671

Study of the aquatic dissolved organic matter from the Seine River catchment (France) by optical spectroscopy combined with asymmetrical flow field-flow fractionation

Nguyen, Phuong Thanh 06 November 2014 (has links)
The main goal of this thesis was to investigate the characteristics of dissolved organic matter (DOM) within the Seine River catchment in the northern part of France. This PhD thesis was performed within the framework of the PIREN-Seine research program. The application of UV/visible absorbance and EEM fluorescence spectroscopy, combined with PARAFAC and PCA analyses, allowed us to identify different sources of DOM and highlighted spatial and temporal variations of DOM properties. The Seine River downstream of Paris was characterized by the strongest biological activity. DOM from the Oise basin seemed to have more "humic" characteristics, while the Marne basin was characterized by a third specific type of DOM. For samples collected during low-water periods, the distributions of the seven components determined by PARAFAC varied between the studied sub-basins, highlighting different organic materials in each zone, whereas a homogeneous distribution of the components was obtained for the samples collected during flood periods. Because the environmental role of natural colloids is closely tied to their size, a semi-quantitative asymmetrical flow field-flow fractionation (AF4) methodology was then developed to fractionate DOM. The following optimized parameters were determined: a cross-flow rate of 2 ml min-1 during the focus step with a focusing time of 2 min, and an exponential cross-flow gradient from 3.5 to 0.2 ml min-1 during the elution step. The optimized AF4 methodology was applied to fractionate 13 samples selected from the three sub-basins, and the fluorescence properties of the resulting size-based fractions were analysed, allowing us to discriminate between the terrestrial and autochthonous origins of DOM. AF4 fractionation confirmed the spatial and temporal variability of DOM composition and size from one sampling site to another and distinguished different sources of colloidal DOM, corroborating the results obtained from the optical properties.
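The PARAFAC treatment mentioned above decomposes a stack of excitation-emission fluorescence matrices into trilinear components (sample loading x excitation profile x emission profile). A minimal sketch of the idea, fitting a single component by alternating least squares to a synthetic noise-free tensor — real DOM studies use validated toolboxes and several components, and all profiles below are invented:

```python
import numpy as np

def parafac_rank1(X, n_iter=200):
    """Fit X[i,j,k] ~ a[i]*b[j]*c[k] by alternating least squares."""
    I, J, K = X.shape
    rng = np.random.default_rng(0)
    a, b, c = rng.random(I), rng.random(J), rng.random(K)
    for _ in range(n_iter):
        # each update is the closed-form least-squares solution
        # for one factor with the other two held fixed
        a = np.einsum('ijk,j,k->i', X, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', X, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', X, a, b) / ((a @ a) * (b @ b))
    return a, b, c

# Synthetic one-component "dataset": sample loadings x excitation x emission
a0 = np.array([1.0, 2.0, 0.5])        # component amount per sample
b0 = np.array([0.2, 1.0, 0.4])        # excitation profile (invented)
c0 = np.array([0.1, 0.6, 1.0, 0.3])   # emission profile (invented)
X = np.einsum('i,j,k->ijk', a0, b0, c0)

a, b, c = parafac_rank1(X)
X_hat = np.einsum('i,j,k->ijk', a, b, c)
err = np.abs(X - X_hat).max()          # reconstruction error
```

On noise-free rank-1 data the ALS updates recover the component essentially exactly; with real EEM data, component number selection and split-half validation are the hard parts.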
672

Oxidation of terpenes in indoor environments : A study of influencing factors

Pommer, Linda January 2003 (has links)
In this thesis the oxidation of monoterpenes by O3 and NO2, and the factors that influence this oxidation, were studied. In the environment both ozone (O3) and nitrogen dioxide (NO2) are present as oxidising gases, which causes sampling artefacts when Tenax TA is used as an adsorbent to sample organic compounds in air. A scrubber was developed to remove O3 and NO2 upstream of the sampling tube, and sampling artefacts were minimised when the scrubber was used. The main organic compounds sampled in this thesis were two monoterpenes, α-pinene and Δ3-carene, owing to their presence in both indoor and outdoor air. The recovery of the monoterpenes through the scrubber varied between 75 and 97% at relative humidities of 15-75%.

The reactions of α-pinene and Δ3-carene with O3, NO2 and nitric oxide (NO) at different relative humidities (RHs) and reaction times were studied in a dark reaction chamber. The experiments were planned and performed according to an experimental design in which the factors influencing the reaction (O3, NO2, NO, RH and reaction time) were varied between high and low levels. In the experiments, up to 13% of the monoterpenes reacted when O3, NO2 and reaction time were at their high levels and NO and RH were at their low levels. In the evaluation, eight and seven factors (including both single and interaction factors) were found to influence the amounts of α-pinene and Δ3-carene reacted, respectively. The three most influential factors for both monoterpenes were the O3 level, the reaction time and the RH: increased O3 level and reaction time increased the amount of monoterpene reacted, while increased RH decreased it.

A theoretical model of the reactions occurring in the reaction chamber was created. The amounts of monoterpene reacted at different initial settings of O3, NO2 and NO were calculated, as well as the influence of different reaction pathways and the concentrations of O3, NO2 and NO at specific reaction times. The model underestimated the reactivity of the gas mixture towards α-pinene and Δ3-carene, but its calculated concentrations of O3, NO2 and NO corresponded closely to experimental results obtained under similar conditions. Finally, the possible associations between organic compounds in indoor air, building variables and the presence of sick building syndrome were studied using principal component analysis. The most complex model separated 71% of the "sick" buildings from the "healthy" buildings; the most important separating variables were a more frequent occurrence, or higher concentrations, of compounds with shorter retention times in the "sick" buildings.
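In a two-level experimental design of the kind described above, a factor's main effect is simply the mean response at its high level minus the mean at its low level. A hedged sketch with an invented 2^2 design (the factor settings and responses are placeholders, not the thesis data):

```python
# Coded design matrix: (O3, RH) at -1 (low) / +1 (high), full 2^2 design
runs = [(-1, -1), (+1, -1), (-1, +1), (+1, +1)]
# Invented responses: % of monoterpene reacted in each run
reacted = [2.0, 10.0, 1.0, 7.0]

def main_effect(col):
    """Mean response at the factor's high level minus mean at its low level."""
    hi = [y for x, y in zip(runs, reacted) if x[col] == +1]
    lo = [y for x, y in zip(runs, reacted) if x[col] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effect_o3 = main_effect(0)   # positive: raising O3 increases conversion
effect_rh = main_effect(1)   # negative: raising RH decreases conversion
```

With the invented numbers above, the O3 effect comes out positive and the RH effect negative, mirroring the direction of influence reported in the abstract; interaction effects are computed the same way from the product of the coded columns.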
673

Prediction of reservoir properties of the N-sand, vermilion block 50, Gulf of Mexico, from multivariate seismic attributes

Jaradat, Rasheed Abdelkareem 29 August 2005 (has links)
The quantitative estimation of reservoir properties directly from seismic data is a major goal of reservoir characterization. Integrated reservoir characterization makes use of varied well and seismic data to construct detailed spatial estimates of petrophysical and fluid reservoir properties. The advantage of data integration is the generation of consistent and accurate reservoir models that can be used for reservoir optimization, management and development. This is particularly valuable in mature field settings where hydrocarbons are known to exist but their exact location, pay, lateral variations and other properties are poorly defined. Recent approaches to reservoir characterization make use of individual seismic attributes to estimate inter-well reservoir properties. However, these attributes share a considerable amount of information, which can lead to spurious correlations. An alternative approach is to evaluate reservoir properties using multiple seismic attributes. This study reports the results of an investigation of the use of multivariate seismic attributes to predict the lateral reservoir properties of gross thickness, net thickness, gross effective porosity, net-to-gross ratio and net reservoir porosity-thickness product. The approach uses principal component analysis and principal factor analysis to transform eighteen relatively correlated original seismic attributes into a set of mutually orthogonal or independent PCs and PFs, designated as multivariate seismic attributes. Data from the N-sand interval of the Vermilion Block 50 field, Gulf of Mexico, were used in this study. The multivariate analyses produced eighteen PC and three PF grid maps. A collocated cokriging geostatistical technique was used to estimate the spatial distribution of reservoir properties from eighteen wells penetrating the N-sand interval.

Reservoir property maps generated using multivariate seismic attributes yield highly accurate predictions of reservoir properties compared with predictions produced from the original individual seismic attributes. In contrast to the original-attribute results, the reservoir properties predicted from the multivariate seismic attributes honor the lateral geological heterogeneities embedded within the seismic data and strongly support the proposed geological model of the N-sand interval. The results suggest that the multivariate seismic attribute technique can be used to predict various reservoir properties and can be applied to a wide variety of geological and geophysical settings.
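The core of the transformation described above — turning correlated attribute columns into mutually orthogonal PCs — can be sketched with an eigendecomposition of the sample covariance. The toy "attributes" below are synthetic stand-ins, not the eighteen real seismic attributes of the study:

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.normal(size=(200, 1))
# three correlated pseudo-attributes derived from one latent signal
attrs = np.hstack([base + 0.1 * rng.normal(size=(200, 1)) for _ in range(3)])

X = attrs - attrs.mean(axis=0)            # center each attribute
cov = X.T @ X / (len(X) - 1)              # sample covariance
eigval, eigvec = np.linalg.eigh(cov)      # eigenpairs, ascending order
pcs = X @ eigvec[:, ::-1]                 # PC scores, largest variance first

# the PC scores are uncorrelated: their covariance is diagonal
pc_cov = np.cov(pcs.T)
off_diag = pc_cov - np.diag(np.diag(pc_cov))
```

Because the eigenvectors diagonalize the covariance matrix, the off-diagonal covariances of the scores vanish to machine precision, which is exactly the property that makes PCs safer inputs than raw, mutually redundant attributes.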
674

Sources of dioxins and other POPs to the marine environment : Identification and apportionment using pattern analysis and receptor modeling

Sundqvist, Kristina January 2009 (has links)
In the studies underlying this thesis, various source tracing techniques were applied to environmental samples from the Baltic region. Comprehensive sampling and analysis of polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs) in surface sediments in Swedish coastal and offshore areas resulted in a unique data set for this region. Nearly 150 samples of surface sediments were analyzed for all tetra- to octa-chlorinated PCDD/Fs. The levels showed large spatial variability with hotspots in several coastal regions. Neither Sweden nor the EU has introduced guideline values for PCDD/Fs in sediment, but comparisons to available guidelines and quality standards from other countries indicate that large areas of primarily coastal sediments may constitute a risk to marine organisms. Multivariate pattern analysis techniques and receptor models, such as Principal Component Analysis (PCA) and Positive Matrix Factorization (PMF), were used to trace sources. These analyses suggested that three to six source types can explain most of the observed pattern variations found in the sediment samples. Atmospheric deposition was suggested as the most important source to offshore areas, thus confirming earlier estimates. However, spatial differences indicated a larger fraction of local/regional atmospheric sources, characterized by PCDFs, in the south. This was indicated by the identification of several patterns of atmospheric origin. In coastal areas, the influence of direct emission sources was larger, and among these, chlorophenol used for wood preservation and emissions from pulp/paper production and other wood related industry appeared to be most important. The historic emissions connected to processes involving chemical reactions with chlorine (e.g. pulp bleaching) were found to be of less importance except at some coastal sites. 
The analysis of PCDD/Fs in Baltic herring also revealed spatial variations in the levels and pollution patterns along the coast. The geographical match against areas with elevated sediment levels indicated that transfer from sediments via water to organisms was one possible explanation. Fugacity, a concept used to predict the net transport direction between environmental matrices, was used to explore the gas exchange of hexachlorocyclohexanes (HCHs) and polychlorinated biphenyls (PCBs) between air and water. These estimates suggested that, in the Kattegat Sea, the gaseous exchange of HCHs primarily resulted in net deposition while PCBs were net volatilized under certain environmental conditions. The study also indicated that, while the air concentrations of both PCBs and γ-HCH are mostly dependent upon the origin of the air mass, the fluctuations in α-HCH were primarily influenced by seasonal changes.
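The fugacity comparison used above to infer net gas-exchange direction can be sketched in a few lines: the dissolved-phase and gas-phase fugacities are compared, and their ratio decides between net deposition and net volatilization. Every number below (concentrations, temperature, Henry's law constant) is an invented placeholder, not a value from the study:

```python
R = 8.314          # gas constant, J mol-1 K-1
T = 283.0          # water/air temperature, K (assumed)
H = 30.0           # Henry's law constant, Pa m3 mol-1 (assumed)

def fugacity_ratio(c_water, c_air):
    """f_water / f_air; > 1 means net volatilization, < 1 net deposition."""
    f_water = c_water * H        # Pa, dissolved phase (c in mol m-3)
    f_air = c_air * R * T        # Pa, gas phase (ideal-gas assumption)
    return f_water / f_air

# HCH-like case: relatively high air concentration -> net deposition
r_hch = fugacity_ratio(c_water=1e-6, c_air=5e-8)
# PCB-like case: very low air concentration -> net volatilization
r_pcb = fugacity_ratio(c_water=1e-6, c_air=1e-11)
```

With these assumed inputs the HCH-like ratio falls below 1 and the PCB-like ratio above 1, matching the qualitative pattern reported in the abstract; real assessments must also propagate the substantial uncertainty in H and in the measured concentrations.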
675

Two- and Three-dimensional Face Recognition under Expression Variation

Mohammadzade, Narges Hoda 30 August 2012 (has links)
In this thesis, the expression variation problem in two-dimensional (2D) and three-dimensional (3D) face recognition is tackled. While discriminant analysis (DA) methods are effective solutions for recognizing expression-variant 2D face images, they are not directly applicable when only a single sample image per subject is available. This problem is addressed by introducing expression subspaces, which can be used to synthesize new expression images for subjects with only one sample image. It is proposed that by augmenting a generic training set with the gallery and their synthesized new-expression images, and then training DA methods on this new set, face recognition performance can be significantly improved. An important advantage of the proposed method is its simplicity: the expression of an image is transformed simply by projecting it into another subspace. The proposed solution can also be used in general pattern recognition applications. The method can likewise be used in 3D face recognition, where expression variation is a more serious issue. However, DA methods cannot be readily applied to 3D faces because of the lack of a proper alignment method for 3D faces. To solve this issue, a method is proposed for sampling the points of the face that correspond to the same facial features across all faces, denoted the closest-normal points (CNPs). It is shown that the performance of the linear discriminant analysis (LDA) method, applied to such an aligned representation of 3D faces, is significantly better than that of the state-of-the-art methods, which rely on one-by-one registration of the probe faces to every gallery face. Furthermore, as an important finding, it is shown that the surface normal vectors of the face provide a higher level of discriminatory information than the coordinates of the points. In addition, the expression subspace approach is used for the recognition of 3D faces from a single sample. By constructing expression subspaces from the surface normal vectors at the CNPs, the surface normal vectors of a 3D face with a single sample can be synthesized under other expressions. As a result, by improving the estimation of the within-class scatter matrix using the synthesized samples, a significant improvement in recognition performance is achieved.
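The "projecting it into another subspace" step above amounts to a least-squares projection of a vectorized face onto a basis spanned by training faces of the target expression. A minimal sketch in which the basis is a random orthonormal stand-in for a real, learned expression subspace:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 50, 5
# orthonormal basis of a hypothetical "smile" subspace (random stand-in
# for a basis learned from real training faces)
B = np.linalg.qr(rng.normal(size=(d, k)))[0]

def project(face, basis):
    """Least-squares projection of a face vector onto the subspace."""
    return basis @ (basis.T @ face)

face = rng.normal(size=d)        # vectorized input face (placeholder data)
synth = project(face, B)         # its synthesized target-expression version

# sanity check: the residual is orthogonal to the subspace by construction
residual = face - synth
ortho = np.abs(B.T @ residual).max()
```

With an orthonormal basis the projection reduces to `B @ (B.T @ face)`; the same operation applies whether the vectors hold 2D pixel intensities or, as in the 3D case above, surface normal components at the CNPs.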
677

Mutational Analysis and Redesign of Alpha-class Glutathione Transferases for Enhanced Azathioprine Activity

Modén, Olof January 2013 (has links)
Glutathione transferase (GST) A2-2 is the human enzyme most efficient in catalyzing azathioprine activation. Structure-function relationships were sought to explain its higher catalytic efficiency compared with other alpha-class GSTs. By screening a DNA-shuffling library, five recombined segments were identified that were conserved among the most active mutants. Mutational analysis confirmed the importance of these short segments, as their insertion into low-activity GSTs conferred higher azathioprine activity. Moreover, H-site mutagenesis decreased azathioprine activity when the targeted positions belonged to these conserved segments and mainly enhanced activity when other positions were targeted. Hydrophobic residues were preferred in positions 208 and 213. The prodrug azathioprine is today primarily used for maintaining remission in inflammatory bowel disease. Therapy leads to adverse effects in 30% of patients, and genotyping of the metabolic genes involved can explain some of these incidences. Five genotypes of human A2-2 were characterized; variant A2*E had 3-4-fold higher catalytic efficiency with azathioprine, due to a proline mutation close to the H-site. Faster activation might lead to different metabolite distributions and possibly more adverse effects, so genotyping of GSTs is recommended for further studies. Molecular docking of azathioprine into a modeled structure of A2*E suggested three positions for mutagenesis. The most active mutants had small or polar residues in the mutated positions; mutant L107G/L108D/F222H displayed a 70-fold improvement in catalytic efficiency with azathioprine. Determination of its structure by X-ray crystallography showed a widened H-site, suggesting that the transition state could be accommodated in a mode better suited for catalysis. The mutational analysis increased our understanding of azathioprine activation in alpha-class GSTs and highlighted A2*E as one factor possibly behind the adverse drug effects. A successfully redesigned GST, with 200-fold enhanced catalytic efficiency towards azathioprine compared with the starting point A2*C, might find use in targeted enzyme-prodrug therapies.
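The fold-improvements quoted above are ratios of catalytic efficiency, conventionally kcat/Km. A back-of-envelope sketch with invented placeholder kinetics (not measured values from the thesis) showing how a 70-fold gain is computed:

```python
def efficiency(kcat, km):
    """Catalytic efficiency kcat/Km (e.g. s-1 / mM -> mM-1 s-1)."""
    return kcat / km

# Invented placeholder kinetics for illustration only
wild_type = efficiency(kcat=1.0, km=2.0)      # assumed baseline enzyme
mutant = efficiency(kcat=8.75, km=0.25)       # assumed redesigned variant

fold = mutant / wild_type                     # fold-improvement in kcat/Km
```

A gain in kcat/Km can come from a faster turnover (higher kcat), tighter substrate binding (lower Km), or both, which is why structural changes such as a widened H-site can translate into large fold-improvements.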
678

A Contribution to Multivariate Volatility Modeling with High Frequency Data

Marius, Matei 09 March 2012 (has links)
The thesis develops the topic of financial volatility forecasting in the context of high-frequency data, and follows a twofold line of research: proposing alternative models that enhance volatility forecasting, and ranking existing and newly proposed volatility models. The objectives fall into three categories. The first is the proposal of a new volatility forecasting method following a recently developed research line that uses measures of intraday volatility as well as measures of overnight volatility, the need for new models arising from the question of whether adding overnight volatility measures improves day-volatility estimates. As a result, a class of bivariate realized GARCH models is proposed. The second is a methodology to forecast multivariate day volatility with autoregressive models that use day (and, for the bivariate models, overnight) volatility estimates, as well as high-frequency information where available. For this, principal component analysis (PCA) is applied to a class of univariate and bivariate realized GARCH-type models. The method extends an existing model (PC-GARCH) that estimated a multivariate GARCH model by fitting univariate GARCH models to the principal components of the initial variables. The third goal is to rank the performance of existing and newly proposed volatility forecasting models, as well as the accuracy of the intraday measures used in the realized-model estimations. Regarding the univariate realized models, the EGARCHX, realized EGARCH and realized GARCH(2,2) models consistently ranked best, while the non-realized GARCH and EGARCH models performed poorly in almost every test, allowing us to conclude that incorporating measures of intraday volatility improves the modeling problem. Among the bivariate realized models, the bivariate realized GARCH (partial and complete versions) and bivariate realized EGARCH models performed best, followed by the bivariate realized GARCH(2,2), bivariate EGARCH and bivariate EGARCHX models. When the bivariate versions were compared with the univariate ones, to investigate whether using overnight volatility measurements in the model equations improves volatility estimation, the bivariate models surpassed the univariate ones for specific methodologies, ranking criteria and stocks. The results were mixed, but they show that the bivariate models are not inferior to their univariate counterparts and are good alternatives to use in the forecasting exercise, together with the univariate models, for more reliable estimates. Finally, the PC realized models and PC bivariate realized models were estimated and their performances ranked; the improvements the PC methodology brings to high-frequency multivariate modeling of stock returns were also discussed. The PC models were found to be highly effective in estimating the multivariate volatility of highly correlated stock assets, and suggestions were made on how investors could use them for portfolio selection.
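The PC-GARCH idea described above can be sketched in a few steps: decorrelate returns with PCA, run a univariate variance recursion on each component, then map the component variances back to a full conditional covariance. The returns are simulated and the GARCH(1,1) parameters are fixed by assumption rather than estimated, to keep the illustration short:

```python
import numpy as np

rng = np.random.default_rng(3)
# simulated correlated returns for three assets (placeholder data)
returns = rng.normal(size=(500, 3)) @ np.array([[1.0, 0.5, 0.2],
                                                [0.0, 1.0, 0.4],
                                                [0.0, 0.0, 1.0]])

X = returns - returns.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(X.T))
pcs = X @ eigvec                               # uncorrelated components

def garch11_variance(x, omega=0.05, alpha=0.1, beta=0.85):
    """GARCH(1,1) filtered variance with fixed, assumed parameters."""
    h = np.empty_like(x)
    h[0] = x.var()
    for t in range(1, len(x)):
        h[t] = omega + alpha * x[t - 1] ** 2 + beta * h[t - 1]
    return h

# one univariate recursion per principal component
H_t = np.column_stack([garch11_variance(pcs[:, j]) for j in range(3)])
# conditional covariance of the returns at the last time step
cov_T = eigvec @ np.diag(H_t[-1]) @ eigvec.T
```

Because the component variances are strictly positive, the reconstructed covariance is symmetric positive definite by construction, which is the practical appeal of the PC route over estimating a full multivariate GARCH directly.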
680

New trends in dairy cattle genetic evaluation

NICOLAZZI, EZEQUIEL LUIS 24 February 2011 (has links)
Genetic evaluation systems are developing rapidly worldwide. In most countries, "traditional" breeding programs based on phenotypes and relationships between animals are currently being integrated with, and in the future might be replaced by, molecular information. This thesis stands in this transition period and therefore covers research on both types of genetic evaluation: from assessing the accuracy of (traditional) international genetic evaluations to studying the statistical methods used to integrate genomic information into breeding (genomic selection). Three chapters investigate and evaluate approaches for estimating genetic values from genomic data while reducing the number of independent variables: Bonferroni correction and a permutation test combined with single-marker regression (Chapter III), principal component analysis combined with BLUP (Chapter IV), and Fst across breeds combined with BayesA (Chapter VI). In addition, Chapter V analyzes the accuracy of direct genomic values obtained with BLUP, BayesA and Bayesian LASSO including all available variables. The results of this thesis indicate that the genetic gains expected from the analysis of simulated data can be obtained on real data. Still, further research is needed to optimize the use of genome-wide information and obtain the best possible estimates for all traits under selection.