21

Aeroacústica de motores aeronáuticos: uma abordagem por meta-modelo / Aeroengine aeroacoustics: a meta-model approach

Cuenca, Rafael Gigena 20 June 2017 (has links)
Over the last decade, the aeronautical authorities of ICAO member countries have gradually tightened restrictions on external aircraft noise levels, especially in the vicinity of airports. New aero-engines therefore need quieter designs, making engine-noise prediction techniques increasingly important. While semi-analytical techniques have evolved considerably since the 1970s, semi-empirical techniques are still grounded in methods and data from that era, such as those developed in the ANOPP project. An aeroacoustic fan rig for studying a rotor/stator assembly was built at the Aeronautical Engineering Department of the São Carlos School of Engineering, enabling the development of a methodology for deriving a semi-empirical technique from new data and methods. The rig can vary the rotation speed and the rotor/stator spacing and can control the mass flow rate, yielding a set of 71 tested configurations. Noise was measured with a wall-mounted microphone antenna with 14 sensors. The broadband noise spectrum is modeled as pink noise and the tonal noise with an exponential decay, resulting in 5 parameters: the broadband level, linear decay, and form factor, plus the level of the first tone and the exponential decay of its harmonics. A Kriging surface regression approximates the 5 parameters as functions of the experimental variables, and the study showed that tip Mach number and RSS (rotor/stator spacing) are the main variables defining the noise, as also assumed in the ANOPP project. A prediction model is thus defined for the rotor/stator assembly studied on the rig, allowing the spectrum to be predicted at operating conditions not tested. Analysis of the model also yielded a tool for interpreting the results. Three cross-validation techniques were applied to the model: leave-one-out, Monte Carlo, and repeated k-folds. The analysis shows that the model has an average error in total spectrum level of 2.35 dB with a standard deviation of 0.91 dB.
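A compact way to reproduce this pipeline is to fit a Gaussian-process (Kriging) surrogate to one spectral parameter over the rig variables and score it by leave-one-out cross-validation. The sketch below is a minimal illustration with synthetic data standing in for the 71 measured configurations; the variable ranges and noise model are assumptions, not the thesis values.

```python
# Minimal sketch: Kriging (Gaussian-process) regression of one spectral
# parameter on the rig variables, validated by leave-one-out CV.
# X (tip Mach, RSS per configuration) and y (a noise level in dB) are
# synthetic stand-ins, not the thesis data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.uniform([0.3, 0.05], [0.8, 0.30], size=(71, 2))  # [tip Mach, RSS]
y = 90 + 40 * X[:, 0] - 25 * X[:, 1] + rng.normal(0, 1.0, 71)  # dB levels

kriging = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=[0.1, 0.05]),
    normalize_y=True,
)

# Leave-one-out: each configuration is predicted from the other 70.
y_loo = cross_val_predict(kriging, X, y, cv=LeaveOneOut())
err = y_loo - y
print(f"mean |error| = {np.abs(err).mean():.2f} dB, std = {err.std(ddof=1):.2f} dB")
```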
22

Seleção e análise de associação genômica em dados simulados e da qualidade da carne de ovinos da raça Santa Inês / Genomic selection and association analysis in simulated data and meat quality of Santa Inês sheep breed

Pértile, Simone Fernanda Nedel 19 August 2015 (has links)
Data on thousands of genetic markers have been incorporated into animal breeding programs, allowing animals to be selected using this information and genomic regions associated with traits of economic interest to be identified. Because of the high cost of this technology and of data collection, simulated data are important for studying new methodologies. The objective of this study was to evaluate the efficiency of the ssGBLUP method, using genotype and phenotype information with or without pedigree information and attributing weights to the genetic markers, for genomic selection and genome-wide association, considering different heritability coefficients, the presence of a polygenic effect, different numbers of QTL (quantitative trait loci), and different selection pressures. Additionally, meat quality data from Santa Inês sheep were compared with the standards described for the breed. The simulated population comprised 8,150 animals, of which 5,850 were genotyped. The simulated data were analysed with the ssGBLUP method using relationship matrices with or without pedigree information, with marker weights recomputed at each iteration. The meat quality traits evaluated were rib eye area, subcutaneous fat thickness, color, pH at slaughter and after 24 hours of carcass cooling, cooking losses, and shear force. The higher the heritability coefficient, the better the results of genomic selection and association. The type of relationship matrix did not affect the identification of regions associated with the traits of interest. For traits with and without a polygenic effect at the same heritability coefficient, genomic selection did not differ, but QTL identification was better for traits without a polygenic effect. The greater the selection pressure, the more accurate the predictions of genomic breeding values. The meat quality data obtained from Santa Inês sheep are within the standards described for this breed, and several genomic regions associated with the studied traits were identified.
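The weighting scheme described here can be sketched in its pedigree-free genomic core: build a marker-weighted genomic relationship matrix, solve the GBLUP equations, back-solve marker effects, and refresh the weights each round. The following is a minimal illustration under assumed VanRaden scaling and Wang-style squared-effect weights, with synthetic genotypes; actual ssGBLUP additionally blends in the pedigree relationship matrix, which is omitted here.

```python
# Minimal sketch of iteratively weighted GBLUP, the genomic core of the
# weighted ssGBLUP idea (real ssGBLUP also blends a pedigree matrix;
# genotypes, heritability, and weights here are synthetic assumptions).
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 500                                      # animals, markers
M = rng.binomial(2, 0.5, size=(n, m)).astype(float)  # 0/1/2 genotypes
p = M.mean(axis=0) / 2.0
Z = M - 2 * p                                        # centered genotypes
y = Z @ rng.normal(0.0, 0.05, m) + rng.normal(0.0, 1.0, n)
y -= y.mean()

k = 2.0 * np.sum(p * (1.0 - p))                      # VanRaden scaling factor
h2 = 0.5
lam = (1.0 - h2) / h2
d = np.ones(m)                                       # marker weights

for it in range(3):
    G = (Z * d) @ Z.T / k                            # weighted genomic matrix
    alpha = np.linalg.solve(G + lam * np.eye(n), y)  # mixed-model solve
    u_hat = G @ alpha                                # genomic breeding values
    a_hat = d * (Z.T @ alpha) / k                    # back-solved marker effects
    d = a_hat**2                                     # reweight: big effects count more
    d *= m / d.sum()                                 # keep weights averaging to 1
    print(f"iter {it}: top marker weight = {d.max():.1f}")
```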
23

台灣地區死亡率推估的實證方法之研究與相關年金問題之探討 / An empirical study of mortality projection methods in the Taiwan area and related annuity issues

曾奕翔 Unknown Date (has links)
In the Taiwan area, mortality rates at all ages have decreased since the end of World War II, and life expectancy has increased from 62 in the 1950s to 75 in 2000, an increase of 21%. The mortality improvement of the elderly (people aged 65 and over) is especially significant, and it drives the rapid population aging of the Taiwan area: the proportion of the elderly increased from 6.14% in 1990 to 8.52% in 2000. On one hand, a prolonged life span means a longer retirement for the individual and thus a larger retirement fund. On the other hand, for the government, longer lives demand a more comprehensive social welfare system for the elderly. A reliable mortality projection is therefore essential to both personal financial planning and social welfare planning. This study has two main objectives. First, we explore some frequently used models, such as the Lee-Carter, multivariate regression, and principal component methods. We use data from 1950 to 1995 as the pilot data and data from 1996 to 2000 as the test data to judge which method has the smallest prediction error; in addition, we use computer simulation to evaluate the estimation methods for the Lee-Carter model. Second, we explore the effect of mortality improvement on the pure premium of annuity insurance. In particular, we calculate the pure premium of the annuity under the best model from the first part and compare it with those under the 1989 TSO and other life tables. We found that the pure premiums under current life tables are underestimated, which may threaten the solvency of insurance companies.
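Of the models compared, Lee-Carter is the most compact to illustrate: log-mortality is decomposed as ln m(x,t) = a(x) + b(x)k(t), fitted by SVD, with k(t) forecast as a random walk with drift. The sketch below uses a synthetic age-by-year matrix in place of the Taiwanese data; the dimensions and noise level are assumptions.

```python
# Minimal sketch of the Lee-Carter model ln(m_xt) = a_x + b_x * k_t,
# fitted by SVD on a synthetic age-by-year log-mortality matrix.
import numpy as np

rng = np.random.default_rng(2)
ages, years = 20, 46                       # e.g. age groups x years 1950-1995
true_k = -0.5 * np.arange(years)           # declining mortality index
logm = (-6 + 0.08 * np.arange(ages))[:, None] \
       + np.linspace(0.5, 1.5, ages)[:, None] * true_k / years \
       + rng.normal(0, 0.02, (ages, years))

a = logm.mean(axis=1)                      # a_x: average log-mortality by age
U, s, Vt = np.linalg.svd(logm - a[:, None], full_matrices=False)
b = U[:, 0] / U[:, 0].sum()                # b_x normalized to sum to 1
k = s[0] * Vt[0] * U[:, 0].sum()           # k_t with matching scaling

# Forecast k_t as a random walk with drift, the standard Lee-Carter step.
drift = (k[-1] - k[0]) / (years - 1)
k_future = k[-1] + drift * np.arange(1, 6)
logm_future = a[:, None] + b[:, None] * k_future  # 5-year-ahead projection
print(logm_future.shape)
```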
24

Inégalités probabilistes pour l'estimateur de validation croisée dans le cadre de l'apprentissage statistique et Modèles statistiques appliqués à l'économie et à la finance / Probabilistic inequalities for the cross-validation estimator in statistical learning, and statistical models applied to economics and finance

Cornec, Matthieu 04 June 2009 (has links) (PDF)
The initial objective of the first part of this thesis is to give theoretical grounding to a practice widespread among practitioners for auditing (risk assessment of) predictive methods (predictors): cross-validation. The second part belongs mainly to the theory of stochastic processes, and its contribution chiefly concerns applications to economic and financial data. Chapter 1 deals with the classical case of predictors of finite Vapnik-Chervonenkis dimension (VC-dimension hereafter) obtained by empirical risk minimization. Chapter 2 turns to a broader class of predictors than that of Chapter 1: stable estimators. In this setting, we show that cross-validation methods remain consistent. In Chapter 3, we exhibit an important special case, subagging, in which cross-validation yields narrower confidence intervals than the traditional methodology based on empirical risk minimization under the finite VC-dimension assumption. Chapter 4 proposes a monthly proxy of the growth rate of the French Gross Domestic Product, which is officially available only at quarterly frequency. Chapter 5 describes the methodology for building a monthly composite indicator from the business surveys of the service sector in France; the indicator is published monthly by Insee in its Informations Rapides. Chapter 6 describes a semi-parametric model of electricity spot prices on wholesale markets, with applications to risk management in electricity generation.
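The cross-validation audit studied in the first part can be illustrated in its most basic form: a k-fold estimate of a predictor's risk together with a naive normal confidence interval. The thesis derives rigorous probabilistic inequalities instead; the interval below is only the heuristic baseline, and the data and model are synthetic stand-ins.

```python
# Minimal sketch of the cross-validation "audit" of a predictor: a k-fold
# estimate of generalization error with a naive normal confidence interval.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + rng.normal(0, 1, 300)

fold_errors = []
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=1.0).fit(X[train], y[train])
    fold_errors.append(np.mean((model.predict(X[test]) - y[test]) ** 2))

fold_errors = np.array(fold_errors)
est = fold_errors.mean()
se = fold_errors.std(ddof=1) / np.sqrt(len(fold_errors))
print(f"CV risk ~ {est:.2f} +/- {1.96 * se:.2f} (naive 95% CI)")
```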
25

Algorithms for a Partially Regularized Least Squares Problem

Skoglund, Ingegerd January 2007 (has links)
Statistical analysis of data from rivers deals with time series that depend, e.g., on climatic and seasonal factors. For example, it is well known that the load of substances in rivers can be strongly dependent on the runoff. It is of interest to find out whether observed changes in riverine loads are due only to natural variation or are caused by other factors. Semi-parametric models have been proposed for estimating time-varying linear relationships between runoff and riverine loads of substances. The aim of this work is to study some numerical methods for solving the linear least squares problem that arises. The model gives a linear system of the form A1 x1 + A2 x2 + n = b1, where the vector n consists of identically distributed random variables, all with mean zero. The unknowns x are split into two groups, x1 and x2. In this model there are usually more unknowns than observations, and the resulting linear system is most often consistent, with an infinite number of solutions; hence some constraint on the parameter vector x is needed. One possibility is to suppress rapid variation in, e.g., the parameters x2, which can be accomplished by regularizing with a matrix A3, a discretization of some norm. The problem is formulated as a partially regularized least squares problem with one or two regularization parameters. The parameter x2 has a two-dimensional structure, and using two different regularization parameters makes it possible to regularize separately in each dimension. We first study (for the case of one parameter only) the conjugate gradient method for solving the problem. To improve the rate of convergence, block preconditioners of Schur complement type are suggested, analyzed, and tested. A direct solution method based on QR decomposition is also studied; the idea is to first perform the operations that are independent of the values of the regularization parameters, exploiting the special block structure of the problem. We further discuss the choice of regularization parameters and, in particular, generalize Reinsch's method to the case of two parameters. Finally, the cross-validation technique is treated; here a Monte Carlo method is also used, by which an approximation to the generalized cross-validation function can be computed efficiently.
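Concretely, the partially regularized problem can be assembled as one stacked least squares system in which only the x2 block is penalized. The sketch below is a minimal illustration with random matrices and an assumed second-difference regularization matrix A3; it solves the stacked system directly rather than with the CG or QR approaches studied in the thesis.

```python
# Minimal sketch of a partially regularized least squares problem:
#   min || A1 x1 + A2 x2 - b ||^2 + lam^2 * || A3 x2 ||^2
# Only the x2 block is regularized; A3 is a second-difference operator
# penalizing rapid variation in x2. All matrices are synthetic.
import numpy as np

rng = np.random.default_rng(4)
m, n1, n2 = 60, 5, 40                      # fewer rows than unknowns overall
A1 = rng.normal(size=(m, n1))
A2 = rng.normal(size=(m, n2))
b = rng.normal(size=m)

# Second-difference matrix: (A3 x2)_i = x2[i] - 2 x2[i+1] + x2[i+2].
A3 = np.zeros((n2 - 2, n2))
for i in range(n2 - 2):
    A3[i, i:i + 3] = [1.0, -2.0, 1.0]

def solve(lam):
    # Stack into an ordinary LS problem; the zero block leaves x1 free.
    top = np.hstack([A1, A2])
    bottom = np.hstack([np.zeros((n2 - 2, n1)), lam * A3])
    K = np.vstack([top, bottom])
    rhs = np.concatenate([b, np.zeros(n2 - 2)])
    x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return x[:n1], x[n1:]

for lam in (0.1, 1.0, 10.0):
    x1, x2 = solve(lam)
    print(f"lam={lam:5.1f}  ||A3 x2|| = {np.linalg.norm(A3 @ x2):.3f}")
```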
27

Choosing a Kernel for Cross-Validation

Savchuk, Olga 14 January 2010 (has links)
The statistical properties of cross-validation bandwidths can be improved by choosing an appropriate kernel, different from the kernels traditionally used for cross-validation purposes. In light of this idea, we developed two new methods of bandwidth selection, termed indirect cross-validation and robust one-sided cross-validation. The kernels used in the indirect cross-validation method yield an improvement in the relative bandwidth rate to n^(-1/4), which is substantially better than the n^(-1/10) rate of the least squares cross-validation method. The robust kernels used in the robust one-sided cross-validation method eliminate the bandwidth bias for the case of regression functions with discontinuous derivatives.
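For context, the baseline that indirect cross-validation improves on is least squares cross-validation (LSCV) for a Gaussian kernel density estimator. The sketch below is a minimal grid-search implementation on synthetic data; it illustrates plain LSCV only, not the indirect or one-sided variants introduced in the thesis.

```python
# Minimal sketch of least squares cross-validation (LSCV) for the bandwidth
# of a Gaussian kernel density estimator. Data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, 200)
n = len(x)
diff = x[:, None] - x[None, :]                    # pairwise differences

def phi(u, s):                                    # N(0, s^2) density
    return np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2 * np.pi))

def lscv(h):
    # integral of fhat^2: Gaussian kernels convolve to an N(0, 2h^2) kernel
    int_f2 = phi(diff, np.sqrt(2) * h).mean()
    # leave-one-out term: exclude the diagonal (j == i)
    K = phi(diff, h)
    loo = (K.sum() - np.trace(K)) / (n * (n - 1))
    return int_f2 - 2 * loo

grid = np.linspace(0.05, 1.0, 96)
h_lscv = grid[np.argmin([lscv(h) for h in grid])]
print(f"LSCV bandwidth: {h_lscv:.3f}")
```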
28

Applying Data Mining Techniques to the Prediction of Marine Smuggling Behaviors

Lee, Chang-mou 26 July 2008 (has links)
none
29

Applying Classification and Regression Trees to manage financial risk

Martin, Stephen Fredrick 16 August 2012 (has links)
The goal of this project is to develop a set of business rules to mitigate risk related to a specific financial decision within the prepaid debit card industry. Under certain circumstances, issuers of prepaid debit cards may need to decide whether funds on hold can be released early for use by card holders prior to the final transaction settlement. After a brief introduction to the prepaid card industry and the financial risk associated with the early release of funds on hold, the paper presents the motivation to apply the CART (Classification and Regression Trees) method. The paper provides a tutorial on the CART algorithms formally developed by Breiman, Friedman, Olshen and Stone in the monograph Classification and Regression Trees (1984), as well as a detailed explanation of the R programming code to implement the RPART function (Therneau 2010). Special attention is given to parameter selection and to the process of finding an optimal solution that balances complexity against predictive classification accuracy, measured against an independent data set through a cross-validation process. Lastly, the paper presents an analysis of the financial risk mitigation based on the resulting business rules.
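The rpart workflow described above — grow a large tree, then use a complexity parameter and cross-validation to prune it — can be approximated outside R. The sketch below is a scikit-learn analogue on entirely synthetic data, not the thesis's R code or card-holder data; ccp_alpha plays the role of rpart's cp.

```python
# Hedged sketch: cost-complexity pruning of a CART tree, with 10-fold CV
# choosing the complexity parameter that balances size against accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a "release funds early?" decision data set.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# Candidate complexity parameters, analogous to rpart's cp table.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
alphas = np.unique(path.ccp_alphas)[:-1]          # drop the root-only tree

scores = [cross_val_score(DecisionTreeClassifier(ccp_alpha=a, random_state=0),
                          X, y, cv=10).mean() for a in alphas]
best = alphas[int(np.argmax(scores))]
tree = DecisionTreeClassifier(ccp_alpha=best, random_state=0).fit(X, y)
print(f"best ccp_alpha = {best:.5f}, leaves = {tree.get_n_leaves()}")
```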
30

Factors that Influence Cross-validation of Hierarchical Linear Models

Widman, Tracy 07 May 2011 (has links)
While using hierarchical linear modeling (HLM) to predict an outcome is reasonable and desirable, employing the model for prediction without first establishing its predictive validity is ill-advised. Estimating the predictive validity of a regression model by cross-validation has been thoroughly researched, but there is a dearth of research investigating the cross-validation of hierarchical linear models. One of the major obstacles in cross-validating HLM is the lack of a measure of explained variance similar to the squared multiple correlation coefficient in regression analysis. The purpose of this Monte Carlo simulation study is to explore the impact of sample size, centering, and predictor-criterion correlation magnitudes on potential cross-validation measurements for hierarchical linear modeling. The study considered the impact of 64 simulated conditions across three explained-variance approaches: Raudenbush and Bryk's (2002) proportional reduction in error variance, Snijders and Bosker's (1994) modeled variance, and a measure of explained variance proposed by Gagné and Furlow (2009). For each approach, a cross-validation measurement, shrinkage, was obtained. The results indicate that sample size, predictor-criterion correlations, and centering all affect the cross-validation measurement, and that the degree and direction of the impact differ with the explained-variance approach employed. Under some approaches, shrinkage decreased with larger level-2 sample sizes; under others it increased. Likewise, grand-mean centering resulted in higher shrinkage estimates than group-mean centering under some approaches and smaller estimates under others. Larger total sample sizes yielded smaller shrinkage estimates, as did the predictor-criterion correlation combination in which the group-level predictor had the stronger correlation. The approaches to explained variance differed substantially in their usability for cross-validation: the Snijders and Bosker approach provided relatively large shrinkage estimates, and, depending on the predictor-criterion correlation, shrinkage under both Raudenbush and Bryk approaches could be sizable to the degree that the estimate begins to lack meaning. Researchers seeking to cross-validate HLM need to be mindful of the interplay between the explained-variance approach employed and the impact of sample size, centering, and predictor-criterion correlations on shrinkage estimates when making research design decisions.
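The shrinkage measurement at the center of this study is easiest to see in the single-level regression setting: apparent R^2 minus cross-validated R^2. The sketch below illustrates that quantity on synthetic data; it does not reproduce the HLM-specific explained-variance measures (Raudenbush-Bryk, Snijders-Bosker, Gagné-Furlow) that the study compares.

```python
# Minimal sketch of cross-validation shrinkage: fitted R^2 minus
# cross-validated R^2 for an ordinary regression on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(6)
X = rng.normal(size=(120, 6))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 2, 120)   # weak, noisy signal

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    return 1 - ss_res / np.sum((y_true - y_true.mean()) ** 2)

model = LinearRegression().fit(X, y)
r2_fit = r2(y, model.predict(X))                       # apparent fit
r2_cv = r2(y, cross_val_predict(LinearRegression(), X, y, cv=10))
print(f"R2 fit = {r2_fit:.3f}, R2 cv = {r2_cv:.3f}, "
      f"shrinkage = {r2_fit - r2_cv:.3f}")
```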
