691

Modelo HJM multifatorial integrado com distribuições empíricas condicionais: o caso brasileiro / A multifactor HJM model integrated with conditional empirical distributions: the Brazilian case

Silva, Luiz Henrique Moraes da 31 July 2018 (has links)
This work proposes a simulation model that combines the multifactor Heath, Jarrow and Morton model with empirical conditional probability distributions to simulate interest rate curves and securities from the financial market. The proposed model is then used to simulate, in an integrated way, the evolution of the USD/BRL exchange rate, the Brazilian interest rate term structure obtained from DI Future contracts, and the Cupom Cambial de Dólar Sujo curve, with the simulation results used to price securities. In addition, the results are applied to a portfolio optimization problem that seeks to maximize the profit of a market participant subject to the regulatory constraints imposed by the Basel III resolutions, again employing the concept of empirical conditional distributions.
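As a rough illustration of the approach described in the abstract, the sketch below evolves a forward curve under the HJM no-arbitrage drift while drawing factor shocks from an empirical pool instead of a Gaussian. The three stylized volatility factors, the shock pool, and all variable names are illustrative assumptions, not the thesis's calibration (which conditions the empirical draws on market state).

```python
import numpy as np

rng = np.random.default_rng(0)
taus = np.linspace(0.25, 10.0, 40)       # forward-rate maturities (years)
dt = 1.0 / 252.0                         # daily step

# Factor loadings would normally come from a PCA of historical forward-curve
# moves; here, three stylized factors: level, slope, curvature.
sigma = np.vstack([
    0.010 * np.ones_like(taus),              # level
    0.008 * np.exp(-taus / 4.0),             # slope
    0.006 * taus * np.exp(-taus / 2.0),      # curvature
])

# Empirical shock pool: standardized heavy-tailed draws standing in for
# historical factor residuals (the thesis conditions these on state).
shock_pool = rng.standard_t(df=4, size=(5000, 3))
shock_pool /= shock_pool.std(axis=0)

def hjm_step(f, sigma, taus, dt, shocks):
    """One Euler step under the risk-neutral HJM drift condition."""
    dtau = taus[1] - taus[0]
    # drift_i(T) = sigma_i(T) * integral_0^T sigma_i(u) du, summed over factors
    integ = np.cumsum(sigma, axis=1) * dtau
    drift = np.sum(sigma * integ, axis=0)
    diff = (sigma * shocks[:, None]).sum(axis=0)
    return f + drift * dt + diff * np.sqrt(dt)

f = 0.06 + 0.01 * (1 - np.exp(-taus))    # initial forward curve
for _ in range(252):                     # one year of daily steps
    z = shock_pool[rng.integers(len(shock_pool))]  # empirical draw
    f = hjm_step(f, sigma, taus, dt, z)
```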
692

Classificação de dados cinéticos da inicialização da marcha utilizando redes neurais artificiais e máquinas de vetores de suporte / Classification of kinetic data from gait initiation using artificial neural networks and support vector machines

Takáo, Thales Baliero 01 July 2015 (has links)
The aim of this work was to assess the performance of computational methods for classifying ground reaction force (GRF) data in order to identify the surface on which gait initiation was performed. Twenty-five subjects were evaluated while performing the gait initiation task under two experimental conditions: barefoot on a hard surface and barefoot on a soft surface (foam). Center of pressure (COP) variables were calculated from the GRF, and principal component analysis (PCA) was used to retain the main features of the medial-lateral, anterior-posterior and vertical force components. The principal components representing each force component were retained using the broken-stick test. Support vector machines and multilayer neural networks were then trained with the Backpropagation and Levenberg-Marquardt algorithms to classify the GRF. Classifier models were evaluated using the area under the ROC curve (AUC) and accuracy (ACC), estimated by bootstrap cross-validation with 500 resampled datasets. The support vector machine with a linear kernel and margin parameter equal to 100 produced the best result, using the medial-lateral force as input: AUC of 0.7712 and accuracy of 0.7974. These results differed significantly from those obtained with the vertical and anterior-posterior forces, leading to the conclusion that the choice of GRF component and classifier model directly influences classification performance. / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
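A minimal sketch of the pipeline summarized above (PCA on one force component, broken-stick component selection, then a linear SVM with C=100 evaluated by bootstrap) is given below. The force curves and labels are random placeholders, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))   # placeholder medial-lateral force curves
y = rng.integers(0, 2, size=50)  # placeholder surface labels (hard/soft)

pca = PCA().fit(X)
evr = pca.explained_variance_ratio_

# Broken-stick criterion: keep component k if its explained-variance share
# exceeds the expected share of the k-th longest piece of a randomly broken
# stick: b_k = (1/p) * sum_{i=k}^{p} 1/i.
p = len(evr)
broken_stick = np.array([np.sum(1.0 / np.arange(k, p + 1)) / p
                         for k in range(1, p + 1)])
n_keep = max(1, int(np.sum(evr > broken_stick)))
Z = pca.transform(X)[:, :n_keep]

aucs, accs = [], []
for _ in range(500):                               # bootstrap evaluation
    idx = resample(np.arange(len(y)))              # in-bag indices
    oob = np.setdiff1d(np.arange(len(y)), idx)     # out-of-bag indices
    if len(oob) == 0 or len(np.unique(y[idx])) < 2 or len(np.unique(y[oob])) < 2:
        continue
    clf = SVC(kernel="linear", C=100).fit(Z[idx], y[idx])
    aucs.append(roc_auc_score(y[oob], clf.decision_function(Z[oob])))
    accs.append(accuracy_score(y[oob], clf.predict(Z[oob])))
print(f"AUC {np.mean(aucs):.4f}  ACC {np.mean(accs):.4f}")
```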
693

Mesure et Analyse Statistique Tout Temps du Spectre du Rayonnement Solaire / All Weather Solar Spectrum Measurement and Statistical Analysis

Tourasse, Guillaume 19 December 2016 (has links)
This manuscript presents the design and setup of an all-weather spectral irradiance measurement system on 4 planes. The 4 spectrometers measure a total of 900 spectra/min and produce, every minute, a mean spectral irradiance and its standard deviation. Between 2014 and 2015, this system recorded 700,000 spectra, for wavelengths ranging between 400 and 1,000 nm with a step ≤1 nm. A sample of 145,000 spectra representative of the Lyon climate was selected for statistical analysis. For this purpose, the sample was reduced by partitioning it into 1,175 spectra. Its spectral domain was extended to 280-1,500 nm by extrapolating the spectra with curve fitting using the SMARTS2 RTM. A PCA of the extrapolated sample reduced its description to only 3 components, allowing a revision of the CIE's illuminant D series. Finally, the relation between spectral power distribution and environmental or colorimetric parameters opens a way towards statistical models for generating solar spectra.
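The two reduction steps described above (partitioning a large spectrum sample into representative spectra, then a 3-component PCA) can be sketched as follows; the synthetic spectra, sample size and wavelength grid are assumptions standing in for the Lyon data set.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wavelengths = np.arange(280, 1501, 5)             # nm grid
spectra = rng.random((10_000, wavelengths.size))  # placeholder sample

# Partitioning step: 1,175 representative spectra as cluster centers.
km = KMeans(n_clusters=1175, n_init=1, random_state=0).fit(spectra)
representatives = km.cluster_centers_

# PCA step: 3 components suffice to describe the representative set.
pca = PCA(n_components=3).fit(representatives)
print("variance explained:", pca.explained_variance_ratio_.sum())

# Any representative spectrum is then approximated as
#   mean + sum_k score_k * component_k
scores = pca.transform(representatives)
recon = pca.inverse_transform(scores)
```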
694

Learning in wireless sensor networks for energy-efficient environmental monitoring / Apprentissage dans les réseaux de capteurs pour une surveillance environnementale moins coûteuse en énergie

Le Borgne, Yann-Aël 30 April 2009 (has links)
Wireless sensor networks form an emerging class of computing devices capable of observing the world with unprecedented resolution, and promise to provide a revolutionary instrument for environmental monitoring. Such a network is composed of a collection of battery-operated wireless sensors, or sensor nodes, each of which is equipped with sensing, processing and wireless communication capabilities. Thanks to advances in microelectronics and wireless technologies, wireless sensors are small and can be deployed at low cost over different kinds of environments in order to monitor, over both space and time, the variations of physical quantities such as temperature, humidity, light, or sound.

In environmental monitoring studies, many applications are expected to run unattended for months or years. Sensor nodes are however constrained by limited resources, particularly in terms of energy. Since communication is one order of magnitude more energy-consuming than processing, the design of data collection schemes that limit the amount of transmitted data is recognized as a central issue for wireless sensor networks.

An efficient way to address this challenge is to approximate, by means of mathematical models, the evolution of the measurements taken by sensors over space and/or time. Whenever a mathematical model may be used in place of the true measurements, significant communication gains may be obtained by transmitting only the parameters of the model instead of the set of real measurements. Since in most cases there is little or no a priori information about the variations of sensor measurements, the models must be identified in an automated manner. This calls for machine learning techniques, which make it possible to model the variations of future measurements on the basis of past measurements.

This thesis brings two main contributions to the use of learning techniques in sensor networks. First, we propose an approach that combines time series prediction and model selection to reduce the amount of communication. The rationale of this approach, called adaptive model selection, is to let the sensors determine in an automated manner a prediction model that not only fits their measurements, but also reduces the amount of transmitted data.

The second main contribution is a distributed approach for modeling sensed data, based on principal component analysis (PCA). The proposed method transforms the measurements along a routing tree in such a way that (i) most of the variability in the measurements is retained, and (ii) the network load sustained by sensor nodes is reduced and more evenly distributed, which in turn extends the overall network lifetime. The framework can be seen as a truly distributed approach to principal component analysis, and finds applications not only in approximate data collection tasks, but also in event detection and recognition tasks.
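The adaptive-model-selection idea can be illustrated with a small sketch: each node keeps a set of candidate predictors, transmits a reading only when the sink's prediction would err by more than a tolerance, and adopts whichever model minimizes transmissions. The candidate set, threshold and names below are illustrative, not the thesis's exact design.

```python
import numpy as np

eps = 0.5  # application-level error tolerance

def constant_model(history):
    return history[-1]                                 # "last value" predictor

def linear_model(history):
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])   # linear extrapolation

candidates = {"constant": constant_model, "linear": linear_model}

def run_node(measurements, eps):
    """Simulate one node; count transmissions needed per candidate model."""
    cost = {}
    for name, model in candidates.items():
        history, sent = [measurements[0]], 1           # first value always sent
        for x in measurements[1:]:
            if abs(model(history) - x) > eps:
                sent += 1                              # refresh the sink's copy
                history.append(x)
            else:
                history.append(model(history))         # sink and node stay in sync
        cost[name] = sent
    return cost

rng = np.random.default_rng(0)
temps = 20 + np.cumsum(rng.normal(0, 0.2, size=1000))  # placeholder readings
print(run_node(temps, eps))   # the node would adopt the cheapest model
```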
695

Assessment of rock mass quality and its effects on chargeability using drill monitoring technique

Ghosh, Rajib January 2017 (has links)
No description available.
696

Towards the identification of a neighbourhood park typology : a conceptual and methodological exploration

Bird, Madeleine 08 1900 (has links)
Few studies have characterized park features that may be appealing for youth physical activity (PA). This study assesses the reliability of a youth-oriented direct-observation park assessment tool, identifies park domains captured by the tool using an operationalized conceptual model of parks and PA, and identifies distinct park types. A total of 576 parks were audited using a park observation tool, and intra- and inter-rater reliability were estimated. Exploratory principal component analysis (PCA) with orthogonal varimax rotation was conducted, and variables were retained if they loaded at 0.3 or higher on a component. A cluster analysis (CA) using Ward's method was then conducted on the principal components together with park area. The tool was found to be generally reliable, and the PCA yielded ten principal components explaining 60% of the total variance. The CA yielded a nine-cluster outcome explaining 40% of the total variance. PCA and CA were thus found to be feasible methods for use with park data, and the operationalization of the conceptual model helped interpret the results.
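A sketch of this analysis chain (standardization, PCA, varimax rotation of the loadings, the 0.3 loading cutoff, and Ward clustering on component scores plus park area) might look as follows, with a random placeholder matrix standing in for the 576-park audit data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def varimax(loadings, n_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a loading matrix (p x k)."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(n_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L * (L**2).sum(axis=0) / p))
        R = u @ vt
        if s.sum() < var * (1 + tol):
            break
        var = s.sum()
    return loadings @ R

rng = np.random.default_rng(0)
X = rng.random((576, 40))                     # placeholder audit items
Xs = StandardScaler().fit_transform(X)

pca = PCA(n_components=10).fit(Xs)
loadings = varimax(pca.components_.T * np.sqrt(pca.explained_variance_))
kept = np.abs(loadings).max(axis=1) >= 0.3    # retain variables loading >= 0.3

scores = Xs[:, kept] @ loadings[kept]         # approximate component scores
area = rng.random((576, 1))                   # placeholder park area
features = np.hstack([scores, area])

Z = linkage(features, method="ward")          # Ward hierarchical clustering
clusters = fcluster(Z, t=9, criterion="maxclust")
```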
697

Tracing and apportioning sources of dioxins using multivariate pattern recognition techniques

Assefa, Anteneh January 2015 (has links)
High levels of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) in edible fish in the Baltic Sea have raised health concerns in the Baltic region and the rest of Europe. There is thus an urgent need to characterize sources in order to formulate effective mitigation strategies. The aim of this thesis is to contribute to a better understanding of past and present sources of PCDD/Fs in the Baltic Sea environment by exploring chemical fingerprints in sediments, air, and biota. The spatial and temporal patterns of PCDD/F distributions in the Baltic Sea during the 20th century were studied in Swedish coastal and offshore sediment cores. The results showed that PCDD/F levels peaked in 1975 (± 7 years) in coastal and 1991 (± 5 years) in offshore areas. The time trends of PCDD/Fs in the sediment cores also showed that environmental half-lives of these pollutants have been shorter in coastal than in offshore areas (15 ± 5 and 29 ± 14 years, respectively). Consequently, there have been remarkable recoveries in coastal areas, but slower recovery in offshore areas, with 81 ± 12% and 38 ± 11% reductions from peak levels, respectively. Source-to-receptor multivariate modeling by Positive Matrix Factorization (PMF) showed that six types of PCDD/F sources are and have been important for the Baltic Sea environment: PCDD/Fs related to i) atmospheric background, ii) thermal processes, iii) manufacture and use of tetra-chlorophenol (TCP) and iv) penta-chlorophenol (PCP), v) industrial use of elementary chlorine and the chloralkali process (Chl), and vi) hexa-CDD sources. The results showed that diffuse sources (i and ii) have consistently contributed >80% of the total amounts in the Southern Baltic Sea. In the Northern Baltic Sea, where the biota is most heavily contaminated, impacts of local sources (TCP, PCP and Chl) have been higher, contributing ca. 50% of total amounts. Among the six sources, only the thermal and chlorophenol sources (ii-iv) have had major impacts on biota. The impact of thermal sources has, however, been declining, as shown by source-apportioned time-trend data of PCDD/Fs in Baltic herring. In contrast, impacts of chlorophenol-associated sources generally increased, remained at steady state or slowly decreased during 1990-2010, suggesting that these sources have substantially contributed to the persistently high levels of PCDD/Fs in Baltic biota. Atmospheric sources of PCDD/Fs for the Baltic region (Northern Europe) were also investigated, specifically whether the inclusion of parallel measurements of metals in the analysis of air would help back-track sources. PCDD/Fs and metals were measured in high-volume air samples from a rural field station near the shore of the central Baltic Sea. The study focused on the winter season and air from the S and E sectors, as these samples showed elevated levels of PCDD/Fs, particularly PCDFs. Several metals were found to correlate significantly with the PCDFs. The wide range of candidate metals as source markers for PCDD/F emissions, and the lack of an up-to-date extensive compilation of source characteristics for metal emissions from various sources, limited the use of the metals as source markers. The study was not able to pinpoint primary PCDD/F sources for Baltic air, but it demonstrated a promising new approach for source tracing of air emissions. The best leads for back-tracking primary sources of atmospheric PCDD/Fs in Baltic air were seasonal trends and PCDD/F congener patterns, pointing at non-industrial thermal sources related to heating. The non-localized nature of these sources raises challenges for managing the emissions, and societal efforts are thus required to better control atmospheric emissions of PCDD/Fs. / EcoChange / BalticPOPs
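Positive Matrix Factorization belongs to the family of non-negativity-constrained receptor models; the sketch below uses plain non-negative matrix factorization (NMF) as a simplified stand-in, since true PMF additionally weights each observation by its measurement uncertainty. The congener matrix is a placeholder.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((120, 17))     # e.g. 120 sediment samples x 17 congeners

n_sources = 6                 # six source types, as identified above
model = NMF(n_components=n_sources, init="nndsvda", max_iter=1000,
            random_state=0)
G = model.fit_transform(X)    # source contributions per sample (scores)
F = model.components_         # source profiles (congener fingerprints)

# Fractional contribution of each source to each sample:
contrib = G * F.sum(axis=1)   # scale scores by total profile mass
contrib /= contrib.sum(axis=1, keepdims=True)
print(contrib.mean(axis=0))   # average apportionment across samples
```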
698

新版國際會計準則對壽險公司財務報表影響分析 / The impact of IFRS 9 / IFRS 17 on financial statement of life insurer

張蕙茹, Chang, Hui Ju Unknown Date (has links)
The financial crisis caused wide public concern because financial statements failed to reflect actual losses. As a result, the International Accounting Standards Board (IASB) issued new International Financial Reporting Standards, IFRS 9 and IFRS 17. The surplus of life insurers may fluctuate sharply if assets and liabilities are not matched appropriately under these new IFRS standards. We follow the international regulatory standard by using principal component analysis to generate extreme interest rate shock scenarios. This study examines the volatility of surplus under extreme interest rate shock scenarios for different combinations of liabilities, fair-valued assets, and amortized-cost assets. In particular, assets are measured at either amortized cost or fair value, while all liabilities are measured at fair value. The numerical analysis shows that adjusting the percentage of amortized-cost assets is one of the most effective ways to control surplus volatility. Furthermore, life insurers should carefully adjust the percentage of foreign investments and foreign-currency insurance policies in order to reduce fluctuations in shareholders' equity.
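The PCA-based scenario generation mentioned above can be sketched as follows: fit PCA to historical yield-curve changes, then shock the current curve by a multiple of the standard deviation along each leading component (level, slope, curvature). The synthetic history, tenor grid and 2.33 severity multiplier (the 99th percentile under normality) are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
tenors = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])

# Placeholder history of daily yield-curve changes (levels then differences).
levels = 0.02 + rng.normal(0, 0.0005, size=(1500, tenors.size)).cumsum(axis=0)
d_yields = np.diff(levels, axis=0)

pca = PCA(n_components=3).fit(d_yields)
sd = np.sqrt(pca.explained_variance_)   # std dev of each component score

current_curve = 0.02 + 0.01 * np.log1p(tenors)
k = 2.33                                # severity multiplier (illustrative)
scenarios = {}
for i in range(3):
    shock = k * sd[i] * pca.components_[i]
    scenarios[f"PC{i+1}_up"] = current_curve + shock
    scenarios[f"PC{i+1}_down"] = current_curve - shock
# Each scenario curve is then fed to the asset/liability valuation model.
```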
699

Imputation multiple par analyse factorielle : Une nouvelle méthodologie pour traiter les données manquantes / Multiple imputation using principal component methods : A new methodology to deal with missing values

Audigier, Vincent 25 November 2015 (has links)
This thesis proposes new multiple imputation methods based on principal component methods, which were initially used for exploratory analysis and visualisation of continuous, categorical and mixed multidimensional data. The study of principal component methods for imputation, never previously attempted, offers the possibility of dealing with many types and sizes of data, because dimensionality reduction limits the number of estimated parameters. First, we describe a single imputation method based on factor analysis of mixed data. We study its properties and focus on its ability to handle complex relationships between variables, as well as infrequent categories. Its high prediction quality is highlighted with respect to the state-of-the-art single imputation method based on random forests. Next, a multiple imputation method for continuous data using principal component analysis (PCA) is presented, based on a Bayesian treatment of the PCA model. Unlike standard methods based on Gaussian models, it can still be used when the number of variables is larger than the number of individuals and when correlations between variables are strong. Finally, a multiple imputation method for categorical data using multiple correspondence analysis (MCA) is proposed. The variability of prediction of missing values is introduced via a non-parametric bootstrap approach, which helps to tackle the combinatorial issues arising from the large number of categories and variables. We show that multiple imputation using MCA outperforms the best current methods.
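The core mechanics of PCA-based imputation can be sketched with the classic iterative (EM-style) scheme below: alternate between fitting a low-rank PCA on the completed matrix and refreshing the missing cells with its reconstruction. This is only the deterministic skeleton; the thesis's multiple-imputation variants add a Bayesian or bootstrap layer on top to reflect imputation uncertainty.

```python
import numpy as np
from sklearn.decomposition import PCA

def iterative_pca_impute(X, n_components=2, n_iter=100, tol=1e-8):
    """Fill NaNs by alternating PCA fits and low-rank reconstruction."""
    X = X.copy()
    miss = np.isnan(X)
    # Initialize missing cells with column means.
    X[miss] = np.take(np.nanmean(X, axis=0), np.where(miss)[1])
    for _ in range(n_iter):
        pca = PCA(n_components=n_components).fit(X)
        recon = pca.inverse_transform(pca.transform(X))
        change = np.mean((recon[miss] - X[miss]) ** 2)
        X[miss] = recon[miss]            # refresh the imputed cells
        if change < tol:
            break
    return X

# Usage on correlated synthetic data with 20% of cells removed:
rng = np.random.default_rng(0)
true = rng.normal(size=(100, 6)) @ rng.normal(size=(6, 6))
X = true.copy()
X[rng.random(X.shape) < 0.2] = np.nan
X_hat = iterative_pca_impute(X, n_components=2)
mask = np.isnan(X)
print("imputation RMSE:", np.sqrt(np.mean((X_hat[mask] - true[mask]) ** 2)))
```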
700

Combined Computational-Experimental Design of High-Temperature, High-Intensity Permanent Magnetic Alloys with Minimal Addition of Rare-Earth Elements

Jha, Rajesh 20 May 2016 (has links)
AlNiCo magnets are known for high-temperature stability and superior corrosion resistance and have been widely used for various applications. The reported magnetic energy density (BH)max for these magnets is around 10 MGOe. Theoretical calculations show that a (BH)max of 20 MGOe is achievable, which would help close the gap between AlNiCo and rare-earth-element (REE) based magnets. An extended family of AlNiCo alloys consisting of eight elements was studied in this dissertation, and it is therefore important to determine the composition-property relationship between each of the alloying elements and their influence on the bulk properties. In the present research, we propose a novel approach to efficiently use a set of computational tools, based on several concepts of artificial intelligence, to address the complex problem of designing and optimizing high-temperature REE-free magnetic alloys. A multi-dimensional random number generation algorithm was used to generate the initial set of chemical concentrations. These alloys were then examined for phase equilibria and associated magnetic properties, as a screening tool, to form the initial alloy set. These alloys were manufactured and tested for the desired properties. The measured properties were fitted with a set of multi-dimensional response surfaces, and the most accurate meta-models were chosen for prediction. These properties were simultaneously extremized using a multi-objective optimization algorithm, which provided a set of concentrations of each alloying element for optimized properties. A few of the best predicted Pareto-optimal alloy compositions were then manufactured and tested to evaluate the predicted properties. These alloys were added to the existing data set and used to improve the accuracy of the meta-models. The multi-objective optimizer then used the new meta-models to find a new set of improved Pareto-optimized chemical concentrations. This design cycle was repeated twelve times in this work. Several of the Pareto-optimized alloys outperformed most of the candidate alloys on most of the objectives. Unsupervised learning methods such as Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) were used to discover patterns within the dataset. This demonstrates the efficacy of the combined meta-modeling and experimental approach for the design optimization of magnetic alloys.
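One iteration of the design loop described above (fit surrogate response surfaces to tested compositions, predict objectives for new candidates, retain the Pareto-non-dominated ones for manufacture) can be sketched as follows; the quadratic surrogates, two objectives and all data are placeholder assumptions, not the dissertation's meta-models.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(8), size=40)   # 40 tested 8-element compositions
Y = rng.random((40, 2))                  # measured objectives, e.g. (BH)max, Hc

def quad_features(X):
    """Simple quadratic response-surface features: 1, x_i, x_i^2."""
    return np.hstack([np.ones((len(X), 1)), X, X**2])

# Least-squares fit of one surrogate per objective.
coefs = np.linalg.lstsq(quad_features(X), Y, rcond=None)[0]

candidates = rng.dirichlet(np.ones(8), size=5000)   # new random compositions
pred = quad_features(candidates) @ coefs            # surrogate predictions

def pareto_mask(Y):
    """True for rows not dominated by any other row (maximization)."""
    mask = np.ones(len(Y), dtype=bool)
    for i in range(len(Y)):
        dominates = np.all(Y >= Y[i], axis=1) & np.any(Y > Y[i], axis=1)
        if dominates.any():
            mask[i] = False
    return mask

front = candidates[pareto_mask(pred)]  # next alloys to manufacture and test
```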
