851

Geophysical constraints on mantle viscosity and its influence on Antarctic glacial isostatic adjustment

Darlington, Andrea 29 May 2012 (has links)
Glacial isostatic adjustment (GIA) is the process by which the solid Earth responds to past and present-day changes in glaciers, ice caps, and ice sheets. This thesis focuses on vertical crustal motion of the Earth caused by GIA, which is influenced by several factors including lithosphere thickness, the mantle viscosity profile, and changes to the thickness and extent of surface ice. The viscosity of the mantle beneath Antarctica is a poorly constrained quantity because of the scarcity of relative sea-level and heat-flow observations, so other methods for obtaining a better-constrained mantle viscosity model must be investigated to obtain more accurate GIA model predictions. The first section of this study uses seismic tomography to estimate mantle viscosity: by calculating the deviation of the P- and S-wave velocities relative to a reference Earth model (PREM), the viscosity can be inferred. For Antarctica, asthenospheric mantle viscosities obtained from the S20A seismic tomography model (Ekström and Dziewonski, 1998) range from 10^16 Pa·s to 10^23 Pa·s, with smaller viscosities beneath West Antarctica and higher viscosities beneath East Antarctica. This agrees with viscosity expectations based on findings from the Basin and Range area of North America, which is an analogue to the West Antarctic Rift System. The second section compares bedrock elevations in Antarctica to crustal thicknesses to infer mantle temperatures and draw conclusions about mantle viscosity. Data from CRUST 2.0 (Bassin et al., 2000), BEDMAP (Lythe and Vaughan, 2001) and specific studies of crustal thickness in Antarctica were examined. The regions of Antarctica expected to have low viscosities agree with the hot-mantle trend found by Hyndman (2010), while the regions expected to have high viscosity agree better with the cold-mantle trend. Bevis et al. (2009) described new GPS observations of crustal uplift in Antarctica and compared the results to GIA model predictions, including IJ05 (Ivins and James, 2005). Here, we have generated IJ05 predictions for a three-layer mantle (viscosities ranging over more than four orders of magnitude) and compared them to the GPS observations using a χ² measure of goodness-of-fit. The IJ05 predictions that agree best with the Bevis et al. observations have a χ² of 16, less than the null-hypothesis value of 42. These large values for the best-fit model indicate the need for model revisions and/or that the observational uncertainties are too optimistic. Equally important, the mantle viscosities of the best-fit models are much higher than expected for West Antarctica. The smallest χ² values are found for an asthenosphere viscosity of 10^21 Pa·s, a transition-zone viscosity of 10^23 Pa·s and a lower-mantle viscosity of 2 × 10^23 Pa·s, whereas the expected viscosity of the asthenosphere beneath West Antarctica is probably less than 10^20 Pa·s. This suggests that revisions to the IJ05 ice-sheet history are required. Simulated annealing was applied to the ice-sheet history, and it was found that changes to the recent ice-load history have the strongest effect on GIA predictions.
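The model–observation comparison described above reduces to a weighted χ² misfit between predicted and observed uplift rates. The sketch below shows that calculation in minimal form; the uplift rates and uncertainties are hypothetical placeholders, not values from the thesis.

```python
import numpy as np

def chi_squared(predicted, observed, sigma):
    """Weighted chi-squared misfit between model predictions and GPS uplift rates."""
    residual = (observed - predicted) / sigma
    return float(np.sum(residual ** 2))

# Hypothetical vertical rates (mm/yr) at a few GPS stations -- illustrative only.
observed  = np.array([4.1, -1.2, 2.5, 0.8])   # GPS-observed uplift rates
sigma     = np.array([0.9,  1.1, 0.7, 1.0])   # 1-sigma observational uncertainties
predicted = np.array([3.0,  0.4, 1.9, 1.5])   # GIA model predictions (IJ05-style)
null_pred = np.zeros_like(observed)           # null hypothesis: no GIA uplift

print("model chi^2:", chi_squared(predicted, observed, sigma))
print("null  chi^2:", chi_squared(null_pred, observed, sigma))
```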
852

Process capability assessment for univariate and multivariate non-normal correlated quality characteristics

Ahmad, Shafiq, Shafiq.ahmad@rmit.edu.au January 2009 (has links)
In today's competitive business and industrial environment, it is becoming more crucial than ever to assess precisely the process losses due to non-compliance with customer specifications. To assess these losses, industry makes extensive use of Process Capability Indices (PCIs) for performance evaluation of its processes. Determining the performance capability of a stable process using standard process capability indices such as Cp and Cpk requires that the underlying quality characteristics data follow a normal distribution. However, it is an undisputed fact that real processes very often produce non-normal quality characteristics data, and these quality characteristics are very often correlated with each other. For such non-normal and correlated multivariate quality characteristics, applying standard capability measures with conventional methods can lead to erroneous results. The research undertaken in this PhD thesis presents several capability assessment methods to estimate process performance more precisely and accurately, based on univariate as well as multivariate quality characteristics. The proposed capability assessment methods also take into account the correlation, variance and covariance as well as the non-normality of the quality characteristics data. A comprehensive review of existing univariate and multivariate PCI estimation methods is provided. We propose fitting Burr XII distributions to continuous positively skewed data. The proportion of nonconformance (PNC) for process measurements is then obtained from the fitted Burr XII distribution, rather than through the traditional practice of fitting different distributions to the real data. The maximum likelihood method is deployed to improve the accuracy of the PCI based on the Burr XII distribution, and numerical methods such as Evolutionary and Simulated Annealing algorithms are used to estimate the parameters of the fitted Burr XII distribution. We also introduce a new transformation method, the Best Root Transformation approach, to transform non-normal data to normal data and then apply the traditional PCI method to estimate the proportion of non-conforming data. Another approach introduced in this thesis is to use the Burr XII cumulative density function for PCI estimation via the Cumulative Density Function technique. This is in contrast to the approach adopted in the research literature, i.e., the use of a best-fitting density function from known distributions for non-normal data in PCI estimation. The proposed CDF technique has also been extended to estimate process capability for bivariate non-normal quality characteristics data. A new multivariate capability index based on the Generalized Covariance Distance (GCD) is proposed. This approach reduces the dimension of multivariate data by transforming correlated variables into univariate ones through a metric function, and evaluates process capability for correlated non-normal multivariate quality characteristics. Unlike the Geometric Distance approach, the GCD approach takes into account the scaling effect of the variance-covariance matrix and produces a Covariance Distance variable based on the Mahalanobis distance. Another novelty introduced in this research is to approximate the distribution of these distances by a Burr XII distribution and then estimate its parameters using a numerical search algorithm. It is demonstrated that the proportion of nonconformance (PNC) obtained using the proposed method is very close to the actual PNC value.
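As a rough illustration of the Burr XII approach, the sketch below fits a Burr XII distribution to positively skewed measurements by maximum likelihood and estimates the proportion of nonconformance (PNC) beyond an upper specification limit. The data, the specification limit and the use of SciPy's generic fitter are assumptions for illustration, not the thesis's own implementation (which also explores evolutionary and simulated-annealing parameter searches).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=1.5, size=500)   # hypothetical positively skewed quality data
usl = 9.0                                          # hypothetical upper specification limit

# Maximum-likelihood fit of a Burr XII distribution (shape parameters c, d, plus loc/scale).
c, d, loc, scale = stats.burr12.fit(data)

# Proportion of nonconformance: probability of exceeding the specification limit.
pnc_fitted = stats.burr12.sf(usl, c, d, loc=loc, scale=scale)
pnc_empirical = np.mean(data > usl)

print(f"fitted Burr XII PNC : {pnc_fitted:.4f}")
print(f"empirical PNC       : {pnc_empirical:.4f}")
```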
853

Damage identification and condition assessment of civil engineering structures through response measurement

Bayissa, Wirtu Unknown Date (has links) (PDF)
This research study presents a new vibration-based, non-destructive, global structural damage identification and condition monitoring technique that can be used for the detection, localization and quantification of damage. A two-stage damage identification process that combines non-model-based and model-based approaches is proposed to overcome the main difficulties associated with solving structural damage identification problems. In the first stage, the performance of various response parameters obtained from time-domain, frequency-domain and spectral-domain analyses is assessed using a non-model-based damage detection and localization approach, and vibration response parameters that are sensitive to local and global damage and that possess strong physical relationships with key structural dynamic properties are identified. Moreover, to overcome the difficulties associated with damage identification in the presence of structural nonlinearity and response nonstationarity, a wavelet-transform-based damage-sensitive parameter is presented for the detection and localization of damage in the space domain. The sensitivity and effectiveness of these parameters for the detection and localization of damage are demonstrated using various numerical and experimental data obtained from one-dimensional and two-dimensional plate-like structures.
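One way to read the wavelet-based idea is as an energy index computed from the wavelet decomposition of a measured response and compared between a baseline and a possibly damaged state. The sketch below, using PyWavelets, is only illustrative: the signals, the db4 wavelet, the decomposition level and the energy-ratio index are assumptions, not the specific damage-sensitive parameter defined in the thesis.

```python
import numpy as np
import pywt

def detail_energy(signal, wavelet="db4", level=4):
    """Total energy of the detail coefficients of a discrete wavelet decomposition."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return sum(float(np.sum(c ** 2)) for c in coeffs[1:])  # skip the approximation coefficients

t = np.linspace(0.0, 1.0, 2048)
baseline = np.sin(2 * np.pi * 12 * t)                                # hypothetical healthy response
damaged = baseline + 0.3 * np.sin(2 * np.pi * 55 * t) * (t > 0.6)    # proxy for a local change

# A simple damage-sensitive parameter: relative change in detail-coefficient energy.
e0, e1 = detail_energy(baseline), detail_energy(damaged)
print("damage index:", (e1 - e0) / e0)
```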
854

De l'élaboration de nanoparticules ferromagnétiques en alliage FePt à leur organisation médiée par autoassemblage de copolymères à blocs / From elaboration of ferromagnetic nanoparticles made of FePt alloy to their organization mediated by block copolymers self-assembly

Alnasser, Thomas 21 October 2013 (has links)
Because of their particularly high magnetocrystalline anisotropy constant, FePt nanoparticles crystallized in the chemically ordered L10 (face-centred tetragonal) phase are of great interest for very-high-density (>1 Tb/in²) bit-patterned magnetic recording media, down to a limiting diameter of about 3.5 nm. This work concerns the chemical synthesis, by thermal decomposition of organometallic precursors, of γ-FePt nanoparticles with controlled size (4 ≤ Ø ≤ 8 nm) and a chemical composition close to Fe50Pt50. Their transition to the L10 phase is then carried out to give them strongly ferromagnetic behaviour at 300 K. Despite a non-homogeneous iron distribution within each nanoparticle (platinum-rich core and iron-rich surface), the L10 phase is obtained after annealing under a reducing Ar/H2 (5%) atmosphere at temperatures above 650 °C. To prevent coalescence of the nanoparticles during annealing, three distinct protection routes proved effective: an inert NaCl matrix, amorphous silica shells, and crystalline MgO shells. The MgO route made it possible, after annealing, to redisperse the L10 FePt nanoparticles by modifying their surface with poly(ethylene oxide)-thiol chains (Mn = 2000 g·mol⁻¹). A magnetic ink is then obtained by putting these nanoparticles in solution with polystyrene-b-poly(ethylene oxide) (PS-b-PEO) block copolymer macromolecules. Depositing this ink on a substrate forms, after supramolecular self-assembly of the copolymer, a hybrid film in which the ferromagnetic L10 FePt nanoparticles are selectively located in the cylindrical PEO domains.
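For orientation, the target areal density quoted above fixes the available area per bit and hence the maximum particle pitch; the short calculation below works that out for 1 Tbit/in². The one-particle-per-bit reading is my assumption, used only to put the 3.5 nm limiting diameter in context.

```python
# Area available per bit at 1 Tbit/in^2, and the corresponding square pitch.
inch_nm = 25.4e6                          # one inch in nanometres
density = 1e12                            # bits per square inch
area_per_bit = inch_nm ** 2 / density     # in nm^2
pitch = area_per_bit ** 0.5

print(f"area per bit : {area_per_bit:.0f} nm^2")   # ~645 nm^2
print(f"square pitch : {pitch:.1f} nm")            # ~25 nm, comfortably above a 3.5 nm particle
```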
855

Estabilização de filmes finos de óxido de germânio por incorporação de nitrogênio visando aplicações em nanoeletrônica / Stabilization of germanium oxide films by nitrogen incorporation aiming at applications in nanoelectronics

Kaufmann, Ivan Rodrigo January 2013 (has links)
In order to improve the performance of the metal-oxide-semiconductor field-effect transistor (MOSFET), germanium (Ge) is a strong candidate to replace silicon (Si) as the semiconductor because of its higher charge-carrier mobility. However, the germanium dioxide (GeO2) film grown on Ge is water soluble and has poor electrical properties. This Master's dissertation therefore proposes the thermal oxynitridation of GeO2 films in a nitric oxide (15NO) atmosphere in order to improve the electrical and physico-chemical properties of these structures. The samples were first chemically cleaned using hydrogen peroxide (H2O2) and a hydrochloric acid/water mixture (HCl + H2O, 4:1). GeO2 films were thermally grown on Ge in an oxygen atmosphere enriched to 97% in the mass-18 isotope (18O), with parameters that produced films about 5 nm thick. Oxynitridation was performed in a rapid thermal furnace under a 15NO atmosphere, at temperatures from 400 to 600 °C and for times of 1 to 5 minutes, with the goal of forming a germanium oxynitride (GeOxNy) film with physico-chemical properties suitable for the microelectronics industry. Thermal annealing in an inert atmosphere was also performed to test the thermal stability of the GeOxNy films. Nuclear Reaction Analysis (NRA) and Rutherford Backscattering Spectrometry in channelling geometry (RBS-c) were used to quantify the total amounts of 18O and 16O, respectively, and NRP was used to determine the depth distributions of 18O and 15N. X-ray Photoelectron Spectroscopy (XPS) was used to investigate the chemical composition of the samples. The RBS and NRA analyses show that exchange between the 18O and 16O isotopes occurs at all oxynitridation temperatures, corroborating recent studies in the literature. For the samples oxynitrided for 5 minutes at 500 °C and for all samples oxynitrided at 550 °C and 600 °C, the isotopic exchange is complete. NRP further shows that 15N is incorporated closer to the surface for oxynitridation temperatures up to 550 °C. XPS results indicate greater GeOxNy formation near the sample surface and at higher temperatures and/or longer times. Thermal-stability tests indicate that nitrogen incorporated close to the sample surface inhibits the desorption of GeO species, whereas samples that were not oxynitrided lose almost the entire GeO2 film during annealing. This effect of nitrogen incorporated near the surface has great potential for use in interfacial layers between the semiconductor and gate dielectrics.
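The NRA/RBS measurements above amount to comparing how much of the original 18O tracer remains in the film with the total oxygen content; a tiny sketch of that bookkeeping follows. The areal densities are hypothetical numbers chosen only to show the calculation, not measured values from the dissertation.

```python
# Hypothetical oxygen areal densities (10^15 atoms/cm^2) before and after oxynitridation.
o18_initial = 16.0   # 18O incorporated during growth in the enriched atmosphere
o18_after   = 1.2    # 18O remaining after oxynitridation in 15N16O
o16_after   = 14.5   # 16O taken up from the gas phase

exchanged_fraction = 1.0 - o18_after / o18_initial
print(f"fraction of the original 18O replaced by 16O: {exchanged_fraction:.0%}")
print(f"total oxygen after treatment: {o18_after + o16_after:.1f} x 10^15 at/cm^2")
```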
856

Contribution à la conception des filtres bidimensionnels non récursifs en utilisant les techniques de l’intelligence artificielle : application au traitement d’images / Contribution to the design of two-dimensional non-recursive filters using artificial intelligence techniques : application to image processing

Boudjelaba, Kamal 11 June 2014 (has links)
The design of finite impulse response (FIR) filters can be formulated as a non-linear optimization problem that is notoriously difficult for conventional approaches. In order to optimize the design of FIR filters, we explore several stochastic methods capable of handling large search spaces. We propose a new genetic algorithm in which some innovative concepts are introduced to improve convergence and make its use easier for practitioners. The key point of our approach stems from the capacity of the genetic algorithm (GA) to adapt its genetic operators during the course of evolution while remaining simple and easy to implement. Particle Swarm Optimization (PSO) is then proposed for FIR filter design. Finally, a hybrid genetic algorithm (HGA), composed of a pure genetic process and a dedicated local search, is proposed for the design of digital filters. Our contribution seeks to address the current challenge of democratizing the use of GAs for real optimization problems. Experiments performed with various types of filters highlight the recurrent contribution of hybridization in improving performance, and also reveal the advantages of our proposal compared with more conventional filter-design approaches and with reference GAs in this field of application.
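As a concrete illustration of the GA side of this work, the sketch below evolves the coefficients of a small FIR filter against a low-pass target response. Everything here (tap count, population size, operators, target response) is a generic assumption for illustration; it is not the adaptive-operator GA or the hybrid HGA developed in the thesis.

```python
import numpy as np
from scipy.signal import freqz

# Target: low-pass magnitude response with cutoff at 0.3 * Nyquist (assumed for illustration).
N_TAPS, POP, GENS = 21, 60, 200
w = np.linspace(0, np.pi, 256)
desired = (w <= 0.3 * np.pi).astype(float)

def fitness(taps):
    """Negative mean-squared error between the filter's magnitude response and the target."""
    _, h = freqz(taps, worN=w)
    return -float(np.mean((np.abs(h) - desired) ** 2))

rng = np.random.default_rng(1)
pop = rng.uniform(-0.5, 0.5, size=(POP, N_TAPS))

for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[np.argmax(scores)].copy()]                 # elitism: keep the best individual
    while len(new_pop) < POP:
        i, j = rng.integers(POP, size=2), rng.integers(POP, size=2)
        p1 = pop[i[np.argmax(scores[i])]]                     # tournament selection
        p2 = pop[j[np.argmax(scores[j])]]
        alpha = rng.random(N_TAPS)                            # blend crossover
        child = alpha * p1 + (1 - alpha) * p2
        child += rng.normal(0, 0.02, N_TAPS) * (rng.random(N_TAPS) < 0.1)  # sparse Gaussian mutation
        new_pop.append(child)
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best response MSE:", -fitness(best))
```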
858

Diferentes métodos de aglutinação para melhoria de processos com múltiplas respostas / Different agglutination methods for the improvement of processes with multiple responses

Gomes, Fabrício Maciel [UNESP] 15 December 2015 (has links)
Companies spare no effort to improve their processes and products according to different criteria, in order to satisfy customer demands and needs and to reach a standard of competitiveness above that of their competitors. In this scenario, it is very common to need to establish conditions that improve more than one criterion simultaneously. This work evaluates four methods that use the metaheuristics Simulated Annealing, Genetic Algorithm, Simulated Annealing combined with the Nelder-Mead simplex method, and Genetic Algorithm combined with the Nelder-Mead simplex method to establish improved operating conditions for processes with multiple responses. For this evaluation, test problems were carefully selected from the literature so as to cover cases with different numbers of variables, numbers of responses and types of response. The responses were aggregated by four different methods: Desirability, Average Percentage Deviation, Compromise Programming, and Compromise Programming normalized by the Euclidean distance. The methods were evaluated by comparing the results obtained with the same aggregation method, thereby determining the efficiency of each search method. The results suggest the use of the genetic algorithm when the aim is to set parameters that improve processes with multiple responses, in particular when the responses are modelled by equations containing cubic terms, regardless of the number of terms, the type of responses and the number of variables.
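To make concrete the idea of aggregating multiple responses into a single objective and then searching it with a metaheuristic, here is a minimal sketch using a geometric-mean (desirability-style) aggregation and SciPy's dual_annealing. The two response models, targets and bounds are invented for illustration; the thesis compares several aggregation functions and several search algorithms rather than this single combination.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Two hypothetical response surfaces of two process variables x = (x1, x2).
def y1(x): return 10 - (x[0] - 1.0) ** 2 - 0.5 * (x[1] + 0.5) ** 2   # larger-is-better
def y2(x): return 2.0 + 0.8 * x[0] ** 2 + 0.3 * x[0] * x[1]          # smaller-is-better

def desirability(value, low, high, maximize):
    """Linear individual desirability in [0, 1]."""
    d = (value - low) / (high - low) if maximize else (high - value) / (high - low)
    return float(np.clip(d, 0.0, 1.0))

def objective(x):
    d1 = desirability(y1(x), low=5.0, high=10.0, maximize=True)
    d2 = desirability(y2(x), low=2.0, high=6.0, maximize=False)
    overall = np.sqrt(d1 * d2)          # geometric mean of the individual desirabilities
    return -overall                     # dual_annealing minimizes

result = dual_annealing(objective, bounds=[(-2.0, 2.0), (-2.0, 2.0)], seed=42)
print("best settings:", result.x, "overall desirability:", -result.fun)
```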
859

Optimisation de stratégies de fusion pour la reconnaissance de visages 3D / Optimization of fusion strategies for 3D face recognition

Ben Soltana, Wael 11 December 2012 (has links)
Face recognition (FR) has long been one of the motivations of computer vision, but only in recent years has reliable automatic face recognition become a realistic target of biometrics research. This interest is motivated by several reasons. First, the face is one of the most preferable biometrics for person identification and verification applications, because it is natural, non-intrusive and socially well accepted. The second reason relates to the challenges of the FR domain: all human faces are similar to each other and hence offer low distinctiveness compared with other biometrics such as fingerprints and irises; furthermore, when facial texture images are employed, intra-class variations due to factors such as illumination and pose changes are usually greater than inter-class ones, preventing 2D face recognition systems from being completely reliable in real conditions. Recently, 3D acquisition systems capable of capturing the shape information of objects have become available, and 3D face recognition (3D FR) has been extensively investigated by the research community to deal with the unsolved issues of 2D face recognition, namely illumination and pose changes. 3D cameras generally deliver 3D scans of faces together with their aligned texture images, so 3D FR can benefit from a judicious fusion of 3D shape and 2D texture information. This PhD thesis is dedicated to the optimization of fusion strategies based on three-dimensional data. Since 3D face scans provide both the facial surface for the pure 3D modality and the aligned 2D texture image, the number of possible fusion schemes is very large. In the literature, many fusion strategies have been proposed for 3D face recognition; they can be roughly classified into early fusion, operating at the feature level, and late fusion, operating on classifier outputs, with intermediate strategies such as serial (cascade) and multi-level fusion proposed as well. The search for an optimal fusion scheme remains extraordinarily complex because the cardinality of the space of possible fusion strategies grows exponentially with the number of competing features and classifiers, so heuristic techniques are required to manage all these features and classifiers efficiently; these constitute our basic approach in this thesis. In addition, the optimality criteria of fusion strategies remain a critical issue: by definition, an optimal fusion strategy is able to integrate and take advantage of the different modalities and, more broadly, of the different pieces of information extracted during the recognition process, whatever their level of abstraction. To overcome these difficulties and propose an optimized solution, our approach relies on training data to qualify the 2D and 3D experts according to performance criteria such as the EER, and on heuristic optimization strategies such as simulated annealing to optimize the combinations of experts to be fused. [...]
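A late (score-level) fusion of several experts, with the mixing weights tuned by simulated annealing, can be sketched as follows. The synthetic genuine/impostor scores, the three "experts" and the accuracy objective are placeholders; the thesis works with real 2D/3D experts and criteria such as the EER rather than this toy setup.

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(7)
n = 400
labels = np.concatenate([np.ones(n), np.zeros(n)])           # 1 = genuine, 0 = impostor

# Three hypothetical experts (e.g. two 2D matchers and one 3D matcher), each returning
# similarity scores that separate the two classes with different reliabilities.
def expert(separation):
    return np.concatenate([rng.normal(separation, 1.0, n), rng.normal(0.0, 1.0, n)])

scores = np.vstack([expert(1.0), expert(1.8), expert(0.6)])   # shape (3, 2n)

def neg_accuracy(weights):
    w = np.abs(weights) / (np.abs(weights).sum() + 1e-12)     # normalized fusion weights
    fused = w @ scores                                        # weighted-sum score fusion
    threshold = np.median(fused)
    predictions = (fused > threshold).astype(float)
    return -np.mean(predictions == labels)

result = dual_annealing(neg_accuracy, bounds=[(0.0, 1.0)] * 3, seed=0)
w_best = result.x / (result.x.sum() + 1e-12)
print("fusion weights:", np.round(w_best, 3))
print("fused accuracy:", -result.fun)
```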
860

Machine Learning for Market Prediction : Soft Margin Classifiers for Predicting the Sign of Return on Financial Assets

Abo Al Ahad, George, Salami, Abbas January 2018 (has links)
Forecasting procedures have found applications in a wide variety of areas within finance and have proven to be among the most challenging problems in the field. Faced with an immense variety of economic data, stakeholders aim to understand the current and future state of the market. Since it is hard for a human to make sense of large amounts of data, different modeling techniques have been applied to extract useful information from financial databases, machine learning being among the most recent of them. Binary classifiers such as Support Vector Machines (SVMs) have to some extent been used for this purpose, and extensions of the algorithm have been developed with increased prediction performance as the main goal. The objective of this study has been to develop a process for improving the performance of soft margin classifiers when predicting the sign of return of financial time series. An analysis of the algorithms is presented in this study, followed by a description of the methodology. The developed process, which incorporates several of the presented soft margin classifiers and other aspects of kernel methods such as Multiple Kernel Learning, has shown promising results over the long term: the ability to capture different market conditions improves when several models and kernels are combined instead of only a single one. The results are, however, mostly congruent with earlier studies in this field. Furthermore, two research questions have been addressed, concerning the complexity of the kernel functions used by the SVM and the robustness of the process as a whole. Complexity here refers to achieving more complex feature maps by combining kernels, either by adding, multiplying or functionally transforming them. It is not concluded that increased complexity leads to a consistent improvement; however, the combined kernel function is superior to the individual models during some periods of the time series used in this thesis. Robustness has been investigated for different signal-to-noise ratios, where it has been observed that windows with previously poor performance are more exposed to the impact of noise.
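The idea of combining kernels for sign-of-return prediction can be illustrated with a small sketch in which an RBF and a polynomial kernel are summed and passed to a soft margin SVM. The synthetic return series, the lag features and the fixed kernel parameters are assumptions for illustration; the thesis uses Multiple Kernel Learning and real financial data rather than this equally weighted sum.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

rng = np.random.default_rng(3)
returns = rng.normal(0.0, 0.01, 1200)            # hypothetical daily returns

# Lagged returns as features, next-day sign as the target.
lags = 5
X = np.column_stack([returns[i:-(lags - i)] for i in range(lags)])
y = np.sign(returns[lags:])
y[y == 0] = 1.0

def combined_kernel(A, B):
    """Equally weighted sum of an RBF and a polynomial kernel (illustrative choice)."""
    return rbf_kernel(A, B, gamma=1000.0) + polynomial_kernel(A, B, degree=2)

split = 1000                                      # simple train/test split in time
clf = SVC(C=1.0, kernel=combined_kernel).fit(X[:split], y[:split])
accuracy = clf.score(X[split:], y[split:])
print(f"out-of-sample sign accuracy: {accuracy:.3f}")
```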
