171

Modélisation dynamique par réseaux de neurones et machines à vecteurs supports : contribution à la maîtrise des émissions polluantes de véhicules automobiles / Dynamic modelling with neural networks and support vector machines: a contribution to controlling pollutant emissions from motor vehicles

Lucea, Marc 22 September 2006 (has links) (PDF)
The growing complexity of the systems used in the automotive industry, in terms of the functions they perform and the methods by which they are implemented, but also in terms of type-approval standards, calls for ever more innovative tools in vehicle design. Indeed, the number of patents filed has risen sharply in recent years, particularly for electronic systems, whose importance in a modern motor vehicle keeps growing. This increasing functional complexity demands a more precise description of the devices involved, especially for complex systems where an analytical approach is hardly feasible. The need for descriptive accuracy, which often requires taking process nonlinearities into account, is thus compounded by the difficulty of analysing the physical phenomena underlying the observations one wishes to model. The progress made in recent years by nonlinear modelling techniques based on learning (notably formal neural networks and support vector machines), together with the growing capacity of computers and of in-vehicle embedded controllers, explains Renault's interest in these tools. With this in mind, a study of nonlinear modelling methods based on learning was undertaken, whose objective was to identify their possible fields of application in the automotive context and to assess both the implementation difficulties and the expected gains. This study was carried out as a CIFRE doctoral contract in collaboration with the Electronics Laboratory of the Ecole Supérieure de Physique et de Chimie Industrielles de la Ville de Paris (ESPCI), headed by Professor Gérard Dreyfus. In general, learning-based modelling techniques make it possible to model physical phenomena that are hard to describe, thereby widening the range of what can be modelled, and also to dispense with a detailed physical description of known processes, thereby shortening the development time of a given model. In return, building such models requires measurements on the process in question, which entails costs that are sometimes far from negligible. Our objective was therefore to identify problems of the first kind, i.e. problems for which building a knowledge-based model is either impossible or particularly difficult. The first chapter of this thesis recalls the basic concepts of process modelling by learning and introduces the essential notions used later on. The second chapter describes the main optimization tools needed to build models by learning. The third chapter gathers all the work carried out during this thesis on neural networks. After recalling the methodology for building neural models, in particular recurrent ones, we present the results obtained on two industrial applications: estimating the temperature at a particular point of the exhaust line, and estimating the emissions of various pollutants at the exhaust outlet. Both applications contribute to controlling pollutant emissions, either during normal vehicle use, since knowing this temperature is essential for implementing active depollution strategies, or at the engine tuning stage, which is made easier by a model predicting pollutant flow rates as a function of engine settings. We also describe an open-loop optimal control system, associated with a neural model, designed to reduce rapid variations of a process output; this system could be used to control the torque surges of a vehicle that follow a rapid change in accelerator-pedal position. A method for the exact computation of the Hessian matrix, for models described by recurrent equations, is then introduced so that this control system can be used with dynamic processes. In the fourth chapter, we turn to kernel-based modelling methods, of which support vector machines are one instance, and attempt to adapt them to the modelling of dynamic processes, first through an analytical treatment (for one particular method), before proposing an iterative approach to the training problem inspired by the semi-directed training algorithm used for recurrent neural networks.
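As an illustration of the recurrent (output-feedback) neural models discussed in this abstract, the following Python sketch trains a minimal NARX-type network on synthetic data with one-step-ahead (directed) learning and then runs it in free simulation. The network size, the toy data and the training loop are assumptions chosen for illustration only; they are not the models or algorithms actually developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic input/output data standing in for bench measurements
# (e.g. engine settings -> exhaust temperature); purely illustrative.
T = 500
u = np.sin(np.linspace(0, 20, T)) + 0.1 * rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):                      # unknown "true" process
    y[t] = 0.8 * y[t - 1] + 0.5 * np.tanh(u[t - 1])

n_hidden = 8
W1 = 0.1 * rng.standard_normal((n_hidden, 2))   # weights for [y(t-1), u(t-1)]
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal(n_hidden)
b2 = 0.0

def f(y_prev, u_prev):
    """One-hidden-layer NARX predictor y_hat(t) = f(y(t-1), u(t-1))."""
    h = np.tanh(W1 @ np.array([y_prev, u_prev]) + b1)
    return W2 @ h + b2, h

# Directed ("teacher-forced") training on one-step-ahead errors.
lr = 0.01
for epoch in range(200):
    for t in range(1, T):
        y_hat, h = f(y[t - 1], u[t - 1])
        err = y_hat - y[t]
        # Gradients of 0.5 * err**2 with respect to each parameter.
        W2 -= lr * err * h
        b2 -= lr * err
        grad_h = err * W2 * (1 - h ** 2)
        W1 -= lr * np.outer(grad_h, np.array([y[t - 1], u[t - 1]]))
        b1 -= lr * grad_h

# Free simulation: the model feeds back its own output (recurrent use).
y_sim = np.zeros(T)
for t in range(1, T):
    y_sim[t], _ = f(y_sim[t - 1], u[t - 1])

print("simulation RMSE:", np.sqrt(np.mean((y_sim - y) ** 2)))
```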
172

FPGA baserad PWM-styrning av BLDC-motorer / FPGA based PWM-control of BLDC motors

Johansson, Andreas January 2003 (has links)
This thesis work contains a literature study about electrical motors in general and how PWM patterns for brushless DC motors can be generated. A suitable method has been implemented as a simulation model in VHDL. A simulation model of a brushless DC motor, describing the phase currents, torque and angular velocity, has also been made; this motor model made it easier to simulate the complete PWM system.
The design was synthesised and tested with a prototype board containing a SPARTAN II FPGA. To test the design, a power stage and a motor were included. The tests showed that the design worked as expected, in agreement with the earlier simulations.
An alternative way of controlling a brushless DC motor has also been studied. This alternative is best suited when the motor's back-EMF is sinusoidal. A simulation model for part of such a system has been made and synthesised in order to examine whether it could be implemented with an FPGA available today.
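As a software illustration of the six-step commutation that underlies PWM pattern generation for brushless DC motors, the sketch below encodes a generic Hall-state commutation table in Python. The table and function are assumptions chosen for illustration; they do not reproduce the VHDL design described in the thesis.

```python
# Six-step commutation lookup: for each valid Hall-sensor state,
# which phase is driven high, which low, and which is left floating.
# This is a generic textbook table, not the pattern from the thesis.
COMMUTATION = {
    # hall: (high side, low side, floating)
    0b001: ("A", "B", "C"),
    0b011: ("A", "C", "B"),
    0b010: ("B", "C", "A"),
    0b110: ("B", "A", "C"),
    0b100: ("C", "A", "B"),
    0b101: ("C", "B", "A"),
}

def phase_outputs(hall_state: int, duty: float) -> dict:
    """Return per-phase drive commands for one PWM period.

    duty is the PWM duty cycle (0..1) applied to the high-side phase;
    the low-side phase is held low and the third phase is tri-stated.
    """
    high, low, floating = COMMUTATION[hall_state]
    return {high: duty, low: 0.0, floating: None}  # None = high impedance

if __name__ == "__main__":
    for hall in (0b001, 0b011, 0b010, 0b110, 0b100, 0b101):
        print(f"hall={hall:03b} -> {phase_outputs(hall, 0.6)}")
```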
173

Modellering och styrning av flis till en sulfatkokare / Modelling and control of wooden chips to a sulphate digester

Ohlsson, Staffan January 2005 (has links)
At the Skoghall paper mill, sulphate (kraft) pulp is produced in a continuous digester originally from 1969. To maintain a high level of production, the process needs to run with few disturbances. Variation in how densely the wood chips pack in the digester is one such disturbance. Today there is no available measurement of how well the chips are packed; instead it is treated as constant.
The variation in the so-called bulk density of the chips is mainly due to variations in the proportion of small-dimension chips. Chips are classified by size, and one of the smallest classes is referred to as pin chips. These are believed to have a large impact on the bulk density, and the amount of pin chips fluctuates more than the other classes, thereby causing disturbances.
The Skoghall paper mill has invested in a ScanChip, an instrument that measures chip dimensions optically. ScanChip reports figures on chip quality, including a measurement of the bulk density; however, this measurement has been shown not to be valid for the Skoghall mill. Using data from ScanChip, a model has been devised that predicts how well the chips are packed, expressed as the bulk density divided by the basic density. The model has proved to yield good results despite a relatively small amount of data.
A theoretical value of the amount of produced pulp has been computed from the revolutions of the production screw that feeds chips into the digester. This value takes into account how well the chips are packed and has shown great similarity to the empirical measurements used today. A simulation over one month showed that differences in the chip mixture affected the measured pulp production by up to 7 ton/h.
Chips are stored in open piles before being used in the pulping process, and four screws move chips from the piles onto conveyor belts. Previous work has shown that the movement of the screws contributes to variations in the amount of pin chips measured by ScanChip.
During the work on this master's thesis I have found that there are variations within the piles that make it difficult to predict the amount of pin chips accordingly; filtering the pin-chip measurements to remove these variations improves the results. A new way of controlling the movements of the screws became operational on 10 March and also improved the results.
The direction in which the screws move influences their speed, mainly in the pile containing the so-called sawmill chips; by changing the amount of chips that each screw delivers, the differences in speed have been reduced. The mixtures in the two piles are not completely homogeneous: there is a greater amount of pin chips in the northern parts than in the southern parts. This could be an effect of the wind direction and will still cause variations.
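To make the arithmetic behind the packing-degree model and the theoretical production figure concrete, here is a small Python sketch assuming a linear packing model in the pin-chip fraction and a simple screw-feed mass balance. All coefficients and constants are placeholders for illustration, not the values identified at the Skoghall mill.

```python
def packing_degree(pin_chip_fraction: float,
                   intercept: float = 0.42,
                   slope: float = 0.35) -> float:
    """Predicted packing degree = bulk density / basic density.

    A simple linear model in the pin-chip fraction; the coefficients here
    are placeholders, not the ones identified in the thesis.
    """
    return intercept + slope * pin_chip_fraction

def produced_pulp_tph(screw_rpm: float,
                      volume_per_rev_m3: float,
                      basic_density_kg_m3: float,
                      pin_chip_fraction: float,
                      yield_factor: float = 0.47) -> float:
    """Theoretical pulp production (ton/h) from the chip-feed screw.

    chips fed [kg/h] = rpm * 60 * volume/rev * basic density * packing degree,
    then multiplied by an assumed pulp yield and converted to tonnes.
    """
    pack = packing_degree(pin_chip_fraction)
    chips_kg_per_h = screw_rpm * 60 * volume_per_rev_m3 * basic_density_kg_m3 * pack
    return chips_kg_per_h * yield_factor / 1000.0

if __name__ == "__main__":
    # Example: 12 rpm screw, 0.5 m3 per revolution, 400 kg/m3 basic density.
    print(produced_pulp_tph(12, 0.5, 400, pin_chip_fraction=0.08))
```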
174

Applying Discriminant Functions with One-Class SVMs for Multi-Class Classification

Lee, Zhi-Ying 09 August 2007 (has links)
AdaBoost.M1 has been successfully applied to improve the accuracy of a learning algorithm for multi-class classification problems. However, it assumes that the accuracy of each base classifier is better than 1/2, which may be hard to achieve in practice for a multi-class problem. A new algorithm called AdaBoost.MK, which only requires base classifiers better than random guessing (1/k), is therefore designed. Early SVM-based multi-class classification algorithms work by splitting the original problem into a set of two-class sub-problems, and the time and space required by these algorithms are very demanding. In order to obtain low time and space complexities, we develop a base classifier that integrates one-class SVMs with discriminant functions. In this study, a hybrid method that integrates AdaBoost.MK and one-class SVMs with improved discriminant functions as the base classifiers is proposed to solve multi-class classification problems. Experimental results on data sets from UCI and Statlog show that the proposed approach outperforms many popular multi-class algorithms, including support vector clustering and AdaBoost.M1 with one-class SVMs as the base classifiers.
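A minimal sketch of the base-classifier idea, one one-class SVM per class with classification by the largest discriminant value, is given below using scikit-learn. The improved discriminant functions and the AdaBoost.MK weighting scheme proposed in the thesis are not reproduced here, and the toy data set is an assumption.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Toy data standing in for the UCI/Statlog sets used in the thesis.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# One one-class SVM per class, each fitted only on that class's samples.
models = {}
for c in np.unique(y_tr):
    models[c] = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_tr[y_tr == c])

def predict(X):
    """Assign each sample to the class whose one-class SVM gives the largest
    signed distance (a simple discriminant; the thesis builds improved
    discriminant functions on top of this idea)."""
    scores = np.column_stack([models[c].decision_function(X) for c in sorted(models)])
    return np.array(sorted(models))[np.argmax(scores, axis=1)]

print("test accuracy:", np.mean(predict(X_te) == y_te))
```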
175

Intelligent Recognition of Texture and Material Properties of Fabrics

Wang, Xin 02 November 2011 (has links)
Fabrics are unique materials with a variety of properties that affect their performance and end-uses. A computerized fabric property evaluation and analysis method plays a crucial role not only in the textile industry but also in scientific research, since accurate analysis and measurement of fabric properties provides a powerful tool for gauging product quality, assuring regulatory compliance and assessing the performance of textile materials. This thesis investigates solutions for applying computerized methods to evaluate and intelligently interpret the texture and material properties of fabric in an inexpensive and efficient way. Firstly, a method is proposed that automatically recognizes the basic weave pattern and precisely measures the yarn count. The yarn crossed-areas are segmented by a spatial-domain integral projection approach. Combining fuzzy c-means (FCM) and principal component analysis (PCA) on grey level co-occurrence matrix (GLCM) feature vectors extracted from the segments makes it possible to classify detected segments into two clusters. Based on the analysis of texture orientation features, the yarn crossed-area states are determined automatically. An autocorrelation method is used to find weave repeats and correct detection errors. The method was validated using computer-simulated woven samples and real woven fabric images; the test samples have various yarn counts, appearances and weave types. All weave patterns of the tested fabric samples were successfully recognized, and the computed yarn counts are consistent with manual counts. Secondly, we present a methodology for using high-resolution 3D surface data of fabric samples to measure surface roughness in a nondestructive and accurate way. A parameter FDFFT, the fractal dimension estimated from the 2D FFT of the 3D surface scan, is proposed as the indicator of surface roughness. The robustness of FDFFT, namely its rotation- and scale-invariance, is validated on a number of computer-simulated fractal Brownian images. In order to evaluate the usefulness of FDFFT, a method of calculating standard roughness parameters from the 3D surface scan is also introduced. According to the test results, FDFFT is a fast and reliable parameter for measuring fabric roughness from 3D surface data. We further train a neural network model, using the back-propagation algorithm and FDFFT, for predicting the standard roughness parameters; the proposed model shows good performance experimentally. Finally, an intelligent approach for the interpretation of fabric objective measurements is proposed using support vector machine (SVM) techniques. Human expert assessments of fabric samples are used during the training phase in order to adapt the general system into an applicable model. Since the target output of the system is clear, the uncertainty that lies in current subjective fabric evaluation does not affect the performance of the proposed model. The support vector machine is one of the best solutions for handling high-dimensional data classification, and the complexity of the fabric property problem is dealt with effectively. The generalization ability of SVM allows the components to be implemented and designed separately. Extensive cross-validations are performed and demonstrate the performance of the system.
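For readers unfamiliar with spectral fractal-dimension estimates, the sketch below shows one conventional recipe for deriving a fractal dimension from the 2D FFT of a height map: radially average the power spectrum, fit the log-log slope, and apply D = (8 - beta) / 2. The exact FDFFT definition used in the thesis may differ in windowing, fitting range and scaling, so this is an assumed illustration rather than the thesis's implementation.

```python
import numpy as np

def fdfft(height_map: np.ndarray) -> float:
    """Estimate a surface fractal dimension from the 2D power spectrum.

    Assumes an isotropic power law P(f) ~ f**(-beta) and the common relation
    D = (8 - beta) / 2 for fractional-Brownian-like surfaces; this is a
    generic recipe, not necessarily the thesis's exact FDFFT definition.
    """
    z = height_map - height_map.mean()
    power = np.abs(np.fft.fftshift(np.fft.fft2(z))) ** 2

    # Radially averaged power spectrum.
    ny, nx = z.shape
    ky, kx = np.indices((ny, nx))
    r = np.hypot(kx - nx // 2, ky - ny // 2).astype(int)
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.maximum(counts, 1)

    # Log-log fit over intermediate frequencies (skip DC and the highest bins).
    f = np.arange(len(radial))
    mask = (f > 2) & (f < min(nx, ny) // 2)
    beta = -np.polyfit(np.log(f[mask]), np.log(radial[mask]), 1)[0]
    return (8.0 - beta) / 2.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    surface = rng.standard_normal((256, 256))  # white-noise stand-in, smoke test only
    print("estimated fractal dimension:", round(fdfft(surface), 2))
```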
176

Protein Secondary Structure Prediction Using Support Vector Machines, Neural Networks and Genetic Algorithms

Reyaz-Ahmed, Anjum B 03 May 2007 (has links)
Bioinformatics techniques for protein secondary structure prediction mostly depend on the information available in the amino acid sequence. Support vector machines (SVM) have shown strong generalization ability in a number of application areas, including protein structure prediction. In this study, a new sliding window scheme is introduced, with multiple windows used to form the protein data for training and testing the SVM. An orthogonal encoding scheme coupled with the BLOSUM62 matrix is used to make the prediction. First, the prediction of binary classifiers using multiple windows is compared with the single window scheme; the results show that the single window is not better in all cases. Two new classifiers are then introduced for effective tertiary classification. These classifiers use neural networks and genetic algorithms to optimize the accuracy of the tertiary classifier. The accuracy levels of the new architectures are determined and compared with other studies; the tertiary architecture is better than most available techniques.
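The sliding-window encoding can be illustrated with a short sketch: each residue is represented by an orthogonal (one-hot) window of its neighbours and fed to an SVM classifier. The sequence, labels and window size below are made up for illustration, and the BLOSUM62 coupling, the multiple-window scheme and the neural/genetic tertiary classifiers from the study are omitted.

```python
import numpy as np
from sklearn.svm import SVC

AMINO = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AMINO)}

def encode_windows(sequence: str, half_window: int = 6) -> np.ndarray:
    """Orthogonal (one-hot) encoding of a sliding window around each residue.

    Positions outside the sequence are left as all-zero columns, a common
    padding choice; BLOSUM62 rows could be concatenated here as well.
    """
    n, w = len(sequence), 2 * half_window + 1
    features = np.zeros((n, w * len(AMINO)))
    for i in range(n):
        for j, pos in enumerate(range(i - half_window, i + half_window + 1)):
            if 0 <= pos < n:
                features[i, j * len(AMINO) + AA_INDEX[sequence[pos]]] = 1.0
    return features

if __name__ == "__main__":
    # Toy example: one short sequence with made-up H/E/C labels.
    seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
    labels = list("CCCHHHHHHHHHHCCCEEEEECCCHHHHHHHHC")
    X = encode_windows(seq)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
```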
177

Identification et caractérisation des perturbations affectant les réseaux électriques HTA / Identification and characterization of the disturbances affecting MV (HTA) distribution networks

Caujolle, Mathieu 27 September 2011 (has links) (PDF)
Recognizing the disturbances that occur on MV (HTA) distribution networks is an essential issue for industrial customers as well as for the network operator. This thesis work led to the development of an automatic identification system. It relies on segmentation methods that decompose the transient and steady-state regimes of the disturbances precisely and efficiently, using linear Kalman or anti-harmonic filters to extract the transient regimes. Harmonic variations and the presence of nearby transients are handled with adaptive thresholds, and a-posteriori delay-correction methods improve the accuracy of the decomposition. Indicators suited to the dynamics of the analysed operating regimes are used to characterize the disturbances; being largely insensitive to segmentation errors and harmonic disturbances, they give a reliable description of the disturbance phases. Two types of decision systems were also studied: expert systems and SVM classifiers. These systems were developed from a large base of simulated disturbances, and their performance was evaluated on a base of real disturbances: they determine the type and direction of the observed disturbances effectively (average recognition rate > 98%).
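As a greatly simplified illustration of the segmentation step, the sketch below removes a sliding least-squares estimate of the fundamental and flags samples whose residual exceeds an adaptive (median + k*MAD) threshold. The Kalman and anti-harmonic filters, the delay correction and the actual adaptive thresholds of the thesis are not reproduced; all parameters and the test signal are assumptions.

```python
import numpy as np

def segment_transients(signal, fs=10_000.0, f0=50.0, win_cycles=2, k=6.0):
    """Crude transient detector for a voltage or current waveform.

    A window-by-window least-squares fit of the fundamental (cos/sin at f0
    plus an offset) is subtracted; samples whose residual exceeds
    median + k * MAD are flagged as transient. This merely stands in for
    the Kalman / anti-harmonic filters and adaptive thresholds of the thesis.
    """
    n_win = int(win_cycles * fs / f0)
    t = np.arange(len(signal)) / fs
    residual = np.zeros_like(signal)
    for start in range(0, len(signal) - n_win + 1, n_win):
        idx = np.arange(start, start + n_win)
        A = np.column_stack([np.cos(2 * np.pi * f0 * t[idx]),
                             np.sin(2 * np.pi * f0 * t[idx]),
                             np.ones(n_win)])
        coef, *_ = np.linalg.lstsq(A, signal[idx], rcond=None)
        residual[idx] = signal[idx] - A @ coef
    dev = np.abs(residual - np.median(residual))
    mad = np.median(dev) + 1e-12
    return dev > k * mad

if __name__ == "__main__":
    fs, f0 = 10_000.0, 50.0
    t = np.arange(0, 0.2, 1 / fs)
    v = np.sin(2 * np.pi * f0 * t)
    v[(t > 0.10) & (t < 0.12)] *= 0.4          # simulated voltage dip
    flags = segment_transients(v, fs, f0)
    print(f"{flags.sum()} of {flags.size} samples flagged as transient")
```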
179

Desarrollo de diferentes métodos de selección de variables para sistemas multisensoriales / Development of different variable selection methods for multisensor systems

Gualdron Guerrero, Oscar Eduardo 13 July 2006 (has links)
Electronic olfaction systems are instruments developed to emulate biological olfactory systems; they are popularly known as electronic noses (EN). The scientists and engineers who keep refining this type of instrument work on several fronts: the development of new gas sensors (with better discrimination and higher sensitivity), the adaptation of analytical techniques such as mass spectrometry (MS) as a substitute for the traditional array of chemical sensors, the extraction of new parameters from the sensor responses (pre-processing), and the development of more sophisticated data-processing techniques. One of the main drawbacks of current artificial olfaction systems is the high dimensionality of the data sets to be analysed, due to the large number of parameters obtained from each measurement. The main objective of this thesis was to study and develop new variable selection methods in order to reduce the dimensionality of the data and thus optimize the recognition process in electronic olfaction systems based on gas sensors or on mass spectrometry. To assess the value of these methods and verify whether they really help solve the dimensionality problem, four data sets from real applications were used, allowing the different implemented methods to be tested and compared objectively. These four data sets were used in three studies, whose conclusions are summarized below. The first study showed that different methods (sequential or stochastic) can be coupled to fuzzy ARTMAP or PNN classifiers and used for variable selection in gas analysis problems with multisensor systems. The methods were applied to simultaneously identify and quantify three volatile organic compounds and their binary mixtures by building the corresponding neural classification models. The second study proposes a new variable selection strategy that proved effective on several data sets from olfaction systems based on mass spectrometry (MS). The strategy was first applied to a data set consisting of synthetic mixtures of volatile compounds; this set was used to show that the selection process can identify a minimal number of fragments that allow correct discrimination between mixtures using fuzzy ARTMAP classifiers. Moreover, given the simple nature of the problem, it was possible to show that the selected fragments are characteristic ionization fragments of the species present in the mixtures to be discriminated. Once the correct behaviour of this strategy had been demonstrated, the methodology was applied to two further data sets (olive oil and Iberian ham, respectively). The third study of this thesis concerned the development of a new variable selection method inspired by the concatenation of several backward-selection passes. The method is specifically designed to work with support vector machines (SVM) in classification or regression problems. 
The usefulness of the method was assessed using two of the data sets already used previously (measurements of simple vapours and vapour mixtures performed with an array of metal oxide gas sensors, and measurements of Iberian ham). In conclusion, for the different data sets studied, including a prior variable selection step results in a drastic reduction in dimensionality and a significant increase in the corresponding classification performance. The methods introduced here are useful not only for solving MS-based electronic nose problems, but for any artificial olfaction application suffering from high-dimensionality problems, such as the data sets studied in this work, regardless of the sensing technology used.
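The backward-selection-around-an-SVM idea from the third study can be sketched as a greedy wrapper: repeatedly drop the feature whose removal least degrades cross-validated SVM accuracy. The snippet below is a generic illustration on a stand-in data set, not the concatenated multi-pass procedure developed in the thesis.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def backward_selection(X, y, min_features=3):
    """Greedy sequential backward selection wrapped around an SVM classifier."""
    remaining = list(range(X.shape[1]))
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
    best_score = cross_val_score(clf, X[:, remaining], y, cv=5).mean()
    while len(remaining) > min_features:
        scores = []
        for f in remaining:
            subset = [g for g in remaining if g != f]
            scores.append((cross_val_score(clf, X[:, subset], y, cv=5).mean(), f))
        score, worst = max(scores)          # dropping `worst` hurts the least
        if score + 1e-9 < best_score:       # stop if every removal degrades accuracy
            break
        best_score = score
        remaining.remove(worst)
    return remaining, best_score

if __name__ == "__main__":
    X, y = load_wine(return_X_y=True)       # stands in for the e-nose data sets
    feats, score = backward_selection(X, y)
    print("kept features:", feats, "cv accuracy:", round(score, 3))
```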
