  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
341

Fundamentals of substructure dynamics : In-situ experiments and numerical simulation

Borthwick, Verity January 2010 (has links)
Substructure dynamics incorporate all features occurring on a subgrain scale. The substructure governs the rheology of a rock, which in turn determines how it will respond to different processes during tectonic changes. This project details an in-depth study of substructural dynamics during post-deformational annealing, using single-crystal halite as an analogue for silicate materials. The study combines three different techniques: in-situ annealing experiments conducted inside the scanning electron microscope and coupled with electron backscatter diffraction; 3D X-ray diffraction coupled with in-situ heating conducted at the European Synchrotron Radiation Facility; and numerical simulation using the microstructural modelling platform Elle. The main outcome of the project is a significantly refined model for recovery at annealing temperatures below that of the deformation preceding annealing. Behaviour is highly dependent on the annealing temperature, particularly in relation to the activation temperature for climb, and is also strongly reliant on short- versus long-range dislocation effects. Subgrain boundaries were categorised with regard to their behaviour during annealing, orientation and morphology, and it was found that different types of boundaries behave differently and must be treated as such. Numerical simulation of the recovery process supported these findings, with much of the subgrain boundary behaviour reproduced by small variations in the mobilities on different rotation axes and an increase in the size of the calculation area to imitate long-range dislocation effects. Dislocations were found to remain independent up to much higher misorientation angles than previously thought, with simulation results indicating that the change in boundary response occurs at ~7° for halite. Comparison of the 2D experiments with 3D indicated that general boundary behaviour was similar within the volume and was not significantly influenced by effects from the free surface. 
Boundary migration, however, occurred more extensively in the 3D experiment. This difference is interpreted to be related to boundary drag on thermal grooves on the 2D experimental surface. While relative boundary mobilities will be similar, absolute values must therefore be treated with some care when using a 2D analysis. / At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 3: Manuscript. Paper 4: Manuscript.
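The transition in boundary behaviour at a critical misorientation angle can be illustrated with the classical Read-Shockley model, in which the energy of a low-angle boundary (treated as a discrete dislocation array) rises with misorientation and saturates at the high-angle value. The sketch below is illustrative only: the cutoff is set to the ~7° value reported above, and the normalised energy scale is an assumed value, not one from the thesis.

```python
import math

def read_shockley_energy(theta_deg, theta_max_deg=7.0, gamma_max=1.0):
    """Relative boundary energy per the Read-Shockley model.

    theta_max_deg: misorientation above which the boundary no longer behaves
    as an array of independent dislocations (~7 deg for halite, per the study).
    gamma_max: energy of a high-angle boundary (normalised; assumed value).
    """
    if theta_deg <= 0:
        return 0.0
    if theta_deg >= theta_max_deg:
        return gamma_max  # treated as a high-angle boundary
    t = theta_deg / theta_max_deg
    return gamma_max * t * (1.0 - math.log(t))

# Energy rises steeply at small misorientations, then saturates:
for angle in (1, 3, 5, 7, 10):
    print(angle, round(read_shockley_energy(angle), 3))
```

The steep low-angle rise followed by saturation is what motivates treating low- and high-angle boundaries as distinct populations with different mobilities.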
342

Motion planning for multi-link robots with artificial potential fields and modified simulated annealing

Yagnik, Deval 01 December 2010 (has links)
In this thesis we present a hybrid control methodology using Artificial Potential Fields (APF) integrated with a modified Simulated Annealing (SA) optimization algorithm for motion planning of a team of multi-link robots. The principle of this work is based on the locomotion of a snake, where subsequent links follow the trace of the head. The proposed algorithm uses the APF method, which provides simple, efficient and effective path planning, while the modified SA is applied so that the robots can recover from local minima. Modifications to the SA algorithm improve its performance and reduce convergence time. Validation on a three-link snake robot shows that the control laws derived from the motion planning algorithm combining APF and SA can successfully navigate the robot to its destination while avoiding collisions with multiple obstacles and other robots in its path, as well as recover from local minima. To improve the performance of the algorithm, the gradient descent method is replaced by Newton's method, which helps reduce the zigzagging phenomenon seen with gradient descent when the robot moves in the vicinity of an obstacle. / UOIT
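The APF-plus-SA combination can be sketched for a single point robot in 2D: gradient descent on an attractive-plus-repulsive potential, with a Metropolis acceptance rule and a random kick standing in for the SA recovery step. This is a minimal illustrative sketch, not the thesis's control laws; the gains, step size and cooling rate are all assumed values.

```python
import math, random

def potential(p, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Quadratic attractive well at the goal plus repulsive terms near obstacles."""
    u = 0.5 * k_att * ((p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2)
    for ox, oy in obstacles:
        d = max(math.hypot(p[0] - ox, p[1] - oy), 1e-6)
        if d < d0:
            u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u

def plan(start, goal, obstacles, step=0.05, t0=1.0, cooling=0.95, max_iter=5000):
    """Descend the potential; an SA acceptance rule plus a random kick
    lets the robot climb out of local minima."""
    p, temp, path = list(start), t0, [tuple(start)]
    for _ in range(max_iter):
        if math.hypot(p[0] - goal[0], p[1] - goal[1]) < 0.1:
            break  # close enough to the goal
        eps = 1e-4  # numerical gradient of the potential field
        gx = (potential([p[0] + eps, p[1]], goal, obstacles)
              - potential([p[0] - eps, p[1]], goal, obstacles)) / (2 * eps)
        gy = (potential([p[0], p[1] + eps], goal, obstacles)
              - potential([p[0], p[1] - eps], goal, obstacles)) / (2 * eps)
        norm = math.hypot(gx, gy) or 1e-9
        cand = [p[0] - step * gx / norm, p[1] - step * gy / norm]
        du = potential(cand, goal, obstacles) - potential(p, goal, obstacles)
        if du < 0 or random.random() < math.exp(-du / temp):
            p = cand  # downhill, or an uphill move accepted by SA
        else:
            p = [p[0] + random.uniform(-step, step),   # random kick to escape
                 p[1] + random.uniform(-step, step)]   # a local minimum
        temp = max(temp * cooling, 1e-3)
        path.append(tuple(p))
    return path
```

With no obstacle in the way the descent runs straight to the goal; with an obstacle between start and goal, the SA acceptance and kicks are what prevent the robot from stalling at the saddle of the combined potential.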
343

Upper Bound Analysis and Routing in Optical Benes Networks

Zhong, Jiling 12 January 2006 (has links)
Multistage Interconnection Networks (MINs) are popular in switching and communication applications and have been used in telecommunication and parallel computing systems for many years. The new challenge facing optical MINs is crosstalk, which is caused by coupling between two signals within a switching element. Crosstalk is not a serious issue in the electrical domain, but due to the stringent Bit Error Rate (BER) constraint it is a major concern in the optical domain. In this dissertation, we study the blocking probability in optical networks and the deterministic conditions for strictly non-blocking Vertically Stacked Optical Benes Networks (VSOBN), with and without worst-case scenarios. We establish an upper bound on the blocking probability of VSOBN with respect to the number of planes used when the non-blocking requirement is not met. We then study routing in WDM Benes networks and propose a new routing algorithm that reduces the number of wavelengths needed. Since routing in WDM optical networks is an NP-hard problem, many heuristic algorithms have been designed to perform this routing. We also develop a genetic algorithm, a simulated annealing algorithm and an ant colony technique, and apply these AI algorithms to route the connections in WDM Benes networks.
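Reducing the number of wavelengths amounts to colouring a conflict graph in which two connections clash when their paths share a fibre link. The first-fit sketch below illustrates that wavelength-assignment step generically; it is not the routing algorithm proposed in the dissertation, and the demo connection names and links are hypothetical.

```python
def first_fit_wavelengths(paths):
    """Assign the lowest-index wavelength to each connection such that no two
    connections sharing a fibre link reuse a wavelength (greedy graph colouring).

    paths: dict mapping connection name -> set of links used by that connection.
    Returns name -> wavelength index; max index + 1 is the wavelength count.
    """
    assignment = {}
    for name, links in paths.items():
        used = {assignment[other] for other, olinks in paths.items()
                if other in assignment and links & olinks}
        w = 0
        while w in used:
            w += 1
        assignment[name] = w
    return assignment

demo = {
    "c1": {("a", "b"), ("b", "c")},
    "c2": {("b", "c"), ("c", "d")},   # shares link (b, c) with c1
    "c3": {("a", "d")},               # link-disjoint from both
}
print(first_fit_wavelengths(demo))  # {'c1': 0, 'c2': 1, 'c3': 0}
```

Because the assignment quality depends on the order in which connections are considered, metaheuristics such as SA, genetic algorithms or ant colony optimization can search over orderings (or over the routes themselves) to lower the wavelength count further.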
344

Growth And Morphological Characterization Of Intrinsic Hydrogenated Amorphous Silicon Thin Film For A-si:h/c-si Heterojunction Solar Cells

Pehlivan, Ozlem 01 February 2013 (has links) (PDF)
Passivation of the crystalline silicon (c-Si) wafer surface and a reduction in the number of interface defects are basic requirements for the development of high-efficiency a-Si:H/c-Si heterojunction solar cells. Surface passivation is generally achieved through carefully developed silicon wafer cleaning processes and the optimization of PECVD parameters for the deposition of an intrinsic hydrogenated amorphous silicon layer. The a-Si:H layers are grown in a UHV-PECVD system. Solar cells were deposited on p-type Cz-silicon substrates in the structure Al front contact/a-Si:H(n)/a-Si:H(i)/c-Si(p)/Al back contact. Solar cell parameters were determined under standard test conditions, namely 1000 W/m², AM 1.5G illumination at 25 °C. Growth of the (i) a-Si:H films on the clean wafer surface was investigated as a function of substrate temperature, RF power density, gas flow rate, hydrogen dilution ratio and deposition time, and was characterized using SEM, HRTEM, AFM, SE, ATR-FTIR and I/V measurements. The structural properties of the films deposited on the silicon wafer surface directly affect the solar cell efficiency. The morphology of the films grown on the crystalline surface was found to depend on the deposition parameters in a complex way and may even change during the deposition time. At a substrate temperature of 225 °C, the (i) a-Si:H films were found to grow epitaxially at the beginning of the deposition, followed by simultaneous growth of crystalline and amorphous structure, and finally transforming to a completely amorphous structure. Despite this complex structure, an efficiency of 9.2% was achieved for solar cells with a total area of 72 cm². In this cell structure, TCO and back-surface passivation do not exist. In the
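As a quick sanity check on the reported figures: efficiency under standard test conditions is the maximum power output divided by the incident power (irradiance times cell area). The helper below is a generic sketch; only the 9.2% efficiency and 72 cm² area come from the abstract, and any split into Voc, Isc and fill factor is hypothetical.

```python
def cell_efficiency(voc, isc, fill_factor, area_cm2, irradiance=1000.0):
    """Efficiency = Voc * Isc * FF / (irradiance * area), under STC (AM1.5G).

    voc in volts, isc in amperes, area in cm^2, irradiance in W/m^2.
    """
    p_max = voc * isc * fill_factor        # maximum power point output, W
    p_in = irradiance * area_cm2 * 1e-4    # incident power (area to m^2), W
    return p_max / p_in

# The 9.2% cell on 72 cm^2 must deliver about 0.66 W at the maximum power point:
p_needed = 0.092 * 1000.0 * 72e-4
print(round(p_needed, 3))  # 0.662
```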
345

Desarrollo de diferentes métodos de selección de variables para sistemas multisensoriales

Gualdron Guerrero, Oscar Eduardo 13 July 2006 (has links)
Electronic olfaction systems are instruments developed to emulate biological olfactory systems; they have become popularly known as electronic noses (EN). The scientists and engineers who continue to refine these instruments work in several directions: the development of new gas sensors (with better discrimination and higher sensitivity), the adaptation of analytical techniques such as mass spectrometry (MS) in place of the traditional array of chemical sensors, the extraction of new parameters from the sensor response (pre-processing), and the development of more sophisticated data-processing techniques. One of the main drawbacks of current artificial olfaction systems is the high dimensionality of the datasets to be analysed, owing to the large number of parameters obtained from each measurement. The main objective of this thesis has been to study and develop new variable selection methods in order to reduce the dimensionality of the data and thereby optimise the recognition process in electronic olfaction systems based on gas sensors or mass spectrometry. To assess these methods and verify whether they really help to solve the dimensionality problem, four datasets from real applications were used, allowing the different implemented methods to be tested and compared objectively. These four datasets were used in three studies, whose conclusions are reviewed below. The first study demonstrated that different methods (sequential or stochastic) can be coupled to fuzzy ARTMAP or PNN classifiers and used for variable selection in gas analysis problems with multisensor systems. 
The methods were applied to simultaneously identify and quantify three volatile organic compounds and their binary mixtures by building the corresponding neural classification models. The second study proposes a new variable selection strategy that proved effective on different datasets from olfaction systems based on mass spectrometry (MS). The strategy was first applied to a dataset consisting of synthetic mixtures of volatile compounds. This dataset was used to show that the selection process can identify a minimal number of fragments that enable correct discrimination between mixtures using fuzzy ARTMAP classifiers. Moreover, given the simple nature of the problem, it was possible to show that the selected fragments were characteristic ionisation fragments of the species present in the mixtures to be discriminated. Once the correct operation of this strategy had been demonstrated, the methodology was applied to two further datasets (olive oil and Iberian ham, respectively). The third study concerned the development of a new variable selection method inspired by the concatenation of several backward selection processes. The method is specifically designed to work with Support Vector Machines (SVM) for classification or regression problems. Its usefulness was assessed using two of the datasets employed previously. In conclusion, for the different datasets studied, including a prior variable selection step produced a drastic reduction in dimensionality and a significant increase in the corresponding classification performance. The methods introduced here are useful not only for MS-based electronic nose problems, but for any artificial olfaction application suffering from high-dimensionality problems, as was the case for the datasets studied in this work.
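The backward-selection idea underlying the third study can be sketched generically: repeatedly remove the feature whose removal least degrades (or most improves) a wrapper score, stopping when every removal hurts. The scorer below is a toy stand-in for the cross-validated SVM or fuzzy ARTMAP performance used in the thesis; the feature names are hypothetical.

```python
def backward_selection(features, score, min_features=1):
    """Greedy sequential backward selection.

    features: list of feature names; score: callable on a tuple of features,
    higher is better (e.g. a cross-validated classifier accuracy).
    """
    current = list(features)
    best = score(tuple(current))
    while len(current) > min_features:
        trials = [(score(tuple(f for f in current if f != d)), d) for d in current]
        s, drop = max(trials)
        if s < best:
            break  # every removal hurts: stop
        best, current = s, [f for f in current if f != drop]
    return current, best

# Toy score: only f1 and f3 are informative; every extra feature costs 0.5.
def toy_score(feats):
    return sum(2 for f in feats if f in ("f1", "f3")) - 0.5 * len(feats)

selected, s = backward_selection(["f1", "f2", "f3", "f4"], toy_score)
print(sorted(selected), s)  # ['f1', 'f3'] 3.0
```

Wrapper selection like this is expensive (one model evaluation per candidate removal per round), which is why dimensionality reduction pays off most on high-dimensional MS data.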
346

Xarxes neuronals per a la generació de dissenys en blocs / Neural networks for the generation of block designs

Bofill Soliguer, Pau 04 November 1997 (has links)
No description available.
347

Tuning of Metaheuristics for Systems Biology Applications

Höghäll, Anton January 2010 (has links)
In the field of systems biology the task of finding optimal model parameters is a common procedure. The optimization problems encountered are often multi-modal, i.e., with several local optima. In this thesis, a class of algorithms for multi-modal problems called metaheuristics is studied. A downside of metaheuristic algorithms is that they depend on algorithm settings in order to yield ideal performance. This thesis studies an approach to tuning these algorithm settings using user-constructed test functions which are faster to evaluate than an actual biological model. A statistical procedure is constructed in order to distinguish differences in performance between different configurations. Three optimization algorithms are examined more closely: scatter search, particle swarm optimization, and simulated annealing. However, the statistical procedure used can be applied to any algorithm that has configurable options. The results are inconclusive in the sense that performance advantages between configurations on the test functions are not necessarily transferred onto real biological models. However, of the algorithms studied, a scatter search implementation was the clear top performer in general. The set of test functions used must be studied if any further work is to follow this thesis.
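The tuning approach can be illustrated in miniature: run two configurations of a simple search on a cheap multi-modal test function (Rastrigin) many times, and count paired wins as a crude sign-test style statistic. This is an illustrative sketch, not the thesis's statistical procedure; the search method, test function choice and run counts are all assumptions.

```python
import math, random

def rastrigin(x):
    """Classic multi-modal test function; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def random_search(step, dim=2, iters=300, rng=None):
    """A bare-bones local search whose only configurable setting is step size."""
    rng = rng or random
    x = [rng.uniform(-5.12, 5.12) for _ in range(dim)]
    fx = rastrigin(x)
    for _ in range(iters):
        cand = [xi + rng.gauss(0, step) for xi in x]
        fc = rastrigin(cand)
        if fc < fx:  # keep only improving moves
            x, fx = cand, fc
    return fx

def compare(step_a, step_b, runs=25, seed=1):
    """Count wins per configuration over repeated runs: a crude sign-test count."""
    rng = random.Random(seed)
    wins_a = sum(random_search(step_a, rng=rng) < random_search(step_b, rng=rng)
                 for _ in range(runs))
    return wins_a, runs - wins_a
```

A real procedure would add a significance test on the win counts (or on the paired final objective values) before declaring one configuration better, which is exactly the kind of machinery the thesis constructs.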
348

Optimal Path Searching through Specified Routes using different Algorithms

Farooq, Farhan January 2009 (has links)
Connecting different electrical, network and data devices with minimum cost and shortest path length is a complex job. In large buildings, where the devices are placed at different locations on different floors and only some specific routes are available for cables and buses, the shortest-path search becomes more complex. The aim of this thesis project is to develop an application which identifies the best path to connect all objects or devices by following the specified routes. To address this issue we adopted three algorithms, Greedy Algorithm, Simulated Annealing and Exhaustive Search, and analyzed their results. The given problem is similar to the Travelling Salesman Problem. Exhaustive search checks every possibility and gives the exact result, but it is an impractical solution because of its huge time consumption: if the number of objects grows beyond 12, it takes hours to find the shortest path. Simulated annealing emerged with promising results at a lower time cost. Because of its probabilistic nature, simulated annealing may not find the optimum, but it gives a near-optimal solution in a reasonable duration. The greedy algorithm is not a good choice for this problem. Simulated annealing therefore proved to be the best algorithm for this problem. The project has been implemented in the C language and takes its input from, and stores its output in, an Excel workbook.
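Since the problem is likened to the Travelling Salesman Problem, a minimal simulated-annealing TSP solver shows the core of the approach: 2-opt segment reversals accepted by the Metropolis rule under a geometric cooling schedule. This is a generic sketch, not the thesis's C implementation, and the control parameters are assumed values.

```python
import math, random

def tour_length(tour, pts):
    """Total length of a closed tour over the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal_tsp(pts, t0=10.0, cooling=0.999, iters=20000, seed=0):
    """Simulated annealing over tours using 2-opt segment-reversal moves."""
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    rng.shuffle(tour)                       # random starting tour
    cur = tour_length(tour, pts)
    best, best_tour, temp = cur, tour[:], t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        delta = tour_length(cand, pts) - cur
        # Metropolis rule: always accept improvements, sometimes accept worse
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            tour, cur = cand, cur + delta
            if cur < best:
                best, best_tour = cur, tour[:]
        temp *= cooling                     # geometric cooling schedule
    return best_tour, best

# Twelve points on a unit circle: the shortest tour follows the circle
# (length 24*sin(pi/12), about 6.21).
pts = [(math.cos(2 * math.pi * k / 12), math.sin(2 * math.pi * k / 12))
       for k in range(12)]
tour, length = anneal_tsp(pts)
```

The early high-temperature phase accepts many worsening moves (escaping local minima), while the late low-temperature phase degenerates into pure 2-opt improvement, which is why SA lands near the optimum where greedy construction alone would not.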
349

Implementation of a Simulated Annealing algorithm for Matlab

Moins, Stephane January 2002 (has links)
In this report we describe an adaptive simulated annealing method for sizing the devices in analog circuits. The motivation for using an adaptive simulated annealing method for analog circuit design is to increase the efficiency of the circuit design process. To demonstrate the functionality and performance of the approach, an operational transconductance amplifier is simulated. The circuit is modeled with symbolic equations that are derived automatically by a simulator.
350

Simulated Annealing : implementering mot integrerade analoga kretsar / Simulated Annealing : implementation towards integrated analog circuits

Jonsson, Per-Axel January 2004 (has links)
Today electronics is becoming more and more complex, and to keep costs and power consumption low, both digital and analog parts are implemented on the same chip. The degree of automation for the digital parts has increased rapidly and is high, but for the analog parts this has not happened. This has created a big gap between the degrees of automation for the two parts and makes the analog parts the bottleneck in electronics development. Research is ongoing in the Electronics Systems group at Linköping University targeting increased design automation for analog circuits. An optimization-based approach to device sizing is being developed, and for this a good optimization method is needed which can find good solutions and meet the specification parameters. This report contains an evaluation of the optimization method Simulated Annealing. Many test runs have been made to find good control parameters, both for Adaptive Simulated Annealing (ASA) and for a standard Simulated Annealing method. The results are discussed and all the data are given in the appendices. A popular-science and a mathematical description of Simulated Annealing are given as well.
