161 |
Inteligence skupiny / Swarm Intelligence
Winklerová, Zdenka January 2015 (has links)
The intention of the dissertation is applied research into collective (group, swarm) intelligence. To demonstrate the applicability of collective intelligence, the Particle Swarm Optimization (PSO) algorithm has been studied, in which the problem of collective intelligence is transferred to mathematical optimization: the particle swarm searches for a global optimum within a defined problem space, and the search is controlled by a pre-defined objective function representing the solved problem. A new search strategy has been designed and experimentally tested in which the particles continuously adjust their behaviour to the characteristics of the problem space, and it has been experimentally discovered how the influence of the objective function representing the solved problem manifests itself in the behaviour of the particles. The results of experiments with the proposed search strategy have been compared to those with the reference version of the PSO algorithm. The experiments have shown that the classical reference solution, where the only condition is a stable trajectory along which a particle moves through the problem space and where the influence of the controlling objective function is ultimately eliminated, may fail, and that the dynamic stability of a particle's trajectory is by itself an indicator neither of the search ability nor of the convergence of the algorithm to the true global solution of the solved problem. A search strategy has been proposed in which the PSO algorithm regulates its stability by continuously adjusting the particles' behaviour to the characteristics of the problem space. The proposed algorithm influenced the evolution of the search of the problem space such that the probability of a successful problem solution increased.
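The entry does not reproduce the algorithm, but for orientation, here is a minimal sketch of the reference PSO that the dissertation's strategy departs from. The parameter values (inertia w, acceleration coefficients c1 and c2) are common textbook defaults, not the dissertation's settings.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200,
        w=0.72, c1=1.49, c2=1.49, seed=0):
    """Canonical (reference) PSO minimizing `objective` over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()        # swarm's best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull toward pbest + social pull toward gbest
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Example: minimize the sphere function over [-10, 10]^5.
best, best_f = pso(lambda p: float(np.sum(p * p)),
                   (np.full(5, -10.0), np.full(5, 10.0)))
print(best_f)  # should be close to 0
```

In this reference scheme, stability is governed by the fixed w, c1, and c2; the dissertation's point is that such fixed-coefficient trajectory stability alone does not guarantee convergence to the true global optimum.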
|
162 |
Database Tuning using Evolutionary and Search Algorithms
Raneblad, Erica January 2023 (has links)
Achieving optimal performance of a database can be crucial for many businesses, and tuning its configuration parameters is a necessary step in this process. Many existing tuning methods involve complex machine learning algorithms and require large amounts of historical data from the system being tuned. However, training machine learning models can be problematic if a considerable amount of computational resources and data storage is required. This paper investigates the possibility of using less complex search or evolutionary algorithms to tune database configuration parameters, and presents a framework that employs Hill Climbing and Particle Swarm Optimization. The performance of the algorithms is tested on a PostgreSQL database using read-only workloads. Particle Swarm Optimization displayed the largest improvement in query response time, improving it by 26.09% over the configuration parameters' default values. Given the improvement shown by Particle Swarm Optimization, evolutionary algorithms may be promising in the field of database tuning.
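The abstract does not include the framework itself; as an illustration of the simpler of its two algorithms, a hill-climbing loop over numeric configuration parameters might look like the following sketch. The parameter names, ranges, and the benchmark callback are placeholders, not the thesis's actual setup.

```python
import random

# Hypothetical numeric PostgreSQL parameters with assumed ranges (MB).
PARAMS = {"shared_buffers": (128, 8192), "work_mem": (1, 512)}

def hill_climb(benchmark, n_iter=50, step=0.1, seed=0):
    """Greedy hill climbing: accept a neighbouring configuration only if
    it lowers the response time returned by `benchmark(config)`."""
    rng = random.Random(seed)
    config = {k: (lo + hi) // 2 for k, (lo, hi) in PARAMS.items()}
    best_time = benchmark(config)
    for _ in range(n_iter):
        neighbour = dict(config)
        k = rng.choice(list(PARAMS))
        lo, hi = PARAMS[k]
        delta = max(1, int((hi - lo) * step))
        neighbour[k] = min(hi, max(lo, config[k] + rng.choice((-delta, delta))))
        t = benchmark(neighbour)   # e.g. mean response time of a read-only workload
        if t < best_time:          # keep only improvements
            config, best_time = neighbour, t
    return config, best_time

# Usage (hypothetical): benchmark(config) would apply `config` to
# PostgreSQL, run the workload, and return the measured response time:
# tuned_config, tuned_time = hill_climb(benchmark)
```

PSO explores the same space with a population of candidates instead of a single point, which is one plausible reason it found the better configuration in the thesis's experiments.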
|
163 |
Detection And Classification Of Buried Radioactive Materials
Wei, Wei 09 December 2011 (has links)
This dissertation develops new approaches for detection and classification of buried radioactive materials. Different spectral transformation methods are proposed to effectively suppress noise and to better distinguish signal features in the transformed space. The contributions of this dissertation are as follows.
1) Propose an unsupervised method for buried radioactive material detection. In the experiments, the original Reed-Xiaoli (RX) algorithm performs similarly to the gross count (GC) method; however, the constrained energy minimization (CEM) method performs better when using feature vectors selected from the RX output. Thus, an unsupervised method is developed by combining the RX and CEM methods, which can efficiently suppress the background noise when applied to dimensionality-reduced data from principal component analysis (PCA).
2) Propose an approach for buried target detection and classification that applies spectral transformation followed by noise-adjusted PCA (NAPCA). To meet the requirements of practical survey mapping, we focus on the circumstance where the sensor dwell time is very short. The results show that spectral transformation can alleviate the effects of spectral noise variation and background clutter, while NAPCA, a better choice than PCA, can extract key features for the subsequent detection and classification.
3) Propose a particle swarm optimization (PSO)-based system to automatically determine the optimal partition for spectral transformation. Two PSOs are incorporated in the system, the outer one responsible for selecting the optimal number of bins and the inner one for the optimal bin widths. The experimental results demonstrate that using variable bin widths is better than a fixed bin width, and that PSO provides better results than the traditional Powell's method.
4) Develop parallel implementation schemes for the PSO-based spectral partition algorithm. Both cluster and graphics processing unit (GPU) implementations are designed. The computational burden of the serial version has been greatly reduced. The experimental results also show that the GPU algorithm achieves a speedup similar to that of the cluster-based algorithm.
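The RX and CEM detectors combined in contribution 1) are standard hyperspectral detection statistics; a minimal sketch of both under their usual formulations (not the dissertation's exact preprocessing) is:

```python
import numpy as np

def rx_scores(X):
    """Reed-Xiaoli anomaly detector: Mahalanobis distance of each sample
    (row of X) from the background mean and covariance."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    d = X - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)  # quadratic form per row

def cem_scores(X, target):
    """Constrained energy minimization: a linear filter that passes the
    signature `target` with unit gain while minimizing the average
    output energy over the data."""
    R_inv = np.linalg.pinv(X.T @ X / len(X))   # sample correlation matrix
    w = R_inv @ target / (target @ R_inv @ target)
    return X @ w

# Toy usage: 500 background spectra, one injected anomalous spectrum.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 16))
target = np.ones(16)
X[0] += 3 * target
print(rx_scores(X).argmax(), cem_scores(X, target).argmax())  # both: 0
```

In the dissertation's combined method, the RX output supplies the feature vectors that CEM then uses, rather than an externally given target signature.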
|
164 |
Magnetlagerauslegung unter Nutzung der Particle-Swarm-Optimization / Magnetic Bearing Design Using Particle Swarm Optimization
Neumann, Holger; Worlitz, Frank 20 October 2023 (has links)
Die Auslegung von Magnetlagern erfolgt in der Regel durch Fachpersonal in einem iterativen zeitaufwendigen Prozess. Dies stellt einen großen Kostenfaktor bei der Entwicklung magnetgelagerter Maschinen oder der Umrüstung konventionell gelagerter Maschinen dar. Aus diesem Grund wurde ein Softwarewerkzeug entwickelt, welches eine automatisierte, optimale Auslegung von Magnetlagern auf Basis der Particle-Swarm-Optimization ermöglicht. Dabei wurden auch Temperatureinflüsse berücksichtigt, sodass eine Auslegung von Magnetlagern für erweiterte Temperaturbereiche möglich ist (Hochtemperatur-Magnetlager). / The design of magnetic bearings is usually carried out by specialist personnel in an iterative, time-consuming process. This represents a major cost factor in the development of machines with magnetic bearings or the retrofitting of machines with conventional bearings. For this reason, a software tool was developed that enables an automated, optimal design of magnetic bearings based on Particle Swarm Optimization. Temperature influences were also taken into account, so that magnetic bearings can be designed for extended temperature ranges (high-temperature magnetic bearings).
|
165 |
Implantable Antennas For Wireless Data Telemetry: Design, Simulation, And Measurement Techniques
Karacolak, Tutku 11 December 2009 (has links)
Recent advances in electrical engineering have enabled the realization of small-size electrical systems for in-body applications. Today's hybrid implantable systems combine radio frequency and biosensor technologies. The biosensors are intended for wireless medical monitoring of physiological parameters such as glucose, pressure, and temperature. Enabling wireless communication with these biosensors is vital to allow continuous monitoring of patients over a distance via radio frequency (RF) technology. Because implantable antennas provide communication between the implanted device and the external environment, their efficient design is vital for overall system reliability. However, antenna design for implantable RF systems is quite a challenging problem due to antenna miniaturization, biocompatibility with the body's physiology, high losses in the tissue, impedance matching, and low-power requirements. This dissertation presents design and measurement techniques for implantable antennas for medical wireless telemetry. A robust stochastic evolutionary optimization method, particle swarm optimization (PSO), is combined with an in-house finite-element boundary-integral (FE-BI) electromagnetic simulation code to design optimum implantable antennas using topology optimization. The antenna geometric parameters are optimized by PSO, and a fitness function is computed by FE-BI simulations to evaluate the performance of each candidate solution. To validate the robustness of the algorithm, in-vitro and in-vivo measurement techniques are also introduced. To illustrate this design methodology, two implantable antennas for wireless telemetry applications are considered. First, a small-size implantable antenna for the human body covering both the medical implant communications service (MICS) band (402 MHz – 405 MHz) and the industrial, scientific, and medical (ISM) band (2.4 GHz – 2.48 GHz) is designed, followed by a dual-band implantable antenna, also operating in the MICS and ISM bands, for animal studies. In order to test the designed antennas in vitro, materials mimicking the electrical properties of human and rat skin are developed. The optimized antennas are fabricated and measured in these materials. Moreover, the second antenna is tested in vivo to observe the effects of live tissue on antenna performance. Simulation and measurement results for antenna parameters such as return loss and radiation pattern are given and discussed in detail. The development details of the tissue-mimicking materials are also presented.
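A sketch of the optimization loop the abstract describes: PSO proposes antenna geometric parameters, and an electromagnetic solver scores each candidate. The `fe_bi_simulate` surrogate below is a hypothetical stand-in for the in-house FE-BI code, which is not public; the point is only the fitness-wrapper pattern around a simulator.

```python
import numpy as np

def fe_bi_simulate(params):
    """Placeholder for the in-house FE-BI solver (not public): a real
    call would mesh the candidate geometry and solve the electromagnetic
    problem. Here a smooth dummy surrogate returns simulated return
    loss (dB) at the MICS and ISM band centres."""
    mics_s11 = -20.0 + 10.0 * float(np.abs(params - 0.3).sum())
    ism_s11 = -20.0 + 10.0 * float(np.abs(params - 0.7).sum())
    return mics_s11, ism_s11

def fitness(params):
    """Lower is better: penalize any band whose return loss sits above a
    -10 dB matching target, so dual-band designs are rewarded."""
    return sum(max(0.0, s11 + 10.0) for s11 in fe_bi_simulate(params))

# The fitness would be handed to a PSO driver such as the pso() sketch
# under entry 161, with box bounds on the geometric parameters:
#   best_geometry, _ = pso(fitness, (np.zeros(4), np.ones(4)))
```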
|
166 |
Particle Swarm Optimization Stability Analysis
Djaneye-Boundjou, Ouboti Seydou Eyanaa January 2013 (has links)
No description available.
|
167 |
A Novel Method for Accurate Evaluation of Size for Cylindrical Components
Ramaswami, Hemant 13 April 2010 (has links)
No description available.
|
168 |
Computational Intelligence and Data Mining Techniques Using the Fire Data Set
Storer, Jeremy J. 04 May 2016 (has links)
No description available.
|
169 |
Calibration and Estimation of Dog Teeth Positions in Synchronizers for Minimizing Noise and Wear during Gear Shifting / Kalibrering och uppskattning av positioner för dog-teeth i synkroniserare för minimering av buller och slitage under växling
Kong, Qianyin January 2020 (has links)
Electric motors are used increasingly widely in automotive applications to reduce vehicle emissions. Because of the decreased use of internal combustion engines, which used to be the main noise source, impacts from synchronizers can no longer be ignored during gear shifting: they not only cause noise and wear but also delay the completion of the shift. To minimize the impacts during gear shifting, a dog-teeth position sensor is required, but the high calculation frequency, driven by the high velocity of the synchronizer portions and the number of dog teeth, leads to a high cost. In this thesis, the gear-shifting transmission is modelled in order to study the process of gear shifting and engagement; the transmission model is expressed with electrical and dynamic formulations. To avoid impacts without a dog-teeth position sensor, this thesis proposes an estimation algorithm based on the transmission model that approves gear engagement only if the first and second portions of the synchronizer meet in the mating position without impacts. Two different learning algorithms, direct comparison and particle swarm optimization, are presented as well; they are used to calibrate a parameter in an off-time test at the end of the calibration line, the so-called relevant initial phase, which is then used in the real-time estimation. The transmission model is simulated in Simulink and the different algorithms run in MATLAB. All results are plotted and analyzed in the results chapter for evaluation from different aspects. The direct comparison algorithm has a simpler computational structure, but the number of required actuations is uncertain and the algorithm may fail to find a solution. Particle swarm optimization, in contrast, succeeds in calibrating the target parameter, and with a smaller error than the other algorithm. The number of actuations affects the accuracy of the solutions but not the failure rate. / Elektriska motorer används i allt större utsträckning inom fordonsindustrin för att minska utsläppen från fordon. Den minskade användningen av förbränningsmotorer, som tidigare varit den främsta bullerkällan, gör att kollisioner från synkroniserare inte kan bli ignorerade under växlingen. Dessa kollisioner orsakar inte bara buller och nötningar utan även fördröjer slutförandet av växlingen. För att minimera kollisioner under växlingen krävs det en positionssensor för dog-teeth, men den höga beräkningsfrekvensen leder till hög kostnad på grund av den höga hastigheten hos synkroniseringsdelarna samt antalet dog-teeth. I den här avhandlingen görs en modell av växellåda för att studera växlingsprocessen och kugghjulsingreppet. Transmissionsmodellen uttrycks med elektriska och dynamiska formuleringar. För att undvika kollisioner utan positionssensor för dog-teeth, föreslås det en uppskattningsalgoritm baserad på transmissionsmodellen för att godta kugghjulsingreppet om den första och andra delen av synkroniseraren är inkopplade i parningsläget utan kollisioner. Två olika inlärningsalgoritmer, direkt jämförelsemetoden och partikelsvärmoptimeringsmetoden presenteras även i avhandlingen. De används för att kalibrera en parameter i off-time test som en del av slutet av produktionslinjen. Denna parameter kallas för den relevanta initialfasen och används vid realtidsuppskattningen. Transmissionsmodellen är simulerad i Simulink och de olika algoritmerna exekveras i Matlab.
Alla resultat är plottade och analyserade för vidare utvärdering av olika aspekter i resultatkapitlet. Den direkta jämförelsealgoritmen har en enklare beräkningsstruktur, men mängden av nödvändig exekveringar är oklar för denna algoritm med en sannolikhet att det inte går att hitta lösningen. Däremot visar det sig att partikelsvärmoptimeringsmetoden lyckas med att kalibrera målparametern med dessutom ge mindre fel än den andra algoritmen. Antalet exekveringar påverkar lösningen samt noggrannheten hos lösningarna men påverkar inte själva felfrekvensen.
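A guess at the "direct comparison" calibration, hedged accordingly (the thesis's transmission model and signals are not given in the abstract): sweep candidate initial phases, simulate the model for each, and keep the phase whose predicted trace best matches the measurement.

```python
import numpy as np

def direct_comparison(measured, simulate, candidates):
    """Grid-search calibration: return the candidate initial phase whose
    simulated trace best matches (least squares) the measured trace."""
    errors = [float(np.sum((measured - simulate(p)) ** 2)) for p in candidates]
    return candidates[int(np.argmin(errors))]

# Hypothetical periodic dog-teeth position signal with unknown phase.
t = np.linspace(0.0, 1.0, 500)
simulate = lambda phase: np.sin(2 * np.pi * 5 * t + phase)
rng = np.random.default_rng(2)
measured = simulate(0.8) + 0.05 * rng.normal(size=t.size)  # true phase 0.8
phases = np.linspace(0.0, 2 * np.pi, 360)
print(direct_comparison(measured, simulate, phases))  # close to 0.8
```

The PSO variant would search the same phase space with a swarm instead of an exhaustive sweep, which is consistent with the smaller calibration error reported above.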
|
170 |
Online Techniques for Enhancing the Diagnosis of Digital Circuits
Tanwir, Sarmad 05 April 2018 (links)
The test process for semiconductor devices involves generation and application of test patterns, failure logging, and diagnosis. Traditionally, most of these activities cater for all possible faults without making any assumptions about the actual defects present in the circuit. As the size of circuits continues to increase (following Moore's Law), the size of the test sets is also increasing exponentially. It follows that the cost of testing has already surpassed that of design and fabrication.
The central idea of our work in this dissertation is that we can have substantial savings in the test cost if we bring the actual hardware under test inside the test process's various loops -- in particular: failure logging, diagnostic pattern generation and diagnosis.
Our first work, which we describe in Chapter 3, applies this idea to failure logging. We modify the existing failure logging process, which logs only the first few failure observations, into an intelligent one that logs failures on the basis of their usefulness for diagnosis. To enable the intelligent logging, we propose some lightweight metrics that can be computed in real time to grade the diagnosability of the observed failures. On the basis of this grading, we select the failures to be logged dynamically according to the actual defects in the circuit under test. This means that failures may be logged in a different manner for devices having different defects. This is in contrast with the existing method, which uses the same logging scheme for all failing devices. With the failing devices in the loop, we are able to optimize the failure log for each particular failing device, thereby improving the quality of subsequent diagnosis. In Chapter 4, we investigate the most lightweight of these metrics for failure log optimization for the diagnosis of multiple simultaneous faults and provide the results of our experiments.
Often, in spite of exploiting the entire potential of a test set, we might not be able to meet our diagnosis goals. This is because manufacturing tests are generated to meet fault coverage goals using as few tests as possible. In other words, they are optimized for 'detection count' and 'test time' and not for 'diagnosis'. In our second work, we leverage real-time measures of diagnosability, similar to the ones used for failure log optimization, to generate additional diagnostic patterns. These additional patterns help diagnose the existing failures beyond the power of the existing tests. Again, since the failing device is inside the test generation loop, we obtain highly specific tests for each failing device, optimized for its diagnosis. Using our proposed framework, we are able to diagnose devices better and faster than state-of-the-art industrial tools. Chapter 5 provides a detailed description of this method.
Our third work extends the hardware-in-the-loop framework to the diagnosis of scan chains. In this method, we define a different metric that is applicable to scan chain diagnosis. Again, this method provides additional tests that are specific to the diagnosis of the particular scan chain defects in individual devices. We achieve two further advantages in this approach compared to the online diagnostic pattern generator for logic diagnosis. Firstly, we do not need a known-good device for generating or knowing the good response, and secondly, besides the generation of additional tests, we also perform the final diagnosis online, i.e., on the tester during test application. We explain this in detail in Chapter 6.
In our research, we observe that feedback from a device is very useful for enhancing the quality of root-cause investigations of failures in its logic and test circuitry, i.e., the scan chains. This leads to the question of whether some primitive signals from the devices can be indicative of the fault coverage of the applied tests. In other words, can we estimate the fault coverage without the costly activities of fault modeling and simulation? By conducting further research into this problem, we found that entropy measurements at the circuit outputs do indeed have a high correlation with fault coverage and can be used to estimate it with good accuracy (illustrated below). We find that these predictions are accurate not only for random tests but also for high-coverage ATPG-generated tests. We present the details of our fourth contribution in Chapter 7. This work is of significant importance because it suggests that high-coverage tests can be learned by continuously applying random test patterns to the hardware and using the measured entropy as a reward function. We believe that this lays a foundation for further research into gate-level sequential test generation, which is currently intractable for industrial-scale circuits with existing techniques. / Ph. D. / When a new microchip fabrication technology is introduced, the manufacturing is far from perfect. A lot of work goes into updating the fabrication rules and microchip designs before we get a higher proportion of good, defect-free chips. With continued advancements in fabrication technology, this enhancement work has become increasingly difficult. This is primarily because of the sheer number of transistors that can be fabricated on a single chip, a number which has practically doubled every two years for the last four decades. The microchip testing process involves the application of stimuli and the checking of responses. These stimuli cater for a huge number of possible defects inside the chips. With the increase in the number of transistors, covering all possible defects is becoming practically impossible within business constraints.
This research proposes a solution to this problem: make the various activities in this process adaptive to the actual defects in the chips. The stimuli mentioned above now depend upon feedback from the chip. By utilizing this feedback, we have demonstrated significant improvements over state-of-the-art industrial tools in three primary activities, namely failure logging, scan testing, and scan chain diagnosis. These activities are essential steps in improving the proportion of good chips in the manufactured lot.
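As a rough illustration of the entropy measure from the fourth contribution (our reading of the abstract, not code from the dissertation): apply patterns to a circuit, record the output responses, and sum the per-output Shannon entropies; this aggregate is the cheap proxy that correlates with fault coverage.

```python
import numpy as np

def output_entropy(responses):
    """Shannon entropy (bits) of each output across applied patterns.
    `responses` is a (n_patterns, n_outputs) 0/1 array; the aggregate is
    used as an inexpensive proxy correlated with fault coverage."""
    p1 = responses.mean(axis=0)                 # P(output == 1)
    p = np.clip(np.stack([p1, 1 - p1]), 1e-12, 1.0)
    h = -(p * np.log2(p)).sum(axis=0)           # per-output entropy
    return h.sum()

# Toy circuit: parity and AND of 8 inputs under random patterns.
rng = np.random.default_rng(0)
patterns = rng.integers(0, 2, size=(1000, 8))
outputs = np.stack([patterns.sum(axis=1) % 2,        # parity: ~1 bit
                    patterns.min(axis=1)], axis=1)   # AND: ~0 bits
print(output_entropy(outputs))
```

An output stuck near a constant value (like the AND output here) contributes almost no entropy, matching the intuition that tests producing little output activity also detect few faults.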
|