  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

An Analysis of Overfitting in Particle Swarm Optimised Neural Networks

van Wyk, Andrich Benjamin January 2014 (has links)
The phenomenon of overfitting, where a feed-forward neural network (FFNN) over-trains on training data at the cost of generalisation accuracy, is known to be specific to the training algorithm used. This study investigates overfitting within the context of particle swarm optimised (PSO) FFNNs. Two of the most widely used PSO algorithms are compared in terms of FFNN accuracy and a description of the overfitting behaviour is established. Each of the PSO components is in turn investigated to determine its effect on FFNN overfitting. A study of the maximum velocity (Vmax) parameter is performed and it is found that smaller Vmax values are optimal for FFNN training. The analysis is extended to the inertia and acceleration coefficient parameters, where it is shown that specific interactions among the parameters have a dominant effect on the resultant FFNN accuracy and may be used to reduce overfitting. Further, the significant effect of the swarm size on network accuracy is shown, with a critical range identified for the swarm size for effective training. The study concludes with an investigation into the effect of the different activation functions. Given strong empirical evidence, a hypothesis is made that the gradient of the activation function significantly affects the convergence of the PSO. Lastly, the PSO is shown to be a very effective algorithm for the training of self-adaptive FFNNs, capable of learning from unscaled data. / Dissertation (MSc)--University of Pretoria, 2014. / tm2015 / Computer Science / MSc / Unrestricted
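As an editorial sketch of the mechanism the study varies (not code from the thesis), a single gbest-PSO update with Vmax clamping might look like this; the parameter values are common defaults, not the thesis's settings:

```python
import random

def pso_velocity_step(v, x, pbest, gbest, w=0.729, c1=1.494, c2=1.494, vmax=0.5):
    """One velocity/position update for a single particle dimension,
    with the velocity clamped to [-vmax, vmax] before the position move."""
    r1, r2 = random.random(), random.random()
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v_new = max(-vmax, min(vmax, v_new))   # Vmax clamping
    return v_new, x + v_new

v, x = pso_velocity_step(v=0.0, x=1.0, pbest=0.2, gbest=0.0)
```

Smaller `vmax` values restrict how far a particle can jump per step, which is the knob the study's Vmax experiments turn.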
132

Particle swarm optimization : empirical and theoretical stability analysis

Cleghorn, Christopher Wesley January 2017 (has links)
Particle swarm optimization (PSO) is a well-known stochastic population-based search algorithm, originally developed by Kennedy and Eberhart in 1995. Given PSO's success at solving numerous real world problems, a large number of PSO variants have been proposed. However, unlike the original PSO, most variants currently have little to no existing theoretical results. This lack of a theoretical underpinning makes it difficult, if not impossible, for practitioners to make informed decisions about the algorithmic setup. This thesis focuses on the criteria needed for particle stability, often referred to as particle convergence. While new PSO variants are proposed at a rapid rate, the theoretical analysis often takes substantially longer to emerge, if it emerges at all. In some situations the theoretical analysis is not performed because the mathematical models needed to accurately represent the PSO variants become too complex or contain intractable subproblems. It is for this reason that a rapid means of determining approximate stability criteria, one that does not require complex mathematical modeling, is needed. This thesis presents an empirical approach for determining the stability criteria of PSO variants. The approach is designed to provide a real world depiction of particle stability by imposing no simplifying assumptions on the underlying PSO variant being investigated. It is utilized to identify a number of previously unknown stability criteria. This thesis also contains novel theoretical derivations of the stability criteria for both the fully informed PSO and the unified PSO. The theoretical models are then empirically validated, utilizing the aforementioned empirical approach in an assumption-free context. The thesis closes with a substantial theoretical extension of current PSO stability research.
It is common practice within existing theoretical PSO research to assume that, in the simplest case, the personal and neighborhood best positions are stagnant. In this thesis, however, stability criteria are derived under a mathematical model whereby the personal best and neighborhood best positions are treated as convergent sequences of random variables. It is also proved that, in order to derive stability criteria, no weaker assumption on the behavior of the personal and neighborhood best positions can be made. The theoretical extension presented caters for a large range of PSO variants. / Thesis (PhD)--University of Pretoria, 2017. / Computer Science / PhD / Unrestricted
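For context, the kind of stability region such analyses produce can be illustrated with Poli's widely cited order-2 criterion for the canonical PSO (a sketch for illustration only; the thesis derives criteria for other variants and under weaker assumptions):

```python
def order2_stable(w, c1, c2):
    """Poli's order-2 stability region for the canonical PSO (sketch):
    |w| < 1 and 0 < c1 + c2 < 24 (1 - w^2) / (7 - 5 w)."""
    return abs(w) < 1 and 0 < c1 + c2 < 24 * (1 - w**2) / (7 - 5 * w)

# The popular constricted parameter set lies inside the region,
# while an aggressive setup like w=0.9, c1=c2=2.5 lies outside.
inside = order2_stable(0.7298, 1.49618, 1.49618)
outside = order2_stable(0.9, 2.5, 2.5)
```

An empirical stability study, as in the thesis, effectively recovers such a boundary by observing particle behavior directly instead of solving the recurrence model.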
133

A Hierarchical Particle Swarm Optimizer and Its Adaptive Variant

Janson, Stefan, Middendorf, Martin 05 February 2019 (has links)
A hierarchical version of the particle swarm optimization (PSO) metaheuristic is introduced in this paper. In the new method, called H-PSO, the particles are arranged in a dynamic hierarchy that is used to define a neighborhood structure. Depending on the quality of their so-far best-found solution, the particles move up or down the hierarchy. This gives good particles that move up in the hierarchy a larger influence on the swarm. We introduce a variant of H-PSO in which the shape of the hierarchy is dynamically adapted during the execution of the algorithm. Another variant assigns different behavior to the individual particles with respect to their level in the hierarchy. H-PSO and its variants are tested on a commonly used set of optimization functions and are compared to PSO using different standard neighborhood schemes.
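A simplified sketch of the hierarchy update (illustrative only, not the paper's exact procedure, which compares each particle with its parent in a breadth-first pass):

```python
def hpso_reorder(fitness, hierarchy):
    """One simplified top-down pass of an H-PSO-style hierarchy update:
    a child swaps places with its parent when its personal-best cost is
    lower. `hierarchy` is a binary tree stored as a list of particle
    indices; `fitness` maps particle index -> personal-best cost."""
    for parent in range(len(hierarchy) // 2):
        for child in (2 * parent + 1, 2 * parent + 2):
            if child < len(hierarchy) and \
               fitness[hierarchy[child]] < fitness[hierarchy[parent]]:
                hierarchy[parent], hierarchy[child] = \
                    hierarchy[child], hierarchy[parent]
    return hierarchy

# Particle 1 has the best (lowest) cost, so it rises to the root.
order = hpso_reorder({0: 5.0, 1: 1.0, 2: 3.0}, [0, 1, 2])  # → [1, 0, 2]
```

Particles nearer the root then act as neighborhood bests for their subtrees, which is what gives good particles a larger influence on the swarm.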
134

Multi-Objective Optimization of Plug-In HEV Powertrain Using Modified Particle Swarm Optimization

Parkar, Omkar 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Increasing awareness of environmental conservation is leading the automotive industry toward the adoption of alternatively fueled vehicles. Electric, fuel-cell, and hybrid-electric vehicles are the focus of this research area, with efficient utilization of the vehicle powertrain as the first step. Energy and power management system control strategies play a vital role in improving the efficiency of any hybrid propulsion system. However, these control strategies are sensitive to the dynamics of the powertrain components used in the given system. A kinematic mathematical model for a Plug-in Hybrid Electric Vehicle (PHEV) has been developed in this study and is further optimized by determining the optimal power management strategy for minimal fuel consumption as well as NOx emissions while executing a set drive cycle. A multi-objective optimization using a weighted-sum formulation is needed in order to observe the trade-off between the optimized objectives. The Particle Swarm Optimization (PSO) algorithm has been used in this research to determine the trade-off curve between fuel and NOx. In performing these optimizations, the control signal, consisting of engine speed and a reference battery SOC trajectory for a 2-hour cycle, is used as the controllable decision parameter supplied directly by the optimizer. Each element of the control signal was split into 50 distinct points representing the full 2 hours, giving slightly less than 2.5 minutes per point; the values used in the model are interpolated between the points for each time step. With the control signal consisting of two distinct 50-element time-variant signals, speed and SOC trajectory, a multidimensional problem was formulated for the optimizer. Novel approaches to balance optimizer exploration and convergence, as well as seeding techniques, are suggested to solve the optimal control problem.
The optimization involved individual runs at 5 different weight levels, with the resulting cost populations compiled together and visualized through the development of a Pareto front. The results of the simulations and optimization are presented, covering the performance of individual components of the PHEV powertrain as well as the optimized PMS strategy to follow for a given drive cycle. Observations of the trade-off in the multi-objective optimizations are discussed.
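The weighted-sum scalarization described above can be sketched as follows; the function and values are illustrative, not taken from the thesis, and the objectives are assumed pre-normalized to comparable scales:

```python
def weighted_cost(fuel, nox, alpha):
    """Weighted-sum scalarization of two objectives (sketch).
    alpha in [0, 1] trades fuel consumption against NOx emissions;
    sweeping alpha over several levels traces out the Pareto front."""
    return alpha * fuel + (1 - alpha) * nox

# Five weight levels, mirroring the study's five optimization runs:
costs = [weighted_cost(0.4, 0.7, a) for a in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

Each weight level yields one optimized operating point; plotting the resulting (fuel, NOx) pairs gives the trade-off curve discussed in the abstract.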
135

Evolutionary Optimization Algorithms for Nonlinear Systems

Raj, Ashish 01 May 2013 (has links)
Many real world problems in science and engineering can be treated as optimization problems with multiple objectives or criteria. The demand for fast and robust stochastic algorithms to cater to these optimization needs is very high. When the cost function for the problem is nonlinear and non-differentiable, direct search approaches are the methods of choice. Many such approaches use the greedy criterion, which accepts a new parameter vector only if it reduces the value of the cost function. This can result in fast convergence, but also in misconvergence, where the vectors get trapped in local minima. Inherently parallel search techniques have more exploratory power. These techniques discourage premature convergence; consequently, some candidate solution vectors do not converge to the global minimum solution at any point in time, but instead constantly explore the whole search space for other possible solutions. In this thesis, we concentrate on benchmarking three popular algorithms: the Real-valued Genetic Algorithm (RGA), Particle Swarm Optimization (PSO), and Differential Evolution (DE). The DE algorithm is found to outperform the other algorithms in fast convergence and in attaining low cost-function values. The DE algorithm is selected and used to build a model for forecasting auroral oval boundaries during a solar storm event. This is compared against an established model by Feldstein and Starkov. As an extended study, the ability of DE is further put to the test in another nonlinear system study, by using it to analyze and design phase-locked loop circuits. In particular, the algorithm is used to obtain circuit parameters when frequency steps are applied at the input at particular instances.
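As a hedged illustration of the DE scheme benchmarked here, one common variant (DE/rand/1/bin) constructs a trial vector as below; under the greedy criterion described above, the trial replaces the target only if it lowers the cost:

```python
import random

def de_rand_1_bin(pop, i, F=0.8, CR=0.9):
    """One DE/rand/1/bin trial-vector construction (sketch).
    pop: list of parameter vectors (lists of floats); i: target index;
    F: differential weight; CR: crossover rate."""
    a, b, c = random.sample([j for j in range(len(pop)) if j != i], 3)
    dims = len(pop[i])
    j_rand = random.randrange(dims)   # guarantee at least one mutated gene
    trial = []
    for j in range(dims):
        if random.random() < CR or j == j_rand:
            trial.append(pop[a][j] + F * (pop[b][j] - pop[c][j]))  # mutate
        else:
            trial.append(pop[i][j])                                # inherit
    return trial

pop = [[0.0, 0.0], [1.0, 2.0], [2.0, 1.0], [3.0, 3.0]]
trial = de_rand_1_bin(pop, 0)
```

The three donor vectors are drawn at random from the rest of the population, which is what gives DE its exploratory, population-parallel character.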
136

Novel Semi-Supervised Learning Models to Balance Data Inclusivity and Usability in Healthcare Applications

January 2019 (has links)
abstract: Semi-supervised learning (SSL) is a sub-field of statistical machine learning that is useful for problems with only a few labeled instances carrying predictor (X) and target (Y) information, and an abundance of unlabeled instances carrying only predictor (X) information. SSL harnesses the target information available in the limited labeled data, as well as the information in the abundant unlabeled data, to build strong predictive models. However, not all the included information is useful. For example, some features may correspond to noise, and including them will hurt the predictive model's performance. Additionally, some instances may not be as relevant to model building, and their inclusion will increase training time and potentially hurt model performance. The objective of this research is to develop novel SSL models that balance data inclusivity and usability. My dissertation research focuses on applications of SSL in healthcare, driven by problems in brain cancer radiomics, migraine imaging, and Parkinson’s Disease telemonitoring. The first topic introduces an integration of machine learning (ML) and a mechanistic model (PI) to develop an SSL model for predicting the cell density of glioblastoma brain cancer using multi-parametric medical images. The proposed ML-PI hybrid model integrates imaging information from unbiopsied regions of the brain as well as underlying biological knowledge from the mechanistic model to predict spatial tumor density in the brain. The second topic develops a multi-modality imaging-based diagnostic decision support system (MMI-DDS). MMI-DDS consists of modality-wise principal components analysis to incorporate imaging features at different aggregation levels (e.g., voxel-wise, connectivity-based, etc.), a constrained particle swarm optimization (cPSO) feature selection algorithm, and a clinical utility engine that utilizes inverse operators on chosen principal components for white-box classification models.
The final topic develops a new SSL regression model with integrated feature and instance selection called s2SSL (with “s2” referring to selection in two different ways: feature and instance). s2SSL integrates cPSO feature selection and graph-based instance selection to simultaneously choose the optimal features and instances and build accurate models for continuous prediction. s2SSL was applied to smartphone-based telemonitoring of Parkinson’s Disease patients. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2019
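As a sketch of the general technique underlying PSO-based feature selection (not the dissertation's exact cPSO algorithm, which adds constraints on top), a binary-PSO update for a 0/1 feature mask looks like this:

```python
import math
import random

def bpso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One binary-PSO update for a feature-selection mask (sketch).
    x, pbest, gbest are 0/1 lists (1 = feature selected); v holds
    real-valued velocities mapped through a sigmoid to bit-flip
    probabilities."""
    new_x, new_v = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        vj = w * v[j] + c1 * r1 * (pbest[j] - x[j]) + c2 * r2 * (gbest[j] - x[j])
        prob = 1.0 / (1.0 + math.exp(-vj))      # sigmoid transfer function
        new_x.append(1 if random.random() < prob else 0)
        new_v.append(vj)
    return new_x, new_v

mask, vel = bpso_step([0, 1, 0], [0.0, 0.0, 0.0], [1, 1, 0], [1, 0, 1])
```

Each particle's mask is scored by the downstream model's accuracy (plus any constraints), and the swarm converges toward masks that keep only useful features, which is the inclusivity/usability balance the dissertation targets.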
137

Real-time estimation of state-of-charge using particle swarm optimization on the electro-chemical model of a single cell

Chandra Shekar, Arun 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Accurate estimation of State of Charge (SOC) is crucial. With the ever-increasing usage of batteries, especially in safety critical applications, the requirement for accurate SOC estimation is paramount. Most current methods of SOC estimation rely on data collected and calibrated offline, which can lead to inaccuracies as the battery ages or under different operating conditions. This work explores the real-time estimation and optimization of SOC by applying Particle Swarm Optimization (PSO) to a detailed electrochemical model of a single cell. The goal is to develop a single cell model and PSO algorithm that can run on an embedded device with reasonable CPU and memory utilization while still estimating SOC with acceptable accuracy. The scope is to demonstrate accurate estimation of SOC for 1C charge and discharge for both healthy and aged cells.
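A minimal sketch of the idea, assuming a hypothetical cell-model callable `model_v` mapping SOC to terminal voltage (a toy stand-in for the thesis's electrochemical model):

```python
import random

def estimate_soc(measured_v, model_v, n_particles=10, iters=50):
    """Minimal 1-D PSO that searches for the SOC in [0, 1] whose
    modeled terminal voltage best matches a measurement (sketch)."""
    xs = [random.random() for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]
    cost = lambda s: (model_v(s) - measured_v) ** 2
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for k in range(n_particles):
            r1, r2 = random.random(), random.random()
            vs[k] = (0.729 * vs[k] + 1.49 * r1 * (pbest[k] - xs[k])
                     + 1.49 * r2 * (gbest - xs[k]))
            xs[k] = min(1.0, max(0.0, xs[k] + vs[k]))  # keep SOC in [0, 1]
            if cost(xs[k]) < cost(pbest[k]):
                pbest[k] = xs[k]
        gbest = min(pbest, key=cost)
    return gbest

# Toy linear cell model V = 3.0 + 1.2*SOC; 3.6 V corresponds to SOC = 0.5.
soc = estimate_soc(3.6, lambda s: 3.0 + 1.2 * s)
```

In the thesis's setting the same search runs in real time against the electrochemical model, so the swarm size and iteration budget must fit the embedded device's CPU and memory limits.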
138

3-D Scene Reconstruction for Passive Ranging Using Depth from Defocus and Deep Learning

Emerson, David R. 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Depth estimation is becoming increasingly important in computer vision. The requirement for autonomous systems to gauge their surroundings is of the utmost importance in order to avoid obstacles, preventing damage to the system itself and/or other systems or people. Depth measuring/estimation systems that use multiple cameras from multiple views can be expensive and extremely complex. As these autonomous systems decrease in size and available power, the supporting sensors required to estimate depth must also shrink in size and power consumption. This research concentrates on a single passive method known as Depth from Defocus (DfD), which uses an in-focus and an out-of-focus image to infer the depth of objects in a scene. The major contribution of this research is the introduction of a new Deep Learning (DL) architecture that processes the in-focus and out-of-focus images to produce a depth map for the scene, improving both speed and performance over a range of lighting conditions. Compared to the previous state-of-the-art multi-label graph cuts algorithms applied to the synthetically blurred dataset, the DfD-Net produced a 34.30% improvement in the average Normalized Root Mean Square Error (NRMSE). Similarly, the DfD-Net architecture produced a 76.69% improvement in the average Normalized Mean Absolute Error (NMAE). Only the Structural Similarity Index (SSIM) had a small average decrease of 2.68% when compared to the graph cuts algorithm. This slight reduction in the SSIM value is a result of the SSIM metric penalizing images that appear to be noisy. In some instances the DfD-Net output is mottled, which is interpreted as noise by the SSIM metric. This research introduces two methods of deep learning architecture optimization. The first method employs a variant of the Particle Swarm Optimization (PSO) algorithm to improve the performance of the DfD-Net architecture.
The PSO algorithm was able to find a combination of the number of convolutional filters, the size of the filters, the activation layers used, the use of a batch normalization layer between filters, and the size of the input image used during training, producing a network architecture with an average NRMSE approximately 6.25% better than the baseline DfD-Net average NRMSE. This optimized architecture also resulted in an average NMAE that was 5.25% better than the baseline DfD-Net average NMAE. Only the SSIM metric did not see a gain in performance, dropping by 0.26% when compared to the baseline DfD-Net average SSIM value. The second method illustrates the use of a Self-Organizing Map clustering method to reduce the number of convolutional filters in the DfD-Net, reducing the overall run time of the architecture while still retaining the network performance exhibited prior to the reduction. This method produces a reduced DfD-Net architecture with a run-time decrease of between 14.91% and 44.85%, depending on the hardware running the network. The final reduced DfD-Net had an overall decrease in the average NRMSE value of approximately 3.4% when compared to the baseline, unaltered DfD-Net mean NRMSE value. The NMAE and SSIM results for the reduced architecture were 0.65% and 0.13% below the baseline results, respectively. This illustrates that reducing the network architecture's complexity does not necessarily degrade its performance. Finally, this research introduces a new, real world dataset captured using a camera with a voltage-controlled microfluidic lens for the visual data and a 2-D scanning LIDAR for the ground truth data. The visual data consist of images captured at seven different exposure times and 17 discrete voltage steps per exposure time.
The objects in this dataset were divided into four repeating scene patterns in which the same surfaces were used. These scenes were located between 1.5 and 2.5 meters from the camera and LIDAR, so that any of the deep learning algorithms tested would see the same texture at multiple depths and multiple blurs. The DfD-Net architecture was employed in two separate tests using the real world dataset. The first test synthetically blurred the real world dataset and assessed the performance of the DfD-Net trained on the Middlebury dataset. For the scenes between 1.5 and 2.2 meters from the camera, the DfD-Net trained on the Middlebury dataset produced average NRMSE, NMAE and SSIM values on the real world dataset that exceeded its test results on the Middlebury test set. The second test was trained and tested solely on the real world dataset. Analysis of the camera and lens behavior led to an optimal lens voltage step configuration of 141 and 129. Using this configuration, training the DfD-Net resulted in an average NRMSE, NMAE and SSIM of 0.0660, 0.0517 and 0.8028, with standard deviations of 0.0173, 0.0186 and 0.0641 respectively.
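The NRMSE and NMAE metrics reported above can be sketched as follows, assuming normalization by the ground-truth range (one common convention; the thesis may normalize differently):

```python
def nrmse(pred, truth):
    """Normalized root-mean-square error over paired depth values,
    normalized by the range of the ground truth (sketch)."""
    n = len(pred)
    rmse = (sum((p - t) ** 2 for p, t in zip(pred, truth)) / n) ** 0.5
    return rmse / (max(truth) - min(truth))

def nmae(pred, truth):
    """Normalized mean absolute error, same normalization convention."""
    n = len(pred)
    mae = sum(abs(p - t) for p, t in zip(pred, truth)) / n
    return mae / (max(truth) - min(truth))
```

Lower is better for both, which is why the reported percentage "improvements" are reductions in these values.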
139

Applying Different Wide-Area Response-Based Controls to Different Contingencies in Power Systems

Iranmanesh, Shahrzad 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Electrical disturbances in the power system threaten the stability of the system. The first step is to detect these electrical disturbances or events. The next step is to apply a proper control to the system to reduce the consequences of the disturbances. One-shot control is one of the effective methods for stabilizing the system after such events. In this method, a suitable amount of load is added to or shed from the electrical system. Determining the amount of load, and the locations for shedding, is crucial. Moreover, some control combinations are more effective for some events and less effective for others. Therefore, this project is completed in two sections: first, finding effective control combinations; second, finding an algorithm for applying different control combinations to different contingencies in real time. To find effective control combinations, sensitivity analysis is employed to locate the most effective loads in the system. Gradient descent and the PSO algorithm are then used to find the control combination commands. In the next step, a pattern recognition method is used to apply the appropriate control combination for every event, with the decision tree selected as the pattern recognition method. The three most effective control combinations found by sensitivity analysis and the PSO method are used in the remainder of this study. A decision tree is trained for each of the three control combinations, and their outputs are combined into an algorithm for selecting the best control in real time. Finally, the algorithm is evaluated using a test set of contingencies. The final results reveal a 30% improvement in comparison to previous studies.
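A minimal sketch of combining per-control classifiers into a real-time selector; the names and predicates below are hypothetical, and the actual study uses trained decision trees in place of the lambdas:

```python
def select_control(event_features, classifiers, fallback=None):
    """Combine per-control classifiers into a real-time selector (sketch).
    `classifiers` is an ordered list of (name, predict) pairs, each
    predict returning True when its control combination is expected to
    stabilize the event; the first positive prediction wins."""
    for name, predict in classifiers:
        if predict(event_features):
            return name
    return fallback

# Hypothetical feature thresholds standing in for trained decision trees:
classifiers = [
    ("combo_A", lambda f: f["overload"] > 0.8),
    ("combo_B", lambda f: f["freq_dev"] > 0.5),
    ("combo_C", lambda f: True),  # catch-all combination
]
choice = select_control({"overload": 0.3, "freq_dev": 0.9}, classifiers)
```

Ordering the classifiers by the effectiveness of their control combinations makes the selector prefer the strongest applicable control for each contingency.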
140

Autonomous Mission Planning for Multi-Terrain Solar-Powered Unmanned Ground Vehicles

Chen, Fei 30 July 2019 (has links)
No description available.
