1

Introducing Stochastic Time Delays in Gradient Optimization as a Method for Complex Loss Surface Navigation in High-Dimensional Settings

Manner, Eric Benson 24 April 2023 (has links) (PDF)
Time delays are an inherent part of real-world systems. Besides the apparent slowing of the system, these time delays often cause destabilization in otherwise stable systems and, perhaps even more unexpectedly, can stabilize an unstable system. Here, we propose the Stochastic Time-Delayed Adaptation as a method for improving optimization on certain high-dimensional surfaces; it simply wraps a known optimizer, such as the Adam optimizer, and adds a variety of time delays. We begin by exploring time delays in certain gradient-based optimization methods and their effect on the optimizer's convergence properties. These optimizers include the standard gradient descent method and the more recent Adam optimizer, the latter commonly used in neural networks for deep learning. To describe the effect of time delays on these methods, we use the theory of intrinsic stability. It has been shown that a system possessing intrinsic stability (a stronger form of global stability) maintains its stability when subject to any time delays, e.g., constant, periodic, or stochastic. In feasible cases, we find conditions under which the optimization method adapted with time delays is intrinsically stable and therefore converges to the system's minimal value. Finally, we examine the optimizer's performance using common performance metrics: the number of steps the algorithm takes to converge and the final loss value relative to the global minimum of the loss function. We test these outcomes using various adaptations of the Adam optimizer on multiple common test optimization functions designed to be difficult for vanilla optimizers. We show that the Stochastic Time-Delayed Adaptation can greatly improve an optimizer's ability to find a global minimum of a complex loss function.
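The wrapper idea described in this abstract can be illustrated with a short sketch: with some probability, the gradient is evaluated at a randomly chosen past iterate rather than the current one. The update rule, maximum delay, delay probability, and test function below are illustrative assumptions, not the thesis's exact formulation.

```python
# Minimal sketch of a stochastic time-delayed wrapper around plain gradient
# descent. All parameter choices here are assumptions for illustration only.
import numpy as np

def delayed_gradient_descent(grad, x0, lr=1e-3, steps=20000,
                             max_delay=3, delay_prob=0.5, seed=0):
    """Gradient descent where, with probability `delay_prob`, the gradient
    is evaluated at a past iterate (at most `max_delay` steps back)."""
    rng = np.random.default_rng(seed)
    history = [np.asarray(x0, dtype=float)]
    x = history[0].copy()
    for _ in range(steps):
        if rng.random() < delay_prob and len(history) > 1:
            # pick a random delay, bounded by the available history
            d = rng.integers(1, min(max_delay, len(history) - 1) + 1)
            x_eval = history[-1 - d]
        else:
            x_eval = x
        x = x - lr * grad(x_eval)
        history.append(x.copy())
    return x

# Example: the Rosenbrock function, a common test surface for optimizers.
def rosenbrock_grad(p):
    x, y = p
    return np.array([-2 * (1 - x) - 400 * x * (y - x**2),
                     200 * (y - x**2)])

x_min = delayed_gradient_descent(rosenbrock_grad, [-1.5, 2.0])
print(x_min)  # should approach the minimum at (1, 1)
```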
2

Book retrieval system: Developing a service for efficient library book retrieval using particle swarm optimization

Woods, Adam January 2024 (has links)
Traditional methods for locating books and resources in libraries often entail browsing catalogs or manual searching, which is time-consuming and inefficient. This thesis investigates the potential of automated digital services to streamline this process by utilizing Wi-Fi signal data for precise indoor localization. Central to this study is the development of a model that employs Wi-Fi signal strength (RSSI) and round-trip time (RTT) to estimate the locations of library users with arm-length accuracy. This thesis aims to enhance the accuracy of location estimation by exploring the complex, nonlinear relationship between Received Signal Strength Indicator (RSSI) and Round-Trip Time (RTT) within signal fingerprints. The model was developed using an artificial neural network (ANN) to capture the relationship between RSSI and RTT. In addition, this thesis introduces and evaluates the performance of a novel variant of the Particle Swarm Optimization (PSO) algorithm, named Randomized Particle Swarm Optimization (RPSO). By incorporating randomness into the conventional PSO framework, the RPSO algorithm aims to address the limitations of standard PSO, potentially offering more accurate and reliable location estimations. The PSO algorithms, including RPSO, were integrated into the training process of the ANN to optimize the network's weights and biases through direct optimization, as well as to tune the hyperparameters of the ANN's built-in optimizer. The findings suggest that optimizing the hyperparameters yields better results than direct optimization of the weights and biases. However, RPSO did not significantly enhance performance compared to standard PSO in this context, indicating the need for further investigation into its application and potential benefits in complex optimization scenarios.
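As a rough illustration of the second use case (tuning the hyperparameters of the ANN's built-in optimizer), the sketch below runs a standard PSO over a single hyperparameter. The objective function is a stand-in, and the specific randomization scheme that distinguishes RPSO from standard PSO is not reproduced here, since the abstract does not specify it.

```python
# Minimal sketch of standard particle swarm optimization used as a
# hyperparameter tuner. Objective, bounds, and PSO constants are assumptions.
import numpy as np

def pso_minimize(objective, bounds, n_particles=20, iters=50,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    pos = rng.uniform(lo, hi, size=(n_particles, lo.size))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    gbest_val = pbest_val.min()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # velocity update: inertia + pull toward personal and global bests
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if vals.min() < gbest_val:
            gbest_val = vals.min()
            gbest = pos[np.argmin(vals)].copy()
    return gbest, gbest_val

# Stand-in objective: validation loss as a function of log10(learning rate).
# In the thesis this would be the localization error of the trained ANN.
def val_loss(log_lr):
    return (log_lr[0] + 3.0) ** 2 + 0.1  # pretend the best lr is about 1e-3

best, loss = pso_minimize(val_loss, bounds=([-6.0], [0.0]))
print(best, loss)
```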
