  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
531

A designed experiment on the use of neural-network models in short-term hourly load forecasting /

Choueiki, Mohamad Hisham January 1995 (has links)
No description available.
532

Training of Neural Networks Using the Smooth Variable Structure Filter with Application to Fault Detection

Ahmed, Ryan 04 1900 (has links)
Artificial neural networks (ANNs) form an information processing paradigm inspired by the human brain. ANNs have been used in numerous applications to provide complex nonlinear input-output mappings, and they have the ability to adapt and learn from observed data. The training of neural networks is an important area of research: training techniques must provide high accuracy and fast convergence while avoiding premature convergence to local minima. In this thesis, a novel training method is proposed. The method is based on the relatively new Smooth Variable Structure Filter (SVSF) and is formulated for feedforward multilayer perceptron training. The SVSF is a state and parameter estimation method based on the sliding-mode concept that works in a predictor-corrector fashion, applying a discontinuous corrective term to estimate states and parameters. Its advantages include guaranteed stability, robustness, and fast convergence. The proposed training technique is applied to three real-world benchmark problems and to a fault detection application in a Ford diesel engine. The SVSF-based training technique shows excellent generalization capability and fast convergence. / Thesis / Master of Applied Science (MASc)
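As a rough illustration of the predictor-corrector structure this abstract describes, the sketch below applies an SVSF-style discontinuous (sign-based) corrective term to a single linear neuron. The exact gain form and the `conv_rate` parameter are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def svsf_step(w, x, d, conv_rate=0.5, e_prev=0.0):
    """One SVSF-style predictor-corrector update for a single linear neuron.

    Predictor: compute the a priori output with the current weights.
    Corrector: apply a discontinuous (sign-based) gain driven by the
    current and previous output errors, the hallmark of sliding-mode
    estimation.
    """
    # Predictor: a priori output and error
    y = np.dot(w, x)
    e = d - y
    # Corrector: discontinuous gain built from current and past errors
    gain = (abs(e) + conv_rate * abs(e_prev)) * np.sign(e)
    # Distribute the correction over the input direction
    w_new = w + gain * x / (np.dot(x, x) + 1e-12)
    return w_new, e
```

For this toy linear case a single step drives the output error to zero; on a real multilayer perceptron the correction would be propagated through each layer.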
533

Robust Electric Power Infrastructures. Response and Recovery during Catastrophic Failures

Bretas, Arturo Suman 06 December 2001 (has links)
This dissertation is a systematic study of artificial neural network (ANN) applications in power system restoration (PSR). PSR is based on analysis of available generation and the load to be restored. A literature review showed that conventional PSR methods, i.e. pre-established guidelines, the expert-system method, the mathematical-programming method, and the Petri-net method, have limitations such as the time needed to obtain a PSR plan. ANNs may help solve this problem by presenting a reliable PSR plan in less time. Based on actual and past experience, a PSR engine based on ANNs was proposed and developed. Data from the Iowa 162-bus power system was used in the implementation of the technique. Reactive and real power balance, fault location, phase angles across breakers, and intentional islanding were taken into account in the implementation. Constraints on PSR, such as thermal limits of transmission lines (TLs), stability issues, the number of TLs used in the restoration plan, and locked-out breakers, were used to create feasible PSR plans. To compare the time necessary to produce a PSR plan against another technique, a PSR method based on a breadth-first search algorithm was implemented. This algorithm was also used to create training and validation patterns for the ANNs used in the scheme. An algorithm to determine the switching sequence of the breakers was also implemented; it takes into account the highest-priority loads and the final system configuration generated by the ANN. The PSR technique implemented is composed of several pairs of ANNs, each assigned to an individual island of the system. The restoration of the system is done in parallel in each island; after each island is restored, the tie lines are closed. The results show that ANN-based schemes can be used in PSR, helping operators restore the system under the stressful conditions following a blackout. / Ph. D.
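The breadth-search planner mentioned in this abstract can be pictured as an ordinary breadth-first traversal of the bus/branch graph. This minimal sketch (bus names and adjacency structure are hypothetical, not taken from the Iowa 162-bus system) returns an energization order outward from a black-start source.

```python
from collections import deque

def restoration_order(adj, source):
    """Breadth-first search over the bus/branch graph: return the order
    in which buses can be energized outward from a black-start source.

    adj: dict mapping each bus to the buses reachable through one branch.
    """
    order, visited, queue = [], {source}, deque([source])
    while queue:
        bus = queue.popleft()
        order.append(bus)              # bus is now energized
        for nbr in adj.get(bus, []):
            if nbr not in visited:     # close the breaker to nbr once
                visited.add(nbr)
                queue.append(nbr)
    return order
```

A real planner would additionally check thermal limits, stability, and load priority before admitting each branch, as the dissertation describes.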
534

Examining Electronic Markets in Which Intelligent Agents Are Used for Comparison Shopping and Dynamic Pricing

Hertweck, Bryan M. 07 October 2005 (has links)
Electronic commerce markets are becoming increasingly popular forums for commerce. As those markets mature, buyers and sellers will both vigorously seek techniques to improve their performance. The Internet lends itself to the use of agents to work on behalf of buyers and sellers. Through simulation, this research examines different implementations of buyers' agents (shopbots) and sellers' agents (pricebots) so that buyers, sellers, and agent builders can capitalize on the evolution of e-commerce technologies. Internet markets bring price visibility to a level beyond what is observed in traditional brick-and-mortar markets. Additionally, an online seller is able to update prices quickly and cheaply. Due to these facts, there are many pricing strategies that sellers can implement via pricebot to react to their environments. The best strategy for a particular seller is dependent on characteristics of its marketplace. This research shows that the extent to which buyers are using shopbots is a critical driver of the success of pricing strategies. When measuring profitability, the interaction between shopbot usage and seller strategy is very strong - what works well at low shopbot usage levels may perform poorly at high levels. If a seller is evaluating strategies based on sales volume, the choice may change. Additionally, as markets evolve and competitors change strategies, the choice of most effective counterstrategies may evolve as well. Sellers need to clearly define their goals and thoroughly understand their marketplace before choosing a pricing strategy. Just as sellers have choices to make in implementing pricebots, buyers have decisions to make with shopbots. In addition to the factors described above, the types of shopbots in use can actually affect the relative performance of pricing strategies. 
This research also shows that varying shopbot implementations (specifically involving the use of a price memory component) can affect the prices that buyers ultimately pay - an especially important consideration for high-volume buyers. Modern technology permits software agents to employ artificial intelligence. This work demonstrates the potential of neural networks as a tool for pricebots. As discussed above, a seller's best strategy option can change as the behavior of the competition changes. Simulation can be used to evaluate a multitude of scenarios and determine what strategies work best under what conditions. This research shows that a neural network can be effectively implemented to classify the behavior of competitors and point to the best counterstrategy. / Ph. D.
535

Neural Network Enhancement of Closed-Loop Controllers for Ill-Modeled Systems with Unknown Nonlinearities

Smith, Bradley R. 15 December 1997 (has links)
The nonlinearities of a nonlinear system can degrade the performance of a closed-loop system. To improve the closed-loop performance, an adaptive technique using a neural network was developed. A neural network is placed in series between the output of the fixed-gain controller and the input to the plant. The weights are initialized to values that produce a unity gain across the neural network, which is referred to as a "feed-through neural network." The initial unity gain makes the output of the neural network equal to its input at the beginning of the convergence process, so the closed-loop system's performance with the neural network is initially equal to its performance without it. As the weights of the neural network converge, the performance of the system improves. However, the back propagation algorithm was developed to update the weights of a feed-forward neural network in the open loop; although it converged the weights in the closed loop, it did so very slowly. Two new update algorithms were therefore developed for converging the weights of the neural network inside the closed loop. The first makes the convergence process independent of the plant's dynamics and corrects for the effects of the closed loop. The second does not eliminate the effects of the plant's dynamics but still corrects for the effects of the closed loop. Both algorithms converge the weights much faster than the back propagation algorithm. All of the update algorithms have been shown to work effectively on stable and unstable nonlinear plants. / Ph. D.
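The "feed-through" initialization described in this abstract can be sketched as follows, assuming purely linear layers for simplicity (the thesis's actual architecture and activations may differ): identity weight matrices and zero biases make the network an exact unity gain before any training.

```python
import numpy as np

def feed_through_network(n):
    """A one-hidden-layer network initialized to an exact unity gain:
    identity weights and zero biases, so output equals input before any
    training (linear layers assumed for simplicity)."""
    W1, b1 = np.eye(n), np.zeros(n)
    W2, b2 = np.eye(n), np.zeros(n)
    def forward(u):
        h = W1 @ u + b1        # hidden layer
        return W2 @ h + b2     # equals u at initialization
    return forward

f = feed_through_network(3)
```

Because the network initially passes the controller output through unchanged, inserting it cannot degrade the closed loop before adaptation begins, which is the point of the feed-through idea.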
536

Modeling Autonomous Agents' Behavior Using Neuro-Immune Networks

Meshref, Hossam 22 August 2002 (has links)
Autonomous robots are expected to interact with dynamically changing environments. This interaction requires a certain level of behavior-based intelligence that allows the robot to adapt its behavior to its surrounding environment. Much research in biological information processing systems has addressed modeling the behavior of an autonomous robot. The Artificial Immune System (AIS) provides a new paradigm suited to dynamic problems involving unknown environments rather than static problems. The immune system has features such as memory, tolerance, and diversity that can be used in engineering applications. One important feature is meta-dynamics, in which new species of antibodies are produced continuously from the bone marrow. If the B-cell (robot) cannot deal with the current situation, new behaviors (antibodies) should be generated by the meta-dynamics function, and these behaviors should be incorporated into the existing immune system to gain immunity against new environmental changes. We decided to use a feed-forward Artificial Neural Network (ANN) to simulate this process and to build the AIS memory. Many researchers have tried to tackle different aspects of mimicking the biological immune system, but no one has previously proposed such an acquired memory. This contribution is made as a "proof of concept" for the field of biological immune system simulation and as a start for further research in this direction. Many applications can potentially use the designed Neuro-Immune Network (NIN), especially in autonomous robotics. We demonstrated the use of the NIN to control a robot arm in an unknown environment: as the system encounters new cases, it increases its ability to deal with both old and new situations.
This novel technique can be applied to many robotics applications in industry where autonomous robots are required to adapt their behavior in response to environmental changes. Regarding future work, the use of VLSI neural networks to increase system speed for real-time applications can be investigated, along with methods for the design and implementation of a similar VLSI chip for the NIN. / Ph. D.
537

Industry Based Fundamental Analysis: Using Neural Networks and a Dual-Layered Genetic Algorithm Approach

Stivason, Charles T. 06 January 1999 (has links)
This research tests the ability of artificial learning methodologies to map market returns better than logistic regression. The learning methodologies used are neural networks and dual-layered genetic algorithms. These methodologies are used to develop a trading strategy to generate excess returns, and the excess returns are compared to test the trading strategy's effectiveness. Market-adjusted and size-adjusted excess returns are calculated. Using a trading-strategy-based approach, the logistic regression models generated greater returns than the neural network and dual-layered genetic algorithm models; it appears that noise in the financial markets prevents the artificial learning methodologies from properly mapping market returns. The results confirm findings that fundamental analysis can be used to generate excess returns. / Ph. D.
538

Utilizing Recurrent Neural Networks for Temporal Data Generation and Prediction

Nguyen, Thaovy Tuong 15 June 2021 (has links)
The Falling Creek Reservoir (FCR) in Roanoke is monitored for water quality and other key measurements to distribute clean and safe water to the community. Forecasting these measurements is critical for management of the FCR. However, current techniques are limited by inherent Gaussian linearity assumptions. Since the dynamics of the ecosystem may be non-linear, we propose neural network-based schemes for forecasting. We create the LatentGAN architecture by extending the recurrent neural network-based ProbCast and autoencoder forecasting architectures to produce multiple forecasts for a single time series. Suites of forecasts allow for calculation of confidence intervals for long-term prediction. This work analyzes and compares LatentGAN's accuracy for two case studies with state-of-the-art neural network forecasting methods. LatentGAN performs similarly with these methods and exhibits promising recursive results. / Master of Science / The Falling Creek Reservoir (FCR) is monitored for water quality and other key measurements to ensure distribution of clean and safe water to the community. Forecasting these measurements is critical for management of the FCR and can serve as indicators of significant ecological events that can greatly reduce water quality. Current predictive techniques are limited due to inherent linear assumptions. Thus, this work introduces LatentGAN, a data-driven, generative, predictive neural network. For a particular sequence of data, LatentGAN is able to generate a suite of possible predictions at the next time step. This work compares LatentGAN's predictive capabilities with existing neural network predictive models. LatentGAN performs similarly with these methods and exhibits promising recursive results.
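One way a suite of generated forecasts can yield the confidence intervals this abstract describes is a simple empirical quantile band. This sketch assumes forecasts stacked one generated trajectory per row; the quantile approach is illustrative, not necessarily the method used in the thesis.

```python
import numpy as np

def ensemble_interval(forecasts, level=0.95):
    """Empirical confidence band from a suite of forecasts.

    forecasts: array of shape (n_samples, horizon), one generated
    trajectory per row, as one might draw from a generative model.
    Returns (lower, median, upper) arrays over the forecast horizon.
    """
    lo = (1 - level) / 2
    lower = np.quantile(forecasts, lo, axis=0)
    upper = np.quantile(forecasts, 1 - lo, axis=0)
    median = np.median(forecasts, axis=0)
    return lower, median, upper
```

The wider the band grows with lead time, the less certain the long-term prediction, which is the kind of information a reservoir manager would act on.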
539

Empirical Evaluation of Models Used to Predict Torso Muscle Recruitment Patterns

Perez, Miguel A. 20 October 1999 (has links)
For years, the human back has puzzled researchers with the complex behaviors it presents. Principally, the internal forces produced by back muscles have not been determined accurately. Two different approaches have historically been taken to predict muscle forces: the first relies on electromyography (EMG), while the second attempts to predict muscle responses using mathematical models. Three such predictive models are compared here: Sum of Cubed Intensities, Artificial Neural Networks, and Distributed Moment Histogram. These three models were adapted to run using recently published descriptions of the lower back anatomy. To evaluate their effectiveness, the models were compared in terms of their fit to a muscle activation database including 14 different muscles. The database was collected as part of this experiment and included 8 participants (4 male and 4 female) with similar height and weight. The participants resisted loads applied to their torso via a harness. Results showed the models performed poorly (average R² values in the 0.40s), indicating that further improvements are needed in current low back muscle activation modeling techniques. Considerable discrepancies were found between internal moments (at L3/L4) determined empirically and measured with a force plate, indicating that the maximum muscle stress selected and/or the anatomy used were faulty. The activation pattern database collected also fills a gap in the literature by considering static loading patterns that had not been systematically varied before. / Master of Science
540

Exploring Accumulated Gradient-Based Quantization and Compression for Deep Neural Networks

Gaopande, Meghana Laxmidhar 29 May 2020 (has links)
The growing complexity of neural networks makes their deployment on resource-constrained embedded or mobile devices challenging. With millions of weights and biases, modern deep neural networks can be computationally intensive, with large memory, power and computational requirements. In this thesis, we devise and explore three quantization methods (post-training, in-training and combined quantization) that quantize 32-bit floating-point weights and biases to lower bit width fixed-point parameters while also achieving significant pruning, leading to model compression. We use the total accumulated absolute gradient over the training process as the indicator of importance of a parameter to the network. The most important parameters are quantized by the smallest amount. The post-training quantization method sorts and clusters the accumulated gradients of the full parameter set and subsequently assigns a bit width to each cluster. The in-training quantization method sorts and divides the accumulated gradients into two groups after each training epoch. The larger group consisting of the lowest accumulated gradients is quantized. The combined quantization method performs in-training quantization followed by post-training quantization. We assume storage of the quantized parameters using compressed sparse row format for sparse matrix storage. On LeNet-300-100 (MNIST dataset), LeNet-5 (MNIST dataset), AlexNet (CIFAR-10 dataset) and VGG-16 (CIFAR-10 dataset), post-training quantization achieves 7.62x, 10.87x, 6.39x and 12.43x compression, in-training quantization achieves 22.08x, 21.05x, 7.95x and 12.71x compression and combined quantization achieves 57.22x, 50.19x, 13.15x and 13.53x compression, respectively. Our methods quantize at the cost of accuracy, and we present our work in the light of the accuracy-compression trade-off. / Master of Science / Neural networks are being employed in many different real-world applications. 
By learning the complex relationship between the input data and ground-truth output data during the training process, neural networks can predict outputs on new input data obtained in real time. To do so, a typical deep neural network often needs millions of numerical parameters, stored in memory. In this research, we explore techniques for reducing the storage requirements for neural network parameters. We propose software methods that convert 32-bit neural network parameters to values that can be stored using fewer bits. Our methods also convert a majority of numerical parameters to zero. Using special storage methods that only require storage of non-zero parameters, we gain significant compression benefits. On typical benchmarks like LeNet-300-100 (MNIST dataset), LeNet-5 (MNIST dataset), AlexNet (CIFAR-10 dataset) and VGG-16 (CIFAR-10 dataset), our methods can achieve up to 57.22x, 50.19x, 13.15x and 13.53x compression respectively. Storage benefits are achieved at the cost of classification accuracy, and we present our work in the light of the accuracy-compression trade-off.
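A minimal sketch of the gradient-importance idea this abstract describes: parameters with the smallest total accumulated absolute gradient are treated as least important and quantized (and pruned when they round to zero). The fixed-point rounding scheme and the `frac`/`scale` values are illustrative choices, not the thesis's settings.

```python
import numpy as np

def quantize_low_gradient(params, acc_grads, frac=0.5, scale=2**4):
    """Quantize the fraction of parameters with the smallest total
    accumulated absolute gradient, keeping the most important ones at
    full precision.

    params: weight array; acc_grads: same-shaped array of accumulated
    absolute gradients over training.
    """
    flat = params.ravel().copy()
    idx = np.argsort(acc_grads.ravel())      # least important first
    cut = int(frac * flat.size)
    low = idx[:cut]
    # Fixed-point rounding; values that round to zero are pruned
    flat[low] = np.round(flat[low] * scale) / scale
    return flat.reshape(params.shape)
```

After quantization, the many exact zeros make a sparse format such as compressed sparse row attractive for storage, which is where the compression ratios quoted above come from.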
