151

The Convolutional Recurrent Structure in Computer Vision Applications

Xie, Dong 12 1900 (has links)
By organically fusing convolutional neural network (CNN) and recurrent neural network (RNN) methods, this dissertation focuses on applications in optical character recognition and image classification. The first part of the dissertation presents a novel end-to-end receipt recognition system for capturing effective information from receipts (CEIR). The main contributions of this part are threefold. First, the research develops a preprocessing method for receipt images. Second, a modified connectionist text proposal network is introduced to perform text detection. Third, the CEIR combines a convolutional recurrent neural network with connectionist temporal classification with maximum entropy regularization as the loss function to update the network weights and extract the characters from receipts. The CEIR system is validated on the scanned receipts optical character recognition and information extraction (SROIE) database. Furthermore, the CEIR system is robust and can be extended to a variety of scenarios beyond receipts. For the convolutional recurrent structure applied to land use image classification, this dissertation proposes a novel deep learning model, the convolutional recurrent land use classifier (CRLUC), which further improves the accuracy of classifying remote sensing land use images. In addition, a convolutional fully-connected neural network with a hard sample memory pool structure (CFMP) is proposed to tackle remote sensing land use image classification tasks. The CRLUC and CFMP algorithms are tested on popular datasets. Experimental studies show that the proposed algorithms classify images with higher accuracy and fewer training episodes than popular image classification algorithms.
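As a rough illustration of the convolutional recurrent structure trained with a CTC loss described in this abstract, here is a minimal PyTorch sketch; the layer sizes, the alphabet size, and the omission of the maximum-entropy regularization term are simplifying assumptions, not details from the dissertation.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Minimal convolutional recurrent network for line-level text recognition."""
    def __init__(self, num_classes, hidden=128):
        super().__init__()
        # CNN backbone: collapse the image height into per-time-step features
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
        )
        self.rnn = nn.LSTM(64 * 8, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)   # includes the CTC blank label

    def forward(self, x):            # x: (batch, 1, 32, width)
        f = self.cnn(x)              # (batch, 64, 8, width/2)
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)   # one feature vector per column
        out, _ = self.rnn(f)
        return self.fc(out)          # (batch, time, num_classes)

# CTC loss expects (time, batch, classes) log-probabilities
model = CRNN(num_classes=37)                       # hypothetical 36-character alphabet + blank
images = torch.randn(4, 1, 32, 128)
logits = model(images).log_softmax(2).permute(1, 0, 2)
targets = torch.randint(1, 37, (4, 10))
loss = nn.CTCLoss(blank=0)(logits, targets,
                           input_lengths=torch.full((4,), logits.size(0)),
                           target_lengths=torch.full((4,), 10))
loss.backward()
```

The CTC loss lets the network learn character sequences without per-column alignment labels, which is what makes the end-to-end recognition of variable-length receipt text practical.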
152

Classification of incorrectly picked components using Convolutional Neural Networks

Kolibacz, Eric January 2018 (has links)
Printed circuit boards used in most ordinary electrical devices are usually assembled on assembly lines. Pick-and-place machines in those lines require accurate detection of incorrectly picked components, which is commonly performed via image analysis. The goal of this project is to investigate whether state-of-the-art performance in an industrial quality assurance task can be achieved through the application of artificial neural networks. Experiments with different network architectures and data modifications are conducted to achieve precise image classification. Although the classification rates do not surpass or equal those of the existing vision-based detection system, there remains great potential in deploying a machine-learning-based algorithm in pick-and-place machines.
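For illustration only, a minimal PyTorch sketch of the kind of CNN classifier and augmentation pipeline such a study might evaluate; the architecture, input size, augmentations, and class labels are assumptions, not the thesis's configuration.

```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms

# Hypothetical augmentation pipeline ("data modifications") applied to component images
augment = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Small CNN deciding whether a component was picked correctly or incorrectly
classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),               # assumes 64x64 input images, 2 classes
)

image = Image.new("RGB", (64, 64))            # placeholder for a camera image
x = augment(image).unsqueeze(0)               # (1, 3, 64, 64)
loss = nn.CrossEntropyLoss()(classifier(x), torch.tensor([1]))
loss.backward()
```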
153

Hybrid Machine Learning and Physics-Based Modeling Approaches for Process Control and Optimization

Park, Junho 01 December 2022 (has links)
Transformer neural networks have made a significant impact on natural language processing. The Transformer's self-attention mechanism effectively addresses the vanishing gradient problem that limits a network's learning capability, especially as the time series gets longer or the network gets deeper. This dissertation examines the use of the Transformer model for time-series forecasting and customizes it as a simultaneous multistep-ahead prediction model in a surrogate model predictive control (MPC) application. The proposed method demonstrates improved control performance and computational efficiency compared to long short-term memory (LSTM)-based MPC and to one-step-ahead prediction model structures for both LSTM and Transformer networks. In addition to the Transformer, this research investigates hybrid machine-learning modeling. Machine learning models are known for superior function approximation capability given sufficient data; however, data of the quantity and quality needed to ensure prediction precision are usually not readily available. The physics-informed neural network (PINN) is a hybrid modeling method that uses dynamic physics-based equations in training a standard machine learning model as a form of multi-objective optimization. This research studies the PINN approach with the Transformer, a state-of-the-art time-series neural network, providing a standard procedure for developing the Physics-Informed Transformer (PIT) and validating it with various case studies. This research also investigates the benefit of nonlinear model-based control and estimation algorithms for managed pressure drilling (MPD), presenting a new real-time high-fidelity flow model (RT-HFM) for bottom-hole pressure (BHP) regulation in MPD operations. Lastly, this dissertation presents details of an Arduino microcontroller temperature control lab as a benchmark for modeling and control methods. Standard benchmarks are essential for comparing competing models and control methods, especially when a new method is proposed. A physical benchmark captures real process characteristics such as the requirement to meet a cycle time, discrete sampling intervals, communication overhead with the process, and model mismatch. Novel contributions of this work are (1) a new MPC system built upon a Transformer time-series architecture, (2) a training method for time-series machine learning models that enables multistep-ahead prediction, (3) verification of a 15-fold solution-time improvement of the Transformer MPC over LSTM networks, (4) physics-informed machine learning to improve extrapolation potential, and (5) two case studies that demonstrate hybrid modeling and benchmark performance criteria.
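A minimal sketch of a Transformer-encoder forecaster that produces an H-step-ahead trajectory in a single forward pass, of the kind a surrogate MPC could query; the dimensions, window length, output head, and omission of positional encoding are assumptions made for brevity, not the dissertation's model.

```python
import torch
import torch.nn as nn

class MultiStepTransformer(nn.Module):
    """Maps a window of past measurements to H future outputs in one forward pass."""
    def __init__(self, n_features, horizon, d_model=64, nhead=4, layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.head = nn.Linear(d_model, horizon)   # simultaneous multistep-ahead output

    def forward(self, x):              # x: (batch, window, n_features)
        z = self.encoder(self.embed(x))
        return self.head(z[:, -1, :])  # (batch, horizon) predicted trajectory

model = MultiStepTransformer(n_features=3, horizon=10)
past = torch.randn(16, 30, 3)          # 30 past samples of 3 measured variables
future = model(past)                   # 10-step-ahead prediction the MPC can optimize over
```

Producing the whole horizon at once, rather than recursively feeding one-step predictions back in, is what cuts the per-iteration cost inside the controller.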
154

Machine Learning Approaches to Data-Driven Transition Modeling

Zafar, Muhammad-Irfan 15 June 2023 (has links)
Laminar-turbulent transition has a strong impact on aerodynamic performance in many practical applications. Hence, there is a practical need for reliable and efficient transition prediction models, which form a critical element of the CFD process for aerospace vehicles across multiple flow regimes. This dissertation explores machine learning approaches to develop transition models using data from computations based on linear stability theory. Such data provide a strong correlation with the underlying physics governed by the linearized disturbance equations. In the proposed transition model, a convolutional neural network-based model encodes information from boundary layer profiles into integral quantities. This automated feature extraction capability enables generalization of the proposed model to multiple instability mechanisms, even those for which physically defined shape factor parameters cannot be defined or determined in a consistent manner. Furthermore, sequence-to-sequence mapping is used to predict the transition location based on the mean boundary layer profiles. Such an end-to-end transition model provides a significantly simplified workflow. Although the proposed model has been analyzed for two-dimensional boundary layer flows, the embedded feature extraction capability enables its generalization to other flows as well. Neural network-based nonlinear functional approximation is also presented in the context of transport equation-based closure models. Such models are examined for their computational complexity and invariance properties based on the transport equation of a general scalar quantity. The data-driven approaches explored here demonstrate the potential for improved transition prediction models. / Doctor of Philosophy / Surface skin friction and aerodynamic heating caused by the flow over a body increase significantly due to the transition from laminar to turbulent flow. Hence, efficient and reliable prediction of the transition onset location is a critical component of simulating fluid flows in engineering applications. Currently available transition prediction tools do not provide a good balance between computational efficiency and accuracy. This dissertation explores machine learning approaches to develop efficient and reliable models for predicting transition in a significantly simplified manner. A convolutional neural network is used to extract features from the state of the boundary layer flow at each location along the body. These extracted features are then processed sequentially using a recurrent neural network to predict the amplification of instabilities in the flow, which is directly correlated with the onset of transition. The automated nature of this feature extraction enables the generalization of the model to multiple transition mechanisms associated with different flow conditions and geometries. Furthermore, an end-to-end mapping from flow data to transition prediction requires no user expertise in stability theory and provides a significantly simplified workflow compared to traditional stability-based computations. Another category of neural network-based models (known as neural operators) is also examined, which can learn functional mappings from input variable fields to output quantities. Such models can learn directly from data for complex sets of problems, without knowledge of the underlying governing equations. This attribute can be leveraged to develop a transition prediction model that can be integrated seamlessly into flow solvers.
While further development is needed, such data-driven models demonstrate the potential for improved transition prediction models.
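As an illustration of the CNN-profile-encoder plus recurrent sequence model described above, here is a minimal PyTorch sketch; the profile variables, network sizes, and the choice of a GRU and a per-station scalar output (e.g., an amplification factor) are assumptions, not the dissertation's architecture.

```python
import torch
import torch.nn as nn

class TransitionModel(nn.Module):
    """CNN encodes each boundary-layer profile; an RNN processes stations in sequence."""
    def __init__(self, n_vars=2, n_points=64, hidden=64):
        super().__init__()
        # 1D CNN over the wall-normal direction extracts integral-like features
        self.encoder = nn.Sequential(
            nn.Conv1d(n_vars, 16, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(8),
            nn.Flatten(), nn.Linear(16 * 8, hidden), nn.ReLU(),
        )
        # Sequence-to-sequence mapping along the streamwise direction
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)    # e.g., instability amplification per station

    def forward(self, profiles):            # (batch, stations, n_vars, n_points)
        b, s, v, p = profiles.shape
        feats = self.encoder(profiles.reshape(b * s, v, p)).reshape(b, s, -1)
        out, _ = self.rnn(feats)
        return self.head(out).squeeze(-1)   # (batch, stations)

model = TransitionModel()
profiles = torch.randn(4, 50, 2, 64)        # 50 streamwise stations, 2 profile variables
amplification = model(profiles)             # predicted growth, from which onset is inferred
```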
155

Investigation into Regression Analysis of Multivariate Additional Value and Missing Value Data Models Using Artificial Neural Networks and Imputation Techniques

Jagirdar, Suresh 01 October 2008 (has links)
No description available.
156

Investigating Shallow Neural Networks for Orbit Propagation Deployed on Spaceflight-Like Hardware

Quebedeaux, Hunter 01 January 2023 (has links) (PDF)
Orbit propagation is the backbone of many problems in the space domain, such as uncertainty quantification, trajectory optimization, and guidance, navigation, and control of on-orbit vehicles. Many of these techniques can rely on millions of orbit propagations, slowing computation, which is especially evident on low-powered satellite hardware. Past research has relied on lookup tables or data streaming to enable on-orbit solutions, but these approaches prove inaccurate or ineffective when communication is interrupted. In this work, we introduce the use of physics-informed neural networks (PINNs) for orbit propagation to achieve fast and accurate on-board solutions, accelerated by the GPU hardware now available on satellites. Physics-informed neural networks leverage the governing equations of motion during network training, allowing the network to optimize around the physical constraints of the system. This work uses unsupervised learning and introduces the concept of fundamental integrals of orbits to train PINNs to solve orbit problems with no knowledge of the true solution. Numerical experiments are conducted for both Earth orbits and cislunar space, marking the first time a neural network integrator has been implemented on flight-like hardware. The results show that PINNs can decrease solution evaluation time by several orders of magnitude while retaining accurate solutions to the perturbed two-body problem and the circular restricted three-body problem when deployed on spaceflight-like hardware. Implementation of these neural networks aims to reduce computational time and allow real-time evaluation of complex algorithms on board space vehicles.
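As a rough sketch of how a physics-informed network can be trained for orbit propagation, the following PyTorch snippet penalizes the residual of the two-body equation of motion at sampled collocation times; the architecture, units (no nondimensionalization), the missing initial-condition terms, and the omission of the thesis's fundamental-integral formulation are simplifications and assumptions.

```python
import torch
import torch.nn as nn

MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

net = nn.Sequential(            # maps time -> position (x, y, z); scaling omitted for brevity
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 3),
)

def physics_residual(t):
    """Residual of the two-body equation r'' = -mu * r / |r|^3 at collocation times t."""
    t = t.requires_grad_(True)
    r = net(t)                                              # (N, 3) predicted positions
    second_derivs = []
    for i in range(3):
        dr = torch.autograd.grad(r[:, i].sum(), t, create_graph=True)[0]
        d2r = torch.autograd.grad(dr.sum(), t, create_graph=True)[0]
        second_derivs.append(d2r)
    accel = torch.cat(second_derivs, dim=1)                 # (N, 3) network acceleration
    grav = -MU * r / r.norm(dim=1, keepdim=True) ** 3       # two-body gravity
    return ((accel - grav) ** 2).mean()

# Unsupervised training: only collocation times and the governing equations are needed
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    t_col = torch.rand(256, 1) * 5400.0                     # random times over ~one orbit
    loss = physics_residual(t_col)                          # plus initial-condition terms
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the loss is built from the equations of motion rather than reference trajectories, no "true" propagated solution is required, which is the key property the abstract highlights for on-board deployment.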
157

Stock Prediction with Neural Networks: A Comparison of Statistical Models, Neural Networks, and Combined Neural Networks

Oskarsson, Gustav January 2019 (has links)
This study concerns stock market prediction through a comparison of neural networks and statistical models, and aims to improve the accuracy of stock prediction. Much of the existing research on stock prediction uses statistical models, but neural networks have also been applied, mainly RNNs and CNNs. However, how these neural networks can be combined has not been studied, which is what this study addresses. Statistical models, neural networks, and combined neural networks are tested for predicting stocks at the minute level. The results show that a combination of two RNNs gives the best accuracy for stock prediction. The accuracy increases further if these combined neural networks are trained to predict different time horizons. In addition to the accuracy tests, simulations were run, and these also confirm that stocks can be predicted to some degree. Two combined RNNs gave the best results, but in the simulations the CNN also produced good predictions. One conclusion is that the stock market is not entirely efficient, since some opportunity to predict future values exists. Another conclusion is that neural networks outperform statistical models at stock prediction when the networks are combined and of RNN type.
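A minimal PyTorch sketch of two recurrent forecasters combined into one prediction, in the spirit of the combined-RNN idea above; the LSTM sizes, the two horizons, and the simple averaging of the outputs are assumptions, since the thesis does not specify the combination scheme here.

```python
import torch
import torch.nn as nn

class PricePredictor(nn.Module):
    """Single LSTM forecaster trained for one prediction horizon."""
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                   # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])

# Two RNNs, trained separately to predict different horizons, e.g. 1 and 5 minutes ahead
short_horizon = PricePredictor()
long_horizon = PricePredictor()

window = torch.randn(8, 60, 1)              # 60 minutes of price history per sample
combined = 0.5 * short_horizon(window) + 0.5 * long_horizon(window)   # simple average
```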
158

Managing a real-time massively-parallel neural architecture

Patterson, James Cameron January 2012 (has links)
A human brain has billions of processing elements operating simultaneously; the only practical way to model this computationally is with a massively-parallel computer. A computer on such a significant scale requires hundreds of thousands of interconnected processing elements, a complex environment that demands many levels of monitoring, management and control. Management begins from the moment power is applied and continues whilst the application software loads, executes, and the results are downloaded. This is the story of the research and development of a framework of scalable management tools that support SpiNNaker, a novel computing architecture designed to model spiking neural networks of biologically-significant sizes. This management framework provides solutions from the most fundamental set of power-on self-tests, through to complex, real-time monitoring of the health of the hardware and the software during simulation. The framework devised uses standard tools where appropriate, covering hardware up/down events and capacity information, through to bespoke software developed to provide real-time insight into neural network software operation across multiple levels of abstraction. With this layered management approach, users (or automated agents) have access to results dynamically and are able to make informed decisions on required actions in real-time.
159

Comparison of linear regression and neural networks for stock price prediction

Karlsson, Nils January 2021 (has links)
Stock market prediction has been a hot topic lately due to advances in computer technology and economics. One economic theory, the Efficient Market Hypothesis (EMH), states that all known information is already factored into prices, which makes it impossible to predict the stock market. Despite the EMH, many researchers have successfully predicted the stock market using neural networks on historical data. This thesis investigates stock prediction using both linear regression and neural networks (NN), with a twist: the inputs to the proposed methods are a number of profit predictions calculated with stochastic methods such as generalized autoregressive conditional heteroskedasticity (GARCH) and autoregressive integrated moving average (ARIMA), whereas the traditional approach uses raw data as inputs. The proposed methods show superior results in yielding profit: at best 1.1% in the Swedish market and 4.6% in the American market. The neural network yielded more profit than the linear regression model, which is reasonable given its ability to find nonlinear patterns. The historical data was used with different window sizes, giving a good understanding of the impact of window size on prediction performance.
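A minimal sketch of the feature construction implied above, using rolling one-step ARIMA forecasts as inputs to a regression model; the ARIMA order, window length, dummy data, and the use of a single feature are assumptions, and a GARCH forecast or a neural network could be substituted in the marked places.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 0.1, 200))   # placeholder minute-level price series

window = 120
X, y = [], []
for t in range(window, len(prices)):
    history = prices[t - window:t]
    # One-step-ahead ARIMA forecast used as an input feature; a GARCH volatility
    # forecast (e.g. from the `arch` package) could be appended the same way.
    forecast = ARIMA(history, order=(1, 1, 1)).fit().forecast(steps=1)[0]
    X.append([forecast])
    y.append(prices[t])                               # next observed price

# Linear regression on the model-based features; a neural network could replace this step
reg = LinearRegression().fit(np.array(X), np.array(y))
```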
160

Deep learning prediction of Quantmap clusters

Parakkal Sreenivasan, Akshai January 2021 (has links)
The hypothesis that similar chemicals exert similar biological activities has been widely adopted in the field of drug discovery and development. Quantitative Structure-Activity Relationship (QSAR) models have been used ubiquitously in drug discovery to understand the function of chemicals in biological systems. A common QSAR modeling method calculates similarity scores between chemicals to assess their biological function. However, because some chemicals can be structurally similar yet have different biological activities, or conversely structurally different yet have similar biological functions, various methods have instead been developed to quantify chemical similarity at the functional level. Quantmap is one such method, which uses biological databases to quantify the biological similarity between chemicals: quantitative molecular network topology analysis clusters chemical substances based on their bioactivities. By itself, however, this method cannot assign new chemicals (those which may not yet have biological data) to the derived clusters. Because biological data are lacking for many chemicals, this project explored deep learning models with respect to their ability to correctly assign unknown chemicals to Quantmap clusters. The deep learning methods explored included both convolutional and recurrent neural networks. Transfer learning/pretraining-based approaches and data augmentation methods were also investigated. The best-performing model among those considered was the Seq2seq model (a recurrent neural network containing two joint networks, a perceiver and an interpreter network), trained without pretraining but with data augmentation.
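For context, a minimal PyTorch sketch of a character-level recurrent classifier that assigns an integer-encoded SMILES string to one of the derived clusters; this is a simplified stand-in, not the thesis's Seq2seq (perceiver/interpreter) architecture, and the vocabulary, tokenization, and cluster count are assumptions.

```python
import torch
import torch.nn as nn

class SmilesClusterClassifier(nn.Module):
    """Character-level recurrent classifier mapping a SMILES string to a cluster label."""
    def __init__(self, vocab_size, n_clusters, embed_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.GRU(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_clusters)

    def forward(self, tokens):             # tokens: (batch, max_len) integer-encoded SMILES
        _, h = self.rnn(self.embed(tokens))
        return self.head(h[-1])            # cluster logits

# Toy character vocabulary; real SMILES tokenization handles multi-character tokens like Br/Cl
vocab = {c: i + 1 for i, c in enumerate("CNOPSFIBrcl()[]=#@+-123456789")}

def encode(smiles, max_len=64):
    ids = [vocab.get(ch, 0) for ch in smiles][:max_len]
    return torch.tensor(ids + [0] * (max_len - len(ids)))

model = SmilesClusterClassifier(vocab_size=len(vocab) + 1, n_clusters=10)
batch = torch.stack([encode("CC(=O)Oc1ccccc1C(=O)O"), encode("c1ccccc1")])  # aspirin, benzene
logits = model(batch)                      # (2, 10) scores over hypothetical clusters
```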
