About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
381

Advanced technology applied to PIV measurement

Pan, Xiao Bo January 1996 (has links)
No description available.
382

A neurofuzzy expert system for competitive tendering in civil engineering

Wanous, Mohammed January 2000 (has links)
No description available.
383

Cascaded linear shift invariant processing in pattern recognition

Reed, Stuart January 2000 (has links)
Image recognition is the process of classifying a pattern in an image into one of a number of stored classes. It is used in such diverse applications as medical screening, quality control in manufacture and military target recognition. An image recognition system is called shift invariant if a shift of the pattern in the input image produces a proportional shift in the output, meaning that both the class and location of the object in the image are identified. The work presented in this thesis considers a cascade of linear shift invariant optical processors, or correlators, separated by fields of point non-linearities, called the cascaded correlator. This is introduced as a method of providing parallel, shift-invariant, non-linear pattern recognition in a system that can learn in the manner of neural networks. It is shown that if a neural network is constrained to give overall shift invariance, the resulting structure is a cascade of correlators, meaning that the cascaded correlator is the only architecture which will provide fully shift-invariant pattern recognition. The issues of training such a non-linear system are discussed in neural network terms, and the non-linear decisions of the system are investigated. By considering digital simulations of a two-stage system, it is shown that the cascaded correlator is superior to linear filtering for both discrimination and tolerance to image distortion. This is shown for theoretical images and in real-world applications based on fault identification in can manufacture. The cascaded correlator has also been proven as an optical system by implementation in a joint transform correlator architecture. By comparing simulated and optical results, the resulting practical errors are analysed and compensated. It is shown that the optical implementation produces results similar to those of the simulated system, meaning that it is possible to provide a highly non-linear decision using robust parallel optical processing techniques.
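
To make the architecture concrete, here is a minimal sketch of a two-stage cascaded correlator in Python/NumPy. The FFT-based correlation, the sigmoid as the field of point non-linearities, and all kernel and image sizes are illustrative assumptions, not the design used in the thesis:

    import numpy as np

    def correlate(image, kernel):
        # Linear shift-invariant correlation, computed in the frequency domain;
        # the kernel is zero-padded to the image size.
        K = np.fft.fft2(kernel, s=image.shape)
        return np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(K)))

    def cascaded_correlator(image, h1, h2):
        stage1 = correlate(image, h1)            # first linear correlator
        hidden = 1.0 / (1.0 + np.exp(-stage1))   # field of point non-linearities
        return correlate(hidden, h2)             # second linear correlator

    rng = np.random.default_rng(0)
    out = cascaded_correlator(rng.random((64, 64)),
                              rng.random((8, 8)), rng.random((8, 8)))
    # A shifted input pattern produces a proportionally shifted output peak.
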
384

Non-linear versus non-Gaussian volatility models

Schittenkopf, Christian, Dorffner, Georg, Dockner, Engelbert J. January 1999 (has links) (PDF)
One of the most challenging topics in financial time series analysis is the modeling of conditional variances of asset returns. Although conditional variances are not directly observable there are numerous approaches in the literature to overcome this problem and to predict volatilities on the basis of historical asset returns. The most prominent approach is the class of GARCH models where conditional variances are governed by a linear autoregressive process of past squared returns and variances. Recent research in this field, however, has focused on modeling asymmetries of conditional variances by means of non-linear models. While there is evidence that such an approach improves the fit to empirical asset returns, most non-linear specifications assume conditional normal distributions and ignore the importance of alternative models. Concentrating on the distributional assumptions is, however, essential since asset returns are characterized by excess kurtosis and hence fat tails that cannot be explained by models with sufficient heteroskedasticity. In this paper we take up the issue of returns' distributions and contrast it with the specification of non-linear GARCH models. We use daily returns for the Dow Jones Industrial Average over a large period of time and evaluate the predictive power of different linear and non-linear volatility specifications under alternative distributional assumptions. Our empirical analysis suggests that while non-linearities do play a role in explaining the dynamics of conditional variances, the predictive power of the models does also depend on the distributional assumptions. (author's abstract) / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
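
As a point of reference for the GARCH baseline mentioned above, the GARCH(1,1) conditional variance follows the recursion sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]. A minimal sketch, with illustrative parameter values rather than estimates from the Dow Jones data used in the paper:

    import numpy as np

    def garch11_variance(returns, omega=1e-5, alpha=0.08, beta=0.90):
        # sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
        sigma2 = np.empty(len(returns))
        sigma2[0] = returns.var()   # initialise at the sample variance
        for t in range(1, len(returns)):
            sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
        return sigma2
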
385

Training Recurrent Neural Networks

Sutskever, Ilya 13 August 2013 (has links)
Recurrent Neural Networks (RNNs) are powerful sequence models that were believed to be difficult to train, and as a result they were rarely used in machine learning applications. This thesis presents methods that overcome the difficulty of training RNNs, and applications of RNNs to challenging problems. We first describe a new probabilistic sequence model that combines Restricted Boltzmann Machines and RNNs. The new model is more powerful than similar models while being less difficult to train. Next, we present a new variant of the Hessian-free (HF) optimizer and show that it can train RNNs on tasks that have extreme long-range temporal dependencies, which were previously considered to be impossibly hard. We then apply HF to character-level language modelling and get excellent results. We also apply HF to optimal control and obtain RNN control laws that can successfully operate under conditions of delayed feedback and unknown disturbances. Finally, we describe a random parameter initialization scheme that allows gradient descent with momentum to train RNNs on problems with long-term dependencies. This directly contradicts widespread beliefs about the inability of first-order methods to do so, and suggests that previous attempts at training RNNs failed partly due to flaws in the random initialization.
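
A minimal sketch of the two ingredients highlighted above: a scaled random initialisation of the recurrent weight matrix and the classical momentum update. Scaling the spectral radius to just above 1 is an assumption in the spirit of echo-state initialisations, and all constants are illustrative rather than the thesis's values:

    import numpy as np

    rng = np.random.default_rng(0)

    def init_recurrent(n_hidden, target_radius=1.1):
        # Draw a dense random matrix and rescale its spectral radius.
        W = rng.normal(0.0, 1.0, (n_hidden, n_hidden))
        radius = np.abs(np.linalg.eigvals(W)).max()
        return W * (target_radius / radius)

    def momentum_step(w, velocity, grad, lr=1e-2, mu=0.9):
        # Classical momentum: accumulate a velocity, then move along it.
        velocity = mu * velocity - lr * grad
        return w + velocity, velocity
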
386

Machine Learning for Aerial Image Labeling

Mnih, Volodymyr 09 August 2013 (has links)
Information extracted from aerial photographs has found applications in a wide range of areas including urban planning, crop and forest management, disaster relief, and climate modeling. At present, much of the extraction is still performed by human experts, making the process slow, costly, and error prone. The goal of this thesis is to develop methods for automatically extracting the locations of objects such as roads, buildings, and trees directly from aerial images. We investigate the use of machine learning methods trained on aligned aerial images and possibly outdated maps for labeling the pixels of an aerial image with semantic labels. We show how deep neural networks implemented on modern GPUs can be used to efficiently learn highly discriminative image features. We then introduce new loss functions for training neural networks that are partially robust to incomplete and poorly registered target maps. Finally, we propose two ways of improving the predictions of our system by introducing structure into the outputs of the neural networks. We evaluate our system on the largest and most-challenging road and building detection datasets considered in the literature and show that it works reliably under a wide variety of conditions. Furthermore, we are releasing the first large-scale road and building detection datasets to the public in order to facilitate future comparisons with other methods.
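
A minimal sketch of the patch-based labelling setup the abstract describes: the model sees a large context patch of the aerial image and predicts the per-pixel labels of the smaller patch at its centre. The patch sizes are placeholders, and the flattening assumes a fully connected network rather than the thesis's exact architecture:

    import numpy as np

    CONTEXT, OUTPUT = 64, 16   # context window and predicted label patch (pixels)

    def make_example(image, label_map, row, col):
        # Assumes (row, col) lies far enough from the image border.
        hc, ho = CONTEXT // 2, OUTPUT // 2
        x = image[row - hc:row + hc, col - hc:col + hc]
        y = label_map[row - ho:row + ho, col - ho:col + ho]
        return x.reshape(-1), y.reshape(-1)   # input features, per-pixel targets
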
389

Hybrid Learning Algorithm For Intelligent Short-term Load Forecasting

Topalli, Ayca Kumluca 01 January 2003 (has links) (PDF)
Short-term load forecasting (STLF) is an important part of the power generation process. For years, it was achieved by traditional stochastic approaches such as time series analysis, but new methods based on artificial intelligence have recently emerged in the literature and started to replace the old ones in industry. In order to follow the latest developments and to have a modern system, this work investigates STLF in Turkey using neural networks. For this purpose, a method is proposed to forecast Turkey's total electric load one day in advance. A hybrid learning scheme that combines off-line learning with real-time forecasting is developed to make use of the available past data for adapting the weights and to further adjust these connections according to the changing conditions. It is also suggested to tune the step size iteratively for better accuracy. Since a single neural network model cannot cover all load types, the data are clustered according to the differences in their characteristics. Apart from this, special days are extracted from the normal training sets and handled separately. In this way, a solution is proposed for all load types, including working days, weekends and special holidays. For the selection of input parameters, a technique based on principal component analysis is suggested. A traditional ARMA model is constructed for the same data as a benchmark and the results are compared. The proposed method gives lower percentage errors in all cases, especially for holiday loads. The average error for the year 2002 data is 1.60%.
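
A minimal sketch of the hybrid scheme described above: a model is fitted off-line on historical load data and then adapted on-line as each day's actual load arrives, with an iteratively tuned step size. The linear model and the tuning rule are illustrative stand-ins for the thesis's neural network:

    import numpy as np

    def offline_fit(X, y):
        # Off-line learning on the available past data (least squares here).
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return w

    def online_update(w, x_today, actual_load, lr, prev_err):
        # Real-time adaptation once the day's actual load is known.
        err = actual_load - x_today @ w
        w = w + lr * err * x_today
        # Iterative step-size tuning: shrink when the error grows, else grow slightly.
        lr = lr * 0.9 if abs(err) > prev_err else lr * 1.05
        return w, lr, abs(err)
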
390

Parallel training algorithms for analogue hardware neural nets

Zhang, Liang January 2007 (has links)
Feedforward neural networks are massively parallel computing structures that have the capability of universal function approximation. The most prevalent realisation of neural nets is in the form of an algorithm implemented in a computer program. Neural networks as computer programs lose the inherent parallelism, which can only be recovered by executing the program on an expensive parallel digital computer. Achieving the inherent massive parallelism at a lower cost requires direct hardware realisation of the neural net. Such hardware, called the Local Cluster Neural Network (LCNN) chip, has been developed jointly by QUT and the Heinz Nixdorf Institute (Germany). But this neural net chip lacks the capability of in-circuit learning or on-chip training: the weights for the analogue LCNN network have to be computed off chip on a digital computer. Building on previous work, this research focuses on the Local Cluster Neural Network and its analogue chip. The characteristics of the LCNN chip were measured exhaustively and its behaviour was compared to the theoretical functionality of the LCNN. To overcome the manufacturing fluctuations and deviations present in analogue circuits, we used a chip-in-the-loop strategy for training the LCNN chip, and a new training algorithm, Probabilistic Random Weight Change (PRWC), was developed for chip-in-the-loop function approximation. In order to implement the LCNN analogue chip with on-chip training, two training algorithms were studied in on-line training mode in simulations: the PRWC algorithm and a modified Gradient Descent (GD) algorithm. The circuit designs for PRWC on-chip training and GD on-chip training are outlined, and the two methods are compared for their training performance and the complexity of their circuits. This research provides the foundation for the next version of the LCNN analogue hardware implementation.
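
A minimal sketch of a Random Weight Change style update for chip-in-the-loop training: small random perturbations are applied to the weights, and a perturbation is kept only if the error measured on the chip improves. The probabilistic element (perturbing each weight with probability p) and all constants are assumptions about the PRWC variant, not taken from the thesis:

    import numpy as np

    rng = np.random.default_rng(0)

    def prwc_step(weights, current_error, chip_error, delta=0.01, p=0.5):
        # chip_error(w) returns the error measured on the analogue chip for weights w.
        mask = rng.random(weights.shape) < p   # which weights to perturb this step
        candidate = weights + delta * rng.choice([-1.0, 1.0], size=weights.shape) * mask
        new_error = chip_error(candidate)
        if new_error < current_error:
            return candidate, new_error        # keep the improving change
        return weights, current_error          # otherwise revert
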
