561

Methods for Increasing Robustness of Deep Convolutional Neural Networks

Uličný, Matej, January 2015
Recent discoveries have uncovered flaws in machine learning algorithms such as deep neural networks. Deep neural networks appear vulnerable to small amounts of non-random noise created by exploiting the network's input-to-output mapping. Applying this noise to an input image drastically decreases classification performance; such an image is referred to as an adversarial example. The purpose of this thesis is to examine how known regularization/robustness methods perform on adversarial examples. The robustness methods dropout, low-pass filtering, denoising autoencoders, adversarial training, and committees were implemented, combined, and tested. For the well-known MNIST (Mixed National Institute of Standards and Technology) benchmark dataset, the best combination of robustness methods was identified: judging from the experimental results, an ensemble of models trained on adversarial examples is the best approach for MNIST. The harmfulness of adversarial noise and some robustness experiments are also demonstrated on the CIFAR-10 (Canadian Institute for Advanced Research) dataset. Apart from robustness tests, the thesis describes experiments on human classification performance on noisy images and compares it with the performance of a deep neural network.
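The abstract does not spell out the noise-crafting procedure, but a standard way to exploit the input-to-output mapping is the fast gradient sign method. The sketch below is illustrative only; the stand-in MNIST classifier, the epsilon value, and the placeholder data are assumptions, not the thesis's setup.

```python
# Illustrative sketch (assumed technique: FGSM) of crafting adversarial noise
# by exploiting the network's input-to-output mapping.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in MNIST classifier
loss_fn = nn.CrossEntropyLoss()

def adversarial_example(x, y, epsilon=0.1):
    """Perturb input x so the loss w.r.t. the true label y increases."""
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Small, non-random noise aligned with the sign of the loss gradient.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

x = torch.rand(1, 1, 28, 28)   # placeholder image
y = torch.tensor([3])          # placeholder label
x_adv = adversarial_example(x, y)
```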
562

Optimizing neural network structures: faster speed, smaller size, less tuning

Li, Zhe, 01 January 2018
Deep neural networks have achieved tremendous success in many domains (e.g., computer vision, speech recognition, natural language processing, and games); however, many challenges remain in the deep learning community, such as how to speed up the training of large deep neural networks, how to compress large neural networks for mobile/embedded devices without performance loss, how to automatically design the optimal network structure for a given task, and how to further design optimal networks with improved performance and a given model size at reduced computation cost. To speed up the training of large neural networks, we propose to use multinomial sampling for dropout, i.e., sampling features or neurons according to a multinomial distribution with different probabilities for different features/neurons. To derive the optimal dropout probabilities, we analyze shallow learning with multinomial dropout and establish a risk bound for stochastic optimization. By minimizing a sampling-dependent factor in the risk bound, we obtain a distribution-dependent dropout with sampling probabilities that depend on the second-order statistics of the data distribution. To tackle the evolving distribution of neurons in deep learning, we propose an efficient adaptive dropout (named evolutional dropout) that computes the sampling probabilities on the fly from a mini-batch of examples. To compress large neural network structures, we propose a simple yet powerful method for reducing the size of deep convolutional neural networks (CNNs) based on parameter binarization. The striking difference from most previous work on parameter binarization/quantization lies in the different treatment of 1×1 convolutions and k×k convolutions (k>1): we binarize only the k×k convolutions into binary patterns. By doing this, we show that previous deep CNNs such as GoogLeNet and Inception-type networks can be compressed dramatically with only a marginal drop in performance. Second, in light of the different functionalities of 1×1 convolutions (data projection/transformation) and k×k convolutions (pattern extraction), we propose a new block structure, codenamed the pattern residual block, that adds transformed feature maps generated by 1×1 convolutions to the pattern feature maps generated by k×k convolutions; based on this block we design a small network with about 1 million parameters. Combined with our parameter binarization, it achieves better performance on ImageNet than similarly sized networks, including the recently released Google MobileNets. To automatically design neural networks, we study how to design a genetic programming approach for optimizing the structure of a CNN for a given task under limited computational resources, yet without imposing strong restrictions on the search space. To reduce the computational cost, we propose two general strategies observed to be helpful: (i) aggressively selecting the strongest individuals for survival and reproduction, and killing weaker individuals at a very early age; (ii) increasing the mutation frequency to encourage diversity and faster evolution. The combined strategy, together with additional optimization techniques, allows us to explore a large search space at affordable computational cost.
To further design optimal networks with improved performance and a given model size at reduced computation cost, we propose an ecologically inspired genetic approach to neural network structure search that includes two types of succession, primary and secondary, as well as accelerated extinction. Specifically, we first use primary succession to rapidly evolve a community of poorly initialized neural network structures into a more diverse community, followed by a secondary succession stage for fine-grained searching based on the networks from the primary succession. Accelerated extinction is applied in both stages to reduce computational cost. In addition, we introduce gene duplication to further exploit the novel blocks of layers that appear in the discovered network structures.
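A minimal sketch of the distribution-dependent (evolutional) dropout idea described above: sampling probabilities are computed on the fly from second-order statistics of a mini-batch rather than being uniform. The exact scaling rule and constants here are assumptions, not the dissertation's formulation.

```python
# Sketch of evolutional dropout: keep probabilities derived from the
# per-feature second moment of the current mini-batch.
import numpy as np

def evolutional_dropout(batch, keep_fraction=0.5, rng=None):
    """batch: (n_examples, n_features) activations of one layer."""
    rng = rng or np.random.default_rng()
    # Second-order statistic per feature/neuron over the mini-batch.
    second_moment = np.mean(batch ** 2, axis=0)
    probs = np.sqrt(second_moment)
    probs /= probs.sum()                      # multinomial sampling distribution
    n_features = batch.shape[1]
    # Keep each feature with probability proportional to its statistic.
    keep_prob = np.minimum(1.0, keep_fraction * n_features * probs)
    mask = rng.random(n_features) < keep_prob
    # Inverse-probability scaling keeps the expected activation unchanged.
    return batch * mask / np.maximum(keep_prob, 1e-12)

acts = np.random.randn(64, 256)  # placeholder mini-batch of activations
dropped = evolutional_dropout(acts)
```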
563

Metadata Validation Using a Convolutional Neural Network : Detection and Prediction of Fashion Products

Nilsson Harnert, Henrik, January 2019
In the e-commerce industry, importing data from third-party clothing brands requires validation of that data. Done manually, this validation is a tedious and time-consuming task. Part of it can be replaced or assisted by using computer vision to automatically find clothing types, such as T-shirts and pants, in the imported images. Once a clothing type has been detected, products likely to correlate with the imported data can be recommended with a certain accuracy. This was done alongside a prototype interface that can be used to start training, to find clothing types in an image, and to mask annotations of products. Annotations are areas describing different clothing types and are used to train an object detection model. A model for finding clothing types was trained with the Mask R-CNN object detector and achieves 0.49 mAP. A detection takes just over one second on an Nvidia GTX 1070 8 GB graphics card. Recommending one or several products based on a detection takes 0.5 seconds, using the k-nearest neighbors algorithm. When predictions are made for the products used to build the prediction model, almost perfect accuracy is achieved, whereas images of other products yield considerably worse results.
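A hedged sketch of the recommendation step named in the abstract: once the detector yields a feature vector for a detected garment, k-nearest neighbors ranks catalogue products by similarity. The 128-dimensional embeddings and product IDs below are placeholders; the thesis's actual features come from its Mask R-CNN pipeline.

```python
# k-NN product recommendation over placeholder detection embeddings.
import numpy as np
from sklearn.neighbors import NearestNeighbors

catalogue_features = np.random.rand(1000, 128)   # placeholder product embeddings
product_ids = np.arange(1000)

knn = NearestNeighbors(n_neighbors=5, metric="euclidean")
knn.fit(catalogue_features)

detected_feature = np.random.rand(1, 128)        # embedding of one detection
distances, indices = knn.kneighbors(detected_feature)
recommended = product_ids[indices[0]]            # most likely matching products
```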
564

Water quality modeling and rainfall estimation: a data driven approach

Roz, Evan Phillips, 01 July 2011
Water is vital to man, and its quality is a serious topic of concern. Addressing sustainability issues requires new understanding of water quality and water transport. Past research in hydrology has focused primarily on physics-based models to explain hydrological transport and water quality processes. The widespread use of in situ hydrological instrumentation has provided researchers a wealth of data for analysis, so the use of data mining for data-driven modeling is warranted. In fact, the relatively new field of hydroinformatics makes use of the vast data collection and communication networks that are prevalent in hydrology. In this thesis, a data-driven approach for analyzing water quality is introduced. Improvements in data collection systems allow the collection of large volumes of data. Although such improvements have given researchers sufficient information about various systems, the data must be used in conjunction with novel data-mining algorithms to build models and recognize patterns in large data sets. Since the mid-1990s, data mining has been used successfully for model extraction and for describing various phenomena of interest.
565

Water tariff forecasting models applied to municipal utilities and private companies in the South and Southeast regions of Brazil

Bezerra, Alberto Guilherme de Oliveira, January 2019
Advisor: Marcelo Libânio / Abstract: The objective of this work is to evaluate water tariff forecasting models applied to municipal utilities and private companies in the South and Southeast regions of Brazil, by computing and comparing the forecast errors and verifying the applicability of the predicted tariffs for each supply system. Two forecasting models were used: the first based on multiple linear regression techniques and the second on artificial neural networks. The ability of the two models to predict the tariff values to be charged by water supply and sewage collection service providers was assessed from the analysis of previously practiced tariffs. The underlying data for building the models were obtained from the national sanitation information system (SNIS). After confirming the consistency of the primary database, the data were processed and the variables most relevant to defining the tariff were selected through correlation analysis. The systems were classified according to the legal class of the service provider, the financial scenario (surplus or deficit) of these providers, and the population size of the municipalities served. The results indicated that the forecasting processes of both models were able to predict the tariffs with high accuracy and guaranteed the maintenance of the surplu... (Complete abstract: click electronic access below) / Master's
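The comparison at the heart of this work, multiple linear regression versus an artificial neural network, can be sketched as below. The feature columns, synthetic data, network size, and error metric are placeholders, not the thesis's SNIS variables or evaluation protocol.

```python
# Sketch: compare a linear regression and a small neural network on a
# synthetic tariff-forecasting task.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

X = np.random.rand(500, 6)   # e.g. past tariffs, population, operating costs
y = X @ np.array([2.0, 1.5, 0.5, 0.3, 0.2, 0.1]) + 0.1 * np.random.randn(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LinearRegression(),
              MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)):
    model.fit(X_tr, y_tr)
    err = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(type(model).__name__, f"MAPE={err:.3f}")
```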
566

Human Activity Recognition Based on Transfer Learning

Pang, Jinyong, 06 July 2018
Human activity recognition (HAR) based on time-series data is the problem of classifying various patterns. Its wide application in health care carries huge commercial benefit. With the increasing spread of smart devices, people have a strong desire for services and products adapted to their individual characteristics. Deep learning models can handle HAR tasks with satisfactory results; however, training a deep learning model consumes a great deal of time and computational resources. Consequently, developing a HAR system efficiently becomes a challenging task. In this study, we develop a solid HAR system using a convolutional neural network based on transfer learning, which can eliminate those barriers.
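A minimal sketch of the transfer-learning idea: reuse a network pretrained on a source task and retrain only the final classification layer for HAR, avoiding the cost of training from scratch. The ResNet-18 backbone, image-shaped inputs, and six activity classes below are assumptions, not the study's configuration.

```python
# Transfer learning sketch: freeze pretrained features, retrain the head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                           # freeze pretrained layers
backbone.fc = nn.Linear(backbone.fc.in_features, 6)  # 6 activity classes (assumed)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)   # sensor data rendered as images (assumed)
y = torch.randint(0, 6, (8,))
loss = loss_fn(backbone(x), y)
loss.backward()
optimizer.step()
```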
567

CONTINUOUS SPEECH RECOGNITION WITH MFCC, SSCH AND PNCC FEATURES, WAVELET DENOISING AND NEURAL NETWORKS

JAN KRUEGER SIQUEIRA, 09 February 2012
One of the biggest challenges in the field of continuous speech recognition is developing systems that are robust to additive noise. To that end, this work analyzes and tests three techniques. The first extracts features from the voice signal using the MFCC, SSCH, and PNCC methods. The second removes noise from the voice signal through wavelet denoising. The third, an original proposal called feature denoising, seeks to improve the extracted features using a set of neural networks. Although some of these techniques are already known in the literature, combining them brings many interesting and new results; in fact, the best performance comes from the union of PNCC and feature denoising.
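A sketch of the wavelet-denoising step on a generic noisy signal, assuming a Daubechies wavelet and the universal soft threshold; the thesis's wavelet family and threshold rule may differ.

```python
# Wavelet denoising sketch: decompose, soft-threshold details, reconstruct.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

noisy = np.sin(np.linspace(0, 8 * np.pi, 4096)) + 0.3 * np.random.randn(4096)
clean = wavelet_denoise(noisy)
```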
568

Sensor fusion to obtain yield data for sugarcane harvesters

Lima, Jeovano de Jesus Alves de, 20 February 2019
Sugarcane is an important semi-perennial crop in tropical regions of the world, the main source of sugar and bioenergy, and Brazil is its largest producer. Like any other crop, it demands constant improvement of practice, seeking sustainable cultivation with higher yields and lower costs. One alternative is the use of precision agriculture practices to exploit the spatial variability of potential yields, for which yield maps are essential. To obtain the data needed to generate a reliable map, a system is required that can read and georeference the sensor data and compare them against a calibration.
However, the results of the most recent studies of commercial yield monitors, which use only one type of sensor to determine yield maps, do not achieve the accuracy required for sugarcane. This study aimed to explore the potential of using data from sensors installed in different parts of the sugarcane harvester to build yield monitors and to detect crop gaps. For comparison purposes, a transshipment wagon instrumented with load cells was used to measure the harvested mass. Conventional statistical approaches and artificial intelligence were used for data fusion and prediction of sugarcane yield; the conventional methods were simple and multiple linear regression, compared against a neural network method. Besides yield, it was possible to verify that crop gaps can be identified from the collected data: all the measured sensors identified the manually produced, georeferenced gaps. Regarding the implemented models, those based on multiple linear regression showed no potential for integrating the sensors and predicting yield within the error limit defined in the assumptions of this work, namely less than 2%. In addition, the maps generated with these models showed some discrepancies, inflating yield in some areas and missing existing gaps. The fusion model using artificial neural networks, by contrast, proved an excellent alternative for yield prediction: once the network was trained, it showed errors below 2% in all generated maps. In general, each sensor evaluated individually presented advantages and disadvantages for yield determination, but when the data of the various sensors were fused, the results showed a coefficient of determination R2 above 95%, RMSE below 1 kg, and RE below 2%.
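A sketch, under assumptions, of the winning approach: fusing several harvester sensor channels with a small neural network to predict harvested mass. The sensor channels, network size, and synthetic data are illustrative, not the thesis's instrumentation.

```python
# Neural-network sensor fusion sketch: several channels in, one mass out.
import torch
import torch.nn as nn

# Columns: e.g. elevator power, chopper pressure, feed-roller displacement...
sensors = torch.randn(2000, 5)
mass = sensors @ torch.tensor([3.0, 1.0, 0.5, 0.2, 0.1]) + 0.05 * torch.randn(2000)

fusion_net = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(fusion_net.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(fusion_net(sensors).squeeze(1), mass)
    loss.backward()
    opt.step()
```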
569

Characterisation and modelling of naturally fractured reservoirs

Tran, Nam Hong, Petroleum Engineering, Faculty of Engineering, UNSW, January 2004
Naturally fractured reservoirs are generally extremely complex. The aim of characterising and modelling such reservoirs is to construct numerical models of rock and fractures, preparing input data for reliable stimulation and fluid-flow simulation analyses. This requires knowledge of different fracture heterogeneities and their correlations at well locations and in inter-well regions. This study addresses how to integrate information from various field data sources and construct comprehensive discrete fracture networks for naturally fractured reservoirs. The methodology combines several mathematical and artificial intelligence techniques, including statistics, geostatistics, fuzzy neural networks, stochastic simulation, and simulated annealing global optimisation. The study has contributed to knowledge in the characterisation and modelling of naturally fractured reservoirs in several ways. It has developed: (1) an effective and data-dependent fracture characterisation procedure, which examines all the conventional reservoir data sources and their roles in characterising different fracture properties; the procedure is both comprehensive and flexible, able to integrate all the multi-scaled and diverse fracture information from the different data sources; (2) an improved hybrid stochastic generation algorithm for modelling discrete fracture networks; the stochastic simulation can utilise both discrete and continuum fracture information, and can simulate not only complicated distributions of fracture properties (e.g. multimodal circular statistics and non-parametric distributions) but also their correlations; in addition, with the incorporation of artificial fuzzy neural simulation, the discrete multifractal geometry of fracture size and the fracture density distribution map can be evaluated and modelled, making this model more flexible and comprehensive than most previous fracture modelling approaches; (3) an improved conditional global optimisation model for modelling discrete fracture networks; this hybrid model takes full advantage of the advanced fracture characterisation using geostatistical and fuzzy neural analyses, treating discrete fractures individually while still modelling continuum information; compared to the stochastic simulation approach it produces more representative fracture networks, and compared to conventional optimisation programs it is more versatile and contains a superior objective function.
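The abstract names simulated annealing global optimisation as one ingredient; a generic sketch of that idea applied to a discrete fracture set follows. The objective function and the perturbation move are placeholders, not the thesis's formulation.

```python
# Generic simulated-annealing sketch: perturb a fracture set and accept
# changes that reduce an objective (mismatch with target field statistics).
import math
import random

def objective(fractures):
    # Placeholder: penalise deviation of mean fracture length from a target.
    target = 10.0
    mean_len = sum(f["length"] for f in fractures) / len(fractures)
    return (mean_len - target) ** 2

fractures = [{"length": random.uniform(1, 20)} for _ in range(100)]
temp = 1.0
for step in range(5000):
    cand = [dict(f) for f in fractures]
    cand[random.randrange(len(cand))]["length"] = random.uniform(1, 20)  # move
    delta = objective(cand) - objective(fractures)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        fractures = cand       # accept improving moves, and some worsening ones
    temp *= 0.999              # cooling schedule
```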
570

Call-independent identification in birds

Fox, Elizabeth J. S., January 2008
[Truncated abstract] The identification of individual animals based on acoustic parameters is a non-invasive method of identifying individuals, with considerable advantages over physical marking procedures. One requirement for an effective and practical method of acoustic individual identification is that it be call-independent, i.e., determining identity must not require a comparison of the same call or song type. This means that an individual's identity over time can be determined regardless of any changes to its vocal repertoire, and different individuals can be compared regardless of whether they share calls. Although several methods of acoustic identification currently exist, for example discriminant function analysis or spectrographic cross-correlation, none are call-independent. Call-independent identification has been developed for human speaker recognition, and this thesis aimed to: 1) determine whether call-independent identification is possible in birds, using methods similar to those used for human speaker recognition; 2) examine the impact of noise in a recording on identification accuracy and determine methods of removing the noise and increasing accuracy; 3) compare features and classifiers to determine the best method of call-independent identification in birds; and 4) determine the practical limitations of call-independent identification in birds with respect to increasing population size, changing vocal characteristics over time, using different call categories, and using the method in an open population. ... For classification, Gaussian mixture models and probabilistic neural networks resulted in higher accuracy, and were simpler to use, than multilayer perceptrons. Using the best methods of feature extraction and classification resulted in 86-95.5% identification accuracy for two passerine species, with all individuals correctly identified. A study of the limitations of the technique, in terms of population size, the category of call used, accuracy over time, and the effects of an open population, found that acoustic identification using perceptual linear prediction and probabilistic neural networks can successfully identify individuals in a population of at least 40 individuals, can be used on call categories other than song, and can be used in open populations in which a new recording may belong to a previously unknown individual. However, identity could only be determined accurately for less than three months, limiting the current technique to short-term field studies. This thesis demonstrates the application of speaker recognition technology to enable call-independent identification in birds. Call-independence is a prerequisite for the successful application of acoustic individual identification in many species, especially passerines, but has so far received little attention in the scientific literature. This thesis demonstrates that call-independent identification is possible in birds, and it tests and finds methods to overcome the practical limitations of the approach, enabling its future use in biological studies, particularly for the conservation of threatened species.
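A minimal sketch of one of the classifiers the thesis found most accurate, Gaussian mixture models: one GMM per individual is fit on its feature frames (e.g. PLP or MFCC vectors), and a new recording is assigned to the model with the highest likelihood, independent of call type. Feature extraction is mocked with synthetic frames; dimensions and mixture size are assumptions.

```python
# Call-independent identification sketch: one GMM per individual,
# classify a new recording by maximum average log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
birds = {b: rng.normal(loc=b, size=(300, 12)) for b in range(3)}  # mock frames

models = {b: GaussianMixture(n_components=8, random_state=0).fit(frames)
          for b, frames in birds.items()}

new_recording = rng.normal(loc=1, size=(80, 12))   # frames of an unknown bird
scores = {b: m.score(new_recording) for b, m in models.items()}  # avg log-lik
identity = max(scores, key=scores.get)             # predicted individual
```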
