1

Predictive data mining in a collaborative editing system: the Wikipedia articles for deletion process.

Ashok, Ashish Kumar January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / William H. Hsu / In this thesis, I examine the Articles for Deletion (AfD) system in Wikipedia, a large-scale collaborative editing project. Articles in Wikipedia can be nominated for deletion by registered users, who are expected to cite criteria from the Wikipedia deletion policy. For example, an article can be nominated for deletion if it contains copyright violations, vandalism, or advertising or other spam without relevant content. Articles whose subject matter does not meet the notability criteria, or that contain any other content not suitable for an encyclopedia, are also subject to deletion. The AfD page for an article is where Wikipedians (users of Wikipedia) discuss whether the article should be deleted. Listed articles are normally discussed for at least seven days, after which the deletion process proceeds based on community consensus: the page may be kept, merged or redirected, transwikied (i.e., copied to another Wikimedia project), renamed/moved to another title, userfied (moved to a user subpage), or deleted per the deletion policy. Users can vote to keep, delete, or merge the nominated article, and these votes can be viewed on the article's AfD page. However, this polling does not necessarily determine the outcome of the AfD process; in fact, Wikipedia policy specifically stipulates that a vote tally alone should not be considered a sufficient basis for a decision to delete or retain a page. In this research, I apply machine learning methods to determine how the final outcome of an AfD process is affected by factors such as the difference between versions of an article, the number of edits, and the number of disjoint edits (according to some contiguity constraints). My goal is to predict the outcome of an AfD by analyzing the AfD page and the editing history of the article. The technical objectives are to extract features from the AfD discussion and the version history, as reflected in the edit history page, that capture factors such as those discussed above, can be tested for relevance, and provide a basis for inductive generalization over past AfDs. Applications of such feature analysis include prediction and recommendation, with the performance goal of improving the precision and recall of AfD outcome prediction.
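The abstract above outlines a concrete pipeline: summarize an article's edit history as features (version differences, edit counts, disjoint edit runs) and learn to predict the keep/delete outcome. The sketch below illustrates that idea only in broad strokes; the feature definitions, the 24-hour gap used to separate "disjoint" edit runs, and the choice of a random-forest classifier are illustrative assumptions rather than the thesis's actual design.

```python
# Hypothetical sketch: predicting an AfD outcome from simple edit-history
# features. Feature definitions and the classifier choice are illustrative
# assumptions, not the method actually used in the thesis.
from dataclasses import dataclass
from typing import List
from sklearn.ensemble import RandomForestClassifier

@dataclass
class Revision:
    editor: str
    timestamp: float   # seconds since epoch
    size_bytes: int    # article size after this revision

def afd_features(history: List[Revision], gap_hours: float = 24.0) -> List[float]:
    """Summarize an article's edit history as a small feature vector."""
    sizes = [r.size_bytes for r in history]
    n_edits = len(history)
    size_delta = sizes[-1] - sizes[0] if n_edits > 1 else 0
    # "Disjoint edits": runs of edits separated by more than gap_hours.
    disjoint = 1
    for prev, cur in zip(history, history[1:]):
        if cur.timestamp - prev.timestamp > gap_hours * 3600:
            disjoint += 1
    n_editors = len({r.editor for r in history})
    return [n_edits, size_delta, disjoint, n_editors]

def train(histories: List[List[Revision]], outcomes: List[int]) -> RandomForestClassifier:
    """Fit a classifier; outcomes: 1 = article kept, 0 = article deleted."""
    X = [afd_features(h) for h in histories]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, outcomes)
    return clf
```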
2

Neural and genetic modelling, control and real-time finite simulation of flexible manipulators

Shaheed, Mohammad Hasan January 2000 (has links)
No description available.
3

Climate and agrometeorology forecasting using soft computing techniques.

Esteves, João Trevizoli January 2018 (has links)
Orientador: Glauco de Souza Rolim / Resumo: Precipitação, em pequenas escalas de tempo, é um fenômeno associado a altos níveis de incerteza e variabilidade. Dada a sua natureza, técnicas tradicionais de previsão são dispendiosas e exigentes em termos computacionais. Este trabalho apresenta um modelo para prever a ocorrência de chuvas em curtos intervalos de tempo por Redes Neurais Artificiais (RNAs) em períodos acumulados de 3 a 7 dias para cada estação climática, mitigando a necessidade de predizer o seu volume. Com essa premissa pretende-se reduzir a variância e aumentar a tendência dos dados, diminuindo a responsabilidade do algoritmo, que atua como um filtro para modelos quantitativos, removendo ocorrências subsequentes de valores de zero (ausência) de precipitação, o que influencia e reduz seu desempenho. O modelo foi desenvolvido com séries temporais de 10 regiões agricolamente relevantes no Brasil; esses locais são os que apresentam as séries temporais mais longas disponíveis e são mais deficientes em previsões climáticas precisas. Foram utilizados 60 anos de temperatura média diária do ar e precipitação acumulada para estimar a evapotranspiração potencial e o balanço hídrico; estas foram as variáveis utilizadas como entrada para as RNAs. A precisão média para todos os períodos acumulados foi de 78% no verão, 71% no inverno, 62% na primavera e 56% no outono. Foi identificado que o efeito da continentalidade, o efeito da altitude e o volume da precipitação normal têm um impacto direto na precisão das RNAs. Os... (Resumo completo, clicar acesso eletrônico abaixo) / Abstract: Precipitation, over short periods of time, is a phenomenon associated with high levels of uncertainty and variability. Given its nature, traditional forecasting techniques are expensive and computationally demanding. This work presents a model to forecast the occurrence of rainfall over short ranges of time with Artificial Neural Networks (ANNs), in accumulated periods of 3 to 7 days for each climatic season, mitigating the need to predict its amount. The premise is to reduce the variance and raise the bias of the data, lowering the responsibility of the model, which acts as a filter for quantitative models by removing subsequent occurrences of zero rainfall values, which bias those models and reduce their performance. The model was developed with time series from 10 agriculturally relevant regions in Brazil; these are the places with the longest available weather time series and the most deficient in accurate climate predictions. Sixty years of daily mean air temperature and accumulated precipitation were available and were used to estimate the potential evapotranspiration and the water balance; these were the variables used as inputs for the ANN models. The mean accuracy of the model for all accumulated periods was 78% in summer, 71% in winter, 62% in spring and 56% in autumn. It was identified that the effect of continentality, the effect of altitude and the volume of normal precipitation have a direct impact on the accuracy of the ANNs. The models have ... (Complete abstract click electronic access below) / Mestre
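As a rough illustration of the rain/no-rain formulation described above, the sketch below aggregates daily series into accumulated periods and trains a small feed-forward classifier. The synthetic data, the hand-picked features, and the use of scikit-learn's MLPClassifier are assumptions for illustration; the thesis works with 60-year observed series and derived evapotranspiration and water-balance inputs.

```python
# Hypothetical sketch of a rain/no-rain classifier over accumulated periods.
# Feature construction, network size, and library choice are illustrative
# assumptions, not the configuration used in the thesis.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def accumulate(daily_temp, daily_rain, window=7):
    """Aggregate daily series into accumulated-period features and labels."""
    X, y = [], []
    for i in range(0, len(daily_rain) - window, window):
        t = daily_temp[i:i + window]
        r = daily_rain[i:i + window]
        X.append([t.mean(), t.max(), t.min(), r[:-1].sum()])  # crude proxies
        y.append(int(r[-1] > 0))  # did it rain at the end of the period?
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
temp = 20 + 5 * rng.standard_normal(60 * 365)                # stand-in for 60 years of data
rain = rng.gamma(0.5, 4.0, 60 * 365) * (rng.random(60 * 365) < 0.3)

X, y = accumulate(temp, rain)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)
print(f"hold-out accuracy: {model.score(X_te, y_te):.2f}")
```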
4

Machine Learning Techniques to Provide Quality of Service in Cognitive Radio Technology

Dhekne, Rucha P. January 2009 (has links)
No description available.
5

O efeito das lesões nas capacidades de memorização e generalização de um perceptron / Effect of lesion on the storage and generalization capabilities of a perceptron

Barbato, Daniela Maria Lemos 08 September 1993 (has links)
Perceptrons são redes neurais sem retroalimentação onde os neurônios estão dispostos em camadas. O perceptron considerado neste trabalho consiste de uma camada de N neurônios sensores Si = ±1; i = 1, ..., N ligados a um neurônio motor δ através das conexões sinápticas (pesos) Wi; i = 1, ..., N cujos valores restringimos a ±1. Utilizando o formalismo de Mecânica Estatística desenvolvido por Gardner (1988), estudamos os efeitos de eliminarmos uma fração de conexões sinápticas (diluição) nas capacidades de memorização e generalização da rede neural descrita acima. Consideramos também o efeito de ruído atuando durante o estágio de treinamento do perceptron. Consideramos dois tipos de diluição: diluição móvel, na qual os pesos são cortados de maneira a minimizar o erro de treinamento, e diluição fixa, na qual os pesos são cortados aleatoriamente. A diluição móvel, que modela lesões em cérebros de pacientes muito jovens, pode melhorar a capacidade de memorização e, no caso da rede ser treinada com ruído, também pode melhorar a capacidade de generalização. Por outro lado, a diluição fixa, que modela lesões em cérebros de pacientes adultos, sempre degrada o desempenho da rede, sendo seu principal efeito introduzir um ruído efetivo nos exemplos de treinamento. / Perceptrons are layered, feed-forward neural networks. In this work we consider a perceptron composed of one input layer with N sensor neurons Si = ±1; i = 1, ..., N which are connected to a single motor neuron δ through the synaptic weights Wi; i = 1, ..., N, which are constrained to take on the values ±1 only. Using the Statistical Mechanics formalism developed by Gardner (1988), we study the effects of eliminating a fraction of synaptic weights (dilution) on the memorization and generalization capabilities of the neural network described above. We also consider the effects of noise acting during the perceptron training stage. We consider two types of dilution: annealed dilution, where the weights are cut so as to minimize the training error, and quenched dilution, where the weights are cut randomly. The annealed dilution, which models brain damage in very young patients, can improve the memorization ability and, in the case of training with noise, it can also improve the generalization ability. On the other hand, the quenched dilution, which models lesions on adult brains, always degrades the performance of the network, its main effect being to introduce an effective noise in the training examples.
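A purely numerical sketch of the quenched (random) dilution effect described above is given below, using a teacher/student perceptron with ±1 weights. The Hebbian student, the sizes, and the simulation itself are illustrative assumptions; the thesis works analytically with Gardner's statistical-mechanics formalism rather than by simulation.

```python
# Hypothetical simulation sketch: effect of random ("quenched") dilution on the
# generalization of a binary-weight perceptron. The thesis treats this
# analytically; this is only a numerical illustration with arbitrary sizes.
import numpy as np

rng = np.random.default_rng(0)
N = 101                                   # number of sensor neurons (odd avoids ties)
teacher = rng.choice([-1, 1], size=N)     # rule to be learned

def generalization_error(student, teacher, n_test=5000):
    """Fraction of random ±1 inputs on which student and teacher disagree."""
    S = rng.choice([-1, 1], size=(n_test, N))
    return np.mean(np.sign(S @ student) != np.sign(S @ teacher))

P = 300                                   # number of training examples
S_train = rng.choice([-1, 1], size=(P, N))
labels = np.sign(S_train @ teacher)
# Student: Hebbian estimate of the teacher, binarized to ±1 weights.
student = np.where(labels @ S_train >= 0, 1, -1)

for dilution in (0.0, 0.2, 0.5, 0.8):
    cut = rng.random(N) < dilution        # quenched: weights removed at random
    diluted = np.where(cut, 0, student)
    err = generalization_error(diluted, teacher)
    print(f"dilution {dilution:.1f}: generalization error {err:.3f}")
```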
6

Automatisk dokumentklassificering med hjälp av maskininlärning / Automated Document Classification using Machine Learning

Dufberg, Johan January 2018 (has links)
Att manuellt hantera och klassificera stora mängder textdokument tar mycket tid och kräver mycket personal; att göra detta med hjälp av maskininlärning är för ändamålet ett alternativ. Det här arbetet önskar ge läsaren en grundläggande inblick i hur automatisk klassificering av texter fungerar, samt ge en lätt sammanställning av några av de vanligt förekommande algoritmerna för ändamålet. De exempel som visas använder sig av artiklar på engelska om teknik- och finansnyheter, men arbetet har avstamp i frågan om mognadsgrad av tekniken för hantering av svenska officiella dokument. Första delen är den vetenskapliga bakgrund som den andra delen vilar på; här beskrivs flera algoritmer och tekniker som sedan används i praktiska exempel. Rapporten ämnar inte beskriva en färdig produkt, utan fungerar som ett ”proof of concept” för textklassificeringens användning. Avslutningsvis diskuteras resultaten från de tester som gjorts, och en av slutsatserna är att när det finns tillräckligt med data kan en enkel klassificerare prestera nästan likvärdigt med en tekniskt sett mer utvecklad och komplex klassificerare. Relateras prestandan hos klassificeraren till tidsåtgången visar detta på att komplexa klassificerare kräver hårdvara med hög beräkningskapacitet och mycket minne för att vara gångbara. / Manually handling and classifying large quantities of text documents takes a lot of time and demands a large staff; using machine learning for this purpose is an alternative. This thesis aims to give the reader a fundamental insight into how automatic classification of texts works and to give a quick overview of some of the most common algorithms used for this purpose. The examples that are shown use news articles in English about tech and finance, but the thesis starts from the question of how mature the technique is for handling official Swedish documents. The first part is the scientific background on which the second part rests; here several algorithms and techniques are described which are then used in practical examples. The report does not aim to describe a finished product, but acts as a "proof of concept" for the use of text classification. Finally, the results from the tests are discussed, and one of the conclusions drawn is that when data is abundant a relatively simple classifier can perform almost on par with a technically more developed and complex classifier. If the performance of the classifier is related to the time taken, this indicates that complex classifiers need hardware with high computational power and a fair amount of memory to be viable.
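To make the "simple versus complex classifier" comparison in the abstract concrete, here is a minimal sketch that pits multinomial Naive Bayes against an RBF-kernel SVM on a TF-IDF representation of a toy tech/finance corpus. The corpus, the two algorithms, and the hyperparameters are illustrative assumptions and are not taken from the thesis's experiments.

```python
# Hypothetical sketch: comparing a simple text classifier with a heavier one on
# a TF-IDF representation. The corpus, algorithms, and settings are illustrative
# assumptions, not the thesis's actual experimental setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

docs = [
    "central bank raises interest rates amid inflation fears",
    "quarterly earnings beat expectations as revenue grows",
    "new smartphone chip promises faster neural inference",
    "open source framework simplifies large language model training",
    "stock markets rally after positive jobs report",
    "startup unveils quantum processor with error correction",
]
labels = ["finance", "finance", "tech", "tech", "finance", "tech"]

simple = make_pipeline(TfidfVectorizer(), MultinomialNB())
heavier = make_pipeline(TfidfVectorizer(), SVC(kernel="rbf", C=10.0))

for name, clf in [("naive bayes", simple), ("rbf svm", heavier)]:
    clf.fit(docs, labels)
    pred = clf.predict(["regulators approve merger of two large banks"])
    print(f"{name}: predicted category = {pred[0]}")
```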
7

Análise de desempenho da rede neural artificial do tipo multilayer perceptron na era multicore

Souza, Francisco Ary Alves de 07 August 2012 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Artificial neural networks are usually applied to solve complex problems. In more complex problems, increasing the number of layers and neurons makes it possible to achieve greater functional efficiency; nevertheless, this leads to greater computational effort. The response time is an important factor in the decision to use neural networks in some systems. Many argue that the computational cost is highest during the training period; however, this phase is carried out only once. Once the network is trained, it is necessary to use the existing computational resources efficiently. In the multicore era, the problem boils down to the efficient use of all available processing cores. However, it is necessary to consider the overhead of parallel computing. In this sense, this work proposes a modular structure that is more suitable for parallel implementations. It is proposed to parallelize the feedforward pass of an MLP-type ANN, implemented with OpenMP on a shared-memory computer architecture. The investigation consists of testing and analyzing execution times; speedup, efficiency and parallel scalability are analyzed. In the proposed approach, by reducing the number of connections between remote neurons, the response time of the network decreases and, consequently, so does the total execution time. The time required for communication and synchronization is directly linked to the number of remote neurons in the network, and so it is necessary to investigate their best distribution. / As redes neurais artificiais geralmente são aplicadas à solução de problemas complexos. Em problemas com maior complexidade, ao aumentar o número de camadas e de neurônios, é possível conseguir uma maior eficiência funcional, porém isto acarreta um maior esforço computacional. O tempo de resposta é um fator importante na decisão de usá-las em determinados sistemas. Muitos defendem que o maior custo computacional está na fase de treinamento. Porém, esta fase é realizada apenas uma única vez. Já treinada, é necessário usar os recursos computacionais existentes de forma eficiente. Diante da era multicore, esse problema se resume à utilização eficiente de todos os núcleos de processamento disponíveis. No entanto, é necessário considerar a sobrecarga existente na computação paralela. Neste sentido, este trabalho propõe uma estrutura modular que é mais adequada para as implementações paralelas. Propõe-se paralelizar o processo feedforward (passo para frente) de uma RNA do tipo MLP, implementada com o OpenMP em uma arquitetura computacional de memória compartilhada. A investigação dar-se-á com a realização de testes e análises dos tempos de execução. A aceleração, a eficiência e a escalabilidade são analisadas. Na proposta apresentada é possível perceber que, ao diminuir o número de conexões entre os neurônios remotos, o tempo de resposta da rede diminui e, por consequência, diminui também o tempo total de execução. O tempo necessário para comunicação e sincronismo está diretamente ligado ao número de neurônios remotos da rede, sendo então necessário observar sua melhor distribuição.
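The abstract describes parallelizing the feedforward pass of a modular MLP with OpenMP on shared memory. The sketch below is only a rough Python/NumPy analogue of that idea (splitting the hidden layer into modules and evaluating them in a thread pool); the sizes, the activation function, and the use of a thread pool instead of OpenMP are assumptions for illustration, not the thesis's implementation.

```python
# Rough analogue (not the thesis's OpenMP/C implementation): evaluate the
# feedforward pass of an MLP by splitting the hidden layer into modules and
# dispatching each module to a worker thread. Sizes are arbitrary.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, n_modules = 512, 1024, 10, 4

W1 = rng.standard_normal((n_hidden, n_in))
W2 = rng.standard_normal((n_out, n_hidden))
x = rng.standard_normal(n_in)

def module_forward(block):
    """Compute the activations of one contiguous block of hidden neurons."""
    lo, hi = block
    return lo, np.tanh(W1[lo:hi] @ x)

blocks = [(i * n_hidden // n_modules, (i + 1) * n_hidden // n_modules)
          for i in range(n_modules)]

hidden = np.empty(n_hidden)
with ThreadPoolExecutor(max_workers=n_modules) as pool:
    for lo, part in pool.map(module_forward, blocks):
        hidden[lo:lo + len(part)] = part

output = W2 @ hidden   # output layer computed serially after synchronization
print(output.shape)
```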
8

O efeito das lesões nas capacidades de memorização e generalização de um perceptron / Effect of lesion on the storage and generalization capabilities of a perceptron

Daniela Maria Lemos Barbato 08 September 1993 (has links)
Perceptrons são redes neurais sem retroalimentação onde os neurônios estão dispostos em camadas. O perceptron considerado neste trabalho consiste de uma camada de N neurônios sensores Si = ±1; i = 1, ..., N ligados a um neurônio motor δ através das conexões sinápticas (pesos) Wi; i = 1, ..., N cujos valores restringimos a ±1. Utilizando o formalismo de Mecânica Estatística desenvolvido por Gardner (1988), estudamos os efeitos de eliminarmos uma fração de conexões sinápticas (diluição) nas capacidades de memorização e generalização da rede neural descrita acima. Consideramos também o efeito de ruído atuando durante o estágio de treinamento do perceptron. Consideramos dois tipos de diluição: diluição móvel, na qual os pesos são cortados de maneira a minimizar o erro de treinamento, e diluição fixa, na qual os pesos são cortados aleatoriamente. A diluição móvel, que modela lesões em cérebros de pacientes muito jovens, pode melhorar a capacidade de memorização e, no caso da rede ser treinada com ruído, também pode melhorar a capacidade de generalização. Por outro lado, a diluição fixa, que modela lesões em cérebros de pacientes adultos, sempre degrada o desempenho da rede, sendo seu principal efeito introduzir um ruído efetivo nos exemplos de treinamento. / Perceptrons are layered, feed-forward neural networks. In this work we consider a perceptron composed of one input layer with N sensor neurons Si = ±1; i = 1, ..., N which are connected to a single motor neuron δ through the synaptic weights Wi; i = 1, ..., N, which are constrained to take on the values ±1 only. Using the Statistical Mechanics formalism developed by Gardner (1988), we study the effects of eliminating a fraction of synaptic weights (dilution) on the memorization and generalization capabilities of the neural network described above. We also consider the effects of noise acting during the perceptron training stage. We consider two types of dilution: annealed dilution, where the weights are cut so as to minimize the training error, and quenched dilution, where the weights are cut randomly. The annealed dilution, which models brain damage in very young patients, can improve the memorization ability and, in the case of training with noise, it can also improve the generalization ability. On the other hand, the quenched dilution, which models lesions on adult brains, always degrades the performance of the network, its main effect being to introduce an effective noise in the training examples.
9

Cybernetic thinking and share-price prediction

Hartley, R. T. January 1974 (has links)
The thesis presents the application of cybernetic thinking to the central problem of investment analysis: that of share-price prediction. Cybernetics is seen as an inter-disciplinary study (as opposed to multi-disciplinary) in which the barriers between living and non-living systems are ignored. Suggestions from two independent studies in investment analysis are taken up and a theory of investment is proposed with a view to utilising the suggestions. The theory is formalised using the simple linear perceptron, and methods based on logical calculi are used to analyse the perceptron formulation. The theory is then tested by allowing the perceptron to make predictions, and the results of these predictions are discussed in the light of the theoretical analysis. Finally, suggestions are made for alternative approaches to investment analysis which could lead to better results.
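Since the theory in this abstract is formalised with the simple linear perceptron, a minimal sketch of such a perceptron applied to up/down share-price prediction follows; the synthetic price series, the return-window features, and the learning-rate setting are illustrative assumptions and not the formulation developed in the thesis.

```python
# Hypothetical sketch of a simple linear perceptron predicting whether a share
# price will rise or fall from recent returns. Data and parameters are
# illustrative; they are not the formulation used in the thesis.
import numpy as np

rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + 0.01 * rng.standard_normal(500))
returns = np.diff(prices) / prices[:-1]

window = 5
X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
y = np.where(returns[window:] > 0, 1, -1)      # +1 = next return is positive

w, b, lr = np.zeros(window), 0.0, 0.1
for _ in range(20):                            # perceptron learning rule
    for xi, yi in zip(X[:400], y[:400]):
        if yi * (w @ xi + b) <= 0:             # misclassified -> update
            w += lr * yi * xi
            b += lr * yi

pred = np.sign(X[400:] @ w + b)
print(f"directional accuracy on held-out data: {(pred == y[400:]).mean():.2f}")
```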
10

Formalized Generalization Bounds for Perceptron-Like Algorithms

Kelby, Robin J. 22 September 2020 (has links)
No description available.
