151

Aplicação de redes neurais na tomada de decisão no mercado de ações. / Application of neural networks in decision making in the stock market.

Jarbas Aquiles Gambogi 29 May 2013
Este trabalho apresenta um sistema de trading que toma decisões de compra e de venda do índice Standard & Poor's 500, na modalidade seguidor de tendência, mediante o emprego de redes neurais artificiais multicamadas com propagação para frente, no período de 5 anos, encerrado na última semana do primeiro semestre de 2012. Geralmente o critério usual de escolha de redes neurais nas estimativas de preços de ativos financeiros é o do menor erro quadrático médio entre as estimativas e os valores observados. Na seleção das redes neurais foi empregado o critério do menor erro quadrático médio na amostra de teste, entre as redes neurais que apresentaram taxas de acertos nas previsões das oscilações semanais do índice Standard & Poor's 500 acima de 60% nessas amostras de teste. Esse critério possibilitou ao sistema de trading superar a taxa anual de retorno das redes neurais selecionadas pelo critério usual e, por larga margem, a estratégia de compre e segure no período. A escolha das variáveis de entrada das redes neurais recaiu entre as que capturaram o efeito da anomalia do momento dos preços do mercado de ações no curto prazo, fenômeno amplamente reconhecido na literatura financeira. / This work presents a trend-following trading system that makes decisions to buy and to sell short the Standard & Poor's 500 Index using multilayer feedforward neural networks, over a 5-year period ending in the last week of the first half of 2012. The usual criterion for choosing neural networks to forecast financial asset prices is the least mean squared error between estimated and observed prices in the test samples. Here, a different criterion was adopted: least mean squared error among only those networks whose hit rate in predicting the weekly change of the Standard & Poor's 500 Index exceeded 60% in the test sample. This criterion proved the more appropriate one: the annual rate of return of the trading system built on it surpassed that of the networks selected by the usual criterion and, by a wide margin, the buy-and-hold strategy over the period. The network input variables were chosen among technical indicators that best capture the short-term price momentum anomaly, a phenomenon widely recognized in the financial literature.
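The two-stage selection rule described above (keep only networks whose weekly hit rate exceeds 60% on the test sample, then pick the least mean squared error among them) is easy to express in code. A minimal sketch, assuming each candidate network has already been evaluated on the test sample; the record layout and names are illustrative, not from the thesis:

```python
# Sketch of the two-stage criterion: filter by hit rate, then rank by MSE.
def select_network(candidates):
    """candidates: list of dicts like {"net": ..., "mse": float, "hit_rate": float}."""
    eligible = [c for c in candidates if c["hit_rate"] > 0.60]
    if not eligible:          # fall back to the usual least-MSE criterion
        eligible = candidates
    return min(eligible, key=lambda c: c["mse"])

# Example with three hypothetical candidates:
candidates = [
    {"net": "A", "mse": 0.012, "hit_rate": 0.58},
    {"net": "B", "mse": 0.015, "hit_rate": 0.63},
    {"net": "C", "mse": 0.014, "hit_rate": 0.66},
]
print(select_network(candidates)["net"])  # -> "C": least MSE among nets above 60%
```

Note that network A, the least-MSE network overall, is rejected by the hit-rate filter; this is exactly where the two criteria diverge.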
152

Oxicorte: estudo da transferência de calor e modelamento por redes neurais artificiais de variáveis do processo. / Oxicutting: heat transfer study and artificial neural network modeling of process variables.

José Pinto Ramalho 01 July 2008
O oxicorte produz superfícies que variam entre um padrão semelhante à usinagem até outro em que o corte é praticamente sem qualidade. Além das condições de equipamentos e habilidade de operadores, estas possibilidades são conseqüências da correta seleção de parâmetros e variáveis de trabalho. O processo baseia-se numa reação química fortemente exotérmica, que gera parte de calor necessário para sua ocorrência juntamente com o restante do calor proveniente da chama do maçarico. A proporção entre estes valores é fortemente dependente, entre outros fatores, da espessura do material utilizado. Este trabalho mostra como calcular a quantidade de energia gerada no oxicorte, com duas metodologias de diferentes autores, estuda de que maneira fatores como a variação da concentração do oxigênio e a temperatura inicial das chapas cortadas podem variar o balanço térmico e simula, com a utilização de Redes Neurais Artificiais, alguns dos dados necessários para a realização destes cálculos. Para isto foram cortadas chapas de aço carbono ASTM A36 de 12,7 a 50,8 mm, com diferentes concentrações de O2 (99,5% e 99,95%) e diferentes temperaturas de pré-aquecimento das chapas (30 e 230±30ºC). As superfícies cortadas foram caracterizadas, os óxidos produzidos identificados e os resultados foram correlacionados com o uso de tratamento matemático e técnicas de inteligência artificial. / Oxygen cutting produces surfaces that vary from a machined-looking finish to one of virtually no quality at all. Besides equipment condition and operator skill, these outcomes result from the correct selection of work parameters and variables. The process is based on a highly exothermic chemical reaction that generates part of the heat needed for its own occurrence, with the rest supplied by the torch flame. The ratio between these contributions depends strongly, among other factors, on the thickness of the material being cut. This work shows how to calculate the amount of energy generated in the cutting process using methodologies from two different authors, studies how factors such as oxygen concentration and the initial temperature of the cut plates can change the heat balance, and simulates, with Artificial Neural Networks, some of the data needed to perform these calculations. ASTM A36 carbon steel plates, from 12.7 to 50.8 mm thick, were cut with different oxygen concentrations (99.5% and 99.95%) and different preheating temperatures (30 and 230 ± 30 °C). The cut surfaces were characterized, the oxides produced were identified, and the results were correlated using mathematical treatment and artificial intelligence techniques.
In carrying out this work, several aspects absent from the literature had to be developed: a methodology for characterizing Fe oxides by X-ray diffraction with the Rietveld method, the use of artificial neural networks to estimate results of the oxygen cutting process, and the comparison between different artificial neural networks; these novel aspects are presented in the seven technical papers published over the course of this work. The results comprise a methodology for analyzing the energy efficiency of the process and techniques that, using artificial intelligence, simulate the behavior of aspects of the process, which in turn makes it possible to simulate the analysis of its energy efficiency.
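The heat-balance discussion lends itself to a worked example. The sketch below estimates only the exothermic term (the energy released when iron oxidizes into the oxide species characterized in the work), using standard formation enthalpies at 298 K from general thermochemistry tables; it is an illustration under those textbook values, not the calculation methodology of either author cited in the abstract:

```python
# Energy released by oxidizing iron into each oxide species.
# Standard enthalpies of formation at 298 K (kJ/mol), textbook values:
H_F = {"FeO": -272.0, "Fe3O4": -1118.4, "Fe2O3": -824.2}
MOLAR_MASS = {"FeO": 71.85, "Fe3O4": 231.53, "Fe2O3": 159.69}  # g/mol

def oxidation_energy_kj(oxide_mass_g):
    """Energy released (kJ) given the mass (g) of each oxide formed."""
    return sum(-H_F[ox] * m / MOLAR_MASS[ox] for ox, m in oxide_mass_g.items())

# Hypothetical oxide masses from one cut (grams of each species):
print(round(oxidation_energy_kj({"FeO": 800.0, "Fe3O4": 150.0, "Fe2O3": 50.0})), "kJ")
```

The full balance then compares this chemical contribution with the heat delivered by the preheating flame, whose ratio, as the abstract notes, shifts with plate thickness.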
153

Redes neurais aplicada no desenvolvimento de modelo para apoio a decisão na terapia antirretroviral em portadores do HIV-1 / Neural networks applied to the development of a decision-support model for antiretroviral therapy in HIV-1 carriers

VIEIRA, Thuany Christine Lessa de Azevedo 15 April 2015
During the last decade, antiretroviral therapy (ART) contributed to the reduction of mortality and morbidity among people infected with HIV-1. However, therapeutic failure, related to the emergence of resistance to antiretrovirals through mutations and/or non-adherence to therapy, is a public health problem. Understanding resistance patterns and the mechanisms behind them is therefore extremely important, enabling the choice of a suitable therapeutic treatment that considers mutation frequency, viral load and CD4+ cell counts across subtypes B and C. The goal of this work is to develop a model based on computational intelligence to support decision making and give better support to the clinical practice and research of those who deal directly with patients. A total of 923 samples were used in this study, obtained from the Laboratory of Molecular Virology of the Federal University of Rio de Janeiro, which belongs to the genotyping network of the Health Ministry. Initially, a study of the mutation profiles of subtypes B and C was carried out. For that, a cohort was selected of patients who entered the system from 1998 onward, with a mutation frequency in the protease of 5% or greater, submitted to a single HAART therapy with only one protease inhibitor, Nelfinavir (NFV), or with no protease inhibitor at all. Fifty simulations were run for each subtype, using the protease sequence positions as input data together with viral load and CD4+ rates. These studies showed that subtype C behaves differently from subtype B, both in viral load (CV) and CD4+ levels and in the number of mutations in the protease gene, a fact that emphasizes the need for subtype-specific treatments by health professionals. Moreover, the model showed satisfactory performance, with a good hit rate. Keywords: antiretroviral treatment, HIV, Artificial Neural Network. / Durante a última década a terapia antirretroviral (TARV) contribuiu para a redução da taxa de mortalidade e morbidade entre as pessoas infectadas pelo HIV-1. Contudo, a falha terapêutica relacionada ao surgimento de resistências aos retrovirais em função das mutações e/ou pela não adesão à terapia antirretroviral é um problema de saúde pública. Torna-se de fundamental importância a compreensão dos padrões de resistências e dos mecanismos a eles associados, possibilitando a escolha de um tratamento terapêutico apropriado que considere a frequência de mutação, quantidade de partículas virais (CV) e células CD4+ entre os subtipos B e C. Portanto, o objetivo desse trabalho é desenvolver um modelo baseado em inteligência computacional para auxiliar a tomada de decisão e proporcionar melhor suporte à prática clínica e de pesquisa daqueles que lidam diretamente com pacientes. Foram utilizadas 923 amostras para esse estudo, obtidas junto ao Laboratório de Virologia Molecular da Universidade Federal do Rio de Janeiro, pertencente à rede de genotipagem do Ministério da Saúde. Inicialmente foi realizado um estudo do perfil de mutações dos subtipos B e C. Para tal foi feito um corte com pacientes com entrada no sistema a partir de 1998, com frequência de mutações na protease maior ou igual a 5% e submetidos a uma única terapia HAART com apenas um inibidor de protease, Nelfinavir (NFV), ou sem nenhum inibidor de protease. Foram realizadas 50 simulações para cada um dos subtipos usando as posições da sequência da protease como dados de entrada juntamente com as taxas de carga viral e CD4+. Através dos estudos foi possível observar que o subtipo C possui caráter diferenciado do subtipo B tanto em nível de CV e CD4+ quanto ao número de mutações no gene da protease, fato esse que enfatiza a necessidade de tratamentos específicos para cada subtipo pelos profissionais da saúde. Além disso, o modelo demonstrou um desempenho satisfatório, possuindo um bom índice de acertos.
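The cohort cut described in the abstract (entry into the system from 1998 onward, protease mutation frequency of at least 5%, and a single HAART regimen with Nelfinavir as the only protease inhibitor or no protease inhibitor at all) translates naturally into a filter over the sample records. A minimal sketch; the field names are hypothetical, not the study's actual data layout:

```python
# Hypothetical record layout; field names are illustrative only.
def in_cohort(rec):
    single_regimen = rec["n_haart_regimens"] == 1
    pi_ok = rec["protease_inhibitors"] in ([], ["NFV"])
    return (rec["entry_year"] >= 1998
            and rec["protease_mutation_freq"] >= 0.05
            and single_regimen and pi_ok)

samples = [
    {"entry_year": 1997, "protease_mutation_freq": 0.08,
     "n_haart_regimens": 1, "protease_inhibitors": ["NFV"]},
    {"entry_year": 2002, "protease_mutation_freq": 0.06,
     "n_haart_regimens": 1, "protease_inhibitors": []},
]
print(len([r for r in samples if in_cohort(r)]))  # -> 1: only the 2002 record passes
```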
154

Bancos de dados geográficos e redes neurais artificiais: tecnologias de apoio à gestão do território. / Geographic data bank and artificial neural network: technologies of support for the territorial management.

José Simeão de Medeiros 27 August 1999
Este trabalho apresenta o desenvolvimento de um instrumento de apoio à gestão territorial, denominado Banco de Dados Geográficos – BDG, constituído de uma base de dados georreferenciadas, de um sistema de gerenciamento de banco de dados, de um sistema de informação geográfica – SIG e de um simulador de redes neurais artificiais – SRNA. O roteiro metodológico adotado permitiu a transposição do Detalhamento da Metodologia para Execução do Zoneamento Ecológico-Econômico pelos Estados da Amazônia Legal para um modelo conceitual materializado no BDG, que serviu de suporte para a criação de uma base de dados geográficos, na qual utilizou-se os conceitos de geo-campos e geo-objetos para modelagem das entidades geográficas definidas. Através deste ambiente computacional foram realizados procedimentos de correção e refinamento dos dados do meio físico e sócio-econômicos, de interpretação de imagens de satélite e análises e combinações dos dados, que permitiram definir unidades básicas de informação do território, a partir das quais foram geradas as sínteses referentes à potencialidade social e econômica, à sustentabilidade do ambiente, aos subsídios para ordenação do território, incluindo orientações à gestão do território na área de estudo localizada no sudoeste do estado de Rondônia. Sobre os dados do meio físico, foram utilizadas duas técnicas de análise geográfica: álgebra de mapas e rede neural artificial, que produziram cenários relativos à vulnerabilidade natural à erosão. A análise das matrizes de erros obtidas da tabulação cruzada entre os cenários, revelou uma boa exatidão global (acima de 90%) entre os cenários obtidos através da modelagem via álgebra de mapas e via rede neural artificial e, uma exatidão global regular (em torno de 60%), quando foram comparados os cenários obtidos via álgebra de mapas e via rede neural artificial com o cenário obtido através de procedimentos manuais. / This work presents the development of a tool to support land management called the Geographical Data Base (GDB), formed by a georeferenced database, a database management system (DBMS), a geographic information system (GIS) and an artificial neural network simulator (ANNS). The methodological approach allowed the conceptual modelling, within the GDB, of the methodology of the ZEE (Ecological-Economic Zoning) institutional program, using both field and object concepts to model the geographic entities. Using this computational framework, natural and socio-economic data were corrected and improved, and procedures of satellite image interpretation using image processing techniques and of analysis and data manipulation using GIS tools were accomplished. These procedures made it possible to define basic units of mapping and to obtain the following syntheses for the study area, located in the southwest of the State of Rondônia: social and economic potential, environmental vulnerability, environmental sustainability, land management maps, and guidelines for land management. With the abiotic and biotic data, two different geographical inference methods were used to produce the environmental vulnerability map: a) the common Map Algebra approach and b) an Artificial Neural Network approach, a technique able to deal with the non-linearities involved in inferential processes. Error matrices were computed by cross-tabulating the scenarios obtained from these inference methods. A good global accuracy (over 90%) was obtained when the ANN and Map Algebra scenarios were compared. Medium global accuracies (around 60%) were obtained when the ANN and Map Algebra scenarios were compared with the scenario obtained by manual procedures.
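The comparison above rests on global accuracy computed from an error (confusion) matrix obtained by cross-tabulating two maps cell by cell. A minimal sketch of that computation; the matrix values are invented for illustration:

```python
import numpy as np

def global_accuracy(error_matrix):
    """Overall agreement: sum of the diagonal over the grand total."""
    m = np.asarray(error_matrix, dtype=float)
    return np.trace(m) / m.sum()

# Hypothetical 3-class cross-tabulation between two vulnerability maps
# (rows: classes in map A; columns: the same classes in map B; cell counts):
m = [[900,  40,  10],
     [ 30, 850,  20],
     [ 15,  25, 610]]
print(f"{global_accuracy(m):.1%}")  # -> 94.4%, in the "good" range reported above
```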
155

Jämförelse mellan neurala nätverk baserad AI och state-of-the-art AI i racing spel / Comparison between neural network based AI and state-of-the-art AI in racing games

Karlsson, Simon; Jensen, Christopher January 2013
This report compares the performance of state-of-the-art AI bots in the racing game TORCS with an AI bot that drives using an artificial neural network (the ANN bot). The ANN bot, implemented as part of this work, uses a feedforward architecture and backpropagation for learning. A separate program for training the neural network on data recorded from TORCS was also implemented. AI bots that had been entered in a competition were used as the state-of-the-art bots. The four AI bots were tested on eight different tracks, and data on lap times and speeds were recorded and compiled. The results show that on the tracks the ANN bot manages to drive around, it is faster than the slowest state-of-the-art bot, but it cannot handle the majority of the tracks it was tested on. The reason for this was probably a lack of varied training data.
156

Neural Networks for Part-of-Speech Tagging

Strandqvist, Wiktor January 2016
The aim of this thesis is to explore the viability of artificial neural networks using a purely contextual word representation as a solution for part-of-speech tagging. Furthermore, the effects of deep learning and of increased contextual information are explored. This was achieved by creating an artificial neural network written in Python, with input vectors created by Word2Vec. The system was compared, with respect to accuracy and precision, to a baseline tagger using handcrafted features. The results show that artificial neural networks using a purely contextual word representation show promise, but ultimately fall roughly two percent short of the baseline; the suspected reason is the suboptimal representation of rare words. Deeper network architectures show an insignificant improvement, indicating that the data sets used might be too small. Additional context information provided higher accuracy, but the gains started to decline beyond a context size of one.
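A purely contextual word representation of the kind described here can be built by concatenating the Word2Vec vectors of the surrounding words while excluding the target word itself. A minimal sketch; the embedding lookup and window handling are assumptions for illustration, not necessarily the thesis's implementation:

```python
import numpy as np

DIM = 100  # assumed Word2Vec dimensionality

def embed(word, table):
    """Look up a word vector; zeros stand in for out-of-vocabulary or padding."""
    return table.get(word, np.zeros(DIM))

def context_features(tokens, i, table, context_size=1):
    """Concatenate embeddings of the neighbours of tokens[i], excluding tokens[i]."""
    parts = []
    for offset in range(-context_size, context_size + 1):
        if offset == 0:
            continue  # purely contextual: the target word is never included
        j = i + offset
        word = tokens[j] if 0 <= j < len(tokens) else "<pad>"
        parts.append(embed(word, table))
    return np.concatenate(parts)  # input vector for the tagger network

toy_table = {"the": np.ones(DIM), "sat": np.full(DIM, 2.0)}
x = context_features(["the", "cat", "sat"], 1, toy_table)
print(x.shape)  # -> (200,): left and right neighbour with context_size=1
```

Growing `context_size` widens the input vector, which matches the observation above that accuracy gains taper off beyond a context size of one.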
157

Pattern Recognition applied to a Continuous Integration system.

VANGALA, SHIVAKANTHREDDY January 2018
Context: This thesis focuses on regression testing in a continuous integration environment: integration testing that ensures that changes made in new development code do not introduce new faults into the software product. Continuous integration is a software development practice that integrates all development, testing, and deployment activities. In continuous integration, regression testing is done by manually selecting and prioritizing test cases from a larger set. The main challenge with manual selection and prioritization is that needed test cases are sometimes left out of the selected subset, because testers did not include them when designing the hourly regression test suite for a particular feature. Ericsson, in whose environment this thesis was conducted, therefore aims at improving its test case selection and prioritization in regression testing using pattern recognition. Objectives: This thesis proposes prediction models, based on pattern recognition algorithms, that predict future test case failures from historical data. This helps improve the quality of the continuous integration environment by selecting an appropriate subset of test cases for regression testing. Several candidate pattern recognition algorithms are promising for predicting test case failures; based on the characteristics of the data collected at Ericsson, suitable algorithms were selected and predictive models were built. Finally, two predictive models were evaluated and the better-performing model was integrated into the continuous integration system. Methods: The experiment research method was chosen because the discovery of cause-and-effect relationships between dependent and independent variables can be used to evaluate the predictive models. The experiment was conducted in RStudio, which facilitates training the predictive models on historical continuous integration data. The predictive ability of the algorithms was evaluated using prediction accuracy metrics. Results: After implementing two predictive models (a neural network and k-nearest neighbors) on the continuous integration data, the neural network achieved a prediction accuracy of 75.3%, and k-nearest neighbors 67.75%. Conclusions: This research investigated the feasibility of an adaptive, self-learning test machinery based on pattern recognition in a continuous integration environment, to improve test case selection and prioritization in regression testing. The neural network proved more effective at predicting failing test cases, at 75.3% versus 67.75% for k-nearest neighbors. A predictive model can only make continuous integration fully efficient if its predictions are highly accurate; at 75.3%, the model does not yet make the system more efficient than the present static test case selection and prioritization, since roughly 25% of predictions are still wrong. This research can therefore only conclude that the neural network currently has 75.3% prediction capability; in the future, as more data becomes available, this may approach 100%. The present Ericsson continuous integration system also needs to improve its storage of historical data: at present it can only store 30 days, while the predictive models require large amounts of data to predict well. Ericsson currently uses the Jenkins automation server for continuous integration; other automation servers such as TeamCity, Travis CI, GoCD, and CircleCI can store more than 30 days of data, and adopting one of them would mitigate the storage problem.
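The evaluation described (training two predictive models on historical test-case outcomes and comparing their prediction accuracy) looks roughly like the following. The thesis worked in RStudio on Ericsson's non-public history; this sketch uses Python's scikit-learn with synthetic data standing in for that history:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for CI history: per-test-case features (e.g. recent
# failure rate, code churn, time since last run); label = fails on next run.
X = rng.random((2000, 3))
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.random(2000) > 0.5).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [("neural network", MLPClassifier(max_iter=1000, random_state=0)),
          ("k-nearest neighbors", KNeighborsClassifier(n_neighbors=5))]
for name, model in models:
    print(name, f"{model.fit(X_tr, y_tr).score(X_te, y_te):.1%}")
```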
158

An Exploratory Comparison of B-RAAM and RAAM Architectures

Kjellberg, Andreas January 2003
Artificial intelligence is a broad research area, and there are many different reasons to study it. One of the main reasons is to understand how information might be represented in the human brain. The Recursive Auto-Associative Memory (RAAM) is a connectionist architecture that has been used with some success for that purpose, since it develops compact distributed representations for compositional structures. Many extensions to the RAAM architecture have been developed over the years to improve its performance; Bi-coded RAAM (B-RAAM) is one of them. In this work a modified B-RAAM architecture is tested and compared to RAAM regarding training speed, ability to learn with smaller internal representations, and generalization ability. The internal representations of the two network models are also analyzed and compared, and some theoretical aspects of B-RAAM are discussed. It is found that B-RAAM trains considerably faster than RAAM; on the other hand, RAAM learns better with smaller internal representations and generalizes better than B-RAAM. It is also shown that the extracted internal representations of RAAM reveal more structural information than those of B-RAAM. This was shown by hierarchically clustering the internal representations and analysing the tree structures. In addition, the justifiability of labelling B-RAAM an extension of RAAM is discussed.
159

Empirical Evaluation of Approaches for Digit Recognition

Joosep, Henno January 2015
Optical Character Recognition (OCR) is a well-studied subject with various application areas. OCR results in various limited problem areas are promising; however, building a highly accurate OCR application is still problematic in practice. This thesis discusses the problem of recognizing and confirming Bingo lottery numbers from a real lottery field, and a prototype for an Android phone is implemented and evaluated. The OCR library Tesseract and two Artificial Neural Network (ANN) approaches are compared in an experiment and discussed. The results show that training a neural network for each number gives slightly higher accuracy than Tesseract.
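Training a neural network for each number, as the result above describes, suggests a one-vs-rest arrangement: ten small networks, each answering the question "is this my digit?", with the highest-scoring network winning. A minimal sketch on scikit-learn's bundled digits dataset, which stands in here for the thesis's Bingo-field images:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One small network per digit, each trained as "this digit vs the rest".
nets = {d: MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
            .fit(X_tr, (y_tr == d).astype(int)) for d in range(10)}

def predict(x):
    scores = {d: net.predict_proba(x.reshape(1, -1))[0, 1] for d, net in nets.items()}
    return max(scores, key=scores.get)  # the most confident network wins

preds = np.array([predict(x) for x in X_te])
print(f"accuracy: {(preds == y_te).mean():.1%}")
```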
160

Grid-Based RFID Indoor Localization Using Tag Read Count and Received Signal Strength Measurements

Jeevarathnam, Nanda Gopal 26 October 2017
Passive ultra-high frequency (UHF) radio frequency identification (RFID) systems have gained immense popularity in recent years for their wide-scale industrial applications in inventory tracking and management. In this study, we explore the potential of passive RFID systems for indoor localization by developing a grid-based experimental framework using two standard, easily measurable performance metrics: received signal strength indicator (RSSI) and tag read count (TRC). We create scenarios imitating real-life challenges, such as placing metal objects and other RFID tags in two different read fields (symmetric and asymmetric), to analyze their impact on localization accuracy. We study the prediction potential of RSSI and TRC both independently and together. We demonstrate that each signal metric alone can localize with sufficient accuracy, while the best performance is obtained when both metrics are used together as inputs to an artificial neural network, especially in the more challenging scenarios. Experimental results show an average error as low as 0.286 (where the distance between consecutive grid points is defined as unity), which satisfies the grid-based localization benchmark of less than 0.5.
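Using both metrics together, as described above, means feeding the per-antenna RSSI and TRC readings into a single network that regresses the tag's grid coordinates. A minimal sketch with synthetic readings; the antenna layout, signal model, and network configuration are assumptions for illustration, not the study's actual setup:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
antennas = np.array([[0, 0], [0, 4], [4, 0], [4, 4]], dtype=float)
# Tags at integer grid positions on a 5x5 grid; RSSI falls off with
# distance to each antenna, and TRC loosely follows signal strength.
pos = rng.integers(0, 5, size=(1500, 2)).astype(float)
dist = np.linalg.norm(pos[:, None, :] - antennas[None, :, :], axis=2)
rssi = -50.0 - 3.0 * dist + rng.normal(0, 0.5, dist.shape)
trc = np.clip(20.0 + rssi / 4.0 + rng.normal(0, 1.0, rssi.shape), 0, None)
X = np.hstack([rssi, trc])  # 8 features: RSSI and TRC for each of 4 antennas

X_tr, X_te, y_tr, y_te = train_test_split(X, pos, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
err = np.linalg.norm(net.predict(X_te) - y_te, axis=1)
print(f"mean error: {err.mean():.3f} grid units")  # benchmark from above: < 0.5
```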
