131

Rozpoznávání pojmenovaných entit pomocí neuronových sítí / Neural Network Based Named Entity Recognition

Straková, Jana January 2017 (has links)
Title: Neural Network Based Named Entity Recognition Author: Jana Straková Institute: Institute of Formal and Applied Linguistics Supervisor of the doctoral thesis: prof. RNDr. Jan Hajič, Dr., Institute of Formal and Applied Linguistics Abstract: Czech named entity recognition (the task of automatic identification and classification of proper names in text, such as names of people, locations and organizations) has become a well-established field since the publication of the Czech Named Entity Corpus (CNEC). This doctoral thesis presents the author's research on named entity recognition, mainly in the Czech language. It describes work and research carried out during the publication and evaluation of CNEC. It further covers the author's research results, which improved the Czech state of the art in named entity recognition in recent years, with special focus on solutions based on artificial neural networks. Starting with a simple feed-forward neural network with a softmax output layer and a standard set of classification features for the task, the thesis presents the methodology and results that were later used in the open-source software solution for named entity recognition, NameTag. The thesis concludes with a recurrent neural network based recognizer with word embeddings and character-level word embeddings,...
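
As a rough, hedged illustration of the starting point this abstract describes — a feed-forward network with a softmax output layer over per-token classification features — here is a minimal numpy sketch on synthetic data. The feature vectors, layer sizes and labels are invented stand-ins, not CNEC data or NameTag internals.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: each token is a feature vector (capitalization flags,
    # suffix indicators, gazetteer hits, ...); labels are entity classes
    # such as O/PER/LOC/ORG. All of this is synthetic.
    n_features, n_hidden, n_classes, n_tokens = 20, 16, 4, 500
    X = rng.normal(size=(n_tokens, n_features))
    y = (X @ rng.normal(size=(n_features, n_classes))).argmax(axis=1)

    # One tanh hidden layer, softmax output -- the architecture named in
    # the abstract, not the author's exact configuration.
    W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, n_classes))

    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    for epoch in range(200):
        h = np.tanh(X @ W1)                    # hidden activations
        p = softmax(h @ W2)                    # per-class probabilities
        g = p.copy()
        g[np.arange(n_tokens), y] -= 1.0       # d(cross-entropy)/d(logits)
        g /= n_tokens
        gW2 = h.T @ g                          # backprop, plain SGD
        gW1 = X.T @ ((g @ W2.T) * (1 - h ** 2))
        W1 -= 0.5 * gW1
        W2 -= 0.5 * gW2

    pred = softmax(np.tanh(X @ W1) @ W2).argmax(axis=1)
    print("token accuracy:", (pred == y).mean())
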
132

Predictive models for career progression

Soliman, Zakaria 08 1900 (has links)
No description available.
133

Réseaux de neurones, SVM et approches locales pour la prévision de séries temporelles / Neural Networks, SVM and Local Approaches for Time Series Forecasting

Cherif, Aymen 16 July 2013 (has links)
Time series forecasting is a problem that has been studied for many years, with applications in fields such as finance, medicine and transportation. In this thesis, we focus on machine learning methods: neural networks and SVM. We are also interested in meta-methods that improve predictor performance, and more specifically in the local approach. In a divide-and-conquer strategy, local approaches cluster the data before assigning a predictor to each resulting subset. We present a modification of the training algorithm for recurrent neural networks that adapts them for use as local predictors. We also propose two novel clustering techniques suitable for local models: the first based on Kohonen maps, and the second on binary trees.
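
The local, divide-and-conquer approach this abstract describes — cluster first, then train one predictor per cluster — can be sketched in a few lines. Assumptions below: plain k-means and per-cluster linear autoregressors stand in for the thesis's Kohonen-map and binary-tree clusterings and recurrent-network predictors, and the two-regime series is synthetic.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic series with two alternating regimes -- the kind of data
    # where a single global predictor underperforms.
    t = np.arange(2000)
    series = np.where((t // 200) % 2 == 0, np.sin(0.1 * t), 0.5 * np.cos(0.3 * t))
    series = series + 0.05 * rng.normal(size=t.size)

    # Lag-vector embedding: predict x[t] from the previous `lag` values.
    lag = 8
    X = np.stack([series[i:i + lag] for i in range(series.size - lag)])
    y = series[lag:]

    # Plain k-means over lag vectors (just the simplest stand-in for the
    # clustering step of the local approach).
    k = 4
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(20):
        assign = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.stack([X[assign == j].mean(0) if (assign == j).any()
                            else centers[j] for j in range(k)])

    # One linear autoregressive predictor per cluster: divide and conquer.
    models = [np.linalg.lstsq(X[assign == j], y[assign == j], rcond=None)[0]
              if (assign == j).any() else np.zeros(lag)
              for j in range(k)]
    pred = np.array([x @ models[a] for x, a in zip(X, assign)])
    print("local-model MSE:", ((pred - y) ** 2).mean())
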
134

[en] A DEPENDENCY TREE ARC FILTER / [pt] UM FILTRO PARA ARCOS EM ÁRVORES DE DEPENDÊNCIA

RENATO SAYAO CRYSTALLINO DA ROCHA 13 December 2018 (has links)
[en] The Natural Language Processing task of Dependency Parsing consists of analyzing the grammatical structure of a sentence written in natural language, aiming to learn, identify and extract information related to its dependency structure. This data can be structured as a tree, since every word in a sentence has a head-dependent relation to another word in the same sentence. Since Dependency Parsing is used in many applications such as Machine Translation, Semantic Role Labeling and Part-Of-Speech Tagging, researchers aiming to improve the accuracy of their models approach this task in many different ways. One approach treats the task as a token classification problem, using a different classifier for each sub-task and joining their results incrementally. The sub-tasks consist of classifying, for each head-dependent pair, the Part-Of-Speech tag of the head, the relative position between the two words, and the distance between them. However, previous research using this approach shows that the bottleneck lies in the distance classifier. Recurrent Neural Networks are a kind of neural network that works over sequences of vectors, allowing for classification problems where both the input and the output are sequential, which makes them a natural choice for the problem at hand. This work studies the use of Recurrent Neural Networks, specifically Long Short-Term Memory networks, for the head-dependent distance classifier sub-task, framed as a sequence-to-sequence classification problem. For empirical evaluation, this work follows the line of previous research and uses the Portuguese corpus of the Conference on Computational Natural Language Learning 2006 Shared Task. The resulting model attains 95.27 percent precision, which is better than previous results obtained with incremental models.
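
A minimal sketch of the distance sub-task framed as sequence-to-sequence classification, assuming PyTorch. The bidirectional LSTM tagger, the vocabulary size and the nine distance buckets are illustrative choices, not the dissertation's exact setup.

    import torch
    import torch.nn as nn

    # Each token gets one (bucketed) head-dependent distance class; the
    # vocabulary size, dimensions and 9 buckets are illustrative.
    vocab, emb, hidden, n_classes = 1000, 32, 64, 9

    class DistanceTagger(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(vocab, emb)
            self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, n_classes)

        def forward(self, tokens):                  # tokens: (batch, seq_len)
            h, _ = self.lstm(self.embed(tokens))    # (batch, seq_len, 2*hidden)
            return self.out(h)                      # class scores per token

    model = DistanceTagger()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    tokens = torch.randint(0, vocab, (8, 20))       # random stand-in batch
    gold = torch.randint(0, n_classes, (8, 20))     # stand-in distance labels
    logits = model(tokens)
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, n_classes), gold.reshape(-1))
    loss.backward()
    opt.step()
    print("batch loss:", float(loss))
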
135

Recurrent neural network language generation for dialogue systems

Wen, Tsung-Hsien January 2018 (has links)
Language is the principal medium for ideas, while dialogue is the most natural and effective way for humans to interact with and access information from machines. Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact on usability and perceived quality. Many commonly used NLG systems employ rules and heuristics, which tend to generate inflexible and stylised responses without the natural variation of human language. However, the frequent repetition of identical output forms can quickly make a dialogue tedious for most real-world users. Additionally, these rules and heuristics are not scalable and hence not trivially extensible to other domains or languages. A statistical approach to language generation can learn language decisions directly from data without relying on hand-coded rules or heuristics, which brings scalability and flexibility to NLG. Statistical models also provide an opportunity to learn in-domain human colloquialisms and cross-domain model adaptations. A robust and quasi-supervised NLG model is proposed in this thesis. The model leverages a Recurrent Neural Network (RNN)-based surface realiser and a gating mechanism applied to input semantics. The model is motivated by the Long Short-Term Memory (LSTM) network. The RNN-based surface realiser and gating mechanism use a neural network to learn end-to-end language generation decisions from input dialogue act and sentence pairs; it also integrates sentence planning and surface realisation into a single optimisation problem. The single optimisation not only bypasses the costly intermediate linguistic annotations but also generates more natural and human-like responses. Furthermore, a domain adaptation study shows that the proposed model can be readily adapted and extended to new dialogue domains via a proposed recipe. Continuing the success of end-to-end learning, the second part of the thesis speculates on building an end-to-end dialogue system by framing it as a conditional generation problem. The proposed model encapsulates a belief tracker with a minimal state representation and a generator that takes the dialogue context to produce responses. These features suggest comprehension and fast learning. The proposed model is capable of understanding requests and accomplishing tasks after training on only a few hundred human-human dialogues. A complementary Wizard-of-Oz data collection method is also introduced to facilitate the collection of human-human conversations from online workers. The results demonstrate that the proposed model can talk to human judges naturally, without any difficulty, for a sample application domain. In addition, the results also suggest that the introduction of a stochastic latent variable can help the system model intrinsic variation in communicative intention much better.
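
The gating mechanism applied to input semantics can be illustrated with an untrained toy recurrence: a one-hot dialogue-act vector is progressively "consumed" by a reading gate as the realiser emits words, so slots are not repeated. This is a hedged sketch of the mechanism only — weights are random and no actual generation is performed.

    import numpy as np

    rng = np.random.default_rng(2)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # The dialogue-act (DA) vector marks which slots (name, food, area, ...)
    # still need to be expressed; a per-slot reading gate r decays it step
    # by step, so content already realised is not generated again.
    n_slots, n_hidden, n_steps = 6, 16, 10
    d = np.ones(n_slots)                         # all slots still unexpressed
    h = np.zeros(n_hidden)
    Wr = rng.normal(scale=0.3, size=(n_hidden, n_slots))   # random, untrained
    Wh = rng.normal(scale=0.3, size=(n_hidden, n_hidden))
    Wd = rng.normal(scale=0.3, size=(n_slots, n_hidden))

    for t in range(n_steps):
        r = sigmoid(h @ Wr)                      # reading gate per slot
        d = r * d                                # consume semantic content
        h = np.tanh(h @ Wh + d @ Wd)             # toy recurrence fed by the DA
        print(f"step {t}: remaining DA mass = {d.sum():.3f}")
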
136

Métodos Neuronais para a Solução da Equação Algébrica de Riccati e o LQR / Neural Methods for the Solution of the Algebraic Riccati Equation and the LQR

Silva, Fabio Nogueira da 20 June 2008 (has links)
We present in this work results on two neural network methods for solving the algebraic Riccati equation (ARE), which arises in many applications, most notably the Linear Quadratic Regulator (LQR) and H2 and H∞ control. The real symmetric form of the ARE is presented first, followed by two methods based on neural computation: a feedforward neural network (FNN) that defines an error function over the ARE, and a recurrent neural network (RNN) that converts an optimization problem constrained by the state-space model into an unconstrained convex optimization problem, defining an energy function in terms of the ARE and the Cholesky factor. A procedure is proposed for choosing the learning parameters of the RNN used to solve the ARE, by mapping a surface over the parameter variations, so that the neural network can be tuned for better performance. Computational experiments with perturbations of the plant matrices of the tested systems were performed to analyze the behavior of the presented methodologies, which are based on homotopy methods with a well-chosen initial condition, and the results were compared to the Schur method. Two 6th-order systems were used: a Doubly Fed Induction Generator (DFIG) and an aircraft plant. The results show the RNN to be a good alternative compared with the FNN and Schur methods.
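
The FNN idea of defining an error as a function of the ARE amounts to gradient descent on the squared Frobenius norm of the Riccati residual. Below is a numpy sketch on a random, strongly stable toy plant (not the DFIG or aircraft systems used in the dissertation), checked against SciPy's Schur-based solver.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    rng = np.random.default_rng(3)

    # Random, strongly stable toy plant.
    n, m = 4, 2
    A = rng.normal(size=(n, n)) - 3.0 * np.eye(n)
    B = rng.normal(size=(n, m))
    Q, R = np.eye(n), np.eye(m)
    S = B @ np.linalg.inv(R) @ B.T

    def residual(P):
        """ARE residual: zero exactly at a solution."""
        return A.T @ P + P @ A - P @ S @ P + Q

    # Gradient descent on E(P) = 0.5 * ||residual(P)||_F^2 -- the "error as
    # a function of the ARE" idea. For symmetric P the gradient below is
    # exact; with this strongly stable A and P0 = I the flow settles on the
    # stabilizing root, though in general it could reach another zero.
    P = np.eye(n)
    lr = 1e-3
    for _ in range(20000):
        Rm = residual(P)
        grad = A @ Rm + Rm @ A.T - Rm @ P @ S - S @ P @ Rm
        P -= lr * grad

    print("residual norm:", np.linalg.norm(residual(P)))
    print("distance to Schur solution:",
          np.linalg.norm(P - solve_continuous_are(A, B, Q, R)))
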
137

Toward a brain-like memory with recurrent neural networks

Salihoglu, Utku 12 November 2009 (has links)
For the last twenty years, several assumptions have been expressed in the fields of information processing, neurophysiology and cognitive sciences. First, neural networks and their dynamical behaviour in terms of attractors are the natural way adopted by the brain to encode information. Any information item to be stored in the neural network should be coded in one of the dynamical attractors of the brain, and retrieved by stimulating the network to trap its dynamics in the desired item's basin of attraction. The second view shared by neural network researchers is to base the learning of the synaptic matrix on a local Hebbian mechanism. The third assumption is the presence of chaos and the benefit gained by its presence. Chaos, although very simply produced, inherently possesses an infinite number of cyclic regimes that can be exploited for coding information. Moreover, the network randomly wanders around these unstable regimes in a spontaneous way, rapidly proposing alternative responses to external stimuli and easily switching from one of these potential attractors to another in response to any incoming stimulus. Finally, since their introduction sixty years ago, cell assemblies have proved to be a powerful paradigm for brain information processing. After their introduction in artificial intelligence, cell assemblies became commonly used in computational neuroscience as a neural substrate for content-addressable memories.

Based on these assumptions, this thesis provides a computer model of a neural network simulation of a brain-like memory. It first shows experimentally that the more information is stored in robust cyclic attractors, the more chaos appears as a regime in the background, erratically itinerating among brief appearances of these attractors. Chaos does not appear to be the cause, but the consequence of the learning. However, it appears to be a helpful consequence that widens the network's encoding capacity. To learn the information to be stored, two supervised iterative Hebbian learning algorithms are proposed. One leaves the semantics of the attractors to be associated with the feeding data unprescribed, while the other defines it a priori. Both algorithms show good results, even though the first one is more robust and has a greater storage capacity. Building on these promising results, a biologically plausible alternative to these algorithms is proposed, using cell assemblies as the substrate for information. Even though this idea is not new, the mechanisms underlying cell assembly formation are poorly understood and, so far, there are no biologically plausible algorithms that can explain how external stimuli can be stored online in cell assemblies. This thesis provides such a solution, combining fast Hebbian/anti-Hebbian learning of the network's recurrent connections for the creation of new cell assemblies with a slower feedback signal that stabilizes the cell assemblies by learning the feed-forward input connections. This last mechanism is inspired by the retroaxonal hypothesis.
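
The first two assumptions — items stored as dynamical attractors, retrieved by trapping the dynamics in the right basin, and learned with a local Hebbian rule — are exactly the classic Hopfield construction, sketched below. This illustrates only that premise, not the chaotic itinerancy or cell-assembly mechanisms the thesis develops.

    import numpy as np

    rng = np.random.default_rng(4)

    # Store random +/-1 patterns as attractors with a one-shot local Hebbian
    # rule, then retrieve one from a corrupted cue by relaxing the dynamics.
    n, n_patterns = 100, 5
    patterns = rng.choice([-1, 1], size=(n_patterns, n))
    W = (patterns.T @ patterns).astype(float) / n   # local Hebbian learning
    np.fill_diagonal(W, 0.0)

    cue = patterns[0].copy()
    cue[rng.choice(n, size=20, replace=False)] *= -1   # corrupt 20% of bits

    state = cue
    for _ in range(10):                             # fall into the basin
        state = np.sign(W @ state)
        state[state == 0] = 1
    print("overlap with stored pattern:", (state == patterns[0]).mean())
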
138

SOLVING PREDICTION PROBLEMS FROM TEMPORAL EVENT DATA ON NETWORKS

Hao Sha (11048391) 06 August 2021 (has links)
Many complex processes can be viewed as sequential events on a network. In this thesis, we study the interplay between a network and the event sequences on it. We first focus on predicting events on a known network. Examples include modeling retweet cascades, forecasting earthquakes, and tracing the source of a pandemic. Specifically, given the network structure, we solve two types of problems: (1) forecasting future events based on the historical events, and (2) identifying the initial event(s) based on later observations of the dynamics. The inverse problem of inferring the unknown network topology or links based on the events is also of great importance. Examples along this line include constructing influence networks among Twitter users from their tweets, soliciting new members to join an event based on their participation history, and recommending positions to job seekers according to their work experience. Following this direction, we study two types of problems: (1) recovering influence networks, and (2) predicting links between a node and a group of nodes, from event sequences.
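
One standard model family for sequential events on a network is the multivariate Hawkes process, in which an event at node j temporarily raises the event rate at nodes that j influences. The sketch below simulates such a process with Ogata thinning; it is a generic illustration of the problem setting, not necessarily the models developed in the thesis.

    import numpy as np

    rng = np.random.default_rng(5)

    # Multivariate Hawkes sketch: an event at node j raises the future rate
    # at node i through the influence weight Adj[i, j], decaying at rate
    # beta. Baselines, weights and the graph are random stand-ins.
    n_nodes = 4
    mu = np.full(n_nodes, 0.1)                        # baseline intensities
    Adj = rng.uniform(0, 0.3, size=(n_nodes, n_nodes))
    beta = 1.0

    def intensity(t, events):
        """Event rate of every node at time t, given past (time, node) pairs."""
        lam = mu.copy()
        for s, j in events:                           # all events have s <= t
            lam += Adj[:, j] * beta * np.exp(-beta * (t - s))
        return lam

    # Ogata thinning: between events the intensity only decays, so its
    # current total is a valid upper bound until the next event.
    events, t, horizon = [], 0.0, 20.0
    while True:
        lam_bar = intensity(t, events).sum()
        t += rng.exponential(1.0 / lam_bar)
        if t >= horizon:
            break
        lam = intensity(t, events)
        if rng.uniform() < lam.sum() / lam_bar:       # accept candidate time
            node = int(rng.choice(n_nodes, p=lam / lam.sum()))
            events.append((t, node))
    print(f"simulated {len(events)} events on {n_nodes} nodes")
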
139

Rozpoznávání historických textů pomocí hlubokých neuronových sítí / Convolutional Networks for Historic Text Recognition

Kišš, Martin January 2018 (has links)
The aim of this work is to create a tool for automatic transcription of historical documents. The work focuses mainly on the recognition of texts from the modern period written in the Fraktur typeface. The problem is solved with a newly designed recurrent convolutional neural network and a Spatial Transformer Network. Part of the solution is also an implemented generator of artificial historical texts. Using this generator, an artificial data set is created on which the convolutional neural network for line recognition is trained. This network is then tested on real historical lines of text, on which it achieves a character accuracy of up to 89.0 %. The contribution of this work is primarily the newly designed neural network for text line recognition and the implemented artificial text generator, with which it is possible to train the neural network to recognize real historical lines of text.
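
A minimal sketch of the line-recognition pipeline this abstract outlines — convolutional features over a text-line image, a recurrent layer across the width axis, CTC training — assuming PyTorch. All sizes are illustrative, and the Spatial Transformer Network and the text generator are omitted.

    import torch
    import torch.nn as nn

    n_chars = 80                                  # charset size (+1 CTC blank)

    class CRNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                  # halve height and width
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d((2, 1)),             # halve height only
            )
            self.rnn = nn.LSTM(64 * 8, 128, batch_first=True, bidirectional=True)
            self.out = nn.Linear(256, n_chars + 1)

        def forward(self, img):                   # img: (batch, 1, 32, W)
            f = self.conv(img)                    # (batch, 64, 8, W/2)
            f = f.permute(0, 3, 1, 2).flatten(2)  # (batch, W/2, 64*8)
            h, _ = self.rnn(f)
            return self.out(h).log_softmax(-1)    # per-column char log-probs

    model = CRNN()
    imgs = torch.randn(4, 1, 32, 128)             # stand-in text-line images
    log_probs = model(imgs).permute(1, 0, 2)      # CTC wants (T, batch, classes)
    targets = torch.randint(1, n_chars + 1, (4, 10))
    loss = nn.CTCLoss(blank=0)(log_probs, targets,
                               torch.full((4,), 64, dtype=torch.long),
                               torch.full((4,), 10, dtype=torch.long))
    loss.backward()
    print("CTC loss:", float(loss))
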
140

Strojový překlad pomocí umělých neuronových sítí / Machine Translation Using Artificial Neural Networks

Holcner, Jonáš January 2018 (has links)
The goal of this thesis is to describe and build a system for neural machine translation. The system is built on recurrent neural networks, specifically the encoder-decoder architecture. The result is an nmt library used to conduct experiments with different model parameters. The results of the experiments are compared with a system built with the statistical translation tool Moses.
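
A bare-bones encoder-decoder sketch of the architecture this thesis builds on, assuming PyTorch. The GRU encoder/decoder, vocabulary sizes and the teacher-forced training step are illustrative; this is not the thesis's nmt library.

    import torch
    import torch.nn as nn

    src_vocab, tgt_vocab, emb, hidden = 1000, 1000, 32, 64

    class Seq2Seq(nn.Module):
        def __init__(self):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, emb)
            self.tgt_emb = nn.Embedding(tgt_vocab, emb)
            self.encoder = nn.GRU(emb, hidden, batch_first=True)
            self.decoder = nn.GRU(emb, hidden, batch_first=True)
            self.out = nn.Linear(hidden, tgt_vocab)

        def forward(self, src, tgt_in):
            _, state = self.encoder(self.src_emb(src))   # sentence summary
            h, _ = self.decoder(self.tgt_emb(tgt_in), state)
            return self.out(h)                            # next-token logits

    model = Seq2Seq()
    src = torch.randint(0, src_vocab, (8, 15))    # stand-in source batch
    tgt = torch.randint(0, tgt_vocab, (8, 12))    # stand-in target batch
    logits = model(src, tgt[:, :-1])              # teacher forcing
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, tgt_vocab),
                                 tgt[:, 1:].reshape(-1))
    loss.backward()
    print("cross-entropy:", float(loss))
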
