21

[en] A DEPENDENCY TREE ARC FILTER / [pt] UM FILTRO PARA ARCOS EM ÁRVORES DE DEPENDÊNCIA

RENATO SAYAO CRYSTALLINO DA ROCHA 13 December 2018 (has links)
[pt] The Natural Language Processing task consists of analyzing natural languages computationally, easing the development of programs able to work with spoken or written data. One of the most important tasks in this field is Dependency Parsing, which analyzes the grammatical structure of sentences in order to extract information about their dependency relations. Within a sentence, these relations form a tree in which all the words are interdependent. Because of its use in a wide range of applications such as Machine Translation and Semantic Role Labeling, this area attracts research with many different approaches aimed at improving the accuracy of the predicted trees. One such approach treats the problem as a token classification task and splits it into three different classifiers, one per sub-task, later joining their results incrementally. The sub-tasks consist of classifying, for each pair of words in a head-dependent relation, the part-of-speech class of the head, the relative position between the two, and the relative distance between the words. Previous work on this approach shows, however, that the bottleneck lies in the third sub-task, predicting the distance between tokens. Recurrent Neural Networks are models that operate on sequences of vectors, making classification problems in which both the input and the output are sequential feasible, which makes them a natural choice for this problem. This work uses Recurrent Neural Networks, specifically Long Short-Term Memory networks, to predict the distance between words in a dependency relation, framed as a sequence-to-sequence classification problem. For the empirical evaluation, this work follows previous research and uses the Portuguese corpus released for the Conference on Computational Natural Language Learning 2006 Shared Task. The resulting model reaches 95.27 percent precision, better than the results obtained by earlier work on the incremental model. / [en] The Natural Language Processing task consists of analyzing the grammatical structure of a sentence written in natural language, aiming to learn, identify and extract information related to its dependency structure. This data can be structured as a tree, since every word in a sentence has a head-dependent relation to another word in the same sentence. Since Dependency Parsing is used in many applications such as Machine Translation, Semantic Role Labeling and Part-Of-Speech Tagging, researchers aiming to improve the accuracy of their models approach this task in many different ways. One approach treats the task as a token classification problem, using a different classifier for each sub-task and joining them incrementally. These sub-tasks consist of classifying, for each head-dependent pair, the Part-Of-Speech tag of the head, the relative position between the two words, and the distance between them. However, previous research using this approach shows that the bottleneck lies in the distance classifier. Recurrent Neural Networks are a kind of neural network that works over sequences of vectors, allowing for classification problems where both the input and the output are sequences, making them a strong choice for the problem at hand. This work studies the use of Recurrent Neural Networks, specifically Long Short-Term Memory networks, for the head-dependent distance classifier sub-task, framed as a sequence-to-sequence classification problem. To evaluate its efficiency, this work follows the line of previous research and makes use of the Portuguese corpus of the Conference on Computational Natural Language Learning 2006 Shared Task. The resulting model attains 95.27 percent precision, which is better than previous results obtained with incremental models.
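The sketch below is not the thesis's implementation; it is a minimal illustration of how the distance sub-task can be framed as sequence-to-sequence token classification with an LSTM in Keras, assuming tokens are already integer-encoded and each token carries a bucketed head-dependent distance label. The vocabulary size, number of distance classes and sentence length are hypothetical.

```python
# Illustrative sketch only -- not the thesis's implementation.
# Frames the distance sub-task as sequence-to-sequence token classification:
# one (bucketed) distance class is predicted for every token in the sentence.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 20_000          # hypothetical vocabulary size
NUM_DISTANCE_CLASSES = 15    # hypothetical number of bucketed distance labels
MAX_LEN = 60                 # hypothetical padded sentence length

model = keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 128, mask_zero=True),
    layers.LSTM(256, return_sequences=True),                 # one output per token
    layers.TimeDistributed(layers.Dense(NUM_DISTANCE_CLASSES, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy data just to show the expected shapes: token ids in, one class label per token out
X = np.random.randint(1, VOCAB_SIZE, size=(32, MAX_LEN))
y = np.random.randint(0, NUM_DISTANCE_CLASSES, size=(32, MAX_LEN))
model.fit(X, y, epochs=1, batch_size=8)
```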
22

[en] FORECASTING EMPLOYMENT AND UNEMPLOYMENT IN US. A COMPARISON BETWEEN MODELS / [pt] PREVENDO EMPREGO E DESEMPREGO NOS EUA. UMA COMPARAÇÃO ENTRE MODELOS

MARCOS LOPES MUNIZ 12 November 2020 (has links)
[pt] Forecasting employment and unemployment is of great importance to virtually every agent in an economy. Employment is one of the main variables analyzed as an economic indicator, and unemployment serves policy makers as a guide for their decisions. In this work, I study which features of the two series can be used to inform the data treatment and the methods employed, in order to improve their predictive power. I compare machine learning (Random Forest and Adaptive Lasso) and deep learning (Long Short-Term Memory) models, seeking to capture the non-linearities and dynamics of both series. The results suggest that an AR model with a Random Forest applied to the residuals, as a way of separating the linear and non-linear parts, is the best model for employment forecasting, while Random Forest and AdaLasso with a Random Forest applied to the residuals are the best for unemployment. / [en] Forecasting employment and unemployment is of great importance for virtually all agents in the economy. Employment is one of the main variables analyzed as an economic indicator, and unemployment serves policy makers as a guide to their actions. In this essay, I study which features of both series can be used in the data treatment and in the methods employed, in order to add to the forecasting predictive power. Using an AR model as a benchmark, I compare machine learning (Random Forest and Adaptive Lasso) and deep learning (Long Short-Term Memory) methods, seeking to capture the non-linearities of both series' dynamics. The results suggest that an AR model with a Random Forest on the residuals (as a way to separate the linear and non-linear parts) is the best model for employment forecasting, while Random Forest and AdaLasso with a Random Forest on the residuals were the best for unemployment forecasting.
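As a rough illustration of the residual-splitting idea described above (a linear AR model plus a Random Forest fitted to its residuals), the following sketch uses synthetic data; it is not the thesis's code, and the series and lag order are made up.

```python
# Illustrative sketch only -- not the thesis's code. Fits a linear AR model, then a
# Random Forest on its residuals, so the forest only needs to capture the non-linear
# part; the two forecasts are added. The series and lag order are synthetic.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300))          # hypothetical monthly employment series
p = 4                                        # hypothetical AR lag order

# 1) Linear part: AR(p) model and its in-sample residuals
ar = AutoReg(y, lags=p).fit()
resid = ar.resid                             # residuals for observations p..end

# 2) Non-linear part: Random Forest on the lagged values, predicting the AR residual
X = np.column_stack([y[p - k - 1 : -k - 1] for k in range(p)])   # lags 1..p
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, resid)

# 3) Combined one-step-ahead forecast = AR forecast + predicted residual
ar_next = ar.forecast(steps=1)[0]
rf_next = rf.predict(y[-p:][::-1].reshape(1, -1))[0]
print("hybrid forecast:", ar_next + rf_next)
```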
23

Exploration and Evaluation of RNN Models on Low-Resource Embedded Devices for Human Activity Recognition / Undersökning och utvärdering av RNN-modeller på resurssvaga inbyggda system för mänsklig aktivitetsigenkänning

Björnsson, Helgi Hrafn, Kaldal, Jón January 2023 (has links)
Human activity data is typically represented as time series, and RNNs, often with LSTM cells, are commonly used for recognition in this field. However, RNNs and LSTM-RNNs are often too resource-intensive for real-time applications on resource-constrained devices, making them unsuitable. This thesis project was carried out at Wrlds AB, Stockholm. At Wrlds, all machine learning runs in the cloud, but the company has been attempting to run its AI algorithms on its embedded devices. The main task of this project was to investigate alternative network structures that minimize the size of the networks used on human activity data. The thesis investigates the use of FastGRNN, a deep learning algorithm developed by Microsoft researchers, to classify human activity on resource-constrained devices. The FastGRNN algorithm was compared with state-of-the-art RNNs (LSTM, GRU and Simple RNN) in terms of accuracy, classification time, memory usage and energy consumption. The research is limited to implementing the FastGRNN algorithm on Nordic SoCs using their SDK and TensorFlow Lite Micro. The results of this thesis show that the proposed network has performance similar to LSTM networks in terms of accuracy while being both considerably smaller and faster, making it a promising solution for human activity recognition on embedded devices with limited computational resources that merits further investigation. / Human activity recognition is usually based on time-series data, where an RNN model with an LSTM architecture is usually the obvious choice. This architecture is, however, very resource-demanding for real-time applications, which causes problems on resource-constrained hardware. This thesis project was carried out in collaboration with Wrlds Technologies AB. At Wrlds, the machine learning models run in the cloud and locally on mobile phones. Wrlds has now started working towards running models directly on small embedded systems. The thesis evaluates FastGRNN, a neural-network architecture developed by Microsoft for use on resource-constrained hardware. The FastGRNN algorithm was compared with other state-of-the-art architectures: LSTM, GRU and a simple RNN. Accuracy, classification time, memory usage and energy consumption were used to compare the variants. This work only evaluates the FastGRNN algorithm on Nordic SoCs, using their SDK and TensorFlow Lite Micro. The results of this thesis show that the evaluated network has performance similar to an LSTM network while being considerably smaller and therefore faster, which means that FastGRNN shows promising results for human activity recognition on embedded systems with limited computational capacity.
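The sketch below only illustrates the kind of comparison described above: it counts parameters for the standard Keras baselines (LSTM, GRU, SimpleRNN) on a hypothetical inertial-sensor window and shows a TensorFlow Lite conversion step of the sort used before deployment with TensorFlow Lite Micro. FastGRNN itself is not a built-in Keras layer and is omitted here; all sizes are assumptions, not values from the thesis.

```python
# Illustrative sketch only -- compares the standard Keras baselines named in the
# abstract (LSTM, GRU, SimpleRNN) by parameter count, then converts one of them
# with the TensorFlow Lite converter. FastGRNN is omitted; sizes are assumptions.
import tensorflow as tf

WINDOW, CHANNELS, CLASSES = 128, 6, 10       # hypothetical accelerometer+gyro windows

def build(cell):
    inputs = tf.keras.Input(shape=(WINDOW, CHANNELS))
    x = cell(64)(inputs)
    outputs = tf.keras.layers.Dense(CLASSES, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

for cell in (tf.keras.layers.LSTM, tf.keras.layers.GRU, tf.keras.layers.SimpleRNN):
    print(f"{cell.__name__:10s} parameters: {build(cell).count_params():,}")

# Convert one candidate with the TensorFlow Lite converter; deploying to TensorFlow
# Lite Micro additionally requires that every resulting op is supported by the
# Micro interpreter, which constrains the layer choices in practice.
converter = tf.lite.TFLiteConverter.from_keras_model(build(tf.keras.layers.LSTM))
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
print("TFLite model size:", len(tflite_model), "bytes")
```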
24

Predicting customer purchase behavior within Telecom : How Artificial Intelligence can be collaborated into marketing efforts / Förutspå köpbeteenden inom telekom : Hur Artificiell Intelligens kan användas i marknadsföringsaktiviteter

Forslund, John, Fahlén, Jesper January 2020 (has links)
This study investigates the implementation of an AI model that predicts customer purchases in the telecom industry. The thesis also outlines how such an AI model can assist decision-making in marketing strategies. It concludes that designing the AI model as a Recurrent Neural Network (RNN) with a Long Short-Term Memory (LSTM) layer allows for a successful implementation with satisfactory model performance. Stepwise instructions for constructing such a model are presented in the methodology section of the study. The RNN-LSTM model further serves as an assisting tool for marketers to assess, in a quantitative way, how a consumer's website behavior affects their purchase behavior over time, by observing what the authors refer to as the Customer Purchase Propensity Journey (CPPJ). The firm empirical basis of the CPPJ can help organizations improve their allocation of marketing resources, as well as benefit the organization's online presence by allowing for personalization of the customer experience. / This study investigates the implementation of an AI model that predicts customer purchases in the telecom industry. It also aims to show how such an AI model can support decision-making in marketing strategies. By designing the AI model with a Recurrent Neural Network (RNN) architecture and a Long Short-Term Memory (LSTM) layer, the study concludes that such a design enables a successful implementation with satisfactory model performance. Step-by-step instructions for constructing the model are provided in the methodology section. The RNN-LSTM model can advantageously be used as a supporting tool for marketers to assess, in a quantitative way, how a customer's behavior pattern on a website affects their purchase behavior over time, by observing the framework the authors call the Customer Purchase Propensity Journey (CPPJ). The empirical foundation of the CPPJ can help organizations improve the allocation of marketing resources and benefit their digital presence by enabling more relevant personalization of the customer experience.
25

Hierarchical Control of Simulated Aircraft / Hierarkisk kontroll av simulerade flygplan

Mannberg, Noah January 2023 (has links)
This thesis investigates the effectiveness of employing pretraining and a discrete "control signal" bottleneck layer in a neural network trained in aircraft navigation through deep reinforcement learning. The study defines two distinct tasks to assess the efficacy of this approach. The first task is used for pretraining specific parts of the network, while the second task evaluates the potential benefits of the technique. The experimental findings indicate that the network successfully learned three main macro actions during pretraining: flying straight ahead, turning left, and turning right, and achieved high rewards on the task. However, using the pretrained network on the transfer task yielded poor performance, possibly due to the limited effective action space or deficiencies in the training process. The study discusses several potential solutions, such as incorporating multiple pretraining tasks and alterations to the training process, as avenues for future research. Overall, this study highlights the challenges and opportunities associated with combining pretraining with a discrete bottleneck layer in the context of simulated aircraft navigation using reinforcement learning. / This study investigates the effectiveness of using pretraining and a discrete "control signal" that acts as a bottleneck in a neural network trained in aircraft navigation with deep reinforcement learning. The study defines two different tasks to assess the effectiveness of the method. The first task is used to pretrain specific parts of the network, while the second task evaluates the potential benefits of the technique. The experimental results indicate that the network successfully learned three main macro actions during pretraining: flying straight ahead, turning left, and turning right, and achieved high rewards on the task. However, using the pretrained network on the follow-up task gave poor performance, possibly because of the limited effective action space or limitations in the training process. The study discusses several potential solutions, such as incorporating multiple pretraining tasks and changes to the training process, as possible directions for future research. Overall, the study highlights the challenges and opportunities associated with combining pretraining with a discrete bottleneck layer in the context of simulated aircraft navigation and reinforcement learning.
26

Computational models for multilingual negation scope detection

Fancellu, Federico January 2018 (has links)
Negation is a common property of languages, in that there are few languages, if any, that lack means to reverse the truth-value of a statement. A challenge for cross-lingual studies of negation lies in the fact that languages encode and use it in different ways. Although this variation has been extensively researched in linguistics, little has been done in automated language processing. In particular, we lack computational models of processing negation that can be generalized across languages, and we even lack knowledge of what the development of such models would require. Such models can, however, be built by means of existing cross-lingual resources, even when annotated data for a language other than English is not available. This thesis shows this in the context of detecting string-level negation scope, i.e. the set of tokens in a sentence whose meaning is affected by a negation marker (e.g. 'not'). Our contribution has two parts. First, we investigate the scenario where annotated training data is available. We show that Bi-directional Long Short Term Memory (BiLSTM) networks are state-of-the-art models whose features generalize across languages. We also show that these models suffer from genre effects and that, for most of the corpora we have experimented with, high performance is simply an artifact of the annotation styles, where negation scope is often a span of text delimited by punctuation. Second, we investigate the scenario where annotated data is available in only one language, experimenting with model transfer. To test our approach, we first build NEGPAR, a parallel corpus annotated for negation, where pre-existing annotations on English sentences have been edited and extended to Chinese translations. We then show that transferring a model for negation scope detection across languages is possible by means of structured neural models where negation scope is detected on top of a cross-linguistically consistent representation, Universal Dependencies. On the other hand, we found that cross-lingual lexical information helped performance only very little. Finally, error analysis shows that performance is better when a negation marker is in the same dependency substructure as its scope, and that some of the phenomena related to negation scope that require lexical knowledge are still not captured correctly. In the conclusions, we tie together the contributions of this thesis and point future work towards representing negation scope across languages at the level of logical form as well.
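As a rough illustration of the supervised setting described above, the sketch below is a minimal BiLSTM tagger that labels every token as inside or outside a negation scope; it is not the thesis's model, and the vocabulary size, sequence length and data are hypothetical.

```python
# Illustrative sketch only -- not the thesis's models. A minimal BiLSTM tagger that
# labels each token as inside (1) or outside (0) the scope of a negation cue,
# assuming sentences are already tokenised, integer-encoded and padded.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN = 30_000, 80          # hypothetical
inputs = keras.Input(shape=(MAX_LEN,))
x = layers.Embedding(VOCAB_SIZE, 100, mask_zero=True)(inputs)
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
outputs = layers.TimeDistributed(layers.Dense(1, activation="sigmoid"))(x)  # in/out of scope
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy shapes: token ids in, one binary scope label per token out
X = np.random.randint(1, VOCAB_SIZE, size=(16, MAX_LEN))
y = np.random.randint(0, 2, size=(16, MAX_LEN, 1)).astype("float32")
model.fit(X, y, epochs=1, batch_size=4)
```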
27

Comparative analysis of XGBoost, MLP and LSTM techniques for the problem of predicting fire brigade interventions

Cerna Ñahuis, Selene Leya January 2019 (has links)
Advisor: Anna Diva Plasencia Lotufo / Abstract: Many environmental, economic and societal factors are leading fire brigades to be increasingly solicited and, as a result, they face an ever-increasing number of interventions, most of the time with constant resources. On the other hand, these interventions are directly related to human activity, which is itself predictable: swimming-pool drownings occur in summer, while road accidents due to ice storms occur in winter. One solution to improve the response of firefighters with constant resources is therefore to predict their workload, i.e. their number of interventions per hour, based on explanatory variables conditioning human activity. The present work aims to develop three models that are compared to determine whether they can predict the firefighters' response load in a reasonable way. The tools chosen are the most representative of their respective categories in Machine Learning: XGBoost, which has a decision tree at its core; a classic method, the Multi-Layer Perceptron; and a more advanced algorithm, Long Short-Term Memory, the latter two based on neurons. The entire process is detailed, from data collection to obtaining the predictions. The results obtained demonstrate predictions of reasonable quality that can be improved by data-science techniques such as feature selection and hyperparameter tuning. / Resumo: Many environmental, economic and social factors are leading fire brigades to be increasingly called upon and, as a consequence, they face an ever-growing number of interventions, most of the time with constant resources. On the other hand, these interventions are directly related to human activity, which is predictable: swimming-pool drownings occur in summer, while traffic accidents due to ice storms occur in winter. One solution to improve the firefighters' response with constant resources is to predict their workload, that is, their number of interventions per hour, based on explanatory variables that condition human activity. The present work aims to develop three models that are compared to determine whether they can predict the firefighters' response load in a reasonable way. The chosen tools are the most representative of their respective categories in Machine Learning: XGBoost, which has a decision tree at its core; a classic method such as the Multi-Layer Perceptron; and a more advanced algorithm such as Long Short-Term Memory, both based on neurons. The entire process is detailed, from data collection to obtaining the predictions. The results obtained demonstrate predictions of reasonable quality that can be improved with data-science techniques such as feature selection and hyperparameter tuning. / Master's
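To make the comparison concrete, the sketch below is an illustrative setup, not the thesis's pipeline: it builds simple calendar features for a synthetic hourly intervention-count series and compares two of the three model families named in the abstract, XGBoost and a Multi-Layer Perceptron; the LSTM variant is omitted for brevity and all data is made up.

```python
# Illustrative sketch only -- not the thesis's pipeline. Compares XGBoost and an MLP
# on a synthetic hourly intervention-count series using simple calendar features.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
idx = pd.date_range("2017-01-01", periods=24 * 365, freq="h")
y = rng.poisson(3 + 2 * np.sin(2 * np.pi * idx.hour / 24))   # synthetic hourly workload

X = pd.DataFrame({"hour": idx.hour, "dayofweek": idx.dayofweek, "month": idx.month})
split = int(0.8 * len(X))                                     # chronological train/test split
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

for name, model in [("XGBoost", XGBRegressor(n_estimators=200, max_depth=4)),
                    ("MLP", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500))]:
    model.fit(X_tr, y_tr)
    print(name, "MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```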
28

SENSOR-BASED HUMAN ACTIVITY RECOGNITION USING BIDIRECTIONAL LSTM FOR CLOSELY RELATED ACTIVITIES

Pavai, Arumugam Thendramil 01 December 2018 (has links)
Recognizing human activities using deep learning methods has significance in many fields such as sports, motion tracking, surveillance, healthcare and robotics. Inertial sensors comprising accelerometers and gyroscopes are commonly used for sensor-based HAR. In this study, a Bidirectional Long Short-Term Memory (BLSTM) approach is explored for recognizing and classifying closely related human activities on body-worn inertial sensor data provided by the UTD-MHAD dataset. The BLSTM model of this study achieves an overall accuracy of 98.05% for 15 different activities and 90.87% for 27 different activities performed by 8 persons with 4 trials per activity per person. The BLSTM model is also compared with a unidirectional LSTM model, and a significant improvement in accuracy is observed for the recognition of all 27 activities with the BLSTM model.
29

On The Effectiveness of Multi-Task Learning : An evaluation of Multi-Task Learning techniques in deep learning models

Tovedal, Sofiea January 2020 (has links)
Multi-Task Learning is today an interesting and promising field which many describe as essential for achieving the next level of advancement within machine learning. In practice, however, Multi-Task Learning is used in real-world implementations much more rarely than its more popular cousin, Transfer Learning. The question is why that is, and whether Multi-Task Learning outperforms its Single-Task counterparts. In this thesis, different Multi-Task Learning architectures were utilized in order to build a model that can label real technical issues within two categories. The model faces a challenging imbalanced data set with many labels to choose from and short texts to base its predictions on. Can task-sharing be the answer to these problems? This thesis investigated three Multi-Task Learning architectures and compared their performance to a Single-Task model. An authentic data set and two labeling tasks were used in training the models with supervised learning. The four model architectures (Single-Task, Multi-Task, Cross-Stitched and Shared-Private) first went through a hyperparameter-tuning process using one of the two layer options, LSTM or GRU. They were then boosted by auxiliary tasks and finally evaluated against each other.
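As a rough illustration of the simplest of these architectures, hard parameter sharing, the sketch below builds one shared text encoder with two classification heads, one per labeling task; it is not the thesis's model, and the encoder choice (GRU), label counts and loss weights are assumptions.

```python
# Illustrative sketch only -- not the thesis's models. A minimal hard-parameter-sharing
# Multi-Task model: one shared text encoder with two classification heads, one per
# labeling task. Vocabulary size, sequence length and label counts are made up.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB, MAX_LEN, N_LABELS_A, N_LABELS_B = 10_000, 40, 12, 5   # hypothetical

inputs = keras.Input(shape=(MAX_LEN,))
shared = layers.Embedding(VOCAB, 64, mask_zero=True)(inputs)
shared = layers.GRU(128)(shared)                              # shared layers serve both tasks
head_a = layers.Dense(N_LABELS_A, activation="softmax", name="task_a")(shared)
head_b = layers.Dense(N_LABELS_B, activation="softmax", name="task_b")(shared)

model = keras.Model(inputs, [head_a, head_b])
model.compile(optimizer="adam",
              loss={"task_a": "sparse_categorical_crossentropy",
                    "task_b": "sparse_categorical_crossentropy"},
              loss_weights={"task_a": 1.0, "task_b": 0.5})    # auxiliary task weighted down

# Toy data just to show the expected shapes
X = np.random.randint(1, VOCAB, size=(32, MAX_LEN))
y_a = np.random.randint(0, N_LABELS_A, size=(32,))
y_b = np.random.randint(0, N_LABELS_B, size=(32,))
model.fit(X, {"task_a": y_a, "task_b": y_b}, epochs=1, batch_size=8)
```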
30

Arrival Time Predictions for Buses using Recurrent Neural Networks / Ankomsttidsprediktioner för bussar med rekurrenta neurala nätverk

Fors Johansson, Christoffer January 2019 (has links)
In this thesis, two different types of bus passengers are identified. These two types, namely current passengers and passengers-to-be, have different needs in terms of arrival time predictions. A set of machine learning models based on recurrent neural networks and long short-term memory units was developed to meet these needs. Furthermore, bus data from the public transport in Östergötland county, Sweden, was collected and used for training new machine learning models. These new models are compared with the prediction system that is used today to provide passengers with arrival time information. The models proposed in this thesis use a sequence of time steps as input and the observed arrival time as output. Each input time step contains information about the current state, such as the time of arrival, the departure time from the very first stop and the current position in Cartesian coordinates. The target value for each input is the arrival time at the next time step. To predict the rest of the trip, the prediction for the next step is simply used as input in the following time step. The results show that the proposed models can improve the mean absolute error per stop by between 7.2% and 40.9% compared to the system used today, on all eight routes tested. Furthermore, the choice of loss function introduces models that can meet the identified passengers' needs by trading average prediction accuracy for the certainty that predictions do not overestimate or underestimate the target time in approximately 95% of the cases.
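The abstract does not state which loss function was used, so the sketch below only illustrates one common way of making such a trade-off: a quantile (pinball) loss that penalizes under-predictions of arrival time much more heavily than over-predictions. The network shape and feature count are assumptions, not details from the thesis.

```python
# Illustrative sketch only -- the thesis does not state its loss function; this shows
# one common way to trade mean accuracy for one-sided certainty: a quantile (pinball)
# loss. With tau = 0.95 the model is penalised roughly 19x more for predicting an
# arrival earlier than observed than for predicting it later.
import tensorflow as tf

def pinball_loss(tau):
    def loss(y_true, y_pred):
        err = y_true - y_pred
        return tf.reduce_mean(tf.maximum(tau * err, (tau - 1.0) * err))
    return loss

# Hypothetical recurrent regressor over sequences of per-stop features
inputs = tf.keras.Input(shape=(None, 4))          # e.g. time, first-stop departure, x, y
x = tf.keras.layers.LSTM(64, return_sequences=True)(inputs)
outputs = tf.keras.layers.Dense(1)(x)             # predicted arrival time at the next stop
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss=pinball_loss(0.95))
```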
