61

Queued and Pooled Semantics for State Machines in the Umple Model-Oriented Programming Language

Alghamdi, Aliaa January 2015 (has links)
This thesis describes extensions to state machines in the Umple model-oriented programming language that offer queued state machines (QSMs), pooled state machines (PSMs), and handling of the arrival of unexpected events. These features allow the behavior of a system or protocol to be modeled more accurately in Umple because they enable detecting and fixing common design errors such as unspecified receptions. In addition, they simplify the interaction between communicating state machines by allowing asynchronous event calls and message passing between state machines. A pooled state machine (PSM) provides a different policy for handling events that avoids unspecified receptions; it has semantics similar to a queued state machine but differs in how unspecified receptions are detected, because it helps handle these errors. Another mechanism uses the keyword 'unspecified' in whichever states of a state machine the user wants these errors to be detected. In this thesis, a test-driven development (TDD) process was followed: first, the Umple syntax was modified to add the 'queued,' 'pooled,' and 'unspecified' keywords to the state machine grammar; second, the Umple semantics were changed to implement these extensions. Additional modifications were then made to allow Java code generation from these types of state machines, and further test cases were written to ensure that the models are syntactically and semantically correct. To show the usefulness and usability of these new features, a case study modeled with a queued state machine (QSM) is presented alongside other small test cases.
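To make the queued semantics concrete, the following minimal Python sketch (an illustration only, not the Java code Umple generates) shows events being enqueued by callers and consumed in arrival order by a worker, with unexpected events reported as unspecified receptions instead of being silently dropped; the class name, states, and events are hypothetical.

```python
from queue import Queue
from threading import Thread

class QueuedStateMachine:
    """Toy queued state machine: callers enqueue events; a worker thread
    consumes them in arrival order, mimicking QSM-style asynchronous calls."""

    # transition table: (state, event) -> next state
    TRANSITIONS = {
        ("Idle", "start"): "Running",
        ("Running", "pause"): "Paused",
        ("Paused", "start"): "Running",
        ("Running", "stop"): "Idle",
    }

    def __init__(self):
        self.state = "Idle"
        self.events = Queue()
        Thread(target=self._run, daemon=True).start()

    def send(self, event):             # asynchronous: returns immediately
        self.events.put(event)

    def _run(self):
        while True:
            event = self.events.get()
            key = (self.state, event)
            if key in self.TRANSITIONS:
                self.state = self.TRANSITIONS[key]
            else:                      # an "unspecified reception"
                print(f"unspecified: event {event!r} in state {self.state}")

if __name__ == "__main__":
    sm = QueuedStateMachine()
    for e in ["start", "pause", "stop", "start"]:   # "stop" is unexpected in Paused
        sm.send(e)
    import time; time.sleep(0.1)
    print("final state:", sm.state)
```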
62

Automatic morphological analysis of L-verbs in Palula / Automatisk morfologisk analys av L-verb i Palula

Wallerö, Emma January 2020 (has links)
This study explores the possibilities of automatic morphological analysis of L-verbs in the Palula language with the help of finite-state technology and two-level morphology along with supervised machine learning. The machine learning models used are neural Sequence to Sequence models. A morphological transducer covering the L-verbs of the Palula language is built with the Helsinki Finite-State Transducer Technology (HFST) toolkit. Several Sequence to Sequence models are trained on sets of L-verbs with morphological tagging annotation: one model is trained on a small amount of manually annotated data, and four models are trained on different amounts of training examples generated by the finite-state transducer. The efficiency and accuracy of these methods are investigated. The Sequence to Sequence model trained solely on manually annotated data did not perform as well as the other models. A Sequence to Sequence model trained on examples generated by the transducer achieved the best recall, accuracy, and F1-score, while the finite-state transducer achieved the best precision.
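As a rough illustration of the data-generation idea described above, the sketch below uses a hand-written paradigm function as a stand-in for the HFST transducer to produce (surface form, analysis) training pairs, and computes precision, recall, and F1 for a set of predicted analyses; the stems, suffixes, and tags are invented and do not reflect actual Palula morphology.

```python
# Toy stand-in for the HFST transducer: a paradigm function that generates
# (surface form, analysis) pairs for hypothetical verb stems. The suffixes
# and tags below are invented purely for illustration.
def generate_pairs(stems):
    paradigm = {          # suffix -> morphological tag string
        "-i": "V.L.PRS.3SG",
        "-a": "V.L.PST.3SG",
        "-am": "V.L.PRS.1SG",
    }
    return [(stem + suf.lstrip("-"), f"{stem}{suf}+{tag}")
            for stem in stems for suf, tag in paradigm.items()]

def evaluate(gold, predicted):
    """Precision/recall/F1 over (surface form, analysis) pairs."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

if __name__ == "__main__":
    train = generate_pairs(["whe", "de"])          # transducer-generated data
    print(len(train), "generated training pairs")
    gold = [("whei", "whe-i+V.L.PRS.3SG")]
    pred = [("whei", "whe-i+V.L.PRS.3SG"), ("whea", "whe-i+V.L.PRS.3SG")]
    print("P/R/F1:", evaluate(gold, pred))
```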
63

Decision-making in Highway Autonomous Driving Combined with Prediction Algorithms / Beslutsfattande inom motorvägsautonom körning i kombination med förutsägelsealgoritmer

Chen, Jingsheng January 2022 (has links)
Over the past two decades, autonomous driving technology has made tremendous breakthroughs. With this technology, human drivers have been able to take their hands off the wheel in many scenarios and let the vehicle drive itself. Highway scenarios involve fewer disturbances than urban ones, so autonomous driving there is much simpler to implement and can be accomplished very well with a rule-based approach. However, a significant drawback of the rule-based approach compared to human drivers is that it is difficult to capture, in hand-designed logic, the intent of the vehicles in the surrounding environment, whereas human drivers perform this intent analysis easily. Therefore, in this work we introduce a prediction module upstream of the autonomous driving decision-making module, so that the decision-maker has richer input information, including the intent of surrounding vehicles, and can better optimize its decision output. The evaluation of the final results confirms that our proposed approach helps optimize rule-based autonomous driving decisions.
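The sketch below illustrates, under invented assumptions, the arrangement described above: a simple intent predictor for surrounding vehicles feeds a rule-based decision function, letting it react to a predicted cut-in before the vehicle has changed lanes. The heuristic thresholds and data layout are placeholders, not the modules used in the thesis.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    lane: int            # lane index
    s: float             # longitudinal position (m)
    v: float             # speed (m/s)
    lateral_rate: float  # lateral velocity toward the adjacent lane (m/s)

def predict_intent(vehicle: Vehicle) -> str:
    """Heuristic intent predictor (stand-in for a learned prediction module)."""
    if vehicle.lateral_rate > 0.3:
        return "cut_in"
    return "keep_lane"

def decide(ego: Vehicle, others: list[Vehicle]) -> str:
    """Rule-based decision that consumes predicted intents as extra input."""
    for other in others:
        gap = other.s - ego.s
        if 0 < gap < 40 and (other.lane == ego.lane
                             or predict_intent(other) == "cut_in"):
            return "decelerate"        # react early to a predicted cut-in
    return "keep_speed"

if __name__ == "__main__":
    ego = Vehicle(lane=1, s=0.0, v=28.0, lateral_rate=0.0)
    others = [Vehicle(lane=2, s=25.0, v=24.0, lateral_rate=0.5)]  # drifting toward ego lane
    print(decide(ego, others))   # -> "decelerate", thanks to the predicted intent
```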
64

Efficient finite-state algorithms for the application of local grammars / Algorithmes performants à états finis pour l'application de grammaires locales

Sastre Martinez, Javier Miguel 11 July 2011 (has links)
This work focuses on the research and development of efficient algorithms for the application of local grammars, taking as reference those of the existing open-source systems: Unitex's top-down parser and Outilex's Earley-like parser. Local grammars are a finite-state formalism for the representation of natural language grammars; they are also a model for constructing large-scale and accurate descriptions of the syntax of natural languages through systematic observation and methodical accumulation of data. The adequacy of local grammars for this task has been demonstrated by multiple works. Due to the ambiguous nature of natural languages and the particular properties of local grammars, classic parsing algorithms such as LR, CYK, and Tomita's cannot be used in the context of this work. Top-down and Earley parsers are possible alternatives, though they have an exponential worst-case cost for local grammars. We first devised an algorithm for the application of local grammars with a polynomial worst-case cost. We then devised further optimizations, namely efficient data structures for the management of sets of elements and sequences, which improve the algorithm's speed in the general case. We implemented our algorithm and those of the Unitex and Outilex systems with the same tools in order to test them under the same conditions, and we implemented different versions of each algorithm using either our custom set data structures or those provided by GNU's implementation of the C++ Standard Template Library (STL). We compared the performance of the different algorithms and their variants in the context of an industrial natural language application provided by Telefónica I+D: extending the understanding capabilities of a chatterbot that provides mobile services, such as sending SMS messages to mobile phones as well as games and other digital content. Conversations with the chatterbot are held in Spanish via Microsoft's Windows Live Messenger. In spite of the limited domain and the simplicity of the applied grammars, the execution times of our algorithm, coupled with our custom set data structures, were lower. Thanks to its improved asymptotic cost, execution times for complex and large-coverage grammars can be expected to be considerably lower than those of the Unitex and Outilex algorithms.
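As a generic illustration of set-based application of a finite-state grammar (not the algorithm developed in this thesis), the Python sketch below applies a toy local grammar to a token sequence by keeping a set of active states per input position, so each position is processed once and the cost stays polynomial; the grammar and the example sentence are invented.

```python
# Minimal sketch: a toy date-pattern "local grammar" applied to a token
# sequence with a *set* of active states maintained per position.
GRAMMAR = {
    # state -> list of (predicate on token, next state)
    0: [(str.isdigit, 1)],
    1: [(lambda t: t in {"January", "May", "July"}, 2)],
    2: [(str.isdigit, 3)],
}
ACCEPTING = {3}

def find_matches(tokens):
    matches = []
    for start in range(len(tokens)):
        active = {0}                       # set of active grammar states
        for end in range(start, len(tokens)):
            token = tokens[end]
            active = {nxt for state in active
                      for pred, nxt in GRAMMAR.get(state, [])
                      if pred(token)}
            if not active:
                break
            if active & ACCEPTING:
                matches.append((start, end + 1))
    return matches

if __name__ == "__main__":
    text = "defended on 11 July 2011 in Paris".split()
    for s, e in find_matches(text):
        print(" ".join(text[s:e]))        # -> "11 July 2011"
```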
65

NeuroFSM: aprendizado de Autômatos Finitos através do uso de Redes Neurais Artificiais aplicadas à robôs móveis e veículos autônomos / NeuroFSM: finite state machines learning using artificial neural networks applied to mobile robots and autonomous vehicles

Sales, Daniel Oliva 23 July 2012 (has links)
Autonomous navigation is a fundamental task in mobile robotics. To perform this task accurately, an intelligent navigation and control system must be associated with the sensorial system. This project presents the development of a control system for the navigation of autonomous mobile robots and vehicles. The adopted approach uses Artificial Neural Networks for Finite State Machine learning, allowing the robots to deal with sensor data even when the data are imprecise or erroneous, while at the same time taking into account the different situations and states the robots are in (context detection). In this way, it is possible to decide how to control the robot's motion and thus execute navigation and control tasks ranging from the simplest to the most complex, high-level ones. This work therefore uses Artificial Neural Networks to recognize the robot's current state (context) in the environment where it operates. Once the state is identified, which may include identifying the robot's position with respect to the elements present in the environment, the robot is able to decide which action/behavior should be executed. The navigation and control system implements a Finite State Machine that determines the current action from the current state, and is able to identify state changes and thus alternate between different previously defined behaviors. To validate this approach, several experiments were performed using a robotic simulator (Player-Stage), and tests were carried out with real robots (Pioneer P3-AT, SRV-1 and autonomous vehicles).
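A minimal sketch of the idea follows, with invented states, behaviors, and a nearest-centroid classifier standing in for the trained neural network: sensor readings are mapped to a context (the FSM state), and the FSM maps each state to a behavior, switching behaviors whenever the detected state changes.

```python
import math

# Toy stand-in for the trained neural network: a nearest-centroid classifier
# over (left, front, right) range-sensor readings. Centroids and state names
# are invented for illustration.
STATE_CENTROIDS = {
    "corridor":       (0.5, 3.0, 0.5),
    "wall_on_right":  (2.5, 3.0, 0.4),
    "obstacle_ahead": (1.5, 0.4, 1.5),
}
BEHAVIOR = {   # finite-state machine: state -> behavior (action)
    "corridor": "follow_corridor",
    "wall_on_right": "follow_wall",
    "obstacle_ahead": "avoid_obstacle",
}

def classify_state(reading):
    """Infer the current context (FSM state) from a sensor reading."""
    return min(STATE_CENTROIDS,
               key=lambda s: math.dist(STATE_CENTROIDS[s], reading))

def control_step(reading, previous_state):
    state = classify_state(reading)
    if state != previous_state:
        print(f"state change: {previous_state} -> {state}")
    return state, BEHAVIOR[state]

if __name__ == "__main__":
    state = "corridor"
    for reading in [(0.6, 2.8, 0.5), (1.4, 0.5, 1.6)]:   # simulated sensor data
        state, behavior = control_step(reading, state)
        print("executing:", behavior)
```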
66

Extração de conhecimento a partir de redes neurais recorrentes / Knowledge extraction from recurrent neural networks

Simon, Denise Regina Pechmann 11 May 2004 (has links)
In this work a method of knowledge extraction from Recurrent Neural Networks is proposed. Formally expressing the knowledge stored inside an Artificial Neural Network is a great challenge, because such knowledge has to be reformulated and presented in a simple and understandable way. Three symbolic formalisms are considered for the representation of this knowledge: Fuzzy Finite Automata, Markov Chains, and Deterministic Finite Automata. For the knowledge extraction used in this work, a meaning is assigned to each region of the neuron activation space. The proposed method clusters the neural space in order to obtain the automaton states, using the K-means algorithm and fuzzy clustering. Knowledge acquisition is performed by using Recurrent Neural Networks to learn the behavior of two nonlinear dynamic systems and, from the trained networks, extracting the states and the possible automaton transitions. The dynamic systems are the Inverted Pendulum system and the Lorenz system. The presented extraction method wa…
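The following sketch illustrates the general cluster-the-hidden-states idea (not the thesis's exact procedure): hidden-state vectors of a recurrent network are recorded while it processes symbol sequences, K-means defines the automaton states, and transition counts between clusters yield a DFA-like table. A fixed random update rule stands in for a trained RNN, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder recurrent network: a fixed random state update standing in for
# a trained RNN, h_{t+1} = tanh(W h_t + U x_t) with one-hot inputs.
HIDDEN = 8
W = rng.normal(scale=0.5, size=(HIDDEN, HIDDEN))
U = rng.normal(scale=0.5, size=(HIDDEN, 2))   # two input symbols

def run(sequence):
    h = np.zeros(HIDDEN)
    states = []
    for symbol in sequence:               # symbol in {0, 1}
        h = np.tanh(W @ h + U @ np.eye(2)[symbol])
        states.append(h.copy())
    return states

# 1. Collect hidden states over many random input sequences.
sequences = [rng.integers(0, 2, size=20).tolist() for _ in range(50)]
hidden_states = []
for seq in sequences:
    hidden_states.extend(run(seq))
X = np.array(hidden_states)

# 2. Cluster the hidden-state space: each cluster becomes an automaton state.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# 3. Count transitions (cluster, next input symbol) -> next cluster.
transitions = {}
idx = 0
for seq in sequences:
    clusters = kmeans.labels_[idx: idx + len(seq)]
    idx += len(seq)
    for t in range(len(seq) - 1):
        key = (int(clusters[t]), seq[t + 1])
        counts = transitions.setdefault(key, {})
        counts[int(clusters[t + 1])] = counts.get(int(clusters[t + 1]), 0) + 1

# 4. Keep the most frequent successor per (state, symbol): a DFA-like table.
dfa = {key: max(counts, key=counts.get) for key, counts in transitions.items()}
print(dfa)
```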
67

Finite-state methods and natural language processing: 6th International Workshop, FSMNLP 2007, Potsdam, Germany, September 14-16; revised papers

January 2008 (has links)
Proceedings with the revised papers of the FSMNLP (Finite-state Methods and Natural Language Processing) 2007 Workshop in Potsdam.
68

Joint Equalization and Decoding via Convex Optimization

Kim, Byung Hak May 2012 (has links)
The unifying theme of this dissertation is the development of new solutions for decoding and inference problems based on convex optimization methods. The first part considers the joint detection and decoding problem for low-density parity-check (LDPC) codes on finite-state channels (FSCs). Hard-disk drives (or magnetic recording systems), where the required error rate (after decoding) is too low to be verifiable by simulation, are among the most important applications of this research. Recently, LDPC codes have attracted a lot of attention in the magnetic storage industry, and some hard-disk drives have started using iterative decoding. Despite progress in the area of reduced-complexity detection and decoding algorithms, there has been some resistance to the deployment of turbo-equalization (TE) structures (with iterative detectors/decoders) in magnetic-recording systems because of error floors and the difficulty of accurately predicting performance at very low error rates. To address this problem for channels with memory, such as FSCs, we propose a new decoding algorithm based on a well-defined convex optimization problem. In particular, it is based on the linear-programming (LP) formulation of the joint decoding problem for LDPC codes over FSCs. It exhibits two favorable properties: provable convergence and predictable error floors (via pseudo-codeword analysis). Since general-purpose LP solvers are too complex to make the joint LP decoder feasible for practical purposes, we develop an efficient iterative solver for the joint LP decoder by taking advantage of its dual-domain structure. The main advantage of this approach is that it combines the predictability and superior performance of joint LP decoding with the computational complexity of TE. The second part of this dissertation considers the matrix completion problem: the recovery of an unknown data matrix from incomplete, or even corrupted, entries. Recommender systems are good representatives of this problem, and this research is important for the design of information retrieval systems that require very high scalability. We show that our IMP algorithm reduces the well-known cold-start problem associated with collaborative filtering systems in practice.
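For readers unfamiliar with LP decoding, the sketch below shows a plain Feldman-style LP relaxation for a short binary code on a memoryless channel, solved with SciPy's linprog; it is only a toy illustration, not the joint detection/decoding formulation over finite-state channels or the dual-domain iterative solver developed in this dissertation, and the LLR values are made up.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Parity-check matrix of a toy (7,4) Hamming code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def lp_decode(llr):
    """Feldman-style LP decoding: minimize sum(llr_i * x_i) over the relaxed
    codeword polytope. For every check and every odd-sized subset S of its
    neighborhood: sum_{i in S} x_i - sum_{i in N\\S} x_i <= |S| - 1."""
    n = H.shape[1]
    A, b = [], []
    for row in H:
        neighborhood = np.flatnonzero(row)
        for size in range(1, len(neighborhood) + 1, 2):     # odd |S|
            for S in itertools.combinations(neighborhood, size):
                a = np.zeros(n)
                a[list(neighborhood)] = -1.0
                a[list(S)] = 1.0
                A.append(a)
                b.append(len(S) - 1)
    res = linprog(llr, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x

if __name__ == "__main__":
    # Channel log-likelihood ratios for a noisy all-zeros transmission
    # (positive LLR favors bit 0); the values are made up for illustration.
    llr = np.array([2.1, 1.7, -0.4, 2.3, 1.1, 0.9, 1.6])
    print(np.round(lp_decode(llr), 3))   # expect (close to) the all-zeros codeword
```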
69

Problems Related to Shortest Strings in Formal Languages

Ang, Thomas January 2010 (has links)
In formal language theory, studying shortest strings in languages, and variations thereof, can be useful since these strings can serve as small witnesses for properties of the languages, and can also provide bounds for other problems involving languages. For example, the length of the shortest string accepted by a regular language provides a lower bound on the state complexity of the language. In Chapter 1, we introduce some relevant concepts and notation used in automata and language theory, and we show some basic results concerning the connection between the length of the shortest string and the nondeterministic state complexity of a regular language. Chapter 2 examines the effect of the intersection operation on the length of the shortest string in regular languages. A tight worst-case bound is given for the length of the shortest string in the intersection of two regular languages, and loose bounds are given for two variations on the problem. Chapter 3 discusses languages that are defined over a free group instead of a free monoid. We study the length of the shortest string in a regular language that becomes the empty string in the free group, and a variety of bounds are given for different cases. Chapter 4 mentions open problems and some interesting observations that were made while studying two of the problems: finding good bounds on the length of the shortest squarefree string accepted by a deterministic finite automaton, and finding an efficient way to check if a finite set of finite words generates the free monoid. Some of the results in this thesis have appeared in work that the author has participated in \cite{AngPigRamSha,AngShallit}.
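As background for the intersection results discussed in Chapter 2, the sketch below shows the standard textbook approach (not the thesis's bounds): breadth-first search on the product of two DFAs returns a shortest string in the intersection of their languages; both example automata are invented.

```python
from collections import deque

# Each DFA: (start state, set of accepting states, transition dict mapping
# (state, symbol) -> state). The two toy DFAs below are invented:
# A accepts strings over {a, b} with an even number of a's,
# B accepts strings ending in "ab".
A = (0, {0}, {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1})
B = (0, {2}, {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 2,
              (2, "a"): 1, (2, "b"): 0})
ALPHABET = ["a", "b"]

def shortest_in_intersection(dfa1, dfa2):
    """Breadth-first search on the product automaton: the first accepting
    product state reached yields a shortest string in L(dfa1) ∩ L(dfa2)."""
    (s1, acc1, d1), (s2, acc2, d2) = dfa1, dfa2
    queue = deque([((s1, s2), "")])
    seen = {(s1, s2)}
    while queue:
        (q1, q2), word = queue.popleft()
        if q1 in acc1 and q2 in acc2:
            return word
        for sym in ALPHABET:
            nxt = (d1[(q1, sym)], d2[(q2, sym)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + sym))
    return None    # the intersection is empty

if __name__ == "__main__":
    print(repr(shortest_in_intersection(A, B)))   # -> 'aab'
```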
70

Efficient Bayesian Nonparametric Methods for Model-Free Reinforcement Learning in Centralized and Decentralized Sequential Environments

Liu, Miao January 2014 (has links)
As a growing number of agents are deployed in complex environments for scientific research and human well-being, there are increasing demands for designing efficient learning algorithms for these agents to improve their control policies. Such policies must account for uncertainties, including those caused by environmental stochasticity, sensor noise and communication restrictions. These challenges exist in missions such as planetary navigation, forest firefighting, and underwater exploration. Ideally, good control policies should allow the agents to deal with all the situations in an environment and enable them to accomplish their mission within the budgeted time and resources. However, a correct model of the environment is not typically available in advance, requiring the policy to be learned from data. Model-free reinforcement learning (RL) is a promising candidate for agents to learn control policies while engaged in complex tasks, because it allows the control policies to be learned directly from a subset of experiences and with time efficiency. Moreover, to ensure persistent performance improvement for RL, it is important that the control policies be concisely represented based on existing knowledge, and have the flexibility to accommodate new experience. Bayesian nonparametric methods (BNPMs) both allow the complexity of models to be adaptive to data, and provide a principled way for discovering and representing new knowledge.

In this thesis, we investigate approaches for RL in centralized and decentralized sequential decision-making problems using BNPMs. We show how the control policies can be learned efficiently under model-free RL schemes with BNPMs. Specifically, for centralized sequential decision-making, we study Q-learning with Gaussian processes to solve Markov decision processes, and we also employ hierarchical Dirichlet processes as the prior for the control policy parameters to solve partially observable Markov decision processes. For decentralized partially observable Markov decision processes, we use stick-breaking processes as the prior for the controller of each agent. We develop efficient inference algorithms for learning the corresponding control policies. We demonstrate that by combining model-free RL and BNPMs with efficient algorithm design, we are able to scale up RL methods for complex problems that cannot otherwise be solved due to the lack of model knowledge. We adaptively learn control policies with concise structure and high value from a relatively small amount of data.
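As a small illustration of the stick-breaking construction mentioned above (a truncated numpy sketch, not the inference machinery developed in the thesis), the following code draws Dirichlet-process-style mixture weights; the concentration parameter and truncation level are arbitrary choices for illustration.

```python
import numpy as np

def stick_breaking(alpha, truncation, rng):
    """Truncated stick-breaking construction of Dirichlet-process weights:
    beta_k ~ Beta(1, alpha), w_k = beta_k * prod_{j<k} (1 - beta_j)."""
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = stick_breaking(alpha=2.0, truncation=20, rng=rng)
    print(np.round(weights, 3), "sum:", round(float(weights.sum()), 3))
    # Smaller alpha concentrates mass on a few components; larger alpha
    # spreads it out, letting model complexity adapt to the data.
```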
