51

A new calibration approach to graph-based semantic segmentation / Uma nova técnica de calibração para segmentação semântica baseada em grafos

Riva, Mateus 13 December 2018 (has links)
We introduce a calibration method for semantic segmentation of images using statistical-relational graphs (SRGs), with a particular focus on pediatric Magnetic Resonance Imaging (MRI). The SRG provides a representation of a structured scene, describing both the attributes of each object of interest and the nature of their relationships, such as relative position in space. Each vertex in the graph represents an object of interest and each edge represents the relationship between two objects. Semantic segmentation can thus be performed by matching an SRG built from an observed image to a previously built model SRG. We develop a calibration method for assessing the quality of SRG-based segmentation under a given set of parameters, and explore several parameter sets applied to MRI. We demonstrate the validity and usefulness of the calibration technique, present preliminary results on real MRI data segmentation, and discuss future work on improving SRG-based segmentation of real data.
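To make the SRG matching idea concrete, a minimal Python sketch follows; the object names, attribute values and fixed vertex correspondence are invented for illustration, and this is not the author's implementation.

    import numpy as np

    # A statistical-relational graph: vertices map objects of interest to
    # attribute vectors (e.g. mean intensity), edges map ordered object
    # pairs to relational vectors (e.g. relative centroid position).
    def srg_match_cost(model, observed, alpha=0.5):
        """Cost of matching an observed SRG against a model SRG under a
        fixed vertex correspondence (same keys in both graphs)."""
        vertex_cost = sum(
            np.linalg.norm(np.asarray(model["vertices"][v]) -
                           np.asarray(observed["vertices"][v]))
            for v in model["vertices"])
        edge_cost = sum(
            np.linalg.norm(np.asarray(model["edges"][e]) -
                           np.asarray(observed["edges"][e]))
            for e in model["edges"])
        return alpha * vertex_cost + (1 - alpha) * edge_cost

    model = {"vertices": {"liver": [0.7], "kidney": [0.4]},
             "edges": {("liver", "kidney"): [0.0, -30.0]}}
    observed = {"vertices": {"liver": [0.65], "kidney": [0.45]},
                "edges": {("liver", "kidney"): [2.0, -28.0]}}
    print(srg_match_cost(model, observed))  # lower cost = better match

A real matcher would additionally search over candidate vertex correspondences; the calibration question studied in the thesis is how parameters such as the weighting affect segmentation quality.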
52

A Methodology for Domain-Specific Conceptual Data Modeling and Querying

Tian, Hao 02 May 2007 (has links)
Traditional data management technologies originating from the business domain currently face many challenges from other domains such as scientific research. Data structures in databases are becoming more and more complex, and data query functions are moving from the back-end database level towards the front-end user-interface level. Traditional query languages such as SQL and OQL, and form-based query interfaces, cannot fully meet today's needs. This research is motivated by the data management issues in life science applications. I propose a methodology for domain-specific conceptual data modeling and querying. The methodology can be applied to any domain to capture more domain semantics and to empower end-users to formulate a query at the conceptual level with terminologies and functions familiar to them. The query system resulting from the methodology is designed to work on all major types of database management systems (DBMS) and to support end-users in dynamically defining and adding new domain-specific functions. That is, all user-defined functions can be either pre-defined by domain experts and/or data model creators at the time of system creation, or dynamically defined by end-users from the client side at any time. The methodology comprises a domain-specific conceptual data model (DSC-DM) and a domain-specific conceptual query language (DSC-QL). DSC-QL uses only the abstract concepts, relationships, and functions defined in DSC-DM. It is a user-oriented, high-level query language, intentionally designed to be flexible, extensible, and readily usable. DSC-QL queries are much simpler than the corresponding SQL or OQL queries because of advanced features such as user-defined functions, composite and set attributes, dot-path expressions, and super-classes. DSC-QL can be translated into SQL and OQL through a dynamic mapping function, and automatically updated when the underlying database schema evolves. The operational and declarative semantics of DSC-QL are formally defined in terms of graphs. A normal form for DSC-QL, serving as a standard format for the mappings from flexible conceptual expressions to restricted SQL or OQL statements, is also defined, and two translation algorithms from normalized DSC-QL to SQL and OQL are introduced. Through comparison, DSC-QL is shown to strike a very good balance between simplicity and expressive power and to be suitable for end-users. Implementation details of the query system are reported as well. Two prototypes have been built: one for the neuroscience domain, built on an object-oriented DBMS, and one for the traditional business domain, built on a relational DBMS.
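As a rough illustration of the kind of conceptual-to-SQL translation a dot-path expression requires, a toy sketch follows; the schema mapping, function names and query shape are invented, and the actual DSC-QL grammar and translation algorithms in the thesis are far richer.

    # Toy translation of a DSC-QL-style dot-path expression into SQL.
    # The mapping table is a stand-in for the model-driven mapping the
    # thesis derives from the DSC-DM.
    SCHEMA = {
        # (class, path attribute) -> (table, foreign key, joined table)
        ("Neuron", "soma"): ("neuron", "soma_id", "soma"),
    }

    def dotpath_to_sql(cls, path, attr):
        """Translate e.g. Neuron.soma.diameter into a SQL join query."""
        table, fk, joined = SCHEMA[(cls, path)]
        return (f"SELECT {joined}.{attr} FROM {table} "
                f"JOIN {joined} ON {table}.{fk} = {joined}.id")

    print(dotpath_to_sql("Neuron", "soma", "diameter"))
    # SELECT soma.diameter FROM neuron JOIN soma ON neuron.soma_id = soma.id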
53

A Design Space Exploration Process for Large Scale, Multi-Objective Computer Simulations

Zentner, John Marc 07 July 2006 (has links)
The primary contributions of this thesis are associated with the development of a new method for exploring the relationships between inputs and outputs for large scale computer simulations. Primarily, the proposed design space exploration procedure uses a hierarchical partitioning method to help mitigate the curse of dimensionality often associated with the analysis of large scale systems. Closely coupled with the use of a partitioning approach is the problem of how to partition the system. This thesis therefore also introduces a quantitative method developed to aid the user in finding a set of good partitions for creating partitioned metamodels of large scale systems. The new hierarchically partitioned metamodeling scheme, the lumped parameter model (LPM), was developed to address two primary limitations of current partitioning methods for large scale metamodeling. First, the LPM was formulated to negate the need to rely on variable redundancies between partitions to account for potentially important interactions. By using a hierarchical structure, the LPM addresses the impact of neglected direct interactions by indirectly accounting for them via the interactions that occur between the lumped parameters in intermediate- to top-level mappings. Second, the LPM was developed to allow for hierarchical modeling of black-box analyses that lack available intermediaries around which to partition the system. The second contribution of this thesis is a graph-based partitioning method for large scale, black-box systems. This method combines the graph and sparse matrix decomposition methods used by the electrical engineering community with the results of a screening test to create a quantitative method for partitioning large scale, black-box systems. An analysis of variance (ANOVA) of the screening-test results can be used to determine the sparse nature of the large scale system. With this information known, the sparse matrix and graph theoretic partitioning schemes can then be used to create candidate sets of partitions to use with the lumped parameter model.
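A minimal sketch of the screening-driven partitioning idea, assuming the networkx library; the significant pairs below are invented, and the thesis additionally uses sparse-matrix decomposition techniques to generate candidate partitions.

    import networkx as nx

    def partition_from_screening(n_inputs, significant_pairs):
        """Partition inputs using the sparsity revealed by a screening
        test. significant_pairs lists (i, j) input pairs whose ANOVA
        interaction effect exceeded a chosen significance threshold."""
        g = nx.Graph()
        g.add_nodes_from(range(n_inputs))
        g.add_edges_from(significant_pairs)
        # Each connected component can be metamodeled separately; the
        # lumped parameter model then couples them hierarchically.
        return [sorted(c) for c in nx.connected_components(g)]

    # Inputs 0-2 interact, 3-4 interact, 5 is isolated.
    print(partition_from_screening(6, [(0, 1), (1, 2), (3, 4)]))
    # [[0, 1, 2], [3, 4], [5]]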
54

Extraktion und Identifikation von Entitäten in Textdaten im Umfeld der Enterprise Search / Extraction and identification of entities in text data in the field of enterprise search

Brauer, Falk January 2010 (has links)
Automatic information extraction (IE) from unstructured texts enables new ways to access relevant information and analyze text content that go well beyond existing technologies for keyword-based document search. However, the development of systems for extracting machine-readable data from text still requires the implementation of domain-specific extraction programs. In particular in the field of enterprise search (information retrieval in an enterprise setting), where a large number of heterogeneous document types exists, it is often necessary to develop ad-hoc program modules for extracting business-relevant entities and to combine them with generic modules of monolithic IE systems. This is particularly critical, as a new IE system potentially has to be developed from scratch for each individual application. This dissertation examines efficient methods to develop and execute IE systems in the context of enterprise search, and effective methods to exploit pre-existing structured data in the business context for the extraction and identification of business-relevant entities in documents. The basis of this work is a novel platform for composing IE systems by describing the data flow between generic and application-specific IE modules. The platform particularly supports the development and reuse of generic IE modules and offers greater flexibility and expressive power than previous methods. A document-processing technique developed in this work interprets the data exchange between IE modules as data streams, enabling extensive parallelization of individual modules. The autonomous execution of the modules significantly speeds up the processing of individual documents and improves response times, e.g. for extraction services; previous approaches addressed only the average document throughput, e.g. by running distributed instances of an IE system. Information extraction in the context of enterprise search differs from extraction from the World Wide Web in that structured reference data, e.g. corporate databases or terminologies, is usually available and often also describes the relationships among entities. Entities in the business environment furthermore have particular characteristics: one class of relevant entities, such as product identifiers, follows formation patterns that are not always known beforehand but can be inferred from known sample entities, so that unknown entities can be extracted. The designators of the other class have a more descriptive character (concatenations of descriptive words), and the corresponding references in texts can vary, often making the identification of such entities difficult. To make IE-system development more efficient in such cases, this dissertation studies a technique that learns effective regular expressions for extracting unknown entities from sample entities alone, minimizing the manual effort. Various generalization and specialization heuristics recognize patterns at different abstraction levels and thereby balance precision against recall in the extraction; unlike known rule-learning techniques in information extraction, the approach requires no annotated document corpus. As a third contribution, a method for identifying entities that are predefined by graph-structured reference data is examined. The presented algorithm goes beyond exact string comparison between text and reference data set: it exploits partial matches and the relationships among entities for identification and disambiguation, and is superior to previous approaches in both precision and recall.
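A minimal sketch of inferring a regular expression from sample entities alone, using one invented column-wise generalization heuristic over fixed-length samples; the thesis combines several generalization and specialization heuristics to balance precision and recall.

    import re

    def generalize(samples):
        """Generalize each character position of fixed-length samples
        to a character class (a deliberately crude heuristic)."""
        assert len({len(s) for s in samples}) == 1, "fixed-length sketch"
        pattern = []
        for chars in zip(*samples):
            if all(c.isdigit() for c in chars):
                pattern.append(r"\d")
            elif all(c.isalpha() for c in chars):
                pattern.append("[A-Za-z]")
            else:
                pattern.append(re.escape(chars[0]))  # no generalization
        return "".join(pattern)

    rx = generalize(["AB-1234", "XQ-9911", "KL-0007"])  # invented ids
    print(rx)                                  # [A-Za-z][A-Za-z]\-\d\d\d\d
    print(bool(re.fullmatch(rx, "ZZ-4242")))   # True: matches unseen entity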
55

Applications of graph-based codes in networks: analysis of capacity and design of improved algorithms

Vellambi, Badri Narayanan 25 August 2008 (has links)
The conception of turbo codes by Berrou et al. has created a renewed interest in modern graph-based codes. Several encouraging results that have come to light since then have fortified the role these codes will play as potential solutions for present and future communication problems. This work focuses on both practical and theoretical aspects of graph-based codes, and the thesis can be broadly categorized into three parts. The first part focuses on the design of practical graph-based codes of short lengths. While both low-density parity-check codes and rateless codes have been shown to be asymptotically optimal under the message-passing (MP) decoder, the performance of short-length codes from these families under MP decoding is starkly sub-optimal. This work first addresses the structural characterization of stopping sets to understand this sub-optimality. Using this characterization, a novel improved decoder that offers several orders of magnitude improvement in bit-error rate is introduced. Next, a novel scheme for the design of a good rate-compatible family of punctured codes is proposed. The second part of the thesis aims at establishing these codes as a good tool for developing reliable, energy-efficient and low-latency data dissemination schemes in networks. The problems of broadcasting in wireless multihop networks and of unicast in delay-tolerant networks are investigated. In both cases, rateless coding is seen to offer an elegant means of achieving the goals of the chosen communication protocols; the ratelessness and the randomness of the encoding process make this scheme particularly suited to such network applications. The final part of the thesis investigates an application of a specific class of codes, called network codes, to finite-buffer wired networks. This part of the work aims at establishing a framework for the theoretical study and understanding of finite-buffer networks. The proposed method extends existing results into an iterative Markov chain-based technique for general acyclic wired networks. The framework not only estimates the capacity of such networks, but also provides a means to monitor network traffic and packet drop rates on the various links of the network.
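To illustrate the Markov chain-based analysis of a finite-buffer link, a toy single-link sketch follows; the per-slot arrival and departure probabilities are invented, and the thesis's iterative technique couples such chains across general acyclic networks.

    import numpy as np

    def stationary_occupancy(K, pa, pd):
        """Stationary occupancy distribution of one link with buffer
        size K, per-slot arrival probability pa and departure
        probability pd, as a birth-death Markov chain."""
        P = np.zeros((K + 1, K + 1))
        for s in range(K + 1):
            up = pa * (1 - pd) if s < K else 0.0    # arrival only
            down = pd * (1 - pa) if s > 0 else 0.0  # departure only
            if s < K: P[s, s + 1] = up
            if s > 0: P[s, s - 1] = down
            P[s, s] = 1.0 - up - down
        evals, evecs = np.linalg.eig(P.T)           # left eigenvector
        pi = np.real(evecs[:, np.argmax(np.real(evals))])
        return pi / pi.sum()

    pi = stationary_occupancy(K=5, pa=0.4, pd=0.5)
    print(pi)                        # buffer occupancy distribution
    print(pi[-1] * 0.4 * (1 - 0.5))  # approximate packet-drop rate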
56

Adaptive Graph-Based Algorithms for Conditional Anomaly Detection and Semi-Supervised Learning

Valko, Michal 01 August 2011 (has links) (PDF)
We develop graph-based methods for semi-supervised learning based on label propagation on a data similarity graph. When data are abundant or arrive in a stream, problems of computation and data storage arise for any graph-based method. We propose a fast approximate online algorithm that solves for the harmonic solution on an approximate graph. We show, both empirically and theoretically, that good behavior can be achieved by collapsing nearby points into a set of local representative points that minimize distortion. Moreover, we regularize the harmonic solution to achieve better stability properties. We also present graph-based methods for detecting conditional anomalies and apply them to the identification of unusual clinical actions in hospitals. Our hypothesis is that patient-management actions that are unusual with respect to past patients may be due to errors, and that it is worthwhile to raise an alert if such a condition is encountered. Conditional anomaly detection extends the standard unconditional anomaly detection framework, but also faces new problems known as fringe and isolated points. We devise novel nonparametric graph-based methods, relying on graph connectivity analysis and the soft harmonic solution, to tackle these problems. Finally, we conduct an extensive evaluation study of our conditional anomaly methods with 15 experts in critical care.
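The harmonic solution referred to above has a standard closed form; a small sketch on a toy chain graph follows (the thesis's contribution is a fast online approximation on a quantized graph, not this exact batch computation).

    import numpy as np

    def harmonic_solution(W, labeled_idx, labels):
        """Harmonic label propagation on a similarity graph W:
        f_u = (D_uu - W_uu)^{-1} W_ul f_l for unlabeled vertices u."""
        n = W.shape[0]
        u = np.setdiff1d(np.arange(n), labeled_idx)
        L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian
        f_u = np.linalg.solve(
            L[np.ix_(u, u)],
            W[np.ix_(u, labeled_idx)] @ np.asarray(labels, float))
        return u, f_u

    # Chain graph 0-1-2-3 with the endpoints labeled -1 and +1.
    W = np.array([[0, 1, 0, 0], [1, 0, 1, 0],
                  [0, 1, 0, 1], [0, 0, 1, 0]], float)
    u, f = harmonic_solution(W, np.array([0, 3]), [-1, 1])
    print(dict(zip(u.tolist(), f)))  # {1: -1/3, 2: 1/3}: smooth interpolation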
57

Constrained graph-based semi-supervised learning with higher order regularization / Aprendizado semissupervisionado restrito baseado em grafos com regularização de ordem elevada

Celso Andre Rodrigues de Sousa 10 August 2017 (has links)
Graph-based semi-supervised learning (SSL) algorithms have been widely studied in the last few years. Most of these algorithms were designed from unconstrained optimization problems using a Laplacian regularizer term as smoothness functional, in an attempt to reflect the intrinsic geometric structure of the data's marginal distribution. Although a number of recent research papers still focus on unconstrained methods for graph-based SSL, a recent statistical analysis showed that many of these algorithms may be unstable on transductive regression. Therefore, we focus on providing new constrained methods for graph-based SSL. We begin by analyzing the regularization framework of existing unconstrained methods. Then, we incorporate two normalization constraints into the optimization problems of three of these methods, and show that the resulting optimization problems have closed-form solutions. By generalizing one of these constraints to any distribution, we provide generalized methods for constrained graph-based SSL. The proposed methods have a more flexible regularization framework than the corresponding unconstrained methods: they can deal with any graph Laplacian and use higher order regularization, which is effective on general SSL tasks. To show the effectiveness of the proposed methods, we provide comprehensive experimental analyses, subdivided into two parts. In the first part, we evaluate existing graph-based SSL algorithms on time series data to find their weaknesses. In the second part, we evaluate the proposed constrained methods against six state-of-the-art graph-based SSL algorithms on benchmark data sets. Since the widely used best-case analysis may hide useful information concerning the SSL algorithms' performance with respect to parameter selection, we used recently proposed empirical evaluation models to evaluate our results. Our results show that our methods outperform the competing methods on most parameter settings and graph construction methods, although we found a few experimental settings in which our methods showed poor performance. To facilitate the reproduction of our results, the source codes, data sets, and experimental results are freely available.
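A toy sketch of a constrained solver with higher order (Laplacian power) regularization and a single sum constraint handled in closed form via a Lagrange multiplier; the constraint form and parameters are simplified stand-ins for the normalization constraints actually studied in the thesis.

    import numpy as np

    def constrained_ssl(W, y, p=2, mu=1.0, c=0.0):
        """Minimize ||f - y||^2 + mu * f^T L^p f subject to sum(f) = c,
        where L is the graph Laplacian and p > 1 gives higher order
        regularization. Closed-form solution."""
        L = np.diag(W.sum(axis=1)) - W
        A = np.eye(len(y)) + mu * np.linalg.matrix_power(L, p)
        a = np.ones(len(y))
        f_star = np.linalg.solve(A, y)       # unconstrained minimizer
        Ainv_a = np.linalg.solve(A, a)
        lam = (a @ f_star - c) / (a @ Ainv_a)
        return f_star - lam * Ainv_a         # satisfies sum(f) = c

    W = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
    y = np.array([1.0, 0.0, -1.0])           # +1 / unlabeled / -1
    f = constrained_ssl(W, y)
    print(f, f.sum())                        # scores sum to exactly 0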
58

A graph-based framework for comparing curricula

Marshall, Linda January 2014 (has links)
The problem addressed in this thesis was identified in a real-life context in which an attempt was made to re-constitute a BSc Computer Science degree programme. The curriculum was modelled on the ACM/IEEE Computing Curriculum of 2001 and was further required to comply with accreditation requirements as defined by ABET's Computing Accreditation Commission. Relying on a spreadsheet, the curriculum was iteratively and manually evaluated against the ACM/IEEE curriculum specification, and a need was identified to automate, or at least semi-automate, this process. In this thesis a generalisation of the problem is presented. Curricula are modelled as directed graphs (digraphs) in which vertices represent curriculum elements such as topics, knowledge areas, knowledge units, year-levels or modules, and edges represent dependencies between these vertices, such as membership of a grouping or pre-requisites. The task of curriculum comparison then abstracts to a task of digraph comparison. A framework, the Graph Comparison Framework, is proposed. The framework comprises components which are used to guide the digraph comparison process. The so-called Graph Trans-morphism algorithm component is the only mandatory component in the framework. The algorithm converts the information from one of the digraphs being compared into the structure of the other; this conversion enables the graphs to be compared as graph isomorphisms. All digraphs are modelled as sets of triples, making it possible to subtract one digraph from another using the set-minus operator. The resulting difference sets are used by components defined in the framework to quantify and visualise the differences. By modelling curricula as digraphs and applying the framework to the digraphs, it is possible to compare curricula. This application of the framework to a real-world problem forms the applications-research part of the thesis, in which domain knowledge of curriculum design is applied to the curriculum being developed in order to improve it. / Thesis (PhD)--University of Pretoria, 2014. / Computer Science / unrestricted
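The triple representation makes the set-minus comparison almost immediate; a tiny sketch with invented curriculum elements:

    # A curriculum digraph as a set of (source, edge label, target)
    # triples, following the thesis's triple representation.
    acm = {("KA:Algorithms", "contains", "KU:Sorting"),
           ("KU:Sorting", "covers", "Topic:Quicksort"),
           ("KU:Sorting", "covers", "Topic:Heapsort")}
    ours = {("KA:Algorithms", "contains", "KU:Sorting"),
            ("KU:Sorting", "covers", "Topic:Quicksort")}

    print(acm - ours)  # in the reference curriculum but missing locally
    print(ours - acm)  # local additions beyond the reference: set()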
59

Localisation et cartographie simultanées par optimisation de graphe sur architectures hétérogènes pour l’embarqué / Embedded graph-based simultaneous localization and mapping on heterogeneous architectures

Dine, Abdelhamid 05 October 2016 (has links)
Simultaneous Localization And Mapping (SLAM) is the process that allows a robot exploring an unknown environment to build a map of it while simultaneously determining its own position on that map. In this work, we are interested in graph-based SLAM, which uses a graph to represent and solve the SLAM problem: graph optimization consists in finding the graph configuration (trajectory and map) that best matches the constraints introduced by the sensor measurements. Graph optimization has high algorithmic complexity and requires substantial computational and memory resources, particularly for exploring large areas, which limits the use of graph-based SLAM in real-time embedded systems. This thesis contributes to reducing the computational complexity of graph-based SLAM. Our approach is based on two complementary axes: data representation in memory, and implementation on embedded heterogeneous architectures. On the first axis, we propose an incremental data structure to efficiently represent and then optimize the graph. On the second axis, we explore the use of recent heterogeneous architectures to speed up graph-based SLAM, and propose an implementation model suited to embedded applications, highlighting the advantages and disadvantages of the evaluated architectures, namely GPU-based and FPGA-based systems-on-chip.
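A toy 1-D pose-graph optimization loop showing the least-squares structure that makes graph-based SLAM expensive; real systems optimize over SE(2)/SE(3) poses with information matrices, so everything below is a simplified sketch.

    import numpy as np

    def optimize_pose_graph(n, constraints, iters=10):
        """Gauss-Newton on poses x_0..x_{n-1} with constraints
        (i, j, z) meaning 'x_j - x_i should equal z'."""
        x = np.zeros(n)
        for _ in range(iters):
            H = np.zeros((n, n)); b = np.zeros(n)
            for i, j, z in constraints:
                e = (x[j] - x[i]) - z             # edge residual
                H[i, i] += 1; H[j, j] += 1
                H[i, j] -= 1; H[j, i] -= 1
                b[i] += e; b[j] -= e
            H[0, 0] += 1                          # gauge: anchor x_0
            x += np.linalg.solve(H, b)            # Gauss-Newton step
        return x

    # Odometry steps of +1, plus a loop closure saying x2 - x0 = 1.8.
    print(optimize_pose_graph(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.8)]))
    # ~[0.0, 0.93, 1.87]: the loop-closure residual is spread over all edges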
60

Visual Scripting for AR Board Games in Thrymd

Lind, Fredrik January 2021 (has links)
In recent years, the interest in Augmented Reality (AR) applications for entertainment and productivity has grown. One company exploring this technology is LAZER WOLF STUDIOS, the developers behind Thrymd: an AR-driven board-games platform powered by the Unity engine. This paper details the development of a visual scripting framework meant to provide end users with a means of developing their own games for the platform without requiring significant programming or background knowledge. A graph-based visual language was implemented in a custom Unity editor window in order to maintain a familiar and consistent feel for users. The graph consists of a series of branching, interconnected nodes which pass data between one another and execute in succession. The graph is serialized as a Unity asset and can easily be interacted with through regular C# scripts. A small number of nodes were implemented, but for the system to be viable, more are needed; for that reason, extensibility was a core ideal, and creating new node types must be fast and painless. As with any script layer, performance is generally worse than that of compiled code, and further work is needed to improve the user experience.
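A minimal sketch of the execute-in-succession node-graph idea described above; the classes and node types are invented for illustration and are not Thrymd's actual API.

    # Nodes compute outputs from the outputs of their input nodes and
    # run in succession, mirroring a graph of interconnected nodes.
    class Node:
        def __init__(self, fn, *inputs):
            self.fn, self.inputs = fn, inputs
            self.value = None

        def execute(self):
            self.value = self.fn(*(n.value for n in self.inputs))

    def run_graph(nodes):
        for node in nodes:       # assumed already topologically sorted
            node.execute()
        return nodes[-1].value

    dice = Node(lambda: 4)                        # "roll die" stub
    bonus = Node(lambda: 2)                       # board-modifier stub
    move = Node(lambda a, b: a + b, dice, bonus)  # "move piece" node
    print(run_graph([dice, bonus, move]))         # 6

Serialization to a Unity asset and branching (conditional execution paths) would sit on top of a core like this.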
