61

A language-independent static checking system for coding conventions

Mount, Sarah January 2013 (has links)
Despite decades of research aiming to ameliorate the difficulties of creating software, programming still remains an error-prone task. Much work in Computer Science deals with the problem of specification, or writing the right program, rather than the complementary problem of implementation, or writing the program right. However, many desirable software properties (such as portability) are obtained via adherence to coding standards, and therefore fall outside the remit of formal specification and automatic verification. Moreover, code inspections and manual detection of standards violations are time consuming. To address these issues, this thesis describes Exstatic, a novel framework for the static detection of coding standards violations. Unlike many other static checkers, Exstatic can be used to examine code in a variety of languages, including program code, in-line documentation, markup languages and so on. This means that checkable coding standards adhered to by a particular project or institution can be handled by a single tool. Consequently, a major challenge in the design of Exstatic has been to invent a way of representing code from a variety of source languages. Therefore, this thesis describes ICODE, which is an intermediate language suitable for representing code from a number of different programming paradigms. To substantiate the claim that ICODE is a universal intermediate language, a proof strategy has been developed: for a number of different programming paradigms (imperative, declarative, etc.), a proof is constructed to show that a semantics-preserving translation exists from an exemplar language (such as IMP or PCF) to ICODE. The usefulness of Exstatic has been demonstrated by the implementation of a number of static analysers for different languages. This includes a checker for technical documentation written in Javadoc which validates documents against the Sun Microsystems (now Oracle) Coding Conventions and a checker for HTML pages against a site-specific standard. A third system is targeted at a variant of the Python language, written by the author, called python-csp, based on Hoare's Communicating Sequential Processes.
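The checking approach described above can be illustrated with a minimal sketch: convention rules run over a simple language-neutral intermediate representation rather than over source text, so one tool can check code from several languages. The IR node shape and the rule API below are assumptions for illustration, not the actual ICODE design.

```python
# Hypothetical sketch of a language-independent convention checker in the
# spirit of Exstatic/ICODE. The Node IR and the rule functions are
# illustrative assumptions, not the thesis's actual design.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                      # e.g. "function", "doc_comment", "module"
    name: str = ""
    children: list = field(default_factory=list)

def walk(node):
    """Yield every node in the IR tree, depth-first."""
    yield node
    for child in node.children:
        yield from walk(child)

def check_snake_case(node):
    """Convention: function names must be lower_snake_case."""
    if node.kind == "function" and node.name != node.name.lower():
        return f"function '{node.name}' is not snake_case"

def check_documented(node):
    """Convention: every function must carry a doc comment."""
    if node.kind == "function" and not any(
        c.kind == "doc_comment" for c in node.children
    ):
        return f"function '{node.name}' has no doc comment"

def run_checks(root, rules):
    """Apply every rule to every node; collect violation messages."""
    return [msg for n in walk(root) for rule in rules
            if (msg := rule(n)) is not None]

tree = Node("module", "m", [
    Node("function", "GetValue"),                          # violates both rules
    Node("function", "set_value", [Node("doc_comment")]),  # clean
])
print(run_checks(tree, [check_snake_case, check_documented]))
```

Because the rules see only the IR, the same `run_checks` driver could serve a Java front-end, an HTML front-end, and so on, which is the point of a shared intermediate language.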
62

Αυτόματη μετατροπή από φωνητική σε ορθογραφική γραφή / Automatic conversion from phonetic to orthographic writing

Ρεντζεπόπουλος, Παναγιώτης 11 September 2009 (has links)
- / -
63

Static guarantees for coordinated components : a statically typed composition model for stream-processing networks

Penczek, Frank January 2012 (has links)
Does your program do what it is supposed to be doing? Without running the program, providing an answer to this question is much harder if the language does not support static type checking. Of course, even if compile-time checks are in place, only certain errors will be detected: compilers can only second-guess the programmer’s intention. But type-based techniques go a long way in assisting programmers to detect errors in their computations earlier on. The question of whether a program behaves correctly is even harder to answer if the program consists of several parts that execute concurrently and need to communicate with each other. Compilers of standard programming languages are typically unable to infer information about how the parts of a concurrent program interact with each other, especially where explicit threading or message passing techniques are used. Hence, correctness guarantees are often conspicuously absent. Concurrency management in an application is a complex problem. However, it is largely orthogonal to the actual computational functionality that a program realises. Because of this orthogonality, the problem can be considered in isolation. The largest possible separation between concurrency and functionality is achieved if a dedicated language is used for concurrency management, i.e. an additional program manages the concurrent execution and interaction of the computational tasks of the original program. Such an approach not only helps programmers to focus on the core functionality and on the exploitation of concurrency independently, it also allows for a specialised analysis mechanism geared towards concurrency-related properties. This dissertation shows how an approach that completely decouples coordination from computation is a very supportive substrate for inferring static guarantees of the correctness of concurrent programs.
Programs are described as streaming networks connecting independent components that implement the computations of the program, where the network describes the dependencies and interactions between components. A coordination program only requires an abstract notion of computation inside the components and may therefore be used as a generic and reusable design pattern for coordination. A type-based inference and checking mechanism analyses such streaming networks and provides comprehensive guarantees of the consistency and behaviour of coordination programs. Concrete implementations of components are deliberately left out of the scope of coordination programs: Components may be implemented in an external language, for example C, to provide the desired computational functionality. Based on this separation, a concise semantic framework allows for step-wise interpretation of coordination programs without requiring concrete implementations of their components. The framework also provides clear guidance for the implementation of the language. One such implementation is presented and hands-on examples demonstrate how the language is used in practice.
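The kind of static guarantee described above can be sketched very simply: if each component declares the record fields it consumes and produces, a composition check can reject networks in which a downstream component could receive records lacking fields it needs. The record-as-field-set typing below is a deliberate simplification for illustration, not the dissertation's actual type system.

```python
# Illustrative sketch: statically checking a streaming network by comparing
# the field sets components consume and produce. The Component shape and
# the ".." pipe naming are assumptions made for this example.
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    consumes: frozenset   # field names required on incoming records
    produces: frozenset   # field names present on outgoing records

def check_pipe(upstream, downstream):
    """Compose two components, rejecting ill-typed connections."""
    missing = downstream.consumes - upstream.produces
    if missing:
        raise TypeError(
            f"{downstream.name} needs fields {sorted(missing)} "
            f"that {upstream.name} does not produce")
    return Component(f"{upstream.name}..{downstream.name}",
                     upstream.consumes, downstream.produces)

parse   = Component("parse",  frozenset({"raw"}),         frozenset({"key", "val"}))
reduce_ = Component("reduce", frozenset({"key", "val"}),  frozenset({"key", "sum"}))
render  = Component("render", frozenset({"key", "sum"}),  frozenset({"html"}))

net = check_pipe(check_pipe(parse, reduce_), render)   # well-typed network
print(net.name)
```

Note that the check never looks inside a component: only the declared interface matters, which mirrors the abstract notion of computation that lets coordination programs stay independent of component implementations (in C or any other language).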
64

Génération modulaire de grammaires formelles / Modular generation of formal grammars

Petitjean, Simon 11 December 2014 (has links)
The work presented in this thesis aims at facilitating the development of resources for natural language processing. Such resources take very diverse forms, because of the existence of several levels of linguistic description (syntax, morphology, semantics, ...) and of several formalisms proposed for describing natural languages at each of these levels. Because these formalisms involve different types of structures, a single description language is not enough: a dedicated domain-specific language (DSL) must be created for every formalism, and a new tool implementing this language must be built, which is a long and complex task. For this reason, this thesis proposes a method for modularly assembling, and adapting, development frameworks specific to linguistic resource generation tasks. The frameworks created are built around the fundamental concepts of the XMG (eXtensible MetaGrammar) approach, namely a description language allowing the modular definition of abstractions over linguistic structures, together with their non-deterministic combination (that is, by means of the logical operators of conjunction and disjunction). The method rests on assembling a description language from reusable bricks, according to a single specification file. The entire processing chain for the DSL so defined is assembled automatically from that same specification. This approach was first validated by recreating the XMG tool from elementary bricks. Collaborations with linguists also led to the assembly of compilers for describing the morphology of Ikota (a Bantu language) and semantics (by means of frame theory).
65

An architectural framework for assessing quality of experience of web applications

Radwan, Omar Amer January 2017 (has links)
Web-based service providers have long been required to deliver high quality services in accordance with standards and customer requirements. Increasingly, however, providers are required to think beyond service quality and develop a deeper understanding of their customers’ Quality of Experience (QoE). Whilst models exist that assess the QoE of Web Applications, significant challenges remain in defining QoE factors from a Web engineering perspective, as well as mapping between so called ‘objective’ and ‘subjective’ factors of relevance. Specifically, the following challenges are considered as general fundamental problems for assessing QoE: (1) quantifying the relationship between QoE factors; (2) predicting QoE as well as dealing with the limited data available in relation to subjective factors; (3) optimising and controlling QoE; and (4) perceiving QoE. In response, this research presents a novel model, called QoEWA (and associated software instantiation), that integrates factors through Key Performance Indicators (KPIs) and Key Quality Indicators (KQIs). The mapping is incorporated into a correlation model that assesses QoE, in particular that of Web Applications, with a consideration of defining the factors in terms of quality requirements derived from web architecture. The data resulting from the mapping is used as input for the proposed model to develop artefacts that quantify, predict, optimise and perceive QoE. The development of QoEWA is framed and guided by the Design Science Research (DSR) approach, with the purpose of enabling providers to make more informed decisions regarding QoE and/or to optimise resources accordingly. The evaluation of the designed artefacts is based on a build-and-evaluate cycle that provides feedback and a better understanding of the utilised solutions.
The key artefacts are developed and evaluated through four iterations: Iteration 1 utilises the Actual-Versus-Target approach to quantify QoE, and applies statistical analysis to evaluate the outputs. Iteration 2 utilises a Machine Learning (ML) approach to predict QoE, and applies statistical tests to compare the performance of ML algorithms. Iteration 3 utilises the Multi-Objective Optimisation (MOO) approach to optimise QoE and control the balance between resources and user experience. Iteration 4 utilises the Agent-Based Modelling approach to perceive and gain insights into QoE. The design of iteration 4 is rigorously tested using verified and validated models.
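An Actual-Versus-Target quantification of the kind used in iteration 1 can be illustrated with a toy computation: each KPI's measured value is compared against its target, and the normalised attainments are aggregated into a single score. The KPI names, targets, and the averaging rule are assumptions made for this sketch; QoEWA's actual correlation model is considerably richer.

```python
# Hypothetical Actual-Versus-Target sketch: normalise each KPI's measured
# value against its target, then aggregate. All figures are made up.
def kpi_attainment(actual, target, lower_is_better=True):
    """Return attainment in [0, 1]; 1.0 means the target is met or beaten."""
    if lower_is_better:                      # e.g. latency, error rate
        return min(1.0, target / actual) if actual > 0 else 1.0
    return min(1.0, actual / target)         # e.g. availability, throughput

kpis = {
    # name: (actual, target, lower_is_better)
    "page_load_s":  (3.0,   2.0,   True),
    "error_rate":   (0.01,  0.02,  True),
    "availability": (0.995, 0.999, False),
}

scores = {k: kpi_attainment(a, t, low) for k, (a, t, low) in kpis.items()}
qoe_score = sum(scores.values()) / len(scores)
print(scores)
print(round(qoe_score, 3))
```

In a fuller model the per-KPI attainments would feed the KPI-to-KQI mapping rather than a flat average, but the gap-against-target idea is the same.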
66

Castor : a constraint-based SPARQL engine with active filter processing / Castor : un moteur SPARQL basé sur les contraintes avec exploitation actif de filtres

Le Clement de Saint-Marcq, Vianney 16 December 2013 (has links)
SPARQL is the standard query language for graphs of data in the Semantic Web. Evaluating queries is closely related to graph matching problems, and has been shown to be NP-hard. State-of-the-art SPARQL engines solve queries with traditional relational database technology. Such an approach works well for simple queries that provide a clearly defined starting point in the graph. However, queries encompassing the whole graph and involving complex filtering conditions do not scale well. In this thesis we propose to solve SPARQL queries with Constraint Programming (CP). CP solves a combinatorial problem by exploiting the constraints of the problem to prune the search tree when looking for solutions. This technique has been shown to work well for graph matching problems. We reformulate the SPARQL semantics by means of constraint satisfaction problems (CSPs). Based on this denotational semantics, we propose an operational semantics that can be used by off-the-shelf CP solvers. Off-the-shelf CP solvers are not designed to handle the huge domains that come with Semantic Web databases, though. To handle large databases, we introduce Castor, a new SPARQL engine embedding a specialised lightweight CP solver. Special care has been taken to avoid as much as possible data structures and algorithms whose time or space complexity is proportional to the database size. Experimental evaluations on well-known benchmarks show the feasibility and efficiency of the approach. Castor is competitive with state-of-the-art SPARQL engines on simple queries, and outperforms them on complex queries where filters can be actively exploited during the search.
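The core reformulation above — a basic graph pattern as a constraint satisfaction problem — can be sketched with a toy solver: each triple pattern constrains the values its variables may take, and a backtracking search extends bindings only when they stay consistent. This is plain backtracking with consistency checking, far simpler than Castor's specialised propagation, and the graph is made up.

```python
# Toy illustration: matching a SPARQL-like basic graph pattern as a CSP.
# The data and query are invented; Castor's solver is far more refined.
DATA = {                                  # tiny RDF-like graph
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "age", "30"),
    ("bob", "age", "32"),
}

def is_var(term):
    return term.startswith("?")

def match(pattern, binding):
    """Yield bindings extending `binding` for one triple pattern."""
    for triple in DATA:
        new = dict(binding)
        ok = True
        for p, v in zip(pattern, triple):
            if is_var(p):
                if new.setdefault(p, v) != v:   # constraint: bindings agree
                    ok = False
                    break
            elif p != v:                        # constant must match exactly
                ok = False
                break
        if ok:
            yield new

def solve(patterns, binding=None):
    """Backtracking search over the conjunction of triple patterns."""
    binding = binding or {}
    if not patterns:
        yield binding
        return
    for extended in match(patterns[0], binding):
        yield from solve(patterns[1:], extended)

query = [("?x", "knows", "?y"), ("?y", "knows", "?z")]
print(list(solve(query)))
```

A real CP formulation would additionally propagate FILTER constraints during the search to prune domains early, which is the "active filter processing" the title refers to.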
67

PrologPF : parallel logic and functions on the Delphi machine

Lewis, Ian January 1998 (has links)
PrologPF is a parallelising compiler targeting a distributed system of general purpose workstations connected by a relatively low performance network. The source language extends standard Prolog with the integration of higher-order functions. The execution of a compiled PrologPF program proceeds in a similar manner to standard Prolog, but uses oracles in one of two modes. An oracle represents the sequence of clauses used to reach a given point in the problem search tree, and the same PrologPF executable can be used to build oracles, or follow oracles previously generated. The parallelisation strategy used by PrologPF proceeds in two phases, which this research shows can be interleaved. An initial phase searches the problem tree to a limited depth, recording the discovered incomplete paths. In the second phase these paths are allocated to the available processors in the network. Each processor follows its assigned paths and fully searches the referenced subtree, sending solutions back to a control processor. This research investigates the use of the technique with a one-time partitioning of the problem and no further scheduling communication, and with the recursive application of the partitioning technique to effect dynamic work reassignment. For a problem requiring all solutions to be found, execution completes when all the distributed processors have completed the search of their assigned subtrees. If one solution is required, the execution of all the path processors is terminated when the control processor receives the first solution. The presence of the extra-logical Prolog predicate cut in the user program conflicts with the use of oracles to represent valid open subtrees. PrologPF promotes the use of higher-order functional programming as an alternative to the use of cut. The combined language shows that functional support can be added as a consistent extension to standard Prolog.
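The two-phase oracle technique described above can be shown schematically: an oracle is just the sequence of clause choices taken to reach a node in the search tree, so phase 1 explores to a fixed depth and records the open paths, and phase 2 replays each path and exhaustively searches the subtree beneath it. The "program" below is an abstract binary branching search with an invented goal, not real Prolog.

```python
# Schematic sketch of PrologPF-style oracles: record paths to a limited
# depth, then replay each path and search its subtree. The branching
# structure and goal test are toy assumptions.
def children(path):
    """Clause choices available at a node (toy: binary tree of depth 4)."""
    return [0, 1] if len(path) < 4 else []

def is_solution(path):
    """Toy goal: a complete path whose choices sum to 2."""
    return len(path) == 4 and sum(path) == 2

def phase1_collect(depth, path=()):
    """Search to a limited depth, returning oracles for open subtrees."""
    if len(path) == depth:
        return [path]
    return [o for c in children(path)
            for o in phase1_collect(depth, path + (c,))]

def phase2_follow(oracle):
    """Replay an oracle deterministically, then fully search its subtree."""
    def search(path):
        if is_solution(path):
            yield path
        for c in children(path):
            yield from search(path + (c,))
    return list(search(oracle))

oracles = phase1_collect(depth=2)       # open paths to distribute to workers
solutions = [s for o in oracles for s in phase2_follow(o)]
print(len(oracles), len(solutions))
```

In the real system each oracle would be shipped to a different workstation, whose executable follows the recorded choices without search and only then begins exploring; replaying a path is cheap, so almost no scheduling communication is needed after the one-time partitioning.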
68

Spatio-temporal logic for the analysis of biochemical models

Banks, Christopher Jon January 2015 (has links)
Process algebra, formal specification, and model checking are all well studied techniques in the analysis of concurrent computer systems. More recently these techniques have been applied to the analysis of biochemical systems which, at an abstract level, have similar patterns of behaviour to concurrent processes. Process algebraic models and temporal logic specifications, along with their associated model-checking techniques, have been used to analyse biochemical systems. In this thesis we develop a spatio-temporal logic, the Logic of Behaviour in Context (LBC), for the analysis of biochemical models. That is, we define and study the application of a formal specification language which not only expresses temporal properties of biochemical models, but expresses spatial or contextual properties as well. The logic can be used to express, or specify, the behaviour of a model when it is placed into the context of another model. We also explore the types of properties which can be expressed in LBC, various algorithms for model checking LBC - each an improvement on the last, the implementation of the computational tools to support model checking LBC, and a case study on the analysis of models of post-translational biochemical oscillators using LBC. We show that a number of interesting and useful properties can be expressed in LBC and that it is possible to express highly useful properties of real models in the biochemistry domain, with practical application. Statements in LBC can be thought of as expressing computational experiments which can be performed automatically by means of the model checker. Indeed, many of these computational experiments can be higher-order meaning that one succinct and precise specification in LBC can represent a number of experiments which can be automatically executed by the model checker.
69

Abstract interpretation of domain-specific embedded languages

Backhouse, Kevin Stuart January 2002 (has links)
A domain-specific embedded language (DSEL) is a domain-specific programming language with no concrete syntax of its own. Defined as a set of combinators encapsulated in a module, it borrows the syntax and tools (such as type-checkers and compilers) of its host language; hence it is economical to design, introduce, and maintain. Unfortunately, this economy is counterbalanced by a lack of room for growth. DSELs cannot match sophisticated domain-specific languages that offer tools for domain-specific error-checking and optimisation. These tools are usually based on syntactic analyses, so they do not work on DSELs. Abstract interpretation is a technique ideally suited to the analysis of DSELs, due to its semantic, rather than syntactic, approach. It is based upon the observation that analysing a program is equivalent to evaluating it over an abstract semantic domain. The mathematical properties of the abstract domain are such that evaluation reduces to solving a mutually recursive set of equations. This dissertation shows how abstract interpretation can be applied to a DSEL by replacing it with an abstract implementation of the same interface; evaluating a program with the abstract implementation yields an analysis result, rather than an executable. The abstract interpretation of DSELs provides a foundation upon which to build sophisticated error-checking and optimisation tools. This is illustrated with three examples: an alphabet analyser for CSP, an ambiguity test for parser combinators, and a definedness test for attribute grammars. Of these, the ambiguity test for parser combinators is probably the most important example, due to the prominence of parser combinators and their rather conspicuous lack of support for the well-known LL(k) test. In this dissertation, DSELs and their signatures are encoded using the polymorphic lambda calculus.
This allows the correctness of the abstract interpretation of DSELs to be proved using the parametricity theorem: safety is derived for free from the polymorphic type of a program. Crucially, parametricity also solves a problem commonly encountered by other analysis methods: it ensures the correctness of the approach in the presence of higher-order functions.
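The central move — swapping in an abstract implementation of the combinator interface so that "evaluating" a grammar yields an analysis instead of a parser — can be sketched compactly. Below, the abstract value for each combinator is a pair (nullable, FIRST set), and an alternation whose branches' FIRST sets overlap is flagged as an LL(1) conflict. The combinator names are generic stand-ins, not the thesis's actual interface, and this checks only FIRST/FIRST conflicts, a fragment of the full LL(k) test.

```python
# Sketch of abstract interpretation of a parser-combinator DSEL: the same
# grammar expression, evaluated with abstract combinators, produces an
# ambiguity analysis rather than a parser. Names and scope are assumptions.
conflicts = []           # LL(1) conflicts discovered during "evaluation"

def lit(c):
    """Abstract literal: not nullable, FIRST = {c}."""
    return (False, {c})

def seq(a, b):
    """Abstract sequencing: FIRST(b) joins in only if a is nullable."""
    an, af = a
    bn, bf = b
    return (an and bn, af | bf if an else af)

def alt(a, b):
    """Abstract alternation: overlapping FIRST sets mean ambiguity."""
    an, af = a
    bn, bf = b
    if af & bf or (an and bn):
        conflicts.append(sorted(af & bf))
    return (an or bn, af | bf)

# grammar 1: a ('b' | 'c')        -- LL(1)-safe
g1 = seq(lit("a"), alt(lit("b"), lit("c")))
# grammar 2: ('a' 'b') | ('a' 'c') -- FIRST/FIRST conflict on 'a'
g2 = alt(seq(lit("a"), lit("b")), seq(lit("a"), lit("c")))

print(conflicts)
```

The point of the dissertation's parametricity argument is that a grammar built against the combinator *signature* cannot observe which implementation it is running on, so results computed by the abstract implementation are guaranteed to be safe descriptions of the concrete one.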
70

Algorithmes et structures de données compactes pour la visualisation interactive d’objets 3D volumineux / Algorithms and compact data structures for interactive visualization of gigantic 3D objects

Jamin, Clément 25 September 2009 (has links)
Progressive compression methods are now mature (obtained rates are close to theoretical bounds) and interactive visualization of huge meshes has been a reality for a few years. However, even if the combination of compression and visualization is often mentioned as a perspective, very few papers deal with this problem, and the files created by visualization algorithms are often much larger than the original ones. In fact, compression favors a small file size to the detriment of fast data access, whereas visualization methods focus on rendering speed: the two goals oppose and compete with each other. Starting from an existing progressive compression method incompatible with selective and interactive refinement and usable only on modestly sized meshes, this thesis tries to reconcile lossless compression and visualization by proposing new algorithms and data structures which radically reduce the size of the objects while supporting fast, interactive navigation. In addition to this double capability, our method works out-of-core and can handle meshes containing several hundred million vertices. Furthermore, it has the advantage of handling any n-dimensional simplicial complex, from triangle soups to volumetric meshes.
