  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Speculative evaluation of functional programs

Laney, Robin Charles January 2001 (has links)
No description available.
2

Linearity and laziness

Wakeling, David January 1990 (has links)
No description available.
3

Left-Incompatible Term Rewriting Systems and Functional Strategy

SAKAI, Masahiko 12 1900 (has links)
No description available.
4

A comparison of a Lazy PageRank and variants for common graph structures

Aziz Ali, Barkat January 2018 (has links)
The thesis first reviews the mathematics behind Google's PageRank, the state-of-the-art webpage ranking algorithm. The main focus of the thesis is a lazy PageRank and its variants, all related to a random walk. After observing that they can be computed using the very same algorithm, the thesis derives expressions for the lazy PageRank and its variants on some common graph structures, for example a line graph, a complete graph, and a complete bipartite graph (including a star graph), and uses these to gain some understanding of how the PageRank behaves when a network evolves, for example through a contraction or an expansion of a graph's nodes or links.
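A common formulation of the lazy walk (an assumption here; the thesis may differ in details): the walk stays put with probability 1/2 and otherwise moves according to the ordinary transition matrix $P$, so the lazy PageRank vector $\pi$ satisfies

```latex
\tilde{P} = \tfrac{1}{2}\,(I + P), \qquad
\pi^{\top} = c\,\pi^{\top}\tilde{P} + (1 - c)\,v^{\top},
```

where $c$ is the damping factor and $v$ the personalization vector. Substituting $\tilde{P}$ and rearranging gives $\pi^{\top} = \tfrac{c}{2-c}\,\pi^{\top}P + \tfrac{2(1-c)}{2-c}\,v^{\top}$, that is, the lazy PageRank is an ordinary PageRank with the adjusted damping factor $c' = c/(2-c)$, consistent with the abstract's observation that the variants can be computed with the very same algorithm.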
5

A Theoretical Study of the Synergy and Lazy Annotation Algorithms

Jayaram, Sampath January 2013 (has links) (PDF)
Given a program with assertions, the assertion checking problem is to tell whether there is an execution of the program that violates one of the assertions. One approach to this problem is to explore different paths towards assertion violations, and to learn "blocking" conditions whenever a path is blocked from reaching the violations. The Synergy algorithm of Gulavani et al. [FSE 2006] and the Lazy Annotation algorithm of McMillan [CAV 2010] are two recent algorithms that follow this approach to assertion checking. Each technique has its own advantages. Synergy uses concrete tests, which are very cheap compared to theorem prover calls. The tests also help by giving us the place to perform the refinement (called the frontier) when an abstraction is too coarse. Synergy uses partition refinement while maintaining its abstraction. The Lazy Annotation algorithm essentially partitions each location into regions that are safe and unsafe. The safe regions are those from which we cannot reach the error states, and the unsafe regions are the remaining ones. The annotations that this algorithm maintains correspond to the safe regions. The advantage that annotations have over partition refinement is that annotations can recover from irrelevant predicates used for annotating, whereas once a partition is refined with an irrelevant predicate, it cannot recover from it. In this work, we make a theoretical study of the algorithms mentioned above. The aim of the study is to answer questions like: Is one algorithm provably better than the other, in terms of the best-case execution (counting the number of refinement steps) on input programs? Is the termination behavior of one always better than the other? We show that the Synergy and Lazy Annotation algorithms are incomparable, i.e., neither of them is provably better than the other in terms of their best-case execution times. We also show how we can view the two algorithms on a common ground, in the sense that we show how to translate a snapshot of one algorithm into a snapshot of the other. This allows us to import the heuristics of one algorithm into the other, and thereby propose new and potentially improved versions of these algorithms. By viewing them on a common ground, we are also able to view the final proofs generated by the algorithms in either representation. We go on to study the proposed new versions of Synergy and Lazy Annotation, comparing their best-case running times and their termination behavior. We show that the following pairs of algorithms are incomparable: Mod-Syn (Lazy Annotation-style refinement imported into Synergy) and Synergy, Mod-Syn and Lazy Annotation, and Synergy and SEAL (Synergy heuristics imported into Lazy Annotation). We show that the SEAL algorithm always performs better than the Lazy Annotation algorithm.
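One way to read the shared structure described above is as a single "explore and block" loop that both algorithms instantiate differently. The following minimal Haskell sketch is illustrative only; all names and types are my own and correspond to neither paper exactly.

```haskell
-- Generic "explore and block" skeleton; 'path' and 'fact' are abstract.
-- Like the real algorithms, it need not terminate (the problem is undecidable).
search :: (path -> [path])        -- expand a path to its successors
       -> (path -> Bool)          -- does this path reach an assertion violation?
       -> (fact -> path -> Bool)  -- does a learned fact block this path?
       -> (path -> Maybe fact)    -- learn a blocking fact from a dead end
       -> [path]                  -- frontier of unexplored paths
       -> [fact]                  -- blocking facts learned so far
       -> Either [fact] path      -- Left: safety proof, Right: counterexample
search expand violates blockedBy learn frontier facts =
  case [p | p <- frontier, not (any (`blockedBy` p) facts)] of
    []       -> Left facts        -- every path blocked: the facts form a proof
    (p : ps)
      | violates p -> Right p     -- a concrete path violating an assertion
      | otherwise  ->
          case learn p of
            Just f  -> search expand violates blockedBy learn ps (f : facts)
            Nothing -> search expand violates blockedBy learn (expand p ++ ps) facts
```

Under this reading, Synergy's refined partitions and Lazy Annotation's safe-region annotations both roughly play the role of the facts list, which is the sense in which a snapshot of one algorithm can be translated into a snapshot of the other.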
6

An Efficient Functional Library for Finite Automata

Říha, Jakub January 2017 (has links)
Finite automata are an important mathematical abstraction, and in formal verification they are used as a concise representation of regular languages. Operations often used on finite automata in this setting are testing universality and language inclusion. A naive approach to implementing these operations leads to an explicit determinization of the automata, which can be costly and undesirable. There is, however, a more advanced method for performing these operations, called the Antichains algorithm, which avoids such an explicit determinization. This work shows how finite automata operations can be efficiently implemented in Haskell and compares several approaches to their implementation. The obtained results are compared with VATA, an imperative implementation of a finite automata library.
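For context, here is a minimal sketch of the Antichains idea for universality checking, in my own Haskell rendering of the classic forward algorithm of De Wulf et al. (not the thesis's actual library, whose API is not shown in the abstract): explore macro-states of the subset construction on demand, but keep only subset-minimal ones, since a smaller macro-state rejects every word its supersets reject.

```haskell
import qualified Data.Set as Set
import           Data.Set (Set)

type State = Int

data NFA = NFA
  { initials :: Set State
  , finals   :: Set State
  , delta    :: State -> Char -> Set State
  , alphabet :: [Char]
  }

-- One step of the subset construction, done on demand.
post :: NFA -> Set State -> Char -> Set State
post a s c = Set.unions [delta a q c | q <- Set.toList s]

-- A macro-state containing no accepting state witnesses a rejected word.
rejecting :: NFA -> Set State -> Bool
rejecting a s = Set.null (s `Set.intersection` finals a)

-- Antichain insertion: if some known s' is a subset of s, then s is redundant
-- (s' already rejects every word s would); otherwise add s, dropping its supersets.
insertMinimal :: Set State -> [Set State] -> Maybe [Set State]
insertMinimal s antichain
  | any (`Set.isSubsetOf` s) antichain = Nothing
  | otherwise = Just (s : filter (not . (s `Set.isSubsetOf`)) antichain)

universal :: NFA -> Bool
universal a = go [start] [start]
  where
    start = initials a
    go [] _ = True                 -- no rejecting macro-state is reachable
    go (s : frontier) antichain
      | rejecting a s = False      -- some word is rejected by every run
      | otherwise     = go frontier' antichain'
      where
        step (fr, ac) u = case insertMinimal u ac of
                            Nothing  -> (fr, ac)
                            Just ac' -> (u : fr, ac')
        (frontier', antichain') =
          foldl step (frontier, antichain) [post a s c | c <- alphabet a]

main :: IO ()
main = do
  -- accepts every word over "ab": universal
  let allWords = NFA (Set.singleton 0) (Set.singleton 0)
                     (\_ _ -> Set.singleton 0) "ab"
  -- accepts exactly the words ending in 'a': rejects "", so not universal
  let endsInA  = NFA (Set.singleton 0) (Set.singleton 1)
                     (\_ c -> Set.fromList (if c == 'a' then [0, 1] else [0])) "ab"
  print (universal allWords)  -- True
  print (universal endsInA)   -- False
```

Because only subset-minimal macro-states are retained, the search can conclude without ever building the full determinized automaton.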
7

Interoperation for Lazy and Eager Evaluation

Faught, William Jeffrey 01 May 2011 (has links)
Programmers forgo existing solutions to problems in other programming languages where software interoperation proves too cumbersome; they remake solutions, rather than reuse them. To facilitate reuse, interoperation must resolve language incompatibilities transparently. To address part of this problem, we present a model of computation that resolves incompatible lazy and eager evaluation strategies using dual notions of evaluation contexts and values to mirror the lazy evaluation strategy in the eager one. This method could be extended to resolve incompatible evaluation strategies for any pair of languages with common expressions.
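The dual-contexts model itself is too involved for a short excerpt, but the boundary behaviour it addresses can be sketched with a minimal, hypothetical encoding (my own illustration, not the thesis's formal model): an eager host represents a lazy value as an explicit memoised thunk, and forces it whenever the value crosses back to the eager side.

```haskell
import Data.IORef

-- A lazy value inside an eager host: either still suspended or already forced.
newtype Lazy a = Lazy (IORef (Either (IO a) a))

-- Lazy side: suspend a computation without running it.
delay :: IO a -> IO (Lazy a)
delay m = Lazy <$> newIORef (Left m)

-- Eager -> lazy boundary: wrap an already-computed value as a forced thunk.
toLazy :: a -> IO (Lazy a)
toLazy v = Lazy <$> newIORef (Right v)

-- Lazy -> eager boundary: force the thunk, memoising the result.
force :: Lazy a -> IO a
force (Lazy ref) = do
  st <- readIORef ref
  case st of
    Right v -> return v            -- already evaluated: reuse it
    Left m  -> do
      v <- m                       -- run the suspension exactly once
      writeIORef ref (Right v)
      return v

main :: IO ()
main = do
  t <- delay (putStrLn "evaluating..." >> return (6 * 7))
  putStrLn "thunk created, nothing evaluated yet"
  x <- force t                     -- prints "evaluating..."
  y <- force t                     -- memoised: prints nothing
  print (x + y)                    -- 84
```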
8

Declarative modelling of parameter setting

Nordström, Didrik January 2015 (has links)
The parameter setting problem is part of a complex, automated process for customizing Scania's products, primarily trucks and buses. The problem is modelled as a stateless, acyclic graph of pure functions and variables. A subset of a deterministic, concurrent, demand-driven, declarative programming model is implemented under the Microsoft .NET framework. The implementation is evaluated based on suitability for solving the parameter setting problem, computational performance, and general applicability within the organization. It is concluded that the model reduces the complexity of the parameter setting problem, mainly due to demand-driven (lazy) execution. The implementation scales as expected in time and memory on sequential programs with respect to input size. Parallel programs benefit partly from parallelism, but bottlenecks in the .NET framework seem to limit the speedup. The general applicability of the programming model within the organization is potentially high, and there are many extensions that could be added in the future, such as constraint programming.
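A minimal sketch of what demand-driven evaluation buys in this setting, written in Haskell rather than .NET and with invented parameter names: each variable of the acyclic graph becomes a lazy binding, computed at most once and only if a demanded output depends on it.

```haskell
import Debug.Trace (trace)

-- Two derived parameters over an acyclic graph of pure functions.
-- 'trace' makes it visible which nodes are actually computed.
parameters :: Double -> Double -> (Double, Double)
parameters axleLoad gearRatio = (torqueLimit, topSpeed)
  where
    engineTorque = trace "computing engineTorque" (120 * gearRatio)
    torqueLimit  = trace "computing torqueLimit"  (min engineTorque (axleLoad * 0.8))
    topSpeed     = trace "computing topSpeed"     (250 / gearRatio)

main :: IO ()
main = do
  let (limit, _speed) = parameters 9000 3.5
  -- Demands torqueLimit (and hence engineTorque); topSpeed is never demanded,
  -- so "computing topSpeed" is never printed.
  print limit
```

Running it prints "computing torqueLimit" and "computing engineTorque" but never "computing topSpeed", which is the complexity reduction the abstract attributes to lazy execution: unused parts of the parameter graph cost nothing.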
9

Lazy User Theory and Interpersonal Communication Networks

Hayes, James Dwight 09 May 2012 (has links)
No description available.
10

Machine learning applied to speech language understanding: towards semi-supervised and self-evolving systems

Gotab, Pierre 04 December 2012 (has links)
Automatic speech language understanding lies at the confluence of two broad research fields: automatic speech recognition and machine learning. One of the main problems in this domain is obtaining a corpus large enough to train an effective statistical model. Speech corpora for training understanding models require substantial human involvement, notably for transcription and semantic annotation. Their production cost is therefore high, and they are available only in limited quantities. This thesis mainly aims at reducing this need for human intervention in two ways: first, by reducing the amount of annotated corpus needed to build a model, using semi-supervised learning methods (Self-Training, Co-Training and Active Learning); and second, by exploiting the answers of the system's end users to improve the understanding model. This last point addresses another problem faced by automatic speech understanding systems, also considered in this thesis: the need to regularly adapt their models to changes in end-user behaviour or to modifications of the services offered by the system.
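As a hedged sketch of the first of those methods, here is a generic Self-Training loop; the classifier, the confidence measure, and all names below are stand-ins, not the thesis's actual system.

```haskell
-- Self-Training: repeatedly train on the labelled pool, label the unlabelled
-- examples the model is confident about, and fold them back into the pool.
selfTrain :: ([(x, y)] -> model)          -- train a model on labelled data
          -> (model -> x -> (y, Double))  -- predict a label with a confidence
          -> Double                       -- confidence threshold
          -> Int                          -- maximum number of rounds
          -> [(x, y)]                     -- labelled pool
          -> [x]                          -- unlabelled pool
          -> model
selfTrain train _       _   0 labelled _ = train labelled
selfTrain train predict thr n labelled unlabelled
  | null confident = model                -- nothing confident enough: stop
  | otherwise      = selfTrain train predict thr (n - 1)
                               (labelled ++ confident) rest
  where
    model     = train labelled
    guesses   = [(x, predict model x) | x <- unlabelled]
    confident = [(x, y) | (x, (y, c)) <- guesses, c >= thr]
    rest      = [x | (x, (_, c)) <- guesses, c < thr]
```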
