About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Online engineering: on the nature of open computational systems

Fredriksson, Martin January 2004
Doctoral dissertation, Ronneby: Tekniska högskolan (Institute of Technology), 2004.
2

Multiple cue object recognition

Furesjö, Fredrik January 2005
Nature is rich in examples of how vision can be used successfully for sensing and perceiving the world, and how the gathered information can be utilized to perform a variety of different objectives. The key to successful vision is the internal representations of the visual agent, which enable the agent to successfully perceive properties of the world. Humans perceive a multitude of properties of the world through the visual sense, such as motion, shape, texture, and color. In addition, we perceive the world to be structured into objects which are clustered into different classes - categories. Such a rich perception of the world requires many different internal representations that can be combined in different ways. So far, much work in computer vision has focused on finding new and, from some perspective, better descriptors, and not much work has been done on how to combine different representations.

This thesis takes a purposive approach to object recognition in the context of a visual agent. From this viewpoint, situatedness - the context and task of the agent - becomes central. Further, a multiple-feature representation of objects is proposed, since a single feature may be neither pertinent to the task at hand nor robust in a given context.

The first contribution of this thesis is an evaluation of single-feature object representations that have previously been used in computer vision for object recognition. In the evaluation, different interest operators combined with different photometric descriptors are tested together with a shape representation and a statistical representation of the whole appearance. Further, a color representation inspired by human color perception is presented and used in combination with the shape descriptor to increase the robustness of object recognition in cluttered scenes.

The last part of this thesis, which contains the second contribution, presents a vision system for object recognition based on the multiple-feature object representation, together with an architecture of the agent that utilizes the proposed representation. By taking a system perspective on object recognition, we consider the representations' performance under a given context and task. The scenario considered here is derived from a fetch scenario performed by a service robot.
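The central proposal, combining several cues because no single feature is both pertinent and robust, can be illustrated with a small score-fusion sketch. This is a generic illustration of cue combination, not the specific representations (interest operators, photometric descriptors, shape, color) evaluated in the thesis; all matcher names and weights below are hypothetical.

```python
# A minimal sketch of multiple-cue combination by score fusion.
# The matchers and weights are hypothetical placeholders, not the
# representations evaluated in the thesis.

def combine_cues(candidates, matchers, weights):
    """Rank object hypotheses by a weighted sum of per-cue scores.

    candidates: object hypotheses (e.g. model identifiers)
    matchers:   dict cue_name -> function(candidate) -> score in [0, 1]
    weights:    dict cue_name -> relative weight of that cue
    """
    best_obj, best_score = None, float("-inf")
    for obj in candidates:
        # Each cue votes independently, so recognition degrades
        # gracefully when one cue fails in a given context.
        score = sum(w * matchers[cue](obj) for cue, w in weights.items())
        if score > best_score:
            best_obj, best_score = obj, score
    return best_obj

# Usage with hypothetical shape and color matchers:
# best = combine_cues(models,
#                     {"shape": shape_score, "color": color_score},
#                     {"shape": 0.6, "color": 0.4})
```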
3

Semantic Web Service Composition via Logic-based Program Synthesis

Rao, Jinghai January 2004
The ability to efficiently select and integrate inter-organizational, heterogeneous Web services at runtime is an important requirement for Web service provision. In a Web service application, if no single existing Web service can satisfy the functionality required by the user, there should be a program or an agent that automatically combines existing services in order to fulfill the request.

The aim of this thesis is to consider the Web service composition problem from the viewpoint of logic-based program synthesis, and to propose an agent-based framework for supporting the composition process in a scalable and flexible manner. The approach described in this thesis uses Linear Logic-based theorem proving to assist and automate the composition of Semantic Web services. The approach uses a Semantic Web service language (DAML-S) for the external presentation of Web services, while, internally, the services are presented by extralogical axioms and proofs in Linear Logic. Linear Logic, as a resource-conscious logic, enables us to formally capture the concurrent features of Web services (including parameters, states, and non-functional attributes). The approach uses a process calculus to present the process model of the composite service. The process calculus is attached to the Linear Logic inference rules in the style of type theory, so the process model for a composite service can be generated directly from the complete proof. We introduce a set of subtyping rules that defines a valid dataflow for composite services.

The subtyping rules used for semantic reasoning are presented as Linear Logic inference figures. The composition system has been implemented on top of a multi-agent architecture, AGORA. The agent-based design enables the different components of the Web service composition system, such as the theorem prover, semantic reasoner, and translator, to integrate with each other in a loosely coupled manner.

We conclude by discussing how this approach meets the main challenges in Web service composition. First, it is autonomous, so users are not required to analyze the huge number of available services manually. Second, it has good scalability and flexibility, so composition performs well in a dynamic environment. Third, it addresses the heterogeneity problem, because Semantic Web information is used for matching and composing Web services.

We argue that Linear Logic theorem proving, combined with semantic reasoning, offers a practical approach to the composition of Web services. Linear Logic, as a logic for specifying concurrent programming, provides higher expressive power for modeling Web services than classical logic. Further, the agent-based design enables the different components of the Web service composition system to integrate with each other in a loosely coupled manner.

The main contributions of this thesis are summarized as follows. First, a generic framework is developed for presenting an abstract process of automated Semantic Web service composition. Second, a specific system based on the generic platform has been developed; the system focuses on the translation between the internal and external languages, together with the extraction of the process model from the proof. Third, applications of the subtyping inference rules used for semantic reasoning are discussed. Fourth, an agent architecture is developed as the platform for Web service provision and composition.
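To make the proof-search view of composition concrete, here is a minimal sketch that treats each service as a rule consuming a set of input types and producing output types, and chains services by forward search from the available inputs to the requested output. This is a crude stand-in for the Linear Logic theorem proving in the thesis: plain sets ignore the resource consumption that Linear Logic tracks precisely, and all service names are hypothetical.

```python
# A minimal sketch of composition as resource-style proof search.
# Plain sets stand in for the Linear Logic resources of the thesis,
# so consumption is not tracked; service names are hypothetical.

def compose(services, available, goal):
    """Find a sequence of services that derives `goal` from `available`.

    services:  list of (name, inputs, outputs), inputs/outputs as sets
    available: set of initially available data types
    goal:      data type requested by the user
    """
    plan, known = [], set(available)
    progress = True
    while goal not in known and progress:
        progress = False
        for name, inputs, outputs in services:
            # Fire any service whose inputs are derivable and whose
            # outputs add something new (greedy forward chaining; the
            # plan may contain services not strictly needed).
            if inputs <= known and not outputs <= known:
                plan.append(name)
                known |= outputs
                progress = True
    return plan if goal in known else None

# Usage with hypothetical services:
# compose([("geocode", {"address"}, {"coords"}),
#          ("weather", {"coords"}, {"forecast"})],
#         {"address"}, "forecast")   # -> ["geocode", "weather"]
```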
4

Probing-Based Approaches to Bandwidth Measurements and Network Path Emulation

Melander, Bob January 2003
The current Internet design is based on a number of principles and paradigms. A central theme of these is that the interior of the network should provide a very simple but generic (and optimized) service, namely best-effort packet forwarding. For the users, a consequence of this design is that performance properties and the behavior of the network can typically only be determined by performing probing-based measurements.

Probing means that special packets are injected into the network by one computer while another one receives them and collects statistics about them. Using this statistical information, it is possible to make inferences about the network and its characteristics. One important characteristic is the bandwidth that is available on the network path between two computers in the Internet. When communicating with each other, the computers should not persistently send data at a higher rate than the available bandwidth, since that will eventually lead to packet loss due to overload.

This thesis considers the problem of measuring the available bandwidth of a network path using probing-based methods. To that end, we propose a new method called TOPP. It is designed to probe the network non-intrusively so that the measurements do not jeopardize the stability of the network. In addition to estimating the available bandwidth, TOPP also provides an estimate of the capacity of the link that limits the available bandwidth.

In the second part of this thesis we propose and evaluate different models for trace-driven network path emulation. We also investigate how to probe a network path for the purpose of trace-driven emulation. We show that relatively simple trace-driven models work well for non-responsive UDP-based flows. However, for adaptive TCP flows, these simple models do not seem to perform well. We also find that, for the trace-driven models studied, strongly bursty probing schemes (which includes probing by TCP) have undesirable properties and should be avoided.
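For intuition about the analysis step behind such a method: under an idealized fluid model of a single bottleneck with capacity C and available bandwidth A, the ratio of offered to measured probe rate stays near 1 while the offered rate is below A, and grows linearly in the offered rate with slope 1/C above it. The sketch below recovers A and C from (offered, measured) samples under those assumptions; the actual probing and filtering machinery of TOPP is not reproduced here.

```python
# A minimal sketch of the TOPP-style analysis step under an idealized
# fluid model of a single bottleneck. The (offered, measured) rate
# pairs are assumed to come from probe trains sent at increasing rates.

def topp_estimate(samples, threshold=1.05):
    """Estimate available bandwidth A and bottleneck capacity C.

    samples: list of (offered_rate, measured_rate), offered increasing.
    Below A the ratio offered/measured stays near 1; above A it grows
    linearly in the offered rate with slope 1/C (fluid model).
    """
    congested = [(o, o / m) for o, m in samples if o / m > threshold]
    if len(congested) < 2:
        return None, None          # probes never congested the path
    A = congested[0][0]            # first offered rate exceeding A
    # Least-squares slope of the ratio versus the offered rate.
    n = len(congested)
    mean_o = sum(o for o, _ in congested) / n
    mean_r = sum(r for _, r in congested) / n
    slope = (sum((o - mean_o) * (r - mean_r) for o, r in congested) /
             sum((o - mean_o) ** 2 for o, _ in congested))
    C = 1.0 / slope                # ratio grows by 1/C per rate unit
    return A, C
```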
5

A Generic Approach to Schedulability Analysis of Real-Time Systems

Fersman, Elena January 2003
This thesis presents a framework for the design, analysis, and implementation of embedded systems. We adopt a model of timed automata extended with asynchronous processes, i.e., tasks triggered by events. A task is an executable program characterized by its worst-case execution time and deadline, and possibly other parameters such as priorities for scheduling. The main idea is to associate each location of an automaton with a task (or a set of tasks). A transition leading to a location denotes an event triggering the tasks, and the clock constraint on the transition specifies the possible arrival times of the event. This yields a model for real-time systems expressive enough to describe concurrency and synchronization, and tasks with (or without) combinations of timing, precedence, and resource constraints, which may be periodic, sporadic, preemptive, and (or) non-preemptive. We believe that the model may serve as a bridge between scheduling theory and automata-theoretic approaches to system modelling and analysis.

Our main result is that the schedulability checking problem for this model is decidable. To our knowledge, this is the first general decidability result on dense-time models for real-time scheduling that does not assume that preemptions occur only at integer time points. The proof is based on a decidable class of updatable automata: timed automata with subtraction, in which clocks may be updated by subtractions within a bounded zone. As the second contribution, we show that for the fixed-priority scheduling strategy, the schedulability checking problem can be solved by reachability analysis on standard timed automata, using only two extra clocks in addition to the clocks used in the original model to describe task arrival times. The analysis can be done in a manner similar to response-time analysis in classic rate-monotonic scheduling. We believe that this is the optimal solution to the problem. The third contribution is an extension of the above results to deal with precedence and resource constraints. We present an operational semantics for the model, and show that the related schedulability analysis problem can be solved efficiently using the same techniques. Finally, to demonstrate the applicability of the framework, we have modelled, analysed, and synthesised the control software for a production cell. The presented results have been implemented in the Times tool for automated schedulability analysis and code synthesis.
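For reference, the classic response-time analysis that the fixed-priority result is compared against computes, for each task i, the smallest fixed point of the recurrence R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j. Below is a minimal sketch of that standard scheduling-theory iteration, not of the timed-automata construction contributed by the thesis.

```python
# A minimal sketch of classic fixed-priority response-time analysis,
# the scheduling-theory baseline the abstract compares against; the
# thesis' timed-automata construction itself is not reproduced here.
import math

def response_time(tasks, i):
    """Worst-case response time of task i, or None if unschedulable.

    tasks: list of (C, T) = (worst-case execution time, period),
           sorted by decreasing priority; deadlines assumed <= periods.
    Solves the fixed point R = C_i + sum_{j<i} ceil(R/T_j) * C_j.
    """
    C_i, T_i = tasks[i]
    R = C_i
    while True:
        R_next = C_i + sum(math.ceil(R / T_j) * C_j
                           for C_j, T_j in tasks[:i])
        if R_next == R:
            return R               # fixed point reached
        if R_next > T_i:
            return None            # misses its deadline
        R = R_next

# Usage: tasks = [(1, 4), (2, 6), (3, 13)]  # (C, T), highest prio first
# [response_time(tasks, i) for i in range(3)]  # -> [1, 3, 10]
```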
6

Efficient and Flexible Characterization of Data Locality through Native Execution Sampling

Berg, Erik January 2005
Data locality is central to modern computer designs. The widening gap between processor speed and memory latency has introduced the need for a deep hierarchy of caches. Thus, the performance of an application depends to a large extent on the amount of data locality the caches can exploit. Some data locality comes naturally from the way most programs are written and the way their data is allocated in memory. Compilers further try to create data locality through loop transformations and optimized data layout. Different ways of writing a program and/or laying out its data may improve an application's locality even more. However, it is far from obvious how such a locality optimization can be achieved, especially since the optimizing compiler may have left the optimization job half done. Thus, efficient tools are needed to guide software developers on their quest for data locality.

The main contribution of this dissertation is a novel sample-based method for analyzing the data locality of an application. Very sparse data is collected during a single execution of the studied application. The sparse sampling adds minimal overhead to the execution time, which enables complex applications running realistic data sets to be studied. The architecturally independent information collected during the execution is fed to a mathematical cache model for predicting the cache miss ratio. The sparsely collected data can be used to characterize the application's data locality with respect to almost any possible cache hierarchy, such as complicated multiprocessor memory systems with multilevel cache hierarchies. Any combination of cache size, cache line size, and degree of sharing can be modeled. Each newly modeled design point takes only a fraction of a second to evaluate, even though the application from which the sampled data was collected may have executed for hours. This makes the tool usable not just for software developers, but also for hardware developers who need to evaluate a huge memory-system design space.

We also discuss different ways of presenting data-locality information to a programmer in an intuitive and easily interpreted way. Some of the locality metrics we introduce utilize the flexibility of our algorithm and its ability to vary different cache parameters in one run. The dissertation also presents several prototype implementations of tools for profiling the memory system.
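As an illustration of feeding sampled data to a mathematical cache model, the sketch below shows one such fixed-point formulation, assuming a cache of L lines with random replacement and assuming the sampled reuse distances are already available; the sparse, low-overhead collection of those samples is the part the dissertation contributes.

```python
# A minimal sketch of a reuse-distance-based cache model in the spirit
# the abstract describes, assuming a cache of L lines with random
# replacement. The sampled reuse distances d_i (memory accesses between
# two touches of the same cache line) are assumed to be given.

def miss_ratio(reuse_distances, L, iters=100):
    """Solve r = mean_i [1 - (1 - 1/L)^(d_i * r)] by fixed-point iteration.

    With miss ratio r, about d*r misses occur during a reuse window of
    d accesses; each miss evicts a uniformly random line, so the probed
    line survives each miss with probability (1 - 1/L).
    """
    n = len(reuse_distances)
    r = 0.5  # initial guess
    for _ in range(iters):
        r = sum(1.0 - (1.0 - 1.0 / L) ** (d * r)
                for d in reuse_distances) / n
    return r

# Usage: the same samples serve many cache sizes, so each new design
# point is cheap to evaluate, as the abstract points out:
# for L in (2**k for k in range(10, 21)):
#     print(L, miss_ratio(sampled_distances, L))
```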
7

Memory System Design for Chip-Multiprocessors

Karlsson, Martin January 2005
The continued decrease in transistor size and the increasing delay of wires relative to transistor switching speeds have led to the development of chip multiprocessors (CMPs). The introduction of CMPs presents new challenges and trade-offs to computer architects. In particular, architects must now balance the allocation of chip resources to each processor against the number of processors on a chip. This thesis deals with some of the implications this new kind of processor has for the memory system and proposes several new designs based on the resource constraints of CMPs. In addition, it includes contributions on simulation techniques and workload characterization, which are used to guide the design of new processors and systems.

The memory system is the key to performance in contemporary computer systems, and this thesis targets multiple aspects of memory system performance. To conserve bandwidth, and thereby packaging costs, a fine-grained data fetching strategy is presented that exploits characteristics of runahead execution. Two cache organizations are proposed: the RASCAL cache organization, which targets capacity misses through selective caching, and the Elbow cache, which targets conflict misses by extending a skewed cache with a relocation algorithm. Finally, to reduce complexity and cost when designing multi-chip systems, a new trap-based system architecture is described.

When designing a new processor or memory system, simulations are used to compare design alternatives. It is therefore very important to simulate workloads that accurately reflect the future use of the system. This thesis includes the first architectural characterization studies of Java-based middleware, a workload that is an important design consideration for the next generation of processors and servers.
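To illustrate the kind of organization the Elbow cache builds on, here is a hedged sketch of a two-way skewed-associative cache, where each bank indexes with a different hash, extended with a one-step relocation of the victim into its alternative slot. The hash functions and the always-evict-bank-0 policy are deliberate simplifications for illustration, not the thesis design.

```python
# A minimal sketch of a two-way skewed-associative cache with a
# one-step relocation on conflict, in the spirit of the Elbow cache;
# the hash functions and eviction choice are hypothetical.

class SkewedCache:
    def __init__(self, sets):
        self.sets = sets
        self.way = [dict(), dict()]  # index -> tag, one dict per bank

    def _index(self, bank, tag):
        # Each bank indexes with a different hash, so two lines that
        # conflict in one bank rarely conflict in the other.
        h = tag if bank == 0 else (tag ^ (tag >> 7)) * 0x9E3779B1
        return h % self.sets

    def access(self, tag):
        idx = [self._index(b, tag) for b in (0, 1)]
        for b in (0, 1):
            if self.way[b].get(idx[b]) == tag:
                return True                      # hit
        # Miss: before evicting bank 0's victim, try to relocate it
        # into its alternative slot in bank 1 ("elbowing" it aside).
        victim = self.way[0].get(idx[0])
        if victim is not None:
            alt = self._index(1, victim)
            if alt not in self.way[1]:
                self.way[1][alt] = victim
        self.way[0][idx[0]] = tag
        return False
```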
8

Thread-based mobility for a distributed dataflow language

Havelka, Dragan January 2005
Strong mobility enables migration of entire computations, combining code, data, and execution state (such as the stack and program counter) between sites of computation. This is in contrast to weak mobility, where migration is confined to just code and data. Strong mobility is essential for many applications where reconstruction of execution states is difficult or even impossible. Typical application areas are load balancing, reduction of network latency and traffic, and resource-related migration, to name a few.

This thesis presents a model, programming abstractions, an implementation, and an evaluation of thread-based strong mobility. The model extends a distributed programming model based on automatic synchronization via dataflow variables. The programming abstractions capture various migration scenarios, which differ in how the migration source and destination relate to the site initiating the migration. The implementation is based on replication of concurrent lightweight threads between sites, controlled by migration managers. The model is implemented in the Mozart programming system. The first version is complete, and work concerning resource rebinding is still in progress.
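The underlying synchronization mechanism, the single-assignment dataflow variable, can be sketched compactly: reading blocks until some thread binds the variable, so threads synchronize automatically through the data they share. Below is a minimal sketch using Python threads; the thesis itself works in the Mozart programming system, with thread migration layered on top of this model.

```python
# A minimal sketch of a single-assignment (dataflow) variable: readers
# block until the variable is bound, giving automatic synchronization
# between threads. Migration of the threads themselves is the part the
# thesis adds on top of this model.
import threading

class DataflowVar:
    def __init__(self):
        self._lock = threading.Lock()
        self._bound = threading.Event()
        self._value = None

    def bind(self, value):
        with self._lock:           # make single-assignment check atomic
            if self._bound.is_set():
                raise ValueError("dataflow variable already bound")
            self._value = value
            self._bound.set()      # wake all blocked readers

    def read(self):
        self._bound.wait()         # block until some thread binds
        return self._value

# Usage: a consumer thread calling x.read() suspends transparently
# until a producer thread calls x.bind(42).
```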
9

Generic distribution support for programming systems

Klintskog, Erik January 2005
This dissertation provides constructive proof, through the implementation of a middleware, that distribution transparency is practical, generic, and extensible. Fault-tolerant distributed services can be developed using the failure-detection abilities of the middleware. By generic we mean that the middleware can be used for many different programming languages and paradigms. Distribution for each kind of language entity is done in terms of consistency protocols, which guarantee that the semantics of the entities are preserved in a distributed setting. The middleware allows new consistency protocols to be added easily. The efficiency of the middleware and the ease of integration are shown by coupling the middleware to a programming system that encompasses the object-oriented, functional, and concurrent-declarative programming paradigms. Our measurements show that the distribution middleware is competitive with the most popular distributed programming systems (Java RMI, .NET, IBM CORBA).
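A minimal sketch of the pluggable consistency-protocol idea follows: each kind of language entity is distributed by a protocol that preserves its semantics, and new protocols can be registered without touching the rest of the middleware. All class and method names below are hypothetical illustrations, not the middleware's actual API.

```python
# A minimal sketch of per-entity consistency protocols: each entity
# kind maps to a protocol preserving its semantics, and new protocols
# plug in via the registry. All names here are hypothetical.
from abc import ABC, abstractmethod

class ConsistencyProtocol(ABC):
    @abstractmethod
    def read(self, entity): ...
    @abstractmethod
    def write(self, entity, value): ...

class Immutable(ConsistencyProtocol):
    """Values never change: replicate freely, reads are always local."""
    def read(self, entity):
        return entity.local_copy            # hypothetical attribute
    def write(self, entity, value):
        raise TypeError("immutable entity cannot be written")

class Stationary(ConsistencyProtocol):
    """State lives on its home site; operations are forwarded there."""
    def read(self, entity):
        return entity.home_site.remote_call("read", entity.id)
    def write(self, entity, value):
        entity.home_site.remote_call("write", entity.id, value)

# Entity kind -> protocol; adding a new protocol touches only this map.
PROTOCOLS = {"record": Immutable(), "cell": Stationary(),
             "port": Stationary()}
```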
