391

Querying mediated web services

Sabesan, Manivasakan January 2007 (has links)
Web services provide a framework for data interchange between applications by building on standards such as XML Schema, WSDL, SOAP, and HTTP. They define operations that can be invoked over a network, and these operations are described publicly in a WSDL document together with the data types of their arguments and results. Searching data accessible via web services is essential in many applications. However, web services do not provide any general query language or view capabilities, so applications that access web service data currently have to be developed in a regular programming language such as Java or C#. The thesis provides an approach to simplify querying web service data and proposes efficient processing of database queries over views of wrapped web services. To show the effectiveness of the approach, a prototype, the webService MEDiator system (WSMED), was developed. WSMED provides general view and query capabilities over data accessible through web services by automatically extracting basic meta-data from WSDL descriptions. Based on the imported meta-data, the user can define views that extract data from the results of calls to web service operations. The views can be queried using SQL. A given view can access many different web service operations in different ways depending on which view attributes are known. The views can be specified in terms of several declarative queries to be applied by the query processor. In addition, the user can enrich the meta-data with key constraints, which enable efficient query execution over the views through automatic query transformations. We evaluated the effectiveness of the approach over multilevel views of existing web services and show that the key constraint enrichments substantially improve query performance.
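A minimal sketch of the mediator idea, in Python, assuming a stubbed web service call: a WSDL-described operation is wrapped as a relational table that can be queried with SQL. All names here (the operation, the table, the sample data) are hypothetical illustrations, not WSMED's actual interface.

```python
# Sketch: wrap a web service operation as a relational view, query it with SQL.
import sqlite3

def get_airports_by_country(country):
    # Stand-in for a SOAP call to an operation described in a WSDL document.
    sample = {"Sweden": [("ARN", "Stockholm"), ("GOT", "Gothenburg")]}
    return sample.get(country, [])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE airports (country TEXT, code TEXT, city TEXT)")

def populate_view(country):
    # Materialize the view for the bound attribute 'country'. A real mediator
    # would choose among several operations depending on which view attributes
    # are known, and use key constraints to avoid redundant calls.
    rows = [(country, code, city) for code, city in get_airports_by_country(country)]
    conn.executemany("INSERT INTO airports VALUES (?, ?, ?)", rows)

populate_view("Sweden")
for row in conn.execute("SELECT code, city FROM airports WHERE country = 'Sweden'"):
    print(row)
```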
392

Market and resource allocation algorithms with application to energy control

Carlsson, Per January 2001 (has links)
The energy markets of today are markets with rather few active participants. The participants are, with few exceptions, large producers and distributors, and the market mechanisms in use are constructed with this kind of market situation in mind. With an automatic or semi-automatic approach, the market mechanism would be able to incorporate a larger number of participants: smaller producers, and even consumers, could take an active part in the market. The gain is more efficient markets and, due to smaller fluctuations in demand, better resource usage from an environmental perspective. The energy markets of the Nordic countries (as well as many others) were deregulated during the last few years. The change has been radical and the situation is still rather new. We believe that the market can be made more efficient with the help of the dynamics of the small actors. The idealised world of economic theory often relies on assumptions such as continuous demand and supply curves. These assumptions are useful, and they do not introduce problems in the power market of today, with its relatively few, large participants. When consumers and small producers are introduced to the market, the situation is different: it is then a drawback if the market mechanism cannot handle discontinuous supply and demand. The growth in access to computational power and data communications that we have experienced in recent years (and are still experiencing) can be utilised when constructing mechanisms for the energy markets of tomorrow. In this thesis we suggest a new market mechanism, ConFAst, that exploits this technological progress to make it possible to incorporate a large number of active participants in the market. The mechanism does not rely on the assumptions above. The gain is a more efficient market with smaller fluctuations in demand over the day. To make this possible, efficient algorithms are needed; in particular, the mechanism relies on an efficient aggregation algorithm. An algorithm for aggregation of objective functions is part of this thesis. The algorithm handles maximisation with nonconcave, even noisy, objective functions. Experimental results show that the approach, in practically relevant cases, is significantly faster than the standard algorithm.
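The aggregation step can be illustrated with a small sketch, assuming utilities tabulated on a discrete grid of allocation levels. The max-plus convolution below handles nonconcave, even noisy, objective functions; it is a quadratic baseline for intuition, not ConFAst's own, more efficient, algorithm.

```python
# Aggregate two objective functions: (f (+) g)(x) = max over y of f(y) + g(x - y),
# where f[y] is the value of giving y units to the first participant.
def aggregate(f, g):
    n, m = len(f), len(g)
    h = [float("-inf")] * (n + m - 1)
    split = [0] * (n + m - 1)          # remember the maximizing split
    for x in range(n + m - 1):
        for y in range(max(0, x - m + 1), min(n, x + 1)):
            v = f[y] + g[x - y]
            if v > h[x]:
                h[x], split[x] = v, y
    return h, split

# Two nonconcave utility functions over 0..3 units of a resource.
f = [0, 5, 6, 10]
g = [0, 2, 9, 9]
h, split = aggregate(f, g)
print(h)       # aggregated objective for 0..6 units
print(split)   # optimal number of units given to the first participant
```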
393

New Strategies for Ensuring Time and Value Correctness in Dependable Real-Time Systems

Aysan, Hüseyin January 2009 (has links)
Dependable real-time embedded systems are typically composed of a number of heterogeneous computing nodes, heterogeneous networks that connect them, and tasks with multiple criticality levels allocated to the nodes. The heterogeneous nature of the hardware results in varying vulnerability to different types of hardware failures. For example, a computing node with effective shielding shows higher resistance to transient failures caused by environmental conditions, such as radiation or temperature changes, than an unshielded node. Similarly, resistance to permanent failures can vary depending on the manufacturing procedures used. A task's vulnerability to the different types of errors that may lead to a system failure depends on several factors, such as the hardware on which the task runs and communicates, the software architecture, and the implementation quality of the software, and it varies from task to task. This variance, as well as the different criticality levels and real-time requirements of tasks, necessitates novel fault-tolerance approaches in order to meet the stringent dependability requirements of resource-constrained real-time systems. In this thesis, the major contribution is four-fold. Firstly, we describe an error classification for real-time embedded systems and address error propagation aspects. The goal of this work is to analyse a given system in order to find bottlenecks in satisfying dependability requirements and to provide guidelines on the use of appropriate error detection and fault tolerance mechanisms. Secondly, we present a time-redundancy approach that provides a priori guarantees in fixed-priority scheduling (FPS) that the system will be able to tolerate one value error per critical task instance, by re-executing every critical task instance or executing alternate tasks before deadlines, while keeping the associated costs minimised. Our third contribution is a new approach, Voting on Time and Value (VTV), which extends the N-modular redundancy approach by explicitly considering both value and timing errors, such that a correct value is produced at a correct time under specified assumptions. We illustrate our voting approach by instantiating it in the context of the well-known triple modular redundancy (TMR) approach. Further, we present a generalized voting algorithm targeting NMR that enables a high degree of customization from the user perspective. Finally, we propose a novel cascading redundancy approach within a generic fault-tolerant scheduling framework. The proposed approach is capable of tolerating errors with wider coverage (with respect to error frequency and error types) than our time and space redundancy approaches in isolation, allows tasks with mixed criticality levels, is independent of the scheduling technique and, above all, ensures that every critical task instance can be feasibly replicated in time and/or space. The fault-tolerance techniques presented in this thesis address various error scenarios that can be observed in real-time embedded systems, with respect to both the types of errors and their frequency of occurrence, and can be used to achieve the ultra-high levels of dependability required in many critical systems. / PROGRESS
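A minimal sketch of the VTV idea instantiated for TMR, assuming each replica delivers a (value, timestamp) pair: a result is accepted only if a majority agrees on the value and those votes arrive within the timing window. The thresholds and representation are illustrative assumptions.

```python
# Voting on Time and Value for triple modular redundancy: a result must be
# both value-correct (majority agreement) and time-correct (within the window).
from collections import Counter

def vtv_tmr(outputs, t_min, t_max):
    """outputs: list of (value, timestamp) pairs from three replicas."""
    timely = [(v, t) for v, t in outputs if t_min <= t <= t_max]
    counts = Counter(v for v, _ in timely)
    if counts:
        value, votes = counts.most_common(1)[0]
        if votes >= 2:                      # majority of 3 replicas
            return value
    return None                             # detected value/timing error

# One replica is late; the remaining two agree in time, so the vote succeeds.
print(vtv_tmr([(42, 3.0), (42, 3.2), (42, 9.9)], t_min=0.0, t_max=5.0))
# All three replicas disagree in value: no majority, an error is signalled.
print(vtv_tmr([(42, 3.0), (17, 3.2), (99, 3.3)], t_min=0.0, t_max=5.0))
```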
394

Personal service environments : Openness and user control in user-service interaction

Bylund, Markus January 2001 (has links)
This thesis describes my work on making the whole experience of using electronic services more pleasant and practical. More and more people use electronic services in their daily life — be it services for communicating with colleagues or family members, web-based bookstores, or network-based games for entertainment. However, electronic services in general are more difficult to use than they would have to be. They are limited in how and when users can access them. Services do not collaborate despite obvious advantages to their users, and they put the integrity and privacy of their users at risk. In this thesis, I argue that there are structural reasons for these problems rather than problems with content or the technology per se. The focus when designing electronic services tends to be on the service providers or on the artifacts that are used for accessing the services. I present an approach that focuses on the user instead, based on the concept of personal service environments. These provide a mobile locale for storing and running the electronic services of individual users. This gives the user increased control over which services to use, from where they can be accessed, and what personal information the services gather. The concept allows, and encourages, service collaboration, but not without letting the user maintain control over the process. Finally, personal service environments allow continuous usage of services while switching between interaction devices and moving between places. The sView system, which is also described, implements personal service environments and serves as an example of how the concept can be realized. The system consists of two parts. The first part is a specification of how both services for sView and infrastructure for handling these services should be developed. The second part is a reference implementation of the specification, which includes sample services that add to and demonstrate the functionality of sView.
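As a rough illustration of the concept (not the sView API), the sketch below models a user-owned container that gates services' access to personal data and can follow the user between devices; all class and method names are hypothetical.

```python
# Toy personal service environment: services run inside a user-owned container
# that controls what personal data each service may read.
class PersonalServiceEnvironment:
    def __init__(self, user):
        self.user = user
        self.services = {}
        self.profile = {}              # personal data stays with the user
        self.grants = {}               # service name -> set of readable keys

    def install(self, name, service, allowed_keys):
        self.services[name] = service
        self.grants[name] = set(allowed_keys)

    def query_profile(self, name, key):
        # Services see personal data only through this gate.
        if key not in self.grants.get(name, set()):
            raise PermissionError(f"{name} may not read {key!r}")
        return self.profile.get(key)

    def migrate(self, device):
        # Moving to a new device carries services and data along, so
        # interaction can continue where it left off.
        print(f"{self.user}'s environment now runs on {device}")

pse = PersonalServiceEnvironment("markus")
pse.profile["email"] = "user@example.org"
pse.install("bookstore", object(), allowed_keys=["email"])
print(pse.query_profile("bookstore", "email"))
pse.migrate("phone")
```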
395

Market based programming and resource allocation

Karlsson, Maria January 2003 (has links)
The subject of this thesis is the concept of market-oriented programming and market protocols. We want to solve an allocation problem in which a set of resources is to be divided among a number of agents. Each agent has a utility function specifying how much the current allocation is worth to it. The goal is to allocate the resources among the agents in a way that maximizes the sum of the utilities of all agents. To solve this problem we use the concept of markets to create mechanisms suitable for computational implementation. To achieve the advantages of market-oriented programming, we have to treat the conceptual view of the problem as a main design issue. We want to investigate the possibility of building computationally efficient mechanisms that maintain the intuitive, easy-to-understand structure of market-based approaches. In the first paper we look at two examples from the literature and show that conceptual improvements of the approaches make agent behavior more realistic. This also makes the examples fit into a more general theory. In the second paper we create a market mechanism for handling combinatorial markets. The mechanism includes an auction in which each iteration runs in polynomial time. The mechanism shows good performance when the number of resources is relatively small compared to the number of agents.
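The underlying allocation problem can be stated compactly; the sketch below solves a tiny instance by dynamic programming over discrete resource units, which stays correct even for non-concave utilities. It is a brute-force baseline for intuition, not the thesis's market mechanism.

```python
# Allocate k identical resource units among agents to maximize total utility.
def allocate(utilities, k):
    """utilities[i][x] = agent i's utility for receiving x units."""
    # best[used] = (total utility, per-agent allocation) for the agents seen so far
    best = {0: (0.0, [])}
    for u in utilities:
        new = {}
        for used, (val, alloc) in best.items():
            for x in range(len(u)):
                if used + x <= k:
                    cand = (val + u[x], alloc + [x])
                    if used + x not in new or cand[0] > new[used + x][0]:
                        new[used + x] = cand
        best = new
    return max(best.values())

utilities = [
    [0, 4, 5, 5],    # agent 0's utility for 0..3 units
    [0, 3, 7, 8],    # agent 1 (note the non-concave jump at 2 units)
]
print(allocate(utilities, 3))   # -> (11.0, [1, 2])
```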
396

Bootstrapping and decentralizing recommender systems

Olsson, Tomas January 2003 (has links)
This thesis consists of three papers on recommender systems. The first paper addresses the problem of making decentralized recommendations using a peer-to-peer architecture. Collaborating recommender agents are connected into a network of neighbors that exchange user recommendations to find new items to recommend, and we achieved performance comparable to that of a centralized system. The second paper deals with the bootstrapping problem for centralized recommender systems. Most recommender systems learn from experience, but initially there are no users and no rated items to learn from. To deal with this problem we developed the Genesis method, which bootstraps a recommender system with artificial user profiles sampled from a probabilistic model built from prior knowledge. The paper describes how this was done for a k-nearest-neighbor recommender algorithm in the movie domain. We were able to improve the performance of the k-nearest-neighbor algorithm for single predictions, but not for rank-ordered lists of recommendations. The third paper identifies a new domain for recommendations – product configuration – where new recommender algorithms are needed. A system that helps users configure their own PCs is described. Recommendations and a cluster-based help system, together with a rule-based configurator, assist the users in selecting appropriate features or complete PC configurations. The configurator ensures that users cannot choose incompatible components, while the recommender system adds social information based on what other users have chosen. This introduces new complexity into the recommendation process: how to combine the recommendations from the configurator and the recommender system. The paper proposes (1) three new recommender algorithms for making recommendations in the domain of product configuration, (2) a method for adding social recommendations to a rule-based configurator, and (3) a method for applying the Genesis method in this domain. In this case the Genesis method is implemented by a Bayesian belief net that captures the designers' prior knowledge of how to configure PCs. Instances of complete configurations are then sampled from the model and added to the recommender algorithm.
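A minimal sketch of the Genesis idea, with a toy genre model standing in for the Bayesian belief net used in the thesis: artificial profiles are sampled from prior knowledge and fed to a k-nearest-neighbor predictor, so a brand-new user with a single rating already gets predictions. Movie names and rating scales are invented for illustration.

```python
# Bootstrap a k-NN recommender with artificial profiles sampled from a prior.
import random

random.seed(0)
MOVIES = {"Alien": "scifi", "Solaris": "scifi", "Heat": "action", "Ronin": "action"}

def sample_profile():
    # Prior knowledge: a user likes one genre and rates its movies high.
    liked = random.choice(["scifi", "action"])
    return {m: (random.choice([4, 5]) if g == liked else random.choice([1, 2]))
            for m, g in MOVIES.items()}

artificial_users = [sample_profile() for _ in range(50)]

def predict(target, movie, k=5):
    # k-NN over the artificial profiles: closest neighbours on commonly rated movies.
    def dist(u):
        common = [m for m in target if m in u and m != movie]
        return sum((target[m] - u[m]) ** 2 for m in common)
    neighbours = sorted((u for u in artificial_users if movie in u), key=dist)[:k]
    return sum(u[movie] for u in neighbours) / len(neighbours)

# A brand-new real user who rated only one movie still gets a prediction.
print(predict({"Alien": 5}, "Solaris"))
```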
397

High-level modelling and local search

Ågren, Magnus January 2005 (has links)
Combinatorial optimisation problems are ubiquitous in our society and appear in such varied guises as DNA sequencing, scheduling, configuration, airline-crew and nurse rostering, combinatorial auctions, vehicle routing, and financial portfolio design. Their efficient solution is crucial to many people and has been the target of much research during the last decades. One successful area of research for solving such problems is constraint programming. Yet current-generation constraint programming languages are considered by many, especially in industry, to be too low-level, difficult, and large. In this thesis, we argue that solver-independent, high-level relational constraint modelling leads to a simpler and smaller language, to more concise, intuitive, and analysable models, and to more efficient and effective model formulation, maintenance, reformulation, and verification. All this can be achieved without sacrificing the possibility of efficient solving, so that even time-pressed modellers can be well assisted. Towards this, we propose the ESRA relational constraint modelling language, showcase its elegance on some real-life problems, and outline a compilation philosophy for such languages. In order to compile high-level languages such as ESRA to current-generation constraint programming languages, it is essential that as much support as possible is available in the target languages. This is already the case in the constructive search area of constraint programming, where different kinds of domain variables, such as integer variables and set variables, and expressive global constraints are readily available. In the local search area of constraint programming, however, this is not yet the case: until now, for example, set variables were not available. This thesis introduces set variables and set constraints in the local search area of constraint programming and thereby considerably improves the possibilities for using local search, both for modelling and solving problems with constraint-based local search and as a target for the compilation of ESRA models. Indeed, many combinatorial optimisation problems have natural models based on set variables and set constraints, three of which are successfully solved in this thesis. When a new set constraint is introduced in local search, much effort must be spent on the design and implementation of an appropriate incremental penalty function for the constraint. This thesis introduces a scheme that, from a high-level description of a set constraint in existential second-order logic with counting, automatically synthesises an incremental penalty function for that constraint. The performance of this scheme is demonstrated by solving real-life instances of a financial portfolio design problem that seem unsolvable in reasonable time by constructive search.
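As a hand-written example of what the synthesis scheme produces automatically, the sketch below maintains an incremental penalty for a partition constraint over set variables: the penalty counts elements missing from every set plus elements occurring in more than one set, and each move updates it in constant time. The constraint choice and representation are illustrative assumptions.

```python
# Incremental penalty for Partition(S1..Sn) over a universe U:
# every element of U must occur in exactly one of the sets.
class PartitionPenalty:
    def __init__(self, sets, universe):
        self.sets = sets
        self.count = {e: sum(e in s for s in sets) for e in universe}
        self.penalty = sum(abs(c - 1) for c in self.count.values())

    def add(self, e, i):
        # O(1) update on an 'add' move, instead of recomputing from scratch.
        if e not in self.sets[i]:
            self.penalty -= abs(self.count[e] - 1)
            self.sets[i].add(e)
            self.count[e] += 1
            self.penalty += abs(self.count[e] - 1)

    def drop(self, e, i):
        if e in self.sets[i]:
            self.penalty -= abs(self.count[e] - 1)
            self.sets[i].remove(e)
            self.count[e] -= 1
            self.penalty += abs(self.count[e] - 1)

sets = [set(), set()]
p = PartitionPenalty(sets, universe={1, 2, 3})
print(p.penalty)     # 3: every element is missing
p.add(1, 0); p.add(2, 1); p.add(2, 0)
print(p.penalty)     # 2: element 3 is missing, element 2 is duplicated
```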
398

Ad hoc routing protocol validation

Wibling, Oskar January 2005 (has links)
We explore and evaluate methods for validating ad hoc routing protocols, which are used to set up forwarding paths in spontaneous networks of mobile devices. Our focus is automatic formal verification, but we also give an initial account of a protocol performance comparison using structured live testing. State explosion is a major problem in the algorithmic verification of systems with concurrently executing components; we comment on some of the best reduction methods and their applicability to our work. For reactively operating ad hoc routing protocols we provide a broadcast abstraction that enables us to prove correct operation of the Lightweight Underlay Network Ad hoc Routing protocol (LUNAR) in scenarios of realistic size. Our live testing effort has thus far uncovered significant problems in the interoperation between TCP and ad hoc routing protocols: severe performance degradation occurs in networks of just a few nodes when even very little mobility is introduced. This indicates that verification for smaller network sizes is also interesting, since those configurations may be the only useful ones for certain types of applications.
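A toy sketch of the verification approach: explicit-state reachability analysis over a drastically simplified route-setup model. The real work model-checks LUNAR, with a broadcast abstraction keeping the state space tractable; the states and transitions below are invented for illustration.

```python
# Explicit-state reachability analysis of a caricature route-setup protocol.
from collections import deque

def transitions(state):
    # state = (request_sent, route_known)
    sent, known = state
    if not sent:
        yield (True, known)          # node broadcasts a route request
    if sent and not known:
        yield (sent, True)           # a reply establishes the route
        yield (False, False)         # ... or the request is lost

def reachable(init):
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        for t in transitions(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

# Correct operation here: a state with an established route is reachable.
print((True, True) in reachable((False, False)))
```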
399

Model-based test case selection and generation for real-time systems

Hessel, Anders January 2006 (has links)
Testing is the dominating verification technique used in industry today, and many man-hours and resources are invested in the testing of software products. To cut the cost of testing, automated test execution is becoming more and more popular. However, the selection of which tests to execute is still mainly a manual process that is error-prone and often lacks sufficient guarantees that the system will be systematically tested. A way to achieve systematic testing is to ensure that the tests satisfy a required coverage criterion. In this thesis two main problems are addressed: how to formally specify coverage criteria, and how to generate a test suite from a formal system model such that the test suite satisfies a given coverage criterion. We also address the problem of generating an optimal test suite, e.g., with respect to the total time required to execute it. Our approach is to convert the test case generation problem into a reachability problem. We observe that a coverage criterion consists of a set of items, which we call coverage items, and that generating a test case for each coverage item can be treated as a separate reachability problem. We use an on-the-fly method, in which coverage information is added to the states of the analyzed system model, to solve the reachability problem of a coverage item. The coverage information is used to select a set of test cases that together satisfy all the coverage items, and thus the full coverage criterion. Based on this view of coverage items we define a language, in the form of parameterized observer automata, to formally describe coverage criteria, and we show that the language is expressive enough to describe a variety of common coverage criteria from the literature. Two different ways to generate test suites from a system model and a given coverage criterion are presented. The first is based on model annotations and uses the model checker Uppaal. The second, in which no annotations are needed, is a modified reachability analysis algorithm implemented in an extended version of the Uppaal tool.
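A small sketch of the on-the-fly method, assuming a toy transition system and edge coverage as the criterion (standing in for timed-automata models and observer automata): each explored state is augmented with the coverage items collected along its trace, and a test suite is then selected to cover all items.

```python
# Test generation as reachability: explore (location, covered-items) pairs.
from collections import deque

EDGES = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

def generate_tests(init="a"):
    start = (init, frozenset())
    queue, traces = deque([(start, [init])]), []
    seen = {start}
    while queue:
        (loc, cov), trace = queue.popleft()
        if not EDGES[loc]:                       # end of a test case
            traces.append((cov, trace))
        for nxt in EDGES[loc]:
            state = (nxt, cov | {(loc, nxt)})    # every edge is a coverage item
            if state not in seen:
                seen.add(state)
                queue.append((state, trace + [nxt]))
    # Greedy selection of a test suite that satisfies the full criterion.
    goal = {(u, v) for u in EDGES for v in EDGES[u]}
    suite, covered = [], set()
    for cov, trace in sorted(traces, key=lambda t: -len(t[0])):
        if cov - covered:
            suite.append(trace)
            covered |= cov
    return suite if covered == goal else None

print(generate_tests())   # e.g. [['a', 'b', 'd'], ['a', 'c', 'd']]
```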
400

Efficient memory management for message-passing concurrency, Part I : Single-threaded execution

Wilhelmsson, Jesper January 2005 (has links)
Manual memory management is error-prone, and some of the errors it causes, in particular memory leaks and dangling pointers, are hard to find. Manual memory management becomes even harder when concurrency enters the picture. It therefore becomes increasingly important to overcome the problems of manual memory management in concurrent software, as interest in these applications grows with the development of new multi-threaded hardware. To ease the implementation of concurrent software, many programming languages these days come with automatic memory management and support for concurrency. This support, called the concurrency model of the language, comes in many flavors (shared data structures, message passing, etc.). The performance and scalability of applications implemented in such languages depend on the concurrency model, the memory architecture, and the memory manager used by the language. It is therefore important to investigate how different memory architectures and memory management schemes affect the implementation of concurrent software, and what performance trade-offs are involved. This thesis investigates ways of efficiently implementing the memory architecture and memory manager in a concurrent programming language. The concurrency model we explore is based on message passing with copying semantics. We start by presenting the implementation of three different memory architectures for concurrent software, with a detailed characterization of their pros and cons regarding message passing and efficiency in terms of memory management. The issue examined is whether to use private memory for all processes in a program or whether there may be benefits in sharing all or parts of the memory. In one of the architectures, called hybrid, a static analysis called message analysis is used to guide the allocation of message data. Because the hybrid architecture is the enabling technology for a scalable multi-threaded implementation, we then focus on it and investigate how to manage its memory using different garbage collection techniques. We present the pros and cons of these techniques and discuss their characteristics and performance in concurrent applications. Finally, our experiences from making the garbage collector incremental are presented, and the effectiveness of the incremental collector is compared to the non-incremental version. On a wide range of benchmarks, the incremental collector we present is able to sustain high mutator utilization (about 80% during collection cycles) at a low cost. This work is implemented in an industrial-strength implementation of the concurrent functional programming language Erlang. Our eventual goal is to use the hybrid architecture and the incremental garbage collector as the basis for an efficient multi-threaded implementation of Erlang. The work described in this thesis is a big step in that direction.
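The architectural trade-off can be sketched as follows, with illustrative classes that are in no way Erlang's implementation: private heaps give cheap local GC but copy every message, while a shared message area makes sends constant-time at the price of a more careful (e.g., incremental) collector for the shared region.

```python
# Private per-process heaps vs a shared message area, in miniature.
import copy

class PrivateHeapProcess:
    def __init__(self):
        self.heap = []                    # process-local data
        self.mailbox = []

    def send(self, receiver, msg):
        # Copying semantics: O(size of msg) per send, but GC stays local.
        receiver.mailbox.append(copy.deepcopy(msg))

class SharedAreaProcess:
    shared_area = []                      # one message area for all processes

    def __init__(self):
        self.mailbox = []

    def send(self, receiver, msg):
        # Allocate the message once; the receiver gets a reference (here, an
        # index). Sends are O(1), but the shared area needs global collection.
        SharedAreaProcess.shared_area.append(msg)
        receiver.mailbox.append(len(SharedAreaProcess.shared_area) - 1)

p, q = PrivateHeapProcess(), PrivateHeapProcess()
p.send(q, {"data": list(range(5))})
print(q.mailbox[0])

r, s = SharedAreaProcess(), SharedAreaProcess()
r.send(s, {"data": list(range(5))})
print(SharedAreaProcess.shared_area[s.mailbox[0]])
```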
