About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

An intelligent approach to design tasks

Prasad, Naga P 08 1900
2

RA: A memory organization to model the evolution of scientific knowledge

Swaminathan, Kishore S 01 January 1990
This dissertation addresses the dichotomy between semantic and episodic knowledge by focusing on the evolution of scientific knowledge. Even timeless scientific knowledge about the nature of the world accrues only through discrete episodes, with each scientist building upon the work of his/her predecessors. Hence, a memory organization to model the knowledge of a scientific field should reflect not only the knowledge pertaining to the field, but also the knowledge pertaining to the evolution of the field. A computer program called RA is described: RA proposes a memory organization for scientific knowledge in terms of a representational idea called Research Schemas. Research Schemas view research papers not as isolated pieces of text, but as related episodes that contribute to the growth of a scientific discipline. This memory organization is validated by showing that it supports a number of different capabilities: it enables RA to suggest new research directions, acquire new research schemas, retrieve papers that have similar research strategies, and generate both chronological and analogical summaries of research papers. A combination of these capabilities constitutes a framework for 'Computer-Aided Research.' The RA system also includes a learning technique to acquire new research schemas. While similarity-based techniques use multiple examples (and some form of encoded bias) and explanation-based techniques use a domain theory as the basis for generalization, there is no apparent basis for RA's generalization. An analysis of RA's learning strategy shows that the category structure of RA's world provides a basis for its generalization: RA generalizes instantiations into categories that are both associative and discriminative. Interestingly, this turns out to be precisely the property that characterizes the basic-level categories studied by psychologists. This dissertation explores the implications of this result for learning and knowledge representation.
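To make the idea concrete, the following is a purely illustrative Python sketch (RA itself was not written in Python, and every field name here is invented): papers are recorded as episodes that build on predecessors, and a schema groups episodes sharing a research strategy, which is what enables retrieval of papers with similar strategies.

    # Illustrative only: invented names, not RA's actual representation.
    from dataclasses import dataclass, field

    @dataclass
    class Paper:
        title: str
        year: int
        strategy: str                                  # e.g. "relax-an-assumption"
        builds_on: list = field(default_factory=list)  # predecessor Papers (episodes)

    @dataclass
    class ResearchSchema:
        name: str                                      # a recurring research strategy
        episodes: list = field(default_factory=list)   # Papers instantiating the schema

        def similar_strategy(self, paper: Paper) -> list:
            # One capability the abstract lists: retrieve papers whose
            # research strategy matches that of a given paper.
            return [p for p in self.episodes if p.strategy == paper.strategy]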
3

Self-management for large-scale distributed systems

Al-Shishtawy, Ahmad January 2012
Autonomic computing aims at making computing systems self-managing by using autonomic managers in order to reduce obstacles caused by management complexity. This thesis presents results of research on self-management for large-scale distributed systems, motivated by the increasing complexity of computing systems and their management.

In the first part, we present our platform, called Niche, for programming self-managing component-based distributed applications. In our work on Niche, we have faced and addressed four challenges in achieving self-management in a dynamic environment characterized by volatile resources and high churn: resource discovery, robust and efficient sensing and actuation, management bottlenecks, and scale. Niche implements the autonomic computing architecture proposed by IBM in a fully decentralized way. It supports a network-transparent view of the system architecture, which simplifies the design of distributed self-management, and it provides a concise and expressive API for self-management. The implementation of the platform relies on the scalability and robustness of structured overlay networks. We then present a methodology for designing the management part of a distributed self-managing application, with design steps that include partitioning of management functions and orchestration of multiple autonomic managers.

In the second part, we discuss robustness of management and data consistency, both necessary in a distributed system. Dealing with the effect of churn on management increases the complexity of the management logic and thus makes its development time-consuming and error-prone. We propose the abstraction of Robust Management Elements, which are able to heal themselves under continuous churn. Our approach replicates a management element using finite state machine replication with a reconfigurable replica set, and our algorithm automates the reconfiguration (migration) of the replica set in order to tolerate continuous churn. For data consistency, we propose a majority-based distributed key-value store, built on a peer-to-peer network, that supports multiple consistency levels and enables a trade-off between high availability and data consistency. Using majority quorums avoids the potential drawbacks of master-based consistency control, namely a single point of failure and a potential performance bottleneck.

In the third part, we investigate self-management for Cloud-based storage systems, focusing on elasticity control using elements of control theory and machine learning. We have studied a number of elasticity-controller designs, including a state-space feedback controller and a controller that combines feedback and feedforward control. We describe our experience in designing an elasticity controller for a Cloud-based key-value store using a state-space model that enables trading off performance for cost, and we present the design and evaluation of ElastMan, an elasticity controller for Cloud-based elastic key-value stores that combines feedforward and feedback control.
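The combination of feedforward and feedback control can be illustrated with a minimal sketch, assuming a simple linear capacity model and a proportional feedback gain (both invented here; ElastMan's actual models and tuning are not given in the abstract):

    # Minimal sketch, not the ElastMan implementation: an elasticity
    # controller combining a feedforward model with feedback correction.

    class ElasticityController:
        def __init__(self, target_latency_ms, gain=0.5):
            self.target = target_latency_ms
            self.gain = gain  # proportional feedback gain (assumed tuning)

        def feedforward(self, req_per_sec):
            # Assumed capacity model: each node serves ~1000 req/s at the
            # target latency; a real controller would learn this model.
            return max(1, round(req_per_sec / 1000.0))

        def feedback(self, measured_latency_ms, current_nodes):
            # Proportional correction on the relative latency error.
            error = (measured_latency_ms - self.target) / self.target
            return round(self.gain * error * current_nodes)

        def decide(self, req_per_sec, measured_latency_ms, current_nodes):
            ff = self.feedforward(req_per_sec)
            fb = self.feedback(measured_latency_ms, current_nodes)
            return max(1, ff + fb)  # number of storage nodes to run

The feedforward path reacts quickly to workload changes through the model, while the feedback path compensates for model error using the measured latency.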
4

Semantic annotation and summarization of biomedical text

Reeve, Lawrence H.; Han, Hyoil. January 2007
Thesis (Ph.D.)--Drexel University, 2007. Includes abstract and vita. Includes bibliographical references (leaves 196-207).
5

A framework for reasoning about Erlang code

Fredlund, Lars-Åke January 2001
We present a framework for formal reasoning about the behaviour of software written in Erlang, a functional programming language with prominent support for process-based concurrency, message-passing communication and distribution. The framework contains the following key ingredients: a specification language based on the mu-calculus and first-order predicate logic, a hierarchical small-step structural operational semantics of Erlang, a judgement format allowing parameterised behavioural assertions, and a Gentzen-style proof system for proving validity of such assertions. The proof system supports property decomposition through a cut rule and handles program recursion through well-founded induction. An implementation is available in the form of a proof assistant tool for checking the correctness of proof steps. The tool offers support for automatic proof discovery through higher-level rules tailored to Erlang, as illustrated in several case studies. / Trita-IT. AVH ; 01:04, URI: urn:nbn:se:kth:diva-3210
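As a hypothetical illustration of the kind of property such a specification language can express (this formula is not taken from the thesis), a mu-calculus assertion that every received request p is eventually answered can be written as:

    \nu X.\; [\mathit{recv}(p)]\,\big(\mu Y.\; \langle \mathit{reply}(p)\rangle \mathbf{tt} \;\lor\; \langle - \rangle Y\big) \;\land\; [-]\,X

Here the greatest fixed point \nu expresses the invariant, the least fixed point \mu the eventuality, and the box/diamond modalities quantify over all/some next actions.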
6

Network overload avoidance by traffic engineering and content caching

Abrahamsson, Henrik January 2012
The Internet traffic volume continues to grow at a great rate, now driven by video and TV distribution. For network operators it is important to avoid congestion in the network and to meet service level agreements with their customers. This thesis presents work on two methods operators can use to reduce link loads in their networks: traffic engineering and content caching.

The thesis studies access patterns for TV and video and the potential for caching. The investigation uses both simulation and analysis of logs from a large TV-on-Demand system over four months. The results show that a small set of programs accounts for a large fraction of the requests and that a comparatively small local cache can significantly reduce the peak link loads during prime time. The investigation also demonstrates how the popularity of programs changes over time and shows that the access pattern in a TV-on-Demand system depends heavily on the content type.

For traffic engineering, the objective is to avoid congestion in the network and to make better use of available resources by adapting the routing to the current traffic situation. The main challenge for traffic engineering in IP networks is to cope with the dynamics of Internet traffic demands. This thesis proposes L-balanced routings, which route the traffic on the shortest paths possible while making sure that no link is utilised to more than a given level L. L-balanced routing gives efficient routing of traffic and controlled spare capacity to handle unpredictable changes in traffic. We present an L-balanced routing algorithm and a heuristic search method for finding L-balanced weight settings for the legacy routing protocols OSPF and IS-IS, and we show that the search and the resulting weight settings work well in real network scenarios.
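The L-balance condition and the weight search can be sketched as follows (a toy hill-climb assuming the networkx library; the thesis's actual heuristic, and OSPF's splitting over equal-cost paths, are not reproduced here):

    # Sketch: find OSPF-style link weights such that demands routed on
    # shortest paths keep every link utilization <= L.
    import random
    import networkx as nx

    def link_utilizations(G, weights, demands):
        """Route each (src, dst) demand on one shortest path under `weights`
        and return utilization = load / capacity for every directed link."""
        load = {e: 0.0 for e in G.edges}
        for (s, t), volume in demands.items():
            path = nx.shortest_path(G, s, t,
                                    weight=lambda u, v, d: weights[(u, v)])
            for u, v in zip(path, path[1:]):
                load[(u, v)] += volume
        return {e: load[e] / G.edges[e]["capacity"] for e in G.edges}

    def is_L_balanced(utils, L):
        return max(utils.values()) <= L

    def search_weights(G, demands, L, iters=500, seed=0):
        """Toy local search: perturb one link weight at a time and keep the
        change if the worst-case utilization does not get worse."""
        rnd = random.Random(seed)
        weights = {e: 1 for e in G.edges}
        best = max(link_utilizations(G, weights, demands).values())
        for _ in range(iters):
            e = rnd.choice(list(G.edges))
            old, weights[e] = weights[e], rnd.randint(1, 20)
            worst = max(link_utilizations(G, weights, demands).values())
            if worst <= best:
                best = worst
            else:
                weights[e] = old            # revert the perturbation
            if best <= L:
                break                       # an L-balanced setting was found
        return weights, best

Here G is a networkx DiGraph whose edges carry a "capacity" attribute, and demands maps (source, destination) pairs to traffic volumes; each demand is routed on a single shortest path for brevity.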
7

Coding for Improved Perceived Quality of 2D and 3D Video over Heterogeneous Networks

Karlsson, Linda Sofia January 2010
The rapid development of video applications for TV, the internet and mobile phones is being taken one step further in 2010 with the introduction of stereo 3D TV. The 3D experience can be further improved using multiple views in the visualization. The transmission of 2D and 3D video at a sufficiently high perceived quality is a challenge considering the diversity in content, the resources of the network and the end-users.

Two problems are addressed in this thesis. The first is how to improve the perceived quality for an application with a limited bit rate. The second is how to ensure the best perceived quality for all end-users in a heterogeneous network.

A solution to the first problem is region-of-interest (ROI) video coding, which adapts the coding to provide better quality in regions of interest to the viewer. A spatio-temporal filter is proposed to provide codec- and standard-independent ROI video coding. The filter reduces the number of bits necessary to encode the background and successfully re-allocates these bits to the ROI. The temporal part of the filter reduces the complexity compared to using only a spatial filter. Adaptation to the requirements of the transmission channel is possible by controlling the standard deviation of the filter. The filter has also been successfully applied to 3D video in the form of 2D-plus-depth, where the depth data was used in the detection of the ROI.

The second problem can be solved by providing a video sequence that has the best overall quality, that is, the best quality for each part of the network and for each 2D and 3D visualization system over time. Scalable video coding enables the extraction of parts of the data to adapt to the requirements of the network and the end-user. A scheme is proposed in this thesis that provides scalability in the depth and view domains of multi-view plus depth video. The data are divided into enhancement layers depending on the content's distance to the camera. Schemes to divide the data into layers within a view and between adjacent views have been analyzed. The quality evaluation indicates that the position of the layers in depth as well as the number of layers should be determined by analyzing the depth distribution. The front-most layers in adjacent views should be given priority over the others unless the application requires a high quality of the center views. / Medi3D / 3D-reklam / MediaSense
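A minimal sketch of such a codec-independent pre-filter, assuming numpy/scipy and a Gaussian spatial low-pass (the abstract does not specify the exact filter design):

    # Sketch only: pixels outside the ROI are low-pass filtered spatially
    # (Gaussian, strength sigma) and temporally (blend with the previous
    # filtered frame), so the encoder spends fewer bits on the background.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def roi_prefilter(frame, prev_filtered, roi_mask, sigma=3.0, alpha=0.5):
        """frame: HxW luma array; roi_mask: boolean HxW, True inside the ROI.
        sigma tunes the bit-rate reduction; alpha is the temporal blend."""
        background = gaussian_filter(frame.astype(np.float32), sigma)
        if prev_filtered is not None:
            # Temporal part: a cheap IIR low-pass over the background.
            background = alpha * background + (1 - alpha) * prev_filtered
        return np.where(roi_mask, frame.astype(np.float32), background)

Increasing sigma blurs the background more aggressively, freeing bits for the ROI, which matches the abstract's point that the channel requirements can be met by controlling the filter's standard deviation.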
8

Making reliable distributed systems in the presence of software errors

Armstrong, Joe January 2003
The work described in this thesis is the result of a research program started in 1981 to find better ways of programming Telecom applications. These applications are large programs which, despite careful testing, will probably contain many errors when put into service. We assume that such programs do contain errors and investigate methods for building reliable systems despite them. The research has resulted in the development of a new programming language (called Erlang), together with a design methodology and a set of libraries for building robust systems (called OTP). At the time of writing, the technology described here is used in a number of major Ericsson and Nortel products, and a number of small companies have been formed to exploit it.

The central problem addressed by this thesis is that of constructing reliable systems from programs which may themselves contain errors. Constructing such systems imposes a number of requirements on any programming language used for the construction. I discuss these language requirements and show how they are satisfied by Erlang. Problems can be solved in a programming language or in the standard libraries which accompany the language. I show how certain of the requirements necessary to build a fault-tolerant system are met in the language, and others in the standard libraries. Together these form a basis for building fault-tolerant software systems.

No theory is complete without proof that the ideas work in practice. To demonstrate that these ideas work in practice I present a number of case studies of large, commercially successful products which use this technology. At the time of writing the largest of these is a major Ericsson product, the AXD301, with over a million lines of Erlang code; it is thought to be one of the most reliable products ever made by Ericsson. Finally, I ask whether the goal of finding better ways to program Telecom applications was fulfilled, and point to areas where I think the system could be improved.
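The "let it crash" approach that Erlang/OTP supervision embodies can be caricatured outside Erlang; the following is a minimal Python sketch (OTP supervisors are far richer, with restart strategies and restart-intensity windows):

    # Sketch: workers run isolated in their own processes, failures are
    # detected rather than masked, and the supervisor restarts the worker.
    import multiprocessing as mp
    import time

    def supervise(tasks, max_restarts=5):
        """tasks: dict name -> zero-argument callable run in its own process."""
        procs = {n: mp.Process(target=fn) for n, fn in tasks.items()}
        for p in procs.values():
            p.start()
        restarts = 0
        while restarts < max_restarts:
            time.sleep(0.1)                 # crude failure detector
            for name, p in procs.items():
                if not p.is_alive():        # the worker crashed: let it crash,
                    restarts += 1           # then restart it in a clean state
                    procs[name] = mp.Process(target=tasks[name])
                    procs[name].start()
        # Too many restarts: escalate the failure instead of looping forever.
        raise RuntimeError("restart intensity exceeded")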
9

Individual service provisioning

Espinoza, Fredrik January 2003
Computer usage is once again going through changes. Leaving behind the experiences of mainframes with terminal access and personal computers with graphical user interfaces, we are now headed for handheld devices and ubiquitous computing; we are facing the prospect of interacting with electronic services. These network-enabled functional components provide benefit to users regardless of their whereabouts, access method, or access device. The marketplace is also changing, from suppliers of monolithic off-the-shelf applications, to open source and collaboratively developed specialized services. It is within this new arena of computing that we describe Individual Service Provisioning, a design and implementation that enables end users to create and provision their own services. Individual Service Provisioning consists of three components: a personal service environment, in which users can access and manage their services; ServiceDesigner, a tool with which to create new services; and the provisioning system, which turns end users into service providers.
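A purely illustrative sketch of the three-part structure the abstract names (none of these classes come from the thesis): a personal service environment holds a user's services, and a provisioning step publishes a user-created service to another user, turning the owner into a service provider.

    # Illustrative only: invented classes, not the thesis's implementation.
    class Service:
        def __init__(self, name, handler):
            self.name, self.handler = name, handler
        def invoke(self, *args):
            return self.handler(*args)

    class PersonalServiceEnvironment:
        def __init__(self, owner):
            self.owner, self.services = owner, {}
        def add(self, service):
            self.services[service.name] = service
        def provision(self, service_name, other_env):
            # The owner acts as a service provider for another user.
            other_env.add(self.services[service_name])

    alice = PersonalServiceEnvironment("alice")
    bob = PersonalServiceEnvironment("bob")
    alice.add(Service("echo", lambda msg: msg))
    alice.provision("echo", bob)
    assert bob.services["echo"].invoke("hi") == "hi"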
10

Architectures for service differentiation in overloaded Internet servers

Voigt, Thiemo January 2002
Web servers become overloaded when one or several server resources, such as the network interface, CPU and disk, become overutilized. Server overload leads to low server throughput and long response times experienced by the clients. Traditional server design includes only marginal or no support for overload protection. This thesis presents the design, implementation and evaluation of architectures that provide overload protection and service differentiation in web servers.

During server overload not all requests can be processed in a timely manner. Therefore, it is desirable to perform service differentiation, i.e., to favour requests that are regarded as more important than others. Since requests that are eventually discarded also consume resources, admission control should be performed as early as possible in the lifetime of a web transaction. Depending on the workload, some server resources can be overutilized while the demand on other resources is low, because certain types of requests utilize one resource more than others. Implementing admission control in the kernel of the operating system proves more efficient and scalable than implementing the same scheme in user space. We also present an admission control architecture that bases its decisions on the current server resource utilization combined with knowledge about the resource consumption of requests. Experiments demonstrate more than 40% higher throughput during overload compared to a standard server, and response times that are several orders of magnitude lower.

This thesis also presents novel architectures and implementations of operating system support for predictable service guarantees. The Nemesis operating system provides applications with a guaranteed communication service using the developed TCP/IP implementation and the scheduling of server resources. SILK (Scout in the Linux kernel) is a new networking stack for the Linux operating system that is based on the Scout operating system. Experiments show that SILK enables prioritizing and other forms of service differentiation between network connections while running unmodified Linux applications. / DoCS 02/119, ISSN 0283-0574.
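The decision logic of utilization-plus-cost admission control can be sketched as follows (the thesis performs this in the kernel; this user-space Python, with invented thresholds and cost tables, only shows the idea):

    # Sketch: admit a request only if the resources it is expected to use
    # are not already near saturation; important requests bypass the check.
    UTIL_LIMIT = 0.9  # assumed per-resource utilization threshold

    # Assumed per-request-type resource costs, as fractions of capacity;
    # a real system would measure these.
    REQUEST_COST = {
        "static":  {"cpu": 0.001, "disk": 0.004, "net": 0.002},
        "dynamic": {"cpu": 0.010, "disk": 0.001, "net": 0.001},
    }

    def admit(request_type, utilization, priority=0):
        """utilization: current per-resource utilization, e.g.
        {"cpu": 0.85, "disk": 0.30, "net": 0.40}. Requests with
        priority > 0 are always admitted (service differentiation)."""
        if priority > 0:
            return True
        cost = REQUEST_COST[request_type]
        return all(utilization[r] + cost[r] <= UTIL_LIMIT for r in cost)

Rejecting at this point, before the request consumes further resources, reflects the abstract's argument that admission control should happen as early as possible in the web transaction.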