201

An elementary proposition on the dynamic routing problem in wireless networks of sensors

Koliousis, Alexandros K. January 2010 (has links)
The routing problem (finding an optimal route from one point in a computer network to another) is surrounded by impossibility results. These results are usually expressed as lower and upper bounds on the set of nodes (or the set of links) of a network and represent the complexity of a solution to the routing problem (a routing function). The routing problem dealt with here, in particular, is a dynamic one (it accounts for network dynamics) and concerns wireless networks of sensors. Sensors form wireless links of limited capacity and time-variable quality to route messages amongst themselves. It is desired that sensors self-organize ad hoc in order to successfully carry out a routing task, e.g. provide daily soil erosion reports for a monitored watershed, or provide immediate indications of an imminent volcanic eruption, in spite of network dynamics.

Link dynamics are the first barrier to finding an optimal route between a node x and a node y in a sensor network. The uncertainty of the outcome (the best next hop) of a routing function lies partially with the quality fluctuations of wireless links. Take, for example, a static network. It is known that, given the set of nodes and their link weights (or costs), a node can compute optimal routes by running, say, Dijkstra's algorithm. Link dynamics, however, suggest that costs are not static. Hence, sensors need a metric (a measurable quantity of uncertainty) to monitor for fluctuations, either improvements or degradations of quality or load; when a fluctuation is sufficiently large (say, by Delta), sensors ought to update their costs and seek another route.

Therein lies the other fundamental barrier to finding an optimal route: complexity. A crude argument would suggest that sensors (and their links) have an upper bound on the number of messages they can transmit, receive and store due to resource constraints. Such messages can be application traffic, in which case it is desirable, or control traffic, in which case it should be kept minimal. The first type of traffic is demand, and a user should provision for it accordingly. The second type of traffic is overhead, and it is necessary if a routing system (or scheme) is to ensure its fidelity to the application requirements (policy). It is possible for a routing scheme to approximate optimal routes (by Delta) by reducing its message and/or memory complexity. The common denominator of the routing problem and the desire to minimize overhead while approximating optimal routes is Delta, the deviation (or stretch) of a computed route from an optimal one, as computed by a node that has instantaneous knowledge of the set of all nodes and their interaction costs (an oracle).

This dissertation deals with both problems in unison. To do so, it needs to translate the policy space (the user objectives) into a metric space (routing objectives). It does so by means of a cost function that normalizes metrics into a number of hops. It then proceeds to devise, design, and implement a scheme that computes minimum-hop-count routes with manageable complexity. The theory presented is founded on (well-ordered) sets with respect to an elementary proposition: that a route from a source x to a destination y can be computed either by y sending an advertisement to the set of all nodes, or by x sending a query to the set of all nodes; henceforth the proactive method (of y) and the reactive method (of x), respectively.
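
To make the interplay between shortest-path computation and the update threshold Delta concrete, here is a minimal Python sketch (not taken from the dissertation; the toy graph, the link costs, and the value of Delta are invented for illustration). Routes are recomputed with Dijkstra's algorithm only when a link-cost fluctuation exceeds Delta.

```python
import heapq

def dijkstra(neighbours, source):
    """Single-source shortest paths over a dict {node: {neighbour: cost}}."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in neighbours[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def maybe_recompute(neighbours, source, dist, link, new_cost, delta=0.5):
    """Re-run Dijkstra only if the observed cost fluctuation exceeds delta."""
    u, v = link
    if abs(neighbours[u][v] - new_cost) <= delta:
        return dist                      # fluctuation too small: keep current routes
    neighbours[u][v] = new_cost          # otherwise update the cost and recompute
    return dijkstra(neighbours, source)

# Toy sensor network with hypothetical link costs.
g = {"x": {"a": 1.0, "b": 2.5}, "a": {"y": 1.0}, "b": {"y": 0.5}, "y": {}}
routes = dijkstra(g, "x")
routes = maybe_recompute(g, "x", routes, ("a", "y"), 2.2)  # degradation exceeds delta
```
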
The debate between proactive and reactive routing protocols appears in many instances of the routing problem (e.g. routing in mobile networks, routing in delay-tolerant networks, compact routing), and it is focussed on whether nodes should know all routes a priori and then select the best one (the proactive method), or whether each node should simply search for a (hopefully best) route on demand (the reactive method). The proactive method is stateful, as it requires the entire metric space - the set of nodes and their interaction costs - in memory (in a routing table). The routes computed by the proactive method are optimal, and the lower and upper bounds of proactive schemes match those of an oracle. Any attempt to reduce the proactive overhead, e.g. by introducing hierarchies, will result in sub-optimal routes (of known stretch). The reactive method is stateless, as it requires no information whatsoever to compute a route. Reactive schemes - at least as they are presently understood - compute sub-optimal routes (and thus far, of unknown stretch). This dissertation attempts to answer the following question: "what is the least amount of state required to compute an optimal route from a source to a destination?" A hybrid routing scheme is used to investigate this question, one that uses the proactive method to compute routes to near destinations and the reactive method for distant destinations. It is shown that there are cases where hybrid schemes can converge to optimal routes, despite possessing incomplete routing state, and that the necessary and sufficient condition to compute optimal routes with local state alone is related neither to the size nor the density of a network; it is rather the circumference (the size of the largest cycle) of a network that matters. Counterexamples, where local state is insufficient, are discussed to derive the worst-case stretch. The theory is augmented with simulation results and a small experimental testbed to motivate the discussion on how policy space (user requirements) can translate into metric spaces and how different metrics affect performance. On the debate between proactive and reactive protocols, it is shown that the two classes are equivalent. The dissertation concludes with a discussion on the applicability of its results and poses some open problems.
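
The hybrid idea can be caricatured in a few lines of Python. This is only an illustrative sketch - the table layout, the hop radius, and the reactive query function are hypothetical, not the dissertation's protocol:

```python
def next_hop(routing_table, destination, reactive_query, radius=2):
    """Hybrid lookup: proactive table entries for near destinations, reactive search otherwise."""
    entry = routing_table.get(destination)
    if entry is not None and entry["hops"] <= radius:
        return entry["next_hop"]            # proactive: answered from local state
    return reactive_query(destination)      # reactive: search on demand

# Hypothetical local state: only destinations within the radius are served proactively.
table = {"a": {"next_hop": "n1", "hops": 1}, "z": {"next_hop": "n9", "hops": 7}}
print(next_hop(table, "a", reactive_query=lambda d: "query sent for " + d))  # n1
print(next_hop(table, "z", reactive_query=lambda d: "query sent for " + d))  # query sent for z
```
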
202

The construction of high-performance virtual machines for dynamic languages

Shannon, Mark January 2011 (has links)
Dynamic languages, such as Python and Ruby, have become more widely used over the past decade. Despite this, the standard virtual machines for these languages have disappointing performance. These virtual machines are slow, not because methods for achieving better performance are unknown, but because their implementation is hard. What makes the implementation of high-performance virtual machines difficult is not that they are large pieces of software, but that there are fundamental and complex interdependencies between their components. In order to work together correctly, the interpreter, just-in-time compiler, garbage collector and library must all conform to the same precise low-level protocols. In this dissertation I describe a method for constructing virtual machines for dynamic languages, and explain how to design a virtual machine toolkit by building it around an abstract machine. The design and implementation of such a toolkit, the Glasgow Virtual Machine Toolkit (GVMT), is described. The Glasgow Virtual Machine Toolkit automatically generates a just-in-time compiler, integrates precise garbage collection into the virtual machine, and automatically manages the complex interdependencies between all the virtual machine components. Two different virtual machines have been constructed using the GVMT. One is a minimal implementation of Scheme, implemented in under three weeks to demonstrate that toolkits like the GVMT can enable the easy construction of virtual machines. The second, the HotPy VM for Python, is a high-performance virtual machine; it demonstrates that a virtual machine built with a toolkit can be fast and that the use of a toolkit does not overly constrain the high-level design. Evaluation shows that HotPy outperforms the standard Python interpreter, CPython, by a large margin, and has performance on a par with PyPy, the fastest Python VM currently available.
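
As a point of reference for what sits at the core of any such virtual machine, here is a generic stack-machine interpreter loop in Python. It is purely illustrative: the instruction set is invented, and none of this is GVMT or HotPy code.

```python
def run(bytecode, stack=None):
    """A minimal stack-machine interpreter loop (illustrative only)."""
    stack = [] if stack is None else stack
    pc = 0
    while pc < len(bytecode):
        op, arg = bytecode[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            print(stack.pop())
        pc += 1
    return stack

# A tiny program for the toy machine: print (1 + 2).
run([("PUSH", 1), ("PUSH", 2), ("ADD", None), ("PRINT", None)])
```
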
203

Compact routing for the future internet

Strowes, Stephen D. January 2012 (has links)
The Internet relies on its inter-domain routing system to allow data transfer between any two endpoints regardless of where they are located. This routing system currently uses a shortest path routing algorithm (modified by local policy constraints) called the Border Gateway Protocol. The massive growth of the Internet has led to large routing tables that will continue to grow. This will present a serious engineering challenge for router designers in the long term, rendering state (routing table) growth at this pace unsustainable. There are various short-term engineering solutions that may slow the growth of the inter-domain routing tables, at the expense of increasing the complexity of the network. In addition, some of these require manual configuration, or introduce additional points of failure within the network. These solutions may give an incremental, constant-factor improvement. However, we know from previous work that all shortest path routing algorithms require forwarding state that grows linearly with the size of the network in the worst case. Rather than attempt to sustain inter-domain routing through a shortest path routing algorithm, compact routing algorithms exist that guarantee worst-case sub-linear state requirements at all nodes by allowing an upper bound on path length relative to the theoretical shortest path, known as path stretch. Previous work has shown the promise of these algorithms when applied to synthetic graphs with properties similar to the known Internet graph, but they have not been studied in depth on Internet topologies derived from real data. In this dissertation, I demonstrate the consistently strong performance of these compact routing algorithms for inter-domain routing by performing a longitudinal study of two compact routing algorithms on the Internet Autonomous System (AS) graph over time. I then show, using the k-cores graph decomposition algorithm, that the structurally important nodes in the AS graph are highly stable over time. This property makes these nodes suitable for use as the "landmark" nodes used by the most stable of the compact routing algorithms evaluated, and the use of these nodes shows similarly strong routing performance. Finally, I present a decentralised compact routing algorithm for dynamic graphs, and present state requirements and message overheads on AS graphs using realistic simulation inputs. To accommodate the continued long-term growth of Internet routing state, an alternative routing architecture may be required. The use of the compact routing algorithms presented in this dissertation offers promise for a scalable future Internet routing system.
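
The k-cores decomposition mentioned above assigns each node the largest k such that it belongs to a subgraph in which every node has degree at least k. A minimal Python sketch of the standard peeling algorithm follows; the toy graph is invented, and a real study would run this on the full AS graph:

```python
def core_numbers(adj):
    """Assign each node its k-core number by repeatedly removing a minimum-degree node."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    remaining = set(adj)
    core = {}
    k = 0
    while remaining:
        v = min(remaining, key=degree.get)   # peel a minimum-degree node
        k = max(k, degree[v])                # the core number never decreases
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                degree[u] -= 1
    return core

# Toy graph: a triangle (2-core) with one pendant node (1-core).
g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(core_numbers(g))   # the triangle nodes get core number 2, "d" gets 1
```
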
204

ISSUES IN DISTRIBUTED PROGRAMMING LANGUAGES: THE EVOLUTION OF SR (CONCURRENT).

Olsson, Ronald Arthur January 1986 (has links)
This dissertation examines fundamental issues that face the designers of any distributed programming language. It considers how programs are structured, how processes communicate and synchronize, and how hardware failures are represented and handled. We discuss each of these issues and argue for a particular approach based on our application domain: distributed systems (such as distributed operating systems) and distributed user applications. We conclude that a language for such applications should include the following mechanisms: dynamic modules, shared variables (within a module), dynamic processes, synchronous and asynchronous forms of message passing, rendezvous, concurrent invocation, and early reply. We then describe the current SR language, which has evolved considerably based on our experience. SR provides the above mechanisms in a way that is expressive yet simple. SR resolves the tension between expressiveness and simplicity by providing a variety of mechanisms based on only a few underlying concepts. The main language constructs are still resources and operations. Resources encapsulate processes and the variables they share; operations provide the primary mechanism for process interaction. One way in which SR has changed is that both resources and processes are now created dynamically. Another change is that all the common mechanisms for process interaction--local and remote procedure call, rendezvous, dynamic process creation, asynchronous message passing, and semaphores--are now supported by a novel integration of the mechanisms for invoking and servicing operations. Many small and several larger examples illustrate SR's mechanisms and the interplay between them; these examples also demonstrate the language's expressiveness and flexibility. We then describe our implementation of SR. The compiler, linker, and run-time support are summarized. We then focus on how the generated code and run-time support interact to provide dynamic resources and to generate and service invocations. We also describe optimizations for certain operations. Measurements of the implementation's size and cost are given. The implementation has been in use since November 1985 and is currently being improved. Finally, we justify SR's syntax and semantics and examine how its mechanisms compare to other approaches to distributed programming. We also discuss how SR balances expressiveness, simplicity, and efficiency.
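
SR's own syntax is not reproduced here, but the central idea - that synchronous call and asynchronous send are two ways of invoking the same operation, serviced by one underlying mechanism - can be mimicked in Python with threads and queues. This is a rough, hypothetical analogue, not SR code:

```python
import queue, threading

class Operation:
    """A rough analogue of an SR operation: invocations are queued and serviced."""
    def __init__(self, body):
        self.body = body
        self.invocations = queue.Queue()
        threading.Thread(target=self._service, daemon=True).start()

    def _service(self):
        while True:
            args, reply = self.invocations.get()
            result = self.body(*args)
            if reply is not None:          # 'call' invocations wait for a reply
                reply.put(result)

    def call(self, *args):                 # synchronous invocation (rendezvous-like)
        reply = queue.Queue(maxsize=1)
        self.invocations.put((args, reply))
        return reply.get()

    def send(self, *args):                 # asynchronous invocation (message passing)
        self.invocations.put((args, None))

add = Operation(lambda a, b: a + b)
print(add.call(2, 3))   # blocks until the result arrives: 5
add.send(4, 5)          # returns immediately; serviced in the background
```
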
205

DESIGN OF PORTABLE DIRECT EXECUTING LANGUAGES FOR INTERACTIVE SIMULATION.

VAKILZADIAN, HAMID. January 1985 (has links)
DESIRE P is a general-purpose continuous-time simulation language suitable for interactive simulation, dynamic system studies, mathematical modeling, and process control analysis. It includes an interactive editor, file manipulation facilities, and graphics packages, making it a completely self-contained system. The PDP-11 version of DESIRE P handles 20 state variables, while the VAX/VMS version handles 150 or more. An interpreted job-control language serves for interactive program entry, editing and file operations, and for programming multirun simulation studies. The dynamic segment, containing differential equations in first-order form, is entered just like the job-control statements and accesses the same variables. DESIRE P is largely written in PASCAL, and most of it can be transferred to different computers with little change. The PASCAL implementation proves that a high-level language can be used to program direct-executing languages while keeping efficiency and speed comparable to assembly language. The runtime compiler of DESIRE P generates fast and efficient code. DESIRE P can incorporate existing and new precompiled FORTRAN numerical integration algorithms.
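
The core of any such continuous-time simulation is an integration loop over state equations written in first-order form. The following Python sketch illustrates the idea only; the explicit Euler rule, the step size, and the example system are assumptions, not DESIRE P's actual algorithms or syntax:

```python
def simulate(derivatives, state, dt=0.01, t_end=5.0):
    """Integrate dx/dt = f(t, x), given in first-order form, with explicit Euler steps."""
    t, history = 0.0, [(0.0, list(state))]
    while t < t_end:
        dxdt = derivatives(t, state)
        state = [x + dt * dx for x, dx in zip(state, dxdt)]
        t += dt
        history.append((t, list(state)))
    return history

# A damped harmonic oscillator written as two first-order equations:
#   x' = v,  v' = -x - 0.2 * v
trace = simulate(lambda t, s: [s[1], -s[0] - 0.2 * s[1]], [1.0, 0.0])
print(trace[-1])   # (time, state) at the end of the run
```
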
206

AUTOMATIC HARDWARE COMPILER FOR THE CMOS GATE ARRAY

Hu, Jhy-Fang, 1961- January 1986 (has links)
No description available.
207

Deforestation for higher-order functional programs

Marlow, Simon David January 1995 (has links)
Functional programming languages are an ideal medium for program optimisations based on source-to-source transformation techniques. Referential transparency affords opportunities for a wide range of correctness-preserving transformations leading to potent optimisation strategies. This thesis builds on deforestation, a program transformation technique due to Wadler that removes intermediate data structures from first-order functional programs. Our contribution is to reformulate deforestation for higher-order functional programming languages, and to show that the resulting algorithm terminates given certain syntactic and typing constraints on the input. These constraints are entirely reasonable; indeed, it is possible to translate any typed program into the required syntactic form. We show how this translation can be performed automatically and optimally. The higher-order deforestation algorithm is transparent; that is, it is possible to determine by examination of the source program where the optimisation will be applicable. We also investigate the relationship of deforestation to cut-elimination, the normalisation property for the logic of sequent calculus. By combining a cut-elimination algorithm and first-order deforestation, we derive an improved higher-order deforestation algorithm. The higher-order deforestation algorithm has been implemented in the Glasgow Haskell Compiler. We describe how deforestation fits into the framework of Haskell, and design a model for the implementation that allows automatic list removal, with additional deforestation being performed on the basis of programmer-supplied annotations. Results from applying the deforestation implementation to several example Haskell programs are given.
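
The effect of the transformation - a producer and a consumer fused so that the intermediate structure between them is never built - can be illustrated outside Haskell as well. The following Python pair is only an analogue of the idea, not the dissertation's algorithm:

```python
def squares_then_sum_unfused(n):
    """Builds an intermediate list of squares, then consumes it."""
    squares = [i * i for i in range(n)]   # intermediate data structure
    return sum(squares)

def squares_then_sum_fused(n):
    """The 'deforested' version: producer and consumer fused into one loop."""
    total = 0
    for i in range(n):
        total += i * i                    # no intermediate list is ever built
    return total

assert squares_then_sum_unfused(1000) == squares_then_sum_fused(1000)
```
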
208

Cognitive error analysis in accident and incident investigation in safety-critical domains

Busse, Daniela Karin January 2002 (has links)
In the first case study, a database of 10 years' worth of medical incident data gathered in an Edinburgh Intensive Care Unit was analyzed using the proposed cognitive error analysis approach. In the second, live case study, the error analysis approach was evaluated in the field by applying it to incident reporting data collected with a newly implemented incident reporting scheme in a Glasgow Neonatal Intensive Care Unit. The insights gained by analyzing the Edinburgh incident scheme were used to inform the design and implementation of the Glasgow incident scheme as part of the unit's existing safety management. Since both were local incident reporting schemes, it was seen as an important factor for their success to take the local context and conditions into account while situating the cognitive error analysis approach as part of these hospitals' safety management strategies. The evaluation of this incident reporting and analysis framework demonstrated the benefits of a structured, psychological "human error" analysis approach that centres on the human aspect of the incident without isolating it from its context. It is argued that not only could the understanding of the underlying error mechanisms be improved for individual incidents, but the generation of safety recommendations could be supported, and these could then also be evaluated as to their impact on the human "in the loop". The resulting error analysis models could further be used as a basis for comparing competing analyses, and could also improve analysis traceability by documenting the analysis process and its resulting safety recommendations. Further work is needed in providing "best practices" for the application of the cognitive analytical framework. Further work is also needed in formalizing a way to situate the cognitive error analysis approach within the investigation of local work system factors in the search for the overall incident and accident causation. This thesis aims at demonstrating the benefits of grounding the analysis of human error, as part of incident and accident reporting, in a cognitive theoretical framework. This provides the means and the vocabulary to reason about alternative causal hypotheses while also acting as a tool to document and communicate the psychological analysis of human error and its resulting safety recommendations. This approach is proposed as a complement to the analysis of human error data by means of error taxonomies grounded in psychological theory.
209

Decentralising resource management in operating systems

Neugebauer, Rolf January 2003 (has links)
This dissertation explores operating system mechanisms to allow resource-aware applications to be involved in the process of managing resources under the premise that these applications (1) potentially have some (implicit) notion of their future resource demands and (2) can adapt their resource demands. The general idea is to provide feedback to resource-aware applications so that they can proactively participate in the management of resources. This approach has the benefit that resource management policies can be removed from central entities and the operating system has only to provide mechanism. Furthermore, in contrast to centralised approaches, application specific features can be more easily exploited. To achieve this aim, I propose to deploy a microeconomic theory, namely congestion or shadow pricing, which has recently received attention for managing congestion in communication networks. Applications are charged based on the potential "damage" they cause to other consumers by using resources. Consumers interpret these congestion charges as feedback signals which they use to adjust their resource consumption. It can be shown theoretically that such a system with consumers merely acting in their own self-interest will converge to a social optimum. This dissertation focuses on the operating system mechanisms required to decentralise resource management this way. In particular it identifies four mechanisms: pricing & charging, credit accounting, resource usage accounting, and multiplexing. While the latter two are mechanisms generally required for the accurate management of resources, pricing & charging and credit accounting present novel mechanisms. It is argued that congestion prices are the correct economic model in this context and provide appropriate feedback to applications. The credit accounting mechanism is necessary to ensure the overall stability of the system by assigning value to credits.
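
The congestion-pricing feedback loop described above can be sketched numerically: the resource raises its shadow price while aggregate demand exceeds capacity, and each consumer, acting purely in its own interest, scales demand back as the charge rises. The Python sketch below uses an assumed log-utility demand response and invented constants; it is illustrative only, not the dissertation's mechanism:

```python
def run_market(weights, capacity=100.0, kappa=0.01, rounds=200):
    """Shadow pricing: the price rises while total demand exceeds capacity;
    each consumer with willingness-to-pay w demands w / price (log-utility optimum)."""
    price = 1.0
    for _ in range(rounds):
        demands = [w / price for w in weights]               # self-interested responses
        load = sum(demands)
        price = max(0.01, price + kappa * (load - capacity)) # congestion feedback signal
    return price, demands

price, demands = run_market([30.0, 20.0, 10.0])
print(round(sum(demands), 1))   # total demand settles near the capacity of 100
```
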
210

Distortion-constraint compression of three-dimensional CLSM images using image pyramid and vector quantization

Tao, Yegang January 2005 (has links)
Confocal microscopy imaging techniques, which allow optical sectioning, have been successfully exploited in biomedical studies. Biomedical scientists can benefit from more realistic visualization and much more accurate diagnosis by processing and analysing three-dimensional image data. The lack of efficient image compression standards makes such large volumetric image data slow to transfer over limited-bandwidth networks. It also imposes large storage space requirements and high cost in archiving and maintenance. Conventional two-dimensional image coders do not take into account inter-frame correlations in three-dimensional image data. The standard multi-frame coders, like video coders, although they have good performance in capturing motion information, are not efficiently designed for coding multiple frames representing a stack of optical planes of a real object. Therefore a real three-dimensional image compression approach should be investigated. Moreover, the reconstructed image quality is a very important concern in compressing medical images, because it could be directly related to diagnosis accuracy. Most state-of-the-art methods are based on transform coding; for instance, JPEG is based on the discrete cosine transform (DCT) and JPEG2000 on the discrete wavelet transform (DWT). However, in DCT and DWT methods, the control of the reconstructed image quality is inconvenient, involving considerable costs in computation, since they are fundamentally rate-parameterized methods rather than distortion-parameterized methods. Therefore it is very desirable to develop a transform-based distortion-parameterized compression method, which is expected to have high coding performance and also be able to conveniently and accurately control the final distortion according to a user-specified quality requirement. This thesis describes our work in developing a distortion-constraint three-dimensional image compression approach, using vector quantization techniques combined with image pyramid structures. We expect our method to have:

1. High coding performance in compressing three-dimensional microscopic image data, compared to state-of-the-art three-dimensional image coders, other standardized two-dimensional image coders, and video coders.

2. Distortion-control capability, a very desirable feature in medical image compression applications, which is superior to rate-parameterized methods in achieving a user-specified quality requirement.

The result is a three-dimensional image compression method which has outstanding compression performance, measured objectively, for volumetric microscopic images. The distortion-constraint feature, by which users can expect to achieve a target image quality rather than a target compressed file size, offers more flexible control of the reconstructed image quality than its rate-constraint counterparts in medical image applications. Additionally, it effectively reduces the artifacts present in other approaches at low bit rates and also attenuates noise in the pre-compressed images. Furthermore, its advantages in progressive transmission and fast decoding make it suitable for bandwidth-limited telecommunications and web-based image browsing applications.
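
The vector-quantization step at the heart of such a coder maps fixed-size image blocks to the nearest entries of a codebook, so that only indices (and the codebook) need to be stored or transmitted. The following Python sketch is illustrative only: the codebook here is random rather than trained, and the image-pyramid and distortion-control machinery of the thesis are omitted.

```python
import numpy as np

def quantize(blocks, codebook):
    """Map each block (flattened vector) to the index of its nearest codeword."""
    # Squared distances between every block and every codeword: shape (n_blocks, n_codewords).
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def reconstruct(indices, codebook):
    """Decode by codebook lookup; distortion is measured as mean squared error."""
    return codebook[indices]

rng = np.random.default_rng(0)
blocks = rng.random((64, 16))            # 64 blocks of 4x4 pixels, flattened
codebook = rng.random((8, 16))           # a tiny 8-entry codebook (untrained, for illustration)
idx = quantize(blocks, codebook)
mse = ((reconstruct(idx, codebook) - blocks) ** 2).mean()
print(idx.shape, round(float(mse), 4))
```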
