91

An elementary proposition on the dynamic routing problem in wireless networks of sensors

Koliousis, Alexandros K. January 2010
The routing problem (finding an optimal route from one point in a computer network to another) is surrounded by impossibility results. These results are usually expressed as lower and upper bounds on the set of nodes (or the set of links) of a network and represent the complexity of a solution to the routing problem (a routing function). The routing problem dealt with here, in particular, is a dynamic one (it accounts for network dynamics) and concerns wireless networks of sensors. Sensors form wireless links of limited capacity and time-variable quality to route messages amongst themselves. It is desired that sensors self-organize ad hoc in order to successfully carry out a routing task, e.g. provide daily soil erosion reports for a monitored watershed, or provide immediate indications of an imminent volcanic eruption, in spite of network dynamics. Link dynamics are the first barrier to finding an optimal route between a node x and a node y in a sensor network. The uncertainty of the outcome (the best next hop) of a routing function lies partially with the quality fluctuations of wireless links. Take, for example, a static network. It is known that, given the set of nodes and their link weights (or costs), a node can compute optimal routes by running, say, Dijkstra's algorithm. Link dynamics, however, suggest that costs are not static. Hence, sensors need a metric (a measurable quantity of uncertainty) to monitor for fluctuations, either improvements or degradations of quality or load; when a fluctuation is sufficiently large (say, by Delta), sensors ought to update their costs and seek another route. Therein lies the other fundamental barrier to finding an optimal route - complexity. A crude argument would suggest that sensors (and their links) have an upper bound on the number of messages they can transmit, receive and store due to resource constraints. Such messages can be application traffic, in which case they are desirable, or control traffic, in which case they should be kept minimal. The first type of traffic is demand, and a user should provision for it accordingly. The second type of traffic is overhead, and it is necessary if a routing system (or scheme) is to ensure its fidelity to the application requirements (policy). It is possible for a routing scheme to approximate optimal routes (by Delta) by reducing its message and/or memory complexity. The common denominator of the routing problem and the desire to minimize overhead while approximating optimal routes is Delta, the deviation (or stretch) of a computed route from an optimal one, as computed by a node that has instantaneous knowledge of the set of all nodes and their interaction costs (an oracle). This dissertation deals with both problems in unison. To do so, it needs to translate the policy space (the user objectives) into a metric space (routing objectives). It does so by means of a cost function that normalizes metrics into a number of hops. Then it proceeds to devise, design, and implement a scheme that computes minimum-hop-count routes with manageable complexity. The theory presented is founded on (well-ordered) sets with respect to an elementary proposition, that a route from a source x to a destination y can be computed either by y sending an advertisement to the set of all nodes, or by x sending a query to the set of all nodes; henceforth the proactive method (of y) and the reactive method (of x), respectively.
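As a concrete illustration of the paragraph above (a minimal sketch of my own, not the dissertation's scheme; the node class, the directed-link assumption, and the Delta threshold are all invented for illustration), a node could recompute shortest routes with Dijkstra's algorithm only when a link-cost fluctuation exceeds Delta:

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path costs from source; adj maps node -> {neighbour: cost}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

class SensorNode:
    """Recomputes routes only on sufficiently large cost fluctuations."""

    def __init__(self, adj, source, delta):
        self.adj, self.source, self.delta = adj, source, delta
        self.routes = dijkstra(adj, source)

    def observe(self, u, v, new_cost):
        # Directed link (u, v); recompute only when the fluctuation >= delta.
        if abs(self.adj[u][v] - new_cost) >= self.delta:
            self.adj[u][v] = new_cost
            self.routes = dijkstra(self.adj, self.source)
```

The trade-off described above is visible here: a small delta tracks optimal routes closely but recomputes (and, in a real network, floods cost updates) often, while a large delta bounds the overhead at the price of stretch.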
The debate between proactive and reactive routing protocols appears in many instances of the routing problem (e.g. routing in mobile networks, routing in delay-tolerant networks, compact routing), and it is focussed on whether nodes should know a priori all routes and then select the best one (with the proactive method), or whether each node should simply search for a (hopefully best) route on demand (with the reactive method). The proactive method is stateful, as it requires the entire metric space - the set of nodes and their interaction costs - in memory (in a routing table). The routes computed by the proactive method are optimal and the lower and upper bounds of proactive schemes match those of an oracle. Any attempt to reduce the proactive overhead, e.g. by introducing hierarchies, will result in sub-optimal routes (of known stretch). The reactive method is stateless, as it requires no information whatsoever to compute a route. Reactive schemes - at least as they are presently understood - compute sub-optimal routes (and thus far, of unknown stretch). This dissertation attempts to answer the following question: "what is the least amount of state required to compute an optimal route from a source to a destination?" A hybrid routing scheme is used to investigate this question, one that uses the proactive method to compute routes to near destinations and the reactive method for distant destinations. It is shown that there are cases where hybrid schemes can converge to optimal routes, despite possessing incomplete routing state, and that the necessary and sufficient condition to compute optimal routes with local state alone is related neither to the size nor the density of a network; it is rather the circumference (the size of the largest cycle) of a network that matters. Counterexamples, where local state is insufficient, are discussed to derive the worst-case stretch. The theory is augmented with simulation results and a small experimental testbed to motivate the discussion on how policy space (user requirements) can translate into metric spaces and how different metrics affect performance. On the debate between proactive and reactive protocols, it is shown that the two classes are equivalent. The dissertation concludes with a discussion on the applicability of its results and poses some open problems.
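The hybrid scheme can be caricatured in a few lines (a loose sketch under my own assumptions; the reactive side is modelled as an unrestricted search standing in for an on-demand query flood):

```python
from collections import deque

def proactive_table(adj, source, radius):
    """BFS routing table (destination -> next hop) limited to radius hops."""
    table, seen = {}, {source}
    frontier = deque([(source, None, 0)])
    while frontier:
        u, first_hop, hops = frontier.popleft()
        if hops == radius:
            continue
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                nh = v if first_hop is None else first_hop
                table[v] = nh
                frontier.append((v, nh, hops + 1))
    return table

def next_hop(adj, source, dest, radius):
    table = proactive_table(adj, source, radius)
    if dest in table:
        return table[dest]  # near destination: answered from local state
    # Distant destination: fall back to a reactive, on-demand search
    # (here simply an unrestricted BFS standing in for a query flood).
    return proactive_table(adj, source, radius=len(adj)).get(dest)
```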
92

The construction of high-performance virtual machines for dynamic languages

Shannon, Mark January 2011
Dynamic languages, such as Python and Ruby, have become more widely used over the past decade. Despite this, the standard virtual machines for these languages have disappointing performance. These virtual machines are slow, not because methods for achieving better performance are unknown, but because their implementation is hard. What makes the implementation of high-performance virtual machines difficult is not that they are large pieces of software, but that there are fundamental and complex interdependencies between their components. In order to work together correctly, the interpreter, just-in-time compiler, garbage collector and library must all conform to the same precise low-level protocols. In this dissertation I describe a method for constructing virtual machines for dynamic languages, and explain how to design a virtual machine toolkit by building it around an abstract machine. The design and implementation of such a toolkit, the Glasgow Virtual Machine Toolkit (GVMT), is described. The Glasgow Virtual Machine Toolkit automatically generates a just-in-time compiler, integrates precise garbage collection into the virtual machine, and automatically manages the complex interdependencies between all the virtual machine components. Two different virtual machines have been constructed using the GVMT. One is a minimal implementation of Scheme, implemented in under three weeks to demonstrate that toolkits like the GVMT can enable the easy construction of virtual machines. The second, the HotPy VM for Python, is a high-performance virtual machine; it demonstrates that a virtual machine built with a toolkit can be fast and that the use of a toolkit does not overly constrain the high-level design. Evaluation shows that HotPy outperforms the standard Python interpreter, CPython, by a large margin, and has performance on a par with PyPy, the fastest Python VM currently available.
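As a toy illustration of the kind of component such a toolkit generates (this is in no way the GVMT's actual machinery; the opcodes and names are invented), consider a dispatch loop for a small stack-based abstract machine. In a toolkit, the same instruction definitions would also drive the generated just-in-time compiler and the garbage collector's stack maps:

```python
def run(code, consts):
    """Execute a list of (opcode, argument) pairs on a value stack."""
    stack, pc = [], 0
    while pc < len(code):
        op, arg = code[pc]
        pc += 1
        if op == "PUSH":
            stack.append(consts[arg])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            print(stack.pop())
        elif op == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack

# Prints 3: pushes 1 and 2, adds, prints the result.
run([("PUSH", 0), ("PUSH", 1), ("ADD", None), ("PRINT", None), ("HALT", None)],
    consts=[1, 2])
```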
93

Compact routing for the future internet

Strowes, Stephen D. January 2012
The Internet relies on its inter-domain routing system to allow data transfer between any two endpoints regardless of where they are located. This routing system currently uses a shortest path routing algorithm (modified by local policy constraints) called the Border Gateway Protocol. The massive growth of the Internet has led to large routing tables that will continue to grow. This will present a serious engineering challenge for router designers in the long term, rendering state (routing table) growth at this pace unsustainable. There are various short-term engineering solutions that may slow the growth of the inter-domain routing tables, at the expense of increasing the complexity of the network. In addition, some of these require manual configuration, or introduce additional points of failure within the network. These solutions may give an incremental, constant factor, improvement. However, we know from previous work that all shortest path routing algorithms require forwarding state that grows linearly with the size of the network in the worst case. Rather than attempt to sustain inter-domain routing through a shortest path routing algorithm, compact routing algorithms exist that guarantee worst-case sub-linear state requirements at all nodes by allowing an upper bound on path length relative to the theoretical shortest path, known as path stretch. Previous work has shown the promise of these algorithms when applied to synthetic graphs with similar properties to the known Internet graph, but they have not been studied in depth on Internet topologies derived from real data. In this dissertation, I demonstrate the consistently strong performance of these compact routing algorithms for inter-domain routing by performing a longitudinal study of two compact routing algorithms on the Internet Autonomous System (AS) graph over time. I then show, using the k-cores graph decomposition algorithm, that the structurally important nodes in the AS graph are highly stable over time. This property makes these nodes suitable for use as the "landmark" nodes used by the most stable of the compact routing algorithms evaluated, and the use of these nodes shows similar strong routing performance. Finally, I present a decentralised compact routing algorithm for dynamic graphs, and present state requirements and message overheads on AS graphs using realistic simulation inputs. To cope with the continued long-term growth of Internet routing state, an alternative routing architecture may be required. The use of the compact routing algorithms presented in this dissertation offers promise for a scalable future Internet routing system.
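The landmark idea can be sketched in a few lines (my illustration, not one of the algorithms evaluated in the dissertation; it assumes the networkx library and an undirected connected graph). Each node keeps full routes only to a handful of landmarks, and the stretch of routing a packet via the destination's nearest landmark can be measured against the true shortest path:

```python
import networkx as nx

def landmark_stretch(g, landmarks, src, dst):
    """Stretch of the route forced through the landmark nearest to dst (src != dst)."""
    direct = nx.shortest_path_length(g, src, dst)        # oracle shortest path
    lm = min(landmarks, key=lambda l: nx.shortest_path_length(g, dst, l))
    via = (nx.shortest_path_length(g, src, lm)
           + nx.shortest_path_length(g, lm, dst))
    return via / direct                                  # always >= 1.0

g = nx.barabasi_albert_graph(200, 2, seed=1)   # scale-free stand-in for an AS graph
landmarks = sorted(nx.k_core(g).nodes)[:5]     # well-connected, core-central nodes
print(landmark_stretch(g, landmarks, 0, 150))
```

Choosing the landmarks from the maximum k-core mirrors the dissertation's observation that the structurally important AS nodes are stable enough to serve in that role.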
94

Deforestation for higher-order functional programs

Marlow, Simon David January 1995
Functional programming languages are an ideal medium for program optimisations based on source-to-source transformation techniques. Referential transparency affords opportunities for a wide range of correctness-preserving transformations leading to potent optimisation strategies. This thesis builds on deforestation, a program transformation technique due to Wadler that removes intermediate data structures from first-order functional programs. Our contribution is to reformulate deforestation for higher-order functional programming languages, and to show that the resulting algorithm terminates given certain syntactic and typing constraints on the input. These constraints are entirely reasonable; indeed, it is possible to translate any typed program into the required syntactic form. We show how this translation can be performed automatically and optimally. The higher-order deforestation algorithm is transparent. That is, it is possible to determine by examination of the source program where the optimisation will be applicable. We also investigate the relationship of deforestation to cut-elimination, the normalisation property for the logic of sequent calculus. By combining a cut-elimination algorithm and first-order deforestation, we derive an improved higher-order deforestation algorithm. The higher-order deforestation algorithm has been implemented in the Glasgow Haskell Compiler. We describe how deforestation fits into the framework of Haskell, and design a model for the implementation that allows automatic list removal, with additional deforestation being performed on the basis of programmer-supplied annotations. Results from applying the deforestation implementation to several example Haskell programs are given.
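Deforestation itself is a compile-time transformation for functional languages, but its effect can be shown by hand (a Python analogy of my own, not the thesis's algorithm): the first version builds an intermediate list, the fused version never materialises it:

```python
def sum_of_squares_before(n):
    squares = [x * x for x in range(n)]   # intermediate data structure
    return sum(squares)

def sum_of_squares_after(n):
    total = 0
    for x in range(n):                    # fused loop: no intermediate list
        total += x * x
    return total

assert sum_of_squares_before(10) == sum_of_squares_after(10) == 285
```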
95

Cognitive error analysis in accident and incident investigation in safety-critical domains

Busse, Daniela Karin January 2002
A database of 10 years' worth of medical incident data gathered in an Edinburgh Intensive Care Unit was analyzed using the proposed cognitive error analysis approach. In the second live case study, the error analysis approach was evaluated in the field by applying it to incident reporting data that was collected with a newly implemented incident reporting scheme in a Glasgow Neonatal Intensive Care Unit. The insights gained by analyzing the Edinburgh incident scheme were used to inform the design and implementation of the Glasgow incident scheme as part of the unit's existing safety management. Since both were local incident reporting schemes, it was seen as an important factor for their success to take the local context and conditions into account while situating the cognitive error analysis approach as part of these hospitals' safety management strategies. The evaluation of this incident reporting and analysis framework demonstrated the benefits of a structured, psychological "human error" analysis approach that centres on the human aspect of the incident, without isolating it from its context. It is argued that not only could the understanding of the underlying error mechanisms be improved for individual incidents, but the generation of safety recommendations could be supported, and these could then also be evaluated as to their impact on the human "in the loop". The resulting error analysis models could further be used as a basis for comparing competing analyses, and also to improve analysis traceability by documenting the analysis process and its resulting safety recommendations. Further work is needed in providing "best practices" for the application of the cognitive analytical framework. Further work is also needed in formalizing a way to situate the cognitive error analysis approach within the investigation of local work system factors in the search for the overall incident and accident causation. This thesis aims at demonstrating the benefits of grounding the analysis of human error as part of incident and accident reporting in a cognitive theoretical framework. This will provide the means and the vocabulary to reason about alternative causal hypotheses while also acting as a tool to document and communicate the psychological analysis of human error and its resulting safety recommendations. This approach is proposed as a complement to the analysis of human error data by means of error taxonomies grounded in psychological theory.
96

Decentralising resource management in operating systems

Neugebauer, Rolf January 2003
This dissertation explores operating system mechanisms to allow resource-aware applications to be involved in the process of managing resources under the premise that these applications (1) potentially have some (implicit) notion of their future resource demands and (2) can adapt their resource demands. The general idea is to provide feedback to resource-aware applications so that they can proactively participate in the management of resources. This approach has the benefit that resource management policies can be removed from central entities and the operating system has only to provide mechanism. Furthermore, in contrast to centralised approaches, application specific features can be more easily exploited. To achieve this aim, I propose to deploy a microeconomic theory, namely congestion or shadow pricing, which has recently received attention for managing congestion in communication networks. Applications are charged based on the potential "damage" they cause to other consumers by using resources. Consumers interpret these congestion charges as feedback signals which they use to adjust their resource consumption. It can be shown theoretically that such a system with consumers merely acting in their own self-interest will converge to a social optimum. This dissertation focuses on the operating system mechanisms required to decentralise resource management this way. In particular it identifies four mechanisms: pricing & charging, credit accounting, resource usage accounting, and multiplexing. While the latter two are mechanisms generally required for the accurate management of resources, pricing & charging and credit accounting present novel mechanisms. It is argued that congestion prices are the correct economic model in this context and provide appropriate feedback to applications. The credit accounting mechanism is necessary to ensure the overall stability of the system by assigning value to credits.
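A toy tâtonnement loop illustrates the feedback scheme (my simplification of the congestion-pricing idea; the utility function, the gains, and convergence for these particular parameters are all assumptions, not the dissertation's mechanisms): the resource raises its charge with excess demand, and each self-interested application drifts towards the consumption that maximises its utility net of charges:

```python
def simulate(capacity=100.0, apps=4, steps=1000, smooth=0.1, gain=0.1):
    price = 1.0
    demands = [capacity / apps] * apps          # start from a fair share
    for _ in range(steps):
        total = sum(demands)
        # Congestion charge rises with excess demand, falls with spare capacity.
        price *= 1.0 + gain * (total - capacity) / capacity
        # Each app smooths towards its self-interested optimum x = 1 / price,
        # the maximiser of a log(x) utility minus the charge price * x.
        demands = [(1 - smooth) * x + smooth / price for x in demands]
    return price, sum(demands)

price, total = simulate()
# For these parameters the loop should settle near price = apps / capacity
# with aggregate demand near capacity: the social optimum the text refers to.
```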
97

Distortion-constraint compression of three-dimensional CLSM images using image pyramid and vector quantization

Tao, Yegang January 2005
Confocal microscopy imaging techniques, which allow optical sectioning, have been successfully exploited in biomedical studies. Biomedical scientists can benefit from more realistic visualization and much more accurate diagnosis by processing and analysing three-dimensional image data. The lack of efficient image compression standards makes such large volumetric image data slow to transfer over limited bandwidth networks. It also imposes large storage space requirements and high cost in archiving and maintenance. Conventional two-dimensional image coders do not take into account inter-frame correlations in three-dimensional image data. The standard multi-frame coders, like video coders, although they have good performance in capturing motion information, are not efficiently designed for coding multiple frames representing a stack of optical planes of a real object. Therefore a real three-dimensional image compression approach should be investigated. Moreover the reconstructed image quality is a very important concern in compressing medical images, because it could be directly related to the diagnosis accuracy. Most state-of-the-art methods are based on transform coding; for instance, JPEG is based on the discrete cosine transform (DCT) and JPEG2000 is based on the discrete wavelet transform (DWT). However, in DCT and DWT methods the control of the reconstructed image quality is inconvenient, involving considerable costs in computation, since they are fundamentally rate-parameterized methods rather than distortion-parameterized methods. Therefore it is very desirable to develop a transform-based distortion-parameterized compression method, which is expected to have high coding performance and also be able to conveniently and accurately control the final distortion according to the user-specified quality requirement. This thesis describes our work in developing a distortion-constraint three-dimensional image compression approach, using vector quantization techniques combined with image pyramid structures. We expect our method to have: 1. High coding performance in compressing three-dimensional microscopic image data, compared to the state-of-the-art three-dimensional image coders and other standardized two-dimensional image coders and video coders. 2. Distortion-control capability, which is a very desirable feature in medical image compression applications and is superior to rate-parameterized methods in achieving a user-specified quality requirement. The result is a three-dimensional image compression method, which has outstanding compression performance, measured objectively, for volumetric microscopic images. The distortion-constraint feature, by which users can expect to achieve a target image quality rather than the compressed file size, offers more flexible control of the reconstructed image quality than its rate-constraint counterparts in medical image applications. Additionally, it effectively reduces the artifacts produced by other approaches at low bit rates and also attenuates noise in the pre-compressed images. Furthermore, its advantages in progressive transmission and fast decoding make it suitable for bandwidth-limited telecommunications and web-based image browsing applications.
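The distortion-driven control can be sketched as follows (a much-simplified sketch of mine covering only the vector-quantization side, not the image pyramid; it assumes numpy, blocks already extracted from the volume, and at least as many blocks as codewords): the codebook grows, raising the rate, until the reconstruction error meets the user's target:

```python
import numpy as np

def vq_to_distortion(blocks, target_mse, max_bits=8, iters=15, seed=0):
    """Grow a VQ codebook until mean per-sample squared error meets target_mse."""
    blocks = np.asarray(blocks, dtype=float)    # shape (num_blocks, block_size)
    rng = np.random.default_rng(seed)
    for bits in range(1, max_bits + 1):
        k = 2 ** bits                           # codebook size at this rate
        codebook = blocks[rng.choice(len(blocks), size=k, replace=False)].copy()
        for _ in range(iters):                  # plain Lloyd (k-means) iterations
            d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            assign = d2.argmin(axis=1)
            for j in range(k):
                members = blocks[assign == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)
        d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        mse = d2[np.arange(len(blocks)), assign].mean() / blocks.shape[1]
        if mse <= target_mse:
            break                               # quality target met at this rate
    return codebook, assign, mse
```

This is the inverse of a rate-parameterized coder: the user fixes the distortion, and the bit rate becomes whatever the target demands.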
98

Application and network traffic correlation of grid applications

Paisley, Jonathan January 2006
Dynamic engineering of application-specific network traffic is becoming more important for applications that consume large amounts of network resources, in particular, bandwidth. Since traditional traffic engineering approaches are static, they cannot address this trend; hence there is a need for real-time traffic classification to enable dynamic traffic engineering. A packet flow monitor has been developed that operates at full Gigabit Ethernet line rate, reassembling all TCP flows in real-time. The monitor can be used to classify and analyse both plain-text and encrypted application traffic. This dissertation shows, under reasonable assumptions, 100% accuracy for the detection of bulk data traffic for applications when control traffic is clear text, and also 100% accuracy for encrypted GridFTP file transfers when data channels are authenticated. For non-authenticated GridFTP data channels, 100% accuracy is also achieved, provided the transferred files are tens of megabytes or more in size. The monitor is able to identify bulk flows resulting from clear-text control protocols before they begin. Bulk flows resulting from encrypted GridFTP control sessions are identified before the onset of bulk data (with data channel authentication) or within two seconds (without data channel authentication). Finally, the system is able to deliver an event to a local publish/subscribe server within 1 ms of identification by the monitor. Therefore, the event delivery introduces negligible delay in the ability of the network management system to react to the event.
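For the clear-text case, the mechanism for identifying a bulk flow before it begins can be illustrated with standard FTP semantics (an illustrative fragment, far simpler than the monitor described; GridFTP reuses the FTP control protocol): a "227 Entering Passive Mode" reply on the control channel reveals the endpoint of the coming data connection:

```python
import re

# RFC 959 passive-mode reply: 227 ... (h1,h2,h3,h4,p_hi,p_lo)
PASV = re.compile(r"227 .*?\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)")

def predict_data_channel(control_line):
    """Return (ip, port) of the upcoming bulk data flow, or None."""
    m = PASV.search(control_line)
    if not m:
        return None
    h1, h2, h3, h4, p_hi, p_lo = map(int, m.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p_hi * 256 + p_lo

assert predict_data_channel(
    "227 Entering Passive Mode (130,209,16,8,19,136)"
) == ("130.209.16.8", 5000)
```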
99

Positioning articulated figures

Etienne, Stéphane January 1998
Many animation systems rely on key-frames or poses to produce animated sequences of figures we interpret as articulated, e.g. the skeleton of a character. The production of poses is a difficult problem which can be solved by using techniques such as forward and inverse kinematics. However, animators often find these techniques difficult to work with. The work presented in this thesis proposes an innovative technique which approaches this problem from a totally different direction from conventional techniques, and is based on Interactive Genetic Algorithms (IGAs). IGAs are derived from Genetic Algorithms (GAs) and, like them, are based on the theory of evolution first described by Darwin in 1859. IGAs have been successfully used to produce abstract pictures, sculptures and abstract animation sequences. Conventional techniques assist the animator in producing poses. On the contrary, when working with IGAs, users assist the computer in its search for a good solution. Unfortunately, this concept is too weak to allow for an efficient exploration of the space of poses, as the user requires more control over the evolutionary process. So a new concept was introduced to let the user specify directly what is of interest, that is, a limb or a set of limbs. This information is efficiently used by the computer to greatly enhance the search. Users build a pose by selecting limbs which are of interest. That pose is provided to the computer as a seed to produce a new generation of poses. The degree of similarity is specified directly by the user. Typically, it is small at the beginning and increases as the process approaches convergence. The power of this new technique is demonstrated by two evaluations, one using a set of non-expert users and the other using myself as the sole but expert user. The first evaluation highlighted the high cognitive requirement of the new technique, whereas the second showed that, given sufficient training, the new technique becomes much faster than the two conventional techniques.
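One IGA generation as described above might look as follows (a bare-bones sketch; the joint representation, population size, and mutation model are my assumptions, not the thesis's): limbs the user marked as of interest are frozen, the rest are mutated around the seed pose, and the mutation width shrinks as the search converges:

```python
import random

def next_generation(seed_pose, locked, population=9, sigma=30.0):
    """seed_pose: {joint: angle in degrees}; locked: joints the user selected."""
    return [
        {joint: angle if joint in locked
                else angle + random.gauss(0.0, sigma)
         for joint, angle in seed_pose.items()}
        for _ in range(population)
    ]

# The user picks the best child as the next seed; lowering sigma over the
# generations enforces the increasing degree of similarity mentioned above.
children = next_generation({"elbow": 90.0, "shoulder": 45.0}, locked={"elbow"})
```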
100

SUIT: a methodology and framework for Selection of User Interface development Tools

Lumsden, Joanna Marie January 2001
This thesis describes the findings of an industrial survey that identified the context of use for software development projects. This context of use is parameterised and combined with a categorisation of User Interface Development Tool (UIDT) functionality to produce an extensible and tailorable reference model or framework for UIDT evaluation and selection. An accompanying methodology - which together with the framework is known as SUIT (Selection of User Interface Development Tools) - guides the use of the framework such that project-specific context of use can be modelled and thereafter systematically considered during UIDT selection. This thesis proposes that such focussed and documented consideration of context of use during UIDT selection increases the quality of a selection decision and therefore facilitates reuse of UIDT evaluation and selection results. An evaluative study is described which demonstrates the effectiveness and viability of the SUIT framework and methodology as a paper-based UIDT evaluation facility. The same study also identifies the need for a computer-based tool to support the management of UIDT evaluation data and to assist its comparison and analysis. Experiences with this study, the results of the industrial study, and the structure of the framework and methodology provided input into a set of requirements for a computer-based visualisation environment that supports the comparison and analysis of UIDT data. The SUIT data visualisation environment and its qualitative evaluation are described. The evaluation results identify the usefulness and practicability of the SUIT approach when supported by the visualisation environment. They also suggest a number of refinements and extensions to the tool. The results provide an initial corpus of knowledge regarding practical strategies used by evaluators to compare and analyse UIDT evaluation data. These strategies are modelled using a novel purpose-built graphical notation that focuses on sequencing, flexibility, and patterns of activity.
