281

Disk quality of service in a general-purpose operating system

Childs, S. O. January 2001
Users of general-purpose operating systems run a range of multimedia, productivity, and system housekeeping applications. Many of these applications are disk-bound, or have significant disk usage requirements. CPU scheduling is insufficient to ensure reliable performance for such applications, as it cannot control contention for the disk. User-controllable disk scheduling is necessary; the disk scheduler should respect Quality of Service (QoS) specifications defined by users when scheduling disk requests. Disks have a number of distinctive features that influence scheduler design: context-switches that involve seek operations are expensive, disk operations are non-preemptible, and the cost of data transfer varies according to the amount of seek overhead. Any new scheduler must recognise these factors if it is to provide acceptable performance. We present a new disk scheduler for Linux-SRT, a version of Linux with support for CPU QoS. This disk scheduler provides multiple scheduling classes: periodic allocation, static priority, best-effort, and idle. The scheduler makes disk QoS available as a low-level system service, independent of the particular file system used. Applications need not be modified to benefit from disk QoS. The structure of the Linux disk subsystem causes requests from different clients to be executed in an interleaved fashion. This results in many expensive seek operations. We implement laxity, a technique for batching together multiple requests from a single client. This feature greatly improves the performance of applications performing synchronous I/O, and provides better isolation between applications. We perform experiments to test the effectiveness of our research system in typical scenarios. The results demonstrate that the system can be used to protect time-critical applications from the effects of contention, to regulate low-importance disk-bound tasks, and to limit the disk utilisation of particular processes (allowing resource partitioning). We use the accounting features of our disk scheduler to measure the disk resource usage of typical desktop applications. Based on these measurements, we classify applications and suggest suitable scheduling policies. We also present techniques for determining appropriate parameters for these policies. Scheduling features are of little use unless users can employ them effectively. We extend Linux-SRT's QoS architecture to accommodate control of disk scheduling; the resulting system provides a coherent interface for managing QoS across multiple devices. The disk scheduler exports status information to user space; we construct tools for monitoring and controlling processes' disk utilisation.
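
Editor's illustration: the abstract names four scheduling classes and a per-client batching technique (laxity). The Python sketch below shows, in toy form, how such a class-based, batching request queue could be organised; it is not Linux-SRT code, and the class ordering, batch size, and client names are invented.

```python
from collections import defaultdict, deque

# Toy illustration of a class-based disk scheduler with per-client batching
# ("laxity").  The class names follow the abstract; the data structures,
# batch size and client names are invented simplifications.

PERIODIC, PRIORITY, BEST_EFFORT, IDLE = 0, 1, 2, 3   # dispatch order of classes

class DiskScheduler:
    def __init__(self, batch_size=8):
        self.queues = defaultdict(deque)   # (sched_class, client_id) -> pending blocks
        self.batch_size = batch_size       # max requests served per client turn

    def submit(self, client_id, sched_class, block):
        self.queues[(sched_class, client_id)].append(block)

    def next_batch(self):
        # Serve the most urgent non-empty class; within it, drain up to
        # batch_size requests from one client before switching, so one
        # client's requests are not interleaved with other clients' seeks.
        for key in sorted(self.queues, key=lambda k: k[0]):
            q = self.queues[key]
            if q:
                n = min(self.batch_size, len(q))
                return key[1], [q.popleft() for _ in range(n)]
        return None, []

sched = DiskScheduler()
for blk in (10, 11, 12):
    sched.submit("mpeg_player", PERIODIC, blk)
sched.submit("updatedb", IDLE, 999)
print(sched.next_batch())   # ('mpeg_player', [10, 11, 12]): served before the idle work
```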
282

Software visualization in Prolog

Grant, C. A. McK. January 2000
Software visualization (SV) uses computer graphics to communicate the structure and behaviour of complex software and algorithms. One of the important issues in this field is how to specify SV, because existing systems are very cumbersome to specify and implement, which limits their effectiveness and hinders SV from being integrated into professional software development tools. In this dissertation the visualization process is decomposed into a series of formal mappings, which provides a formal foundation, and allows separate aspects of visualization to be specified independently. The first mapping specifies the information content of each view. The second mapping specifies a graphical representation of the information, and a third mapping specifies the graphical components that make up the graphical representation. By combining different mappings, completely different views can be generated. The approach has been implemented in Prolog to provide a very high level specification language for information visualization, and a knowledge engineering environment that allows data queries to tailor the information in a view. The output is generated by a graphical constraint solver that assembles the graphical components into a scene. This system provides a framework for SV called Vmax. Source code and run-time data are analyzed by Prolog to provide access to information about the program structure and run-time data for a wide range of highly interconnected browsable views. Different views and means of visualization can be selected from menus. An automatic legend describes each view, and can be interactively modified to customize how data is presented. A text window for editing source code is synchronized with the graphical view. Vmax is a complete Java development environment and end user SV system. Vmax compares favourably to existing SV systems in many taxonometric criteria, including automation, scope, information content, graphical output form, specification, tailorability, navigation, granularity and elision control. The performance and scalability of the new approach are very reasonable.
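
Editor's illustration: the decomposition into three independent mappings can be pictured with a small sketch. The Python fragment below (the thesis itself works in Prolog; the toy program model and glyph names are invented) composes an information mapping, a representation mapping, and a component mapping into a scene.

```python
# A minimal sketch of "visualization as composed mappings": three independent
# mapping stages written as plain functions and composed.  The program model,
# view contents and glyph names are invented for illustration only.

program = {
    "modules": {"parser": ["lexer"], "lexer": [], "ui": ["parser"]},
}

def information_mapping(prog):
    # Stage 1: choose the information content of the view (call dependencies).
    return [(caller, callee) for caller, deps in prog["modules"].items()
            for callee in deps]

def representation_mapping(edges):
    # Stage 2: choose a graphical representation (here: a node-link diagram).
    nodes = sorted({n for e in edges for n in e})
    return {"nodes": nodes, "links": edges}

def component_mapping(diagram):
    # Stage 3: choose concrete graphical components for the representation.
    return ([("circle", n) for n in diagram["nodes"]] +
            [("arrow", a, b) for a, b in diagram["links"]])

scene = component_mapping(representation_mapping(information_mapping(program)))
for component in scene:
    print(component)
```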
283

Modelling and interpretation of architecture from several images

Dick, A. January 2002
This thesis focuses on the automatic acquisition of 3-dimensional (3D) architectural models from short image sequences. An architectural model is defined as a set of planes corresponding to walls which contain a variety of parameterised primitives such as doors and windows. As well as parameters defining its shape and texture, each primitive has a label that identifies it as a particular architectural component. Assigning a label to each primitive means that model estimation involves interpreting the scene as well as recovering its shape and texture. This enables reasoning about the scene, which makes estimation of the visible parts of the model more reliable, and means that structure and texture can be inferred in areas of the model which are unseen in any image. Having semantic knowledge of the scene also enables model enhancement, as all windows in a scene, for example, can be given a standard window texture, or made shiny and semi-transparent for increased realism. The representation of a model as a set of simple, compact primitives allows it to be estimated accurately from a small number of images, and manipulated and rendered in a straightforward manner. In this thesis a Bayesian probabilistic framework is developed in which model acquisition is formulated as a search for maximum a posteriori (MAP) model parameters. A prior distribution is defined for the parameters of the model, and its validity is tested by simulating draws from it and verifying that it does indeed generate plausible buildings under varying conditions. Two likelihood functions are also defined and tested. A practical algorithm is then developed to find MAP model parameters based on these likelihood and prior distributions. The algorithm is tested on a variety of architectural image sequences.
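
Editor's illustration: to make the MAP formulation concrete, here is a minimal one-parameter example in Python, where a log-prior and a log-likelihood are combined and maximised over a grid of candidate values. The specific prior, likelihood, and measurement are invented; the thesis estimates full architectural models from image sequences.

```python
# Toy MAP estimation of a single "window width" parameter by maximising
# log-prior + log-likelihood.  All numbers below are invented assumptions.

def log_prior(width):
    # Prior belief: window widths cluster around 1.2 m (Gaussian, sd 0.3 m).
    return -0.5 * ((width - 1.2) / 0.3) ** 2

def log_likelihood(width, measured):
    # Likelihood of an (assumed) image-derived width measurement (sd 0.1 m).
    return -0.5 * ((measured - width) / 0.1) ** 2

measured_width = 1.45
candidates = [0.8 + 0.01 * i for i in range(100)]      # 0.80 .. 1.79 m
map_width = max(candidates,
                key=lambda w: log_prior(w) + log_likelihood(w, measured_width))
print(f"MAP window width: {map_width:.2f} m")          # pulled between prior and data
```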
284

Implementations of information theory in molecular dynamics, quantum measurement and genetic data analysis

Ahnert, S. E. January 2005
The research presented in this thesis is divided into three topics, all of which are related to different areas of Information Theory. Firstly I have worked with my supervisor Professor Mike Payne on proposals for the implementation of two different types of quantum measurements of single photons, namely generalized measurements and weak measurements. This research belongs to the field of Quantum Information Theory. My second area of research has been Molecular Dynamics. With Gabor Csanyi I have worked on the development of a new way of constructing empirical potentials, which are needed for the large-scale simulation of atomic systems. Until now the construction of such potentials has been a laborious task and highly specific to the particular species of atoms involved. Our method, which employs a fitting technique known as Gaussian Processes, aims to provide a comparatively simple and very general way of constructing empirical potentials, using data from quantum-mechanical methods, such as Tight-Binding schemes. In this thesis I present results which demonstrate the feasibility of fitting a function in the configuration space of atomic neighbourhoods. My third research topic has been the analysis of biological data series using algorithmic information theory, together with Thomas Fink of the Institut Curie in Paris. By calculating a bound on the Algorithmic Information Content (AIC) of a given data curve we are able to identify biologically significant curves, thus providing a tool for biological data analysis.
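
Editor's illustration: the third topic, bounding the Algorithmic Information Content of a data curve, can be illustrated by using any lossless compressor as an upper bound on complexity. The sketch below is an invented toy example (the thesis applies the idea to biological data series), showing that a regular curve yields a much smaller bound than noise.

```python
import zlib, random

# Rough sketch: the compressed length of a data series is an upper bound on
# its Kolmogorov complexity, so it can serve as a crude AIC bound.
# The example curves below are invented.

def aic_bound(series):
    data = ",".join(f"{x:.3f}" for x in series).encode()
    return len(zlib.compress(data, 9))

structured = [i % 10 for i in range(300)]            # highly regular curve
random.seed(0)
noisy = [random.random() for _ in range(300)]        # incompressible noise

print("structured:", aic_bound(structured), "bytes")
print("noisy:     ", aic_bound(noisy), "bytes")
# The structured curve compresses far better, i.e. has a much lower AIC bound;
# curves with unexpectedly low bounds would be flagged as significant.
```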
285

On using fuzzy data in security mechanisms

Hao, F. January 2007
Biometric measurements create a strong binding between a person and his unique features, which may conflict with personal privacy. In this dissertation, we study the problems that arise when such fuzzy data are used in security mechanisms, and propose solutions. First, we propose a scheme to derive error-free keys from fuzzy data, such as iris codes. There are two types of errors within iris codes: background-noise errors and burst errors. Accordingly, we devise a two-layer error correction technique, which first corrects the background-noise errors using a Hadamard code, then the burst errors using a Reed-Solomon code. Based on a database of 700 iris images, we demonstrate that an error-free key of 140 bits can be reliably reproduced from genuine iris codes with a 99.5% success rate. In addition, despite the irrevocability of biometric data, the keys produced using our technique can be easily revoked or updated. Second, we address the search problem for a large fuzzy database that stores iris codes or data with a similar structure. Currently, the algorithm used in all public deployments of iris recognition is to search exhaustively through a database of iris codes, looking for a match that is close enough. We propose a much more efficient search algorithm: Beacon Guided Search (BGS). BGS works by indexing iris codes, adopting a “multiple colliding segments principle” and an early termination strategy to reduce the search range dramatically. We evaluate this algorithm using 632,500 real-world iris codes, showing a substantial speed-up over exhaustive search with negligible loss of precision. In addition, we demonstrate that our empirical findings match theoretical analysis. Finally, we study the veto problem in a biometrically-enabled threshold control scheme. In such a scheme, the access, say to a nuclear device, is controlled by several delegates. Each delegate’s biometrics act as one key, and the access is only granted when all keys are correctly supplied. We propose an Anonymous Veto Network (AV-net), which assigns each delegate the power to veto the biometric enrolments anonymously. Compared with past work, the AV-net construction provides the strongest protection of each delegate’s privacy against collusion.
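
Editor's illustration: the indexing idea behind Beacon Guided Search can be sketched as follows — every code is indexed by several short segments, and a query is verified only against records that collide with it on enough segments. The code length, segment size, thresholds, and identifiers below are invented; this is not the thesis's implementation.

```python
from collections import defaultdict

# Sketch of segment-based indexing: compare a query only against records
# that share enough short segments with it.  All parameters are invented.

CODE_BITS, SEG_BITS, MIN_COLLISIONS, MAX_HAMMING = 64, 8, 2, 10

def segments(code):
    return [(i, (code >> (i * SEG_BITS)) & ((1 << SEG_BITS) - 1))
            for i in range(CODE_BITS // SEG_BITS)]

class BeaconIndex:
    def __init__(self):
        self.table = defaultdict(list)      # (segment position, value) -> ids
        self.codes = {}

    def insert(self, uid, code):
        self.codes[uid] = code
        for key in segments(code):
            self.table[key].append(uid)

    def search(self, query):
        hits = defaultdict(int)
        for key in segments(query):
            for uid in self.table[key]:
                hits[uid] += 1
        # Verify (by Hamming distance) only candidates with enough collisions.
        return [uid for uid, n in hits.items() if n >= MIN_COLLISIONS
                and bin(self.codes[uid] ^ query).count("1") <= MAX_HAMMING]

idx = BeaconIndex()
idx.insert("alice", 0x1234_5678_9ABC_DEF0)
idx.insert("bob",   0x0FED_CBA9_8765_4321)
noisy_alice = 0x1234_5678_9ABC_DEF0 ^ 0b101      # a few flipped bits
print(idx.search(noisy_alice))                    # ['alice']
```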
286

Code size optimization for embedded processors

Johnson, N. E. January 2004
This thesis studies the problem of reducing code size produced by an optimizing compiler. We develop the Value State Dependence Graph (VSDG) as a powerful intermediate form. Nodes represent computation, and edges represent value (data) and state (control) dependencies between nodes. The edges specify a partial ordering of the nodes—sufficient ordering to maintain the I/O semantics of the source program, while allowing optimizers greater freedom to move nodes within the program to achieve better (smaller) code. Optimizations, both classical and new, transform the graph through graph rewriting rules prior to code generation. Additional (semantically inessential) state edges are added to transform the VSDG into a Control Flow Graph, from which target code is generated. We show how procedural abstraction can be advantageously applied to the VSDG. Graph patterns are extracted from a program's VSDG. We then select repeated patterns giving the greatest size reduction, generate new functions from these patterns, and replace all occurrences of the patterns in the original VSDG with calls to these abstracted functions. Several embedded processors have load- and store-multiple instructions, representing several loads (or stores) as one instruction. We present a method, benefiting from the VSDG form, for using these instructions to reduce code size by provisionally combining loads and stores before code generation. The final contribution of this thesis is a combined register allocation and code motion (RACM) algorithm. We show that our RACM algorithm formulates these two previously antagonistic phases as one combined pass over the VSDG, transforming the graph (moving or cloning nodes, or spilling edges) to fit within the physical resources of the target processor. We have implemented our ideas within a prototype C compiler and suite of VSDG optimizers, generating code for the Thumb 32-bit processor. Our results show improvements for each optimization and that we can achieve code sizes comparable to, and in some cases better than, those produced by commercial compilers with significant investments in optimization technology.
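
Editor's illustration: as a rough picture of the VSDG, the sketch below models nodes with separate value and state dependency edges and performs a simple reachability-based dead-node sweep. It is an invented toy, not the thesis's compiler, but it shows how the partial ordering leaves optimisers free to drop unreferenced computation.

```python
# Toy rendering of a Value State Dependence Graph: nodes are operations and
# edges carry either value (data) or state (I/O ordering) dependencies.
# The node set and the dead-node sweep are invented illustrations.

class Node:
    def __init__(self, name, op):
        self.name, self.op = name, op
        self.value_deps = []   # nodes whose values this node consumes
        self.state_deps = []   # nodes that must happen before, for I/O semantics

def live_nodes(roots):
    # Only nodes reachable from the graph's roots (e.g. return values and
    # final state) matter; everything else is dead and can be dropped.
    live, stack = set(), list(roots)
    while stack:
        n = stack.pop()
        if n not in live:
            live.add(n)
            stack.extend(n.value_deps + n.state_deps)
    return live

a = Node("a", "load x")
b = Node("b", "add a, 1")
c = Node("c", "mul a, 2")          # never used: dead
ret = Node("ret", "return b")
b.value_deps.append(a)
c.value_deps.append(a)
ret.value_deps.append(b)

print(sorted(n.name for n in live_nodes([ret])))   # ['a', 'b', 'ret']: c eliminated
```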
287

End-user programming in multiple languages

Hague, R. G. January 2005
Advances in user interface technology have removed the need for the majority of users to program, but they do not allow the automation of repetitive or indirect tasks. End-user programming facilities solve this problem without requiring users to learn and use a conventional programming language, but must be tailored to specific types of end user. In situations where the user population is particularly diverse, this presents a problem. In addition, studies have shown that the performance of tasks based on the manipulation and interpretation of data depends on the way in which the data is represented. Different representations may facilitate different tasks, and there is not necessarily a single, optimal representation that is best for all tasks. In many cases, the choice of representation is also constrained by other factors, such as display size. It would be advantageous for an end-user programming system to provide multiple, interchangeable representations of programs. This dissertation describes an architecture for providing end-user programming facilities in the networked home, a context with a diverse user population, and a wide variety of input and output devices. The Media Cubes language, a novel end-user programming language, is introduced as the context that led to the development of the architecture. A framework for translation between languages via a common intermediate form is then described, with particular attention paid to the requirements of mappings between languages and the intermediate form. The implementation of Lingua Franca, a system realizing this framework in the given context, is described. Finally, the system is evaluated by considering several end-user programming languages implemented within this system. It is concluded that translation between programming languages, via a common intermediate form, is viable for systems within a limited domain; the wider applicability of the technique is also discussed.
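
Editor's illustration: the framework of translation via a common intermediate form can be sketched with two invented toy front-ends and back-ends sharing one intermediate representation. Nothing below comes from Lingua Franca or the Media Cubes language; it only shows the shape of the mapping requirements discussed above.

```python
# Minimal sketch of "translation via a common intermediate form": two invented
# toy languages are parsed into a shared IR, and either can be regenerated.

# Common intermediate form: a list of (action, device) pairs.
def parse_infix(text):                    # e.g. "turn on the lamp"
    words = text.split()
    return [(words[1], words[-1])]        # ("on", "lamp")

def parse_cubes(events):                  # e.g. tangible-UI style event tuples
    return [(action, device) for action, device in events]

def render_infix(ir):
    return "; ".join(f"turn {action} the {device}" for action, device in ir)

def render_cubes(ir):
    return list(ir)

ir = parse_infix("turn on the lamp")
print(render_cubes(ir))                                    # [('on', 'lamp')]
print(render_infix(parse_cubes([("off", "heating")])))     # turn off the heating
```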
288

A theory of dependent record types with structural subtyping

Feng, Yangyue January 2010
No description available.
289

Multi-feature Space Optimisation and Semantic Inference for Visual Information Retrieval

Zhang, Qianni January 2007
No description available.
290

Text Luminance Modulation for Hardcopy Watermarking

Borges, Paulo Vinicius Koerich January 2008
No description available.
