181. The improvement of program behaviour in paged computer systems. Pavelin, C. J. January 1970
No description available.
182. Achieving parallel performance in scientific computations. Clarke, Lyndon J. January 1990
The exploitation of high performance computing will be a major factor in the future advancement of science as computational methods are increasingly becoming a third discipline alongside theory and experiment. Despite advances which are being made in VLSI technology, enabling the construction of faster uniprocessor machines, it is now widely recognised that the future of high performance computing will be dominated by parallel architectures. It is of prime importance that the scientific community is able to program such machines effectively. In the case of distributed memory MIMD architectures, support is required for arbitrary communications between processing entities located at different processors and operating on data stored in different memory units. If a message routing facility is not available in hardware, it is necessary to provide a software implementation. Where this facility is available in hardware, a layer of software is still required which presents the programmer with an interface to it. This thesis discusses a number of issues which arise in the implementation of message routing systems and the application interface. We have constructed such a system for use on arrays of INMOS transputers, called TINY, and the methods used in the implementation of this software are described. The system shares processor time with the application, and we demonstrate that the processor bandwidth required by TINY is very small. We have selected a concrete, but simple, application which utilises the services provided by this system. The implementation of this application was considerably simplified by the use of TINY, and we show that the overheads induced by this software layer are insignificant. The application selected performs rendering of space-filling molecular models, reflecting the growing importance of visualisation in science.
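To make the software message-routing layer described above concrete, the following is a minimal sketch of store-and-forward routing on a two-dimensional processor grid, assuming simple dimension-order (X-then-Y) routing; the function names and the routing policy are illustrative assumptions and are not taken from TINY.

```python
# Illustrative sketch of software store-and-forward routing on a 2-D
# processor grid (dimension-order / X-then-Y routing). This is NOT the
# TINY implementation; names and the routing policy are assumptions.

def route_next_hop(current, destination):
    """Return the neighbouring node a message should be forwarded to."""
    cx, cy = current
    dx, dy = destination
    if cx != dx:                       # correct the X coordinate first
        return (cx + (1 if dx > cx else -1), cy)
    if cy != dy:                       # then correct the Y coordinate
        return (cx, cy + (1 if dy > cy else -1))
    return current                     # message has arrived

def forward(message, current, destination, send):
    """Deliver locally or pass the message one hop closer to its target."""
    if current == destination:
        return message                 # hand the payload to the application
    send(route_next_hop(current, destination), message)
    return None

# Example: the hop sequence from node (0, 0) to node (2, 1)
hops, node = [], (0, 0)
while node != (2, 1):
    node = route_next_hop(node, (2, 1))
    hops.append(node)
print(hops)                            # [(1, 0), (2, 0), (2, 1)]
```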
183. Design and simulation of an MIMD shared memory multiprocessor with interleaved instruction streams. Stiemerling, Thomas R. January 1991
The design of the Epp1 MIMD shared memory multiprocessor is described, and its performance evaluated by simulation. The Epp1 has a dancehall architecture with p instruction-interleaved RISC processors connected to p shared memories by a packet-switched, combining, indirect binary n-cube multistage network composed of (p/2)log2(p) 2 x 2 crossbar switches. There is no processor cache or local memory, and no paged virtual memory. Memory addresses are low-order interleaved across the memories. The fetch-and-add instruction is used for inter-process synchronisation, and the switches support the combining of load and fetch-and-add memory requests. Simulation results of a single Epp1 processor with varying interleaving level and instruction mix are presented, and of an isolated network with varying queue size and network load. A distributed, time-driven, instruction-level simulator of the Epp1 design has been implemented in Occam, and runs on a transputer-based, distributed memory multiprocessor. Three parallel benchmark programs (matrix multiply, bitonic merge sort and Moore shortest path) have been written in the processor assembly language, and are used as workloads in the simulations. The programs use the fetch-and-add instruction to implement process control primitives. A number of simulation experiments have been carried out using the Epp1 simulator which investigate the effect on performance of increasing the system size (speed-up), varying the switch queue and wait-buffer size, increasing the combining level, increasing the interleaving level, and varying the network and memory speed relative to the processor. These experiments are repeated for each benchmark program, and detailed execution statistics are presented for each simulation. A dynamic execution profile for each benchmark program is also presented.
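To illustrate the fetch-and-add synchronisation mentioned above, the following is a minimal Python sketch of a self-scheduling parallel loop built on fetch-and-add semantics; the lock-based emulation and the names used are assumptions for illustration, not the Epp1 hardware primitive or the benchmark assembly code.

```python
import threading

class FetchAndAdd:
    """Software emulation of an atomic fetch-and-add memory word."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()   # stands in for the hardware atomic

    def fetch_and_add(self, increment=1):
        with self._lock:
            old = self._value
            self._value += increment
            return old                  # the pre-increment value, as in hardware

# Self-scheduling loop: each worker claims the next unprocessed iteration.
counter = FetchAndAdd()
N = 100
results = [0] * N

def worker():
    while True:
        i = counter.fetch_and_add(1)
        if i >= N:
            break
        results[i] = i * i              # the "loop body"

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert results == [i * i for i in range(N)]
```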
184. Leveraging weak supervision for video understanding. Garcia Cifuentes, C. January 2013
This research deals with the challenging task of video classification, with a particular focus on action recognition, which is essential for a comprehensive understanding of videos. In the typical scenario, there is a list of semantic categories to be modeled, and example clips are given together with their associated category label, indicating which action of interest happens in that clip. No information is given about where or when the action happens, even less about why the annotator considered the clip to belong to a sometimes ambiguous category. Within the framework of the bag-of-words representation of videos, we explore how to leverage such weak labels from three points of view: (i) the use of coherent supervision from the earliest stages of the pipeline; (ii) the combination of features heterogeneous in nature and scale; and (iii) mid-level representations of videos based on regions, so as to increase the ability to discriminate relevant locations in the video. For the quantization of local features, we propose and evaluate a novel form of supervision to train random forests which explicitly aims at the discriminative power of the resulting bags of words. We show that our forests are better than traditional ones at incorporating contextual elements during quantization, and draw attention to the risk of naive combination of features. We also show that mid-level representations carry complementary information that can improve classification. Moreover, we propose a novel application of video classification to tracking. We show that weak clip labels can be used to successfully classify videos into categories of dynamic models. In this way, we improve tracking by performing classification-based dynamic model selection.
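As a concrete illustration of the bag-of-words representation underlying this work, the following is a minimal sketch that quantizes a clip's local descriptors against a plain k-means codebook and builds a normalised histogram; the thesis instead trains supervised random forests for quantization, so this is background machinery only, and all names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_codebook(descriptors, k=64, iters=10):
    """Plain k-means codebook (the thesis uses supervised random forests instead)."""
    centres = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest centre, then recompute centres
        d = np.linalg.norm(descriptors[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = descriptors[labels == j].mean(axis=0)
    return centres

def bag_of_words(descriptors, centres):
    """Quantize a clip's local descriptors and return a normalised histogram."""
    d = np.linalg.norm(descriptors[:, None, :] - centres[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centres)).astype(float)
    return hist / hist.sum()

# Toy example: 500 local spatio-temporal descriptors of dimension 32
descs = rng.normal(size=(500, 32))
codebook = build_codebook(descs, k=16)
print(bag_of_words(descs, codebook).shape)   # (16,)
```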
185. Design and analysis for TCP-friendly window-based congestion control. Choi, S.-H. January 2006
The current congestion control mechanisms for the Internet date back to the early 1980s and were primarily designed to stop congestion collapse with the typical traffic of that era. In recent years the amount of traffic generated by real-time multimedia applications has substantially increased, and the existing congestion control is often not well suited to those types of applications. For this reason, the Internet can fall into an uncontrolled state in which the overall throughput oscillates excessively because of a single flow, which in turn can lead to poor application performance. Apart from these network-level concerns, such applications care greatly about end-to-end delay and smooth throughput, which the conventional congestion control schemes do not provide. In this research, we investigate improving the state of congestion control for real-time and interactive multimedia applications. The focus of this work is to provide fairness among applications using different types of congestion control mechanisms in order to achieve better link utilisation, and to achieve smoother and more predictable throughput with suitable end-to-end packet delay.
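As background to the notion of TCP-friendliness used above, the following sketch shows the classic AIMD window update and the well-known simplified "square-root" TCP throughput model, which a TCP-friendly flow is expected not to exceed for a given RTT and loss rate; this is standard material, not the scheme proposed in the thesis, and the parameter values are illustrative.

```python
import math

def aimd_update(cwnd, loss_event, alpha=1.0, beta=0.5):
    """Classic TCP-style AIMD: additive increase per RTT, multiplicative decrease on loss."""
    return cwnd * beta if loss_event else cwnd + alpha

def tcp_friendly_rate(mss_bytes, rtt_s, loss_rate):
    """Simplified 'square-root' TCP throughput model:
    T ~ (MSS / RTT) * sqrt(3 / (2p)).  A TCP-friendly flow should not
    exceed this rate for the same RTT and loss rate p."""
    return (mss_bytes / rtt_s) * math.sqrt(3.0 / (2.0 * loss_rate))

# Example: 1460-byte segments, 100 ms RTT, 1% packet loss
print(f"{tcp_friendly_rate(1460, 0.1, 0.01) / 1e3:.0f} kB/s")  # ~179 kB/s
```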
186. Simultaneous localisation and mapping with prior information. Parsley, M. P. January 2011
This thesis is concerned with Simultaneous Localisation and Mapping (SLAM), a technique by which a platform can estimate its trajectory with greater accuracy than odometry alone, especially when the trajectory incorporates loops. We discuss some of the shortcomings of the "classical" SLAM approach (in particular EKF-SLAM), which assumes that no information is known about the environment a priori. We argue that in general this assumption is needlessly stringent; for most environments, such as cities, some prior information is known. We introduce an initial Bayesian probabilistic framework which considers the world as a hierarchy of structures, and maps (such as those produced by SLAM systems) as consisting of features derived from them. Common underlying structure between features in maps allows one to express and thus exploit geometric relations between them to improve their estimates. We apply the framework to EKF-SLAM for the case of a vehicle equipped with a range-bearing sensor operating in an urban environment, building up a metric map of point features and using a prior map consisting of line segments representing building footprints. We develop a novel method called the Dual Representation, which allows us to use information from the prior map to not only improve the SLAM estimate, but also reduce the severity of errors associated with the EKF. Using the Dual Representation, we investigate the effect of varying the accuracy of the prior map for the case where the underlying structures, and thus the relations between the SLAM map and prior map, are known. We then generalise to the more realistic case, where there is "clutter": features in the environment that do not relate to the prior map. This involves forming a hypothesis for whether a pair of features in the SLAM state and prior map were derived from the same structure, and evaluating it based on a geometric likelihood model. Initially we try an incremental Multiple Hypothesis SLAM (MHSLAM) approach to resolve hypotheses, developing a novel method called the Common State Filter (CSF) to reduce the exponential growth in computational complexity inherent in this approach. This allows us to use information from the prior map immediately, thus reducing linearisation and EKF errors. However, we find that MHSLAM is still too inefficient, even with the CSF, so we use a strategy that delays applying relations until we can infer whether they apply; we defer applying information from structure hypotheses until their probability of holding exceeds a threshold. Using this method we investigate the effect of varying degrees of "clutter" on the performance of SLAM.
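For readers unfamiliar with the EKF machinery this work builds on, the following is a minimal sketch of the standard EKF update for a single range-bearing observation of a point feature; it illustrates only the baseline EKF-SLAM update, not the Dual Representation or prior-map fusion, and the state layout and variable names are assumptions.

```python
import numpy as np

def ekf_update(x, P, z, feature_idx, R):
    """Standard EKF update for a range-bearing observation of one point feature.
    Assumed state layout: x = [xv, yv, theta, f1x, f1y, ...]; z = [range, bearing]."""
    xv, yv, th = x[0], x[1], x[2]
    j = 3 + 2 * feature_idx
    dx, dy = x[j] - xv, x[j + 1] - yv
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - th])       # predicted observation

    H = np.zeros((2, len(x)))                            # observation Jacobian
    H[:, 0:3] = np.array([[-dx / r, -dy / r,  0.0],
                          [ dy / q, -dx / q, -1.0]])
    H[:, j:j + 2] = np.array([[ dx / r,  dy / r],
                              [-dy / q,  dx / q]])

    nu = z - z_hat                                       # innovation
    nu[1] = (nu[1] + np.pi) % (2 * np.pi) - np.pi        # wrap bearing to [-pi, pi)
    S = H @ P @ H.T + R                                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                       # Kalman gain
    return x + K @ nu, (np.eye(len(x)) - K @ H) @ P

# Toy state: vehicle at the origin facing east, one feature near (2, 1)
x = np.array([0.0, 0.0, 0.0, 2.0, 1.0])
P = np.diag([0.1, 0.1, 0.05, 0.5, 0.5])
R = np.diag([0.1**2, np.deg2rad(2)**2])
z = np.array([np.hypot(2, 1) + 0.05, np.arctan2(1, 2) - 0.01])
x_new, P_new = ekf_update(x, P, z, feature_idx=0, R=R)
```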
187. Quality of experience in digital mobile multimedia services. Knoche, H. O. January 2011
People like to consume multimedia content on mobile devices. Mobile networks can deliver mobile TV services, but they require large infrastructural investments and their operators need to make trade-offs to design worthwhile experiences. The approximation of how users experience networked services has shifted from the inadequate packet-level Quality of Service (QoS) to the user-perceived Quality of Experience (QoE), which includes content, user context and user expectations. However, QoE is lacking concrete operationalizations for the visual experience of content on small, sub-TV-resolution screens displaying transcoded TV content at low bitrates. The contribution of my thesis includes both substantive and methodological results on which factors contribute to the QoE in mobile multimedia services and how. I utilised a mix of methods in both lab and field settings to assess the visual experience of multimedia content on mobile devices. This included qualitative elicitation techniques such as 14 focus groups and 75 hours of debrief interviews in six experimental studies. 343 participants watched 140 hours of realistic TV content and provided feedback through quantitative measures such as acceptability, preferences and eye-tracking. My substantive findings on the effects of size, resolution, text quality and shot types can improve multimedia models. They show that people want to watch mobile TV at a relative size (at least 4 cm of screen height) similar to living room TV setups. In order to achieve these sizes at a 35 cm viewing distance, users require at least QCIF resolution and are willing to scale it to a much lower angular resolution (12 ppd) than what video quality research has found to be the best visual quality (35 ppd). My methodological findings suggest that future multimedia QoE research should use a mixed-methods approach, including qualitative feedback and viewing ratios akin to living room setups, to meet QoE's ambitious scope.
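The angular resolutions quoted above (12 ppd versus 35 ppd) follow from screen height, pixel count and viewing distance; the following worked example applies the standard pixels-per-degree definition to the QCIF, 4 cm screen height, 35 cm viewing distance case mentioned in the abstract. The exact formula used in the thesis is not given here, so this is an assumed but conventional definition.

```python
import math

def pixels_per_degree(pixels_vertical, screen_height_cm, viewing_distance_cm):
    """Angular resolution: vertical pixel count divided by the vertical
    visual angle subtended by the screen at the given viewing distance."""
    visual_angle_deg = 2 * math.degrees(
        math.atan(screen_height_cm / (2 * viewing_distance_cm)))
    return pixels_vertical / visual_angle_deg

# QCIF video (176 x 144 pixels) shown 4 cm tall and viewed from 35 cm
print(f"{pixels_per_degree(144, 4.0, 35.0):.1f} ppd")   # ~22.0 ppd
```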
188. Search interfaces for known-item and exploratory search tasks. Diriye, A. M. January 2012
People’s online search tasks vary considerably from simple known-item search tasks to complex and exploratory ones. Designing user search interfaces that effectively support this range of search tasks is a difficult and challenging problem, made so by the variety of search goals, information needs and search strategies. Over the last few years, this topic has gained more attention from several research communities, but designing more effective search interfaces requires us to understand how they are used during search tasks, and the role they play in people’s information seeking. The aim of the research reported here was to understand how search interfaces support known-item and exploratory search tasks, and how we can leverage this to design better Information Retrieval systems that improve user experience and performance. We begin this thesis by reporting on an initial exploratory user study that investigates the relationship between richer search interfaces and search tasks. We find, through qualitative data analysis, that richer search interfaces that provide more sophisticated search strategies better support exploratory search tasks than simple search interfaces, which were shown to be more effective for known-item search tasks. This analysis revealed several ways search interface features affect information seeking (impede, distract, facilitate, augment, etc.). A follow-up study further developed and validated these findings by analyzing their impact in terms of task completion time, interactive precision and user preference. To expand our knowledge of search tasks, a definition synthesizing the constituent elements from the literature is proposed. Using this definition, our final study builds on our earlier work, and identifies differences in how people interact and use search interfaces for different search tasks. We conclude the thesis by discussing the implications of our user studies, and our novel search interfaces, for the design of future user search interfaces. The contributions of this thesis are a demonstration of the impact of search interfaces on information seeking; an analysis and synthesis of the constituent elements of search tasks based on the research in the Information Science community; and a series of novel search interfaces that address existing shortcomings, and support more complex and exploratory search tasks.
189. Automated realistic test input generation and cost reduction in service-centric system testing. Bozkurt, M. January 2013
Service-centric System Testing (ScST) is more challenging than testing traditional software due to the complexity of service technologies and the limitations that are imposed by the SOA environment. One of the most important problems in ScST is the problem of realistic test data generation. Realistic test data is often generated manually or using an existing source, thus it is hard to automate and laborious to generate. One of the limitations that makes ScST challenging is the cost associated with invoking services during the testing process. This thesis aims to provide solutions to the aforementioned problems: automated realistic input generation and cost reduction in ScST. To address automation in realistic test data generation, the concept of Service-centric Test Data Generation (ScTDG) is presented, in which existing services are used as realistic data sources. ScTDG minimises the need for tester input and dependence on existing data sources by automatically generating service compositions that can generate the required test data. In experimental analysis, our approach achieved between 93% and 100% success rates in generating realistic data while state-of-the-art automated test data generation achieved only between 2% and 34%. The thesis addresses cost concerns at the test data generation level by enabling data source selection in ScTDG. Source selection in ScTDG has many dimensions, such as cost, reliability and availability. This thesis formulates this problem as an optimisation problem and presents a multi-objective characterisation of service selection in ScTDG, aiming to reduce the cost of test data generation. A cost-aware Pareto-optimal test suite minimisation approach addressing testing cost concerns during test execution is also presented. The approach adapts traditional multi-objective minimisation approaches to the ScST domain by formulating ScST concerns, such as invocation cost and test case reliability. In experimental analysis, the approach achieved reductions of between 69% and 98.6% in the monetary cost of service invocations during testing.
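To illustrate the Pareto-optimal minimisation formulation mentioned above, the following is a minimal sketch of a dominance filter over candidate test suites scored on two objectives (coverage to maximise, invocation cost to minimise); the objectives, values and names are illustrative assumptions, not the thesis's actual formulation or algorithm.

```python
def dominates(a, b):
    """a dominates b: no worse in every objective and strictly better in one.
    Objectives here: (coverage to maximise, cost to minimise)."""
    cov_a, cost_a = a
    cov_b, cost_b = b
    return (cov_a >= cov_b and cost_a <= cost_b) and (cov_a > cov_b or cost_a < cost_b)

def pareto_front(candidates):
    """Keep only candidate suites not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# Candidate test suites scored as (service coverage, total invocation cost)
suites = {"A": (0.95, 12.0), "B": (0.95, 20.0), "C": (0.80, 5.0), "D": (0.70, 9.0)}
front = pareto_front(list(suites.values()))
print([name for name, score in suites.items() if score in front])  # ['A', 'C']
```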
190. A framework for the characterization and analysis of software systems scalability. de Cerqueira Leite Duboc, A. L. January 2010
The term scalability appears frequently in computing literature, but it is a term that is poorly defined and poorly understood. It is an important attribute of computer systems that is frequently asserted but rarely validated in any meaningful, systematic way. The lack of a consistent, uniform and systematic treatment of scalability makes it difficult to identify and avoid scalability problems, clearly and objectively describe the scalability of software systems, evaluate claims of scalability, and compare claims from different sources. This thesis provides a definition of scalability and describes a systematic framework for the characterization and analysis of software systems scalability. The framework comprises a goal-oriented approach for describing, modeling and reasoning about scalability requirements, and an analysis technique that captures the dependency relationships that underlie typical notions of scalability. The framework is validated against a real-world data analysis system and is used to recast a number of examples taken from the computing literature and from industry in order to demonstrate its use across different application domains and system designs.