371

An information theoretic approach to the expressiveness of programming languages

Davidson, Joseph Ray January 2016 (has links)
The conciseness conjecture is a longstanding notion in computer science that programming languages with more built-in operators, that is, more expressive languages with larger semantics, produce smaller programs on average. Chaitin defines the related concept of an elegant program: one for which no smaller program in the same language produces the same output when run. This thesis investigates the conciseness conjecture empirically. Influenced by the concept of elegant programs, we investigate several models of computation and implement a set of functions in each programming model. The programming models are Turing Machines, λ-calculus, SKI, RASP, RASP2, and RASP3. The information content of the programs and models is measured in characters. These measurements are compared to investigate hypotheses about how mean program size changes as the size of the semantics changes, and how the relationship of mean program sizes between two models compares to the relationship between the sizes of their semantics. We show that the amount of information present in models of the same paradigm, or model family, is a good indication of relative expressivity and average program size: models that contain more information in their semantics have smaller average programs for the set of tested functions. In contrast, the relative expressiveness of models from differing paradigms is not indicated by their relative information contents. RASP and Turing Machines have been implemented as Field Programmable Gate Array (FPGA) circuits to investigate hardware analogues of the hypotheses above: namely, that the amount of information in the semantics of a model directly influences the size of the corresponding circuit, and that the relationship of mean circuit sizes between models is comparable to the relationship of mean program sizes. We show that the number of components in the circuits that realise the semantics and programs of the models correlates with the information required to implement the semantics and programs of a model. However, the number of components needed to implement a program in a circuit for one model does not relate to the number of components implementing the same program in another model, in contrast to the more abstract implementations of the programs. Information is a computational resource and therefore satisfies Blum's axioms. These axioms and the speedup theorem are used to obtain an alternative proof of the undecidability of elegance. This work is a step towards unifying the formal notion of expressiveness with algorithmic information theory, and it exposes a number of interesting research directions. A start has been made on integrating the results of the thesis with the formal framework for the expressiveness of programming languages.
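The core measurement described above, mean program size in characters per model of computation, can be illustrated with a minimal sketch. The model names are taken from the abstract, but the function set and character counts below are invented placeholders, not figures from the thesis.

```python
# A minimal sketch (not the thesis's tooling): mean program size in characters
# per computational model, for a hypothetical set of implemented functions.
from statistics import mean

# Hypothetical character counts of the same functions written in each model.
programs = {
    "Turing Machine": {"add": 412, "subtract": 388, "equality": 455},
    "RASP":           {"add": 130, "subtract": 142, "equality": 168},
    "RASP2":          {"add": 96,  "subtract": 101, "equality": 120},
}

def mean_program_size(model: str) -> float:
    """Mean size, in characters, of the implemented functions for one model."""
    return mean(programs[model].values())

for model in programs:
    print(f"{model:15s} mean program size: {mean_program_size(model):6.1f} chars")
```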
372

The effects of encumbrance and mobility on interactions with touchscreen mobile devices

Ng, Alexander Wing Ho January 2016 (has links)
Mobile handheld devices such as smartphones are now convenient as they allow users to make calls, reply to emails, find nearby services and much more. The increase in functionality and availability of mobile applications also allows mobile devices to be used in many different everyday situations (for example, while on the move and carrying items). While previous work has investigated the interaction difficulties in walking situations, there is a lack of empirical work in the literature on mobile input when users are physically constrained by other activities. As a result, how users provide input on touchscreen handheld devices in encumbered and mobile contexts is less well known and deserves more attention, to examine usability issues that are often ignored. This thesis investigates targeting performance on touchscreen mobile phones in one common encumbered situation: when users are carrying everyday objects while on the move. To identify the typical objects held during mobile interactions and define a set of common encumbrance scenarios to evaluate in subsequent user studies, Chapter 3 describes an observational study that examined users in different public locations. The results showed that people most frequently carried different types of bags and boxes. To measure how much tapping performance on touchscreen mobile phones is affected, Chapter 4 examines a range of encumbrance scenarios, including holding a bag in-hand or a box underarm, on either the dominant or non-dominant side, during target selections on a mobile phone. Users are likely to switch to a more effective input posture when encumbered and on the move, so Chapter 5 investigates one- and two-handed encumbered interactions and evaluates situations where both hands are occupied with multiple objects. Touchscreen devices afford various multi-touch input types, so Chapter 6 compares the performance of four main one- and two-finger gesture inputs, tapping, dragging, spreading & pinching and rotating, while walking and encumbered. Several main evaluation approaches have been used in previous walking studies, but more attention is required when the effects of encumbrance are also being examined. Chapter 7 examines the appropriateness of two methods (ground and treadmill walking) for encumbered and walking studies, justifies the need to control walking speed, and examines the effects of varying walking speed (i.e. walking slower or faster than normal) on encumbered targeting performance. The studies all showed a reduction in targeting performance when users were walking and encumbered, so Chapter 8 explores two ways to improve target selections. The first approach defines a target size, based on the results collected from earlier studies, to increase tapping accuracy; subsequently, a novel interface arrangement was designed which optimises screen space more effectively. The second approach evaluates a benchmark pointing technique, which has been shown to improve the selection of small targets, to see if it is useful in walking and encumbered contexts.
373

On the enhancement of data quality in security incident response investigations

Grispos, George January 2016 (has links)
Security incidents detected by information technology-dependent organisations are escalating in both scale and complexity. As a result, security incident response has become a critical mechanism for organisations in an effort to minimise the damage from security incidents. To help organisations develop security incident response capabilities, several security incident response approaches and best practice guidelines have been published in both industry and academia. The final phase within many of these approaches and best practices is the ‘feedback’ or ‘follow-up’ phase. Within this phase, it is expected that an organisation will learn from a security incident and use this information to improve its overall information security posture. However, researchers have argued that many organisations tend to focus on eradication and recovery instead of learning from a security incident. An exploratory case study was undertaken in a Fortune 500 organisation to investigate security incident learning in practice. At a high level, the challenges and problems identified in the case study suggest that security incident response could benefit from improving the quality of data generated from and during security investigations. Therefore, the objective of this research was to improve the quality of data in security incident response, so that organisations can develop deeper insights into security incident causes and assist security incident learning. A supplementary challenge identified was the need to minimise the time-cost associated with any changes to organisational processes. Therefore, several lightweight measures were created and implemented within the case study organisation. These measures were evaluated in a series of longitudinal studies that collected both quantitative and qualitative data from the case study organisation.
374

A portfolio of acoustic/electroacoustic music compositions & computer algorithms that investigate the development of polymodality, polyharmony, chromaticism & extended timbre in my musical language

Hughes, Gareth Olubunmi January 2016 (has links)
The emphasis of this PhD is in the field of original/contemporary musical composition, and I have submitted a portfolio of original compositions (volume 1/2, comprising music scores of both acoustic and electroacoustic compositions [totalling c. 114:30 minutes of music] as well as written material relating to notation and artistic motivation), along with an academic commentary (volume 2/2 [totalling c. 19,500 words], which places the original compositional work in the portfolio in its academic context). The compositions in the first volume are varied and broad-ranging in scope. In terms of pitch organisation, the majority of works adopt some form of modality or polymodality, whilst certain works also incorporate post-tonal chromaticism and serialism into their syntax. Certain key works also explore extended timbre and colouration (in particular for bowed strings, voices, flute and electronics) and adopt the use of timbral modifications, harmonics, microtones, multiphonics, sprechgesang (i.e. ‘speech-song’), phonetics and the incorporation of electroacoustic sampling, sound synthesis and processing. The academic commentary in the second volume sets out several initial theoretical pitch-organisation models (namely relating to modes, polymodes, rows, serial techniques and intervallic cells), with a particular emphasis placed on the formation of a melodic/harmonic language which is fundamentally polymodal, polychordal and polyharmonic. The commentary then takes a closer look at various works within the portfolio which adopt modal, polymodal and chromatic forms of pitch organisation (whilst intermittently discussing wider musical parameters, such as rhythm, counterpoint, timbre, structure, etc.). Separate chapters also discuss in greater depth a work for flute and electronics and a lengthy work for string quartet (inspired by urban dystopian film). The commentary also discusses my style of writing, placing individual works within the portfolio in their academic context alongside key influences, as well as contextualising non-musical aesthetics and sources of artistic inspiration relating to my work.
375

A static, transaction based design methodology for hard real-time systems

Sleat, Philip M. January 1991 (has links)
This thesis is concerned with the design and implementation stages of the development lifecycle of a class of systems known as hard real-time systems. Many of the existing methodologies are appropriate for meeting the functional requirements of this class of systems. However, it is proposed that these methodologies are not entirely appropriate for meeting the non-functional requirement of deadlines for work within these real-time systems. After discussing the concept of real-time systems and their characteristic requirements, this thesis proposes the use of a general transaction model of execution for the implementation of the system. Whereas traditional methodologies consider the system from the flow of data or control in the system, we consider the system from the viewpoint of the role of each shared data entity. A control dependency is implied between otherwise independent processes that make use of a shared data entity; our viewpoint is known as the data dependency viewpoint. This implied control dependency between independent processes, necessary to preserve the consistency of the entity in the face of concurrent access, is ignored during the design stages of other methodologies. In considering the role of each data entity, it is possible to generate other viewpoints, such as the dataflow through the processes, automatically; this, however, is not considered in this work. This thesis describes a staged methodology for taking the requirements specification for a system and generating a design and implementation for that system. The methodology is intended to be more than a set of vague guidelines for implementation; a more rigid approach to the design and implementation stages is sought. The methodology begins by decomposing the system into more manageable units of processing. These units, known as tasks, have a very low degree of coupling and a high degree of cohesion. Following the system decomposition, the data dependency viewpoint is constructed; a descriptive notation and a CASE tool support this viewpoint. From this viewpoint, implementation issues such as generating control flow, task and data allocation, and hard real-time scheduling are addressed. A complete runtime environment to support the transaction model is described; this environment is hierarchical and can be adapted to many distributed implementations. Finally, the stages of the methodology are applied to a large example, a Ship Control System: starting with a specification of the requirements, the methodology is applied to generate a design and implementation of the system.
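The data dependency viewpoint described above can be illustrated with a small, hypothetical sketch: two otherwise independent tasks share one data entity, and the entity serialises their transactions, making the implied control dependency explicit. The task names and the use of Python threads are illustrative only; the thesis targets hard real-time systems, not this kind of runtime.

```python
# A minimal, hypothetical sketch of the implied control dependency: two
# otherwise independent tasks share one data entity, and the entity
# serialises their transactions to stay consistent under concurrent access.
import threading

class SharedEntity:
    """A shared data entity; transactions on it are mutually exclusive."""
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self.value = 0

    def transaction(self, delta: int) -> None:
        # The lock is the implied control dependency between the tasks.
        with self._lock:
            self.value += delta

def sensor_task(entity: SharedEntity) -> None:
    for _ in range(1000):
        entity.transaction(+1)

def actuator_task(entity: SharedEntity) -> None:
    for _ in range(1000):
        entity.transaction(-1)

entity = SharedEntity()
threads = [threading.Thread(target=sensor_task, args=(entity,)),
           threading.Thread(target=actuator_task, args=(entity,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("final value:", entity.value)  # 0: consistency preserved across tasks
```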
376

Quantitative characterisation of surface finishes on stainless steel sheet using 3D surface topography analysis

Waterworth, Adelle January 2006 (has links)
The main aim of this project was to quantitatively characterise the developed surface topography of finishes on stainless steel sheet using three-dimensional surface analysis techniques. At present, surface topography is measured mainly using stylus profilometry and analysed with 2D parameters such as Ra, Rq and Rz. These 2D measurements are not only unreliable, due to a lack of standardised measurement methodology, but are also difficult to relate directly to the actual shape of the topography in three dimensions. They bear little direct relation to the functional properties of the stainless steel surface, making them less useful than their 3D counterparts. Initially it is crucial to ensure that the surface topography data collected are correct, accurate and relevant, by defining a measurement strategy. Models of the surface topography are developed encompassing the usual features of the topography and the variations in topography caused by production, or 'defects'. The functional features are discussed and the predicted relevant parameters are presented. The protocol covers the selection of the correct measuring instrument, based on the surface model and the size of the relevant functional features, so that the desired lateral and vertical resolution and range are achievable. Measurement data are then analysed using Fast Fourier Transforms (FFTs) to separate the different spatial frequencies detected on the surface. The frequencies of the important features show up dominantly on a Power Spectral Density (PSD) plot, and this is used to find the correct sampling interval to accurately reconstruct the 3D surface data. The correct instrument for further measurements is then selected using a Steadman diagram. Operational details of the measuring instruments available for this project are given and the variables for these instruments are discussed. Finally, measurement method recommendations are made for each of the four finishes modelled. Based on this surface characterisation, an attempt is made to identify the 3D parameters that give a quantitative description of common stainless steel sheet finishes with respect to some aspects of their production and functional performance. An investigation of the differences in manufacturing process, gauge and grade of material is presented, providing an insight into the effect of such variations on topography. The standardised 3D parameter set is examined to determine its sensitivity to common variations in the topography of the 2B finish and therefore the parameters' potential relevance. A new data separation technique based on the material probability curve, for use on the 3D datasets, establishes a cut-off (transition point) between the two main functionally relevant features of the 2B surface (plateaus and valleys) by finding the intersection of the asymptotes of a fitted conic section, giving a non-subjective methodology for establishing the section height. The standardised 3D parameters are then applied to the separated data, with the aim of being more functionally relevant to the main surface studied. Functional tests to rate the capability of these parameters in the areas of optical appearance, lubricant retention and corrosion are carried out, and the appropriate topography parameters are related to their performance.
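The FFT/PSD step described above can be illustrated with a minimal sketch (assumed, not the project's actual analysis code): estimate the power spectral density of a synthetic 1D profile and use the dominant spatial frequency to suggest a maximum sampling interval. The profile, wavelength and amplitudes are invented for illustration.

```python
# A minimal sketch: PSD of a synthetic 1D surface profile via FFT, then a
# Nyquist-based suggestion for the sampling interval. Data are hypothetical.
import numpy as np

dx = 1.0e-6                      # assumed sampling interval of the raw profile, 1 um
x = np.arange(0, 2048) * dx
# Hypothetical profile: 50 um-wavelength waviness plus fine random roughness.
profile = 0.2e-6 * np.sin(2 * np.pi * x / 50e-6) + 0.02e-6 * np.random.randn(x.size)

spectrum = np.fft.rfft(profile - profile.mean())
freqs = np.fft.rfftfreq(profile.size, d=dx)      # spatial frequencies (1/m)
psd = (np.abs(spectrum) ** 2) / (profile.size * dx)

dominant = freqs[1:][np.argmax(psd[1:])]         # skip the DC bin
print(f"dominant spatial frequency: {dominant:.3g} 1/m "
      f"(wavelength {1 / dominant * 1e6:.1f} um)")
# Nyquist: sample at least twice per dominant wavelength; finer in practice.
print(f"suggested max sampling interval: {1 / (2 * dominant) * 1e6:.1f} um")
```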
377

The use of 3D surface analysis techniques to investigate the wear of matt surface finish femoral stems in total hip replacement

Brown, Leigh January 2006 (has links)
Total hip replacement is one of the most common surgical procedures carried out both in the UK and worldwide. With an increasing number of younger patients undergoing the procedure, there is an emphasis on increasing the longevity of prostheses. The following reports on a number of component studies which, when combined, give an insight into the mechanism of wear behind the loosening and failure of matt surface finish femoral stems. By examining stems which have been explanted from patients, a method of wear classification has been developed, and 3D surface measurement techniques have been employed to quantify wear through parametric characterisation and volume analysis. Initial findings suggested that the wear of matt finish femoral stems differs from that of smoother polished femoral stems. The studies also provide information regarding the nature of bone cement, its behaviour and the interaction between stem and cement following insertion of the stem. It was found that geometric change in bone cement occurred during polymerisation and following curing; this geometric change presented itself in the form of differential shrinkage. This shrinkage of the cement was observed initially through 3D surface topography analysis and later confirmed with geometric measurement techniques. The presence of voids between stem and cement gives rise to the possibility of debris creation and transportation, adding to the evidence for a difference in wear mechanism between polished and matt surface finish femoral stems. Some progress was made towards replication of wear in vitro, which has future possibilities for wear screening of materials and designs of future prostheses. The overall conclusion of the study is that the dominant wear mechanism between the stem and bone cement was abrasive in nature, and this is likely to explain the accelerated wear of matt stems reported by clinicians and researchers.
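The volume analysis mentioned above can be illustrated with a minimal, hypothetical sketch that subtracts an 'after' height map from a 'before' one and integrates the material lost; the data, sampling interval and wear patch are synthetic and are not taken from the study.

```python
# A minimal, hypothetical sketch of volume-based wear quantification from two
# aligned 3D height maps (synthetic data, not the study's measurements).
import numpy as np

pixel_area = (2.0e-6) ** 2            # assumed lateral sampling: 2 um x 2 um
rng = np.random.default_rng(0)

before = rng.normal(0.0, 0.1e-6, size=(512, 512))   # synthetic matt topography (m)
after = before.copy()
after[200:300, 200:300] -= 0.5e-6                    # synthetic abraded patch

loss = np.clip(before - after, 0.0, None)            # material removed per pixel (m)
wear_volume = loss.sum() * pixel_area                # total wear volume (m^3)
print(f"estimated wear volume: {wear_volume * 1e18:.0f} um^3")
```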
378

Parallel corpus multi stream question answering with applications to the Qu'ran

Jilani, Aisha January 2013 (has links)
Question Answering (QA) is an important research area concerned with developing an automated process that answers questions posed by humans in a natural language. QA is a shared task for the Information Retrieval (IR), Information Extraction (IE) and Natural Language Processing (NLP) communities. A technical review of different QA system models and methodologies reveals that a typical QA system consists of different components to accept a natural language question from a user and deliver its answer(s) back to the user. Existing systems have usually been aimed at structured or unstructured data collected from everyday English text, i.e. text collected from television programmes, news wires, conversations, novels and other similar genres. Despite all up-to-date research in the subject area, a notable fact is that none of the existing QA systems has been tested on a parallel corpus of religious text with the aim of question answering. Religious text has peculiar characteristics and features which make it more challenging for traditional QA methods than other kinds of text. This thesis proposes the PARMS (Parallel Corpus Multi Stream) methodology: a novel method that applies existing advanced IR techniques and combines them with NLP methods and additional semantic knowledge to implement QA for a parallel corpus. A parallel corpus involves the use of multiple forms of the same corpus, where each form differs from the others in a certain aspect, e.g. translations of a scripture from one language to another by different translators. Additional semantic knowledge can be regarded as a stream of information related to a corpus. PARMS uses multiple streams of semantic knowledge including a general ontology (WordNet) and domain-specific ontologies (QurTerms, QurAna, QurSim). This additional knowledge is used in embedded form for query expansion, corpus enrichment and answer ranking. The PARMS methodology has wider applications; this thesis applies it to the Quran, the core text of Islam, as a first case study. The PARMS method uses a parallel corpus comprising ten different English translations of the Quran, and an individual Quranic verse is treated as an answer to questions asked in a natural language, English. This thesis also implements a PARMS QA application as a proof of concept for the PARMS methodology. The PARMS methodology aims to evaluate the range of semantic knowledge streams separately and in combination, and also to evaluate alternative subsets of the data source: QA from one stream versus the parallel corpus. Results show that the use of a parallel corpus and multiple streams of semantic knowledge has clear advantages. To the best of my knowledge, this is the first time such a method has been developed, and it is expected to become a benchmark for further research in the area.
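WordNet-based query expansion of the kind PARMS embeds can be illustrated with a minimal sketch; this is not the thesis's implementation, and the example query terms are hypothetical. It assumes NLTK is installed and the WordNet corpus has been downloaded.

```python
# A minimal sketch of WordNet query expansion (assumed, not the PARMS code).
# Requires: pip install nltk, then nltk.download("wordnet").
from nltk.corpus import wordnet as wn

def expand_query(terms):
    """Add WordNet synonyms of each query term as extra search terms."""
    expanded = set(terms)
    for term in terms:
        for synset in wn.synsets(term):
            for lemma in synset.lemma_names():
                expanded.add(lemma.replace("_", " ").lower())
    return sorted(expanded)

# Hypothetical query terms for illustration only.
print(expand_query(["charity", "fasting"]))
```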
379

Banking theory based distributed resource management and scheduling for hybrid cloud computing

Li, Hao January 2013 (has links)
Cloud computing is a computing model in which the network offers a dynamically scalable service based on virtualised resources. The resources in the cloud environment are heterogeneous and geographically distributed, and the user does not need to know how the supporting infrastructure is managed. From the viewpoint of cloud computing, all hardware, software and networks are resources, and all of these resources are dynamically scalable on demand. The cloud can offer a complete service to the user even when the service resources are geographically distributed, and the user pays only for what they use (pay-per-use). Meanwhile, the transaction environment decides how to manage resource usage and cost, because all transactions have to follow the rules of the market. How to manage and schedule resources effectively therefore becomes a very important part of cloud computing, and how to set up a new framework that offers a reliable, safe and executable service is a very important issue. The approach described here is a new contribution to cloud computing: it proposes a hybrid cloud computing model based on banking theory to manage transactions among all participants in the hybrid cloud computing environment, and a "Cloud Bank" framework to support all the related issues. The contributions are as follows:
1. The thesis presents an Optimal Deposit-Loan Ratio Theory to adjust the pricing between the resource provider and the resource consumer, realising both benefit maximisation and cloud service optimisation for all participants.
2. It offers a new pricing schema, using a Centralized Synchronous Algorithm and a Distributed Price Adjustment Algorithm, to control all lifecycles and dynamically price all resources.
3. Commercial banks normally apply four factors to mitigate and predict risk: Probability of Default, Loss Given Default, Exposure at Default and Maturity. The thesis applies the Probability of Default credit-risk model to forecast the safe supply of resources; a logistic regression model is used to control some factors in resource allocation, and multivariate statistical analysis is used to predict risk.
4. The Cloud Bank model applies an improved Pareto Optimality Algorithm to build its own scheduling system.
5. To achieve the above, the thesis proposes a new QoS-based SLA-CBSAL to describe all physical resources and the processing of threads.
To support the related algorithms and theories, the thesis uses the CloudSim simulation toolkit to give test results for some of the Cloud Bank management strategies and algorithms. The experiments show that the Cloud Bank model is a possible new solution for hybrid cloud computing. As future research, the author will focus on building a real hybrid cloud, simulating actual user behaviour in a real environment, and continuing to improve the feasibility and effectiveness of the project. For risk mitigation and prediction, risks can be divided into four categories: credit risk, liquidity risk, operational risk and other risks. Although this thesis addresses credit risk and liquidity risk, operational and other risks exist in a real trading environment; only by improving the analysis of, and strategies for, all risk types can the Cloud Bank be considered relatively complete.
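The use of logistic regression to estimate a Probability of Default-style score for a resource provider (contribution 3 above) can be illustrated with a minimal, hypothetical sketch; the features and training data below are invented for illustration and are not taken from the thesis.

```python
# A minimal, hypothetical sketch of a Probability of Default (PD)-style score
# for a resource provider using logistic regression; data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per provider: [utilisation, past_failure_rate, avg_response_ms]
X = np.array([
    [0.30, 0.01,  80],
    [0.85, 0.10, 300],
    [0.55, 0.02, 120],
    [0.95, 0.25, 450],
    [0.40, 0.00,  90],
    [0.90, 0.15, 380],
])
y = np.array([0, 1, 0, 1, 0, 1])   # 1 = provider failed to supply the resource

model = LogisticRegression(max_iter=1000).fit(X, y)
new_provider = np.array([[0.70, 0.05, 200]])
pd_estimate = model.predict_proba(new_provider)[0, 1]
print(f"estimated probability of default: {pd_estimate:.2f}")
```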
380

Garbage collection optimization for non uniform memory access architectures

Alnowaiser, Khaled Abdulrahman January 2016 (has links)
Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system's distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access-latency overhead and potential bandwidth saturation of the interconnect between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case study implementation using the HotSpot Java Virtual Machine, and empirically studies data locality for a stop-the-world garbage collector when tracing connected objects in NUMA heaps. First, it identifies a locality richness which exists naturally in connected objects that contain a root object and its reachable set, termed 'rooted sub-graphs'. Second, this dissertation leverages the locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism: a garbage collector thread processes a local root and its reachable set, which is likely to have a large number of objects in the same NUMA node. Third, a garbage collector thread steals references from sibling threads that run on the same NUMA node to improve data locality. This research evaluates the new NUMA-aware garbage collector using seven benchmarks from the established real-world DaCapo benchmark suite; in addition, the evaluation involves the widely used SPECjbb benchmark, a Neo4j graph database Java benchmark and an artificial benchmark. The results for the NUMA-aware garbage collector on a multi-hop NUMA architecture show an average performance improvement of 15%, and this gain is shown to be the result of improved NUMA memory access in a ccNUMA system. Fourth, the existing HotSpot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: the policy uses outdated assumptions, generates a constant thread count, and is still used in the production version of the HotSpot JVM. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring this optimal number yields better collection throughput than the default policy. Fifth, this dissertation designs and implements a runtime technique which uses heuristics from dynamic collection behaviour to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average improvement of 21% in garbage collection performance for the DaCapo benchmarks.
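The fourth contribution, replacing HotSpot's constant default with an application-specific garbage collection thread count, can be illustrated with a minimal sketch. The heuristic, the assumed NUMA topology and the benchmark invocation below are illustrative only; -XX:ParallelGCThreads and -XX:+UseNUMA are real HotSpot flags, but the dissertation's own runtime technique recomputes the thread count per collection cycle rather than fixing it at launch.

```python
# A minimal sketch (assumed, not the dissertation's technique): compute an
# application-specific number of parallel GC threads, e.g. one per core on a
# single NUMA node, and build a HotSpot launch command that overrides the
# default adaptive policy. The benchmark jar name is hypothetical.
import os

cores = os.cpu_count() or 8
numa_nodes = 2                       # assumed topology; inspect it with `numactl -H`
gc_threads = max(1, cores // numa_nodes)

cmd = [
    "java",
    f"-XX:ParallelGCThreads={gc_threads}",   # fixed, application-specific count
    "-XX:+UseNUMA",                          # NUMA-aware heap placement
    "-jar", "dacapo.jar", "h2",              # hypothetical benchmark invocation
]
# Pass `cmd` to subprocess.run(cmd) to launch; printed here for illustration.
print(" ".join(cmd))
```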
