51

A network-aware virtual machine placement approach for data-intensive applications in a cloud environment

Alharbi, Yasser January 2018 (has links)
Cloud computing provides beneficial services to users, enabling them to share large amounts of information, employ Storage Nodes (SN), utilise Computing Nodes (CN) and gather knowledge for research. Virtual Machines (VMs) usually host data-intensive applications, which submit thousands of jobs that access subsets of the petabytes of data distributed over Cloud Datacentres (DCs). VM scheduling and allocation decisions in cloud environments are based on different parameters, such as cost, resource utilisation, performance, time and resource availability. In the case of application performance, the decisions are often made on the basis of jobs being either data-intensive or computation-intensive. In data-intensive situations, jobs may be pushed to the data; in computation-intensive situations, data may be pulled to the jobs. This kind of scheduling, in which there is no consideration of network characteristics, can lead to performance degradation in a cloud environment and may result in large processing queues and job execution delays due to site overloads. This thesis proposes a novel service framework, the network-aware VM placement approach for data-intensive applications (NADI), to address the need for improved application performance. NADI takes into account a job's time cost based on a mechanism that maps VMs against the resources when making scheduling decisions across multiple DCs. Thus, it not only allocates the best available resources to a VM to minimise the time needed to complete its jobs but also checks the global state of jobs and resources so that the output of the whole cloud is maximised. The thesis begins with a statement of the problem addressed and the objectives of the research. The methodology adopted for the research is described subsequently, and the outline of the thesis is presented. This is followed by a brief introduction highlighting the current approaches in VM placement and migration in cloud computing. Next, this thesis presents a framework for the proposed NADI with a description of its various components and enabling functionalities, which are required to realise this framework. Multi-objective strategies suitable for the problems in NADI are presented. Novel algorithms for managing applications and their data are proposed; they aim to improve each job's performance and minimise the traffic between the application and its related data. The results indicate that considerable performance improvements can be gained by adopting the NADI scheduling approach, with completion time reduced by 25% to 51%.
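The network-aware placement decision can be pictured with a small, hedged sketch. The cost model below (data-transfer time plus compute time, with illustrative field names such as `free_capacity` and `work_units`) is an assumption for exposition, not the scheduling model defined in the thesis:

```python
# A minimal sketch of network-aware placement in the spirit of NADI,
# assuming each datacentre advertises spare capacity and an estimated
# inter-site bandwidth; all names and the cost model are illustrative.

def place_vm(job, datacentres, bandwidth_gbps):
    """Pick the datacentre minimising estimated job completion time:
    network transfer time for the input data plus compute time."""
    def cost(dc):
        if dc["name"] == job["data_site"]:
            transfer = 0.0                        # job pushed to its data
        else:
            bw = bandwidth_gbps[(job["data_site"], dc["name"])]
            transfer = job["input_gb"] * 8 / bw   # GB -> gigabits -> seconds
        compute = job["work_units"] / dc["free_capacity"]
        return transfer + compute
    candidates = [dc for dc in datacentres if dc["free_capacity"] > 0]
    return min(candidates, key=cost)

job = {"data_site": "dc1", "input_gb": 200, "work_units": 1000}
dcs = [{"name": "dc1", "free_capacity": 10},
       {"name": "dc2", "free_capacity": 40}]
bw = {("dc1", "dc2"): 10}
# dc1 wins despite lower capacity: staying near the data avoids a 160 s transfer.
print(place_vm(job, dcs, bw)["name"])
```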
52

Silicon-based non-volatile nano-electro-mechanical switch with controlled van der Waals force

Boodhoo, Liam January 2015 (has links)
No description available.
53

Models of higher-order, type-safe, distributed computation over autonomous persistent object stores

Mira da Silva, Miguel Leitão Bignolas January 1996 (has links)
A remote procedure call (RPC) mechanism permits the calling of procedures in another address space. RPC is a simple but highly effective mechanism for interprocess communication and nowadays enjoys great popularity as a tool for building distributed applications. This popularity is partly a result of its overall simplicity but also partly a consequence of more than 20 years of research in transparent distribution that have failed to deliver systems that meet the expectations of real-world application programmers. During the same 20 years, persistent systems have proved their suitability for building complex database applications by seamlessly integrating features traditionally found in database management systems into the programming language itself. Some research effort has been invested in distributed persistent systems, but the outcomes commonly suffer from the same problems found with transparent distribution. In this thesis I claim that a higher-order persistent RPC is useful for building distributed persistent applications. The proposed mechanism is: realistic in the sense that it uses current technology and tolerates partial failures; understandable by application programmers; and general enough to support the development of many classes of distributed persistent applications. In order to demonstrate the validity of these claims, I propose and have implemented three models for distributed higher-order computation over autonomous persistent stores. Each model has successively exposed new problems which have then been overcome by the next model. Together, the three models provide a general yet simple higher-order persistent RPC that is able to operate in realistic environments with partial failures. The real strength of this thesis is the demonstration of realism and simplicity. A higher-order persistent RPC was not only implemented but also used by programmers without experience of programming distributed applications. Furthermore, a distributed persistent application has been built using these models which would not have been feasible with a traditional (non-persistent) programming language.
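To make "higher-order" concrete: the distinguishing feature is that procedures themselves can cross the wire as arguments and results. The in-process Python sketch below simulates one plausible scheme, marshalling procedures by reference so that unmarshalling yields a proxy that re-enters RPC; none of these names come from the thesis, and the three models it actually proposes differ in detail:

```python
# A minimal, in-process sketch of a higher-order RPC, assuming procedures
# are marshalled by reference (a name in a registry) rather than by value.
# All names here (registry, marshal, rpc) are illustrative.

registry = {}  # name -> callable; each store would hold its own registry

def export(fn):
    """Register a procedure so it can be passed across address spaces."""
    registry[fn.__name__] = fn
    return fn

def marshal(value):
    # Procedures travel as references; plain data travels as-is.
    return ("ref", value.__name__) if callable(value) else ("val", value)

def unmarshal(tagged):
    tag, payload = tagged
    if tag == "ref":
        # Unmarshalling a procedure yields a proxy that re-enters RPC.
        return lambda *args: rpc(payload, *args)
    return payload

def rpc(name, *args):
    """Simulate a call into another address space: marshal the arguments,
    look the procedure up remotely, and marshal the result back."""
    wire_args = [marshal(a) for a in args]
    fn = registry[name]                     # remote lookup
    result = fn(*[unmarshal(a) for a in wire_args])
    return unmarshal(marshal(result))

@export
def twice(f, x):       # a higher-order procedure: takes a procedure f
    return f(f(x))

@export
def inc(x):
    return x + 1

print(rpc("twice", inc, 3))  # -> 5; inc crossed the 'wire' as a reference
```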
54

Value-gradient learning

Fairbank, Michael January 2014 (has links)
This thesis presents an Adaptive Dynamic Programming method, Value-Gradient Learning, for solving a control optimisation problem, using a neural network to represent a critic function in a large continuous-valued state space. The algorithm developed, called VGL(λ), requires a learned differentiable model of the environment. VGL(λ) is an extension of Dual Heuristic Programming (DHP) to include a bootstrapping parameter, λ, analogous to that used in the reinforcement learning algorithm TD(λ). Online and batch-mode implementations of the algorithm are provided, and its theoretical relationships to its precursor algorithms, DHP and TD(λ), are described. A theoretical result is given which shows that to achieve trajectory optimality in a continuous-valued state space, the critic must learn the value-gradient, and this fact affects any critic-learning algorithm. The connection of this result to Pontryagin's Minimum Principle is made clear. Hence it is proven that learning this value-gradient directly will obviate the need for local exploration of the value function, and this motivates value-gradient learning methods in terms of automatic local value exploration and improved learning speed. Empirical results for the algorithm are given for several benchmark problems, and the improved speed, convergence, and ability to work without local value exploration are demonstrated in comparison to its precursor algorithms, TD(λ) and DHP. A convergence proof for one instance of the VGL(λ) algorithm is given, which is valid for control problems with a greedy policy, and a general nonlinear function approximator to represent the critic. This is a non-trivial accomplishment, since most or all other related algorithms can be made to diverge under similar conditions, and new divergence proofs demonstrating this for certain algorithms are given in the thesis. Several technical problems must be overcome to make a robust VGL(λ) implementation, and these solutions are described. These include implementing an efficient greedy policy, implementing trajectory clipping correctly, and the efficient computation of second-order gradients with a neural network.
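The λ-weighted bootstrapping can be illustrated with a short, hedged sketch. The backward recursion below, over a scalar state for clarity, is one plausible reading of how TD(λ)-style targets transfer to value-gradients via the learned model's derivatives; the exact VGL(λ) update in the thesis may differ in detail:

```python
# A simplified reading of VGL(lambda)-style value-gradient targets,
# not the thesis's exact formulation. drdx[t] ~ dr_t/dx_t, jac[t] ~
# dx_{t+1}/dx_t from the learned model, g_hat[t] is the critic's
# current value-gradient estimate at x_t.
import numpy as np

def vgl_lambda_targets(drdx, jac, g_hat, gamma=0.99, lam=0.7):
    """Backward recursion for lambda-weighted value-gradient targets
    along one trajectory (scalar state for clarity)."""
    T = len(drdx)
    G = np.zeros(T)
    G[-1] = drdx[-1]                  # terminal step: no successor
    for t in range(T - 2, -1, -1):
        # Blend the critic's own gradient estimate with the recursive
        # target, exactly as TD(lambda) blends bootstrapped returns.
        bootstrap = (1 - lam) * g_hat[t + 1] + lam * G[t + 1]
        G[t] = drdx[t] + gamma * jac[t] * bootstrap
    return G

# The critic's weights would then be nudged so that g_hat[t] moves
# towards G[t] at each visited state.
drdx = np.array([0.1, 0.0, -0.2, 1.0])
jac = np.array([0.9, 1.1, 0.95, 1.0])
g_hat = np.array([0.5, 0.4, 0.3, 0.2])
print(vgl_lambda_targets(drdx, jac, g_hat))
```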
55

Security aware service composition

Pino, Luca January 2015 (has links)
Security assurance of Service-Based Systems (SBS) is a necessity and a key challenge in Service Oriented Computing. Several approaches have been introduced in order to take care of the security aspect of SBSs, from the design to the implementation stages. Such solutions, however, require expertise with regard to security languages and technologies or modelling formalisms. Furthermore, existing approaches allow only limited verification of security properties over a service composition, as they focus on specific properties and require expressing compositions and properties in a model-based formalism. In this thesis we present a unified security-aware service composition approach capable of validating arbitrary security properties. This approach allows SBS designers to build secure applications without the need to learn formal models, thanks to security descriptors for services, be they self-appointed or certified by an external third party. More specifically, the framework presented in this thesis allows expressing security requirements for a service composition, propagating them to requirements for the individual activities of the composition, and checking security requirements against security service descriptors. The approach relies on the new core concept of secure composition patterns, which model proven implications of security requirements within an orchestration pattern. The framework has been implemented and tested extensively in both SBS design-time and runtime scenarios, based respectively on Eclipse BPEL Designer and the Runtime Service Discovery Tool.
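The propagation-and-check step might look something like the following hedged sketch, where a pattern records that a composition-level property is implied when every activity in a given orchestration shape certifies a corresponding property; all class and field names here are illustrative assumptions, not the thesis's formalism:

```python
# A minimal sketch of pattern-driven requirement propagation, assuming a
# secure composition pattern maps a composition-level property, for a
# given orchestration shape, to a per-activity property proven to imply it.
from dataclasses import dataclass

@dataclass
class Pattern:
    orchestration: str           # e.g. "sequence"
    composite_property: str      # property required of the composition
    activity_property: str       # property each activity must certify

PATTERNS = [
    Pattern("sequence", "confidentiality", "encrypts-in-transit"),
    Pattern("sequence", "integrity", "signs-messages"),
]

def propagate(orchestration, required, patterns=PATTERNS):
    """Map a composition-level requirement to a per-activity requirement."""
    for p in patterns:
        if p.orchestration == orchestration and p.composite_property == required:
            return p.activity_property
    raise LookupError(f"no proven pattern for {required} over {orchestration}")

def validate(services, orchestration, required):
    """Check each service's (self-appointed or certified) security
    descriptor against the propagated per-activity requirement."""
    needed = propagate(orchestration, required)
    return all(needed in svc["descriptor"] for svc in services)

services = [{"name": "pay", "descriptor": {"encrypts-in-transit", "signs-messages"}},
            {"name": "ship", "descriptor": {"encrypts-in-transit"}}]
print(validate(services, "sequence", "confidentiality"))  # True
print(validate(services, "sequence", "integrity"))        # False: 'ship' lacks signing
```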
56

The routine health information system in Palestine : determinants and performance

Mimi, Y. January 2015 (has links)
A health information system (HIS) plays an important role in ensuring that reliable and timely health information is available for operational and strategic decision making that saves lives and enhances health. Despite their importance for evidence-based decisions, health information systems in many developing countries are weak, fragmented and often focused exclusively on disease-specific programme areas. There is a broad consensus in the literature that strengthening of national HIS is desirable. An integrated HIS will provide the basis for public health professionals to look at the health system from a broader, more comprehensive point of view. The routine health information system (RHIS) in Palestine does not store data at the case level but aggregates them at the Facility level only. Additionally, the establishment of multiple information databases in different Ministry of Health (MoH) departments causes incompatibility between the different databases and ineffective use of information. This study examines the availability and the utilisation of information in support of health care organisation and delivery in Palestine, which entailed an assessment of the current situation to identify determinants of RHIS performance. The Palestinian Ministry of Health at the Ministry, District and Facility levels was the study setting, while systems and staff operating at these three levels were the target population. Employing a purposive sampling method, a total of 123 respondents participated in the study. The Performance of Routine Information System Management (PRISM) framework and its four-tool package was used to assess the performance of the RHIS at the Palestinian MoH. The PRISM framework empirically tests the relationships among technical, behavioural and organisational determinants of health management information system (HMIS) process and performance. Data quality is measured in terms of accuracy and completeness at the Facility level, whereas at the Ministry HMIS and District levels it is measured in terms of timeliness, data accuracy and completeness. Data quality was good at the Ministry HMIS level. At the District level, data completeness and accuracy were good, while timeliness was immeasurable on the basis of currently adopted procedures. At the Facility level, data completeness and data accuracy were only acceptable. Use of information was poor at all three levels: Ministry HMIS, District and Facility. The display of updated data on mother's health, child health, Facility utilisation and disease surveillance was poor at both the District and Facility levels. RHIS processes at the Ministry HMIS level were good; however, they were poor at the District and Facility levels. Overall, technical and behavioural determinants fared poorly at all three levels, while organisational determinants at the Ministry HMIS level were very good for RHIS governance and planning but poor for supervision, training and finance. These findings provide evidence of the need to establish a national RHIS whose utilisation is made legally compulsory for all. Investing heavily and systematically in building relevant staff capacity and technical infrastructure to improve performance is a key conclusion from this project.
57

The SSPNet-Mobile Corpus : from the detection of non-verbal cues to the inference of social behaviour during mobile phone conversations

Polychroniou, Anna January 2014 (has links)
Mobile phones are one of the main channels of communication in contemporary society. However, the effect of the mobile phone on both the process of, and the non-verbal behaviours used during, conversations mediated by this technology remains poorly understood. This thesis investigates the role of the phone in the negotiation process, as well as the automatic analysis of non-verbal behavioural cues during conversations using mobile telephones, following the Social Signal Processing approach. The work in this thesis includes the collection of a corpus of 60 mobile phone conversations involving 120 subjects; the development of methods for the detection of non-verbal behavioural events (laughter, fillers, speech and silence) and the inference of characteristics influencing social interactions (personality traits and conflict handling style) from speech and movements while using the mobile telephone; and the analysis of several factors that influence the outcome of decision-making processes while using mobile phones (gender, age, personality, conflict handling style and caller versus receiver role). The findings show that it is possible to recognise behavioural events at levels well above chance by employing statistical language models, and that personality traits and conflict handling styles can be partially recognised. Among the factors analysed, participant role (caller versus receiver) was the most important in determining the outcome of negotiation processes in the case of disagreement between parties. Finally, the corpus collected for the experiments (the SSPNet-Mobile Corpus) has been used in an international benchmarking campaign and constitutes a valuable resource for future research in Social Signal Processing and, more generally, in the area of human-human communication.
58

Modelling uncertainty in touch interaction

Weir, Daryl January 2014 (has links)
Touch interaction is an increasingly ubiquitous input modality on modern devices. It appears on devices including phones, tablets, smartwatches and even some recent laptops. Despite its popularity, touch as an input technology suffers from a high level of measurement uncertainty. This stems from issues such as the ‘fat finger problem’, where the soft pad of the finger creates an ambiguous contact region with the screen that must be approximated by a single touch point. In addition to these physical uncertainties, there are issues of uncertainty of intent when the user is unsure of the goal of a touch. Perhaps the most common example is when typing a word, the user may be unsure of the spelling, leading to touches on the wrong keys. The uncertainty of touch leads to an offset between the user’s intended target and the touch position recorded by the device. While numerous models have been proposed to model and correct for these offsets, existing techniques in general have assumed that the offset is a deterministic function of the input. We observe that this is not the case — touch also exhibits a random component. We propose in this dissertation that this property makes touch an excellent target for analysis using probabilistic techniques from machine learning. These techniques allow us to quantify the uncertainty expressed by a given touch, and the core assertion of our work is that this allows useful improvements to touch interaction to be obtained. We show this through a number of studies. In Chapter 4, we apply Gaussian Process regression to the touch offset problem, producing models which allow very accurate selection of small targets. In the process, we observe that offsets are both highly non-linear and highly user-specific. In Chapter 5, we make use of the predictive uncertainty of the GP model when applied to a soft keyboard — this allows us to obtain key press probabilities which we combine with a language model to perform autocorrection. In Chapter 6, we introduce an extension to this framework in which users are given direct control over the level of uncertainty they express. We show that not only can users control such a system successfully, but they can also use it to improve their performance when typing words not known to the language model. Finally, in Chapter 7 we show that users’ touch behaviour is significantly different across different tasks, particularly for typing compared to pointing tasks. We use this to motivate an investigation of the use of a sparse regression algorithm, the Relevance Vector Machine, to train offset models using small amounts of data.
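A hedged sketch of the Chapter 4 idea, using scikit-learn rather than the thesis's own implementation and synthetic data in place of a real touch log: learn a user-specific mapping from recorded touch position to offset, then correct new touches and read off a per-touch uncertainty:

```python
# A minimal GP offset model in the spirit of the thesis; the kernel
# choice, noise level, and synthetic offset function are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
targets = rng.uniform(0, 1, size=(200, 2))         # intended touch points
# A non-linear, user-specific systematic offset plus per-touch noise:
# the deterministic part plus the random component noted in the abstract.
offsets = 0.03 * np.sin(4 * targets) + rng.normal(0, 0.01, targets.shape)
touches = targets + offsets                         # what the sensor records

kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel).fit(touches, targets - touches)

new_touch = np.array([[0.4, 0.6]])
mean_offset, std = gp.predict(new_touch, return_std=True)
corrected = new_touch + mean_offset
# std quantifies this touch's uncertainty, the quantity that Chapter 5
# turns into key-press probabilities for the soft keyboard.
print(corrected, std)
```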
59

A semi-automated FAQ retrieval system for HIV/AIDS

Thuma, Edwin January 2015 (has links)
This thesis describes a semi-automated FAQ retrieval system that can be queried by users through short text messages on low-end mobile phones to provide answers to HIV/AIDS-related queries. First, we address the issue of result presentation on low-end mobile phones by proposing an iterative interaction retrieval strategy in which the user engages with the FAQ retrieval system in the question answering process. At each iteration, the system returns only one question-answer pair to the user, and the iterative process terminates once the user's information need has been satisfied. Since the proposed system is iterative, this thesis attempts to reduce the number of iterations (search length) between the users and the system so that users do not abandon the search process before their information need has been satisfied. Moreover, we conducted a user study to determine the number of iterations that users are willing to tolerate before abandoning the iterative search process. We subsequently used the bad abandonment statistics from this study to develop an evaluation measure for estimating the probability that any random user will be satisfied when using our FAQ retrieval system. In addition, we used a query log and its click-through data to address three main FAQ document collection deficiency problems in order to improve the retrieval performance and the probability that any random user will be satisfied when using our FAQ retrieval system. Conclusions are derived concerning whether we can reduce the rate at which users abandon their search before their information need has been satisfied by using information from previous searches to: address the term mismatch problem between the users' SMS queries and the relevant FAQ documents in the collection; selectively rank the FAQ documents according to how often they have been previously identified as relevant by users for a particular query term; and identify those queries that do not have a relevant FAQ document in the collection. In particular, we proposed a novel template-based approach that uses queries from a query log for which the true relevant FAQ documents are known to enrich the FAQ documents with additional terms in order to alleviate the term mismatch problem. These terms are added as a separate field in a field-based model using two different proposed enrichment strategies, namely the Term Frequency and the Term Occurrence strategies. This thesis thoroughly investigates the effectiveness of the aforementioned FAQ document enrichment strategies using three different field-based models. Our findings suggest that we can improve the overall recall and the probability that any random user will be satisfied by enriching the FAQ documents with additional terms from queries in our query log. Moreover, our investigation suggests that it is important to use an FAQ document enrichment strategy that takes into consideration the number of times a term occurs in the query when enriching the FAQ documents. We subsequently show that our proposed enrichment approach for alleviating the term mismatch problem generalises well to other datasets. Through the evaluation of our proposed approach for selectively ranking the FAQ documents, we show that we can improve the retrieval performance and the probability that any random user will be satisfied when using our FAQ retrieval system by incorporating the click popularity score of a query term t on an FAQ document d into the scoring and ranking process. Our results generalised well to a new dataset.
However, when we deployed the click popularity score of a query term t on an FAQ document d on an enriched FAQ document collection, we saw a decrease in the retrieval performance and the probability that any random user will be satisfied when using our FAQ retrieval system. Furthermore, we used our query log to build a binary classifier for detecting those queries that do not have a relevant FAQ document in the collection (Missing Content Queries (MCQs)). Before building such a classifier, we empirically evaluated several feature sets in order to determine the best combination of features for building a model that yields the best classification accuracy in identifying the MCQs and the non-MCQs. Using a different dataset, we show that we can improve the overall retrieval performance and the probability that any random user will be satisfied when using our FAQ retrieval system by deploying an MCQ detection subsystem in our FAQ retrieval system to filter out the MCQs. Finally, this thesis demonstrates that correcting spelling errors can help improve the retrieval performance and the probability that any random user will be satisfied when using our FAQ retrieval system. We tested our FAQ retrieval system with two different testing sets, one containing the original SMS queries and the other containing the SMS queries manually corrected for spelling errors. Our results show a significant improvement in the retrieval performance and the probability that any random user will be satisfied when using our FAQ retrieval system.
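The two enrichment strategies can be sketched as follows. The semantics assumed here (Term Frequency keeps per-query term counts; Term Occurrence counts each term at most once per query) are one plausible reading of the strategy names, and the data structures are illustrative rather than the thesis's field-based models:

```python
# A minimal sketch of FAQ document enrichment from a query log whose
# queries have known relevant FAQ documents; field names are illustrative.
from collections import Counter

def enrich(faq_fields, query_log, strategy="tf"):
    """faq_fields: doc_id -> Counter for the separate enrichment field.
    query_log: list of (query_terms, relevant_doc_id) pairs."""
    for terms, doc_id in query_log:
        field = faq_fields.setdefault(doc_id, Counter())
        if strategy == "tf":          # Term Frequency: keep within-query counts
            field.update(terms)
        else:                         # Term Occurrence: once per query
            field.update(set(terms))
    return faq_fields

log = [(["cd4", "count", "count"], "faq7"), (["cd4", "test"], "faq7")]
print(enrich({}, log, "tf"))   # faq7: cd4=2, count=2, test=1
print(enrich({}, log, "to"))   # faq7: cd4=2, count=1, test=1
```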
60

A toolkit of resource-sensitive, multimodal widgets

Crease, Murray January 2001 (has links)
This thesis describes an architecture for a toolkit of user interface components which allows the presentation of the widgets to use multiple output modalities - typically, audio and visual. Previously there was no toolkit of widgets which would use the most appropriate presentational resources according to their availability and suitability. Typically, the use of different forms of presentation was limited to graphical feedback, with other forms of presentation, such as sound, being added in an ad hoc fashion with only limited scope for managing the use of the different resources. A review of existing auditory interfaces provided some requirements that the toolkit would need to fulfil for it to be effective. In addition, it was found that a strand of research in this area required further investigation to ensure that a full set of requirements was captured: no formal evaluation of audio being used to provide background information had been undertaken. A sonically-enhanced progress indicator was therefore designed and evaluated, showing that audio feedback could be used as a replacement for visual feedback rather than simply as an enhancement. The experiment also completed the requirements capture for the design of the toolkit of multimodal widgets. A review of existing user interface architectures and systems, with particular attention paid to the way they manage multiple output modalities, provided some design guidelines for the architecture of the toolkit. Building on these guidelines, a design for the toolkit which fulfils all the previously captured requirements is presented. An implementation of this design is given, with an evaluation of the implementation showing that it fulfils all the requirements of the design.
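A resource-sensitive widget can be sketched as below, assuming a central manager that grants output modalities according to availability; the class and method names are illustrative, not the toolkit's actual API:

```python
# A minimal sketch of a resource-sensitive, multimodal widget: the widget
# requests modalities and renders through whichever ones it is granted,
# so audio can replace visual feedback rather than merely enhance it.

class ResourceManager:
    def __init__(self, available):
        self.available = set(available)       # e.g. {"visual", "audio"}

    def grant(self, requested):
        """Return the subset of requested modalities currently usable."""
        return [m for m in requested if m in self.available]

class ProgressIndicator:
    """A widget in the spirit of the sonically-enhanced progress indicator."""
    def __init__(self, manager):
        self.manager = manager

    def update(self, fraction):
        for modality in self.manager.grant(["visual", "audio"]):
            if modality == "visual":
                print(f"[{'#' * int(fraction * 10):<10}] {fraction:.0%}")
            else:
                # Stand-in for an earcon whose pitch tracks progress.
                print(f"(earcon: pitch ~ {200 + int(fraction * 600)} Hz)")

# When the display is busy or unavailable, the manager grants only audio.
ProgressIndicator(ResourceManager({"audio"})).update(0.4)
```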
