About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Massively Parallel Computing and Polynomial GCD's

Santavy, Martin January 1987 (has links)
142

Models of higher-order, type-safe, distributed computation over autonomous persistent object stores

Mira da Silva, Miguel Leitão Bignolas January 1996 (has links)
A remote procedure call (RPC) mechanism permits the calling of procedures in another address space. RPC is a simple but highly effective mechanism for interprocess communication and nowadays enjoys great popularity as a tool for building distributed applications. This popularity is partly a result of its overall simplicity, but also partly a consequence of more than 20 years of research in transparent distribution that has failed to deliver systems that meet the expectations of real-world application programmers. During the same 20 years, persistent systems have proved their suitability for building complex database applications by seamlessly integrating features traditionally found in database management systems into the programming language itself. Some research effort has been invested in distributed persistent systems, but the outcomes commonly suffer from the same problems found with transparent distribution. In this thesis I claim that a higher-order persistent RPC is useful for building distributed persistent applications. The proposed mechanism is: realistic, in the sense that it uses current technology and tolerates partial failures; understandable by application programmers; and general enough to support the development of many classes of distributed persistent applications. In order to demonstrate the validity of these claims, I propose and have implemented three models for distributed higher-order computation over autonomous persistent stores. Each model has successively exposed new problems which have then been overcome by the next model. Together, the three models provide a general yet simple higher-order persistent RPC that is able to operate in realistic environments with partial failures. The real strength of this thesis is the demonstration of realism and simplicity. A higher-order persistent RPC was not only implemented but also used by programmers without experience of programming distributed applications. Furthermore, a distributed persistent application has been built using these models which would not have been feasible with a traditional (non-persistent) programming language.
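For readers unfamiliar with plain RPC, here is a minimal sketch using Python's standard-library XML-RPC rather than the thesis's own mechanism; it shows only the basic cross-address-space call that the higher-order persistent RPC proposed above builds on.

```python
# A minimal sketch of the basic RPC idea (not the thesis's system).
# A daemon thread stands in for the "remote" address space.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    return a + b  # executes in the server's address space

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy("http://localhost:8000")
print(proxy.add(2, 3))  # looks like a local call, runs on the server -> 5
```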
143

Performance of hierarchically flexible adaptive computer architecture applied to sorting problems

Ferng, Ming-Jehn, 1958- January 1987 (has links)
In this thesis, existing models of adaptive computer architecture were modified to adapt actual sorting problems to a "divide 'n' conquer" (DQ) coordinator-type configuration in which the number of child processors was expanded from three to four. Two hire/fire strategies, one using the number of packets waiting in queue and the other using the average turnaround time, were applied to maintain the hierarchical tree structure. More than 1200 simulation runs were analyzed and compared, finding that the first strategy was best at fast packet arrival rates and the second strategy was best at slow packet arrival rates. Comparing the hire/fire signal generation policies, the "fc-root" policy was best and the "root-fp" policy was worst. When comparing the effect of variable weighting factors in processors, using a smaller weighting factor in either the "partitioner" for the first strategy or the "f-computer" for the second strategy may improve the system performance. (Abstract shortened with permission of author.)
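As a purely illustrative aside, the two hire/fire triggers the abstract compares might be sketched as follows; the thresholds, names and return values are assumptions of ours, not values from the thesis.

```python
# Hypothetical sketch of the two hire/fire triggers; thresholds are invented.
def hire_fire_by_queue(queue_len, hire_at=8, fire_at=2):
    """Strategy 1: decide from the number of packets waiting in queue."""
    if queue_len > hire_at:
        return "hire"   # expand the tree with another child processor
    if queue_len < fire_at:
        return "fire"   # release an idle child processor
    return "keep"

def hire_fire_by_turnaround(avg_turnaround, hire_at=1.5, fire_at=0.5):
    """Strategy 2: decide from the average packet turnaround time."""
    if avg_turnaround > hire_at:
        return "hire"
    if avg_turnaround < fire_at:
        return "fire"
    return "keep"

print(hire_fire_by_queue(10), hire_fire_by_turnaround(0.3))  # hire fire
```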
144

Multi-objective tools for the vehicle routing problem with time windows

Castro-Gutierrez, Juan January 2012 (has links)
Most real-life problems involve the simultaneous optimisation of two or more, usually conflicting, objectives. Researchers have put a continuous effort into solving these problems in many different areas, such as engineering, finance and computer science. Over time, thanks to the increase in processing power, researchers have created methods which have become increasingly sophisticated. Most of these methods have been based on the notion of Pareto dominance, which assumes, sometimes erroneously, that the objectives have no known ranking of importance. The Vehicle Routing Problem with Time Windows (VRPTW) is a logistics problem which in real-life applications appears to be multi-objective. This problem consists of designing the optimal set of routes to serve a number of customers within certain time slots. Despite this problem’s high applicability to real-life domains (e.g. waste collection, fast-food delivery), most research in this area has been conducted with hand-made datasets. These datasets sometimes have a number of unrealistic features (e.g. the assumption that one unit of travel time corresponds to one unit of travel distance) and are therefore not adequate for the assessment of optimisers. Furthermore, very few studies have focused on the multi-objective nature of the VRPTW. That is, very few have studied how the optimisation of one objective affects the others. This thesis proposes a number of novel tools (methods + dataset) to address the above-mentioned challenges: 1) an agent-based framework for cooperative search, 2) a novel multi-objective ranking approach, 3) a new dataset for the VRPTW, 4) a study of the pair-wise relationships between five common objectives in VRPTW, and 5) a simplified Multi-objective Discrete Particle Swarm Optimisation for the VRPTW.
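To make the notion of Pareto dominance concrete, here is a minimal sketch for minimisation objectives such as those arising in the VRPTW; the function name and the example objective vectors are ours, not the thesis's.

```python
# Standard Pareto-dominance test for minimisation objectives.
def dominates(a, b):
    """True if solution `a` Pareto-dominates `b`: no worse on every
    objective and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# Two candidate VRPTW solutions scored on (vehicles, distance, makespan):
print(dominates((3, 412.0, 95.0), (3, 430.5, 97.2)))  # True: better on two, equal on one
print(dominates((3, 412.0, 95.0), (2, 430.5, 97.2)))  # False: neither dominates the other
```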
145

Compiling concurrency correctly : verifying software transactional memory

Hu, Liyang January 2013 (has links)
Concurrent programming is notoriously difficult, but, with multi-core processors becoming the norm, it is now a reality that every programmer must face. Concurrency has traditionally been managed using low-level mutual exclusion locks, which are error-prone and do not naturally support the compositional style of programming that is becoming indispensable for today's large-scale software projects. A novel, high-level approach that has emerged in recent years is that of software transactional memory (STM), which avoids the need for explicit locking, instead presenting the programmer with a declarative approach to concurrency. However, its implementation is much more complex and subtle, and ensuring its correctness places significant demands on the compiler writer. This thesis considers the problem of formally verifying an implementation of STM. Utilising a minimal language incorporating only the features that we are interested in studying, we first explore various STM design choices, along with the issue of compiler correctness via the use of automated testing tools. Then we outline a new approach to concurrent compiler correctness using the notion of bisimulation, implemented using the Agda theorem prover. Finally, we show how bisimulation can be used to establish the correctness of a low-level implementation of software transactional memory.
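As a rough illustration of the optimistic, validate-and-commit idea behind STM, here is a toy sketch in Python; it is in no way the verified implementation the thesis studies, and all names are ours.

```python
# Toy STM: transactions log reads/writes, then validate and commit atomically.
import threading

_commit_lock = threading.Lock()

class TVar:
    """A transactional variable with a version counter for validation."""
    def __init__(self, value):
        self.value, self.version = value, 0

def atomically(transaction):
    while True:  # retry until the transaction commits without conflict
        reads, writes = {}, {}

        def read(tv):
            if tv in writes:                     # read-your-own-writes
                return writes[tv]
            reads.setdefault(tv, tv.version)     # record version at first read
            return tv.value

        def write(tv, v):
            writes[tv] = v                       # buffer until commit

        result = transaction(read, write)
        with _commit_lock:
            if all(tv.version == ver for tv, ver in reads.items()):
                for tv, v in writes.items():     # commit: publish and bump versions
                    tv.value, tv.version = v, tv.version + 1
                return result
        # a concurrent commit invalidated our reads: rerun the transaction

# Example: an atomic transfer between two transactional variables.
a, b = TVar(100), TVar(0)
atomically(lambda read, write: (write(a, read(a) - 30), write(b, read(b) + 30)))
print(a.value, b.value)  # 70 30
```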
146

Value-gradient learning

Fairbank, Michael January 2014 (has links)
This thesis presents an Adaptive Dynamic Programming method, Value-Gradient Learning, for solving a control optimisation problem, using a neural network to represent a critic function in a large continuous-valued state space. The algorithm developed, called VGL(λ), requires a learned differentiable model of the environment. VGL(λ) is an extension of Dual Heuristic Programming (DHP) to include a bootstrapping parameter, λ, analogous to that used in the reinforcement learning algorithm TD(λ). Online and batch-mode implementations of the algorithm are provided, and its theoretical relationships to its precursor algorithms, DHP and TD(λ), are described. A theoretical result is given which shows that to achieve trajectory optimality in a continuous-valued state space, the critic must learn the value-gradient, and this fact affects any critic-learning algorithm. The connection of this result to Pontryagin's Minimum Principle is made clear. Hence it is proven that learning this value-gradient directly will obviate the need for local exploration of the value function, and this motivates value-gradient learning methods in terms of automatic local value exploration and improved learning speed. Empirical results for the algorithm are given for several benchmark problems, and the improved speed, convergence, and ability to work without local value exploration are demonstrated in comparison to its precursor algorithms, TD(λ) and DHP. A convergence proof for one instance of the VGL(λ) algorithm is given, which is valid for control problems with a greedy policy and a general nonlinear function approximator to represent the critic. This is a non-trivial accomplishment, since most or all other related algorithms can be made to diverge under similar conditions, and new divergence proofs demonstrating this for certain algorithms are given in the thesis. Several technical problems must be overcome to make a robust VGL(λ) implementation, and the solutions are described. These include implementing an efficient greedy policy, implementing trajectory clipping correctly, and the efficient computation of second-order gradients with a neural network.
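For reference, the standard TD(λ) critic update named above as a precursor is, in textbook notation (not the thesis's own formulation):

```latex
% TD(lambda) update for a critic \hat{V} with weights w, step size \alpha:
\delta_t = r_{t+1} + \gamma \hat{V}(s_{t+1}; w) - \hat{V}(s_t; w), \qquad
e_t = \gamma \lambda \, e_{t-1} + \nabla_w \hat{V}(s_t; w), \qquad
w \leftarrow w + \alpha \, \delta_t \, e_t
```

VGL(λ) applies the analogous λ-blended bootstrapping to the critic's value-gradient rather than to the value itself.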
147

Security aware service composition

Pino, Luca January 2015 (has links)
Security assurance of Service-Based Systems (SBS) is a necessity and a key challenge in Service Oriented Computing. Several approaches have been introduced to take care of the security aspect of SBSs, from the design to the implementation stages. Such solutions, however, require expertise with regard to security languages and technologies or modelling formalisms. Furthermore, existing approaches allow only limited verification of security properties over a service composition, as they focus only on specific properties and require expressing compositions and properties in a model-based formalism. In this thesis we present a unified security-aware service composition approach capable of validating arbitrary security properties. This approach allows SBS designers to build secure applications without the need to learn formal models, thanks to security descriptors for services, be they self-appointed or certified by an external third party. More specifically, the framework presented in this thesis allows expressing security requirements for a service composition, propagating them to requirements for the single activities of the composition, and checking security requirements against security service descriptors. The approach relies on the new core concept of secure composition patterns, which model proven implications of security requirements within an orchestration pattern. The framework has been implemented and tested extensively in both SBS design-time and runtime scenarios, based respectively on the Eclipse BPEL Designer and the Runtime Service Discovery Tool.
148

The routine health information system in Palestine : determinants and performance

Mimi, Y. January 2015 (has links)
A health information system (HIS) plays an important role in ensuring that reliable and timely health information is available for operational and strategic decision making that saves lives and enhances health. Despite their importance for evidence-based decisions, health information systems in many developing countries are weak, fragmented and often focused exclusively on disease-specific programme areas. There is a broad consensus in the literature that strengthening of national HIS is desirable. An integrated HIS will provide the basis for public health professionals to look at the health system from broader, more comprehensive points of view. The routine health information system (RHIS) in Palestine does not store data at the case level but aggregates them at the Facility level only. Additionally, the establishment of multiple information databases in different Ministry of Health (MoH) departments causes incompatibility between the different databases and ineffective use of information. This study examines the availability and the utilisation of information in support of health care organisation and delivery in Palestine, which entailed an assessment of the current situation to identify determinants of RHIS performance. The Palestinian Ministry of Health at the Ministry, District and Facility levels was the study setting, while systems and staff operating at these three levels were the target population. Employing a purposive sampling method, a total of 123 respondents participated in the study. The Performance of Routine Information System Management (PRISM) framework and its four-tool package was used to assess the performance of the RHIS at the Palestinian MoH. The PRISM framework empirically tests the relationships among technical, behavioural and organisational determinants of health management information system (HMIS) process and performance. Data quality is measured in terms of accuracy and completeness at the Facility level; at the Ministry HMIS and District levels it is measured in terms of timeliness, data accuracy and completeness. Data quality was good at the Ministry HMIS level. At the District level, data completeness and accuracy were good, while timeliness was immeasurable on the basis of currently adopted procedures. At the Facility level, data completeness and data accuracy were only acceptable. Use of information was poor at all three levels: Ministry HMIS, District and Facility. The display of updated data on mothers' health, child health, Facility utilisation, and disease surveillance at both the District and Facility levels was poor. RHIS processes at the Ministry HMIS level were good; however, they were poor at the District and Facility levels. Overall, technical and behavioural determinants fared poorly at all three levels, while organisational determinants at the Ministry HMIS level were very good for RHIS governance and planning but poor for supervision, training and finance. These findings provide evidence of the need to establish a national RHIS whose utilisation is made legally compulsory for all. Investing heavily and systematically in building relevant staff capacity and technical infrastructure to improve performance is a key conclusion from this project.
149

The SSPNet-Mobile Corpus : from the detection of non-verbal cues to the inference of social behaviour during mobile phone conversations

Polychroniou, Anna January 2014 (has links)
Mobile phones are one of the main channels of communication in contemporary society. However, the effect of the mobile phone on both the process of, and the non-verbal behaviours used during, conversations mediated by this technology remains poorly understood. This thesis investigates the role of the phone in the negotiation process, as well as the automatic analysis of non-verbal behavioural cues during conversations using mobile telephones, following the Social Signal Processing approach. The work in this thesis includes the collection of a corpus of 60 mobile phone conversations involving 120 subjects, the development of methods for the detection of non-verbal behavioural events (laughter, fillers, speech and silence) and the inference of characteristics influencing social interactions (personality traits and conflict handling style) from speech and movements while using the mobile telephone, as well as the analysis of several factors that influence the outcome of decision-making processes while using mobile phones (gender, age, personality, conflict handling style and caller versus receiver role). The findings show that it is possible to recognise behavioural events at levels well above chance by employing statistical language models, and that personality traits and conflict handling styles can be partially recognised. Among the factors analysed, participant role (caller versus receiver) was the most important in determining the outcome of negotiation processes in the case of disagreement between parties. Finally, the corpus collected for the experiments (the SSPNet-Mobile Corpus) has been used in an international benchmarking campaign and constitutes a valuable resource for future research in Social Signal Processing and, more generally, in the area of human-human communication.
150

Modelling uncertainty in touch interaction

Weir, Daryl January 2014 (has links)
Touch interaction is an increasingly ubiquitous input modality on modern devices. It appears on devices including phones, tablets, smartwatches and even some recent laptops. Despite its popularity, touch as an input technology suffers from a high level of measurement uncertainty. This stems from issues such as the ‘fat finger problem’, where the soft pad of the finger creates an ambiguous contact region with the screen that must be approximated by a single touch point. In addition to these physical uncertainties, there are issues of uncertainty of intent, when the user is unsure of the goal of a touch. Perhaps the most common example is when typing a word: the user may be unsure of the spelling, leading to touches on the wrong keys. The uncertainty of touch leads to an offset between the user’s intended target and the touch position recorded by the device. While numerous models have been proposed to model and correct for these offsets, existing techniques have in general assumed that the offset is a deterministic function of the input. We observe that this is not the case: touch also exhibits a random component. We propose in this dissertation that this property makes touch an excellent target for analysis using probabilistic techniques from machine learning. These techniques allow us to quantify the uncertainty expressed by a given touch, and the core assertion of our work is that this allows useful improvements to touch interaction to be obtained. We show this through a number of studies. In Chapter 4, we apply Gaussian Process regression to the touch offset problem, producing models which allow very accurate selection of small targets. In the process, we observe that offsets are both highly non-linear and highly user-specific. In Chapter 5, we make use of the predictive uncertainty of the GP model when applied to a soft keyboard; this allows us to obtain key press probabilities which we combine with a language model to perform autocorrection. In Chapter 6, we introduce an extension to this framework in which users are given direct control over the level of uncertainty they express. We show that not only can users control such a system successfully, they can use it to improve their performance when typing words not known to the language model. Finally, in Chapter 7 we show that users’ touch behaviour is significantly different across different tasks, particularly for typing compared to pointing tasks. We use this to motivate an investigation of the use of a sparse regression algorithm, the Relevance Vector Machine, to train offset models using small amounts of data.
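As a sketch of the kind of offset model described above, the following fits a Gaussian Process to synthetic touch data with scikit-learn; the data, names and kernel choices are our assumptions, not the author's code.

```python
# Sketch: GP regression from recorded touches to touch offsets, with
# predictive uncertainty. All data here is synthetic and illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
targets = rng.uniform(0, 100, size=(200, 2))           # intended targets (mm)
offset = np.stack([2.0 + 0.05 * targets[:, 0],          # synthetic, user-specific offset
                   -1.5 + 0.02 * targets[:, 1]], axis=1)
touches = targets + offset + rng.normal(0, 1.0, (200, 2))  # recorded touches: offset + noise

# RBF captures the non-linear offset; WhiteKernel the random component
# the abstract highlights.
gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(1.0), normalize_y=True)
gp.fit(touches, targets - touches)                      # learn offset as a function of touch

correction, std = gp.predict(np.array([[50.0, 50.0]]), return_std=True)
print(correction, std)  # predicted offset at a new touch, with its uncertainty
```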
