51

Parallel Markov Chain Monte Carlo

Byrd, Jonathan Michael Robert January 2010 (has links)
The increasing availability of multi-core and multi-processor architectures provides new opportunities for improving the performance of many computer simulations. Markov Chain Monte Carlo (MCMC) simulations are widely used for approximate counting problems, Bayesian inference and as a means for estimating very high-dimensional integrals. As such, MCMC has found a wide variety of applications in fields including computational biology and physics, financial econometrics, machine learning and image processing. This thesis presents a number of new methods for reducing the runtime of Markov Chain Monte Carlo simulations by using SMP machines and/or clusters. Two of the methods speculatively perform iterations in parallel, reducing the runtime of MCMC programs whilst producing statistically identical results to conventional sequential implementations. The other methods apply only to problem domains that can be presented as an image, and involve dividing the image into subimages that can be processed with some degree of independence. Where possible the thesis includes a theoretical analysis of the reduction in runtime that may be achieved using these techniques under perfect conditions, and in all cases the methods are tested and compared on a selection of multi-core and multi-processor architectures. A framework is provided to allow easy construction of MCMC applications that implement these parallelisation methods.
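To illustrate the speculative-execution idea in concrete terms, here is a minimal Python sketch of one level of speculation in a Metropolis-Hastings sampler. It is not the thesis's framework: the toy normal target, the Gaussian proposal and the two-worker pool are assumptions for illustration only. The accept/reject rule is unchanged, so the output chain is statistically identical to a sequential run; speculation only hides the latency of one expensive evaluation.

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def log_target(x):
    # Toy stand-in for an expensive log-likelihood: a standard normal density.
    return -0.5 * x * x

def propose(x, scale=1.0):
    return x + random.gauss(0.0, scale)

def speculative_mh(n_iters, x0=0.0):
    """Metropolis-Hastings with one level of speculation: while the candidate
    for the current step is evaluated, the candidate that would follow *if it
    is accepted* is evaluated on a second worker."""
    x, lx = x0, log_target(x0)
    chain = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        i = 0
        while i < n_iters:
            y = propose(x)                      # candidate for this step
            z = propose(y)                      # candidate for the next step, if y is accepted
            ly_future = pool.submit(log_target, y)
            lz_future = pool.submit(log_target, z)   # speculative evaluation
            ly, lz = ly_future.result(), lz_future.result()

            if math.log(random.random()) < ly - lx:
                x, lx = y, ly                   # accepted: the speculation pays off
                chain.append(x); i += 1
                if i < n_iters:
                    if math.log(random.random()) < lz - lx:
                        x, lx = z, lz           # reuse the prefetched evaluation
                    chain.append(x); i += 1
            else:
                chain.append(x); i += 1         # rejected: the speculative work is wasted
    return chain

samples = speculative_mh(10_000)
print(sum(samples) / len(samples))              # close to 0 for the toy target
```

The speculative evaluation of `z` only pays off when `y` is accepted; the trade-off between wasted work and hidden latency is what the theoretical runtime analyses mentioned above quantify.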
52

Parallelisation for data-intensive applications over peer-to-peer networks

Chen, Xinuo January 2009 (has links)
In data-intensive computing, the properties of an application's input data largely determine its runtime performance. Those properties include the size of the data, the relationships inside the data, and so forth. There is a class of data-intensive applications (BLAST, SETI@home, Folding@Home and so on) whose performance depends solely on the amount of input data. Another important characteristic of these applications is that the input data can be split into units that are not related to each other during the runs of the application. This characteristic allows such applications to be parallelised by splitting the input data into units and running the application on different computer nodes, each handling a portion of the units. SETI@home and Folding@Home have been successfully parallelised over peer-to-peer networks. However, they suffer from the problems of a single point of failure and poor scalability. In order to solve these problems, we choose BLAST as our example data-intensive application and parallelise BLAST over a fully distributed peer-to-peer network. BLAST is a popular bioinformatics toolset which can be used to compare two DNA sequences. The major usage of BLAST is searching a database for sequences similar to a query so as to identify whether the query sequences are new. When comparing a single pair of sequences, BLAST is efficient. However, due to the growing size of the databases, executing BLAST jobs locally produces prohibitively poor performance. Thus, methods for parallelising BLAST are sought. Traditional BLAST parallelisation approaches are all based on clusters. Clusters employ a number of computing nodes and high-bandwidth interlinks between nodes. Cluster-based BLAST exhibits higher performance; nevertheless, clusters suffer from limited resources and scalability problems. Clusters are expensive, prohibitively so when the growth of the sequence databases is taken into account, and increasing the number of nodes to keep pace with the growth of BLAST databases is costly and complicated. Hence a peer-to-peer-based BLAST service is required. This thesis demonstrates our parallelisation of BLAST over peer-to-peer networks (termed ppBLAST), which utilises the free storage and computing resources in peer-to-peer networks to complete BLAST jobs in parallel. In order to achieve this goal, we build three layers in ppBLAST, each of which is responsible for particular functions. The bottom layer is a DHT infrastructure with support for range queries; it provides an efficient range-based lookup service and storage for BLAST tasks. The middle layer is the BitTorrent-based database distribution. The upper layer is the core of ppBLAST, which schedules and dispatches tasks to peers. For each layer, we conduct comprehensive research and the achievements are presented in this thesis. For the DHT layer, we design and implement our DAST-DHT. We analyse balancing, the maximum number of children and the accuracy of the range query. We also compare DAST with other range-query methodologies and show that when the number of children is adjusted to more than two, DAST outperforms the alternatives. For the BitTorrent-like database distribution layer, we investigate the relationship between seeding strategies and selfish leechers (freeriders and exploiters). We conclude that OSS works better than TSS in a normal situation.
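The unit-based decomposition that makes this class of application parallelisable can be sketched in a few lines of Python. This is not ppBLAST itself: the toy `score_alignment` function, the chunking scheme and the local process pool stand in for the real BLAST toolset, the DHT task storage and the peer scheduling layers described above.

```python
from concurrent.futures import ProcessPoolExecutor

def score_alignment(query, subject):
    """Toy stand-in for a BLAST-style comparison: the length of the longest
    common substring.  A real deployment would invoke the BLAST toolset."""
    best = 0
    for i in range(len(query)):
        for j in range(len(subject)):
            k = 0
            while i + k < len(query) and j + k < len(subject) and query[i + k] == subject[j + k]:
                k += 1
            best = max(best, k)
    return best

def search_unit(args):
    """One independent work unit: a query scored against one database chunk."""
    query, chunk = args
    return max(((score_alignment(query, s), s) for s in chunk), default=(0, ""))

def parallel_search(query, database, n_units=4):
    """Split the database into units that share no state, farm them out,
    and merge the per-unit best hits -- the same decomposition that lets
    this class of application run over peer-to-peer nodes."""
    chunks = [database[i::n_units] for i in range(n_units)]
    with ProcessPoolExecutor() as pool:
        results = pool.map(search_unit, [(query, c) for c in chunks])
    return max(results)

if __name__ == "__main__":
    db = ["GATTACA", "ACGTACGT", "TTGACCA", "CATCATCAT"]
    print(parallel_search("TACGTAC", db))   # best (score, subject) over all units
```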
53

High-fidelity rendering on shared computational resources

Aggarwal, Vibhor January 2010 (has links)
The generation of high-fidelity imagery is a computationally expensive process, and parallel computing has traditionally been employed to alleviate this cost. However, traditional parallel rendering has been restricted to expensive shared-memory or dedicated distributed processors. In contrast, parallel computing on shared resources such as a computational or a desktop grid offers a low-cost alternative. Prevalent rendering systems are currently incapable of seamlessly handling such shared resources, as they suffer from high latencies, restricted bandwidth and volatility. The conventional approach of rescheduling failed jobs in a volatile environment inhibits performance through redundant computation. Instead, clever task subdivision along with image reconstruction techniques provides an unrestrictive fault-tolerance mechanism, which is highly suitable for high-fidelity rendering. This thesis presents novel fault-tolerant parallel rendering algorithms for effectively tapping the enormous inexpensive computational power provided by shared resources. A first-of-its-kind system for fully dynamic high-fidelity interactive rendering on idle resources is presented, which is key to providing immediate feedback on the changes made by a user. The system achieves interactivity by monitoring and adapting computations according to run-time variations in the computational power, and employs a spatio-temporal image reconstruction technique for enhancing the visual fidelity. Furthermore, the algorithms described for time-constrained offline rendering of still images and animation sequences make it possible to deliver results within a user-defined time limit. These novel methods enable the employment of variable resources in deadline-driven environments.
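As a rough illustration of why task subdivision plus reconstruction tolerates volatile workers better than rescheduling, the following sketch (assuming numpy; not the thesis's renderer) interleaves pixel assignments across workers and fills pixels lost to a failed node from their rendered neighbours.

```python
import numpy as np

def assign_interleaved(width, height, n_workers):
    """Assign pixels to workers along diagonal stripes so that, if a volatile
    node disappears, its pixels are scattered across the image rather than
    forming one solid missing block; every 4-neighbour of a lost pixel
    belongs to a different worker."""
    ys, xs = np.mgrid[0:height, 0:width]
    return (xs + ys) % n_workers          # worker id per pixel

def reconstruct(image, missing_mask):
    """Fill pixels lost to a failed worker with the mean of their rendered
    4-neighbours -- a crude stand-in for the spatio-temporal reconstruction
    used to preserve visual fidelity."""
    out = image.copy()
    h, w = image.shape[:2]
    for y, x in zip(*np.nonzero(missing_mask)):
        neighbours = [image[ny, nx]
                      for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                      if 0 <= ny < h and 0 <= nx < w and not missing_mask[ny, nx]]
        if neighbours:
            out[y, x] = np.mean(neighbours, axis=0)
    return out

# Example: a 64x64 greyscale render where worker 2 (of 4) failed.
img = np.random.rand(64, 64)
owners = assign_interleaved(64, 64, 4)
recovered = reconstruct(img, owners == 2)
```

Because a failed worker's pixels are scattered rather than contiguous, simple interpolation recovers a plausible image without redundant recomputation; the spatio-temporal reconstruction referred to above plays this role with far higher fidelity.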
54

Varqa : a functional query language based on an algebraic approach and conventional mathematical notation

Golshani, Forouzan January 1982 (has links)
We propose a functional query language for databases in which both syntax and semantics are based on conventional mathematics. We argue that database theory should not be separated from other fields of Computer Science, and that database languages should have the same properties as those of other non-procedural languages. The data are represented in our database as a collection of sets, and the relationships between the data are represented by functions mapping these sets to each other. A database is therefore a many-sorted algebra, i.e. a collection of indexed sets and indexed operations. As in abstract data type specification, we specify the consequences of applying operations to the data without reference to any particular internal structure of the data. A query is simply an expression which is built up from symbols in the signature of the algebra and which complies with the formation rules given by the language. The meaning of a query is the value which is assigned to it by the algebra. There are several ways of extending our language; two are studied here. The first extension is to allow queries in which sets are defined inductively (i.e. recursively). This mechanism is essential for queries dealing with transitive closures over some interrelated objects. Secondly, since incomplete information is common to many databases, we extend our language to handle partially available data. One main principle guides our extensions: ‘whatever information is added to an incomplete database, subsequent answers to queries must not be less informative than previously’. Finally, we show the correspondence between Varqa and methods used in current database software. A subset of Varqa, including all features whose implementation is not obvious, is mapped to relational algebra, thus showing that our language, though designed with no reference to internal structure, is not incompatible with present database software.
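The "database as a many-sorted algebra" view can be made concrete with a small sketch. The sorts, operations and queries below are invented for illustration and do not use Varqa's own notation; the point is only that a query is an expression over the operation symbols, and its meaning is the value the algebra assigns to it.

```python
# Sorts (carrier sets) of a toy database algebra -- invented example data.
EMPLOYEES = {"alice", "bob", "carol"}
DEPARTMENTS = {"sales", "research"}

# Operations: functions mapping the sorts to each other.
def dept(employee):
    return {"alice": "sales", "bob": "research", "carol": "sales"}[employee]

def manager(department):
    return {"sales": "alice", "research": "bob"}[department]

# A query is an expression built from the operation symbols; its meaning is
# the value the algebra assigns to it.  "Who manages carol's department?"
print(manager(dept("carol")))                                  # -> alice

# A set-valued query: all employees in a department managed by alice.
print({e for e in EMPLOYEES if manager(dept(e)) == "alice"})   # -> {'alice', 'carol'}
```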
55

Learning and approximation algorithms for problems motivated by evolutionary trees

Cryan, Mary Elizabeth January 1999 (has links)
In this thesis we consider some computational problems motivated by the biological problem of reconstructing evolutionary trees. We are concerned with the design and analysis of efficient algorithms for clearly defined combinatorial problems motivated by this application area. We present results for two different kinds of problem. Our first problem is motivated by models of evolution that describe the evolution of biological species in terms of a stochastic process that alters the DNA of species. The particular stochastic model that we consider is called the Two-State General Markov Model. In this model, an evolutionary tree can be associated with a distribution on the different "patterns" that may appear among the sequences for all the species in the evolutionary tree. The data for a collection of species whose evolutionary tree is unknown can then be viewed as samples from this (unknown) distribution. An interesting problem asks whether we can use samples from an unknown evolutionary tree M to find another tree M* for those species, so that the distribution of M* is similar to that of M. This is essentially a PAC-learning problem ("Probably Approximately Correct") in the sense of Valiant and Kearns et al. Our results show that evolutionary trees in the Two-State General Markov Model can be efficiently PAC-learned in the variation distance metric using a "reasonable" number of samples. The two other problems that we consider are combinatorial problems that are also motivated by evolutionary tree construction. The input to each of these problems consists of a fixed tree topology whose leaves are bijectively labelled by the elements of a species set, as well as data for those species. Both problems involve labelling the internal nodes of the fixed topology in order to minimize some function on that tree (both functions that we consider are assumed to test the quality of the tree topology in some way). The two problems that we consider are known to be NP-hard. Our contribution is to present efficient approximation algorithms for both problems.
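The following sketch (with an invented four-leaf tree and invented transition matrices) shows what "samples from the distribution of an unknown evolutionary tree" means in the Two-State General Markov Model: each draw propagates a two-state character from the root down the edges and records the resulting pattern at the leaves.

```python
import random

# A toy rooted tree: each node maps to its children; leaves have none.
TREE = {"root": ["u", "v"], "u": ["A", "B"], "v": ["C", "D"],
        "A": [], "B": [], "C": [], "D": []}

# Root distribution and per-edge 2x2 transition matrices
# P[parent_state][child_state] -- the model's parameters (invented here).
ROOT_DIST = [0.6, 0.4]
EDGE = {child: [[0.9, 0.1], [0.2, 0.8]]
        for child in ("u", "v", "A", "B", "C", "D")}

def sample_pattern():
    """Draw one pattern of leaf states; repeated draws are the 'samples
    from the unknown distribution' that a PAC-learner would consume."""
    states = {"root": 0 if random.random() < ROOT_DIST[0] else 1}
    pattern = {}
    stack = ["root"]
    while stack:
        node = stack.pop()
        for child in TREE[node]:
            row = EDGE[child][states[node]]
            states[child] = 0 if random.random() < row[0] else 1
            stack.append(child)
        if not TREE[node]:                 # leaf: record its state
            pattern[node] = states[node]
    return pattern

print(sample_pattern())   # e.g. {'A': 0, 'B': 0, 'C': 1, 'D': 1}
```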
56

Segmentation of branching structures from medical images

Wang, Li January 2004 (has links)
Segmentation is a preliminary but important stage in most applications that use medical image data. The work in this thesis focuses mainly on branching-structure segmentation in 2D retinal images, applying image processing and statistical pattern recognition techniques. The thesis presents a vascular modelling algorithm based on a multi-resolution image representation. A 2D Hermite polynomial is introduced to model the blood vessel profile in a quad-tree structure over a range of spatial/spatial-frequency resolutions. The use of a multi-resolution representation allows robust analysis by combining information across scales and helps improve computational efficiency. A Fourier-based modelling and estimation process is developed, followed by an EM-type optimisation scheme to estimate the model parameters. An information-based process is then presented to select the most appropriate scale/model for modelling each region of the image. In the final stage, a deterministic graph-theoretic approach and a stochastic approach within a Bayesian framework are employed for linking the local features and inferring the global vascular structure. Experimental results on a number of retinal images demonstrate the effective application of the proposed algorithms. Some preliminary results on 3D data are also presented, showing a possible extension of the algorithms.
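As a simplified taste of profile-based vessel modelling, the sketch below (assuming numpy and scipy are available; it is not the thesis's 2D Hermite, quad-tree model) fits a one-dimensional Gaussian cross-section to a synthetic row of pixel intensities and reads off the vessel's centre and width.

```python
import numpy as np
from scipy.optimize import curve_fit

def vessel_profile(x, amplitude, centre, width, background):
    """Simplified 1-D cross-section of a vessel: a Gaussian dip on a flat
    background.  (The thesis models 2-D profiles with Hermite polynomials
    over a quad-tree; this only conveys the flavour of the idea.)"""
    return background + amplitude * np.exp(-0.5 * ((x - centre) / width) ** 2)

# Fit the model to one row of pixel intensities crossing a dark vessel.
x = np.arange(32, dtype=float)
row = 200.0 - 80.0 * np.exp(-0.5 * ((x - 15.0) / 2.5) ** 2) \
      + np.random.normal(0.0, 3.0, x.size)          # synthetic intensities
params, _ = curve_fit(vessel_profile, x, row, p0=(-50.0, 16.0, 3.0, 190.0))
amplitude, centre, width, background = params
print("vessel centre %.1f px, width %.1f px" % (centre, abs(width)))
```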
57

Knowledge sharing in the introduction of a new technology : psychological contracts, subculture interactions and non-codified knowledge in CRM systems

Finnegan, David Jesse January 2005 (has links)
This longitudinal comparative study, using a multidisciplinary approach, applies a processual analysis (Pettigrew, 1985; Pettigrew, 1990; Pettigrew, 1997) from a knowledge sharing perspective to the implementation of what the literature shows to be a relatively under-researched area of Customer Relationship Management (CRM) systems in contemporary (2001-2004) situations within Birmingham City Council and IBM. A specific focus is given to areas neglected in previous CRM studies - sub-cultures, psychological contracts, how tacit/non-codified knowledge is surfaced and shared, and with what effects on implementation. It investigates how the system stakeholders and the information system (IS) itself evolved through encountering barriers, sharing knowledge, finding new uses and inventing workarounds. A rich picture emerges of how sub-cultural silos of knowledge linked with psychological contracts and power-based relationships influence and inhibit adoption and acceptance of the CRM system. A major contribution of this processual study is its focus on the relatively neglected 'R' in CRM systems implementations. Hitherto, there has been little attempt to analyse the micro elements in the implementation of CRM systems using the lens of a multidisciplinary approach in a longitudinal study. The investigation of knowledge sharing (in particular non-codified knowledge sharing) across the key sub-cultures in the implementation process of CRM systems remains understudied. Scholars such as Lawrence and Lorch (1967), Boland and Tenkasi (1996), Newell et al. (2002) and Iansiti (1993) write of 'knowing what others know', 'mutual perspective taking', 'shared mental space' and 'T-shaped skills' as aids to tacit/non-codified knowledge sharing. However, they do not fully address the micro processes that lead to the above. This research aims to fill this knowledge gap by investigating the micro elements (including, in our study, psychological contracts) that lead to 'mutual perspective taking', enabling tacit/non-codified knowledge sharing across the key sub-cultures, and their impacts on the adaptation and acceptance of a CRM system. This processual study lays a strong foundation for further research along the route of investigating multiple micro-level elements in the process of implementing a CRM system, in order to enhance understanding of such phenomena in a contemporary situation. This qualitative study compares the CRM implementations at IBM.com and Birmingham City Council. It penetrates the knowledge sharing issues faced by practitioners in a system integration environment. We highlight and discuss the importance of psychological contracts and their interdependencies with sub-cultural interactions and knowledge sharing. We have been able to relate and discuss real-life issues in the light of existing academic theories, in order to enhance our understanding of the relatively neglected knowledge sharing phenomena in a CRM environment. The processual analysis framework extensively used and further developed in this research provides keys to its further use in enhancing the richness of future IS implementation studies at a micro level. The research contributes to the study of IS development by providing an integrative approach investigating existing academic understandings at a micro level in a contemporary situation.
A further major contribution is a detailed insight into the process of Boland and Tenkasi's (1996) 'mutual perspective taking', through the investigation of psychological contracts and their interdependencies with sub-cultural interaction and knowledge sharing. An interesting finding has been that the distinctive contexts of the two cases have had lesser effects than the distinctive nature of CRM systems and the implementation processes adopted. The study shows that, irrespective of sectoral backgrounds, the two organisations studied in this research failed to address adequately a range of common issues related to human behaviour, psychology, organisational characteristics, sub-cultural interactions and knowledge sharing. According to our research findings, these factors have greater explanatory power for the results achieved than the distinctive contexts in which the two organisations operated.
58

A logical analysis of soft systems modelling : implications for information system design and knowledge based system design

Gregory, Frank Hutson January 1993 (has links)
The thesis undertakes an analysis of the modelling methods used in the Soft Systems Methodology (SSM) developed by Peter Checkland and Brian Wilson. The analysis is undertaken using formal logic and work drawn from modern Anglo-American analytical philosophy, especially work in the areas of philosophical logic, the theory of meaning, epistemology and the philosophy of science. The ability of SSM models to represent causation is found to be deficient, and improved modelling techniques suitable for cause-and-effect analysis are developed. The notional status of SSM models is explained in terms of Wittgenstein's language game theory. Modal predicate logic is used to solve the problem of mapping notional models onto the real world. The thesis presents a method for extending SSM modelling into a system for the design of a knowledge based system. This six-stage method comprises: systems analysis, using SSM models; language creation, using logico-linguistic models; knowledge elicitation, using empirical models; knowledge representation, using modal predicate logic; codification, using Prolog; and verification, using a type of non-monotonic logic. The resulting system is constructed in such a way that built-in inductive hypotheses can be falsified, as in Karl Popper's philosophy of science, by particular facts. As the system can learn what is false, it has some artificial intelligence capability. A variant of the method can be used for the design of other types of information system, such as a relational database.
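The role of non-monotonic verification, where a built-in inductive hypothesis is used only until particular facts falsify it, can be sketched as follows. This toy Python knowledge base (the thesis codifies its system in Prolog) uses the classic "all ravens are black" hypothesis purely as an invented example of a falsifiable default rule.

```python
class KnowledgeBase:
    """Tiny default-reasoning sketch: an inductive hypothesis ('all ravens
    are black') is applied only while no particular fact falsifies it, in
    the spirit of the falsifiable built-in hypotheses described above."""

    def __init__(self):
        self.facts = set()          # e.g. ("raven", "r1"), ("white", "r1")

    def tell(self, fact):
        self.facts.add(fact)

    def falsified(self):
        # The hypothesis fails as soon as one raven is known to be non-black.
        return any(("raven", x) in self.facts and ("white", x) in self.facts
                   for (_, x) in self.facts)

    def ask_black(self, x):
        if ("black", x) in self.facts:
            return True
        # Default inference from the hypothesis, used only while unfalsified.
        return ("raven", x) in self.facts and not self.falsified()

kb = KnowledgeBase()
kb.tell(("raven", "r1"))
print(kb.ask_black("r1"))       # True  -- inferred from the default rule
kb.tell(("raven", "r2")); kb.tell(("white", "r2"))
print(kb.ask_black("r1"))       # False -- the hypothesis has been falsified
```

Adding a fact withdraws a previously drawn conclusion, which is exactly the non-monotonic behaviour that makes the built-in hypotheses falsifiable.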
59

Addressing concerns in performance prediction : the impact of data dependencies and denormal arithmetic in scientific codes

Foley, Brian Patrick January 2009 (has links)
To meet the increasing computational requirements of the scientific community, the use of parallel programming has become commonplace, and in recent years distributed applications running on clusters of computers have become the norm. Both parallel and distributed applications face the problem of predictive uncertainty and variations in runtime. Modern scientific applications have varying I/O, cache, and memory profiles that have significant and difficult-to-predict effects on their runtimes. Data-dependent sensitivities, such as the cost of denormal floating-point calculations, introduce further variations in runtime, hindering predictability. Applications with unpredictable performance or highly variable runtimes cause several problems. If the runtime of an application is unknown or varies widely, workflow schedulers cannot efficiently allocate it to compute nodes, leading to the under-utilisation of expensive resources. Similarly, a lack of accurate knowledge of the performance of an application on new hardware can lead to misguided procurement decisions. In heavily parallel applications, minor variations in runtime on individual nodes can have disproportionate effects on the overall application runtime. Even on a smaller scale, a lack of certainty about an application’s runtime can preclude its use in real-time or time-critical applications such as clinical diagnosis. This thesis investigates two sources of data-dependent performance variability. The first source is algorithmic and is seen in a state-of-the-art C++ biomedical imaging application. The thesis identifies the cause of the variability in the application and develops a means of characterising it. This ‘probe task’ based model is adapted for use with a workflow scheduler, and the scheduling improvements it brings are examined. The second source of variability is more subtle, as it is micro-architectural in nature. Depending on the input data, two runs of an application executing exactly the same sequence of instructions and with exactly the same memory access patterns can have large differences in runtime due to deficiencies in common hardware implementations of denormal arithmetic. An exception-based profiler is written to detect occurrences of denormal arithmetic, and it is shown that this is insufficient to isolate the sources of denormal arithmetic in an application. A novel tool based on the Valgrind binary instrumentation framework is developed which can trace the origins of denormal values and the frequency of their occurrence in an application’s data structures. This second tool is used to isolate and remove the cause of denormal arithmetic, first from a simple numerical code and then from a face recognition application.
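A small numpy-based sketch (not the exception-based profiler or the Valgrind tool described above) shows both halves of the problem: that keeping a computation in the subnormal range can slow it down on hardware that handles denormals in microcode, and how denormal values hiding in a data structure can be counted. The array sizes, iteration counts and the magnitude of any slowdown are illustrative and hardware-dependent.

```python
import time
import numpy as np

def time_damped_sum(seed, iters=300):
    """Iterate x <- 0.5*x + seed.  If `seed` is subnormal, the whole
    computation stays in the subnormal range, exercising the slow path
    that many CPUs take for denormal arithmetic."""
    x = seed.copy()
    start = time.perf_counter()
    for _ in range(iters):
        x = 0.5 * x + seed
    return time.perf_counter() - start

normal = np.full(500_000, 1.0)
subnormal = np.full(500_000, 1e-310)      # below the smallest normal double
print("normal operands:    %.3f s" % time_damped_sum(normal))
print("subnormal operands: %.3f s" % time_damped_sum(subnormal))

# Counting denormal values lurking in a data structure:
data = np.array([1.0, 1e-310, 0.0, -3e-320])
is_denormal = (data != 0.0) & (np.abs(data) < np.finfo(np.float64).tiny)
print("denormal entries:", np.count_nonzero(is_denormal))
```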
60

A business-oriented framework for enhancing web services security for e-business

Nurse, Jason R. C. January 2010 (has links)
Security within the Web services technology field is a complex and very topical issue. When considering using this technology suite to support interacting e-businesses, the literature shows that the challenge of achieving security becomes even more elusive. This is particularly true with regard to attaining a level of security that goes beyond merely applying technologies and is trusted, endorsed and practiced by all parties involved. To address these problems, this research proposes BOF4WSS, a Business-Oriented Framework for enhancing Web Services Security in e-business. The novelty and importance of BOF4WSS lie in its emphasis on a tool-supported development methodology, through which collaborating e-businesses can achieve an enhanced and more comprehensive security and trust solution for their services interactions. The investigation began with an in-depth assessment of the literature on Web services, e-business, and their security. The outstanding issues identified paved the way for the creation of BOF4WSS. With an appreciation of the research limitations and the added value of framework tool-support, emphasis then shifted to the provision of a novel solution model and tool to aid companies in the use and application of BOF4WSS. This support was targeted at significantly easing the difficulties incurred by businesses in transitioning between two crucial framework phases. To evaluate BOF4WSS and its supporting model and tool, a two-step approach was adopted. First, the solution model and tool were tested for compatibility with existing security approaches with which they would need to work in real-world scenarios. Second, the framework and tool were evaluated through interviews with industry-based security professionals who are experts in this field. The results of both evaluations provide a noteworthy degree of evidence affirming the suitability and strength of the framework, model and tool. Additionally, these results cement this thesis's proposals as innovative and significant contributions to the research field.
