121

A service late binding enabled solution for data integration from autonomous and evolving databases

Wang, Chong January 2010 (has links)
Integrating data from autonomous, distributed and heterogeneous data sources to provide a unified view is a common demand for many businesses. Since the data sources may evolve frequently to satisfy their own independent business needs, solutions that use hard-coded queries to integrate participating databases may incur high maintenance costs when evolution occurs. Thus a new solution is required which can handle database evolution with lower maintenance effort. This thesis presents such a solution: Service Late binding Enabled Data Integration (SLEDI), which is set within a framework modeling the essential processes of the data integration activity. It integrates schematically heterogeneous relational databases with decreased maintenance costs for handling database evolution. An algorithm named Information Provision Unit Describing (IPUD) is designed to describe each database as a set of Information Provision Units (IPUs). The IPUs are represented as Directed Acyclic Graph (DAG) structured data instead of hard-coded queries, and are further realized as data services, so that data integration is achieved through service invocations. Furthermore, a set of processes is defined to handle database evolution by automatically identifying and modifying the IPUs affected by it. An extensive evaluation based on a case study is presented. The results show that the schematic heterogeneities defined in this thesis can be resolved by IPUD, except for the relation isomorphism discrepancy. Ten out of thirteen types of schematic database evolution can be handled automatically by the evolution handling processes, as long as the evolution is represented in the designed data model. The computational cost of automatic evolution handling grows slowly and linearly with the number of participating databases. Other characteristics addressed include SLEDI's scalability and its independence of application domain and database model. A descriptive comparison with other data integration approaches shows that although the Data as a Service approach may result in lower performance under some circumstances, it supports better flexibility for integrating data from autonomous and evolving data sources.
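The abstract does not give a concrete encoding of an IPU, but the core idea of a DAG-structured description that is materialised by a data service rather than a hard-coded query can be sketched roughly as follows. All names and structure here are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical sketch of an Information Provision Unit (IPU) as a DAG;
# names and structure are illustrative, not the thesis's actual design.
from dataclasses import dataclass, field

@dataclass
class IPUNode:
    """A node describing one relation or attribute a database can provide."""
    name: str
    children: list["IPUNode"] = field(default_factory=list)

def topological_order(root: IPUNode) -> list[IPUNode]:
    """Linearise the DAG so a data service can materialise the IPU without
    hard-coded SQL; the visited set guards against shared sub-nodes."""
    order, seen = [], set()
    def visit(node: IPUNode) -> None:
        if id(node) in seen:
            return
        seen.add(id(node))
        for child in node.children:
            visit(child)
        order.append(node)
    visit(root)
    return order

# A toy IPU: a "customer" unit drawing on two attribute nodes.
name, email = IPUNode("name"), IPUNode("email")
customer = IPUNode("customer", [name, email])
print([n.name for n in topological_order(customer)])  # ['name', 'email', 'customer']
```

Because the description is data rather than query text, an evolution-handling process could, in principle, rewrite only the affected nodes when a source schema changes.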
122

Semantic service description framework for efficient service discovery and composition

Du, Xiaofeng January 2009 (has links)
Web services have been widely adopted as a new distributed system technology by industry in the areas of enterprise application integration, business process management, and virtual organisation. However, the lack of semantics in current Web services standards has been a major barrier to further improvement of service discovery and composition. Over the last decade, Semantic Web Services have become an important research topic aimed at enriching the semantics of Web services, with the key objective of achieving automatic or semi-automatic Web service discovery, invocation, and composition. Several semantic Web service description frameworks exist, such as OWL-S, WSDL-S, and WSMF. However, existing frameworks have several issues, such as insufficient service usage context information, the need for precisely specified requirements to locate services, a lack of information about inter-service relationships, and insufficient or incomplete information handling, which make service discovery and composition less efficient than they should be. To address these problems, a context-based semantic service description framework is proposed in this thesis. This framework captures not only the capabilities of Web services, but also their usage context information, which we consider an important factor in efficient service discovery and composition. Based on this framework, an enhanced service discovery mechanism is proposed. It gives service users more flexibility to search for services in natural ways, rather than only by the technical specifications of the required services. The service discovery mechanism also demonstrates how the features provided by the framework facilitate the service discovery and composition processes. Together with the framework, a transformation method is provided to transform existing service descriptions into descriptions based on the new framework. The framework is evaluated through a scenario-based analysis in comparison with OWL-S, and through a prototype-based performance evaluation in terms of query response time, precision and recall, and system scalability.
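As a rough illustration of what a context-enriched service description might add over a purely technical signature, consider the following sketch; the field names and matching logic are assumptions for illustration, not the thesis's actual schema or algorithm.

```python
# Hypothetical sketch of a context-enriched service description record;
# field names are illustrative assumptions, not the thesis's schema.
from dataclasses import dataclass, field

@dataclass
class ServiceDescription:
    name: str
    inputs: list[str]                  # technical signature
    outputs: list[str]
    usage_contexts: list[str] = field(default_factory=list)   # business situations
    related_services: list[str] = field(default_factory=list) # inter-service links

def discover(query_context: str, registry: list[ServiceDescription]):
    """Context-based lookup: match on usage context rather than requiring
    a precise technical specification of the wanted service."""
    return [s for s in registry if query_context in s.usage_contexts]

registry = [ServiceDescription("CurrencyConverter", ["amount", "from", "to"],
                               ["amount"], ["international payment"])]
print([s.name for s in discover("international payment", registry)])
```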
123

An empirical assessment of the software design pattern concept

Zhang, Cheng January 2011 (has links)
Context: The publication of the milestone textbook on design patterns by the ‘Gang of Four’ (GoF) in 1995 introduced a set of 23 design patterns that are largely concerned with improving the practices and products of software development. However, there has been no comprehensive assessment of the effectiveness of design patterns, nor is there systematic evidence about the claims made for pattern reuse in software development or the factors that influence it. Aims: The aims of this thesis are to assess design patterns systematically through a sequence of studies, and to identify the claims and factors in order to determine how well they reflect experiences of pattern reuse in practice. Method: This thesis describes four studies: a document survey to identify claims for patterns, a mapping study to identify empirical studies about patterns, an online survey, and a narrative synthesis. The mapping study and the online survey together provide comprehensive and thorough evidence for the narrative synthesis, in which we check whether the evidence about specific patterns is consistent, and examine how the claims and factors influence pattern reuse. Results: The mapping study found 20 primary studies, and the online survey received 206 usable responses. Across the 20 primary studies, 17 design patterns were examined. In the online survey, 175 respondents reported on patterns they considered useful, and 155 on patterns they considered not useful. Conclusion: From the synthesis results, the Composite and Observer patterns are evaluated as generally useful, while the Visitor and Singleton patterns, although useful, have possible negative aspects. In addition, four of the claims and the effect of one factor are shown to be generally true; the others are either unsupported or have no demonstrable effect.
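For readers unfamiliar with the patterns the synthesis found generally useful, a minimal Observer looks roughly like this; it is standard textbook material included only as background, not an artefact of the thesis.

```python
# Minimal Observer pattern sketch (standard background, not from the thesis).
class Subject:
    def __init__(self):
        self._observers = []            # registered callbacks

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer(event)             # push the event to each observer

log = []
subject = Subject()
subject.attach(log.append)              # an observer can be any callable
subject.notify("state changed")
print(log)                              # ['state changed']
```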
124

Speech/music discrimination : novel features in time domain

Alnadabi, Muhammad Saeid Muhammad January 2010 (has links)
This research aimed to find novel features that can be used to discriminate between speech and music in the time domain, for the purpose of data retrieval. The study used speech and music data recorded in standard anechoic chambers and sampled at 44.1 kHz. Two types of new features were found and thoroughly examined: the Ratio of Silent Frames (RSF) feature and the Time Series Events (TSE) set of features. Receiver Operating Characteristic (ROC) curves were used to assess each of the proposed features, as well as relevant features from the literature, for comparison. The RSF feature gave up to an 8% improvement over comparable features from the literature, and one of the TSE features achieved close to 100% speech/music discrimination.
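The abstract names the RSF feature but not its exact parameterisation; the following sketch shows one plausible reading of a ratio-of-silent-frames measure, with the frame length and energy threshold as stated assumptions.

```python
import numpy as np

def ratio_of_silent_frames(signal: np.ndarray, frame_len: int = 1024,
                           threshold: float = 1e-4) -> float:
    """Fraction of fixed-length frames whose mean energy falls below a
    threshold. Frame length and threshold are assumptions here; the
    thesis's exact parameterisation is not given in the abstract."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energies = np.mean(frames ** 2, axis=1)
    return float(np.mean(energies < threshold))

# Speech tends to contain pauses, so its RSF is typically higher than that
# of continuous music; a classifier can then threshold or learn on it.
rng = np.random.default_rng(0)
speech_like = rng.normal(0, 0.1, 44100)   # one second at 44.1 kHz
speech_like[10000:20000] = 0.0            # crude stand-in for a pause
print(ratio_of_silent_frames(speech_like))
```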
125

Type oriented parallel programming

Brown, Nicholas Edward January 2010 (has links)
Context: Parallel computing is an important field within the sciences. With the emergence of multi-core, and soon many-core, CPUs it is moving more and more into the domain of general computing. HPC programmers want performance, but at the moment this comes at a cost: parallel languages are either efficient or conceptually simple, but not both. Aim: To develop and evaluate a novel programming paradigm which addresses the problem of parallel programming and allows for languages that are both conceptually simple and efficient. Method: A type-based approach, which allows the programmer to control all aspects of parallelism through the use and combination of types, has been developed. As a vehicle to present and analyze this new paradigm, a parallel language, Mesham, and associated compilation tools have also been created. By using types to express parallelism, the programmer can exercise efficient, flexible control within a high-level abstract model, while the source code still carries sufficiently rich information for the compiler to perform static analysis and optimization. Results: A number of case studies have been implemented in Mesham. Official benchmarks demonstrate that the paradigm allows one to write code that is comparable, in terms of performance, with existing high-performance solutions. Sections of the parallel simulation package Gadget-2 have been ported into Mesham, where substantial code simplifications have been made. Conclusions: The results obtained indicate that the type-based approach does satisfy the aim of the research described in this thesis. Using this new paradigm, the programmer is able to write parallel code which is both simple and efficient.
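The abstract does not show Mesham syntax, so the following is NOT Mesham; it is only a loose Python analogy of the central idea, namely that a type carries the parallelism information (here, a block distribution) from which placement is derived, rather than the programmer writing explicit communication code.

```python
# Illustrative analogy of type-oriented parallelism, not Mesham syntax:
# a 'type' states how data is distributed, and placement is derived from
# the type alone rather than from hand-written parallel code.
from dataclasses import dataclass

@dataclass
class BlockDistributed:
    """A 'type' declaring that an array is block-distributed over n processes."""
    n_procs: int

    def partition(self, data: list) -> list:
        """Derive the data placement purely from the type's information."""
        size = -(-len(data) // self.n_procs)   # ceiling division
        return [data[i:i + size] for i in range(0, len(data), size)]

numbers = list(range(10))
dist_type = BlockDistributed(n_procs=4)
print(dist_type.partition(numbers))   # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```

In the real language, combining such types in the source gives the compiler the static information it needs to generate and optimise the actual communication.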
126

Feedback 2.0 : an investigation into using sharable feedback tags as programming feedback

Cummins, Stephen Alexander January 2010 (has links)
Objectives: Learning and teaching computer programming is a recognised challenge in Higher Education. Since feedback is regarded as the most important part of the learning process, improving it should support students' learning. This thesis aims to investigate how new forms of feedback can improve student learning of programming, and how feedback sharing can further enhance the students' learning experience. Methods: This thesis investigates the use of new forms of feedback for programming courses. The work explores the use of collaborative tagging, often found in Web 2.0 software systems, and a feedback approach that requires examiners to annotate students' source code with short, potentially reusable feedback. The thesis utilises a variety of research methods, including questionnaires, focus groups and the collection of system usage data recorded from student interactions with their feedback. Sentiment and thematic analysis are used to investigate how well feedback tags communicate the intended message from examiners to students. The approaches used are tested and refined over two preliminary investigations before use in the final investigation. Results: A majority of students responded positively to the new feedback approach. Student engagement was high, with up to 100% of students viewing their feedback and at least 42% opting to share it. Students who achieved either the lowest or the highest marks for the assignment appeared more likely to share their feedback. Conclusions: This thesis demonstrates that sharing of feedback can be useful for disseminating good practice and common pitfalls. Provision of feedback which is contextually rich and textually concise resulted in higher engagement from students. However, the outcomes of this research were shown to be influenced by the assessment process adopted by the University: for example, students were more likely to engage with their feedback if marks were unavailable at the time of feedback release. This issue, among others, is proposed as further work.
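A feedback tag of the kind described, anchored to a location in the student's code and optionally shared, might be modelled along these lines; the record layout is a hypothetical illustration, not the system's actual schema.

```python
# Hypothetical sketch of a sharable feedback tag attached to a line of a
# student's source file; field names are assumptions, not the thesis's schema.
from dataclasses import dataclass

@dataclass
class FeedbackTag:
    file: str              # path within the student's submission
    line: int              # line the examiner annotated
    text: str              # short, potentially reusable feedback message
    shared: bool = False   # student may opt in to sharing with the cohort

tag = FeedbackTag("hangman.py", 42, "Prefer a loop over repeated if-blocks")
tag.shared = True          # opting in makes the tag visible to other students
print(tag)
```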
127

Indexing and retrieval of 3D articulated geometry models

Tam, Kwok Leung January 2009 (has links)
In this PhD research study, we focus on building a content-based search engine for 3D articulated geometry models. 3D models are essential components of today's graphics applications and are widely used in the game, animation and film production industries. With the increasing number of these models, a search engine not only provides an entry point for exploring such a huge dataset, but also facilitates sharing and reuse among different users; in general, it reduces the cost and time of developing these 3D models. Though many retrieval systems have been proposed in recent years, search engines for 3D articulated geometry models are still in their infancy. Among all the works that we have surveyed, reliability and efficiency are the two main issues that hinder the popularity of such systems, and in this research we focus mainly on addressing these two issues. We have observed that most existing works design features and matching algorithms to reflect the intrinsic properties of these 3D models. For instance, to handle 3D articulated geometry models, it is common to extract skeletons and use graph matching algorithms to compute similarity. However, because this kind of feature representation is complex, it leads to matching algorithms of high complexity; sub-graph isomorphism for model graph matching, for example, is NP-hard. Our solution is based on the understanding that skeletal matching seeks correspondences between the two models being compared. If we can define descriptive features, the correspondence problem can be solved by bag-based matching, for which fast algorithms are available. In the first part of the research, we propose a feature extraction algorithm to extract such descriptive features, convert the skeletal matching problem into bag-based matching, and further define a metric similarity measure so as to support fast search. We demonstrate the advantages of this idea in our experiments: precision improves by 12% at high recall, and indexed search of 3D models is 24 times faster than the state of the art when only the first relevant result is required. However, improving the quality of descriptive features comes at the price of high dimensionality. The curse of dimensionality is a notorious problem for large multimedia databases: computation time scales exponentially as the dimension increases, and indexing techniques may not be useful in such situations. In the second part of the research, we develop an embedding retrieval framework to address the high dimensionality problem. We first argue that our proposed matching method projects 3D models onto manifolds, then use manifold learning techniques to reduce dimensionality and maximize intra-class distances. We further propose a numerical method to sub-sample databases and search them quickly. To preserve retrieval accuracy using fewer landmark objects, we propose an alignment method which also benefits existing works for fast search. Our experiments demonstrate that the retrieval framework alleviates the curse of dimensionality, and that it improves the efficiency (3.4 times faster) and accuracy (30% more accurate) of the matching algorithm proposed above. In the third part of the research, we also study a closely related area, 3D motions. 3D motions are captured by attaching sensors to human subjects; the captured data are real human motions that are used to animate 3D articulated geometry models.

Creating realistic 3D motions is an expensive and tedious task. Although 3D motions are very different from 3D articulated geometry models, we observe that existing works also suffer from the problem of temporal structure matching, which likewise leads to low matching efficiency. We apply the same bag-based matching idea to 3D motions. In our experiments, the proposed method yields a 13% improvement in precision at high recall and is 12 times faster than existing works. In summary, we have developed algorithms for 3D articulated geometry models and 3D motions, covering feature extraction, feature matching, indexing and fast search methods. Through various experiments, our idea of converting restricted matching to bag-based matching has been shown to improve matching efficiency and reliability, for both 3D articulated geometry models and 3D motions. We have also connected 3D matching to the area of manifold learning; the embedding retrieval framework not only improves efficiency and accuracy, but has also opened a new area of research.
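The appeal of bag-based matching is that, unlike graph or temporal-structure matching, comparing two bags of descriptors needs no correspondence search. The thesis's own metric is not given in the abstract; the sketch below uses the Hausdorff distance as an assumed stand-in, chosen only because it is a true metric on point sets.

```python
import numpy as np

def hausdorff(bag_a: np.ndarray, bag_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two bags of feature vectors
    (one per row). A stand-in for the thesis's measure, used here because
    it is a metric and requires no explicit correspondence search."""
    d = np.linalg.norm(bag_a[:, None, :] - bag_b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two toy 'models', each a bag of 3-D descriptors extracted from skeletons.
model_1 = np.array([[0.0, 0.1, 0.2], [1.0, 0.9, 1.1]])
model_2 = np.array([[0.0, 0.1, 0.3], [1.0, 1.0, 1.0]])
print(hausdorff(model_1, model_2))   # small value -> similar models
```

Because the measure is a metric, standard metric indexing structures can prune the search, which is what makes the reported fast-search speedups possible in principle.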
128

Rank lower bounds in propositional proof systems based on integer linear programming methods

Rhodes, Mark Nicholas Charles January 2009 (has links)
The work of this thesis is in the area of proof complexity, an area which seeks to uncover the limitations of proof systems. In this thesis we investigate the rank complexity of tautologies for several of the most important proof systems based on integer linear programming methods. The three main contributions of this thesis are as follows. First, we develop the first rank lower bounds for the proof system based on the Sherali-Adams operator, and show that both the Pigeonhole and Least Number Principles require linear rank in this system; we also demonstrate a link between the complexity measures of Sherali-Adams rank and Resolution width. Second, we present a novel method for deriving rank lower bounds in the well-studied Cutting Planes proof system, and use this technique to show that the Cutting Planes rank of the Pigeonhole Principle is logarithmic. Finally, we separate the complexity measures of Resolution width and Sherali-Adams rank from the complexity measures of Lovász-Schrijver rank and Cutting Planes rank.
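As background for readers unfamiliar with the system, the characteristic division (Chvátal-Gomory) rule of Cutting Planes, in its standard textbook form, is shown below; roughly, the rank of a refutation measures how deeply applications of this rounding rule must be nested. This is standard material, not a result of the thesis.

```latex
% Division rule of Cutting Planes: if every coefficient is divisible by a
% positive integer c, divide through and round the constant term up.
\[
  \frac{\displaystyle \sum_i c\,a_i x_i \;\ge\; b}
       {\displaystyle \sum_i a_i x_i \;\ge\; \left\lceil b / c \right\rceil}
  \qquad (c \in \mathbb{Z}_{>0},\; a_i, b \in \mathbb{Z})
\]
```

The rounding step is what gives the system its power over purely linear reasoning, and bounding how often it must be nested is exactly the rank question the thesis studies.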
129

Exploiting structure to cope with NP-hard graph problems : polynomial and exponential time exact algorithms

Van-'T-Hof, Pim January 2010 (has links)
An ideal algorithm for solving a particular problem always finds an optimal solution, finds such a solution for every possible instance, and finds it in polynomial time. When dealing with NP-hard problems, algorithms can only be expected to possess at most two out of these three desirable properties. All algorithms presented in this thesis are exact algorithms, which means that they always find an optimal solution. Demanding the solution to be optimal means that other concessions have to be made when designing an exact algorithm for an NP-hard problem: we either have to impose restrictions on the instances of the problem in order to achieve a polynomial time complexity, or we have to abandon the requirement that the worst-case running time has to be polynomial. In some cases, when the problem under consideration remains NP-hard on restricted input, we are even forced to do both. Most of the problems studied in this thesis deal with partitioning the vertex set of a given graph. In the other problems the task is to find certain types of paths and cycles in graphs. The problems all have in common that they are NP-hard on general graphs. We present several polynomial time algorithms for solving restrictions of these problems to specific graph classes, in particular graphs without long induced paths, chordal graphs and claw-free graphs. For problems that remain NP-hard even on restricted input we present exact exponential time algorithms. In the design of each of our algorithms, structural graph properties have been heavily exploited. Apart from using existing structural results, we prove new structural properties of certain types of graphs in order to obtain our algorithmic results.
130

Policy making using computer simulators for complex physical systems : Bayesian decision support for the development of adaptive strategies

Williamson, Daniel January 2010 (has links)
Policy makers increasingly rely on computer models to aid policy judgements for complex systems. The climate system, for example, is extremely complicated and its reaction to changes in radiative forcing through CO2 emissions can only be explored using models. Bayesian methods for making inferences about physical systems that combine information from computer simulators and system observations have become increasingly well studied. We apply some of these methods to the policy problem where the decisions to be made are inputs to the computer model. Particular features of our methodologies include: the provision of Bayesian decision support for the policy problem when it is known that policy may be adapted in reaction to future observations of the complex system; and careful integration of the knowledge that our computer simulators will evolve and improve over time, which may affect downstream strategies and, hence, current policy. Our methods also allow research investment questions to be explored in the context of the wider policy problem. For example, the question of whether or not an improved version of a computer simulator should be built and how much it should be run can be addressed as part of the policy problem.
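The structure described, choosing a policy now while knowing it can be adapted after observing the system, is the classic backward-induction shape of sequential Bayesian decision problems. The toy sketch below shows only that structure; the action names, probabilities and utilities are invented for illustration, and the thesis's actual models, utilities and simulator-based inferences are far richer.

```python
# Generic two-stage backward-induction sketch of adaptive decision support.
# All names, probabilities and utilities are illustrative assumptions.
first_actions = ["cut_emissions_now", "wait"]
observations = ["system_warms_fast", "system_warms_slow"]
second_actions = ["strengthen_policy", "relax_policy"]

def prob(obs: str, a1: str) -> float:          # assumed observation model
    table = {("cut_emissions_now", "system_warms_fast"): 0.3,
             ("cut_emissions_now", "system_warms_slow"): 0.7,
             ("wait", "system_warms_fast"): 0.6,
             ("wait", "system_warms_slow"): 0.4}
    return table[(a1, obs)]

def utility(a1: str, obs: str, a2: str) -> float:   # assumed utilities
    base = {"cut_emissions_now": -1.0, "wait": 0.0}[a1]
    damage = {"system_warms_fast": -3.0, "system_warms_slow": -0.5}[obs]
    mitigation = 1.5 if (a2 == "strengthen_policy"
                         and obs == "system_warms_fast") else 0.0
    return base + damage + mitigation

def value(a1: str) -> float:
    """Best adaptive follow-up per observation, then expected utility."""
    return sum(prob(o, a1) * max(utility(a1, o, a2) for a2 in second_actions)
               for o in observations)

print(max(first_actions, key=value))   # current policy, accounting for adaptivity
```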
