About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.

Fast and efficient algorithms on strings and sequences

Rahman, Mohammad Sohel January 2007 (has links)
The last few decades have witnessed the fascinating growth of the field of string algorithms, and with the ever-increasing mass of information and the rapid pace of its dissemination and sharing, the importance and relevance of these algorithms can be expected to grow even further in the near future. In this thesis, we study a number of interesting problems on strings and sequences and devise efficient algorithms to solve them. The problems studied here fall into two main categories: string matching problems and longest common subsequence problems. The first string matching problem handled in this thesis is pattern matching under the don't-care paradigm. Don't-care pattern matching problems have been studied since 1974, and the quest for improved algorithms is still on. We present efficient algorithms for different versions of this problem. We also present efficient algorithms for a more general problem, namely the pattern matching problem with degenerate strings. Next, we consider another well-studied problem, namely the swap matching problem. We present a new graph-theoretic approach to model the problem, which opens a new and hitherto unexplored avenue to solving it. Then, using the model, we devise an efficient algorithm, which works particularly well for short patterns. Computer-assisted music analysis and music information retrieval involve a number of tasks that can be related to stringology. To this end, we consider the problem of identifying rhythms in a musical text, which can be used in automatic song classification. Since there is no complete agreement on the definitions of the related features, we present a new paradigm to model this problem and devise an efficient algorithm to solve it. Next, we consider a number of relatively new and interesting pattern matching problems from an indexing point of view.
We present efficient indexing schemes for the problem of pattern matching in given intervals, the property matching problem and the circular pattern matching problem. We conclude our study of string matching problems by focusing on devising an efficient data structure to index gapped factors. In the second part of this thesis, we study the longest common subsequence (LCS) problems and variants thereof. We first introduce a number of new LCS variants by applying the notion of gap-constraints. The new variants are motivated by practical applications from computational molecular biology and we present efficient algorithms to solve them. Then, using the techniques developed while solving these variants, we present efficient algorithms for the classic LCS problem and for another relatively new variant, namely, the Constrained LCS (CLCS) problem. Finally, we efficiently solve the LCS and CLCS problems for degenerate strings.
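The classic LCS problem at the heart of these variants admits a textbook dynamic-programming solution; a minimal sketch for orientation (the standard O(mn) algorithm, not the thesis's own variant-specific algorithms):

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(|a|*|b|) dynamic program for the longest common subsequence."""
    m, n = len(a), len(b)
    # dp[i][j] = length of an LCS of the prefixes a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one symbol
    return dp[m][n]

print(lcs_length("AGGTAB", "GXTXAYB"))  # -> 4 ("GTAB")
```

The gap-constrained and constrained (CLCS) variants studied in the thesis modify which cell transitions are admissible in a table of this shape.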

Formal modelling and analysis of dynamic reconfiguration of dependable systems

Bhattacharyya, Anirban January 2013 (has links)
The contribution of this thesis is a novel way of formally modelling and analysing dynamic process reconfiguration in dependable systems. Modern dependable systems are required to be flexible, reliable, available and highly predictable. One way of achieving flexibility, reliability and availability is through dynamic reconfiguration. That is, by changing at runtime the structure of a system – consisting of its components and their communication links – or the hardware location of its software components. However, predicting the system's behaviour during its dynamic reconfiguration is a challenge, and this motivates our research. Formal methods can determine whether or not a system's design is correct, and design correctness is a key factor in ensuring the system will behave predictably and reliably at runtime. Therefore, our approach is formal. Existing research on software reconfiguration has focused on planned reconfiguration and link mobility. The focus of this thesis is on unplanned process reconfiguration. That is, the creation, deletion and replacement of processes that is not designed into a system when it is manufactured. We describe a process algebra (CCS<sup>dp</sup>), which is CCS extended with a new type of process (termed a fraction process) in order to model process reconfiguration. We have deliberately not introduced a new operator in CCS<sup>dp</sup> in order to model unplanned reconfiguration. Instead, we define a bisimulation (~<sub>of</sub>) that is used to identify a process for reconfiguration by behavioural matching. The use of behavioural matching based on ~<sub>of</sub> (rather than syntactic or structural congruence-based matching) helps to make models simple and terse. However, ~<sub>of</sub> is too weak to be a congruence. Therefore, we strengthen the conditions defining ~<sub>of</sub> to obtain another bisimulation (~<sub>dp</sub>) which is a congruence, and (therefore) can be used for equational reasoning.
Our notion of fraction process is recursive to enable fractions to be themselves reconfigured. We bound the depth of recursion of a fraction and its successors in order to ensure that ~<sub>of</sub> and ~<sub>dp</sub> are decidable. Furthermore, we restrict the set of states in a model of a system to be finite, which also supports decidability of the two bisimulations and helps model checking. We evaluate CCS<sup>dp</sup> in two ways. First, with respect to requirements used to evaluate other formalisms. Second, through a simple case study, in which the reconfiguration of an office workflow is modelled using CCS<sup>dp</sup>.

Improving the Process of Model Checking through State Space Reductions

Turner, Edward Nanakorn January 2007 (has links)
Model checking is a technique for finding errors in systems and algorithms. The technique requires a formal definition of the system with a set of correctness conditions, and the use of a tool, the model checker, that searches for model behaviours violating these correctness conditions. The value of existing model checkers depends largely on the complexity of the system being checked. Systems involving complex data structures quickly encounter the problem of state explosion, and checking becomes intractable. Furthermore, auxiliary feedback originally designed to aid the practitioner (e.g., process automata) becomes less useful. This thesis develops a set of techniques to address these problems. The main contributions of this thesis are methods that improve model checking in the formal language of B by reducing the size of a system's state space. Methods are described that enable a user to view various succinct properties of a system's behaviour through automatic analysis of reached state spaces, and a technique is developed to improve the efficiency of generating state spaces during model checking using algorithms for identifying symmetries via graph isomorphism. Soundness proofs are shown using refinement in B. Each technique has been implemented in the B model checker ProB and is shown to be effective through experimentation and evaluation. This research has stimulated three complementary approaches for improving the generation of state spaces, which are also presented and evaluated. Although this work concerns the context of B and ProB, the techniques could be generalised to verification tools for other languages.
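The state-space search at the core of explicit-state model checking can be sketched generically (an illustrative breadth-first exploration with a visited set; `successors` and `is_violation` are placeholder callbacks, not ProB's actual interface):

```python
from collections import deque

def explore(initial, successors, is_violation):
    """Breadth-first exploration of a transition system's state space.

    Returns a counterexample trace to the first violating state found,
    or None if every reachable state satisfies the property.
    """
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if is_violation(state):
            # Reconstruct the trace back to the initial state.
            trace = []
            while state is not None:
                trace.append(state)
                state = parent[state]
            return list(reversed(trace))
        for nxt in successors(state):
            if nxt not in parent:  # visited-set check: each state expanded once
                parent[nxt] = state
                queue.append(nxt)
    return None

# Toy counter that violates the invariant (x < 4) at x == 4.
trace = explore(0, lambda s: [s + 1] if s < 10 else [], lambda s: s >= 4)
print(trace)  # -> [0, 1, 2, 3, 4]
```

State explosion shows up here as `parent` growing exponentially with the number of parallel components; the symmetry reductions described above shrink it by expanding only one representative per equivalence class of states.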

Sequential Monte Carlo methods for extended and group object tracking

Petrov, Nikolay January 2013 (has links)
This dissertation deals with the challenging tasks of real-time extended and group object tracking. The problems are formulated as joint parameter and state estimation of dynamic systems. The solutions proposed are formulated within a general nonlinear framework and are based on the Sequential Monte Carlo (SMC) method, also known as the Particle Filtering (PF) method. Four different solutions are proposed for the extended object tracking problem. The first two are based on border parametrisation of the visible surface of the extended object. The likelihood functions are derived for two different scenarios - one without clutter in the measurements and one in the presence of clutter. In the third approach, the kernel density estimation technique is utilised to approximate the joint posterior density of the target dynamic state and static size parameters. The fourth proposed approach solves the extended object tracking problem based on the recently emerged SMC method combined with interval analysis, called the Box Particle Filter (Box PF). Simulation results for all of the developed algorithms show accurate online tracking, with very good estimates both for the target kinematic states and for the parameters of the target extent. In addition, the performance of the Box PF and the border-parametrised PF is validated utilising real measurements from laser range scanners obtained within a prototype security system replicating an airport corridor.
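The bootstrap (sampling-importance-resampling) particle filter underlying all SMC methods can be sketched for a toy one-dimensional model (an illustrative random-walk model with Gaussian noise, not the thesis's extended-object likelihoods):

```python
import math
import random

def particle_filter(observations, n_particles=500,
                    process_std=1.0, obs_std=1.0):
    """Bootstrap (SIR) particle filter for a 1-D random-walk state.

    Assumed model (illustrative only): x_t = x_{t-1} + process noise,
    y_t = x_t + observation noise. Returns the posterior-mean state
    estimate at each time step.
    """
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Prediction: propagate each particle through the dynamics.
        particles = [p + random.gauss(0.0, process_std) for p in particles]
        # Update: weight each particle by the observation likelihood.
        weights = [math.exp(-0.5 * ((y - p) / obs_std) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Posterior-mean estimate, then multinomial resampling.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

ests = particle_filter([1.0, 2.0, 3.0, 4.0])
```

The thesis's algorithms replace the scalar state with kinematic-plus-extent parameters and the Gaussian likelihood with border-parametrised or interval-based (Box PF) likelihoods.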

Software outsourcing vendors' readiness model (SOVRM)

Khan, Siffat Ullah January 2011 (has links)
CONTEXT - Offshore software development outsourcing (OSDO) is a modern business strategy for developing high-quality software in low-wage countries at low cost. OSDO is a contract-based relationship between client and vendor organisations in which a client(s) contracts out all or part of its software development activities to a vendor(s), who provides agreed services in return for remuneration. Vendors' readiness plays an important role in the successful outcomes of OSDO projects. OBJECTIVE - The objective of this thesis is to develop a software outsourcing vendors' readiness model (SOVRM) to help vendors assess and improve their readiness for OSDO activities. The SOVRM is primarily developed to assist OSDO vendor organisations. However, it is also beneficial to client organisations, which can use it to learn about the areas in which vendors can be assessed based on their own priorities. Moreover, clients can make a better-informed decision in their choice of OSDO vendors. In developing the SOVRM, we considered other efforts addressing similar areas, specifically CMMI and ISO 9001. SOVRM has distinctive and unique features that differentiate it from these models. Specifically, SOVRM is a comprehensive model focused on assisting outsourcing companies in assessing their readiness for software development outsourcing. It is not a software process improvement model or standard such as CMMI and ISO 9001. Vendor companies which are CMMI- or ISO 9001-certified may not be ready for software development outsourcing, as the aim of CMMI and ISO 9001 is to improve the software development capabilities of companies rather than their outsourcing readiness. We do, however, include CMMI or ISO 9001 certification as one of the success factors defined in SOVRM.

Exploiting the architectural characteristics of software components to improve software reuse

Alkazemi, Basem Yousef January 2009 (has links)
Software development is a costly process for all but the most trivial systems. One of the commonly known ways of minimizing development costs is to re-use previously built software components. However, a significant problem that source-code re-users encounter is the difficulty of finding components that not only provide the functionality they need but also conform to the architecture of the system they are building. To facilitate finding reusable components there is a need to establish an appropriate mechanism for matching the key architectural characteristics of the available source-code components against the characteristics of the system being built. This research develops a precise characterization of the architectural characteristics of source-code components, and investigates a new way to describe how appropriate components for re-use can be identified and categorized.

The design and construction of deadlock-free concurrent systems

Martin, Jeremy Malcolm Randolph January 1996 (has links)
Throughout our lives we take for granted the safety of complex structures that surround us. We live and work in buildings with scant regard for the lethal currents of electricity and flammable gas coursing through their veins. We cross high bridges with little fear of them crumbling into the depths below. We are secure in the knowledge that these objects have been constructed using sound engineering principles. Now, increasingly, we are putting our lives into the hands of complex computer programs. One could cite aircraft control systems, railway signalling systems, and medical databases as examples. But whereas the disciplines of electrical and mechanical engineering have long been well understood, software engineering is in its infancy. Unlike other fields, there is no generally accepted certification of competence for its practitioners. Formal scientific methods for reliable software production have been developed, but these tend to require a level of mathematical knowledge beyond that of most programmers. Engineers, in general, are usually more concerned with practical issues than with the underlying scientific theory of their particular discipline. They want to get on with the business of building powerful systems. They rely on scientists to provide them with safety rules which they can incorporate into their designs. For instance, a bridge designer needs to know certain formulae to calculate how wide to set the span of an arch - he does not need to know why the formulae work. Software engineering is in need of a battery of similar rules to provide a bridge between theory and practice. The demand for increasing amounts of computing power makes parallel programming very appealing. However, additional dangers lurk in this exciting field. In this thesis we explore ways to circumvent one particularly dramatic problem - deadlock. This is a state where none of the constituent processes of a system can agree on how to proceed, so nothing ever happens.
Clearly we would desire that any sensible system we construct could never arrive at such a state, but what can we do to ensure that this is indeed the case? We might think to use a computer to check every possible state of the system. But, given that the number of states of a parallel system usually grows exponentially with the number of processes, we would most likely find the task too great.
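The kind of safety rule the abstract calls for can be illustrated with a small sketch (a generic lock-ordering example in Python threads, not the CSP-based design rules developed in the thesis):

```python
import threading

# Classic deadlock scenario: two threads each need both locks. If each
# thread acquired them in a different order, both could block forever.
# A well-known safety rule: impose one global acquisition order on all locks,
# which makes a circular wait -- and hence deadlock -- impossible.
lock_a, lock_b = threading.Lock(), threading.Lock()

def transfer(name, results):
    # Both threads follow the same order (lock_a before lock_b).
    with lock_a:
        with lock_b:
            results.append(name)

results = []
threads = [threading.Thread(target=transfer, args=(n, results))
           for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # -> ['t1', 't2']
```

Such a rule lets the engineer avoid deadlock by construction, without exhaustively checking the exponential state space mentioned above.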

Automatic summarization of opinions in service reviews

Di Fabrizzio, Giuseppe January 2012 (has links)
No description available.

Modelling and assessing the environmental impacts of software

Williams, Daniel R. January 2013 (has links)
Software is used on billions of devices every day for numerous activities and services that the modern world relies upon. It also has the potential to control energy consumption and maximise device efficiency, thus minimising device and service environmental impact. At the beginning of this research, the academic background to the environmental impact of software was minor and little utilised by information and communications technology (ICT) organisations. Consequently, this research analysed and modelled three main software types, enabling the environmental analysis of software use to take place with real data and information. Firstly, operating systems (OS) and their power management features were analysed, as they form the foundation of any software-based activity and control device componentry, and thus overall device energy consumption. The overall impact of an OS, not just a device's idle or maximum power, was investigated with a new set of measurements, methods, and models constructed in this research. Secondly, new methodologies to measure the energy consumption of electronic software distribution (ESD) and cloud computing were created, which spanned the entire life cycle of the data route. Both technologies were modelled and measured across a number of real scenarios. ESD and cloud computing can be utilised to dematerialise and increase the efficiency of some high-energy-impact activities. Additionally, it was found that cloud computing can be utilised to reduce the impact of data- and process-intensive computing, but can also be wasteful compared to traditional computing when utilised for certain types of low-resource software activities. Finally, a novel framework was created to enable the analysis and comparison of the energy consumption and greenhouse gas (GHG) emissions of the same software activity across a range of devices and OSs, according to the technical functionality and price of a set of devices.
This framework enables the analysis of devices according to their software GHG emission efficiencies across different platforms and may allow manufacturers and consumers to maximise and drive forward environmental awareness. In conclusion, this research has set the foundations for quantifying, and has demonstrated, the large potential that software has to reduce energy consumption and overall environmental impact.
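A first-order version of the kind of calculation such a framework performs can be sketched as follows (the function name and the simple power-times-time model are illustrative assumptions, not the thesis's actual framework):

```python
def software_ghg_emissions(power_draw_watts, active_hours,
                           grid_intensity_kg_per_kwh):
    """Estimate greenhouse-gas emissions attributable to a software activity.

    First-order model: energy = average device power draw during the
    activity x duration; emissions = energy x grid carbon intensity.
    """
    energy_kwh = power_draw_watts * active_hours / 1000.0
    return energy_kwh * grid_intensity_kg_per_kwh

# A laptop averaging 40 W for 2 hours on a 0.5 kg CO2e/kWh grid:
print(software_ghg_emissions(40, 2, 0.5))  # -> 0.04 (kg CO2e)
```

Comparing the same activity across devices then reduces to measuring each device's average power draw and duration for that activity under its own OS.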

Action in context - context in action : towards a grounded theory of software design

Webb, Brian Robert January 2001 (has links)
This thesis develops a model and a theory of software design. Thirty-two transcripts of interviews with software designers were analysed using the Grounded Theory method. The first set of sixteen interviews, drawn from the field of Digital Interactive Multimedia (Data-set A), was used to develop the model and theory; the second set of sixteen interviews, drawn from one source of technical literature (Data-set B), was used to test and enhance the initial outcomes. Final outcomes are then grounded in the general literature on problem solving and design. The model aims to capture a rich, holistic picture of software design. It is descriptive rather than prescriptive, concerned with how software design is done rather than with how it ought to be done. The theory is a development of the model and is presented initially as a theoretical framework and then as a series of propositions. The theoretical framework is a function of the juxtaposition of specific properties or attributes of the "core category", which uniquely explains the phenomenon. Its outcome is four design scenarios. Each scenario is of interest as an explanation of software design practice, but the two scenarios wherein such practice does not "fit" the design context are of most interest. It is argued that these scenarios can be used to identify and explain design breakdowns. Finally, the thesis explicates the "meta-process" - the process through which the inductive model and theory were developed. This is an unusual objective for a piece of IS research, but a valid and significant one nonetheless, given the complexity of the research method used and the dearth of good process accounts in the IS literature and elsewhere.
