51

Hyper-heuristic decision tree induction

Vella, Alan January 2012 (has links)
A hyper-heuristic is any algorithm that searches or operates in the space of heuristics, as opposed to the space of solutions. Hyper-heuristics are increasingly used in function and combinatorial optimization. Rather than attempt to solve a problem using a fixed heuristic, a hyper-heuristic approach attempts to find a combination of heuristics that solves a problem (and in turn may be directly suitable for a class of problem instances). Hyper-heuristics have been little explored in data mining. This work presents novel hyper-heuristic approaches to data mining, by searching a space of attribute selection criteria for a decision tree building algorithm. The search is conducted by a genetic algorithm. The result of the hyper-heuristic search in this case is a strategy for selecting attributes while building decision trees. Most hyper-heuristics work by trying to adapt the heuristic to the state of the problem being solved, and our hyper-heuristic is no different: it employs a strategy for adapting the heuristic used to build decision tree nodes according to a set of features of the training set it is working on. We introduce, explore and evaluate five different ways in which this problem state can be represented for a hyper-heuristic that operates within a decision tree building algorithm. In each case, the hyper-heuristic is guided by a rule set that tries to map features of the data set to be split by the decision tree building algorithm to a heuristic to be used for splitting the same data set. We also explore and evaluate three different sets of low-level heuristics that could be employed by such a hyper-heuristic. This work also makes a distinction between specialist hyper-heuristics and generalist hyper-heuristics. The main difference between the two is the number of training sets used by the hyper-heuristic genetic algorithm. Specialist hyper-heuristics are created using a single data set from a particular domain for evolving the hyper-heuristic rule set. Such algorithms are expected to outperform standard algorithms on the kind of data set used by the hyper-heuristic genetic algorithm. Generalist hyper-heuristics are trained on multiple data sets from different domains and are expected to deliver a robust and competitive performance over these data sets when compared to standard algorithms. We evaluate both approaches for each kind of hyper-heuristic presented in this thesis, using both real and synthetic data sets. Our results suggest that none of the hyper-heuristics presented in this work are suited to specialization: in most cases, the hyper-heuristic's performance on the data set it was specialized for was not significantly better than that of the best-performing standard algorithm. On the other hand, the generalist hyper-heuristics delivered results that were very competitive with the best standard methods, and in some cases achieved a significantly better overall performance than all of the standard methods.
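A minimal sketch of the core idea described above: a rule set maps features of the data subset at the current tree node to one of several low-level splitting heuristics. The two heuristics and the rule thresholds below are hypothetical illustrations only, not the heuristic sets or evolved rule sets of the thesis; in the thesis the body of a function like `choose_heuristic` would be the rule set produced by the hyper-heuristic genetic algorithm.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a non-empty list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity of a non-empty list of class labels."""
    total = len(labels)
    return 1.0 - sum((c / total) ** 2 for c in Counter(labels).values())

def split_gain(rows, attr, label, impurity):
    """Impurity reduction achieved by splitting `rows` on attribute `attr`."""
    before = impurity([r[label] for r in rows])
    after = 0.0
    for value in {r[attr] for r in rows}:
        subset = [r[label] for r in rows if r[attr] == value]
        after += len(subset) / len(rows) * impurity(subset)
    return before - after

def choose_heuristic(rows, label):
    """Hyper-heuristic rule set: map features of the data subset at the current
    node to a low-level heuristic. Hand-written here with hypothetical thresholds;
    in the thesis this mapping is evolved by a genetic algorithm."""
    n_rows = len(rows)
    n_classes = len({r[label] for r in rows})
    return gini if (n_rows < 50 or n_classes <= 2) else entropy

def best_attribute(rows, attrs, label):
    impurity = choose_heuristic(rows, label)   # adapt the heuristic to the problem state
    return max(attrs, key=lambda a: split_gain(rows, a, label, impurity))
```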
52

Logic, computation and constraint satisfaction

Martin, Barnaby D. January 2005 (has links)
We study a class of non-deterministic program schemes with while loops: firstly, augmented with a priority queue for memory; secondly, augmented with universal quantification; and, thirdly, augmented with universal quantification and a stack for memory. We try to relate these respective classes of program schemes to well-known complexity classes and logics. We study classes of structure on which path system logic coincides with polynomial time P. We examine the complexity of generalisations of non-uniform boolean constraint satisfaction problems, where the inputs may have a bounded number of quantifier alternations (as opposed to the purely existential quantification of the CSP). We prove, for all bounded-alternation prefixes that have some universal quantifiers outside some existential quantifiers (i.e. 2 alternations and above), that this generalisation of boolean CSP respects the same dichotomy as the non-uniform boolean quantified constraint satisfaction problem. We study the non-uniform QCSP, especially on digraphs, through a combinatorial analog - the alternating-homomorphism problem - that sits in relation to the QCSP exactly as the homomorphism problem sits with the CSP. We establish a trichotomy theorem for the non-uniform QCSP when the template is restricted to antireflexive, undirected graphs with at most one cycle. Specifically, such templates give rise to QCSPs that are either tractable, NP-complete or Pspace-complete. We study closure properties on templates that respect QCSP hardness or QCSP equality. Our investigation leads us to examine the properties of first-order logic when deprived of the equality relation. We study the non-uniform QCSP on tournament templates, deriving sufficient conditions for tractability, NP-completeness and Pspace-completeness. In particular, we prove that those tournament templates that give rise to tractable CSP also give rise to tractable QCSP.
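As background on the combinatorial framing used above: the CSP over a fixed template H is exactly the question of whether an input structure admits a homomorphism to H, while the QCSP adds universal quantification over some variables. The brute-force sketch below checks the purely existential digraph case and is for illustration only. The alternating-homomorphism problem studied in the thesis generalises this by, roughly, requiring the existentially chosen images to work for every possible placement of the universally quantified elements.

```python
from itertools import product

def has_homomorphism(G_edges, G_vertices, H_edges, H_vertices):
    """Brute-force check: is there a map h: V(G) -> V(H) such that
    (u, v) in E(G) implies (h(u), h(v)) in E(H)?  (the CSP with template H)"""
    H_edge_set = set(H_edges)
    for image in product(H_vertices, repeat=len(G_vertices)):
        h = dict(zip(G_vertices, image))
        if all((h[u], h[v]) in H_edge_set for (u, v) in G_edges):
            return True
    return False

# Example: a directed 3-cycle maps homomorphically onto K3 (i.e. it is 3-colourable).
K3 = [(a, b) for a in range(3) for b in range(3) if a != b]
C3 = [(0, 1), (1, 2), (2, 0)]
print(has_homomorphism(C3, [0, 1, 2], K3, [0, 1, 2]))   # True
```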
53

A cognitive study of learning to program in introductory programming courses

Dehnadi, Saeed January 2009 (has links)
Programming is notoriously hard for novices to learn, and a substantial number of learners fail in introductory programming courses. It is not just a UK problem: a number of multi-institutional and multi-national studies reveal that the problem is well known and widespread. There is no general agreement about the causes or the remedies. The major factors which can be hypothesised as causes of this phenomenon are: learners' psychology; teaching methods; and the complexity of programming. In this study, learners' common mistakes, bugs, misconceptions, and the frequencies and types of errors (syntactic and semantic) in the early stages of learning programming were studied. Noticing the patterns of rationales behind novices' mistakes swayed the study toward investigating novices' mental ability, which was found to have a great effect on their learning performance. It was observed that novices reported a recognisable set of models of program execution, each of which was logically acceptable as a possible answer, and it appeared that some students even used these models systematically. It was suspected that the intellectual strategies behind their reasoning could have been built up from their programming background knowledge, and it was surprising to find that some of those novices had not even seen a program before. A diagnostic questionnaire was designed that apparently examined a student's understanding of assignment and sequence but in fact captured the reasoning strategy behind their interpretation of each question, regardless of whether the answer was correct or wrong. The questionnaire was administered in the first week of an introductory programming course, without giving any explanation of what the questions were about. A full response was received from most participants, despite the fact that the questions were unexplained. Confronted with a simple program, about half of the novices seemed to spontaneously invent and consistently apply a mental model of program execution; they were called the consistent subgroup. The other half were either unable to build a model or to apply one consistently; they were called the inconsistent subgroup. The first group performed very much better in their end-of-course examination than the rest. Meta-analysis of the results of six experiments in the UK and Australia confirmed a strong effect of consistency on success which is highly significant (p < 0.001). A strong effect persisted in every group of candidates, sliced by the background factors of programming experience (with/without), relevant programming experience (with/without), and prior programming course (with/without), which might be thought to have had an effect on success. This result confirms that consistency is not simply provided by prior programming background. Despite the tendency in institutions to rely on students' prior programming background as a positive predictor of success, this study revealed that prior programming education did not have a noticeable effect on novices' success. A weak positive effect of prior programming experience was observed overall, which appeared to be driven by one experiment with a programming-skilful population. This study shows that students in the consistent subgroup have the ability to build a mental model, something that follows rules like a mechanical construct. It also seems that when programming skill is measured by a weak assessment mechanism, the effect of consistency on success is reduced.
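A hypothetical question in the style described (not taken from the actual instrument) shows what such a diagnostic item looks like and why several answers can each be internally consistent.

```python
# A question in the style described in the abstract, not the actual instrument.
# "After these statements execute, what are the values of a and b?"
a = 10
b = 20
a = b

# Candidate answers a novice might give, each following its own consistent model:
#   a = 20, b = 20   -- the value of b is copied into a (the conventional model)
#   a = 20, b = 10   -- the values of a and b are swapped
#   a = 30, b = 20   -- the value of b is added into a
#   a = 10, b = 10   -- the assignment moves the value from left to right
# The questionnaire scores whether a student applies one such model consistently
# across all questions, not whether the answer happens to be the conventional one.
```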
54

System programming in a high level language

Birrell, Andrew David January 1977 (has links)
No description available.
55

Syntactic analysis of programming languages

Blake, J. D. January 1968 (has links)
No description available.
56

A string-oriented technique for the analysis of medium sized computer programs to recognise and expose sections which could be performed in parallel

Bock, W. A. January 1975 (has links)
No description available.
57

Balancing between agile and plan-driven software development methods to minimise project risk and improve quality

Liu, Hsun-Wen Lisa January 2013 (has links)
The majority of software development projects, in particular the larger, more complex ones, end in failure (The Standish Group, 1994, 1998, 2003, 2009). There has been much discussion over the best approach to adopt in order to address this issue (Beck and Fowler, 2000; Boehm and Turner, 2003; Larman, 2004; Rico, 2008). Proponents of both agile and plan-driven methods have put forward arguments supporting their particular preference. Agile software development methods such as Extreme Programming (XP), Scrum, Crystal, DSDM and Feature Driven Development (FDD) promise increased customer satisfaction, lower defect rates, faster development times, and a solution to rapidly changing requirements. Plan-driven approaches such as Waterfall, incremental, spiral, the Rational Unified Process (RUP), the Personal Software Process (PSP), or methods based on the Capability Maturity Model (CMM), promise predictability, stability, and high assurance. However, both approaches have shortcomings that, if left unaddressed, can lead to project failure. One purpose of this study is to build on Barry Boehm's risk-based framework of risk analysis (Boehm and Turner, 2003) by integrating each phase of the project life cycle with the risk process framework in an overall development strategy, in order to make stakeholders aware of the risks they face and to secure their involvement at the early project risk assessment stage. This is an empirical, survey-based study to identify risk factors from the perspective of software practitioners who practise agile software development and/or plan-driven methods in software development projects. The study used a mixture of quantitative and qualitative methods. Based on the literature review and an analysis of the key factors behind the IT project failures collected in the survey, this study proposes an integrated, six-dimensional risk analysis framework as a tool to define and address the risks associated with agile and plan-driven methods. The proposed framework also helps IT managers to manage risks, and the budgets for managing those risks, in order to make their projects successful.
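Boehm and Turner's framework weighs a project along a small set of risk axes (commonly given as size, criticality, dynamism, personnel and culture) to decide how far to lean toward agile or plan-driven practices. The sketch below illustrates that style of scoring with hypothetical weights and thresholds; it is not the six-dimensional framework proposed in the thesis.

```python
def suggest_method_balance(project):
    """Score a project on Boehm/Turner-style axes (0 favours agile, 1 favours plan-driven).
    The thresholds and weights are illustrative only."""
    axes = {
        "size":        min(project["team_size"] / 100.0, 1.0),        # large teams favour planning
        "criticality": {"comfort": 0.1, "money": 0.5, "life": 1.0}[project["criticality"]],
        "dynamism":    1.0 - min(project["requirements_change_pct"] / 50.0, 1.0),  # volatile requirements favour agility
        "personnel":   1.0 - project["senior_dev_ratio"],              # skilled staff cope better with agility
        "culture":     0.0 if project["culture"] == "empowering" else 1.0,
    }
    score = sum(axes.values()) / len(axes)
    leaning = "plan-driven" if score > 0.5 else "agile"
    return score, leaning, axes

score, leaning, axes = suggest_method_balance({
    "team_size": 12, "criticality": "money",
    "requirements_change_pct": 30, "senior_dev_ratio": 0.4,
    "culture": "empowering",
})
print(f"risk score {score:.2f} -> lean {leaning}")
```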
58

An exploration of traditional and data driven predictors of programming performance

Watson, Christopher January 2015 (has links)
This thesis investigates factors that can be used to predict the success or failure of students taking an introductory programming course. Four studies were performed to explore how aspects of the teaching context, static factors based upon traditional learning theories, and data-driven metrics derived from aspects of programming behaviour were related to programming performance. In the first study, a systematic review into the worldwide outcomes of programming courses revealed an average pass rate of 67.7%. This was found not to have significantly changed over time, or to have differed based upon aspects of the teaching context, such as the programming language taught to students. The second study showed that many of the factors based upon traditional learning theories, such as learning styles, are context-dependent, and fail to consistently predict programming performance when they are applied across different teaching contexts. The third study explored data-driven metrics derived from the programming behaviour of students. Analysing data logged from students using the BlueJ IDE, 10 new data-driven metrics were identified and validated on three independently gathered datasets. Weaker students were found to make a greater percentage of successive errors, and to spend a greater percentage of their lab time resolving errors, than stronger students. The Robust Relative algorithm was developed to hybridize four of the strongest data-driven metrics into a performance predictor. The novel relative scoring of students, based upon how their resolve times for different types of errors compared to the resolve times of their peers, resulted in a predictor which could explain a large proportion of the variance in the performance of three independent cohorts (R² = 42.19%, 43.65% and 44.17%), almost double the variance which could be explained by Jadud's Error Quotient metric. The fourth study situated the findings of this thesis within the wider literature, by applying meta-analysis techniques to statistically synthesise fifty years of conflicting research, such that the most important factors for learning programming could be identified. 482 results describing the effects of 116 factors on programming performance were synthesised and consolidated to form a six-class theoretical framework. The results showed that the strongest predictors identified over the past fifty years are data-driven metrics based upon programming behaviour. Several of the traditional predictors were also found to be influential, suggesting that both a certain level of scientific maturity and self-concept are necessary for programming. Two thirds of the weakest predictors were based upon demographic and psychological factors, suggesting that age, gender, self-perceived abilities, learning styles, and personality traits have no relevance for programming performance. This thesis argues that factors based upon traditional learning theories struggle to consistently predict programming performance across different teaching contexts because they were not intended to be applied for this purpose. In contrast, the main advantage of using data-driven approaches to derive metrics based upon students' programming processes is that these metrics are directly based upon the programming behaviours of students, and can therefore encapsulate changes in their programming knowledge over time. Researchers should continue to explore data-driven predictors in the future.
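One of the data-driven metrics described, the percentage of successive errors, can be illustrated over a hypothetical log of compilation outcomes. The sketch below shows the kind of computation involved; it is not the thesis's exact definition, nor the hybridised Robust Relative predictor or Jadud's Error Quotient.

```python
def successive_error_percentage(events):
    """events: chronological list of compilation outcomes for one student,
    each True (compilation error) or False (successful compile).
    Returns the percentage of error events immediately followed by another error."""
    pairs = list(zip(events, events[1:]))
    error_pairs = [p for p in pairs if p[0]]            # pairs that start with an error
    if not error_pairs:
        return 0.0
    successive = sum(1 for prev, nxt in error_pairs if nxt)
    return 100.0 * successive / len(error_pairs)

# Hypothetical session log: True = compilation error, False = successful compile
session = [True, True, False, True, True, True, False, False]
print(f"{successive_error_percentage(session):.1f}% of errors were followed by another error")
```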
59

Functional verification coverage closure

Salem, Mohamed A. January 2015 (has links)
Verification is a critical phase of the development cycle. It confirms the compliance of a design implementation with its functional specification. Coverage measures the progress of the verification plan. Structural coverage determines the code exercised by the functional tests. Modified Condition/Decision Coverage (MC/DC) is a type of structural coverage. This thesis is based on a comprehensive study of MC/DC conventions. It provides a new MC/DC test generation algorithm, presents associated MC/DC empirical work from which it draws novel insights into the utilization of MC/DC as a coverage metric, and investigates the strength of MC/DC in detecting design faults. The research results have had significant impact on industry. The study of MC/DC in hardware verification is motivated by the MC/DC certification requirements for critical software applications, the foundation of MC/DC on hardware principles such as controllability and observability, and the linear growth of the MC/DC test set. A new MC/DC test generation algorithm named OBSRV is developed, implemented, and optimized based on the D-algorithm. It is distinguished from conventional techniques in that it is mainly based on logic analysis. The thesis provides the empirical work and associated results that represent an exhaustive validation of OBSRV. It identifies novel MC/DC insights concerning the optimization of minimal MC/DC requirements, the compositionality aspects of MC/DC, and the design options for MC/DC fulfillment. The research has had direct impact on industrial MC/DC applications: a major EDA MC/DC product has been completely re-architected, and the verification of an industrial safety-critical embedded processor has been guided towards MC/DC fulfillment. The thesis demonstrates the feasibility of MC/DC as an applicable solution for structural and functional coverage, through an evaluation that establishes the strength of MC/DC in detecting the main design faults in microprocessors. The results motivate future research leading to the adoption of MC/DC as a main metric for functional verification coverage closure in both the hardware and software domains.
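To make the coverage criterion concrete: MC/DC requires, for each condition in a decision, a pair of tests in which only that condition changes and the decision outcome changes. The brute-force sketch below enumerates such independence pairs for a small decision; it illustrates the criterion itself, not the OBSRV generation algorithm.

```python
from itertools import product

def mcdc_independence_pairs(decision, n_conditions):
    """For each condition index, list the pairs of input vectors that differ only in that
    condition and flip the decision outcome (the MC/DC independent-effect requirement)."""
    vectors = list(product([False, True], repeat=n_conditions))
    pairs = {i: [] for i in range(n_conditions)}
    for v in vectors:
        for i in range(n_conditions):
            w = tuple(not b if j == i else b for j, b in enumerate(v))
            # count each unordered pair once, from the side where condition i is False
            if v[i] is False and decision(*v) != decision(*w):
                pairs[i].append((v, w))
    return pairs

# Example decision with three conditions: (a and b) or c
decision = lambda a, b, c: (a and b) or c
for cond, plist in mcdc_independence_pairs(decision, 3).items():
    print(f"condition {cond}: {plist}")
```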
60

Design and implementation of a privacy impact assessment tool

Tancock, David January 2015 (has links)
A Privacy Impact Assessment (PIA) is a systematic process for evaluating the possible future effects that a particular activity or proposal may have on an individual's privacy. It focuses on understanding the system, initiative or scheme, identifying and mitigating adverse privacy impacts, and informing decision makers who must decide whether the project should proceed and in what form. A PIA, as a proactive business process, is thus properly distinguished from reactive processes, such as privacy issue analysis, privacy audits and privacy law compliance checking, which are applied to existing systems to ensure their continuing conformity with internal rules and external requirements. Typically, in most of the major jurisdictions that conduct PIAs (e.g. Canada, the United States (US), etc.), PIA tools and document templates are used by organisations for project compliance and analysis in relation to their own national, state or sector-specific requirements. However, in the United Kingdom (UK) organisations typically use manual documents in one form or another (ranging from un-systematised documentation sets to organised Microsoft templates) to undertake PIAs, which are usually based upon the advice given by the Information Commissioner's Office (ICO) and its UK PIA Handbook, or upon their own organisational rules and procedures. While manual documents provide some benefits with regard to user comprehension and ease of use, there are some disadvantages in using them, including human error, data duplication, and time consumption. The research described in this thesis focuses upon demonstrating and exploring the extent to which an automated tool might assist in the process of carrying out PIAs in the UK, and thereby improve PIA uptake. Such a PIA tool may set the bar higher for the process itself, help organisations carry out PIAs more easily in the UK, facilitate comparison and improve standardisation. A PIA tool is developed and described, in the form of a software prototype based upon a Decision Support System (DSS), which is a type of expert system that addresses the complexity of privacy compliance requirements for organisations (in both the public and private sectors). More specifically, the developed automated PIA tool may help decision makers within organisations decide whether a new project (where "project" is defined in a broad sense, encompassing a scheme, notion, or product, etc.) should go ahead and, if so, in what form (i.e. what restrictions there are, what additional checks should be made, etc.). Techniques outlined in this thesis for the development of the PIA tool include: requirements elicitation; stakeholder mapping; data collection; data analysis; UML (Unified Modelling Language) modelling; and the software implementation of an expert system. In addition, Artificial Intelligence (AI) techniques are assessed with regard to how they can be used to enhance the PIA process, and a technique is developed to incorporate expression of belief. Stakeholders outlined in this thesis are anyone with an interest in such a PIA tool. For example, the intended users of the tool are stakeholders, as they have an interest in having a product that addresses the complexity of privacy compliance requirements for organisations (in both the public and private sectors). In addition, stakeholders were mapped into a number of stakeholder groups, including: privacy, data protection, computer security, records management, PIA consultants, and software development.
Thus stakeholders were selected to provide requirements for the PIA tool (i.e. functional and non-functional requirements), and also to participate in the PIA tool's validation process (i.e. a judgement on the functionality, usability, and portability of the PIA tool). The outcomes of the research include both a proof-of-concept implementation of a PIA tool and an analysis of a stakeholder-derived validation process for that tool.
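The screening stage of such a decision-support tool can be pictured as a small rule base whose answers, optionally scaled by the practitioner's expressed belief in them, map to a recommendation. The questions, weights, thresholds and belief handling below are hypothetical and are not taken from the prototype described in the thesis.

```python
# A minimal rule-based screening sketch in the spirit of the DSS described.
# Questions, weights and belief handling are hypothetical, not from the prototype.
SCREENING_RULES = [
    ("Does the project collect personal data about individuals?",           3),
    ("Will data be shared with third parties or other jurisdictions?",      2),
    ("Does the project use new or intrusive technology (e.g. biometrics)?", 2),
    ("Are individuals able to opt out of the processing?",                 -1),
]

def screen_project(answers, belief=1.0):
    """answers: dict mapping question -> bool; belief in [0, 1] scales the practitioner's
    confidence in their answers (a crude stand-in for 'expression of belief')."""
    raw = sum(weight for question, weight in SCREENING_RULES if answers.get(question))
    score = raw * belief
    if score >= 5:
        return score, "full PIA recommended"
    if score >= 2:
        return score, "small-scale PIA recommended"
    return score, "no PIA required; record the decision"

answers = {q: True for q, _ in SCREENING_RULES[:2]}   # collects and shares personal data
print(screen_project(answers, belief=0.8))            # (4.0, 'small-scale PIA recommended')
```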
