61

Applying case based reasoning and structural similarity for effective retrieval of expert knowledge from software designs

Wolf, Markus Adrian January 2012
Due to the proliferation of object-oriented software development, UML software designs are ubiquitous. The creation of software designs already enjoys wide software support through CASE (Computer-Aided Software Engineering) tools. However, there has been limited application of computer reasoning to software designs in other areas. Yet there is expert knowledge embedded in software design artefacts which could be useful if it were successfully retrieved. Thus, there is a need for automated support for expert knowledge retrieval from software design artefacts. A software design is an abstract representation of a software product and, in the case of a class diagram, contains information about its structure. It is therefore possible to extract knowledge about a software application from its design. For a human expert, an important aspect of a class diagram is the semantic tags associated with each constituent element, as these provide a link to the concept each element represents. For implemented code, however, the semantic tags have no bearing. The focus of this research has been on the question of whether it is possible to retrieve knowledge from class diagrams in the absence of semantic information. This thesis formulates an approach which combines case-based reasoning with graph matching to retrieve knowledge from class diagrams using only structural information. The practical applicability of this research has been demonstrated in the areas of cost estimation and plagiarism detection. It was shown that by applying case-based reasoning and graph matching to measure similarity between class diagrams it is possible to identify properties of an implementation not encoded within the actual diagram, such as the domain, programming language, quality and implementation cost. An approach for increasing users’ confidence in automatic class diagram matching by providing explanations is also presented. The findings show that the technique applied here can contribute to industry and academia alike in obtaining solutions from class diagrams where semantic information is lacking. The approach presented here, as well as its evaluation, was automated through the development of the UMLSimilator software tool.
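The abstract above describes measuring structural (semantics-free) similarity between class diagrams and using case-based reasoning to retrieve known properties of similar designs. The sketch below is only an illustration of that general idea, not the thesis's actual algorithm: the edge-set representation, the degree-profile similarity measure and all names and values are assumptions made for the example.

```python
from collections import Counter
from typing import Dict, List, Set, Tuple

# A class diagram stripped of semantic tags: nodes are anonymous class ids,
# edges are directed structural relationships (association, inheritance, ...).
Edge = Tuple[int, int]


def degree_profile(edges: Set[Edge]) -> Counter:
    """Summarise a diagram by the multiset of (in-degree, out-degree) pairs."""
    indeg: Counter = Counter()
    outdeg: Counter = Counter()
    for src, dst in edges:
        outdeg[src] += 1
        indeg[dst] += 1
    nodes = set(indeg) | set(outdeg)
    return Counter((indeg[n], outdeg[n]) for n in nodes)


def structural_similarity(a: Set[Edge], b: Set[Edge]) -> float:
    """Jaccard-style overlap of the two degree profiles (1.0 = identical)."""
    pa, pb = degree_profile(a), degree_profile(b)
    union = sum((pa | pb).values())
    return sum((pa & pb).values()) / union if union else 1.0


def retrieve_most_similar(query: Set[Edge], case_base: List[Dict]) -> Dict:
    """CBR retrieval step: return the stored case whose diagram is structurally
    closest to the query, along with its known properties (domain, cost, ...)."""
    return max(case_base, key=lambda case: structural_similarity(query, case["edges"]))


if __name__ == "__main__":
    case_base = [
        {"edges": {(1, 2), (1, 3), (3, 4)}, "domain": "billing", "cost_days": 120},
        {"edges": {(1, 2), (2, 3), (3, 1), (3, 4)}, "domain": "logistics", "cost_days": 200},
    ]
    query = {(10, 11), (10, 12), (12, 13)}   # same shape as the first stored case
    best = retrieve_most_similar(query, case_base)
    print(best["domain"], best["cost_days"])  # -> billing 120
```

A real graph-matching scheme would compare richer structural features (relationship types, neighbourhood structure) rather than degree profiles alone; the sketch only shows how purely structural comparison can carry over stored knowledge from similar cases.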
62

Designing and evaluating information spaces : a navigational perspective

McCall, Roderick January 2003
Navigation in two and three dimensional electronic environments has become an important usability issue. Research into the use of hypertext systems suggests that people suffer from a variety of navigational problems in these environments. In addition, users encounter problems in 3D environments and in applications software. Therefore, in order to enhance ease of use, both by preventing errors and by making interaction more pleasurable, the navigating-in-information-space approach to HCI has been adopted. The research presented in this thesis examines whether the study of real world environments, in particular aspects of the built environment, urban planning and environmental psychology, is beneficial in the development of guidelines for interface design and evaluation. In doing so, the thesis examines three main research questions: (1) is there a transfer of design knowledge from real to electronic spaces? (2) can these concepts be provided in a series of useful guidelines? (3) are the guidelines useful for the design and evaluation of electronic spaces? Based upon the results of the two main studies contained within this thesis, it is argued that the navigational perspective is one which is relevant to user interface design and evaluation, and that navigation in electronic spaces is comparable to, but not identical with, actions within the real world. Moreover, the studies pointed to the validity of the core concepts when evaluating 2D and 3D spaces and designing 3D spaces. The thesis also points to the relevance of the overall design guidance in 2D and 3D environments and the ability to make such information available through a software tool.
63

Data quality and data cleaning in database applications

Li, Lin January 2012
Today, data plays an important role in people's daily activities. With the help of database applications such as decision support systems and customer relationship management (CRM) systems, useful information or knowledge can be derived from large quantities of data. However, investigations show that many such applications fail to work successfully. There are many possible reasons for failure, such as poor system infrastructure design or query performance, but nothing is more certain to yield failure than a lack of concern for data quality. High-quality data is a key to today's business success. The quality of any large real-world data set depends on a number of factors, among which the source of the data is often the crucial one. It has now been recognized that an inordinate proportion of data in most data sources is dirty. Obviously, a database application with a high proportion of dirty data is not reliable for the purpose of data mining or deriving business intelligence, and the quality of decisions made on the basis of such business intelligence is also unreliable. In order to ensure high quality of data, enterprises need a process, methodologies and resources to monitor and analyze the quality of data, and methodologies for preventing and/or detecting and repairing dirty data. This thesis focuses on the improvement of data quality in database applications with the help of current data cleaning methods. It provides a systematic and comparative description of the research issues related to the improvement of the quality of data, and addresses a number of research issues related to data cleaning. In the first part of the thesis, the literature on data cleaning and data quality is reviewed and discussed. Building on this review, a rule-based taxonomy of dirty data is proposed in the second part of the thesis. The proposed taxonomy not only summarizes the most common dirty data types but is also the basis on which the proposed method for solving the Dirty Data Selection (DDS) problem during the data cleaning process was developed. This informs the design of the DDS process in the data cleaning framework proposed in the third part of the thesis. This framework retains the most appealing characteristics of existing data cleaning approaches, and improves the efficiency and effectiveness of data cleaning as well as the degree of automation during the data cleaning process. Finally, a set of approximate string matching algorithms is studied and experimental work has been undertaken. Approximate string matching is an important part of many data cleaning approaches and has been well studied for many years. The experimental work in the thesis confirmed that there is no clear best technique: the characteristics of the data, such as the size of a dataset, its error rate, the type of strings it contains and even the type of typo in a string, have a significant effect on the performance of the selected techniques. In addition, the characteristics of the data also affect the selection of suitable threshold values for the selected matching algorithms. The findings from these experimental results provide a fundamental improvement in the design of the 'algorithm selection mechanism' in the data cleaning framework, which enhances the performance of the data cleaning system in database applications.
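As a concrete illustration of the approximate string matching step discussed above, the following sketch applies one widely used technique, Levenshtein edit distance with a length-normalised threshold, to decide whether two dirty field values probably refer to the same entity. The thesis compares several such algorithms and finds no single best one; the specific threshold and example strings here are illustrative assumptions.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            current.append(min(previous[j] + 1,          # deletion
                               current[j - 1] + 1,       # insertion
                               previous[j - 1] + cost))  # substitution
        previous = current
    return previous[-1]


def is_probable_match(a: str, b: str, threshold: float = 0.2) -> bool:
    """Treat two field values as duplicates if the edit distance is a small
    fraction of the longer value. The 0.2 threshold is an illustrative choice;
    as the experiments discussed above suggest, it should be tuned per data set."""
    longest = max(len(a), len(b)) or 1
    return levenshtein(a.lower(), b.lower()) / longest <= threshold


if __name__ == "__main__":
    print(is_probable_match("Edinburgh Napier University",
                            "Edinburgh Napier Univercity"))   # True: a single typo
    print(is_probable_match("Edinburgh", "Glasgow"))          # False
```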
64

A holistic semantic based approach to component specification and retrieval

Li, Chengpu January 2012
Component-Based Development (CBD) has been broadly used in software development as it enhances productivity and reduces the costs and risks involved in systems development. It has become a well-understood and widely used technology for developing not only large enterprise applications, but also a whole spectrum of software applications, as it offers fast and flexible development. However, driven by the continuous expansion of software applications, the increase in component varieties and sizes and the evolution from local to global component repositories, the so-called component mismatch problem has become an even more severe hurdle for component specification and retrieval. This problem not only prevents CBD from reaching its full potential, but also hinders the acceptance of many existing component repositories. To overcome this problem, existing approaches have employed a variety of technologies to support better component specification and retrieval, ranging from early syntax-based (traditional) approaches to recent semantic-based approaches. Although these technologies aim at an accurate description of the component specification and/or the user query, existing semantic-based approaches still fail to be simultaneously precise, automated, semantic-based and domain-capable, as present-day component reuse demands. This thesis proposes an approach, the MVICS-based approach, aimed at achieving holistic, semantic-based and adaptation-aware component specification and retrieval. As the foundation, a Multiple-Viewed and Interrelated Component Specification ontology model (MVICS) is first developed for component specification and repository building. The MVICS model provides an ontology-based architecture to specify components from a range of perspectives; it integrates the knowledge of Component-Based Software Engineering (CBSE), and supports ontology evolution to reflect the continuous developments in CBD and components. A formal definition of the MVICS model is presented, which ensures the rigour of the model and supports a high level of automation in retrieval. Furthermore, the MVICS model has a smooth mechanism for integration with domain-related software system ontologies. Such integration enhances the function and application scope of the MVICS model by bringing more domain semantics into component specification and retrieval. Another feature of the proposed approach is that the effect of possible component adaptation is extended to related components. Finally, a comprehensive profile of the retrieved components presents the search results to the user, from a summary down to details of satisfied and unsatisfied requirements. These features are well integrated, which enables a holistic view in semantic-based component specification and retrieval. A prototype tool was developed to demonstrate the power of the MVICS model in expressing semantics and automating component specification and retrieval; the tool implements the complete process of component search. Three case studies have been undertaken to illustrate and evaluate the usability and correctness of the approach, in terms of supporting accurate component specification and retrieval, seamless linkage with a domain ontology, adaptive component suggestion and a comprehensive profile of the result components.
A conclusion is drawn from an analysis of the feedback from the case studies, which shows that the proposed approach can be deployed in real-life industrial development. The benefits of MVICS include not only improved component search precision and recall, reduced development time and reduced repository maintenance effort, but also less human intervention in CBD.
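The MVICS tooling itself is not shown in the abstract; purely as an illustration of multi-view, profile-producing retrieval of the kind described, the sketch below scores components against a query across several specification views and reports which query terms are satisfied and which are not. The view names, term sets and scoring rule are assumptions for the example, not the MVICS model.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class ComponentSpec:
    """A component described from several views, each holding a set of terms
    (a real ontology-based specification would be far richer than this)."""
    name: str
    views: Dict[str, Set[str]] = field(default_factory=dict)


def match_profile(query: Dict[str, Set[str]], spec: ComponentSpec) -> Dict:
    """Build a result profile: an overall score plus, per view, which query
    terms the component satisfies and which remain unsatisfied."""
    satisfied: Dict[str, Set[str]] = {}
    unsatisfied: Dict[str, Set[str]] = {}
    hits = total = 0
    for view, wanted in query.items():
        offered = spec.views.get(view, set())
        satisfied[view] = wanted & offered
        unsatisfied[view] = wanted - offered
        hits += len(satisfied[view])
        total += len(wanted)
    return {"component": spec.name,
            "score": hits / total if total else 0.0,
            "satisfied": satisfied,
            "unsatisfied": unsatisfied}


def retrieve(query: Dict[str, Set[str]], repo: List[ComponentSpec]) -> List[Dict]:
    """Rank repository components by how well they cover the query."""
    return sorted((match_profile(query, spec) for spec in repo),
                  key=lambda profile: profile["score"], reverse=True)


if __name__ == "__main__":
    repo = [
        ComponentSpec("PdfRenderer", {"function": {"render", "print"},
                                      "context": {"desktop"}}),
        ComponentSpec("ReportEngine", {"function": {"render", "export"},
                                       "context": {"web", "desktop"}}),
    ]
    query = {"function": {"render", "export"}, "context": {"web"}}
    for profile in retrieve(query, repo):
        print(profile["component"], round(profile["score"], 2), profile["unsatisfied"])
```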
65

Novel hyper-heuristics applied to the domain of bin packing

Sim, Kevin January 2014
Central to the ideology behind hyper-heuristic research is the desire to increase the level of generality of heuristic procedures so that they can be easily applied to a wide variety of problems to produce solutions of adequate quality within practical timescales. This thesis examines hyper-heuristics within a single problem domain, that of Bin Packing, where the benefits to be gained from selecting or generating heuristics for large problem sets with widely differing characteristics are considered. Novel implementations of both selective and generative hyper-heuristics are proposed. The former approach attempts to map the characteristics of a problem to the heuristic that best solves it, while the latter uses Genetic Programming techniques to automate the heuristic design process. Results obtained using the selective approach show that solution quality was improved significantly when contrasted with the performance of the best single heuristic applied to large sets of diverse problem instances. While reinforcing the benefits to be gained by selecting from a range of heuristics, the study also highlighted the lack of diversity in human-designed algorithms. Using Genetic Programming techniques to automate the heuristic design process allowed both single heuristics and collectives of heuristics to be generated that were shown to perform significantly better than their human-designed counterparts. The thesis concludes by combining both selective and generative hyper-heuristic approaches into a novel immune-inspired system in which heuristics that cover distinct areas of the problem space are generated. The system is shown to have a number of advantages over similar cooperative approaches in terms of its plasticity, efficiency and long-term memory. Extensive testing of all of the hyper-heuristics developed, on large sets of both benchmark and newly generated problem instances, reinforces the utility of hyper-heuristics in their goal of producing fast, understandable procedures that give good-quality solutions for a range of problems with widely varying characteristics.
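To make the selective idea concrete, the sketch below chooses between two classic bin packing heuristics (first-fit descending and best-fit descending) based on a simple feature of the instance. This is only a toy stand-in for the approach described above: a real selective hyper-heuristic would learn the feature-to-heuristic mapping from many instances, and the feature and threshold used here are illustrative assumptions.

```python
from typing import Callable, Dict, List

Heuristic = Callable[[List[float], float], List[List[float]]]


def first_fit_descending(items: List[float], capacity: float) -> List[List[float]]:
    """Place each item (largest first) into the first bin with enough room."""
    bins: List[List[float]] = []
    for item in sorted(items, reverse=True):
        for bin_ in bins:
            if sum(bin_) + item <= capacity:
                bin_.append(item)
                break
        else:
            bins.append([item])
    return bins


def best_fit_descending(items: List[float], capacity: float) -> List[List[float]]:
    """Place each item (largest first) into the fullest bin that still fits it."""
    bins: List[List[float]] = []
    for item in sorted(items, reverse=True):
        candidates = [bin_ for bin_ in bins if sum(bin_) + item <= capacity]
        if candidates:
            max(candidates, key=sum).append(item)
        else:
            bins.append([item])
    return bins


HEURISTICS: Dict[str, Heuristic] = {"FFD": first_fit_descending,
                                    "BFD": best_fit_descending}


def select_heuristic(items: List[float], capacity: float) -> str:
    """Toy selection rule mapping an instance feature to a heuristic name;
    a real selective hyper-heuristic would learn this mapping from data."""
    mean_fill = sum(items) / (len(items) * capacity)
    return "BFD" if mean_fill > 0.35 else "FFD"


def solve(items: List[float], capacity: float) -> List[List[float]]:
    chosen = select_heuristic(items, capacity)
    return HEURISTICS[chosen](items, capacity)


if __name__ == "__main__":
    items = [0.42, 0.58, 0.3, 0.7, 0.25, 0.75, 0.5]
    print(len(solve(items, capacity=1.0)), "bins used")   # 4 bins for this instance
```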
66

Evaluating book and hypertext : analysis of individual differences

Wilkinson, Simon January 2001
This thesis investigates the usability of an 800 page textbook compared with a hypertext version containing the same information. Hypertext is an interesting new medium in that it is seen as possessing advantages both as a delivery technology, influencing cost and access to information, and as a design technology, influencing student achievement. Unfortunately, the proclamations of its advocates have usually exceeded empirical findings. Also, rapid advances in both hardware and software are necessitating the frequent re-evaluation of contemporary hypertext. In addition to an up-to-date evaluation of the relative performance of book and hypertext supporting set tasks, the research reported in this thesis also sought to analyse specifically the potential role individual differences could play within media evaluation. To do this, the cognitive styles and spatial ability of 57 postgraduate student volunteers, from two computer-related diplomas, were measured. Half the subjects were then randomly assigned to a Book group and half to a Hypertext group. Each group was then allocated the same amount of time to complete two separate tasks: 1) short answer questions analysing the basic information retrieval potential of each medium, and, one week later, 2) four open-ended short essay questions. Surprisingly, subjects assigned to the Book group performed significantly better than those assigned to the Hypertext group for Task 1. The mean academic performance of subjects (the mean mark obtained over the 8 modules of their diploma) predicted most variance in Task 1 performance for both groups. However, with Task 2, the cognitively more demanding exercise, none of the measured individual differences could significantly predict the scores of subjects. Another surprising finding, given that all subjects were studying computing, was that the amount of prior computing experience was found to approach significance for those subjects assigned to Hypertext for Task 1. Given the ease with which this particular individual difference could be manipulated, it was decided to run a second experiment employing subjects with more experience of the Hypertext system used. The results from this second cohort showed no significant differences in score for either task between Book and Hypertext. However, as the more qualitative data from a questionnaire showed, there are a large number of different factors and issues that contribute to the ultimate acceptability of one medium compared with the other. The thesis concludes by recommending a number of possible avenues for future research looking at the role hypertext has to play in the construction of hyperlibraries and Virtual Learning Environments.
67

Generative aspect-oriented component adaptation

Feng, Yankui January 2008
Due to the availability of components and the diversity of target applications, mismatches between pre-qualified existing components and the particular reuse context of an application are often inevitable and have been a major hurdle to component reusability and successful composition. Although component adaptation has acted as a key solution for eliminating these mismatches, existing practices are either capable only of adaptation at the interface level, or require too much intervention from software engineers. Another weakness of existing approaches is the lack of reuse of component adaptation knowledge. Aspect Oriented Programming (AOP) is a methodology that provides separation of crosscutting concerns by introducing a new unit of modularization: an Aspect that crosscuts other modules. In this way, all the complexity associated with the crosscutting concerns is isolated into the Aspects, and hence the final system becomes easier to design, implement and maintain. The nature of AOP makes it particularly suitable for addressing non-functional mismatches in component-based systems. However, current AOP techniques are not powerful enough for efficient component adaptation due to weaknesses including the limited reusability of Aspects, platform-specific Aspects and naive weaving processes. Therefore, existing AOP technology needs to be expanded before it can be used for efficient component adaptation. This thesis presents a highly automated approach to component adaptation through product-line-based generative Aspect Oriented component adaptation. In the approach, the adaptation knowledge is captured in Aspects and is intended to be reusable in various adaptation circumstances. Automatic generation of adaptation Aspects is developed as a key technology to improve the level of automation of the approach and the reusability of adaptation knowledge. This generation is realised by developing a two-dimensional Aspect model, which incorporates the technologies of software product lines and generative programming. The adaptability and automation of the approach are achieved in an Aspect-oriented component adaptation framework by generating and then applying the adaptation Aspects under a designed weaving process according to specific adaptation requirements. To expand the adaptation power of AOP, advanced Aspect weaving processes have been developed with the support of an enhanced Aspect weaver. To promote the reusability of adaptation Aspects, an expandable repository of reusable adaptation Aspects has been developed based on the proposed two-dimensional Aspect model. A prototype tool has been built to realise the approach and automate the adaptation process. Case studies have been carried out to illustrate and evaluate the approach, in terms of its capability to build highly reusable Aspects across various AOP platforms and to provide advanced weaving processes. In summary, the proposed approach applies generative Aspect Oriented adaptation to targeted components to correct the mismatch problem so that the components can be integrated into a target application easily. The automation of the adaptation process, the depth of the adaptation, and the reusability of adaptation knowledge are the advantages of the approach.
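The thesis targets industrial AOP platforms and generates Aspects from a product-line model; none of that is reproduced here. As a purely illustrative sketch of the underlying idea, generating a reusable adaptation Aspect and weaving it around a mismatched component operation, the following uses plain Python decorators. The component, the unit-conversion adaptation and the naive weaving step are all assumptions for the example.

```python
import functools
from typing import Callable, Optional


def make_adaptation_aspect(before: Optional[Callable] = None,
                           after: Optional[Callable] = None) -> Callable:
    """Generate a reusable adaptation Aspect: a decorator weaving optional
    before/after advice around any component operation."""
    def aspect(operation: Callable) -> Callable:
        @functools.wraps(operation)
        def woven(*args, **kwargs):
            if before:
                args, kwargs = before(args, kwargs)       # e.g. validate or convert inputs
            result = operation(*args, **kwargs)
            return after(result) if after else result     # e.g. reformat the result
        return woven
    return aspect


class TemperatureSensor:
    """An off-the-shelf component that reports degrees Fahrenheit."""
    def read(self) -> float:
        return 98.6


def weave(component_cls: type, method_name: str, aspect: Callable) -> None:
    """Naive weaving step: replace a method on the component class with its
    advised version (a real weaver would work on bytecode or via proxies)."""
    setattr(component_cls, method_name, aspect(getattr(component_cls, method_name)))


if __name__ == "__main__":
    # Generate an Aspect that bridges an interface mismatch: the target
    # application expects Celsius but the reused component returns Fahrenheit.
    to_celsius = make_adaptation_aspect(after=lambda f: (f - 32) * 5 / 9)
    weave(TemperatureSensor, "read", to_celsius)
    print(round(TemperatureSensor().read(), 1))           # -> 37.0
```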
68

A software framework for the microscopic modelling of pedestrian movement

Kukla, Robert January 2007
A town planner, faced with the task of designing attractive walking spaces, needs a tool that allows different designs to be compared in terms of their attractiveness as well as their effectiveness. PEDFLOW is an attempt to create such a tool. It is an agent-based, microscopic model of pedestrian flow in which virtual pedestrians navigate a virtual environment. On their way towards a goal the agents, representing pedestrians, interact with features of the environment and with other agents. The microscopic, rule-based actions result in an emergent behaviour that mimics that of real pedestrians. Pedestrians are subject to a multitude of influences when walking, yet the majority of existing models focus on a single aspect, typically the avoidance of obstructions or other pedestrians. PEDFLOW uses an implementation of context-mediated behaviour to enable the agents to deal with multiple cause-effect relations in a well-defined, flexible and highly efficient manner. A variety of mobile and immobile entities can be modelled by objects in an object-oriented environment. The model is informed by an empirical study of pedestrian behaviour, and the parameters of the agents are derived from measures of observed pedestrian movement. PEDFLOW's suitability for pedestrian modelling in the described context is evaluated in both qualitative and quantitative terms. Typical macroscopic movement patterns from the real world, such as "platooning" and "walking with a partner", are selected and the corresponding emergent model behaviours investigated. Measures of service (MOS) are defined and extracted from the model for comparison with real-world measures. As PEDFLOW was created as an interactive tool to be used in an office environment rather than in a high-performance lab, the scalability and performance limitations are explored with regard to the size of the modelled area, the number of modelled pedestrians and the complexity of the interactions between them. It is shown that PEDFLOW can be a useful tool in the urban design process.
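As an illustration of the microscopic, rule-based stepping the abstract describes, the sketch below moves grid-based agents towards a goal, side-stepping obstacles and other agents, and lets queue-like behaviour emerge. It is not PEDFLOW's context-mediated behaviour model: the grid representation, movement rules and scenario are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import List, Set, Tuple

Cell = Tuple[int, int]


@dataclass
class Pedestrian:
    position: Cell
    goal: Cell


def preferred_step(pos: Cell, goal: Cell) -> Cell:
    """Preferred next cell: one step along each axis towards the goal."""
    dx = (goal[0] > pos[0]) - (goal[0] < pos[0])
    dy = (goal[1] > pos[1]) - (goal[1] < pos[1])
    return (pos[0] + dx, pos[1] + dy)


def advance(agent: Pedestrian, obstacles: Set[Cell], occupied: Set[Cell]) -> None:
    """One microscopic time step: take the preferred cell if it is free,
    otherwise try a sideways alternative, otherwise wait."""
    preferred = preferred_step(agent.position, agent.goal)
    alternatives = [(preferred[0], agent.position[1]),
                    (agent.position[0], preferred[1])]
    for candidate in [preferred] + alternatives:
        if (candidate != agent.position
                and candidate not in obstacles
                and candidate not in occupied):
            agent.position = candidate
            return
    # every candidate cell is blocked: the agent waits this time step


if __name__ == "__main__":
    obstacles: Set[Cell] = {(3, y) for y in range(1, 5)}       # a wall with a gap at y = 0
    agents: List[Pedestrian] = [Pedestrian((0, y), (6, 0)) for y in range(3)]
    for _ in range(12):
        occupied = {a.position for a in agents}
        for agent in agents:
            occupied.discard(agent.position)
            advance(agent, obstacles, occupied)
            occupied.add(agent.position)
    print([a.position for a in agents])   # agents funnel through the gap and queue up
```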
69

Metaheuristics for university course timetabling

Lewis, Rhydian M. R. January 2006
The work presented in this thesis concerns the problem of timetabling at universities, particularly course timetabling, and examines the various ways in which metaheuristic techniques might be applied to these sorts of problems. Using a popular benchmark version of a university course timetabling problem, we examine the implications of using a “two-staged” algorithmic approach, whereby in stage one only the mandatory constraints are considered for satisfaction, with stage two then being concerned with satisfying the remaining constraints without re-breaking any of the mandatory constraints in the process. Consequently, algorithms for each stage of this approach are proposed and analysed in detail. For the first stage we examine the applicability of the so-called Grouping Genetic Algorithm (GGA). In our analysis of this algorithm we discover a number of scaling-up issues surrounding the general GGA approach and discuss various reasons as to why this is so. Two separate ways of enhancing general performance are also explored. Secondly, an Iterated Heuristic Search algorithm is proposed for the same problem, and in experiments it is shown to outperform the GGA in almost all cases. Similar observations are witnessed in a second set of experiments, where the analogous problem of colouring equipartite graphs is considered. Two new metaheuristic algorithms are also proposed for the second stage of the two-staged approach: an evolutionary algorithm (with a number of new specialised evolutionary operators), and a simulated annealing-based approach. Detailed analyses of both algorithms are presented and reasons for their relative benefits and drawbacks are discussed. Finally, suggestions are made as to how our best-performing algorithms might be modified in order to deal with further “real-world” constraints. In our analyses of these modified algorithms, as well as witnessing promising behaviour in some cases, we are also able to highlight some of the limitations of the two-staged approach in certain cases.
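A minimal sketch of the two-staged idea discussed above is given below: stage one builds a feasible timetable greedily (in the spirit of graph colouring), and stage two uses simulated annealing to reduce soft-constraint cost while rejecting any move that would re-break a hard constraint. The toy instance, the soft-cost definition and the annealing parameters are illustrative assumptions, not the algorithms developed in the thesis.

```python
import math
import random
from typing import Dict, List, Set, Tuple

# Toy instance: events that share students (a "clash") must not share a
# timeslot (hard constraint); putting two events on the same day is the
# soft-constraint penalty to be minimised in stage two.
Event = int
Slot = int


def clash(a: Event, b: Event, clashes: Set[Tuple[Event, Event]]) -> bool:
    return (a, b) in clashes or (b, a) in clashes


def stage_one(events: List[Event], slots: List[Slot],
              clashes: Set[Tuple[Event, Event]]) -> Dict[Event, Slot]:
    """Feasibility stage: greedily give each event the first slot that breaks
    no hard constraint (in the spirit of greedy graph colouring)."""
    timetable: Dict[Event, Slot] = {}
    for event in events:
        for slot in slots:
            if all(not clash(event, other, clashes)
                   for other, s in timetable.items() if s == slot):
                timetable[event] = slot
                break
        else:
            raise ValueError(f"no feasible slot for event {event}")
    return timetable


def soft_cost(timetable: Dict[Event, Slot], slots_per_day: int) -> int:
    """Count events beyond the first on each day (illustrative soft objective)."""
    days = [slot // slots_per_day for slot in timetable.values()]
    return sum(days.count(day) - 1 for day in set(days))


def stage_two(timetable: Dict[Event, Slot], slots: List[Slot],
              clashes: Set[Tuple[Event, Event]], slots_per_day: int,
              iterations: int = 2000, start_temp: float = 2.0) -> Dict[Event, Slot]:
    """Simulated annealing on the soft cost; any move that would re-break a
    hard constraint is rejected outright, preserving stage-one feasibility."""
    current = dict(timetable)
    for i in range(iterations):
        event = random.choice(list(current))
        slot = random.choice(slots)
        if any(clash(event, other, clashes)
               for other, s in current.items() if other != event and s == slot):
            continue                                  # would violate a hard constraint
        candidate = dict(current)
        candidate[event] = slot
        delta = soft_cost(candidate, slots_per_day) - soft_cost(current, slots_per_day)
        temperature = start_temp * (1 - i / iterations) + 1e-9
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current = candidate
    return current


if __name__ == "__main__":
    random.seed(0)
    events, slots = list(range(4)), list(range(6))    # 3 days x 2 slots per day
    clashes = {(0, 1), (2, 3)}
    feasible = stage_one(events, slots, clashes)
    improved = stage_two(feasible, slots, clashes, slots_per_day=2)
    print(feasible, soft_cost(feasible, 2))
    print(improved, soft_cost(improved, 2))
```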
70

An expert system for the performance control of rotating machinery

Pearson, William N. January 2000
The research presented in this thesis examines the application of feed-forward neural networks to the performance control of a gas transmission compressor. It is estimated that a global saving in compressor fuel gas of 1% could save the production of 6 million tonnes of CO2 per year. Current compressor control philosophy pivots around prevention of surge, or anti-surge control. Prevention of damage to high capital cost equipment is a key control driver, but other factors, such as environmental emissions restrictions, require the most efficient use of fuel. This requires reliable and accurate performance control. A steady state compressor model was developed. Actual compressor performance characteristics were used in the model and correlations were applied to determine the adiabatic head characteristics for changed process conditions. The techniques of neural network function approximation and pattern recognition were investigated. The use of neural networks can avoid the potential difficulties in specifying regression model coefficients, and neural networks can be readily re-trained, once a database is populated, to reflect the changing characteristics of a compressor. Research into the use of neural networks to model compressor performance characteristics is described. A program of numerical testing was devised to assess the performance of neural networks. Testing was designed to evaluate the effect of training set size, signal noise, extrapolated data, random data and the use of normalised compressor coefficient data on compressor speed estimates. Data sets were generated using the steady state compressor model. The results of the numerical testing are discussed. Established control paradigms are reviewed and uses of neural networks in control systems are identified; these were generally to be found in the areas of adaptive or model predictive control. Algorithms required to implement a novel compressor performance control scheme are described. A review of plant control hierarchies identified how the scheme might be implemented. The performance control algorithm evaluates the current process load and either suggests a new compressor speed or updates the neural network model. Compressor speed can be predicted to approximately ±2.5% using a neural network based model predictive performance controller. Comparisons with previous work suggest potential global savings of 34 million tonnes of CO2 emissions per year. A generic rotating machinery performance control expert system is proposed.
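To illustrate the kind of neural network function approximation investigated above, the sketch below trains a small one-hidden-layer feed-forward network by gradient descent to predict a normalised compressor speed from two operating-point features. The synthetic data, network size and learning rate are illustrative assumptions; the thesis's actual data sets came from its steady state compressor model, and its controller design is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: inputs are (normalised flow, normalised head); the
# target is a normalised speed that grows with both, plus measurement noise.
X = rng.uniform(0.2, 1.0, size=(200, 2))
y = (0.6 * X[:, 0] + 0.8 * np.sqrt(X[:, 1])
     + 0.02 * rng.standard_normal(200)).reshape(-1, 1)

# One hidden layer of tanh units with a linear output, trained by batch
# gradient descent on the mean squared error.
n_hidden, lr = 8, 0.1
W1 = rng.standard_normal((2, n_hidden)) * 0.5
b1 = np.zeros((1, n_hidden))
W2 = rng.standard_normal((n_hidden, 1)) * 0.5
b2 = np.zeros((1, 1))

for _ in range(5000):
    H = np.tanh(X @ W1 + b1)            # forward pass
    pred = H @ W2 + b2
    err = pred - y                      # gradient of 0.5 * MSE w.r.t. the output
    dW2 = H.T @ err / len(X)            # backward pass
    db2 = err.mean(axis=0, keepdims=True)
    dH = err @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0, keepdims=True)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Predict the speed for a new operating point and compare with the noiseless target.
point = np.array([[0.5, 0.7]])
estimate = (np.tanh(point @ W1 + b1) @ W2 + b2).item()
target = 0.6 * 0.5 + 0.8 * np.sqrt(0.7)
print(round(estimate, 3), round(float(target), 3))
```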
