51

The derivation of a pragmatic requirements framework for web development

Jeary, Sherry January 2010 (has links)
Web-based development is a relatively immature area of Software Engineering, often producing complex applications for many different types of end users and stakeholders. Web Engineering, as a research area, was created to introduce processes that enable web-based development to be repeatable and to avoid potential failure in the fast-changing landscape that is the current ubiquitous Internet. A survey of existing perspectives from the literature highlights a number of points: firstly, that web development has a number of subtle differences from Software Engineering and that many web development methods are not used; further, that there has been little work on what should be in a web development method. A full survey of 50 web development methods finds that they do not give enough detail to be used in their entirety, that the techniques they use are difficult for a non-computer scientist to understand, and that most do not cover the full lifecycle, particularly in the areas of requirements, implementation and testing. This thesis introduces a requirements framework for novice web developers. It is created following an in-depth case study, carried out over two years, that investigates the use of web development methods by novice developers. The study finds that web development methods are not easy to understand, that there is a lack of explanation of how to use the techniques within a method, and that the language used is too complex. A high-level method is derived with an iterative process and with the requirements phase in the form of a framework; it addresses the problems that are discussed and provides excellent support for a novice web developer in the requirements phase of the lifecycle. An evaluation of the method by a group of novice developers who reflect on the method and a group who use it for development finds that the method is both easy to understand and easy to use.
52

Realising context-oriented information filtering

Webster, David Edward January 2010 (has links)
The notion of information overload is an increasing factor in modern information service environments where information is ‘pushed’ to the user. As increasing volumes of information are presented to computing users in the form of email, web sites, instant messaging and news feeds, there is a growing need to filter and prioritise the importance of this information. ‘Information management’ needs to be undertaken in a manner that not only prioritises the information we do need, but also disposes of information that is sent to us but is of no (or little) use. The development of a model to aid information filtering in a context-aware way is an objective of this thesis. A key concern in the conceptualisation of a single concept is understanding the context under which that concept exists (or can exist). An example of a concept is a concrete object, for instance a book. This contextual understanding should provide us with clear conceptual identification of a concept, including implicit situational information and detail of surrounding concepts. Existing solutions to filtering information suffer from their own unique flaws: text-based filtering suffers from problems of inaccuracy; ontology-based solutions suffer from scalability challenges; taxonomies suffer from problems with collaboration. A major objective of this thesis is to explore the use of an evolving, community-maintained knowledge-base (that of Wikipedia) in order to populate the context model with prioritised concepts that are semantically relevant to the user’s interest space. Wikipedia can be classified as a weak knowledge-base due to its simple TBox schema and implicit predicates; part of this objective is therefore to validate the claim that a weak knowledge-base is fit for this purpose. The proposed and developed solution therefore provides the benefits of high-recall filtering with low fallout and a dependency on a scalable and collaborative knowledge-base. A simple web feed aggregator, DAVe’s Rss Organisation System (DAVROS-2), has been built using the Java programming language as a testbed environment to demonstrate specific tests used within this investigation. The motivation behind the experiments is to demonstrate that the concept framework, instantiated through Wikipedia, can provide a framework to aid concept comparison, and can therefore be used in a news filtering scenario as an example of information overload. In order to evaluate the effectiveness of the method, well-understood measures of information retrieval are used. This thesis demonstrates that the developed contextual concept expansion framework (instantiated using Wikipedia) improved the quality of concept filtering over a baseline based on string matching. This has been demonstrated through the analysis of recall and fallout measures.
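The evaluation above rests on standard information retrieval measures. As a brief, hedged illustration (not the thesis's DAVROS-2 code; the sets and item names are hypothetical), recall and fallout over a filtered feed can be computed as follows:

    # Minimal sketch of the recall and fallout measures used in the evaluation.
    # `retrieved`, `relevant` and `universe` are hypothetical sets of feed-item ids.

    def recall(retrieved: set, relevant: set) -> float:
        """Fraction of relevant items that the filter actually retrieved."""
        return len(retrieved & relevant) / len(relevant) if relevant else 0.0

    def fallout(retrieved: set, relevant: set, universe: set) -> float:
        """Fraction of non-relevant items that the filter wrongly retrieved."""
        non_relevant = universe - relevant
        return len(retrieved & non_relevant) / len(non_relevant) if non_relevant else 0.0

    universe = {f"item{i}" for i in range(10)}
    relevant = {"item1", "item2", "item3"}
    retrieved = {"item1", "item2", "item7"}
    print(recall(retrieved, relevant))             # 2/3: two of three relevant items kept
    print(fallout(retrieved, relevant, universe))  # 1/7: one non-relevant item let through

A filter that raises recall while keeping fallout low is making exactly the trade-off the thesis targets.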
53

Building well-performing classifier ensembles : model and decision level combination

Eastwood, Mark January 2010 (has links)
There is a continuing drive for better, more robust generalisation performance from classification systems, and from prediction systems in general. Ensemble methods, or the combining of multiple classifiers, have become an accepted and successful tool for achieving this, though the reasons for their success are not always entirely understood. In this thesis, we review the multiple classifier literature and consider the properties an ensemble of classifiers - or collection of subsets - should have in order to be combined successfully. We find that the framework of Stochastic Discrimination (SD) provides a well-defined account of these properties, which are shown to be strongly encouraged in a number of the most popular and successful methods in the literature via differing algorithmic devices. This uncovers some interesting and basic links between these methods, and aids understanding of their success and operation in terms of a kernel induced on the training data, with a form particularly well suited to classification. One property that is desirable both in the SD framework and in a regression context, via the ambiguity decomposition of the error, is de-correlation of the individuals. This motivates the introduction of the Negative Correlation Learning (NCL) method, in which neural networks are trained in parallel in a way designed to encourage de-correlation of the individual networks. The training is controlled by a parameter λ governing the extent to which correlations are penalised. Theoretical analysis of the dynamics of training results in an exact expression for the interval in which λ can be chosen while ensuring stability of the training, and a value λ∗ for which the training has some interesting optimality properties. These values depend only on the size N of the ensemble. Decision-level combination methods often result in a model that is difficult to interpret, and NCL is no exception. However, in some applications there is a need for understandable decisions and interpretable models. In response to this, we depart from the standard decision-level combination paradigm to introduce a number of model-level combination methods. As decision trees are one of the most interpretable model structures used in classification, we choose to combine structure from multiple individual trees to build a single combined model. We show that extremely compact, well-performing models can be built in this way. In particular, a generalisation of bottom-up pruning to a multiple-tree context produces good results in this regard. Finally, we develop a classification system for a real-world churn prediction problem, illustrating some of the concepts introduced in the thesis, together with a number of more practical considerations which are of importance when developing a prediction system for a specific problem.
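Negative Correlation Learning itself follows a standard formulation in the literature; the sketch below shows the λ-weighted penalty and the commonly used per-network gradient for a toy set of ensemble outputs. It is a hedged illustration of the general NCL idea, not the analysis or implementation from the thesis, and the numbers are invented.

    import numpy as np

    # Standard NCL error for member i of an N-strong ensemble:
    #   e_i = 0.5 * (F_i - d)^2 + lam * p_i,  p_i = (F_i - Fbar) * sum_{j != i}(F_j - Fbar)
    # with the commonly used gradient  (F_i - d) - lam * (F_i - Fbar).
    def ncl_errors(outputs: np.ndarray, target: float, lam: float):
        f_bar = outputs.mean()
        penalties = np.array([
            (f_i - f_bar) * (outputs.sum() - f_i - (len(outputs) - 1) * f_bar)
            for f_i in outputs
        ])
        errors = 0.5 * (outputs - target) ** 2 + lam * penalties
        grads = (outputs - target) - lam * (outputs - f_bar)
        return errors, grads

    outputs = np.array([0.2, 0.6, 0.9])           # invented member outputs for one example
    errors, grads = ncl_errors(outputs, target=0.5, lam=0.5)
    print(errors, grads)

Setting λ = 0 recovers independent training; larger λ pushes members away from the ensemble mean, which is where the stability interval analysed in the thesis becomes important.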
54

Critical computer animation : an examination of "practice as research" and its reflection and review processes

Lo-Garry, Yasumiko Cindy Tszyan January 2010 (has links)
My doctoral study investigated the “Practice as Research” model for critical 3D computer animation. I designed a structure for the model using mixed research methods and a critical process, and first applied this proposed methodology in a pilot study to examine some selected methods and identify other techniques required for this research model. The refined “Practice as Research” model was then applied to different fields of animation - a game development project, a narrative animation, and an experimental animation - for detailed analysis and improvement of its flexibility. The study examined a variety of practices and procedures used by animators and studios and identified processes for the analysis and evaluation of computer animation. Within the research space created in both the commercial project and the experimental works, I demonstrated that there were effective and differing procedures, depending on the application and its target qualities. I also clarified some of the basic differences between traditional animation techniques and 3D skills, and hence explained and modified some of the well-established animation practices to best suit 3D animation development. The “Practice as Research” model brought critical research methods and attitudes into industrial settings to expand receptiveness to new experiences and knowledge, shifting away from the common product-oriented creative view. The model naturally led a practitioner to re-examine their own perspective and previous ways of working. It showed that the “Practice as Research” approach could increase creativity in a product while maintaining control of time management, and could encourage animators to welcome other perspectives. The research concluded that if the “Practice as Research” model is used properly, it can be an effective and efficient method to satisfy both commercial quality and personal development. Perhaps the most interesting part of the research was the search for the animator’s mindset, personal qualities, preconceptions and preferences that could influence practices and quality. With that additional information, I refined the proposed “Practice as Research” model so that it allowed animators to modify their previous ways of working and thinking during the process, and encouraged continuous development aimed at a higher quality of work.
55

Using small world models to study infection communication and control

Ganney, Paul Sefton January 2011 (has links)
The modelling of infection transmission has taken many forms: the simple Susceptible-Infected-Removed (SIR) model yields good epidemiological results, but is not well suited to modelling the application of interventions. Attention has focused in recent years on graph (network) models, and especially on those exhibiting the small-world properties described by Watts and Strogatz in “Nature” in 1998. This thesis examines such graph models, discovering several attributes which may yield improved results. In order to quantify the effects of these proposals, a classification system was developed together with a Goodness-of-Fit (GoF) measure. Additionally, a questionnaire was developed to reveal the operational organisational structure of the NHS Trust being examined. The resultant theoretical model was implemented in software and seeded with a graph derived from this questionnaire. This model was then examined to determine the effectiveness of these proposals, as measured via the GoF. The additional features proving beneficial were shown to be: full directionality in the graphs; the modelling of unknown paths via a new concept termed an “external path”; the division of the probability of infection transmission into three components; and the seeding of the model with a graph derived from an organisational questionnaire. The resulting model was shown to yield very good results and to be applicable to modelling both infection propagation and control.
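For readers unfamiliar with the ingredients named above, the sketch below runs a simple SIR process over a Watts-Strogatz small-world graph using the networkx library. It is a hedged illustration with invented parameter values, not the thesis's model, which adds full directionality, external paths and a three-way split of the transmission probability.

    import random
    import networkx as nx

    random.seed(1)
    G = nx.watts_strogatz_graph(n=200, k=6, p=0.1, seed=1)   # small-world contact graph

    beta, gamma = 0.15, 0.1                          # illustrative transmission / removal probabilities
    state = {node: "S" for node in G}                # S = susceptible, I = infected, R = removed
    state[0] = "I"                                   # seed a single infection

    for step in range(100):
        new_state = dict(state)
        for node in G:
            if state[node] == "I":
                for nbr in G.neighbors(node):        # attempt to infect susceptible neighbours
                    if state[nbr] == "S" and random.random() < beta:
                        new_state[nbr] = "I"
                if random.random() < gamma:          # infected node is removed
                    new_state[node] = "R"
        state = new_state

    print(sum(1 for s in state.values() if s != "S"), "nodes were ever infected")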
56

A computational framework for similarity estimation and stimulus reconstruction of Hodgkin-Huxley neural responses

Sarangdhar, Mayur January 2010 (has links)
Periodic stimuli are known to induce chaotic oscillations in the squid giant axon for a certain range of frequencies, a behaviour modelled by the Hodgkin-Huxley equations. In the presence of chaotic oscillations, similarity between neural responses depends on their temporal nature, as firing times and amplitudes together reflect the true dynamics of the neuron. This thesis presents a method to estimate similarity between neural responses exhibiting chaotic oscillations by using both amplitude fluctuations and firing times. It is observed that identical stimuli have a similar effect on the neural dynamics and therefore, as the temporal inputs to the neuron are identical, the occurrence of similar dynamical patterns results in a high estimate of similarity, which correlates with the observed temporal similarity. The information about a neural activity is encoded in a neural response, and usually the underlying stimulus that triggers the activity is unknown. Thus, this thesis also presents a numerical solution to reconstruct stimuli from Hodgkin-Huxley neural responses while retrieving the neural dynamics. The stimulus is reconstructed by first retrieving the maximal conductances of the ion channels and then solving the Hodgkin-Huxley equations for the stimulus. The results show that the reconstructed stimulus is a good approximation of the original stimulus, while the retrieved neural dynamics, which represent the voltage-dependent changes in the ion channels, help to understand the changes in neural biochemistry. As the high non-linearity of neural dynamics renders analytical inversion of a neuron an arduous task, a numerical approach provides a local solution to the problem of stimulus reconstruction and neural dynamics retrieval.
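For context, the membrane equation referred to above is the classical Hodgkin-Huxley model; the sketch below integrates it with forward Euler using the standard squid-axon parameters and an invented constant stimulus. It is a hedged illustration only and does not attempt the thesis's similarity estimation or stimulus reconstruction.

    import numpy as np

    # Classical Hodgkin-Huxley parameters (squid giant axon).
    C_m = 1.0                                 # membrane capacitance (uF/cm^2)
    g_Na, g_K, g_L = 120.0, 36.0, 0.3         # maximal conductances (mS/cm^2)
    E_Na, E_K, E_L = 50.0, -77.0, -54.387     # reversal potentials (mV)

    def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
    def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
    def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

    dt, T = 0.01, 50.0                        # time step and duration (ms)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32       # typical resting-state initial values
    I_ext = 10.0                              # invented constant stimulus (uA/cm^2)
    trace = []

    for _ in range(int(T / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)   # sodium, potassium and leak currents
        I_K  = g_K  * n**4 * (V - E_K)
        I_L  = g_L  * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace.append(V)

    print("peak membrane potential:", max(trace), "mV")

Reconstructing the stimulus, as the thesis does, amounts to running this forward model in reverse: given the recorded response, recover the maximal conductances and then solve for the stimulus term.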
57

Multi-objective system optimisation with respect to availability, maintainability and cost

Nggada, Shawulu Hunira January 2012 (has links)
Safety-critical engineering systems are becoming increasingly larger and more complex. One way of ensuring the dependability of such systems is via architectural redundancy and replication of components. Use of redundancy has its limitations, though, as it can increase the size, weight and cost of a system beyond acceptable levels. An alternative approach to improving dependability is to design the system with preventive maintenance (PM) in mind. A well-articulated PM policy can reduce the occurrence of system failure, thereby improving dependability attributes such as safety, reliability and availability, as well as cost. In a typical scenario, components of the system are maintained periodically at a fixed time interval (a month, a year, etc.). This interval may vary from component to component, and therefore the determination of an optimal PM schedule for all components in the system is non-trivial. The options for maintenance are simply too many to exhaustively enumerate and evaluate, and therefore the choice of an optimal PM schedule that provides the best trade-offs between dependability and cost becomes a search and optimisation problem. It is precisely this problem that this thesis addresses. Firstly, the thesis investigates the effects of perfect and imperfect preventive maintenance policies on system reliability, availability and cost by establishing mathematical models for both policies. Secondly, a multi-objective optimisation approach is formulated for PM scheduling that takes into account dependability and cost, and finally the approach is evaluated on two case studies using a well-established semi-automated dependability analysis tool, HiP-HOPS. The approach allows automatic model transformations, such as substitution of components, as well as PM schedules to be applied by Genetic Algorithms as mechanisms for automatically improving design and achieving trade-offs between dependability and cost. Results from the case studies show that this approach can provide an effective tool for the definition of PM schedules and lead to engineering and economic benefits.
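As a hedged illustration of the kind of reliability model involved (not the thesis's own perfect/imperfect PM formulations), a component under a perfect periodic PM policy is often modelled as being restored to "as good as new" every T hours, giving R_pm(t) = R(T)^k · R(t - kT) with k = floor(t/T). The sketch below evaluates this for an assumed Weibull component with invented parameters.

    import math

    # Hedged sketch: reliability of one component under *perfect* periodic PM,
    # assuming a Weibull baseline R(t) = exp(-(t/eta)**beta). eta, beta and the
    # PM interval T are invented values, not taken from the thesis.
    def weibull_reliability(t: float, eta: float = 1000.0, beta: float = 1.5) -> float:
        return math.exp(-((t / eta) ** beta))

    def reliability_with_pm(t: float, T: float, eta: float = 1000.0, beta: float = 1.5) -> float:
        k = int(t // T)                       # completed PM intervals before time t
        return weibull_reliability(T, eta, beta) ** k * weibull_reliability(t - k * T, eta, beta)

    t = 2500.0
    print("no PM:        ", weibull_reliability(t))
    print("PM every 500h:", reliability_with_pm(t, T=500.0))

Searching over per-component intervals such as T for the best dependability/cost trade-off is what turns the scheduling task into the multi-objective optimisation problem addressed by the thesis.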
58

XML documents schema design

Zainol, Zurinahni January 2012 (has links)
The eXtensible Markup Language (XML) is fast emerging as the dominant standard for storing, describing and interchanging data among various systems and databases on the internet. It offers schema languages such as the Document Type Definition (DTD) or XML Schema Definition (XSD) for defining the syntax and structure of XML documents. To enable efficient usage of XML documents in any application in a large-scale electronic environment, it is necessary to avoid data redundancies and update anomalies. Redundancy and anomalies in XML documents can lead not only to higher data storage costs but also to increased costs for data transfer and data manipulation. To overcome this problem, this thesis proposes to establish a formal framework for XML document schema design. To achieve this aim, we propose a method to improve and simplify XML schema design by incorporating a conceptual model of the DTD with a theory of database normalization. A conceptual diagram, the Graph-Document Type Definition (G-DTD), is proposed to describe the structure of XML documents at the schema level. For the G-DTD itself, we define a structure which incorporates attributes, simple elements, complex elements, and the relationship types among them. Furthermore, semantic constraints are precisely defined in order to capture semantic meanings among the defined XML objects. In addition, to provide a guideline towards a well-designed schema for XML documents, we propose a set of normal forms for G-DTD on the basis of rules proposed by Arenas and Libkin and by Lv et al. The corresponding normalization rules to transform a G-DTD into a normal form schema are also discussed. A case study is given to illustrate the applicability of the concept. As a result, we find that the new normal forms are more concise and practical, in particular as they allow the user to find an 'optimal' structure of XML elements/attributes at the schema level. To prove that our approach is applicable for the database designer, we develop a prototype of XML document schema design using the Z formal specification language. Finally, using the same case study, this formal specification is tested to check the correctness and consistency of the specification. This gives confidence that our prototype can be implemented successfully to generate an automatic XML schema design.
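As a small, hedged illustration of the redundancy problem described above (a made-up document, not the thesis's G-DTD notation): when a publisher's details are repeated under every book element, the same fact is stored many times and an update must touch every copy.

    import xml.etree.ElementTree as ET

    # Illustrative XML with schema-level redundancy: the publisher details are
    # duplicated under every <book>, which is the kind of design that the
    # proposed normal forms are intended to rule out.
    doc = ET.fromstring("""
    <library>
      <book title="XML Basics"><publisher name="Acme Press" city="Hull"/></book>
      <book title="Schema Design"><publisher name="Acme Press" city="Hull"/></book>
    </library>
    """)

    publishers = [p.attrib for p in doc.iter("publisher")]
    print("stored publisher facts:", len(publishers))                                      # 2
    print("distinct publishers:   ", len({tuple(sorted(p.items())) for p in publishers}))  # 1

    # A normalised schema would store the publisher once and have each <book>
    # reference it, removing the duplication counted above.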
59

A multilayered agent society for flexible image processing

Hassan, Qais Mahmoud January 2008 (has links)
Medical imaging is revolutionising the practice of medicine, and it is becoming an indispensable tool for several important tasks, such as the inspection of internal structures, radiotherapy planning and surgical simulation. However, accurate and efficient segmentation and labelling of anatomical structures is still a major obstacle to computerised medical image analysis. Hundreds of image segmentation algorithms have been proposed in the literature, yet most of these algorithms are either derivatives of low-level algorithms or created in an ad-hoc manner in order to solve a particular segmentation problem. This research proposes the Agent Society for Image Processing (ASIP), an intelligent, customisable framework for image segmentation motivated by active contours and multi-agent systems. ASIP is presented in a hierarchical manner as a multilayer system consisting of several high-level agents (layers). The bottom layers contain a society of rational, reactive MicroAgents that adapt their behaviour according to changes in the world, combined with their knowledge about the environment. On top of these layers are the knowledge and shape agents responsible for creating the artificial environment and setting up the logical rules and restrictions for the MicroAgents. At the top layer is the cognitive agent, in charge of plan handling and user interaction. The framework as a whole is comparable to an enhanced active contour model (body) with a higher intelligent force (mind) initialising and controlling the active contour. The ASIP framework was customised for the automatic segmentation of the Left Ventricle (LV) from a 4D MRI dataset. Although no pre-computed knowledge was utilised in the LV segmentation, good results were obtained from segmenting several patients' datasets. The output of the segmentation was compared with several snake-based algorithms and evaluated against manually segmented "reference images" using various empirical discrepancy measurements.
60

Robo-CAMAL : anchoring in a cognitive robot

Gwatkin, James January 2009 (has links)
The CAMAL architecture (Computational Architectures for Motivation, Affect and Learning) provides an excellent framework within which to explore and investigate issues relevant to cognitive science and artificial intelligence. This thesis describes a small sub-element of the CAMAL architecture that has been implemented on a mobile robot. The first area of investigation within this research relates to the anchoring problem: can the robotic agent generate symbols based on responses within its perceptual systems, and can it reason about its environment based on those symbols? Given that the agent can identify changes within its environment, can it then adapt its behaviour and alter its goals to mirror the change in its environment? The second area of interest involves agent learning. The agent has a domain model that details its goals, the actions it can perform and some of the possible environmental states it may encounter. The agent is not provided with the belief-goal-action combinations needed to achieve its goals, and it is also unaware of the effect its actions have upon its environment. Can the agent experiment with its behaviour to generate its own belief-goal-action combinations that allow it to achieve its goals? A second, related problem involves the case where the belief-goal-action combinations are pre-programmed, that is, where the agent is provided with several different methods with which to achieve a specific goal. Can the agent learn which combination is the best? This thesis describes the sub-element of the CAMAL architecture that was developed for a robot (robo-CAMAL). It also demonstrates how robo-CAMAL solves the anchoring problem, and learns how to act and adapt in its environment.
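The second learning question above (choosing among several pre-programmed ways of achieving a goal) can be pictured with a very simple success-rate learner. This is only a hypothetical sketch of the idea, not the mechanism used in robo-CAMAL; the combination names and success probabilities are invented.

    import random

    # Hypothetical sketch: the agent has several pre-programmed belief-goal-action
    # combinations for one goal and learns from observed outcomes which works best.
    random.seed(0)
    true_success = {"approach-then-push": 0.7, "circle-then-push": 0.4, "wait-and-push": 0.2}
    stats = {name: {"tries": 0, "successes": 0} for name in true_success}

    def attempt(name: str) -> bool:
        """Simulated world: did the chosen combination achieve the goal?"""
        return random.random() < true_success[name]

    def success_rate(name: str) -> float:
        return stats[name]["successes"] / max(stats[name]["tries"], 1)

    for trial in range(300):
        if random.random() < 0.1:                      # occasionally explore a random option
            choice = random.choice(list(true_success))
        else:                                          # otherwise exploit the best estimate so far
            choice = max(stats, key=success_rate)
        stats[choice]["tries"] += 1
        stats[choice]["successes"] += attempt(choice)

    print("learned best combination:", max(stats, key=success_rate))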
