191

Use of the concept of situation room analysis and the relevant enabling technologies to support collaboration in the IT product development

Koumpis, Adamantios January 2006 (has links)
No description available.
192

QoS oriented framework for link selection in heterogeneous wireless environments

Wilson, Ashton January 2008 (has links)
Wireless access to multimedia services is now commonplace, with many different devices and a choice of access technologies. Access methods have diversified alongside the growth in service types, requiring greater attention to the coordination of existing protocols and quality of service (QoS). Increasingly, new wireless access technologies co-exist on the same device; smartphones, for example, already combine third-generation cellular and WiFi. Devices with multiple links are described under the umbrella term ‘heterogeneous environments’. The trend towards heterogeneous wireless environments and varied types of media services requires that QoS and user satisfaction are prominent in next-generation networks. The problems in next-generation heterogeneous wireless environments span many levels of complexity, from link coexistence to user-centric policies and contexts. This thesis explores the issue of QoS in interface selection for devices with more than one wireless access link. A solution that provides link selection driven by QoS policy is investigated using analytical and simulation techniques. Different wireless networks have distinct capabilities and limitations, determined by radio technology and network conditions. The research focused on improving QoS by exploiting these differences, an approach complicated by issues such as differing protocols, physical device co-existence, mobility, and application QoS requirements. Following a review of artificial intelligence (AI) techniques, finite-state machines (FSMs) and fuzzy decision-making (FDM) are proposed as a solution approach. An agent-based prototype combines FSMs and FDM to automate link selection, as determined by user and QoS policy. The prototype was evaluated using sensitivity analysis for the FDM, and discrete-event simulation for generating QoS metrics in wireless environments. The results compare FDM prototypes under different parameters: agent prototypes were run under varying QoS conditions to compare handover points between UMTS and WLAN networks for a single service type. The research has shown that an agent model can reduce the complexity of wireless interface selection for the user while incorporating QoS metrics and user preferences into the decision process. The core decision-making techniques in the design are relevant to emerging standardisation frameworks such as IEEE 802.21 and to the next generation of wireless networks supporting heterogeneous access.
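To make the fuzzy link-selection idea concrete, the sketch below scores candidate links against simple QoS criteria and picks the best under a user policy. It is a minimal illustration only: the membership functions, metric names (delay_ms, loss_pct, bandwidth_mbps), thresholds, and policy weights are all assumptions for the example rather than parameters from the thesis, and the FSM and agent machinery is omitted.

```python
def ramp_down(x, zero_at):
    """Fuzzy 'good' membership: 1.0 at x = 0, falling linearly to 0 at zero_at."""
    return max(0.0, min(1.0, 1.0 - x / zero_at))

def ramp_up(x, one_at):
    """Fuzzy 'good' membership: 0.0 at x = 0, rising linearly to 1 at one_at."""
    return max(0.0, min(1.0, x / one_at))

def link_score(metrics, weights):
    """Weighted fuzzy desirability of one candidate link under a QoS policy."""
    scores = {
        "delay": ramp_down(metrics["delay_ms"], zero_at=150.0),        # lower is better
        "loss": ramp_down(metrics["loss_pct"], zero_at=5.0),           # lower is better
        "bandwidth": ramp_up(metrics["bandwidth_mbps"], one_at=10.0),  # higher is better
    }
    total = sum(weights.values())
    return sum(weights[k] * scores[k] for k in scores) / total

def select_link(candidates, weights):
    """Pick the candidate link with the highest fuzzy score."""
    return max(candidates, key=lambda c: link_score(c["metrics"], weights))

umts = {"name": "UMTS", "metrics": {"delay_ms": 120, "loss_pct": 1.0, "bandwidth_mbps": 2.0}}
wlan = {"name": "WLAN", "metrics": {"delay_ms": 30, "loss_pct": 2.5, "bandwidth_mbps": 8.0}}
policy = {"delay": 0.5, "loss": 0.3, "bandwidth": 0.2}  # user/QoS policy weights
print(select_link([umts, wlan], policy)["name"])
```

A delay-sensitive policy like the one above favours the low-latency WLAN despite its higher loss; shifting weight onto the loss criterion would tip the handover decision back towards UMTS.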
193

Background modelling and performance metrics for visual surveillance

Lazarevic, N. January 2011 (has links)
This work deals with the problems of performance evaluation and background modelling for the detection of moving objects in outdoor video surveillance datasets. Such datasets are typically affected by considerable background variations caused by global and partial illumination variations, gradual and sudden lighting condition changes, and non-stationary backgrounds. The large variation of backgrounds in typical outdoor video sequences requires highly adaptable and robust models able to represent the background at any time instance with sufficient accuracy. Furthermore, in real-life applications it is often required to detect possible contaminations of the scene in real time or as new observations become available. A novel adaptive multi-modal algorithm for on-line background modelling is proposed. The proposed algorithm applies the principles of the Gaussian Mixture Model, previously used to model the grey-level (or colour) variations of individual pixels, to the modelling of illumination variations in image regions. The image observations are represented in the eigen-space, where the dimensionality of the data is significantly reduced using the method of principal component analysis. The projections of image regions in the reduced eigen-space are clustered using K-means into clusters (or modes) of similar backgrounds and are modelled as multivariate Gaussian distributions. Such an approach allows the model to adapt to changes in the dataset in a timely manner. This work also proposes modifications to a previously published method for the incremental update of uni-modal eigen-models. The modifications are twofold: first, the incremental update is performed on the individual modes of the multi-modal model; second, the mechanism for adding new dimensions is adapted to handle problems typical of outdoor video surveillance scenes with a wide range of illumination changes. Finally, a novel, objective, comparative, object-based methodology for the performance evaluation of object detection is also developed. The proposed methodology is concerned with the evaluation of object detection in the context of end-user-defined quality of performance in complex video surveillance applications.
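As a rough illustration of the region-level modelling described above, the following sketch projects vectorised image regions into a reduced eigen-space via PCA, clusters the projections with K-means into background modes, and fits a Gaussian to each mode. Random data stands in for real image regions, covariances are taken as diagonal for brevity, and all sizes and thresholds are assumed values; the thesis's incremental-update mechanism is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
regions = rng.normal(size=(200, 64 * 64))  # stand-in for vectorised image regions

# PCA: centre the data and keep the leading principal components.
mean = regions.mean(axis=0)
centred = regions - mean
_, _, vt = np.linalg.svd(centred, full_matrices=False)
basis = vt[:8]                  # 8 leading eigenvectors (assumed dimensionality)
proj = centred @ basis.T        # region projections in the reduced eigen-space

# K-means: cluster the projections into K background modes.
K = 3
centres = proj[rng.choice(len(proj), size=K, replace=False)]
for _ in range(20):
    d2 = ((proj[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    centres = np.array([proj[labels == k].mean(axis=0) if (labels == k).any()
                        else centres[k] for k in range(K)])

# One multivariate Gaussian per mode (diagonal covariance for simplicity).
mode_means = centres
mode_vars = np.array([proj[labels == k].var(axis=0) + 1e-6 for k in range(K)])

def is_background(region, thresh=4.0):
    """Classify a region as background if it lies close to any mode
    (per-dimension normalised squared distance under the diagonal Gaussians)."""
    z = (region - mean) @ basis.T
    d2 = ((z - mode_means) ** 2 / mode_vars).sum(axis=1) / basis.shape[0]
    return bool((d2 < thresh ** 2).any())

print(is_background(regions[0]))
```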
194

The development of a new systematic method based on activity systems that analyses the activity of learning programming

Kheir Abadi, Maryam January 2012 (has links)
The activity of learning programming languages is a difficult and complex process, during which many problems and difficulties can occur. A straightforward and clear approach, which can help to break down the numerous interacting processes into a series of simpler components, would appear useful. Therefore, the main aim of this research is to design and develop an appropriate method that can meet these criteria. The new method offers a new systematic approach for collecting, modelling and analysing data to discover difficulties within the activity of learning programming. To achieve these aims, the research work commenced with an investigation into the existing variety of frameworks and methodologies that have been used in Information Technology (IT). The initial research showed that there are many suitable approaches that have been previously used in the IT field. However, most of these do not offer any clear pathway for collecting and analysing the data from the beginning to the end of the research process. To address these issues, Activity Theory (AT) was chosen as an initial framework for the study. AT was selected due to the nature of the topic being examined: several communities are involved in the process of learning programming, including students, lecturers, technicians and teaching assistants, and AT allows for a holistic consideration of the multiple perspectives involved. In addition, the solid ontology of AT assists with the breakdown of complicated environments into simpler units. However, AT does not specify any particular research methodology that should be used. As a result, an appropriate approach had to be identified and coupled with AT in order to create a new systematic method. The following research methodologies were considered: Action Research (AR), Grounded Theory (GT) and Phenomenography (Ph). It was concluded that GT offers the best approach to complement the use of AT in the context of examining the activity of learning programming languages. Consequently, an initial method was created by combining AT and GT, which was used to collect and analyse test cases to investigate whether this combination is effective. After using this initial procedure, changes and improvements were made to create a revised method, which was used to collect and analyse a larger set of data. The results of this research, using three types of case study (responses from individual students, focus groups including staff, and observation of activities in workshop sessions), demonstrated the benefits of the method developed. It was found that this systematic approach facilitated the process of collecting and analysing the data. In turn, this enabled the discovery of contradictions within the activity of learning programming and the proposal of shifts to resolve them. Although this method was tested on first-year students at Kingston University, it is potentially generic, allowing it to be considered for use in other similar domains.
195

FAD : a functional analysis and design methodology

Russell, Daniel J. January 2001 (has links)
No description available.
196

Development of an expert system for planning orthodontic treatment

Mackin, Neil January 1992 (has links)
No description available.
197

The derivation of a pragmatic requirements framework for web development

Jeary, Sherry January 2010 (has links)
Web-based development is a relatively immature area of Software Engineering, often producing complex applications for many different types of end user and stakeholder. Web Engineering, as a research area, was created to introduce processes that enable web-based development to be repeatable and to avoid potential failure in the fast-changing landscape of the now-ubiquitous Internet. A survey of existing perspectives from the literature highlights a number of points. Firstly, web development differs from Software Engineering in a number of subtle ways, and many web development methods are not used. Further, there has been little work done on what should be in a web development method. A full survey of 50 web development methods finds that they do not give enough detail to be used in their entirety; the techniques they use are difficult for a non-computer scientist to understand; and most do not cover the full lifecycle, particularly the areas of requirements, implementation and testing. This thesis introduces a requirements framework for novice web developers. It is created following an in-depth case study, carried out over two years, that investigates the use of web development methods by novice developers. The study finds that web development methods are not easy to understand, that there is a lack of explanation of how to use the techniques within each method, and that the language used is too complex. A high-level method is derived, with an iterative process and with the requirements phase in the form of a framework; it addresses the problems discussed and provides excellent support for a novice web developer in the requirements phase of the lifecycle. An evaluation of the method, using one group of novice developers who reflect on the method and another who use it for development, finds that the method is easy both to understand and to use.
198

Building well-performing classifier ensembles : model and decision level combination

Eastwood, Mark January 2010 (has links)
There is a continuing drive for better, more robust generalisation performance from classification systems, and prediction systems in general. Ensemble methods, or the combining of multiple classifiers, have become an accepted and successful tool for achieving this, though the reasons for their success are not always entirely understood. In this thesis, we review the multiple classifier literature and consider the properties an ensemble of classifiers (or collection of subsets) should have in order to be combined successfully. We find that the framework of Stochastic Discrimination (SD) provides a well-defined account of these properties, which are shown to be strongly encouraged, via differing algorithmic devices, in a number of the most popular and successful methods in the literature. This uncovers some interesting and basic links between these methods, and aids understanding of their success and operation in terms of a kernel induced on the training data, whose form is particularly well suited to classification. One property that is desirable both in the SD framework and in a regression context (via the ambiguity decomposition of the error) is de-correlation of the individuals. This motivates the introduction of the Negative Correlation Learning (NCL) method, in which neural networks are trained in parallel in a way designed to encourage de-correlation of the individual networks. The training is controlled by a parameter λ governing the extent to which correlations are penalised. Theoretical analysis of the training dynamics yields an exact expression for the interval in which λ can be chosen while ensuring stability of the training, and a value λ∗ for which the training has some interesting optimality properties; these values depend only on the size N of the ensemble. Decision-level combination methods often result in a model that is difficult to interpret, and NCL is no exception. However, in some applications there is a need for understandable decisions and interpretable models. In response to this, we depart from the standard decision-level combination paradigm to introduce a number of model-level combination methods. As decision trees are among the most interpretable model structures used in classification, we chose to combine structure from multiple individual trees to build a single combined model. We show that extremely compact, well-performing models can be built in this way. In particular, a generalisation of bottom-up pruning to a multiple-tree context produces good results in this regard. Finally, we develop a classification system for a real-world churn prediction problem, illustrating some of the concepts introduced in the thesis, along with a number of more practical considerations that are important when developing a prediction system for a specific problem.
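To illustrate the role of λ, the sketch below trains a small ensemble with a simplified NCL gradient on a toy regression problem. Linear models stand in for the neural networks, the error signal (f_i - y) - λ(f_i - f̄) is a common simplification that treats the ensemble mean as constant, and the learning rate and λ used here are assumed values; the exact stability interval and optimal λ∗ derived in the thesis are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

N = 5                                    # ensemble size
W = rng.normal(scale=0.1, size=(N, 3))   # one linear model per ensemble member
lam, lr = 0.5, 0.05                      # penalty strength λ and learning rate (assumed)

for _ in range(500):
    F = X @ W.T                          # (samples, members) predictions
    fbar = F.mean(axis=1, keepdims=True) # ensemble mean prediction
    for i in range(N):
        # Simplified NCL error signal: accuracy term minus de-correlation penalty.
        err = (F[:, [i]] - y[:, None]) - lam * (F[:, [i]] - fbar)
        W[i] -= lr * (err * X).mean(axis=0)

ensemble_pred = (X @ W.T).mean(axis=1)
print("ensemble MSE:", float(((ensemble_pred - y) ** 2).mean()))
```

Setting lam to 0 recovers independent training of the members; larger values push individual predictions away from the ensemble mean, trading individual accuracy for diversity until training destabilises.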
199

Critical computer animation : an examination of "practice as research" and its reflection and review processes

Lo-Garry, Yasumiko Cindy Tszyan January 2010 (has links)
My doctoral study investigated the “Practice as Research” model for critical 3D computer animation. I designed a structure for the model using mixed research methods and a critical process, and first applied this proposed methodology in a pilot study to examine selected methods and identify other techniques required for this research model. The refined “Practice as Research” model was then applied to different fields of animation (a game development project, a narrative animation, and an experimental animation) for detailed analysis and improvement of its flexibility. The study examined a variety of practices and procedures used by animators and studios and identified processes for the analysis and evaluation of computer animation. Within the research space created in both the commercial project and the experimental works, I demonstrated that there were distinct, effective procedures depending on the application and its target qualities. I also clarified some of the basic differences between traditional animation techniques and 3D skills, and accordingly explained and modified some of the well-established animation practices to best suit 3D animation development. The “Practice as Research” model brought critical research methods and attitudes into industrial settings to expand receptiveness to experiences and knowledge, shifting away from the common product-oriented view of creative work. The model naturally led practitioners to question their own perspectives and previous ways of working. It showed that the “Practice as Research” approach could increase creativity in a product while maintaining control of time management, and could encourage animators to welcome other perspectives. The research concluded that if the “Practice as Research” model is used properly, it can be an effective and efficient method of satisfying both commercial quality requirements and personal development. Perhaps the most interesting part of the research was the search for the animator’s mindset, personal qualities, preconceptions and preferences that can influence practices and quality. With this additional information, I refined the proposed “Practice as Research” model so that it allowed animators to modify their previous ways of working and thinking during the process, and encouraged continuous development aimed at a higher quality of work.
200

A population model of vasopressin secretion

Durie, Ruth Frances January 2008 (has links)
Computer modelling is a powerful tool for clarifying and testing theory. In neuroscience, this often means replicating firing patterns. Models need evaluation functions to quantify the significance of features in the firing patterns, but usually the effect of firing is insufficiently understood. The magnocellular vasopressin neurons of the hypothalamus do have an output that is both well understood and quantifiable: they secrete a hormone into the bloodstream in proportion to blood osmolarity and volume, regulating these properties within a narrow physiologically acceptable range. This response of vasopressin secretion to osmotic pressure must be maintained to defend blood pressure. The neurons display a distinctive phasic firing pattern, which a model was developed to mimic. A further, unique step was then taken of extending this model with a model of the effect of firing: a stimulus-secretion model. The firing-pattern model and stimulus-secretion model were then linked, and the combined model was noisily duplicated to produce a population. This population had a measurable performance, secretion, allowing evaluation of the model in a novel fashion. The population could replicate the secretory response to osmotic pressure observed in vivo. It is possible to test the effect of features by incorporating them into the model and observing the response; this was demonstrated by changing the mix of excitatory and inhibitory postsynaptic potentials (PSPs), showing that inhibition is necessary for an efficient response. Effective techniques may well be reused elsewhere in the brain, so exploring their significance in a simple system may allow understanding of more complex ones. This project has constructed a model from firing to effect, offering novel possibilities for quantification and therefore evaluation. The main outcomes of this work are: the construction of a simple model system in which features can be benchmarked; the finding that an integrate-and-fire model modified to include bistability can explain the firing of vasopressin neurons; the suggestion that secretion may also be controlled by a pool structure, similar to other secretory systems; and the demonstration that a population of these cells can produce a linear output. The work also confirmed that balanced excitatory and inhibitory input is necessary for the most efficient response, and showed that population performance is a trade-off between maximising efficiency, maintaining the secretory response over a wide dynamic range, and maximising the achievable secretion rate.
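As a loose illustration of the pipeline described above (firing model, stimulus-secretion mapping, noisy population), the sketch below drives a leaky integrate-and-fire neuron with mixed excitatory and inhibitory PSPs and sums a proportional secretion term over a noisy population. Every parameter here is an assumption chosen for the example; the bistable firing mechanism and pool-based secretion model of the thesis are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def lif_spike_count(epsp_rate, steps=2000, dt=1e-3):
    """Leaky integrate-and-fire neuron: count spikes over `steps` time steps.
    Excitatory PSPs arrive at epsp_rate (Hz); inhibitory PSPs at half that
    rate (an assumed balance), each nudging the membrane variable v."""
    v, tau, v_thresh, psp = 0.0, 0.02, 1.0, 0.2
    spikes = 0
    for _ in range(steps):
        v += -v * dt / tau                            # leak towards rest
        v += rng.poisson(epsp_rate * dt) * psp        # excitatory PSPs
        v -= rng.poisson(0.5 * epsp_rate * dt) * psp  # inhibitory PSPs
        if v >= v_thresh:
            spikes += 1
            v = 0.0                                   # reset after a spike
    return spikes

def population_secretion(osmotic_drive, n_cells=20):
    """Sum secretion over a noisily duplicated population; per-cell secretion
    is taken as simply proportional to spike count (a crude stand-in for the
    thesis's stimulus-secretion model)."""
    total = 0.0
    for _ in range(n_cells):
        rate = max(osmotic_drive * rng.normal(1.0, 0.1), 0.0)  # noisy duplication
        total += 0.01 * lif_spike_count(rate)
    return total

for drive in (200, 400, 600):   # stand-ins for increasing osmotic pressure
    print(drive, round(population_secretion(drive), 2))
```

Even this crude population shows the qualitative point of the abstract: a summed readout over noisy cells rises with input drive, giving a measurable output against which model features can be evaluated.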
