281. The implementation of Enterprise Resource Planning Systems in different national and organisational cultures. Krumbholz, Marina. January 2003.
ERP (Enterprise Resource Planning) packages provide generic off-the-shelf business and software solutions to customers. However, these packages are implemented in companies with different organisational and national cultures, and there is growing evidence that failure to adapt ERP packages to fit these cultures leads to projects which are expensive and overdue. This thesis investigates the impact of national and organisational cultures on the efficiency of ERP implementations. A theory of culture for ERP implementations is proposed. It draws on key theories and models of social and management science. The theory also includes a meta-schema of culture - a meta-model of the critical elements of national and organisational culture and ERP implementations. It provides the reader with a generic definition and model of culture. The theory was evaluated in two studies. The first study was conducted at the finance department of a higher educational establishment. The second study was conducted at three subsidiaries of a large multi-national pharmaceutical organisation, in the UK, Germany and Scandinavia. The results provided evidence for the impact of organisational and national culture on the efficiency of ERP implementations and validated the theory of culture. They demonstrated that culture-related problems arise because the changes associated with an ERP implementation violate the employees' expectations (norms). The thesis also presents a method called CAREs (Culturally Aware Realisation of ERP systems) that aims to help ERP implementation teams to identify, explain and predict potential culture-related problems. Three experts evaluated the CAREs method: they were presented with a series of SAP implementation scenarios and asked, through a number of questionnaires, to provide feedback on the method's utility, usability and effectiveness. The results demonstrated that the method is potentially useful to ERP implementation teams and provided suggestions on how to improve it. The thesis concludes with a review of the research hypotheses and a discussion of future work and future directions.
282. Intelligent monitoring of a complex, non-linear system using artificial neural networks. Weller, Peter Richard. January 1997.
This project uses advanced modelling techniques to produce a design for a computer-based advisory system for the operator of a critical, complex, non-linear system, typified by a nuclear reactor. When such a system develops a fault, the operator has to assess the problem promptly and commence remedial action. Additional accurate and rapid information to assist in this task would clearly be of benefit. The proposed advisory system consists of two main elements: the plant state is determined, and then the future condition is predicted. These two components are linked by a common data flow, and the diagnosed condition is also used as input to the predictive section. Artificial Neural Networks (ANNs) are used to perform both diagnosis and prediction. An ANN, a simplified model of the brain, can be trained to classify a set of known inputs; it can then classify unknown inputs. The predictive element is investigated first. The number of conditions that can be predicted by a single ANN is identified as a key factor. Two distinct solutions are considered. The first uses the important features of the fault to determine an empirical relationship for combining transients. The second uses ANNs to model a range of system transients. A simple model is developed and refined to represent an important section of a nuclear reactor. The results show good predicted values for an extensive range of fault scenarios, and the second approach is selected for implementation in the advisory system. The diagnostic element is explored using a set of key transients. A series of ANNs for diagnosing these conditions is developed using a range of strategies. The optimum combination was selected for implementation in the advisory system, and the key plant variables which contributed most to the ANN inputs were identified. An implementation of the advisory system is described: it should be a single suite of programs, with the predictive and diagnostic sections supported by a controller module for organising information. The project concludes that the construction of such a system is possible with the latest technologies.
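As a rough illustration of the diagnose-then-predict structure described in this abstract, the sketch below trains one small neural network to classify a plant fault from sensor readings and a second to forecast a future plant variable from the readings plus the diagnosed class. It assumes scikit-learn-style networks and entirely made-up plant data; it is not the thesis's reactor model or its ANN designs.

# Minimal sketch of a diagnose-then-predict advisory loop (hypothetical data).
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: 8 sensor readings per sample, 3 fault classes.
n_samples, n_sensors, n_faults = 600, 8, 3
readings = rng.normal(size=(n_samples, n_sensors))
fault = rng.integers(0, n_faults, size=n_samples)
# Make the readings weakly informative about the fault class.
readings[:, 0] += fault
# A future plant variable that depends on the fault class and current readings.
future_temp = 300 + 20 * fault + 5 * readings[:, 1] + rng.normal(scale=2, size=n_samples)

# Diagnostic ANN: sensor readings -> fault class.
diagnoser = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
diagnoser.fit(readings, fault)

# Predictive ANN: sensor readings + diagnosed fault -> future plant variable.
diagnosed = diagnoser.predict(readings).reshape(-1, 1)
predictor = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
predictor.fit(np.hstack([readings, diagnosed]), future_temp)

# Advisory step for a new snapshot of plant readings.
new_readings = rng.normal(size=(1, n_sensors))
state = diagnoser.predict(new_readings)
forecast = predictor.predict(np.hstack([new_readings, state.reshape(-1, 1)]))
print(f"Diagnosed fault class: {state[0]}, predicted future value: {forecast[0]:.1f}")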
283. Software reliability prediction. Wright, David R. January 2001.
This thesis presents some extensions to existing methods of software reliability estimation and prediction. Firstly, we examine a technique called 'recalibration' by means of which many existing software reliability prediction algorithms assess past predictive performance in order to improve the accuracy of current reliability predictions. This existing technique for forecasting future failure times of software is already quite general: whenever predictions are produced in the form of time-to-failure distributions, successively as more actual failure times are observed, recalibration can be applied irrespective of which probabilistic software reliability model and which statistical inference technique are being used. In the current work we further generalise the recalibration method to those situations where empirical failure data take the form of failure-counts rather than precise inter-failure times. We then briefly explore how the reasoning we have used, in this extension of recalibration to the prediction of failure-count sequences, might further extend to recalibration of other representations of predicted reliability. Secondly, the thesis contains a theoretical discussion of some modelling possibilities for improving software reliability predictions by the incorporation of disparate sources of data. There are well-established techniques for forecasting the reliability of a particular software product using as data only the past failure behaviour of that software under statistically representative operational testing. However, there may sometimes be reasons for seeking improved predictive accuracy by using data of other kinds too, rather than relying on this single source of empirical evidence. Notable among these is the economic impracticability, in many cases, of obtaining sufficient, representative software failure vs. time data (from execution of the particular product in question) to determine, by inference applied to software reliability growth models, whether or not a high reliability requirement has been achieved, prior to extensive operational use of the software in question. This problem arises in particular for safety-critical systems, whose required reliability is often extremely high, and for which an accurate reliability assessment is often required in advance of a decision on whether to release the software for actual use in the field. Another argument for attempting to identify other usable data sources for software reliability prediction is the value that would attach to rigorous empirical confirmation or refutation of any of the many existing theories and claims about what the factors of software reliability are, and how these factors may interact, in a given context. In those cases, such as some safety-critical systems, in which assessment of a high reliability level is required at an early stage, the necessary assessment is in practice often carried out rather informally, and often does claim to take account of many different types of evidence (experience of previous, similar systems; evidence of the efficacy of the development process; expert judgement; etc.) to supplement the limited available data on past failure vs. time behaviour which emanates from testing of the software within a realistic usage environment. Ideally, we would like this assessment to allow all such evidence to be combined into a final numerical measure of reliability in a scientifically more rigorous way.
To address these problems, we first examine some candidate general statistical regression models used in other fields such as medicine and insurance, and discuss how these might be applied to the prediction of software reliability. We have here termed these models explanatory variables regression models. The goal would be to investigate statistically how to explain differences in software failure behaviour in terms of differences in other measured characteristics of a number of different statistical 'individuals', or 'experimental units'. We discuss the interpretation, within the software reliability context, of this statistical concept of an 'individual', with our favoured interpretation being that a single statistical reliability regression model would be used to model simultaneously a family of parallel series of inter-failure times emanating from measurably different software products, or from measurably different installations of a single software product. In statistical regression terms, each one of these distinct failure vs. time histories would be the 'response variable' corresponding to one of these 'individuals'. The other measurable differences between these individuals would be captured in the model as explanatory variable values which would differ from one individual to another. Following this discussion, we then leave general regression models to examine a slightly different theoretical approach to essentially the same question of how to incorporate diverse data within our predictions, through an examination of models for 'unexplained' differences between individuals' failure behaviours. Here, rather than assuming the availability of putative 'explanatory variables' to distinguish our statistical individuals and 'explain' the way that their reliabilities differ, we instead use randomness alone to model their differences in reliability. We have termed the class of models produced by this approach similar products models, meaning models in which we regard the individuals' different likely failure vs. time behaviours as initially (i.e. a priori) indistinguishable to us. Here we either cannot, or choose not to attempt with a formal model to, explain the differences between individuals' reliabilities in terms of other metrics applied to those individuals, but we do still expect that the reliabilities of the 'similar products' (i.e. of the individuals) will be different from each other. We postulate the existence of a single probability distribution from which we may assume our individuals' true, unknown reliabilities to have all been drawn independently in a random fashion. We present some mathematical consequences, showing how, within such a modelling framework, prior belief about the distribution of reliabilities assumes great importance for model consequences. We also present some illustrative numerical results that seem to suggest that experience from previous products or environments, so represented within the model, can only modestly improve our confidence in the reliability of a new product, or of an existing product when transferred to a new environment, even where very high operational dependability has been achieved in those previous cases.
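The 'similar products' idea above can be illustrated with a deliberately crude numerical sketch: assume each product's unknown per-demand failure probability is drawn from one common, unknown population distribution, update belief about that population from earlier products seen failure-free, and see how much this changes confidence in a new product. All probabilities, grids and demand counts below are hypothetical and far simpler than the models analysed in the thesis.

# Toy grid-based illustration of the "similar products" hierarchy.
import itertools

# Hypothetical per-demand failure probabilities a product might have.
P_FAIL = [1e-3, 1e-4, 1e-5]

# Candidate "population" distributions over P_FAIL (weights sum to 1):
# each describes how likely a randomly chosen product is to have each p.
step = 0.1
population_grid = [
    (w1, w2, 1.0 - w1 - w2)
    for w1, w2 in itertools.product([i * step for i in range(11)], repeat=2)
    if w1 + w2 <= 1.0 + 1e-9
]
# Uniform hyperprior over the candidate population distributions.
hyper_prior = [1.0 / len(population_grid)] * len(population_grid)

def survive_prob(p_fail, n_demands):
    # Probability of seeing no failures in n_demands independent demands.
    return (1.0 - p_fail) ** n_demands

def posterior_over_populations(prior, observed_products):
    # Update belief about the population distribution from earlier products,
    # each observed failure-free for a given number of demands.
    post = []
    for weights, pr in zip(population_grid, prior):
        like = 1.0
        for n in observed_products:
            like *= sum(w * survive_prob(p, n) for w, p in zip(weights, P_FAIL))
        post.append(pr * like)
    total = sum(post)
    return [x / total for x in post]

def new_product_survival(prior, n_future):
    # Predictive probability that a new product survives n_future demands.
    return sum(
        pr * sum(w * survive_prob(p, n_future) for w, p in zip(weights, P_FAIL))
        for weights, pr in zip(population_grid, prior)
    )

n_future = 10_000
baseline = new_product_survival(hyper_prior, n_future)
# Three earlier, "similar" products, each seen failure-free for 100,000 demands.
updated_prior = posterior_over_populations(hyper_prior, [100_000] * 3)
informed = new_product_survival(updated_prior, n_future)

print(f"P(no failure in {n_future} demands), no prior products:         {baseline:.3f}")
print(f"P(no failure in {n_future} demands), after 3 failure-free products: {informed:.3f}")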
284. The use of XML schema and XSLT rules for product information personalization. Stampoultzis, Michael. January 2004.
This thesis describes research carried out in order to help solve the problem of personalization in e-commerce/CRM systems. Web-based personalization consists of activities, such as providing customised information, that tailor the user's Web experience (browsing a Web site or purchasing a product, for example) to that user's particular needs. The main research objective of the project is to investigate how XSLT technologies can be used for the development of matching engines that find XML-represented products matching the tastes, needs or requirements of customers as captured in customer profiles, also represented in XML. More specifically, our research investigates novel algorithms for transforming XML-based product specifications using rules derived from mining customer profiles, with the purpose of customizing the product information.
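A minimal sketch of the general approach the abstract describes, XSLT applied to XML product data with parameters that might come from a mined customer profile, is shown below using lxml in Python. The catalogue, stylesheet rule and profile values are invented for illustration and are not the thesis's schemas or rules.

# Toy XSLT-based matching of XML products against profile-derived parameters.
from lxml import etree

products = etree.XML("""
<catalogue>
  <product><name>Laptop A</name><category>computing</category><price>900</price></product>
  <product><name>Camera B</name><category>photography</category><price>450</price></product>
  <product><name>Laptop C</name><category>computing</category><price>1400</price></product>
</catalogue>""")

# Hypothetical rule: keep only products in the customer's preferred category and budget.
stylesheet = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:param name="category"/>
  <xsl:param name="maxprice"/>
  <xsl:template match="/catalogue">
    <personalised>
      <xsl:copy-of select="product[category = $category and number(price) &lt;= number($maxprice)]"/>
    </personalised>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(stylesheet)

# Values that might be mined from a customer profile (made up here).
result = transform(products,
                   category=etree.XSLT.strparam("computing"),
                   maxprice=etree.XSLT.strparam("1000"))
print(etree.tostring(result, pretty_print=True).decode())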
285. Assessing the evolution of social networks in e-learning. Laghos, Andrew. January 2007.
This research provides a new approach to analysing the evolutionary nature of social networks that are formed around computer-mediated communication (CMC) in e-Learning courses. Aspects that have been studied include online communities and student communication in e-Learning environments. The literature review identified weaknesses in current methods of analysing CMC activity. A unified analysis framework (FESNeL) was therefore developed, which enables us to explore students' interactions and to test a number of hypotheses. The creation of the framework is discussed in detail along with its major components (e.g. Social Network Analysis and Human Computer Interaction techniques). Furthermore, this framework was tested on a case study of an online language learning course. The novelty of this study lies in the investigation of the evolution of online social networks, filling a gap in current research, which focuses on specific time stamps (usually the end of the course) when analysing CMC. In addition, the framework uses both qualitative and quantitative methods, allowing for a complete assessment of such social networks. Results indicate that FESNeL is a useful methodological framework that can be used to assess student communication and interaction in web-based courses. In addition, through the use of this framework, several characteristic hypotheses were tested which provided useful insights about the nature of learning and communicating online.
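The longitudinal flavour of the analysis, building a who-talks-to-whom network for each period of the course rather than only at its end, might look something like the following sketch, using networkx and an invented message log; it is not the FESNeL framework itself.

# Toy per-week social network analysis of CMC messages (hypothetical data).
import networkx as nx

# Hypothetical CMC log: (week, sender, recipient).
messages = [
    (1, "anna", "ben"), (1, "ben", "anna"), (1, "carl", "anna"),
    (2, "anna", "ben"), (2, "dina", "carl"), (2, "ben", "dina"),
    (3, "dina", "anna"), (3, "carl", "ben"), (3, "dina", "ben"), (3, "anna", "carl"),
]

for week in sorted({w for w, _, _ in messages}):
    g = nx.Graph()
    g.add_edges_from((s, r) for w, s, r in messages if w == week)
    centrality = nx.degree_centrality(g)
    print(f"week {week}: density={nx.density(g):.2f}, "
          f"most central={max(centrality, key=centrality.get)}")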
286. Evolutionary computing techniques to aid the acquisition and analysis of nuclear magnetic resonance data. Gray, Helen Frances. January 2007.
Evolutionary computation, including genetic algorithms and genetic programming, has taken the ideas of evolution in biology and applied some of their characteristics to problem solving. The survival-of-the-fittest paradigm allows a population of candidate solutions to be modified by sexual and asexual reproduction and mutation so as to come closer to solving the problem in question, without the necessity of having prior knowledge of what a good solution looks like. The increasing importance of Nuclear Magnetic Resonance spectroscopy in medicine has created a demand for automated data analysis for tissue classification and feature selection, and artificial intelligence techniques such as evolutionary computing can be used for such data analysis. This thesis applies the techniques of evolutionary computation to aid the collection and classification of Nuclear Magnetic Resonance spectroscopy data. The first section (chapters one and two) introduces Nuclear Magnetic Resonance spectroscopy and evolutionary computation and also contains a review of relevant literature. The second section focuses on classification: in the third chapter, classification of brain tumours into two classes is undertaken, and the fourth chapter expands this to classify tumours and tissues into more than two classes. Genetic Programming provided good solutions with relatively simple biochemical interpretation and was able to classify data into more than two classes at one time. The third section of the thesis concentrates on using evolutionary computation techniques to optimise data acquisition parameters directly from the Nuclear Magnetic Resonance hardware. Chapter five shows that Genetic Algorithms in particular are successful at suppressing signals from solvent, while chapter six applies these techniques to find a way of enhancing the signals from metabolites important to the classification of brain tumours, as found in chapter three. The final chapter draws conclusions as to the efficacy of evolutionary computation techniques applied to Nuclear Magnetic Resonance spectroscopy.
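The survival-of-the-fittest paradigm summarised above can be sketched generically: a population of candidate parameter vectors is repeatedly selected, recombined and mutated against a fitness function. The fitness function and parameters below are stand-ins, not an NMR acquisition objective such as solvent suppression.

# Minimal generic genetic algorithm (illustrative fitness function only).
import random

random.seed(1)

N_GENES, POP_SIZE, GENERATIONS, MUTATION_RATE = 10, 40, 60, 0.05

def fitness(candidate):
    # Hypothetical objective: parameters closest to a hidden target are fittest.
    target = [0.3, -1.2, 0.8, 2.0, -0.5, 1.5, 0.0, -2.0, 0.9, 1.1]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def random_candidate():
    return [random.uniform(-3, 3) for _ in range(N_GENES)]

def crossover(a, b):
    point = random.randrange(1, N_GENES)          # single-point crossover
    return a[:point] + b[point:]

def mutate(candidate):
    return [g + random.gauss(0, 0.3) if random.random() < MUTATION_RATE else g
            for g in candidate]

population = [random_candidate() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]          # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("Best fitness found:", round(fitness(best), 4))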
287. Social interactions of computer games : an activity framework. Ang, Chee Siang. January 2007.
With the advent of computer games, the Human Computer Interaction (HCI) community has begun studying games, often with the intention of uncovering useful information to inform the design of work-based software. However, most HCI research on computer games focuses on the use of game technologies, often overlooking the fairly large amount of classic game literature. Despite the potential importance of computer game studies in HCI, there is a lack of frameworks that could guide such studies, especially with regard to sociability. I believe that sociability is one of the most important criteria game developers may want to apply to game design, as computer games are becoming more socially oriented due to the inception of the Internet. Therefore, the main aim of the thesis is to develop a play activity framework with an emphasis on social interactions. To achieve this, first, a comprehensive body of game literature was reviewed as a step towards providing a solid foundation for the construction of the framework. Through this extensive review of the literature, I chose Activity Theory as the foundation for the framework development. In order to demonstrate the applicability of Activity Theory in analysing computer-mediated social interactions, an exploratory study of online activities in a game community was conducted. Then, two studies were undertaken to formulate the framework by modelling play activities in the social game context. The first study was centred on the individual and collective play activities that take place within the game's virtual world. The second study focused on games as a whole participatory culture, in which playing games is not just confined to the game space but also includes other playful activities governed by norms and specific identities around the game. Through these studies, a play activity framework consisting of three inter-related play models was developed: the intrinsic, reflective and expansive play models. The framework provides a vocabulary to describe the components, the motivation and the process of game play. The framework was then operationalised into methodological guidelines with a set of heuristic questions grouped into different categories. The guidelines were applied to analyse two issues, namely community building and social learning, in a Massively Multi-player Online Game (MMOG). In conclusion, the framework has expanded conventional game studies by emphasising the socio-cultural context. It provides a different perspective on analysing computer games, particularly the social aspects of gaming. Game researchers could use the framework to investigate play activities within and beyond the game and how they are related. The framework offers a theoretical explanation of various social activities observed in computer games. Finally, the methodological guidelines derived from the framework are useful as they give directions for analysing play activities, particularly social interactions and game communities.
288. Trichomoniasis in Africa : rapid laboratory diagnosis. Goodall, Mark. January 2001.
No description available.
289. Industrial seating and spinal loading. Eklund, Jörgen. January 1986.
Little information is available in the literature concerning an ergonomic systems view of industrial seats. This study has been aimed at expanding knowledge of industrial seat design. For this purpose, a model for evaluating industrial seats has been proposed, listing demands and restrictions from the task and the workplace. It also includes responses and effects on the sitter, and methods of measurement for evaluating industrial work seats. The appropriateness of work seat design has been assessed in laboratory and field studies, using methods to measure body loads and their effects and responses. These have been body height shrinkage, biomechanical methods, subjective assessment, and posture assessment. The shrinkage method, including equipment and procedures, has been developed in this project. It assesses the effect of loads on the spine in vivo by using body height changes as a measure of disc creep. The results are well correlated with spinal loads, and the method is sensitive enough to differentiate between spinal loads differing by 100 N. The results are also related to the perception of discomfort. Biomechanical methods have been developed for calculating compressive, shear, and momental loads on the spine. Ratings of discomfort, body mapping, interviews, video recordings, and prototype equipment for the recording of head posture have also been used. The methods have been shown to be appropriate for seat evaluation. Work seats have been evaluated in different tasks, incorporating backrests of different height, width and shape, conventional seat pans, and sit-stand seats. It has been shown that advantageous chair features could be identified for each particular task. The tasks evaluated included forward force exertion (high backrests advantageous), vision to the side (low backrests advantageous), work with restricted knee-room (seats allowing an increased trunk-thigh angle advantageous), grinding (high, narrow backrests advantageous), punch press work (increased seat height advantageous), and fork lift truck driving (medium-height backrests advantageous). The work task has been shown to be a major influence on seat design, and must therefore always be thoroughly considered.
290. Structured evaluation of training in virtual environments. D'Cruz, Mirabelle. January 1999.
Virtual Environments (VEs) created through Virtual Reality (VR) technologies have been suggested as potentially beneficial for a number of applications. However, a review of VEs and VR has highlighted the main barriers to implementation as: current technological limitations; usability issues with various systems; a lack of real applications; and therefore little proven value of use. These barriers suggest that industry would benefit from some structured guidance for developing effective VEs. To examine this, ‘training’ was chosen as the area to explore, as it has been suggested as a potential early use of VEs and is of importance to many sectors. A review of existing case studies of VE training applications (VETs) examined the types of training applications and VR systems being considered, the state of development of these applications, and the results of any evaluation studies. In light of these case studies, it was possible to focus this work on the structured evaluation of training psycho-motor skills using VEs created by desktop VR. In order to perform structured evaluation, existing theories of training and evaluation were also reviewed. Using these theories, a framework for developing VETs was suggested. Applying this framework, two VETs were proposed, specified, developed and evaluated. The conclusions of this work highlighted the many areas in the development process of an effective VET that still need addressing. In particular, in the proposal stage, it is necessary to provide some guidance on the appropriateness of a VET for particular tasks. In the specification and building stages, standard formats and techniques are required in order to guide the VE developer(s) in producing an effective VET. Finally, in the evaluation stage, tools are still required that highlight the benefits of VETs, and many more evaluation studies are needed to contribute information back to the development process. Therefore, VEs are still in their early stages; this work unifies existing work in the area, specifically on training, and highlights the gaps that need to be addressed before widespread implementation.