131

Critical systems thinking and pluralism : a new constellation

Gregory, Wendy Jane January 1992 (has links)
This thesis explores theoretical issues concerned with paradigm incommensurability and the solutions offered by various critical systems writers. The problems of "imperialism" are outlined together with an analysis of the meta-theoretical views which purport to avoid imperialism. It is suggested that researchers attempting to understand alien or incommensurable paradigms or cultures often succumb to imperialism in its various guises. Three models of methods used by such researchers are described. The last of these, the model of critical appreciation, incorporates two crucial components advocated by Habermas and endorsed by Bernstein: critical self-reflection based upon an analogy of Freud's model of dream analysis, and an explicit critique of ideology. Methodological guidelines are offered which draw on an analogy of dream analysis and on historical reconstruction as ideology-critique. It is suggested that any social inquiry must contain elements of "reflexive" (philosophical) and "scientific" (practical) inquiry together with ideology-critique and critical self-reflection in order to bring about the emancipation of individuals and groups. A model of self-society dynamics reveals the need for reflexive inquiry, discourse and action (as exemplified in the critical appreciation process) in any efforts to transform 'self' or 'society'. Consideration then turns to the relationship between critical thinking and pluralism. The enriched version of critical appreciation is shown to require an a priori commitment to a new, discordant pluralism, which it also suggests in its modus operandi. In particular, the 'either/or' problematique presented by many writers is transformed into a 'both/and' juxtaposition which lends its support to the form of pluralism involving both critical self-reflection and ideology-critique. The fully elaborated model of critical appreciation is finally shown to fulfil the demands of the commitments of critical systems thinking.
132

Harmonic domain modelling of wind-based micro-grids

Mumtaz, Majid January 2012 (has links)
Power quality problems have been identified at wind generation sites and at their connection to the distribution network. The main aim of this research is to put forward and develop models for the conventional components in a power system, with provision for the representation of wind farms, and to develop the necessary tools and computational methods that can be embedded in programmes so that economic and security assessments can be carried out on present and future wind-based networks, which are likely to be highly decentralised. The goal has been accomplished using MATLAB programming and the 'power library' tools.
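The abstract does not give the model equations, so the sketch below is only a hedged illustration of the kind of power-quality index a harmonic-domain study works with: total harmonic distortion computed from a vector of harmonic magnitudes. The thesis itself used MATLAB; Python is used here, and the example spectrum is invented.

```python
import numpy as np

def total_harmonic_distortion(magnitudes):
    """THD of a waveform given magnitudes of harmonic orders 1..N.

    magnitudes[0] is the fundamental; later entries are higher harmonics."""
    fundamental = magnitudes[0]
    harmonics = np.asarray(magnitudes[1:])
    return np.sqrt(np.sum(harmonics ** 2)) / fundamental

# Invented spectrum: 5th and 7th harmonics at 20% and 14% of the fundamental,
# the sort of signature a power-electronic converter injects into a grid.
print(total_harmonic_distortion([1.0, 0.0, 0.0, 0.0, 0.20, 0.0, 0.14]))
```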
133

A study of the wake of an isolated tidal turbine with application to its effects on local sediment transport

Vybulkova, Lada January 2013 (has links)
Tidal energy conversion devices (TECDs) are in development throughout the world to help reduce the need for fossil fuels. These devices will generally be mounted on the seabed and remain there over a period of years. Most of the previous research on TECDs has focused on their power extraction capability and efficient design. The handful of studies which have focused on the effects of the devices on the marine environment have not considered small-scale three-dimensional phenomena occurring in the flow near the rotor. These phenomena are likely to disturb the marine environment by altering the dynamics of sediment. The accurate prediction of the rapidly changing flow downstream of a TECD and its influence on the seabed poses a challenge. The nature of the interactions between such a flow and sediment has not been experimentally established. Predictions of these interactions, as is necessary for an assessment of the effects of the devices on the seabed, need to account for the depth-dependence of the flow velocity and its changes during the tidal cycle. The difference between the typical time-scales of the development of the rotor wake and the tidal cycle represents a difficulty for the computational modelling of the interactions between the device and the tidal flow. This dissertation presents an inviscid analysis of the flow downstream of horizontal-axis, vertical-axis and cross-flow TECDs by means of computer modelling. The Vorticity Transport Model, modified to simulate the flow downstream of a TECD mounted on the seabed, predicts the shear stress inflicted by the flow on the seabed. The shear stresses on the seabed, generated by small-scale vortical structures in the wake downstream of the devices, cause sediment to uplift. This process, along with the subsequent motion of the sediment, is simulated by a sediment model implemented into the Vorticity Transport Model. The critical bed shear stress is a known threshold for the initiation of sediment motion; therefore the relative difference between the stress on the seabed and the critical bed shear stress, called the excess bed shear stress, is chosen here as an indicator of the impact of the TECDs on the seabed. The evolution of the instantaneous stresses on the seabed is predicted to vary with the configuration of the TECD. The results suggest that the average excess bed shear stress inflicted on the seabed by the horizontal-axis device increases with the inflow velocity during the flood part of the representative tidal cycle and that the increase can be expressed by a simple algebraic expression. It is also predicted that the impact of this device on the seabed does not monotonically decrease with increasing separation between the rotor and the seabed. In addition, the relationship between the excess bed shear stress and the position of the rotor is established. Furthermore, the simulations indicate that the wake downstream of the horizontal-axis device is lifted by the flow away from the seabed, which results in a confinement of its impact to the vicinity of the rotor. In contrast with the horizontal-axis configuration, it is concluded that the vertical-axis and cross-flow configurations of the rotor would promote the erosion of the seabed further away from the device, at a location where the wake approaches the seabed again, and that this location depends on the inflow velocity. The predicted effects of these devices on the marine environment need to be considered in advance of their installation on the seabed.
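The excess bed shear stress defined in the abstract lends itself to a one-line formula. The sketch below assumes the 'relative difference' is normalised by the critical stress; that normalisation, and all numbers here, are assumptions rather than values from the thesis.

```python
import numpy as np

def excess_bed_shear_stress(tau_bed, tau_critical):
    """Relative difference between the wake-induced bed shear stress and
    the critical bed shear stress; positive values indicate that the flow
    can mobilise sediment at that point."""
    return (tau_bed - tau_critical) / tau_critical

# Hypothetical wake-induced stresses (N/m^2) sampled along the seabed,
# against an assumed critical stress for a given grain size.
tau_bed = np.array([0.12, 0.35, 0.60, 0.28])
print(excess_bed_shear_stress(tau_bed, tau_critical=0.30))
```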
134

The construction and evaluation of statistical models of melodic structure in music perception and composition

Pearce, Marcus Thomas January 2005 (has links)
The prevalent approach to developing cognitive models of music perception and composition is to construct systems of symbolic rules and constraints on the basis of extensive music-theoretic and music-analytic knowledge. The thesis proposed in this dissertation is that statistical models which acquire knowledge through the induction of regularities in corpora of existing music can, if examined with appropriate methodologies, provide significant insights into the cognitive processing involved in music perception and composition. This claim is examined in three stages. First, a number of statistical modelling techniques drawn from the fields of data compression, statistical language modelling and machine learning are subjected to empirical evaluation in the context of sequential prediction of pitch structure in unseen melodies. This investigation results in a collection of modelling strategies which together yield significant performance improvements over existing methods. In the second stage, these statistical systems are used to examine observed patterns of expectation collected in previous psychological research on melody perception. In contrast to previous accounts of this data, the results demonstrate that these patterns of expectation can be accounted for in terms of the induction of statistical regularities acquired through exposure to music. In the final stage of the present research, the statistical systems developed in the first stage are used to examine the intrinsic computational demands of the task of composing a stylistically successful melody. The results suggest that the systems lack the degree of expressive power needed to consistently meet the demands of the task. In contrast to previous research, however, the methodological framework developed for the evaluation of computational models of composition enables a detailed empirical examination and comparison of such models which facilitates the identification and resolution of their weaknesses.
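As a greatly simplified sketch of the statistical approach described - the thesis's models, drawn from data compression and language modelling, are far richer - the Python below trains a first-order Markov (bigram) model over pitches on a toy corpus and predicts a smoothed distribution over the next pitch. All data here are invented.

```python
from collections import Counter, defaultdict

class BigramPitchModel:
    """Minimal first-order Markov model over melodic pitch sequences."""

    def __init__(self, alphabet):
        self.alphabet = list(alphabet)
        self.counts = defaultdict(Counter)

    def train(self, melodies):
        for melody in melodies:
            for prev, nxt in zip(melody, melody[1:]):
                self.counts[prev][nxt] += 1

    def predict(self, prev):
        # Add-one smoothing so unseen continuations keep non-zero mass.
        total = sum(self.counts[prev].values()) + len(self.alphabet)
        return {p: (self.counts[prev][p] + 1) / total for p in self.alphabet}

model = BigramPitchModel(alphabet=range(60, 73))  # MIDI pitches C4..C5
model.train([[60, 62, 64, 65, 67], [67, 65, 64, 62, 60]])
print(model.predict(64))  # distribution over the pitch following E4
```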
135

The implementation of Enterprise Resource Planning Systems in different national and organisational cultures

Krumbholz, Marina January 2003 (has links)
ERP (Enterprise Resource Planning) packages provide generic off-the-shelf business and software solutions to customers. However, these packages are implemented in companies with different organisational and national cultures, and there is growing evidence that failure to adapt ERP packages to fit these cultures leads to projects which are expensive and overdue. This thesis investigates the impact of national and organisational cultures on the efficiency of ERP implementations. A theory of culture for ERP implementations is proposed. It draws on key theories and models of social and management science. The theory also includes a meta-schema of culture - a meta-model of the critical elements of national and organisational culture and ERP implementations. It provides the reader with a generic definition and model of culture. The theory was evaluated by two studies. The first study was conducted at the finance department of a higher educational establishment. The second study was conducted at three subsidiaries of a large multi-national pharmaceutical organisation in the UK, Germany and Scandinavia. Results provided evidence for the impact of organisational and national culture on the efficiency of ERP implementations. Furthermore, the results validated the theory of culture. They demonstrated that culture-related problems arise because the changes associated with an ERP implementation violate the employees' expectations (norms). The thesis also presents a method called CAREs (Culturally Aware Realisation of ERP systems) that aims to help ERP implementation teams to identify, explain and predict potential culture-related problems. Three experts evaluated the CAREs method. They were presented with a series of SAP implementation scenarios and were asked, through a number of questionnaires, to provide feedback on its utility, usability and effectiveness. The results demonstrated that the method is potentially useful to ERP implementation teams. Moreover, the results provided suggestions on how to improve the CAREs method. The thesis concludes with a review of the research hypotheses and a discussion of future work and future directions.
136

Intelligent monitoring of a complex, non-linear system using artificial neural networks

Weller, Peter Richard January 1997 (has links)
This project uses advanced modelling techniques to produce a design for a computer-based advisory system for the operator of a critical, complex, non-linear system, typified by a nuclear reactor. When such a system is in a fault condition the operator has to promptly assess the problem and commence remedial action. Additional accurate and rapid information to assist in this task would clearly be of benefit. The proposed advisory system consists of two main elements: the plant state is determined, and then the future condition is predicted. These two components are linked by a common data flow; the diagnosed condition is also used as input for the predictive section. Artificial Neural Networks (ANNs) are used to perform both diagnosis and prediction. An ANN, a simplified model of the brain, can be trained to classify a set of known inputs; it can then classify unknown inputs. The predictive element is investigated first. The number of conditions that can be predicted by a single ANN is identified as a key factor. Two distinct solutions are considered. The first uses the important features of the fault to determine an empirical relationship for combining transients. The second uses ANNs to model a range of system transients. A simple model is developed and refined to represent an important section of a nuclear reactor. The results show good predicted values for an extensive range of fault scenarios. The second approach is selected for implementation in the advisory system. The diagnostic element is explored using a set of key transients. A series of ANNs for diagnosing these conditions is developed using a range of strategies. The optimum combination was selected for implementation in the advisory system. The key plant variables which contributed most to the ANN inputs were identified. An implementation of the advisory system is described: the system should be a single suite of programs with the predictive and diagnostic sections supported by a controller module for organising information. The project concludes that the construction of such a system is possible with the latest technologies.
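The two-stage data flow the abstract describes - a diagnostic network whose output is fed, together with the plant variables, into a predictive network - can be sketched as below. The layer sizes, random weights and inputs are placeholders (the networks are untrained); this shows only the wiring, not the thesis's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, layers):
    """Forward pass through a small fully connected network (tanh units)."""
    for w, b in layers:
        x = np.tanh(x @ w + b)
    return x

def make_mlp(sizes):
    return [(rng.normal(0.0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(sizes, sizes[1:])]

diagnose = make_mlp([8, 12, 4])     # sensor readings -> fault-class scores
predict = make_mlp([8 + 4, 16, 8])  # sensors + diagnosis -> next plant state

sensors = rng.normal(size=8)        # current plant variables (placeholder)
fault_scores = mlp_forward(sensors, diagnose)
next_state = mlp_forward(np.concatenate([sensors, fault_scores]), predict)
print(fault_scores.round(2), next_state.shape)
```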
137

Software reliability prediction

Wright, David R. January 2001 (has links)
This thesis presents some extensions to existing methods of software reliability estimation and prediction. Firstly, we examine a technique called 'recalibration' by means of which many existing software reliability prediction algorithms assess past predictive performance in order to improve the accuracy of current reliability predictions. This existing technique for forecasting future failure times of software is already quite general. Indeed, whenever your predictions are produced in the form of time-to-failure distributions, successively as more actual failure times are observed, you can apply recalibration irrespective both of which probabilistic software reliability model and of which statistical inference technique you are using. In the current work we further generalise the recalibration method to those situations where empirical failure data take the form of failure-counts rather than precise inter-failure times. We then briefly explore how the reasoning we have used, in this extension of recalibration to the prediction of failure-count sequences, might further extend to recalibration of other representations of predicted reliability. Secondly, the thesis contains a theoretical discussion of some modelling possibilities for improving software reliability predictions by the incorporation of disparate sources of data. There are well established techniques for forecasting the reliability of a particular software product using as data only the past failure behaviour of that software under statistically representative operational testing. However, there may sometimes be reasons for seeking improved predictive accuracy by using data of other kinds too, rather than relying on this single source of empirical evidence. Notable among these is the economic impracticability, in many cases, of obtaining sufficient, representative software failure vs. time data (from execution of the particular product in question) to determine, by inference applied to software reliability growth models, whether or not a high reliability requirement has been achieved in a particular case, prior to extensive operational use of the software in question. For example, this problem arises in particular for safety-critical systems, whose required reliability is often extremely high. An accurate reliability assessment is often required in advance of a decision whether to release the software for actual use in the field. Another argument for attempting to determine other usable data sources for software reliability prediction is the value that would attach to rigorous empirical confirmation or refutation of any of the many existing theories and claims about what the factors of software reliability are, and how these factors may interact, in some given context. In those cases, such as some safety-critical systems, in which assessment of a high reliability level is required at an early stage, the necessary assessment is in practice often currently carried out rather informally, and often does claim to take account of many different types of evidence - experience of previous, similar systems; evidence of the efficacy of the development process; expert judgement, etc. - to supplement the limited available data on past failure vs. time behaviour which emanates from testing of the software within a realistic usage environment. Ideally, we would like this assessment to allow all such evidence to be combined into a final numerical measure of reliability in a scientifically more rigorous way.
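The recalibration technique summarised at the start of this abstract is concrete enough to sketch under mild assumptions: score each past predictive distribution at the value actually observed, use the empirical distribution of those scores as an estimate of systematic bias, and compose it with the new prediction. The Python below is a minimal illustration of that idea, not the thesis's implementation; the toy data are invented.

```python
import math
from functools import partial

import numpy as np

def recalibrate(new_cdf, past_cdfs, past_observations):
    """Return a recalibrated version of a predictive CDF.

    If past predictions were well calibrated, u_i = F_i(t_i) would look
    uniform on [0, 1]; the empirical distribution G of the u_i estimates
    the systematic bias, and the recalibrated prediction is G(F(t))."""
    u = np.sort([F(t) for F, t in zip(past_cdfs, past_observations)])

    def G(p):  # empirical (step-function) CDF of the u_i
        return np.searchsorted(u, p, side="right") / len(u)

    return lambda t: G(new_cdf(t))

# Toy data: exponential predictions that are systematically optimistic.
def exp_cdf(rate, t):
    return 1.0 - math.exp(-rate * t)

past = [partial(exp_cdf, 0.5)] * 20                  # predicted rate 0.5
obs = np.random.default_rng(1).exponential(1.0, 20)  # true rate is 1.0
F_star = recalibrate(partial(exp_cdf, 0.5), past, obs)
print(F_star(1.0))  # recalibrated probability of failure by t = 1.0
```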
To address these problems, we first examine some candidate general statistical regression models used in other fields such as medicine and insurance and discuss how these might be applied to the prediction of software reliability. We have here termed these models explanatory variables regression models. The goal here would be to investigate statistically how to explain differences in software failure behaviour in terms of differences in other measured characteristics of a number of different statistical 'individuals', or 'experimental units'. We discuss the interpretation, within the software reliability context, of this statistical concept of an 'individual', with our favoured interpretation being such that a single statistical reliability regression model would be used to model simultaneously a family of parallel series of inter-failure times emanating from measurably different software products or from measurably different installations of a single software product. In statistical regression terms, each one of these distinct failure vs. time histories would be the 'response variable' corresponding to one of these 'individuals'. The other measurable differences between these individuals would be captured in the model as explanatory variable values which would differ from one individual to another. Following this discussion, we then leave general regression models to examine a slightly different theoretical approach - to essentially the same question of how to incorporate diverse data within our predictions - through an examination of models for 'unexplained' differences between individuals' failure behaviours. Here, rather than assuming the availability of putative 'explanatory variables' to distinguish our statistical individuals and 'explain' the way that their reliabilities differ, we instead use randomness alone to model their differences in reliability. We have termed the class of models produced by this approach similar products models, meaning models in which we regard the individuals' different likely failure vs. time behaviours as initially (i.e. a priori) indistinguishable to us. Here, we either cannot, or choose not to attempt with a formal model to, explain the differences between individuals' reliabilities in terms of other metrics applied to our individuals, but we do still expect that the reliabilities of the 'similar products' (i.e. the individuals) will differ from each other. We postulate the existence of a single probability distribution from which we may assume our individuals' true, unknown reliabilities to have all been drawn independently in a random fashion. We present some mathematical consequences, showing how, within such a modelling framework, prior belief about the distribution of reliabilities assumes great importance for model consequences. We also present some illustrative numerical results that seem to suggest that experience from previous products or environments, so represented within the model - even where very high operational dependability has been achieved in such previous cases - can only modestly improve our confidence in the reliability of a new product, or of an existing product when transferred to a new environment.
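A minimal formalisation of the 'similar products' setting, assuming an exchangeable exponential failure-time model (the thesis's exact model may differ), in LaTeX:

```latex
% Each product i has an unknown failure rate \lambda_i drawn i.i.d. from a
% common distribution G, making the products a priori indistinguishable:
\[
  \lambda_1, \dots, \lambda_n, \lambda_{\mathrm{new}}
    \overset{\text{i.i.d.}}{\sim} G,
  \qquad
  T_i \mid \lambda_i \sim \operatorname{Exp}(\lambda_i).
\]
% Failure-free experience with products 1..n informs us about the new
% product only through G, so the prior on G dominates the prediction:
\[
  P\left(T_{\mathrm{new}} > t \mid \text{data}\right)
    = \int e^{-\lambda t}\, \pi(\lambda \mid \text{data})\, \mathrm{d}\lambda .
\]
```

Under this sketch, the dependence of the predictive probability on the prior over failure rates matches the abstract's observation that prior belief about the distribution of reliabilities assumes great importance.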
138

The use of XML schema and XSLT rules for product information personalization

Stampoultzis, Michael January 2004 (has links)
This thesis describes research carried out to help solve the problem of personalization in e-commerce/CRM systems. Web-based personalization consists of activities, such as providing customised information, that tailor the user's Web experience - browsing a Web site or purchasing a product, for example - to that user's particular needs. The main research objective of the project is to investigate how XSLT technologies can be used for the development of matching engines that find XML-represented products that match the tastes, needs or requirements of customers as captured in customer profiles, also represented in XML. More specifically, our research investigates novel algorithms for transforming XML-based product specifications using rules that derive from mining customer profiles, with the purpose of customizing the product information.
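A hand-written illustration of the general mechanism follows: an XSLT rule filtering an XML product catalogue against a value taken from a customer profile. In the thesis the rules derive from profile mining; here the rule, the data, and the use of Python's lxml library are all assumptions made for the sake of a runnable sketch.

```python
from lxml import etree

# A hand-written rule standing in for a mined one: select products whose
# price does not exceed a threshold taken from the customer's profile.
stylesheet = etree.XML(b"""\
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:param name="maxprice"/>
  <xsl:template match="/catalogue">
    <matches>
      <xsl:copy-of select="product[number(price) &lt;= $maxprice]"/>
    </matches>
  </xsl:template>
</xsl:stylesheet>
""")

catalogue = etree.XML(b"""\
<catalogue>
  <product><name>A</name><price>40</price></product>
  <product><name>B</name><price>90</price></product>
</catalogue>
""")

transform = etree.XSLT(stylesheet)
# In the thesis the parameter value would derive from the mined profile.
print(transform(catalogue, maxprice="50"))
```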
139

Assessing the evolution of social networks in e-learning

Laghos, Andrew January 2007 (has links)
This research provides a new approach to analysing the evolutionary nature of social networks that are formed around computer-mediated communication (CMC) in e-Learning courses. Aspects that have been studied include online communities and student communication in e-Learning environments. The literature review identified weaknesses in the current methods of analysing CMC activity. A unified analysis framework (FESNeL) was proposed and developed which enables us to explore students' interactions and to test a number of hypotheses. The creation of the framework is discussed in detail along with its major components (e.g. Social Network Analysis and Human-Computer Interaction techniques). Furthermore, this framework was tested on a case study of an online language learning course. The novelty of this study lies in the investigation of the evolution of online social networks, filling a gap in current research, which focuses on specific time stamps (usually the end of the course) when analysing CMC. In addition, the framework uses both qualitative and quantitative methods, allowing for a complete assessment of such social networks. Results indicate that FESNeL is a useful methodological framework that can be used to assess student communication and interaction in web-based courses. In addition, through the use of this framework, several characteristic hypotheses were tested which provided useful insights about the nature of learning and communicating online.
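As a toy illustration of the evolutionary emphasis - computing a social-network measure week by week rather than only at the course's end - the sketch below uses the networkx library. The data, the weekly windowing and the choice of density as the metric are hypothetical, not part of FESNeL.

```python
import networkx as nx

# Invented CMC interactions: (sender, recipient) pairs logged per week.
weekly_messages = {
    1: [("ana", "ben")],
    2: [("ana", "ben"), ("ben", "cat")],
    3: [("ana", "ben"), ("ben", "cat"), ("cat", "dan"), ("dan", "ana")],
}

G = nx.Graph()
for week in sorted(weekly_messages):
    G.add_edges_from(weekly_messages[week])  # cumulative network so far
    print(f"week {week}: {G.number_of_nodes()} students, "
          f"density {nx.density(G):.2f}")
```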
140

Evolutionary computing techniques to aid the acquisition and analysis of nuclear magnetic resonance data

Gray, Helen Frances January 2007 (has links)
Evolutionary computation techniques, including genetic algorithms and genetic programming, have taken the ideas of evolution in biology and applied some of their characteristics to problem solving. The survival-of-the-fittest paradigm allows a population of candidate solutions to be modified by sexual and asexual reproduction and mutation to come closer to solving the problem in question, without the necessity of having prior knowledge of what a good solution looks like. The increasing importance of Nuclear Magnetic Resonance Spectroscopy in medicine has created a demand for automated data analysis for tissue classification and feature selection. Artificial intelligence techniques such as evolutionary computing can be used for such data analysis. This thesis applies the techniques of evolutionary computation to aid the collection and classification of Nuclear Magnetic Resonance spectroscopy data. The first section (chapters one and two) introduces Nuclear Magnetic Resonance spectroscopy and evolutionary computation and also contains a review of relevant literature. The second section focuses on classification: in the third chapter, classification of brain tumours into two classes is undertaken; the fourth chapter expands this to classify tumours and tissues into more than two classes. Genetic Programming provided good solutions with relatively simple biochemical interpretation and was able to classify data into more than two classes at one time. The third section of the thesis concentrates on using evolutionary computation techniques to optimise data acquisition parameters directly from the Nuclear Magnetic Resonance hardware. Chapter five shows that Genetic Algorithms in particular are successful at suppressing signals from solvent, while chapter six applies these techniques to find a way of enhancing the signals from metabolites important to the classification of brain tumours, as found in chapter three. The final chapter draws conclusions as to the efficacy of evolutionary computation techniques applied to Nuclear Magnetic Resonance Spectroscopy.
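A generic genetic-algorithm sketch of the kind of parameter search chapters five and six describe: a population of candidate acquisition-parameter vectors evolves by truncation selection, one-point crossover and Gaussian mutation. The fitness function below is a stand-in; in the thesis, fitness came from measurements on the spectrometer itself (e.g. residual solvent signal), and all numbers here are invented.

```python
import random

def fitness(params):
    # Stand-in objective; the real fitness was measured on the hardware.
    return -sum((p - 0.7) ** 2 for p in params)

def evolve(pop_size=30, n_params=4, generations=50, mut_rate=0.1):
    pop = [[random.random() for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # fittest first
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)      # one-point crossover
            child = [g + random.gauss(0, 0.05) if random.random() < mut_rate
                     else g
                     for g in a[:cut] + b[cut:]]     # Gaussian mutation
            children.append([min(1.0, max(0.0, g)) for g in child])
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # best parameter vector found; each entry should near 0.7
```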
