
Designing communication systems that work

Parker, Aaron Mark January 2011 (has links)
Developments in computer-mediated communications are changing the way people communicate. Yet for every system that succeeds, many others fail. The failure of seemingly superior systems, such as the Picturephone, and success of simpler ones, such as Twitter, have challenged common-sense notions of what makes for a successful communications system. This thesis proposes that computer-mediated communication systems should not be compared relative to the quality of the communicative cues they transmit. Instead, they should be viewed relative to their ability to offer channels of communication that may be creatively repurposed to compensate for restrictions. Together with a new tool for enquiry, termed the integrative framework, it is proposed that needs and trade-offs occur across four interrelated levels. These four levels are the task, the group, the self, and the system. Studies 1 and 2 applied the integrative framework to a new form of high-quality videoconferencing called telepresence. Anecdotal evidence suggested that users were enthusiastic towards these systems, unlike with standard videoconferencing products. In Study 1, user interviews supported this and also provided evidence that users were adapting the system for activities for which it was not designed. An investigation into the effects of system latency on collaboration by telepresence followed in Study 2. Again, results were atypical of videoconferencing, suggesting that telepresence offers mechanisms through which the effects of latency can be mitigated. Studies 3, 4 and 5 investigated the use of paralanguage (operationalised as the unusual informal elements of written language) in text-based communications as an example where users adapt to the restrictions of a medium. In Study 3, it was discovered that users of a virtual world utilised paralanguage extensively, even in a formal context. Through grounded theory it was discovered that paralanguage had a self-presentation role.
In Study 5, the association of paralanguage with the levels of the integrative framework was investigated. Associations were discovered, highlighting the users' role in improving restrictive media. The thesis extends the theoretical understanding of computer-mediated communication by demonstrating that users engage in compensatory behaviour to adapt to system restrictions. It also introduces the integrative framework as a tool for the design and evaluation of effective communication systems.

Finding and measuring inconsistency in arbitrary knowledge bases

McAreavey, Kevin January 2014 (has links)
Inconsistency is prevalent in real-world knowledge base applications. In this thesis, we consider how inconsistency measures can support formal inconsistency handling techniques. We begin with a review of existing formula-level inconsistency measures proposed in the literature. We then carry out a case study on inconsistency in the QRadar security information and event management (SIEM) platform. From this work, we argue that formula-level inconsistency measures based on the notion of minimal inconsistent subsets (MISes) are an intuitive means of supporting inconsistency handling. However, few of these measures have been implemented or experimentally evaluated to support their viability for arbitrary knowledge bases, since computing all MISes is intractable in the worst case. Fortunately, recent work on a related problem, known as minimal unsatisfiable subformulae (MUSes), offers a viable solution in many cases. As such, we draw connections between MISes and MUSes and propose two new algorithms for computing MISes, termed MUS generalization and optimized MUS transformation. We implement these algorithms in a tool called MIMUS, along with a selection of existing measures for flat and stratified knowledge bases. We also propose and implement a novel measure for stratified knowledge bases which offers a more fine-grained inspection of the formulae involved in inconsistency. After this, we carry out an experimental evaluation of MIMUS using random arbitrary knowledge bases. Finally, we demonstrate how inconsistency measures can be exploited in other domains where we propose the use of inconsistency measures for evaluating belief merging operators. We conclude that computing MISes is viable for many large and complex random instances. We also conclude that these measures are relatively trivial to compute once MISes have been found. As such, these measures represent a viable and intuitive tool for inconsistency handling in real-world applications such as QRadar. 
Moreover, inconsistency measures represent an appropriate method for evaluating merging operators.
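The notion of a minimal inconsistent subset is concrete enough to sketch directly. The toy propositional knowledge base below is invented for illustration, and the brute-force search shown is exactly the intractable baseline the abstract refers to, not the MUS generalization or optimized MUS transformation algorithms the thesis develops to avoid it:

```python
from itertools import combinations, product

# Toy propositional knowledge base: each formula is a truth function of
# an assignment. The formulas and their names are invented for illustration.
kb = {
    "a":      lambda v: v["a"],
    "not_a":  lambda v: not v["a"],
    "a_or_b": lambda v: v["a"] or v["b"],
    "not_b":  lambda v: not v["b"],
}
atoms = ["a", "b"]

def consistent(formulas):
    # A set of formulas is consistent iff some assignment satisfies all.
    return any(all(f(dict(zip(atoms, bits))) for f in formulas)
               for bits in product([True, False], repeat=len(atoms)))

def minimal_inconsistent_subsets(kb):
    # Brute force: an inconsistent set is a MIS when no known MIS is
    # contained in it (subsets are visited smallest first).
    names, mises = list(kb), []
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            if any(set(m) <= set(combo) for m in mises):
                continue  # contains a smaller MIS: not minimal
            if not consistent([kb[n] for n in combo]):
                mises.append(combo)
    return mises

print(minimal_inconsistent_subsets(kb))
# → [('a', 'not_a'), ('not_a', 'a_or_b', 'not_b')]
```

One standard formula-level idea then scores each formula by the MISes it participates in; here `not_a` lies in both MISes, marking it as the most problematic formula.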

Advanced methods for nonlinear system modelling and identification

Li, Kang January 2015 (has links)
System modelling and identification has played a key role in modern scientific research for system analysis, control and automation in many areas. This dissertation details the technical contributions of the applicant on parametric nonlinear system modelling and identification in the past 17 years since he joined Queen's University Belfast in 1998. The thesis first introduces the early research of the applicant on modelling the pollution emissions from fossil-fuel-fired thermal power plants for the purpose of reducing their negative environmental effects through advanced control. This work led to the proposal of a novel grey-box modelling approach for modelling complex systems where many intermediate variables are difficult to measure on-line. Continued work has led to the development of an algorithmic framework for building models that can be cast in a single-hidden-layer neural network structure with linear output weights. The first contribution along this stream of research is the proposal of a regression framework which allows the development of new fast algorithms to select a compact set of basis functions. This regression framework has further been extended for neural modelling with tunable parameters in the hidden nodes. In order to effectively optimize the two sets of parameters in the model and also to build a compact parsimonious model with fewer basis functions, hybrid approaches have been developed, which allow simultaneous selection of basis functions and optimization of the two sets of model parameters. These proposed methods and algorithms have been successfully applied to pollutant emission modelling in thermal power plants, soft-sensor development for measuring polymer melt viscosity in the plastics industry, statistical process monitoring, as well as biological process modelling in systems biology, winning several prizes and awards.
The research has further led to the development of new energy and condition monitoring systems currently used in the plastics industry.
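The subset-selection idea is easiest to see in a generic form. The following is a plain forward stepwise regression sketch with invented data, not the fast algorithms of the thesis: at each step it adds the candidate basis function that most reduces the residual, then refits the linear output weights.

```python
import numpy as np

# Generic forward stepwise selection of basis functions. The data and
# the "true" columns (3 and 7) are invented; this sketches the idea of
# compact subset selection, not the thesis algorithms themselves.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))                  # candidate basis outputs
y = 2 * X[:, 3] - X[:, 7] + 0.1 * rng.standard_normal(n)

selected, residual = [], y.copy()
for _ in range(2):                               # build a 2-term model
    # Score each unused candidate by squared correlation with the residual.
    scores = [np.dot(X[:, j], residual) ** 2 / np.dot(X[:, j], X[:, j])
              if j not in selected else -1.0
              for j in range(p)]
    selected.append(int(np.argmax(scores)))
    # Refit the linear output weights on the selected columns.
    beta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
    residual = y - X[:, selected] @ beta
print(sorted(selected))
```

With this data the procedure recovers the two informative basis functions; the hybrid methods described above additionally tune nonlinear hidden-node parameters inside this loop.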

Dogmatism and bounded rationality : a systemic epistemology for system theory

Georgiou, Phokion 'Ion' Sotirios January 2004 (has links)
No description available.

Judgmental forecasting from graphs and from experience

Theochari, Z. January 2014 (has links)
Research in the field of forecasting suggests that judgmental forecasts are typically subject to a number of biases. These biases may be related to the statistical characteristics of the data series, or to the characteristics of the forecasting task. Here, a number of understudied forecasting paradigms have been investigated and these revealed interesting ways of improving forecasting performance. In a series of experiments, by controlling parameters such as the horizon and direction of the forecasts or the length, scale and presentation format of the series, I demonstrate that forecasting can be enhanced in several ways. In Chapter 3, I examine forecasting direction as well as the use of an end-anchor in the forecasting task (Experimental Studies 1-2). In Chapter 4, I examine the way the length of the series affects forecasting performance for various types of time series (Experimental Studies 3-4). Dimensional issues related to the forecasting task are further investigated in Chapter 5, where graph scale is manipulated in series with different types of noise (Experimental Studies 5-6). Task characteristics are further explored in dynamic settings in Chapter 6, in a number of experiments (Experimental Studies 7-12), where a new experimental paradigm for judgmental forecasting is introduced. Here, I test already-identified robust forecasting biases in this dynamic setting and compare their magnitude and direction with those found in static environments. I conclude that forecasting performance is affected by data series and task characteristics in the following ways: (i) end-anchoring and a backwards direction in forecasting tasks enhance accuracy; (ii) longer series lengths are preferable for a number of series types; (iii) dynamic settings may offer specific enhancements to the forecasting task.
The implications of these findings are discussed with respect to judgmental forecasting and the corresponding cognitive mechanisms, and directions for future research, towards the development of a unified framework for judgmental forecasting, are suggested.

Karhunen-Loeve expansions and their applications

Wang, Limin January 2008 (has links)
The Karhunen-Loeve Expansion (K-L expansion) is a bi-orthogonal stochastic process expansion. In the field of stochastic processes, the Karhunen-Loeve expansion decomposes a process into a series of orthogonal functions with random coefficients. The essential idea of the expansion is to solve the Fredholm integral equation, associated with the covariance kernel of the process, which defines a Reproducing Kernel Hilbert Space (RKHS). This either has an analytical solution or special numerical methods are needed. This thesis applies the Karhunen-Loeve expansion to several fields of statistics. The first two chapters review the theoretical background of the Karhunen-Loeve expansion and introduce the numerical methods, including the integral method and the expansion method, used when an analytical solution to the expansion is unavailable. Chapter 3 applies the theory of the Karhunen-Loeve expansion to the field of experimental design, using a criterion called "maximum entropy sampling". Under this setting, a type of duality is set up between maximum entropy sampling and the D-optimal design of classical optimal design. Chapter 4 uses the Karhunen-Loeve expansion to calculate the conditional mean and variance for a given set of observations, with application to prediction. Chapter 5 extends the theory of the Karhunen-Loeve expansion from the univariate setting to the multivariate setting: multivariate space, univariate time. Adaptations of the numerical methods of Chapter 2 are also provided for the multivariate setting, with a full matrix development. Chapter 6 applies the numerical method developed in Chapter 5 to the emerging area of multivariate functional data analysis with a detailed example on a trivariate autoregressive process.
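When the Fredholm equation has an analytic solution, a numerical method can be checked against it. Below is a minimal sketch of an integral (Nystrom-type) method for Brownian motion on [0, 1], whose kernel k(s, t) = min(s, t) has known eigenvalues λ_j = 1/((j − 1/2)²π²); the grid size is an arbitrary illustrative choice, not a value from the thesis:

```python
import numpy as np

# Integral (Nystrom-type) method for the K-L expansion of Brownian
# motion on [0, 1], covariance kernel k(s, t) = min(s, t).
m = 500
t = (np.arange(m) + 0.5) / m             # midpoint quadrature grid
h = 1.0 / m
K = np.minimum.outer(t, t)               # covariance matrix on the grid

# Eigenvalues of the discretised integral operator K * h approximate
# the eigenvalues of the Fredholm integral equation.
evals = np.linalg.eigvalsh(K * h)[::-1]  # largest first

# Analytic solution for this kernel: lambda_j = 1 / ((j - 1/2)^2 pi^2)
analytic = 1.0 / ((np.arange(1, 6) - 0.5) ** 2 * np.pi ** 2)
print(np.round(evals[:5], 5))
print(np.round(analytic, 5))
```

On this grid the leading numerical eigenvalues agree closely with the analytic ones; the same machinery applies unchanged to kernels with no closed-form solution, which is where the numerical methods of Chapter 2 earn their keep.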

Semantic modelling for discrete event simulation

Barakat, Mamdouh Taysir January 1992 (has links)
Discrete event simulation modelling has been established as an important tool for management planning. This process has been aided by the availability of off-the-shelf simulation systems for microcomputers. Traditionally these have had text-based interfaces and very limited graphics. As the availability of powerful colour microcomputers has increased, graphical front-ends have been added. As clients have become accustomed to consistent graphical interfaces (e.g. Apple Macintosh or Microsoft Windows), they have come to expect the same level of integration in their simulation support environments. Research from other fields has been utilised in improving simulation environments; these fields include relational databases, expert systems, formal languages and graphical environments. This thesis examines the use of artificial intelligence in the discrete event simulation field with the aim of identifying areas in which it might be possible to improve simulation environments. Existing simulation research in the artificial intelligence (AI) field is extended by investigating the graphical AI knowledge representation known as semantic networks. This thesis demonstrates semantic modelling, a discrete event simulation modelling approach based on semantic networks, which attempts to give a consistent graphical interface throughout the life cycle of a simulation study. The semantic modelling approach also incorporates expert system and natural language research. A prototype system of this approach is described.
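At the core of any such environment sits a discrete event executive: a simulation clock plus a time-ordered event list. The sketch below is a generic toy single-server queue with invented constant times, independent of the semantic modelling layer the thesis describes; it only illustrates the engine such a layer would sit on.

```python
import heapq

# A minimal discrete event executive: a clock plus a time-ordered event
# list, driving a toy single-server queue. Arrival spacing and service
# time are invented constants for illustration.
events = []                 # heap of (time, seq, kind); seq breaks ties
seq = 0

def schedule(t, kind):
    global seq
    heapq.heappush(events, (t, seq, kind))
    seq += 1

clock, queue_len, served = 0.0, 0, 0
for i in range(5):
    schedule(i * 1.0, "arrival")        # five arrivals, one per time unit

while events:
    clock, _, kind = heapq.heappop(events)
    if kind == "arrival":
        queue_len += 1
        if queue_len == 1:              # server was idle: start service
            schedule(clock + 1.5, "departure")
    else:                               # departure
        queue_len -= 1
        served += 1
        if queue_len:                   # next customer enters service
            schedule(clock + 1.5, "departure")

print(clock, served)  # → 7.5 5
```

Everything a graphical or semantic-network front-end adds is, in the end, a friendlier way of generating and inspecting this event-scheduling logic.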

Confidence interval methods in discrete event computer simulation : theoretical properties and practical recommendations

Kevork, Ilias January 1990 (has links)
Most steady-state simulation outputs are characterized by some degree of dependency between successive observations at different lags, measured by the autocorrelation function. In such cases, classical statistical techniques based on independent, identically distributed normal random variables are not recommended in the construction of confidence intervals for steady-state means. Such confidence intervals would cover the steady-state mean with probability different from the nominal confidence level. Over the last two decades, alternative confidence interval methods have been proposed for stationary simulation output processes. These methods offer different ways to estimate the variance of the sample mean, with the final objective of achieving coverage equal to the nominal confidence level. Each sample mean variance estimator depends on a number of different parameters and the sample size. In assessing the performance of the confidence interval methods, emphasis is necessarily placed on studying the actual properties of the methods in an empirical context rather than proving their mathematical properties. The testing process takes place in an environment where certain statistical criteria, which measure the actual properties, are estimated through Monte Carlo methods on output processes from different types of simulation models. Over the past years, however, different testing environments have been used: different methods have been tested on different output processes under different sample sizes and parameter values for the sample mean variance estimators. The diversity of the testing environments has made it difficult to select the most appropriate confidence interval method for certain types of output processes. Moreover, a catalogue of the properties of the confidence interval methods offers limited direct support to a simulation practitioner seeking to apply the methods to particular processes. Five confidence interval methods are considered in this thesis.
Two of them were proposed in the last decade. The other three appeared in the literature in 1983 and 1984 and have been the focus of recent research by statistical experts in simulation output analysis. First, for the case of small samples, theoretical properties are investigated for the bias of the corresponding sample mean variance estimators on AR(1) and AR(2) time series models and the delay in queue in the M/M/1 queueing system. Then an asymptotic comparison of these five methods is carried out. The special characteristic of the above three processes is that the lag-s autocorrelation coefficient is given by known difference equations. Based on the asymptotic results and the properties of the sample mean variance estimators in small samples, several recommendations are given for making the following decisions: I) the selection of the most appropriate confidence interval method for certain types of simulation outputs; II) the determination of the best parameter values for the sample mean variance estimators so that the corresponding confidence interval methods achieve acceptable performance; III) the orientation of future research in confidence interval estimation for steady-state autocorrelated simulation outputs.
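To make the setting concrete, here is a batch-means interval estimator applied to an AR(1) test process, one of the model types named above. Batch means is only one classical family of such methods, and the run length, batch count, autoregressive parameter and critical value below are illustrative choices, not values from the thesis:

```python
import numpy as np

# Batch-means confidence interval for the steady-state mean of an AR(1)
# process x_t = phi * x_{t-1} + e_t (true mean 0). All parameter values
# here are illustrative.
rng = np.random.default_rng(1)
phi, n = 0.7, 2 ** 14
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

k = 32                                  # number of batches
means = x.reshape(k, -1).mean(axis=1)   # one mean per batch
centre = means.mean()
# Half-width uses the Student t critical value t_{0.975,31} ~= 2.04;
# batching is meant to leave the batch means approximately independent.
half = 2.04 * means.std(ddof=1) / np.sqrt(k)
print(f"95% CI for the mean: {centre - half:.3f} .. {centre + half:.3f}")
```

An interval built from the raw observations as if they were i.i.d. would be far too narrow here, since the positive autocorrelation inflates the variance of the sample mean; that gap between nominal and actual coverage is precisely the problem the thesis studies.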

Information inequalities and quantum circuits

Cadney, Joshua January 2014 (has links)
Information inequalities are vital to the study of both classical and quantum information theory. All of the previously known information inequalities for the von Neumann entropy can be derived from the strong subadditivity of the entropy, and one further constrained inequality. We prove the existence of an infinite family of new constrained inequalities by generalizing the proof of a family of classical information inequalities. We show that our new inequalities are all independent. We also study information inequalities for the quantum 0-Rényi entropy. These are equivalent to inequalities for the ranks of marginals of multipartite quantum states. We find two new rank inequalities and provide some evidence for a third. We also find quantum states which violate a previously conjectured inequality. We then move on to study information inequalities in a physical theory which is more general than quantum mechanics: Generalized Non-Signalling Theory (GNST), also known as box-world. Here we find that the only information inequalities are non-negativity and subadditivity. What is surprising is that non-locality does not play a role: any bipartite entropy vector can be achieved by separable states of the theory. This is in stark contrast to the case of the von Neumann entropy in quantum theory, where only entangled states satisfy S(AB) < S(A). Finally, we study the implementation of quantum circuits via linear optics. We are able to completely characterize the set of two-qubit gates which can be implemented using only linear optical elements (beam splitters and phase shifters) and post-selection. The proof also gives rise to an algorithm for calculating the optimal success probability of those gates which are achievable.
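Strong subadditivity, S(AB) + S(BC) ≥ S(ABC) + S(B), is easy to probe numerically. The sketch below draws an arbitrary mixed state on three qubits and checks the inequality; the random state and dimensions are invented for illustration.

```python
import numpy as np

def von_neumann(rho):
    # S(rho) = -tr(rho log2 rho), computed from the eigenvalues.
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def partial_trace(rho, keep, dims):
    # Trace out every subsystem not listed in `keep`.
    n = len(dims)
    r = rho.reshape(dims + dims)
    for i in sorted((j for j in range(n) if j not in keep), reverse=True):
        r = np.trace(r, axis1=i, axis2=i + r.ndim // 2)
    d = int(np.prod([dims[i] for i in keep]))
    return r.reshape(d, d)

# An arbitrary full-rank mixed state on three qubits A, B, C.
rng = np.random.default_rng(0)
G = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
rho = G @ G.conj().T
rho /= np.trace(rho).real
dims = [2, 2, 2]

S_ABC = von_neumann(rho)
S_B = von_neumann(partial_trace(rho, [1], dims))
S_AB = von_neumann(partial_trace(rho, [0, 1], dims))
S_BC = von_neumann(partial_trace(rho, [1, 2], dims))
print(S_AB + S_BC - S_ABC - S_B >= -1e-9)  # strong subadditivity → True
```

Since strong subadditivity is a theorem of quantum theory, the check passes for every state; the interest of the thesis lies in the further inequalities that do, and do not, follow from it.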

Facilities planning : a systems analysis and simulation approach with particular reference to hospital design

Cinar, Unvar January 1968 (has links)
No description available.
