21. Handling emergent conflicts in adaptable rule-based sensor networks. Blum, Jesse Michael. January 2012.
This thesis presents a study into conflicts that emerge amongst sensor device rules when such devices are formed into networks. It describes conflicting patterns of communication and computation that can disturb the monitoring of subjects and lower the quality of service. Such conflicts can negatively affect the lifetimes of the devices and cause incorrect information to be reported. A novel approach to detecting and resolving conflicts is presented. The approach is considered within the context of home-based psychiatric Ambulatory Assessment (AA). Rules that can be used to control the behaviours of devices in a sensor network for AA are considered, and the research provides examples of rule conflicts that can arise in AA sensor networks. Sensor networks and AA are active areas of research, and many questions remain open regarding collaboration amongst collections of heterogeneous devices to collect data, process information in-network, and report personalised findings. This thesis presents an investigation into reliable rule-based service provisioning for a variety of stakeholders, including care providers, patients and technicians, and it contributes a collection of rules for controlling AA sensor networks. This research makes a number of contributions to the field of rule-based sensor networks, including the areas of knowledge representation, heterogeneous device support, system personalisation and, in particular, system reliability. This thesis provides evidence to support the conclusion that conflicts can be detected and resolved in adaptable rule-based sensor networks.
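To make the notion of a rule conflict concrete, the sketch below uses a minimal, assumed rule representation (device, condition, actuator, action) and a pairwise conflict check; the thesis's actual rule language and detection mechanism are richer than this.

```python
# Illustrative sketch only: a minimal rule representation and pairwise
# conflict check for sensor-device rules. The rule format, device names,
# and conflict criterion are assumptions, not the thesis's formalism.
from dataclasses import dataclass

@dataclass
class Rule:
    device: str       # device the rule runs on
    condition: str    # sensor predicate, e.g. "hr > 100"
    actuator: str     # resource the action touches, e.g. "radio"
    action: str       # requested setting, e.g. "on" / "off"

def conflicts(a: Rule, b: Rule) -> bool:
    """Two rules conflict if they drive the same actuator to contradictory
    states under the same condition (simplified to string equality here)."""
    return (a.actuator == b.actuator
            and a.condition == b.condition
            and a.action != b.action)

rules = [
    Rule("wrist_sensor", "hr > 100", "radio", "on"),    # report tachycardia
    Rule("wrist_sensor", "hr > 100", "radio", "off"),   # save battery
]
print(conflicts(rules[0], rules[1]))  # True: detected, so a resolver must pick one
```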
22. A framework for e-government success from the user's perspective. Almalki, Obaid. January 2014.
This thesis aims to contribute to a better understanding of e-government portal success by developing an e-government success framework from the user's perspective. The proposed framework is underpinned by relevant theories, such as DeLone and McLean's IS success model, the Technology Acceptance Model (TAM), self-efficacy theory and trust. The cultural aspect has also been taken into consideration by adopting the personal values theory introduced by Schwartz (1992). Three data collection methods were used. First, an exploratory study was carried out to explore the main aspects and factors for understanding e-government systems success. Second, a Delphi study was conducted to investigate which of the ten value types are particularly relevant to success or have a significant impact on it. Third, a survey-based study was carried out to validate the proposed theoretical framework empirically. Results of the exploratory study helped to identify the potential success factors of e-government systems. The results of the Delphi study suggest that four of the ten values, namely self-direction, stimulation, security and tradition, most likely affect e-government portal success. Structural equation modelling techniques were applied to test the research model using a large-scale survey. The findings of hypothesis testing suggested that e-government portal success (i.e. net benefit) was directly affected by actual use and user satisfaction, and indirectly affected by a number of factors concerning system quality, service quality, information quality, perceived risk and computer self-efficacy. By combining the IS success model and TAM, this study found that system quality, information quality and service quality affected perceived ease of use, but that service quality had no effect on perceived usefulness. Perceived risk seemed to have no effect on attitude towards using, and only a very small negative effect on perceived usefulness. Users' computer skills were found to have no effect on perceived ease of use and only a very small effect on perceived usefulness. These results indicate that risk and IT skills play a less significant role in the context of e-government. The research findings confirmed that adoption is not equivalent to success, but is a necessary precondition for it. In the personal values-attitude-behaviour model, the empirical evidence suggested that Conservation affects attitude towards use which, in turn, affects behavioural intention to re-use; Openness to change had no effect on attitude towards using. The findings provide important implications for e-government research and practice.
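The hypothesised structural paths lend themselves to lavaan-style model syntax, estimated here with the semopy package as one possible tool; the variable names and placeholder data are assumptions for illustration, not the thesis's measurement model.

```python
# Illustrative SEM sketch (semopy, lavaan-style syntax). Variable names and
# random placeholder data are assumptions standing in for the survey constructs.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(0)
cols = ["system_quality", "information_quality", "service_quality",
        "ease_of_use", "usefulness", "attitude", "use", "satisfaction",
        "net_benefit"]
data = pd.DataFrame(rng.normal(size=(300, len(cols))), columns=cols)  # placeholder

desc = """
ease_of_use ~ system_quality + information_quality + service_quality
usefulness ~ ease_of_use + system_quality + information_quality
attitude ~ usefulness + ease_of_use
use ~ attitude
satisfaction ~ use + usefulness
net_benefit ~ use + satisfaction
"""

model = Model(desc)
model.fit(data)
print(model.inspect())  # path estimates; real survey data would replace the placeholder
```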
23. Sequence-learning in a self-referential closed-loop behavioural system. Porr, Bernd. January 2003.
This thesis focuses on the problem of "autonomous agents". It is assumed that such agents want to be in a desired state, which the agent itself can assess when it observes the consequences of its own actions. The feedback from the motor output via the environment to the sensor input is therefore an essential component of such a system, and as a consequence an agent is defined in this thesis as a self-referential system which operates within a closed sensor-motor-sensor feedback loop. The generic situation is that the agent is always prone to unpredictable disturbances which arrive from the outside, i.e. from its environment. These disturbances cause a deviation from the desired state (for example, the organism is attacked unexpectedly, or the temperature in the environment changes). The simplest mechanism for managing such disturbances in an organism is to employ a reflex loop, which essentially establishes reactive behaviour. Reflex loops are directly related to closed-loop feedback controllers: they are robust, and they do not need a built-in model of the control situation. However, reflexes have one main disadvantage, namely that they always occur 'too late', i.e. only after a (for example, unpleasant) reflex-eliciting sensor event has occurred. This defines an objective problem for the organism. This thesis provides a solution to this problem, called Isotropic Sequence Order (ISO) learning. The problem is solved by correlating the primary reflex with a predictive sensor input: the system learns the temporal relation between the primary reflex and the earlier sensor input and creates a new predictive reflex. This new predictive reflex does not share the disadvantage of the primary reflex of always being too late. As a consequence, the agent is able to maintain its desired input-state at all times. In engineering terms, this means that ISO learning solves the inverse controller problem for the reflex, which is mathematically proven in this thesis. In summary, the organism starts as a reactive system, and learning turns it into a pro-active system. It is demonstrated by a real robot experiment that ISO learning can successfully learn to solve the classical obstacle avoidance task without external intervention (such as rewards). In this experiment the robot has to correlate a reflex (retraction after collision) with the signals of range finders (turning before the collision). After successful learning, the robot generates a turning reaction before it bumps into an obstacle. Additionally, it is shown that the learning goal of 'reflex avoidance' can also, paradoxically, be used to solve an attraction task.
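As rough intuition for the learning rule, the following is a minimal discrete-time sketch of ISO-style differential Hebbian learning, with assumed filter constants and signal timing; the thesis derives the rule for resonator-filtered inputs and proves its properties formally.

```python
# Minimal discrete-time sketch of ISO-style learning (differential Hebbian
# rule: the weight of a predictive input grows with u_j * dv/dt). Filter
# constants and signal timing are illustrative assumptions.
import numpy as np

T, mu = 200, 0.01
x0 = np.zeros(T); x0[100] = 1.0   # late reflex signal (e.g. collision)
x1 = np.zeros(T); x1[80] = 1.0    # earlier predictive signal (e.g. range finder)

def lowpass(x, a=0.9):
    """Simple leaky filter standing in for the thesis's resonator filters."""
    u = np.zeros_like(x)
    for t in range(1, len(x)):
        u[t] = a * u[t - 1] + (1 - a) * x[t]
    return u

u0, u1 = lowpass(x0), lowpass(x1)
w0, w1 = 1.0, 0.0                 # reflex weight fixed; predictive weight learned
v_prev = 0.0
for t in range(T):
    v = w0 * u0[t] + w1 * u1[t]       # motor output
    w1 += mu * u1[t] * (v - v_prev)   # grows only when u1 precedes the reflex
    v_prev = v

print(f"learned predictive weight: {w1:.6f}")  # > 0: the earlier input now acts first
```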
24. Performance enhancement for LTE and beyond systems. Li, Wei. January 2014.
Wireless communication systems have undergone rapid development in recent years. Building on GSM/EDGE and UMTS/HSPA, the 3rd Generation Partnership Project (3GPP) specified the Long Term Evolution (LTE) standard to cope with rapidly increasing demands on capacity, coverage and data rate. To achieve this goal, several key techniques have been adopted by LTE, such as Multiple-Input Multiple-Output (MIMO), Orthogonal Frequency-Division Multiplexing (OFDM) and the heterogeneous network (HetNet). However, these techniques have some inherent drawbacks. A direct-conversion architecture is adopted to provide a simple, low-cost transmitter solution, but the problem of I/Q imbalance arises from imperfections in the circuit components; the orthogonality of OFDM is vulnerable to carrier frequency offset (CFO) and sampling frequency offset (SFO); and the doubly selective channel can severely deteriorate receiver performance. In addition, the deployment of HetNets, which permit the co-existence of macro and pico cells, incurs inter-cell interference for cell-edge users. Together these factors result in significant degradation of system performance. This dissertation investigates key techniques for mitigating the above problems. First, I/Q imbalance in the wideband transmitter is studied and a self-IQ-demodulation based compensation scheme for frequency-dependent (FD) I/Q imbalance is proposed. It combats FD I/Q imbalance by using the internal diode of the transmitter and a specially designed test signal, without any external calibration instruments or internal low-IF feedback path. Instrument test results show that the proposed scheme can enhance signal quality by 10 dB in terms of image rejection ratio (IRR). In addition to I/Q imbalance, the system suffers from CFO, SFO and the frequency-time selective channel. To mitigate this, a hybrid optimum OFDM receiver with a decision feedback equalizer (DFE) is proposed to cope with the CFO, SFO and doubly selective channel. The algorithm first estimates the CFO and the channel frequency response (CFR) in a coarse estimation stage, with the help of hybrid classical timing and frequency synchronization algorithms. Afterwards, a pilot-aided polynomial interpolation channel estimation, combined with a low-complexity DFE scheme based on the minimum mean squared error (MMSE) criterion, is developed to alleviate the impact of the residual SFO, CFO and Doppler effect. A subspace-based signal-to-noise ratio (SNR) estimation algorithm is proposed to estimate the SNR in the doubly selective channel; this provides prior knowledge for the MMSE-DFE and for adaptive modulation and coding (AMC). Simulation results show that the proposed estimation algorithm significantly improves system performance. In order to speed up the algorithm verification process, an FPGA-based co-simulation is developed. Inter-cell interference caused by the co-existence of macro and pico cells has a large impact on system performance. Although the almost blank subframe (ABS) has been proposed to mitigate this problem, the residual control signal in the ABS still inevitably causes interference. Hence, a cell-specific reference signal (CRS) interference cancellation algorithm, utilizing the information in the ABS, is proposed. First, the timing and carrier frequency offset of the interference signal are compensated by exploiting the cross-correlation properties of the synchronization signal. Afterwards, the reference signal is generated locally and the channel response is estimated by making use of channel statistics. The interference signal is then reconstructed from the estimates of the channel, timing and carrier frequency offset, and it is mitigated by subtracting this reconstruction and by LLR puncturing. According to simulation results for different channel scenarios, the block error rate (BLER) performance of the signal is notably improved by this algorithm. The proposed techniques provide low-cost, low-complexity solutions for LTE and beyond systems, and the simulations and measurements show that good overall system performance can be achieved.
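For intuition about the IRR figure quoted above, here is a small sketch of a frequency-independent I/Q imbalance model under one common sign convention; the gain and phase errors are assumed values, and the thesis targets the harder frequency-dependent case.

```python
# Illustrative sketch of frequency-independent I/Q imbalance and the image
# rejection ratio (IRR) it causes. Gain/phase error values are assumptions,
# and sign conventions for K1/K2 vary across the literature.
import numpy as np

g, phi = 1.05, np.deg2rad(3.0)          # gain and phase mismatch (assumed)
# Imbalance model: y = K1*x + K2*conj(x); K2 = 0 for a perfect modulator.
K1 = (1 + g * np.exp(1j * phi)) / 2
K2 = (1 - g * np.exp(-1j * phi)) / 2

t = np.arange(4096) / 4096.0
x = np.exp(2j * np.pi * 200 * t)        # single complex tone at bin +200
y = K1 * x + K2 * np.conj(x)            # imbalance creates an image at bin -200

Y = np.fft.fft(y)
irr = 20 * np.log10(abs(Y[200]) / abs(Y[-200]))
print(f"IRR: {irr:.1f} dB")             # compensation aims to raise this figure
```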
25. The role of algorithm in general secondary education revisited. Lessner, Daniel. January 2013.
The traditional purpose of teaching algorithms in education is to prepare students for programming. In our effort to introduce computing science, which is practically missing, into Czech general secondary education, we have revisited this purpose. We propose an approach which is in better accordance with the goals of general secondary education in Czechia. The importance of programming is diminishing, while the recognition of algorithmic procedures and the precise (yet concise) communication of algorithms are gaining importance. This includes expressing algorithms in natural language, which is more useful for most students than programming, and we propose criteria to evaluate such descriptions. Finally, an idea about the limitations is required (inefficient algorithms, unsolvable problems, the Turing test). We describe these adjusted educational goals and an outline of the resulting course. Our experience with carrying out the proposed intentions is satisfactory, although we did not accomplish all of the defined goals.
26. Coding strategies for genetic algorithms and neural nets. Hancock, Peter J. B. January 1993.
The interaction between coding and learning rules in neural nets (NNs), and between coding and genetic operators in genetic algorithms (GAs), is discussed. The underlying principle advocated is that similar things in "the world" should have similar codes. Similarity metrics are suggested for the coding of images and numerical quantities in neural nets, and for the coding of neural network structures in genetic algorithms. A principal component analysis of natural images yields receptive fields resembling horizontal and vertical edge and bar detectors. The orientation sensitivity of the "bar detector" components is found to match a psychophysical model, suggesting that the brain may make some use of principal components in its visual processing. Experiments are reported on the effects of different input and output codings on the accuracy of neural nets handling numeric data; it is found that simple analogue and interpolation codes are most successful. Experiments on the coding of image data demonstrate the sensitivity of final performance to the internal structure of the net. The interaction between the coding of the target problem and the reproduction operators of mutation and recombination in GAs is discussed and illustrated, and the possibilities for using GAs to adapt aspects of NNs are considered. The permutation problem, which affects attempts to use GAs both to train net weights and to adapt net structures, is illustrated, and methods to reduce it are suggested. Empirical tests using a simulated net design problem to reduce evaluation times indicate that the permutation problem may not be as severe as has been thought, but suggest the utility of a sorting recombination operator that matches hidden units according to the number of connections they have in common. A number of experiments using GAs to design network structures are reported, both to specify a net to be trained from random weights and to prune a pre-trained net. Three different coding methods are tried, and various sorting recombination operators evaluated. The results indicate that appropriate sorting can be beneficial, but the effects are problem-dependent. It is shown that the GA tends to overfit the net to the particular set of test criteria, to the possible detriment of wider generalisation ability. A method of testing the ability of a GA to make progress in the presence of noise, by adding a penalty flag, is described.
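A minimal sketch of such a sorting recombination step is given below, under assumptions of our own (binary connection masks and greedy matching); the thesis evaluates several operator variants.

```python
# Hedged sketch of a sorting recombination operator for GA-evolved nets:
# before crossover, hidden units of parent B are reordered to best match
# parent A by shared incoming connections. The greedy matching here is an
# assumption; the thesis's exact operators may differ.
import numpy as np

def sort_hidden_units(conn_a: np.ndarray, conn_b: np.ndarray) -> np.ndarray:
    """conn_* are binary (inputs x hidden) connection masks.
    Returns a permutation of B's hidden units aligned to A's."""
    n = conn_a.shape[1]
    overlap = conn_a.T @ conn_b                  # shared connections per unit pair
    perm, free = np.empty(n, dtype=int), set(range(n))
    for i in np.argsort(-overlap.max(axis=1)):   # most constrained A-units first
        j = max(free, key=lambda k: overlap[i, k])
        perm[i] = j
        free.remove(j)
    return perm

rng = np.random.default_rng(0)
a, b = (rng.integers(0, 2, (5, 4)) for _ in range(2))
p = sort_hidden_units(a, b)
child = np.where(rng.random((5, 4)) < 0.5, a, b[:, p])  # uniform crossover after alignment
print(p)
print(child)
```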
27. A homecoming festival: the application of the dialogic concepts of addressivity and the awareness of participation to an aesthetics of computer-mediated textual art. Stewart, Gavin Andrew. January 2006.
The recent history of computer-mediated textual art has witnessed a controversy surrounding the aesthetics of these texts. The practice-based research described by this thesis responds to this controversy by posing the question: is there an aesthetic of computer-mediated textual art that can be used as the basis for a positive evaluation of contemporary practice? In exploring answers to this question, it poses three further questions that investigate the role played by materiality, participation and earlier claims for emancipation in the formation of an evaluation. This thesis develops its answer to these questions by turning first to the work of Bakhtin and the Bakhtin Circle to provide a generalised, architectonic model of meaning-making which serves as a conceptual framework for understanding computer-mediated textual art. This model describes meaning-making as a participative event between particularised individuals, which is defined, in part, by the addressivity of their shared utterance. This thesis then draws on the work of Ken Hirschkop to argue that the addressivity of print-mediated utterances contributed to the obscuring of the participation of the reader-participant in the event of meaning-making during the period of the national culture of print. It also argues that this obscuring of participation had an effect on the development of democratic consciousness during this period. This thesis extends the concepts of the utterance and addressivity to describe computer-mediated textual art. It describes the historical context and the variety of aesthetic interests underpinning contemporary practice. It then argues that a sub-set of these texts exhibits a mode of addressivity that is different from the norms of the national culture of print. It draws on these differences to develop the original contribution of this thesis by describing an axiology (a theory of value) of computer-mediated textual art predicated on the role played by their addressivity in raising awareness of the participation of the reader-participant in meaning-making. This thesis then illustrates the theoretical assessments derived from these questions through practice. It details the methodology employed in this research programme. It then describes the motivations for this research, the course of study and the preparatory practice, and provides a social evaluation of the technology deployed. It argues for a 'contingent' model of practice in which the design process is framed as a reflective experiment. It then provides an analysis of the design process of the computer-mediated textual art work 'Homecoming' to illustrate the arguments made in this thesis. This thesis concludes by placing the new axiology into the wider cultural context, arguing that it provides a valuable but non-exhaustive, non-exclusive evaluation of these works.
28. Memristor based neural networks: feasibility, theories and approaches. Yang, Xiao. January 2014.
Memristor-based neural networks refer to the utilisation of memristors, newly emerged nanoscale devices, in building neural networks. The memristor was first postulated by Leon Chua in 1971 as the fourth fundamental passive circuit element and was experimentally validated by HP Labs in 2008. The memristor, short for memory resistor, has a peculiar memory effect which distinguishes it from a resistor: applying a bias voltage across a memristor changes its resistance, known as the memristance, and the memristance is retained when the power supply is removed, which demonstrates the non-volatility of the device. Memristor-based neural networks are currently being researched in order to replace complementary metal-oxide-semiconductor (CMOS) devices in neuromorphic circuits with memristors and to investigate their potential applications. Current research primarily focuses on the utilisation of memristors as synaptic connections between neurons; however, it may also be possible to allow memristors to perform computation in a natural way that avoids additional CMOS devices. Examples of such methods are presented in this thesis, such as memristor-based cellular neural network (CNN) structures and the memristive spike-timing-dependent plasticity (STDP) model, together with an exploration of their potential applications. This thesis presents manifold studies on the topic of memristor-based neural networks, from theories and feasibility to approaches to implementation. The studies are divided into two parts: the utilisation of memristors in non-spiking neural networks and in spiking neural networks (SNNs). At the beginning of the thesis, the fundamentals of neural networks and memristors are explored, with an analysis of the physical properties and v-i behaviour of memristors. In the studies of memristor-based non-spiking neural networks, a staircase memristor model is presented, based on memristors which have multi-level resistive states and a delayed-switching effect; this model is adapted to CNNs and echo state networks (ESNs) as applications that benefit from memristive implementations. In the studies of memristor-based SNNs, a trace-based memristive STDP model is proposed and discussed to overcome the incompatibility of the previous model with all-to-all spike interaction. The work also presents applications of the trace-based memristive model in associative learning with retention loss and in supervised learning. The computational results of experiments with different applications show that memristor-based neural networks will be advantageous in building synchronous or asynchronous parallel neuromorphic systems. The work presents several new findings on memristor modelling, memristor-based neural network structures and memristor-based associative learning. These studies address research areas that are, to the best of our knowledge, unexplored in the context of memristor-based neural networks, and therefore form original contributions.
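For intuition about the v-i behaviour mentioned above, here is a sketch of the widely cited HP Labs linear ion-drift memristor model; the parameter values are assumptions, and the thesis's staircase model additionally captures multi-level states and delayed switching.

```python
# Hedged sketch of the HP Labs linear ion-drift memristor model. Parameter
# values are illustrative assumptions in the range commonly quoted for the
# HP device; this is not the thesis's staircase model.
import numpy as np

R_on, R_off = 100.0, 16e3     # on/off resistances (ohms)
D, mu_v = 10e-9, 1e-14        # device thickness (m), dopant mobility (m^2/(V*s))
dt = 1e-5
t = np.arange(0.0, 1.0, dt)
v = np.sin(2 * np.pi * 1.0 * t)        # 1 Hz sinusoidal drive
w = 0.1 * D                            # doped-region width: the internal state
i, M = np.zeros_like(t), np.zeros_like(t)
for k, vk in enumerate(v):
    M[k] = R_on * (w / D) + R_off * (1 - w / D)   # memristance from state
    i[k] = vk / M[k]
    w += mu_v * (R_on / D) * i[k] * dt            # linear drift of the doping front
    w = min(max(w, 0.0), D)                       # state bounded by the device

print(f"memristance range over one cycle: {M.min():.0f} to {M.max():.0f} ohms")
# Plotting i against v would show the pinched hysteresis loop that
# distinguishes a memristor from an ordinary resistor.
```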
29. Shape analysis in protein structure alignment. Gkolias, Theodoros. January 2018.
In this thesis we explore the problem of structural alignment of protein molecules using statistical shape analysis techniques. The structural alignment problem can be divided into three smaller ones: the representation of protein structures, the sampling of possible alignments between the molecules and the evaluation of a given alignment. Previous work in this field can be divided into two approaches: an ad hoc algorithmic approach from the bioinformatics literature, and an approach using statistical methods in either a likelihood or a Bayesian framework. The two approaches address the problem from different angles: the algorithmic approach is easy to implement but lacks an overall modelling framework, while the Bayesian approach addresses this issue but is sometimes not straightforward to implement. We develop a method which is easy to implement and is based on statistical assumptions. In order to assess the quality of a given alignment we use a size-and-shape likelihood density which is based on the structure information of the molecules. This likelihood density is also extended to include sequence information and gap penalty parameters, so that biologically meaningful solutions can be produced. Furthermore, we develop a search algorithm to explore possible alignments from a given starting point. The results suggest that our approach produces alignments that are better than or equal to those of the most recent structural alignment methods: in most cases we achieve a higher number of matched atoms combined with a high TMscore. Moreover, we extend our method using Bayesian techniques to perform alignments based on posterior modes. In our approach, we directly estimate the mode of the posterior distribution, which provides the final alignment between two molecules. We also choose a different approach to treating the mean parameter: in previous methods the mean was either integrated out of the likelihood density or considered fixed, whereas we assign a prior over it and obtain its posterior mode. Finally, we consider an extension of the likelihood model assuming a Normal density for both the matched and unmatched parts of a molecule and a diagonal covariance structure. We explore two variants: in the first we consider a fixed zero mean for the unmatched parts of the molecules, and in the second a common mean for both the matched and unmatched parts. Based on simulated and real results, both models perform well in obtaining a high number of matched atoms and a high TMscore.
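For readers unfamiliar with the geometry involved, the sketch below shows the least-squares core that structural alignment builds on: rigid-body superposition of already-matched atoms via the Kabsch algorithm. This is standard shape analysis run on synthetic coordinates, not the thesis's size-and-shape likelihood itself.

```python
# Hedged sketch: optimal rigid-body superposition (Kabsch algorithm) of two
# matched atom sets, the least-squares step underlying alignment scores such
# as RMSD. Coordinates below are synthetic toy data.
import numpy as np

def kabsch_rmsd(X: np.ndarray, Y: np.ndarray) -> float:
    """Optimal RMSD between (n x 3) coordinate arrays of matched atoms."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)       # remove translation
    U, S, Vt = np.linalg.svd(Xc.T @ Yc)
    d = np.sign(np.linalg.det(U @ Vt))          # avoid improper rotations
    R = U @ np.diag([1.0, 1.0, d]) @ Vt         # optimal rotation
    return float(np.sqrt(((Xc @ R - Yc) ** 2).sum() / len(X)))

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))                    # toy "C-alpha" coordinates
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
B = A @ Rz.T + 0.05 * rng.normal(size=A.shape)  # rotated, noisy copy of A
print(f"RMSD after superposition: {kabsch_rmsd(A, B):.3f}")  # ~ noise level
```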
30. Optimizing stochastic simulation of a neuron with parallelization. Liss, Anders. January 2017.
In order to speed up the solving of stochastic simulations of neuron channels, an attempt has been made to parallelize the solver. The implementation was unsuccessful. However, parallelization is not impossible, and it remains a field of research with great potential for improving the performance of stochastic simulations.
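As a hedged illustration of the kind of solver at issue, the sketch below runs a Gillespie-type stochastic simulation of a two-state ion-channel population; the rates, counts and two-state scheme are assumptions, not the thesis's model.

```python
# Illustrative Gillespie stochastic simulation of a two-state
# (closed <-> open) channel population. Rate constants and channel
# counts are assumed values for demonstration.
import numpy as np

rng = np.random.default_rng(42)
k_open, k_close = 2.0, 1.0       # per-channel transition rates (1/ms)
n_closed, n_open = 100, 0
t, t_end = 0.0, 10.0

while t < t_end:
    a1 = k_open * n_closed        # propensity: a closed channel opens
    a2 = k_close * n_open         # propensity: an open channel closes
    a0 = a1 + a2
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)            # waiting time to the next event
    if rng.random() < a1 / a0:                # choose which event fires
        n_closed, n_open = n_closed - 1, n_open + 1
    else:
        n_closed, n_open = n_closed + 1, n_open - 1

print(f"open channels at t={t_end} ms: {n_open}")  # ~ 100 * k_open/(k_open+k_close)
```

Each step's waiting time depends on the state left by the previous step, which is what makes this class of solvers resistant to naive parallelization.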