271. Malware variant detection. Alzarooni, K. M. A. (January 2012)
Malware programs (e.g. viruses, worms and Trojans) are a worldwide epidemic. Studies and statistics show that the impact of malware is getting worse. Malware detectors are the primary tools in the defence against malware. Most commercial anti-malware scanners maintain a database of malware patterns and heuristic signatures for detecting malicious programs within a computer system. Malware writers use semantic-preserving code transformation (obfuscation) techniques to produce new stealth variants of their malware programs. Malware variants are hard to detect with today's detection technologies, as these tools rely mostly on syntactic properties and ignore the semantics of malicious executable programs. A robust malware detection technique is required to handle this emerging security threat. In this thesis, we propose a new methodology that overcomes the drawback of existing malware detection methods by analysing the semantics of known malicious code. The methodology consists of three major analysis techniques: the development of a semantic signature, slicing analysis and test data generation analysis. The core element of this approach is to specify an approximation of malware code semantics and to produce signatures for identifying possibly obfuscated but semantically equivalent variants of a sample of malware. A semantic signature consists of a program test input and semantic traces of a known malware code. The key challenge in developing our semantics-based approach to malware variant detection is to achieve a balance between improving the detection rate (i.e. matching semantic traces) and performance, with or without the effects of obfuscation on malware variants. We develop slicing analysis to improve the construction of semantic signatures. We back our trace-slicing method with a theoretical result that establishes the correctness of the slicer. A proof-of-concept implementation of our malware detector demonstrates that the semantics-based analysis approach could improve current detection tools and make the task more difficult for malware authors. Another important part of this thesis is exploring program semantics for the selection of a suitable part of the semantic signature, for which we provide two new theoretical results. In particular, this dissertation includes a test data generation method that works for binary executables and a notion of correctness for the method.
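The abstract does not give implementation details of the signature matcher, so the following is only a rough, assumed sketch of the trace-matching idea behind a semantic signature: a suspect program's (sliced) execution trace is compared against the stored trace of a known sample after normalising away obfuscation-sensitive details such as register names and literal constants. All names (normalise_event, matches_signature) and the toy trace format are hypothetical.

```python
# Hypothetical sketch of semantic-signature matching: a signature pairs a test
# input with the semantic trace a known malware sample produces on that input.
# A suspect matches if, run on the same input, its normalised (sliced) trace
# contains the normalised signature trace as a subsequence.

def normalise_event(event):
    """Strip obfuscation-sensitive detail (register names, literal constants),
    keeping only the operation and abstracted operands."""
    op, *operands = event.split()
    return (op, tuple("VAL" if o.startswith("0x") else
                      "REG" if o.startswith("r") else o
                      for o in operands))

def matches_signature(suspect_trace, signature_trace):
    """True if the normalised signature trace occurs as a subsequence of the
    normalised suspect trace (a crude stand-in for semantic-trace matching)."""
    sig = [normalise_event(e) for e in signature_trace]
    it = iter(normalise_event(e) for e in suspect_trace)
    return all(s in it for s in sig)   # consumes the iterator left to right

# Example: a variant that renames registers and inserts junk code still matches.
signature = ["mov r1 0x40", "xor r1 r1", "call decrypt"]
variant   = ["nop", "mov r7 0x99", "add r2 r2", "xor r7 r7", "nop", "call decrypt"]
print(matches_signature(variant, signature))   # True
```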
272. Data-driven detection and diagnosis of system-level failures in middleware-based service compositions. Wassermann, B. (January 2012)
Service-oriented technologies have simplified the development of large, complex software systems that span administrative boundaries. Developers have been enabled to build applications as compositions of services through middleware that hides much of the underlying complexity. The resulting applications inhabit complex, multi-tier operating environments that pose many challenges to their reliable operation and often lead to failures at runtime. Two key aspects of the time to repair a failure are the time to its detection and to the diagnosis of its cause. The prevalent approach to detection and diagnosis is primarily based on ad-hoc monitoring as well as operator experience and intuition. This is inefficient and leads to decreased availability. We propose an approach to data-driven detection and diagnosis in order to decrease the repair time of failures in middleware-based service compositions. Data-driven diagnosis supports system operators with information about the operation and structure of a service composition. We discuss how middleware-based service compositions can be monitored in a comprehensive, yet non-intrusive manner and present a process to discover system structure by processing deployment information that is commonly reified in such systems. We perform a controlled experiment that compares the performance of 22 participants using either a standard or the data-driven approach to diagnose several failures injected into a real-world service composition. We find that system operators using the latter approach are able to achieve significantly higher success rates and lower diagnosis times. Data-driven detection is based on the automation of failure detection through applying an outlier detection technique to multi-variate monitoring data. We evaluate the effectiveness of one-class classification for this purpose and determine a simple approach to select subsets of metrics that afford highly accurate failure detection.
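The thesis names one-class classification over multivariate monitoring data as its detection technique, but the concrete classifier and metric set are not stated in the abstract. Purely as an assumed illustration, the sketch below trains a one-class SVM (scikit-learn) on synthetic metric vectors representing normal operation and flags later observations that fall outside the learned region; the metric names are invented.

```python
# Illustrative sketch (not the thesis implementation): learn the region of
# "normal" monitoring data, then report outliers as suspected failures.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Each row: one monitoring sample of a few metrics for a service composition,
# e.g. [request latency ms, CPU %, queue length, error count] (assumed metrics).
normal = rng.normal(loc=[120, 40, 5, 0], scale=[15, 8, 2, 0.3], size=(500, 4))
faulty = rng.normal(loc=[900, 95, 60, 12], scale=[50, 3, 10, 3], size=(20, 4))

scaler = StandardScaler().fit(normal)
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
detector.fit(scaler.transform(normal))

# predict() returns +1 for inliers (healthy) and -1 for outliers (failures).
labels = detector.predict(scaler.transform(np.vstack([normal[:5], faulty[:5]])))
print(labels)   # mostly +1 for the healthy rows, -1 for the failure-like rows
```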
273. MINES: Mutual Information Neuro-Evolutionary System. Behzadan, B. (January 2011)
Mutual information neuro-evolutionary system (MINES) presents a novel self-governing approach to determining the optimal quantity and connectivity of the hidden layer of a three-layer feed-forward neural network, founded on a theoretical and practical basis. The system is a combination of a feed-forward neural network, back-propagation algorithm, genetic algorithm, mutual information and clustering. Back-propagation is used for parameter learning to reduce the system's error, while mutual information aids back-propagation in following an effective path in the weight space. A genetic algorithm changes the incoming synaptic connections of the hidden nodes, based on the fitness provided by the mutual information from the error space to the hidden layer, to perform structural learning. Mutual information determines the appropriate synapses connecting the hidden nodes to the input layer; in effect, it also links back-propagation to the genetic algorithm. Weight clustering is applied to reduce the number of hidden nodes with similar functionality, i.e. those possessing the same connectivity patterns and a small angular separation in the weight space. Finally, the performance of the system is assessed on two theoretical problems and one empirical problem. A nonlinear polynomial regression problem and the well-known two-spiral classification task are used to evaluate the theoretical performance of the system. Forecasting of daily crude oil prices is conducted to observe the performance of MINES on a real-world application.
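To make the role of mutual information concrete, here is a minimal, assumed sketch that estimates the mutual information between a hidden node's activations and the network error by histogram binning; MINES uses a quantity of this kind as a fitness signal for the genetic algorithm that rewires hidden-node connections. The binning scheme and variable names are illustrative and not taken from the thesis.

```python
# Assumed illustration: histogram-based mutual information between a hidden
# node's activations and the output error, usable as a GA fitness signal.
import numpy as np

def mutual_information(x, y, bins=16):
    """Estimate I(X;Y) in nats from samples via a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
error = rng.normal(size=5000)
informative_node = np.tanh(2.0 * error + 0.1 * rng.normal(size=5000))
irrelevant_node = rng.normal(size=5000)

# A hidden node whose activation carries information about the error scores
# higher, so the GA would tend to preserve its incoming connections.
print(mutual_information(informative_node, error))   # relatively large
print(mutual_information(irrelevant_node, error))    # close to 0
```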
274. Invariant encoding schemes for visual recognition. Newell, A. J. (January 2012)
Many encoding schemes, such as the Scale Invariant Feature Transform (SIFT) and Histograms of Oriented Gradients (HOG), make use of templates of histograms to enable a loose encoding of the spatial position of basic features such as oriented gradients. Whilst such schemes have been successfully applied, the use of a template may limit their potential, as it forces the histograms to conform to a rigid spatial arrangement. In this work we look at developing novel schemes making use of histograms, without the need for a template, which offer good levels of performance in visual recognition tasks. To do this, we look at the way the basic feature type changes across scale at individual locations. This gives rise to the notion of column features, which capture this change across scale by concatenating feature types at a given scale separation. As well as applying this idea to oriented gradients, we make wide use of Basic Image Features (BIFs) and oriented Basic Image Features (oBIFs), which encode local symmetry information. This resulted in a range of encoding schemes. We then tested these schemes on problems of current interest in three application areas. First, the recognition of characters taken from natural images, where our system outperformed existing methods. For the second area we selected a texture problem, involving the discrimination of quartz grains using surface texture, where the system achieved near-perfect performance on the first task, and a level of performance comparable to an expert human on the second. In the third area, writer identification, the system achieved a perfect score and outperformed other methods when tested on the Arabic handwriting dataset as part of the ICDAR 2011 Competition.
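The exact BIF/oBIF computation is not reproduced here; the following assumed sketch illustrates the column-feature idea with plain quantised gradient orientations as the basic feature type: the local label at a fine scale is paired with the label at a coarser scale, and the pairs are histogrammed over the whole image with no spatial template. Function names and parameter values are illustrative.

```python
# Assumed sketch of "column features": quantise the local feature type (here,
# gradient orientation) at two scales, pair the two labels per pixel, and
# histogram the pairs over the whole image -- no spatial template required.
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_labels(image, sigma, n_bins=8):
    """Quantised gradient orientation at one scale."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    angle = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    return np.floor(angle / (2 * np.pi) * n_bins).astype(int) % n_bins

def column_feature_histogram(image, sigma_fine=1.0, sigma_coarse=4.0, n_bins=8):
    """Joint histogram of (fine-scale label, coarse-scale label) pairs."""
    fine = orientation_labels(image, sigma_fine, n_bins)
    coarse = orientation_labels(image, sigma_coarse, n_bins)
    joint = fine * n_bins + coarse                  # one index per label pair
    hist = np.bincount(joint.ravel(), minlength=n_bins * n_bins).astype(float)
    return hist / hist.sum()                        # normalised descriptor

image = np.random.default_rng(2).random((64, 64))
descriptor = column_feature_histogram(image)
print(descriptor.shape)   # (64,) -- an 8x8 joint histogram, template-free
```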
275. Towards the use of visual masking within virtual environments to induce changes in affective cognition. Drummond, J. (January 2013)
This thesis concerns the use of virtual environments for psychotherapy. It makes use of virtual environment properties that go beyond real-world simulation. The core technique used is based on research from perception science, an effect known as backwards visual masking. Here, a rapidly displayed target image is rendered explicitly imperceptible via the subsequent display of a masking image. The aim of this thesis was to investigate the potential of visual masking within virtual environments to induce changes in affective cognition. Of particular importance would be changes in a positive direction, as this could form the foundation of a psychotherapeutic tool to treat affect disorders and other conditions with an affective component. The initial pair of experiments looked at whether visual masking was possible within virtual environments, whether any measurable behavioural influence could be found and whether there was any evidence that affective cognitions could be influenced. It was found that the technique worked and could influence both behaviour and affective cognition. Following this, two experiments looked further at parameter manipulation of visual masking within virtual environments, with the aim of better specifying the parameter values. Results indicated that the form of visual masking used worked better in a virtual environment when the target and mask were both highly textured, and that affective effects were modulated by the number of exposures of the target. The final pair of experiments attempted to induce an affect contagion effect and an affect cognition-modification effect. An affect cognition-modification effect was found, whereas an affect contagion effect was not. Overall, the results show that using visual masking techniques within virtual environments to induce affect cognition changes has merit. The thesis lays the foundation for further work and supports the use of this technique as the basis of an intervention tool.
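The abstract does not give the presentation parameters used; purely as an assumed illustration of backwards-masking timing, the sketch below builds a frame-locked schedule in which a target is shown for a single frame and immediately followed by a masking image before normal rendering resumes. The durations and frame counts are invented, not taken from the experiments.

```python
# Assumed sketch: a frame-locked backwards-masking schedule for a 60 Hz display.
# The target is shown briefly and immediately replaced by a mask, making it
# imperceptible while still being processed. All durations are illustrative.
FRAME_MS = 1000.0 / 60.0   # ~16.7 ms per frame at 60 Hz

def masking_schedule(n_exposures, target_frames=1, mask_frames=6, gap_frames=30):
    """Return a list of (stimulus, duration_ms) steps for the render loop."""
    steps = []
    for _ in range(n_exposures):
        steps.append(("target", target_frames * FRAME_MS))
        steps.append(("mask", mask_frames * FRAME_MS))     # backwards mask
        steps.append(("scene", gap_frames * FRAME_MS))     # normal VE rendering
    return steps

for stimulus, ms in masking_schedule(n_exposures=2):
    print(f"{stimulus:>6}: {ms:5.1f} ms")
```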
276. Pose-invariant, model-based object recognition, using linear combination of views and Bayesian statistics. Zografos, V. (January 2009)
This thesis presents an in-depth study of the problem of object recognition, and in particular the detection of 3-D objects in 2-D intensity images which may be viewed from a variety of angles. A solution to this problem remains elusive to this day, since it involves dealing with variations in geometry, photometry and viewing angle, noise, occlusions and incomplete data. This work restricts its scope to a particular kind of extrinsic variation: variation of the image due to changes in the viewpoint from which the object is seen. A technique is proposed and developed to address this problem, which falls into the category of view-based approaches, that is, a method in which an object is represented as a collection of a small number of 2-D views, as opposed to the generation of a full 3-D model. This technique is based on the theoretical observation that the geometry of the set of possible images of an object undergoing 3-D rigid transformations and scaling may, under most imaging conditions, be represented by a linear combination of a small number of 2-D views of that object. It is therefore possible to synthesise a novel image of an object given at least two existing and dissimilar views of the object, and a set of linear coefficients that determine how these views are to be combined in order to synthesise the new image. The method works in conjunction with a powerful optimisation algorithm to search for and recover the optimal linear combination coefficients that will synthesise a novel image which is as similar as possible to the target scene view. If the similarity between the synthesised and target images is above some threshold, then an object is determined to be present in the scene and its location and pose are defined, in part, by the coefficients. The key benefit of this technique is that, because it works directly with pixel values, it avoids the need for problematic low-level feature extraction and solution of the correspondence problem. As a result, a linear combination of views (LCV) model is easy to construct and use, since it only requires a small number of stored 2-D views of the object in question and the selection of a few landmark points on the object, a process which is easily carried out during the offline model-building stage. In addition, this method is general enough to be applied across a variety of recognition problems and different types of objects. The development and application of this method is initially explored for two-dimensional problems, and the same principles are then extended to 3-D. Additionally, the method is evaluated on synthetic and real-image datasets containing variations in the objects' identity and pose. Finally, possible extensions to incorporate a foreground/background model and lighting variations of the pixels are examined as future work.
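As a toy, assumed illustration of the linear-combination-of-views principle, the sketch below recovers combination coefficients for landmark coordinates by least squares; the thesis itself optimises over pixel intensities with a stochastic search, which is not reproduced here, and the affine-style parameterisation shown is one common choice rather than the exact model used.

```python
# Assumed illustration of the LCV principle: the landmark coordinates of a
# novel view are written as a linear combination of the coordinates of two
# stored basis views, and the coefficients are recovered by least squares.
import numpy as np

rng = np.random.default_rng(3)
n_landmarks = 10

# Two stored 2-D views of the same object: (x, y) per landmark point.
view1 = rng.random((n_landmarks, 2))
view2 = rng.random((n_landmarks, 2))

# Design matrix: each target coordinate is a combination of both views'
# coordinates plus a constant term (an affine-style LCV parameterisation).
A = np.hstack([view1, view2, np.ones((n_landmarks, 1))])

# Simulate a target view generated by some "true" coefficients, plus noise.
true_coeffs = np.array([[0.6, -0.2], [0.1, 0.9], [-0.3, 0.4], [0.8, 0.2], [0.05, -0.1]])
target = A @ true_coeffs + 0.001 * rng.normal(size=(n_landmarks, 2))

# Recover the coefficients and synthesise the view; if the synthesised view is
# close enough to the target, the object is declared present in the scene.
coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
synthesised = A @ coeffs
print(np.abs(synthesised - target).max())   # small residual -> match
```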
277. Virtual light fields for global illumination in computer graphics. Mortensen, J. (January 2011)
This thesis presents novel techniques for the generation and real-time rendering of globally illuminated environments with surfaces described by arbitrary materials. Real-time rendering of globally illuminated virtual environments has long been an elusive goal. Many techniques have been developed which can compute still images with full global illumination, and this is still an area of active, flourishing research. Other techniques have dealt only with certain aspects of global illumination in order to speed up computation and thus rendering. These include radiosity, ray-tracing and hybrid methods. Radiosity, due to its view-independent nature, can easily be rendered in real-time after pre-computing and storing the energy equilibrium. Ray-tracing, however, is view-dependent and requires substantial computational resources in order to run in real-time. Attempts at providing full global illumination at interactive rates include caching methods, fast rendering from photon maps, light fields, brute-force ray-tracing and GPU-accelerated methods. Currently, these methods either apply only to special cases, are incomplete, exhibiting poor image quality, and/or scale badly such that only modest scenes can be rendered in real-time with current hardware. The techniques developed in this thesis extend earlier research and provide a novel, comprehensive framework for storing global illumination in a data structure - the Virtual Light Field (VLF) - that is suitable for real-time rendering. The techniques trade memory usage and precompute time for rapid rendering. The main weaknesses of the VLF method are targeted in this thesis. It is the expensive pre-compute stage, with best-case O(N^2) performance where N is the number of faces, which makes light propagation impractical for all but simple scenes. This is analysed, and greatly superior alternatives are presented and evaluated in terms of efficiency and error. Improvements of several orders of magnitude in computational efficiency are achieved over the original VLF method. A novel propagation algorithm running entirely on the Graphics Processing Unit (GPU) is presented. It is incremental in that it can resolve visibility along a set of parallel rays in O(N) time, and can produce a virtual light field for a moderately complex scene (tens of thousands of faces), with complex illumination stored in millions of elements, in minutes, and for simple scenes in seconds. It is approximate but gracefully converges to a correct solution; a linear increase in resolution results in a linear increase in computation time. Finally, a GPU rendering technique is presented which can render from Virtual Light Fields at real-time frame rates in high-resolution VR presentation devices such as the CAVE™.
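The following is a heavily simplified, assumed CPU sketch of one idea behind resolving visibility for a bundle of parallel rays in a single linear pass: face sample points are binned onto ray indices and only the nearest depth per ray is kept, rather than intersecting every ray with every face. It conveys the flavour of the propagation step only; it is not the thesis's GPU algorithm.

```python
# Heavily simplified, assumed sketch: nearest-hit visibility along a bundle of
# parallel rays resolved in one linear pass over face samples. Illustrative only.
import numpy as np

rng = np.random.default_rng(4)
n_rays, n_samples = 256, 10_000

# Each sample: a point on some scene face, expressed in the ray bundle's frame
# as (index of the ray it falls on, depth along the ray direction, face id).
ray_index = rng.integers(0, n_rays, size=n_samples)
depth = rng.random(n_samples)
face_id = rng.integers(0, 500, size=n_samples)

# First-visible face per ray: one pass, no per-ray/per-face intersection loop.
nearest_depth = np.full(n_rays, np.inf)
nearest_face = np.full(n_rays, -1)
for r, d, f in zip(ray_index, depth, face_id):
    if d < nearest_depth[r]:
        nearest_depth[r] = d
        nearest_face[r] = f

print(nearest_face[:8])   # id of the first-visible face along each of the first rays
```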
278. Congestion control for real-time interactive multimedia streams. Choi, S.-H. (January 2011)
The Internet is getting richer, and so are its services. The richer the services, the more the users demand; and the more they demand, the more we, the congestion control mechanisms in the Internet, have to guarantee. This thesis investigates congestion control mechanisms for interactive multimedia streaming applications. We start by asking why congestion control schemes are not widely deployed in real-world applications, and study what options are available at present. We then discuss and show some of the reasons that may have made the control mechanism, specifically the rate-based congestion control mechanism, less than attractive. In an effort to address these problems, we identify the existing problems from which a rate-based congestion control protocol cannot easily escape. We therefore propose a simple but novel window-based congestion control protocol that can retain a smooth throughput property and remain fair when competing with TCP, while still being responsive to network changes. Through extensive ns-2 simulations and real-world experiments, we evaluate TFWC, our proposed mechanism, and TFRC, the proposed IETF standard, in terms of network-oriented metrics (fairness, smoothness, stability and responsiveness) and end-user-oriented metrics (PSNR and MOS) to thoroughly study the protocols' behaviour. We then discuss and draw conclusions on the suitability of the evaluated protocols for real applications.
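For context, TFRC, the rate-based IETF mechanism the thesis compares against, derives its fair sending rate from the TCP throughput equation of RFC 5348. The assumed sketch below simply evaluates that published equation to show how sensitive a rate-based scheme is to the measured loss event rate and round-trip time; it is not TFWC's window computation, which the abstract does not specify.

```python
# Context sketch: the TCP-friendly throughput equation used by TFRC (RFC 5348).
# Evaluating it shows how sharply the allowed rate reacts to the loss event
# rate p and round-trip time R -- part of the smoothness/responsiveness trade-off.
from math import sqrt

def tfrc_rate(s, R, p, b=1, t_rto=None):
    """TCP-friendly sending rate in bytes/sec.
    s: packet size (bytes), R: RTT (s), p: loss event rate, b: packets per ACK."""
    t_rto = 4 * R if t_rto is None else t_rto
    denom = (R * sqrt(2 * b * p / 3)
             + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return s / denom

# Doubling the loss event rate substantially cuts the allowed sending rate.
for p in (0.001, 0.002, 0.01):
    print(f"p={p:.3f}: {tfrc_rate(s=1460, R=0.1, p=p) / 1e3:8.1f} kB/s")
```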
279. Geometric models of brain white matter for microstructure imaging with diffusion MRI. Panagiotaki, E. (January 2011)
The research presented in this thesis models the diffusion-weighted MRI signal within brain white matter tissue. We are interested in deriving descriptive microstructure indices, such as white matter axon diameter and density, from the observed diffusion MRI signal. The motivation is to obtain non-invasive, reliable biomarkers for early diagnosis and prognosis of brain development and disease. We use both analytic and numerical models to investigate which properties of the tissue and aspects of the diffusion process affect the diffusion signal we measure. First we develop a numerical method to approximate the tissue structure as closely as possible. We construct three-dimensional meshes from a stack of confocal microscopy images using the marching cubes algorithm. The experiment demonstrates the technique using a biological phantom (asparagus). We devise an MRI protocol to acquire data from the sample. We use the mesh models as substrates in Monte-Carlo simulations to generate synthetic MRI measurements. To test the feasibility of the method we compare simulated measurements from the three-dimensional mesh with scanner measurements from the same sample, and with simulated measurements from an extruded mesh and much simpler parametric models. The results show that the three-dimensional mesh model matches the data better than the extruded mesh and the parametric models, revealing the sensitivity of the diffusion signal to the microstructure. The second study constructs a taxonomy of analytic multi-compartment models of white matter by combining intra- and extra-axonal compartments from simple models. We devise an imaging protocol that allows diffusion sensitisation parallel and perpendicular to tissue fibres. We use the protocol to acquire data from two fixed rat brains, which allows us to fit, study and evaluate the models. We conclude that models which incorporate a non-zero axon radius describe the measurements most accurately. The key observation is a departure of the signals in the parallel direction from the two-compartment models, suggesting restriction, most likely from glial cells or the binding of water molecules to membranes. The addition of a third compartment can capture this departure and explain the data. The final study investigates the estimates using in vivo brain diffusion measurements. We adjust the imaging protocol to allow an in vivo MRI acquisition of a rat brain, and compare and assess the taxonomy of models. We then select the models that best explain the in vivo data and compare the estimates with those from the ex vivo measurements to identify any discrepancies. The results support the addition of the third compartment, as in the ex vivo findings; however, the ranking of the models favours the zero-radius intra-axonal compartments.
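As an assumed, minimal illustration of the Monte-Carlo part of such a pipeline, the sketch below simulates freely diffusing random walkers and synthesises the diffusion-weighted signal for a single pulsed gradient under the narrow-pulse approximation, checking it against the analytic free-diffusion signal. The thesis uses mesh substrates, restriction and full acquisition protocols, none of which is reproduced here; all parameter values are generic.

```python
# Assumed minimal sketch of Monte-Carlo synthesis of a diffusion-weighted MRI
# signal: free random walkers, one pulsed-gradient direction, narrow-pulse
# approximation (phase = gamma * delta * g . net displacement).
import numpy as np

rng = np.random.default_rng(5)

D = 2.0e-9            # free diffusivity, m^2/s
Delta = 0.03          # gradient pulse separation, s
gamma = 2.675e8       # proton gyromagnetic ratio, rad/s/T
delta = 0.003         # gradient pulse duration, s (narrow-pulse regime)
g = 0.06              # gradient strength, T/m, applied along x
n_walkers, n_steps = 10_000, 200

# Brownian displacement over Delta, built from many small Gaussian steps.
dt = Delta / n_steps
steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(n_walkers, n_steps, 3))
displacement = steps.sum(axis=1)                    # net displacement per walker

phase = gamma * delta * g * displacement[:, 0]      # gradient along x
signal = np.abs(np.mean(np.exp(1j * phase)))        # normalised signal E

b_value = (gamma * delta * g) ** 2 * Delta          # narrow-pulse b-value
print(signal, np.exp(-b_value * D))                 # Monte-Carlo vs analytic
```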
280. High performance simulation and modelling of wireless vehicular ad hoc networks. Hewer, T. D. (January 2011)
Vehicular communications occur when two or more vehicles come into range of one another to share data over wireless media. The applications of this communication are far-reaching, from toll collection to collision avoidance. Due to the proliferation of wireless devices and their ubiquitous nature, it is now possible to operate in an ad hoc manner between transmitting stations. Vehicular ad hoc networks (VANET) are a special kind of network that experience short link times and high levels of interference, but have the ability to provide many driver information and safety solutions for the world's roads. Computer simulation of VANET enables rapid prototyping and intensive exploration of systems and protocols, using highly complex and computationally expensive models and programs. Experimentation with real vehicles would be time-consuming and expensive, limiting the range of study that could be achieved and therefore reducing the accuracy of analytical solutions exposed through experimentation. An extensive corpus of work on networking, traffic modelling and parallel processing algorithms has been reviewed as part of this thesis, to isolate the current state of the art and examine areas for novel research. This thesis proposes the value and importance of computer simulation for VANET, exploring the applications of a high-fidelity system when applied to real-world scenarios. The work is grounded on two main contributions: 1) that by using inter-vehicle communication and an advanced lane-changing/merging algorithm, the congestion that builds up around an obstruction on a highway can be alleviated and reduced more effectively than with simple line-of-sight, even when only a proportion of the vehicles are radio-equipped; and 2) that the available parameter space, as large as it is, can be efficiently explored using a parallel algorithm with the NS-3 network simulation system. The large-scale simulation of VANET in highway scenarios can be used to discover universal trends and behaviours in the successful and timely delivery of data packets. VANET research has a broad scope for use in modern vehicles, and the optimisation of data transmission is highly relevant; a large number of parameters can be tuned in a networking device, but knowing which to tune and by how much is paramount to the operation of intelligent transport systems.
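The thesis runs its sweeps with a parallel algorithm driving NS-3; the assumed sketch below shows only the generic pattern of farming out the points of a parameter grid to worker processes. The run_simulation function and its "delivery ratio" are stand-ins; in practice each worker would launch an NS-3 VANET scenario and parse its output.

```python
# Assumed sketch of a parallel parameter sweep: enumerate a parameter grid and
# dispatch each point to a worker process. run_simulation is a placeholder for
# launching an NS-3 scenario and extracting packet-delivery statistics.
import itertools
from multiprocessing import Pool

def run_simulation(params):
    tx_power_dbm, beacon_hz, penetration = params
    # Placeholder "result" with a plausible shape; not a real VANET model.
    delivery_ratio = max(0.0, min(1.0, 0.5 + 0.01 * tx_power_dbm
                                  - 0.02 * beacon_hz + 0.3 * penetration))
    return params, delivery_ratio

if __name__ == "__main__":
    grid = list(itertools.product([10, 20, 30],      # transmit power (dBm)
                                  [1, 5, 10],        # beacon rate (Hz)
                                  [0.2, 0.5, 1.0]))  # equipped-vehicle fraction
    with Pool(processes=4) as pool:
        for params, pdr in pool.map(run_simulation, grid):
            print(params, round(pdr, 3))
```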