21

Homogeneous to heterogeneous face recognition

Shaikh, Muhammad January 2015 (has links)
Face recognition, a very challenging research area, has been studied for more than a decade to solve the variety of problems associated with it, e.g. PIE (pose, illumination and expression), occlusion, gesture and aging. Most of the time, these problems are considered in situations where images are captured by the same sensors/cameras/modalities; methods in this domain are termed homogeneous face recognition. In reality, face images are also captured from alternative modalities, e.g. near infrared (NIR), thermal, sketch, digital (high resolution) and web-cam (low resolution), which further complicates the face recognition problem. Matching faces across different modalities is therefore categorized as heterogeneous face recognition (HFR). This dissertation makes major contributions to heterogeneous face recognition as well as its homogeneous counterpart. The first contribution relates to multi-scale LBP, sequential forward search and the KCRC-RLS method. Multi-scale approaches result in high-dimensional feature vectors, which increase the computational cost of the proposed approach and cause overtraining. A sequential forward approach is adopted to analyze the effect of multiple scales. This study reveals an interesting fact about merging the features of individual scales: it significantly reduces the variance of recognition rates among the individual scales. In the second contribution, I extend the efficacy of PLDA to heterogeneous face recognition. Due to its probabilistic nature, information from different modalities can easily be combined, and priors can be applied over the possible matchings. To the best of the author's knowledge, this is the first study that applies PLDA to inter-modality face recognition. The third contribution addresses the small sample size problem in HFR scenarios by using intensity-based features.
A bagging-based TFA method is proposed to exhaustively test face databases in a cross-validation environment with a leave-one-out strategy, so as to report fair and comparable results. The fourth contribution concerns a module that can identify modality types, something currently missing from the face recognition pipeline; identifying the modalities in heterogeneous face recognition is required to support automation in HFR methods. The fifth contribution is an extension of the PLDA used in my second contribution: bagging-based probabilistic linear discriminant analysis is proposed to tackle the problem of biased results caused by overlapping training and test sets. Histogram of Oriented Gradients (HOG) descriptors are applied, and the recognition rates of this method outperform all state-of-the-art methods using only HOG features.
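As an illustration of why multi-scale texture features become high-dimensional, the sketch below concatenates basic LBP histograms computed at several radii. This is a generic NumPy illustration, not the thesis's multi-scale LBP/KCRC-RLS implementation; the radii and the simple non-interpolated neighbourhood are arbitrary example choices.

```python
import numpy as np

def lbp_codes(img, r):
    """Basic 8-neighbour LBP at integer radius r (no interpolation)."""
    offs = [(-r, -r), (-r, 0), (-r, r), (0, r), (r, r), (r, 0), (r, -r), (0, -r)]
    h, w = img.shape
    centre = img[r:h - r, r:w - r]
    code = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        neighbour = img[r + dy:h - r + dy, r + dx:w - r + dx]
        code |= (neighbour >= centre).astype(np.uint8) << bit
    return code

def multiscale_lbp_hist(img, radii=(1, 2, 3)):
    """Concatenate normalised 256-bin LBP histograms over several scales."""
    feats = []
    for r in radii:
        hist, _ = np.histogram(lbp_codes(img, r), bins=256,
                               range=(0, 256), density=True)
        feats.append(hist)
    return np.concatenate(feats)   # 3 * 256 = 768 dims: motivates feature selection

face = np.random.rand(64, 64)      # stand-in for a grayscale face crop
vec = multiscale_lbp_hist(face)
print(vec.shape)                   # (768,)
```

The growth of the feature vector with each added scale is exactly what motivates the sequential forward search described above.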
22

Deep visual learning with spike-timing dependent plasticity

Liu, Daqi January 2017 (has links)
For most animal species, reliable and fast visual pattern recognition is vital for survival. The ventral stream, a primary pathway within the visual cortex, plays an important role in object representation and form recognition. It is a hierarchical system consisting of various visual areas, in which each visual area extracts a different level of abstraction. It is known that neurons within the ventral stream use spikes to represent these abstractions. To increase the level of realism in a neural simulation, a spiking neural network (SNN) is often used as the neural network model. From the SNN point of view, the analog output values generated by a traditional artificial neural network (ANN) can be considered as average spiking firing rates. Unlike a traditional ANN, an SNN can use not only spiking rates but also specific spike-timing sequences to represent the structural information of the input visual stimuli, which greatly increases distinguishability. To simulate the learning procedure of the ventral stream, various research questions need to be resolved. In most cases, traditional methods use a winner-take-all strategy to distinguish different classes; however, such a strategy does not work well for classes that overlap within the decision space. Moreover, neurons within the ventral stream tend to recognize new input visual stimuli within a limited time window, which requires a fast learning procedure. Furthermore, within the ventral stream, neurons receive continuous input visual stimuli and can only access local information during the learning procedure, whereas most traditional methods use separated visual stimuli as input and incorporate global information within the learning period. Finally, to verify the universality of the proposed SNN framework, it is necessary to investigate its classification performance on complex real-world tasks such as video-based disguise face recognition.
To address the above problems, a novel classification method inspired by the soft winner-take-all strategy is first proposed, in which each associated class is assigned a probability and the input visual stimulus is classified as the class with the highest probability. Moreover, to achieve a fast learning procedure, a novel feed-forward SNN framework equipped with an unsupervised spike-timing dependent plasticity (STDP) learning rule is proposed. Furthermore, an event-driven continuous STDP (ECS) learning method is proposed, in which two novel continuous input mechanisms generate continuous input visual stimuli, and a new event-driven STDP learning rule based on local information is applied within the training procedure. Finally, these methodologies are extended to the video-based disguise face recognition (VDFR) task, in which human identities are recognized not just from a few images but from video sequences showing facial muscle movements while speaking.
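The STDP mechanism referred to above can be illustrated with the textbook pair-based rule, in which the weight change decays exponentially with the pre/post spike-time difference. This is a generic sketch, not the thesis's ECS rule; the amplitudes and time constant are arbitrary example values.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre (ms).

    Pre fires before post (dt > 0): potentiation, decaying with dt.
    Post fires before pre (dt < 0): depression, decaying with |dt|.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

dts = np.array([-40.0, -5.0, 5.0, 40.0])
print(stdp_dw(dts))   # negative for dt < 0, positive for dt > 0
```

Because the update depends only on locally available spike times, rules of this shape fit the "local information only" constraint discussed above.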
23

Rigorous code generation for distributed real-time embedded systems

Almohammad, Ali January 2013 (has links)
This thesis addresses the problem of generating executable code for distributed embedded systems in which computing nodes communicate using the Controller Area Network (CAN). CAN is the dominant network in automotive and factory control systems and is becoming increasingly popular in robotic, medical and avionics applications. The requirements for functional and temporal reliability in these domains are often stringent, and testing alone may not offer the required level of confidence that systems satisfy their specifications. Consequently, there has been considerable research interest in additional techniques for reasoning about the behaviour of CAN-based systems. This thesis proposes a novel approach in which system behaviour is specified in a high-level language that is syntactically similar to Esterel but which is given a formal semantics by translation to bCANDLE, an asynchronous process calculus. The work developed here shows that bCANDLE systems can be translated automatically, via a common intermediate net representation, not only into executable C code but also into timed automaton models that can be used in the formal verification of a wide range of functional and temporal properties. A rigorous argument is presented that, for any system expressed in the high-level language, its timed automaton model is a conservative approximation of the executable C code, given certain well-defined assumptions about system components. It is shown that an off-the-shelf model-checker (UPPAAL) can be used to verify system properties with a high level of confidence that those properties will be exhibited by the executable code. The approach is evaluated by applying it to four representative case studies. Our results show that, for small to medium-sized systems, the generated code is sufficiently efficient for execution on typical hardware, and the generated timed automaton model is sufficiently small for analysis within reasonable time and memory constraints.
24

Guided entity relationship modelling within a simulation of a real world context

Issaravit, Piyanan January 2006 (has links)
This thesis examines the contribution to learning of a guided discovery learning approach within a simulated real-world context. In order to consider the potential of this approach, a database design task is chosen (Storey & Goldstein, 1993) which requires the learner to capture the semantics of the application domain in a real-world situation and then translate this into a data model for the database management system. This approach to learning has advantages, since simulating a real-world system in a classroom can be a very difficult and time-consuming activity. The aim of the thesis is, therefore, to investigate the possibility of simulating a real-world situation for gathering database requirements, and a teaching strategy that is suitable for this real-world context. In order to reach the research goal, two main research questions need to be answered. Firstly, to what extent can a simulation of a real-world situation improve the quality of learning in the database design area? Secondly, to what extent can a guided discovery teaching strategy enhance the learning of database design within such a (simulated) real-world context? A framework for simulating the real-world situation and guided discovery strategies was designed, and four versions of a prototype system called GERM were implemented and evaluated in order to answer the research questions. The main results, obtained from a small group of learners and lecturers, indicate that guided discovery learning within a real-world context can improve the quality of learning in database design, in particular entity relationship modelling. Amongst other advantages, it can help students to correct their basic misconceptions, and it can also improve students' skills in a real-world situation. The promising results suggest further lines of research.
25

An efficient approach to online bot detection based on a reinforcement learning technique

Alauthman, Mohammad January 2016 (has links)
In recent years, botnets have been adopted as a popular method for carrying and spreading malicious code on the Internet. This code paves the way for many fraudulent activities, including spam mail, distributed denial of service (DDoS) attacks and click fraud. While many botnets are set up using a centralized communication architecture such as Internet Relay Chat (IRC) or the Hypertext Transfer Protocol (HTTP), peer-to-peer (P2P) botnets can adopt a decentralized architecture, using an overlay network to exchange command and control (C&C) messages, which provides a more resilient and robust communication infrastructure. Without a centralized point for C&C servers, P2P botnets are more flexible in defeating countermeasures and detection procedures than traditional centralized botnets. Several botnet detection techniques have been proposed, but botnet detection is still a very challenging task for the Internet security community, because botnets execute attacks stealthily within dramatically growing volumes of network traffic, and current detection schemes face significant problems of efficiency and adaptability. The present study combines a traffic reduction approach with a reinforcement learning (RL) method in order to create an online bot detection system. The proposed framework adopts the idea of RL to improve the system dynamically over time, while the traffic reduction method is used to set up a lightweight and fast online detection method. Moreover, a host feature based on traffic at the connection level was designed, which can identify bot host behaviour. The proposed technique can therefore be applied to encrypted network traffic, since it depends only on information obtained from packet headers; it does not require Deep Packet Inspection (DPI) and cannot be confounded by payload encryption techniques.
The network traffic reduction technique reduces the number of packets input to the detection system, yet the proposed solution achieves a good detection rate of 98.3% as well as a low false positive rate (FPR) of 0.012% in the online evaluation. Comparison with other techniques on the same dataset shows that our strategy outperforms existing methods. The proposed solution was evaluated and tested using real network traffic datasets to increase its validity.
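A header-only, connection-level feature extractor of the kind described above might look like the following sketch. The tuple layout and the particular statistics (packet count, mean size, distinct destination ports) are invented for illustration and are not the thesis's feature set; the point is that nothing here touches the payload, so encryption does not affect it.

```python
from collections import defaultdict

def connection_features(packets):
    """Aggregate per-connection statistics from packet-header tuples.

    packets: iterable of (src_ip, dst_ip, dst_port, size_bytes) drawn
    from headers only -- no Deep Packet Inspection required.
    """
    conns = defaultdict(lambda: {"pkts": 0, "bytes": 0, "dports": set()})
    for src, dst, dport, size in packets:
        c = conns[(src, dst)]
        c["pkts"] += 1
        c["bytes"] += size
        c["dports"].add(dport)
    return {
        key: {"pkts": c["pkts"],
              "avg_size": c["bytes"] / c["pkts"],
              "distinct_dports": len(c["dports"])}
        for key, c in conns.items()
    }

pkts = [("10.0.0.2", "198.51.100.7", 6667, 60),
        ("10.0.0.2", "198.51.100.7", 6667, 62),
        ("10.0.0.2", "198.51.100.7", 8080, 1400)]
feats = connection_features(pkts)
print(feats[("10.0.0.2", "198.51.100.7")])
```

Features such as these could then be fed to the RL-driven classifier to score each host's behaviour online.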
26

Feature reduction and representation learning for visual applications

Yu, Mengyang January 2016 (has links)
Computation on large-scale data spaces is involved in many active problems in computer vision and pattern recognition. However, in realistic applications, most existing algorithms are heavily restricted by the large number of features, and tend to be inefficient or even infeasible. In this thesis, this problem is addressed in two ways: (1) projecting features onto a lower-dimensional subspace; (2) embedding features into a Hamming space. Firstly, a novel subspace learning algorithm called Local Feature Discriminant Projection (LFDP) is proposed for the discriminant analysis of local features. LFDP efficiently seeks a subspace that improves the discriminability of local features for classification. Extensive experimental validation on three benchmark datasets demonstrates that the proposed LFDP outperforms other dimensionality reduction methods and achieves state-of-the-art performance for image classification. Secondly, for action recognition, a novel binary local representation for RGB-D video data fusion is presented. In this approach, a general local descriptor called the Local Flux Feature (LFF) is obtained for both RGB and depth data by computing the local fluxes of the gradient fields of the video data. The LFFs from the RGB and depth channels are then fused into a Hamming space via the Structure Preserving Projection (SPP), which preserves not only the pairwise feature structure but also a higher-level connection between samples and classes. Comprehensive experimental results show the superiority of both LFF and SPP. Thirdly, with respect to unsupervised learning, SPP is extended to the Binary Set Embedding (BSE) for cross-modal retrieval. BSE outputs meaningful hash codes for local features from the image domain and word vectors from the text domain. Extensive evaluation on two widely used image-text datasets demonstrates the superior performance of BSE compared with state-of-the-art cross-modal hashing methods.
Finally, a generalized multiview spectral embedding algorithm called Kernelized Multiview Projection (KMP) is proposed to fuse multimedia data from multiple sources. Different features/views in the reproducing kernel Hilbert spaces are linearly fused together and then projected onto a low-dimensional subspace by KMP, whose performance is thoroughly evaluated on both image and video datasets and compared with other multiview embedding methods.
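Embedding real-valued features into a Hamming space, as done above, can be illustrated with the generic sign-of-random-projection scheme below. Unlike SPP/BSE, which learn the projection from the data, this sketch draws the projection at random and is included only to show the mechanics of binary hashing and Hamming-distance comparison.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_embed(X, n_bits=64, proj=None):
    """Map real-valued features X of shape (n, d) to n_bits-bit codes.

    Each bit is the sign of a linear projection; learned methods replace
    the random matrix with an optimised one.
    """
    if proj is None:
        proj = rng.standard_normal((X.shape[1], n_bits))
    codes = (X @ proj > 0).astype(np.uint8)
    return codes, proj

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.count_nonzero(a != b))

X = rng.standard_normal((3, 128))      # three 128-dim feature vectors
codes, W = binary_embed(X)
print(codes.shape, hamming(codes[0], codes[1]))
```

Once features live in a Hamming space, nearest-neighbour search reduces to cheap bitwise comparisons, which is what makes such embeddings attractive for large-scale retrieval.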
27

An investigation into possible attacks on HTML5 IndexedDB and their prevention

Kimak, Stefan January 2016 (has links)
This thesis presents an analysis of, and an enhanced security model for, IndexedDB, the persistent HTML5 browser-based data store. In versions of HTML prior to HTML5, web sites used cookies to track user preferences locally. Cookies, however, are limited in both file size and number, and must also be added to every HTTP request, which increases web traffic unnecessarily. Web functionality has increased significantly since cookies were introduced by Netscape in 1994. Consequently, web developers require additional capabilities to keep up with the evolution of the World Wide Web and the growth of eCommerce. The response to this requirement was the IndexedDB API, which became an official W3C Recommendation in January 2015. The IndexedDB API includes an object store, indices and cursors, and so gives HTML5-compliant browsers a transactional database capability. Furthermore, once downloaded, IndexedDB data stores do not require network connectivity, which permits mobile web-based applications to work without a data connection. Since such IndexedDB data stores will be used to store customer data, they will inevitably become targets for attackers. This thesis firstly argues that the design of IndexedDB makes it unavoidably insecure: every implementation is vulnerable to attacks such as cross-site scripting, and even data that has been deleted from databases may be stolen using appropriate software tools. This is demonstrated experimentally on both mobile and desktop browsers. IndexedDB is, however, capable of high performance even when compared with servers running optimized local databases, as demonstrated through the development of a formal performance model. The performance predictions for IndexedDB were tested experimentally, and the results showed high conformance over a range of usage scenarios. This implies that IndexedDB is potentially a useful HTML5 API if the security issues can be addressed.
In the final component of this thesis, we propose and implement enhancements that correct the security weaknesses identified in IndexedDB. The enhancements use multifactor authentication, and so are resistant to cross-site scripting attacks. The enhancement is then demonstrated experimentally, showing that HTML5 IndexedDB may be used securely both online and offline. This implies that secure, standards-compliant browser-based applications with persistent local data stores may be both feasible and efficient.
28

An adaptive simulation-based decision-making framework for small and medium sized enterprises

Zheng, Xin January 2011 (has links)
The rapid development of key mobile technologies supporting the 'Internet of Things', such as 3G, Radio Frequency Identification (RFID) and ZigBee, together with advanced decision-making methods, has improved Decision-Making Systems (DMSs) significantly in the last decade. Advanced wireless technology can provide real-time data collection to support a DMS, and effective decision-making techniques based on these real-time data can improve Supply Chain (SC) efficiency. However, it is difficult for Small and Medium sized Enterprises (SMEs) to adopt this technology effectively because of the complexity of the technology and methods, and the limited resources of SMEs. Consequently, a suitable DMS that can support effective decision making is required for the operation of SMEs in SCs. This thesis develops an adaptive simulation-based DMS for SMEs in the manufacturing sector. The research aims to help and support SMEs in improving their competitiveness by reducing costs and by reacting responsively, rapidly and effectively to customer demands. The developed adaptive framework is able to answer flexible 'what-if' questions by finding, optimising and comparing solutions under different scenarios, supporting SME managers in making efficient and effective decisions and in becoming more customer-driven enterprises. The proposed framework consists of simulation blocks separated by data filter-and-convert layers. A simulation block may include cell simulators, optimisation blocks and databases. A cell simulator provides an initial solution under a particular scenario, and an optimisation block outputs a group of optimum solutions, based on the initial solution, for decision makers. A two-phase optimisation algorithm integrating Conflicted Key Points Optimisation (CKPO) and a Dispatching Optimisation Algorithm (DOA) is proposed for the condition of Jm|STsi,b with Lot-Streaming (LS).
The integrated optimisation algorithm is demonstrated using a UK-based manufacturing case study. Each simulation block is a relatively independent unit separated by the relevant data layers; SMEs are thus able to design their simulation blocks according to their own requirements and constraints, such as small budgets or limited professional staff. A simulation block can communicate with related simulation blocks through the relevant data filter-and-convert layers, and this constructs a communication and information network to support the DMSs of Supply Chains (SCs). Two case studies have been conducted to validate the proposed simulation framework. An SME that produces gifts within an SC is used to validate the Make To Stock (MTS) production strategy with a developed stock-driven simulation-based DMS, while a schedule-driven simulation-based DMS is implemented for a UK-based manufacturing case study using the Make To Order (MTO) production strategy. The two simulation-based DMSs are able to provide various data to support management decision making under different scenarios.
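A stock-driven 'what-if' simulation in the MTS spirit described above can be sketched as a simple reorder-point model. The demand stream, reorder point, batch size and lead time below are invented example parameters, not figures from the case studies; a cell simulator of this kind would supply the initial solution that an optimisation block then refines.

```python
def simulate_mts(demand, start_stock=50, reorder_point=20, batch=40, lead_time=2):
    """Run a daily reorder-point simulation; return (units short, order days)."""
    stock, stockouts, orders = start_stock, 0, []
    pipeline = []                              # (arrival_day, qty) of open orders
    for day, d in enumerate(demand):
        arriving = sum(q for t, q in pipeline if t == day)
        pipeline = [(t, q) for t, q in pipeline if t != day]
        stock += arriving                      # receive replenishment
        if d > stock:                          # demand exceeds stock: stockout
            stockouts += d - stock
            stock = 0
        else:
            stock -= d
        if stock <= reorder_point and not pipeline:
            pipeline.append((day + lead_time, batch))
            orders.append(day)                 # place one order at a time
    return stockouts, orders

# 'What-if': how does this demand scenario perform under the default policy?
print(simulate_mts([10, 15, 12, 8, 20, 25, 5]))   # -> (5, [2, 5])
```

Changing the reorder point or batch size and re-running the simulation is exactly the kind of scenario comparison the framework is designed to support.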
29

Development of second order understanding as a basis for organisational improvement

Brown, James Robert January 2009 (has links)
Most, if not all, organisations claim to pursue a continuous improvement philosophy. The processes often adopted are predominantly concerned with the collection and analysis of data; such approaches take little account of the opinions or varying points of view of the affected groups or individuals. Within this research, these processes are referred to as first order processes. The thesis explores what is termed the second order of organisational improvement, placing the emphasis of the inquiry on the worldviews of those involved. The research includes a study of people's attitudes towards organisational improvement and an in-depth review of the relevant literature. Initial research, consisting of questionnaires and interviews, gave an indication of the willingness within the workforce to engage in improvement activities. This led to the development of a model that seeks to understand and incorporate the differing worldviews of individuals into action plans to improve the situations of concern, and an improvement process embedding understanding of others' perspectives and worldviews, dialogical communication and systems thinking. Incorporating the differing opinions and views of the people affected is central to the second order process. Implementation is possible in any organisation that enjoys an open, trusting environment, irrespective of the operational sector. The major contribution of the process lies in the change of emphasis from establishing a commonly held, shared view of a situation to understanding the differences between the worldviews of those involved. In effect, the second order process explores the differences in opinions and beliefs that underlie how individuals view a situation. The aim is to understand people's different views and incorporate those views in any agreed action.
30

The exploration and adaptation of soft systems methodology using learning theories to enable more effective development of information systems applications

Small, Adrian January 2007 (has links)
According to Lyytinen and Robey (1999), information systems development (ISD) involves risk. This risk is regularly taken by managers and employees within an organisation, but the outcome of such information systems development projects may be a failed information system (IS). The problem is further compounded by the lack of learning about such failures, and by unsuccessful or negligible efforts to avoid such mistakes in the future (Lyytinen and Robey, 1999). The contribution to knowledge of this thesis is the development of a framework to incorporate a learning approach within information system application (ISA) projects. This thesis puts forward the need for an embedded learning approach and examines its importance for organisations. It is argued that more attention needs to be placed on generating learning, because many individuals within organisations focus mainly on their operations and less on other processes. Three areas of theory are argued to relate to these issues: how IS can currently be designed and implemented; what role the area of the learning organization can play in helping to promote and embed a learning approach into an ISD methodology; and, finally, what theories of learning can be applied to these two bodies of literature. From addressing such issues, the main question of this thesis is how a learning approach can be incorporated into soft methodologies for the design and implementation of information systems applications. By examining a number of soft methodologies and arguing for the expansion of Soft Systems Methodology (SSM), labelled here Soft Systems Methodology eXpanded for Learning (SSML), a manufacturing organisation is used to test the framework in practice. The first cycle of action research investigated how SSML worked in practice.
The second cycle of action research, while not using a formal framework, investigated how these participants implemented and managed the technology. Reflecting on the technology management literature, a technology management process framework (TMPF) is identified and adapted to try to further embed the learning individuals have obtained from the SSML framework. A discussion of how the two frameworks can be joined together and used in practice is undertaken; the combined framework is labelled Soft Systems Methodology eXpanded for Learning and incorporating Technology Management (SSMLTM). A second case, involving a National Health Service (NHS) organisation, is used to test the developed SSMLTM framework. This second case identifies learning points that support, or pose problems for, the SSMLTM framework, allowing refinements to be made. This work finishes by, firstly, providing a detailed discussion of the research process adopted and an evaluation of the SSMLTM framework. Secondly, the conclusions address how well a learning approach can be incorporated into a soft methodology for the design and implementation of information system applications (ISAs). Lastly, it is stated how SSMLTM can impact on theory and practice.