  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Mathematically inspired approaches to face recognition in uncontrolled conditions : super resolution and compressive sensing

Al-Hassan, Nadia January 2014 (has links)
Face recognition under uncontrolled conditions using surveillance cameras is becoming essential for establishing the identity of a person at a distance from the camera and for providing safety and security against terrorism, robbery and crime. Recognising faces in low-resolution, degraded images, as opposed to images of high quality and good resolution/size, is therefore considered one of the most challenging tasks in the field and constitutes the focus of this thesis. The work in this thesis is designed to investigate these issues further, with the following as our main aim: “To investigate face identification from a distance and under uncontrolled conditions by primarily addressing the problem of low-resolution images using existing/modified mathematically inspired super resolution schemes that are based on the emerging new paradigm of compressive sensing and non-adaptive dictionaries based super resolution.” We shall firstly investigate and develop compressive sensing (CS) based sparse representation of a sample image to reconstruct a high-resolution image for face recognition, taking different approaches to constructing CS-compliant dictionaries, such as the Gaussian Random Matrix and the Toeplitz Circular Random Matrix. In particular, our focus is on constructing CS non-adaptive dictionaries (independent of face image information), in contrast with existing image-learnt dictionaries, that nonetheless satisfy some form of the Restricted Isometry Property (RIP), which is sufficient to comply with the CS theorem regarding the recovery of sparsely represented images. We shall demonstrate that these CS dictionary techniques for resolution enhancement enable scalable face recognition schemes under uncontrolled conditions and at a distance.
Secondly, we shall compare the strength of the sufficient CS property across the various types of dictionaries and demonstrate that the image-learnt dictionary falls far short of satisfying the RIP for compressive sensing. Thirdly, we propose dictionaries based on the high-frequency coefficients of the training set and investigate the impact of using dictionaries on the space of feature vectors of the low-resolution image for face recognition when applied in the wavelet domain. Finally, we test the performance of the developed schemes on CCTV images with an unknown model of degradation, and show that they significantly outperform existing techniques developed for such a challenging task. However, the performance is still not comparable to what can be achieved in a controlled environment, and hence we identify the remaining challenges to be investigated in future work.
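The non-adaptive dictionary idea above can be illustrated with a minimal sketch (not the thesis's actual construction; function names are hypothetical): a Gaussian random matrix with unit-norm columns is a standard construction known to satisfy the RIP with high probability, and its mutual coherence is a cheap, checkable proxy for that property.

```python
import numpy as np

def gaussian_dictionary(m, n, seed=0):
    """Build an m x n Gaussian random sensing matrix with unit-norm columns.

    Such matrices satisfy the Restricted Isometry Property (RIP) with high
    probability when m is large enough relative to the sparsity level,
    independently of any face image content (i.e. non-adaptive).
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((m, n))
    return D / np.linalg.norm(D, axis=0)  # normalise each column

def mutual_coherence(D):
    """Largest absolute inner product between distinct columns.

    Low coherence is a simple, computable proxy for RIP-style guarantees;
    the RIP itself is NP-hard to verify directly.
    """
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

D = gaussian_dictionary(64, 256)
mu = mutual_coherence(D)
```

An image-learnt dictionary would be checked the same way, and (as the thesis argues) typically exhibits much higher coherence between its columns.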
162

An investigation of multi-objective hyper-heuristics for multi-objective optimisation

Maashi, Mashael January 2014 (has links)
In this thesis, we investigate and develop a number of online-learning, choice function based selection hyper-heuristic methodologies for solving unconstrained multi-objective optimisation problems. For the first time, we introduce an online-learning, choice function based selection hyper-heuristic framework for multi-objective optimisation. Our multi-objective hyper-heuristic controls and combines the strengths of three well-known multi-objective evolutionary algorithms (NSGA-II, SPEA2 and MOGA), which are utilised as the low-level heuristics. A choice function selection heuristic acts as the high-level strategy, adaptively ranking the performance of the low-level heuristics according to feedback received during the search process and deciding which one to call at each decision point. Four performance measurements are integrated into a ranking scheme which acts as a feedback learning mechanism, providing knowledge of the problem domain to the high-level strategy. To the best of our knowledge, this thesis is also the first to investigate the influence of the move acceptance component of selection hyper-heuristics for multi-objective optimisation. Three multi-objective choice function based hyper-heuristics are developed, each combined with a different move acceptance strategy: All-Moves as a deterministic move acceptance criterion, and the Great Deluge Algorithm (GDA) and Late Acceptance (LA) as non-deterministic move acceptance functions. GDA and LA require a change in the value of a single objective at each step, and so a well-known hypervolume metric, referred to as the D metric, is proposed to make them applicable to multi-objective optimisation problems. The D metric is used as a way of comparing two non-dominated sets with respect to the objective space. The performance of the proposed multi-objective choice function based selection hyper-heuristics is evaluated on the Walking Fish Group (WFG) test suite, a common benchmark for multi-objective optimisation.
Additionally, the proposed approaches are applied to the vehicle crashworthiness design problem, in order to test their effectiveness on a real-world multi-objective problem. The results on both benchmark test problems demonstrate the capability and potential of the multi-objective hyper-heuristic approaches in solving continuous multi-objective optimisation problems. The multi-objective choice function Great Deluge hyper-heuristic (HHMO_CF_GDA) turns out to be the best choice for solving these types of problems.
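The high-level strategy described above can be sketched as follows. This is a deliberately simplified stand-in, not the thesis's actual four-measurement choice function: it scores each low-level heuristic by a decayed memory of recent improvement (intensification) plus the time since it was last invoked (diversification), and picks the highest scorer at each decision point.

```python
class ChoiceFunction:
    """Simplified choice-function high-level strategy (illustrative only).

    Trades off intensification (recent performance) against
    diversification (elapsed time since a heuristic was last called).
    """
    def __init__(self, heuristics, alpha=1.0):
        self.perf = {h: 0.0 for h in heuristics}  # decayed recent improvement
        self.last = {h: 0 for h in heuristics}    # step of last invocation
        self.step = 0
        self.alpha = alpha

    def select(self):
        """Pick the low-level heuristic with the highest combined score."""
        self.step += 1
        scores = {h: self.alpha * self.perf[h] + (self.step - self.last[h])
                  for h in self.perf}
        return max(scores, key=scores.get)

    def feedback(self, h, improvement):
        """Reward h for the improvement it produced (e.g. a hypervolume gain)."""
        self.perf[h] = 0.5 * self.perf[h] + improvement
        self.last[h] = self.step

cf = ChoiceFunction(["NSGAII", "SPEA2", "MOGA"])
h = cf.select()
cf.feedback(h, improvement=2.0)
```

In the thesis framework, the improvement signal for each call would come from multi-objective quality indicators (such as the D metric) rather than a single scalar, but the selection logic follows the same ranking-by-feedback pattern.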
163

Identity management policy and unlinkability : a comparative case study of the US and Germany

Rosner, Gilad L. January 2014 (has links)
This study compares the privacy policies of Germany and the US in the field of identity management. It analyses the emergence of unlinkability within the two countries’ electronic citizen identity initiatives. The study used qualitative research methods, including semi-structured interviews and document analysis, to analyse the policy-making processes surrounding the issue of unlinkability. The study found that unlinkability is emerging in different ways in each country. Germany’s data protection and privacy regimes are more coherent than those of the US, and unlinkability there was an incremental policy change; US unlinkability policies represent a more significant departure from its data protection and policy regimes. New institutionalism is used to help explain the similarities and differences between the two countries’ policies. Scholars have long called for the use of privacy-enhancing technologies (PETs) in policy-making, and unlinkability falls into this category. By employing PETs in this way, German and US identity management policies are in the vanguard of their respective privacy regimes. Through these policies, the US comes closer to German and European data protection policies, and does so non-legislatively. The digital citizen identities appearing in both countries must be construed as commercial products as much as official identities; a lack of attention to the commercial properties of these identities frustrates policy goals. As national governments embark on further identity management initiatives, commercial and design imperatives, such as value to the citizen and usability, must be considered for policy to be successful.
164

Semantic methods for functional hybrid modelling

Capper, John January 2014 (has links)
Equation-based modelling languages have become a vital tool in many areas of science and engineering. Functional Hybrid Modelling (FHM) is an approach to equation-based modelling that allows the behaviour of a physical system to be expressed as a modular hierarchy of undirected equations. FHM supports a variety of advanced language features — such as higher-order models and variable system structure — that set it apart from the majority of other modelling languages. However, the inception of these new features has not been accompanied by the semantic tools required to effectively use and understand them. Specifically, there is a lack of static safety assurances for dynamic models, and the semantics of the aforementioned language features are poorly understood. Static safety guarantees are highly desirable as they allow problems that may cause an equation system to become unsolvable to be detected early, during compilation. As a result, the use of static analysis techniques to enforce structural invariants (e.g. that there are the same number of equations as unknowns) is now common in mainstream equation-based languages like Modelica. Unfortunately, the techniques employed by these languages are somewhat limited, both in their capacity to deal with advanced language features and in the spectrum of invariants they are able to enforce. Formalising the semantics of equation-based languages is also important. Semantics allow us to better understand what a program is doing during execution, and to prove that this behaviour meets our expectations. They also allow different implementations of a language to agree with one another, and can be used to demonstrate the correctness of a compiler or interpreter. However, current attempts to formalise such semantics typically fall short of describing advanced features, are not compositional, and/or fail to show correctness. This thesis provides two major contributions to equation-based languages.
Firstly, we develop a refined type system for FHM capable of capturing a larger number of structural anomalies than is currently possible with existing methods. Secondly, we construct a compositional semantics for the discrete aspects of FHM, and prove a number of key correctness properties.
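The structural invariant mentioned above — that an equation system declares as many equations as unknowns — can be sketched in a few lines. This is a toy illustration of the kind of check Modelica-style languages perform, not FHM's refined type system; the representation of equations as variable sets is an assumption made here for brevity.

```python
def is_balanced(equations):
    """Check that an undirected equation system has exactly as many
    equations as unknowns.

    `equations` is a list of sets, each naming the variables appearing
    in one equation. Balance is a necessary (not sufficient) condition
    for the system to be solvable, so it can be enforced statically.
    """
    unknowns = set().union(*equations) if equations else set()
    return len(equations) == len(unknowns)

# Balanced: two equations over the two unknowns {x, y}
balanced = is_balanced([{"x", "y"}, {"y"}])
# Unbalanced: one equation but two unknowns
unbalanced = is_balanced([{"x", "y"}])
```

A refined type system such as the one developed in the thesis can reject a larger class of anomalies (e.g. balanced systems that are still structurally singular), which a simple count like this cannot detect.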
165

Truth maintenance in knowledge-based systems

Nguyen, Hai Hoang January 2014 (has links)
Truth Maintenance Systems (TMS) have been applied in a wide range of domains, from diagnosing electric circuits to belief revision in agent systems. There has also been work on using the TMS in modern knowledge-based systems such as intelligent agents and ontologies. This thesis investigates the applications of TMSs in such systems. For intelligent agents, we use a “light-weight” TMS to support query caching in agent programs. The TMS keeps track of the dependencies between a query and the facts used to derive it, so that when the agent updates its database, only affected queries are invalidated and removed from the cache. The TMS employed here is “light-weight” in that it does not maintain all intermediate reasoning results; it is therefore able to reduce memory consumption and to improve performance in dynamic settings such as multi-agent systems. For ontologies, this work extends the Assumption-based Truth Maintenance System (ATMS) to tackle the problem of axiom pinpointing and debugging in ontology-based systems at different levels of expressivity. Starting with finding all errors in auto-generated ontology mappings using a “classic” ATMS [23], we extend the ATMS to solve the axiom pinpointing problem in Description Logics-based ontologies. We also apply this approach to the axiom pinpointing problem in a more expressive upper ontology, SUMO, whose underlying logic is undecidable.
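The dependency-tracking idea behind the light-weight TMS cache can be sketched as below. This is a minimal illustration of the invalidation mechanism only (query strings and fact names are hypothetical); the actual system records dependencies as a by-product of the agent's query evaluation.

```python
class QueryCache:
    """Truth-maintenance-style query cache: each cached answer records the
    facts it depends on, so a belief update invalidates only the affected
    queries rather than flushing the whole cache."""

    def __init__(self):
        self.answers = {}  # query -> cached answer
        self.deps = {}     # fact  -> set of queries derived from it

    def cache(self, query, answer, facts):
        """Store an answer together with its supporting facts."""
        self.answers[query] = answer
        for f in facts:
            self.deps.setdefault(f, set()).add(query)

    def update_fact(self, fact):
        """A fact changed: drop exactly the queries that depended on it."""
        for q in self.deps.pop(fact, set()):
            self.answers.pop(q, None)

cache = QueryCache()
cache.cache("at(agent, room1)?", True, facts=["at(agent, room1)"])
cache.cache("holding(box)?", False, facts=["holding(box)"])
cache.update_fact("at(agent, room1)")  # only the first query is evicted
```

Because no intermediate reasoning results are stored — only the leaf facts per query — memory use stays proportional to the cache itself, which is the "light-weight" property the abstract refers to.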
166

Cloud broker based trust assessment of cloud service providers

Pawar, Pramod S. January 2015 (has links)
Cloud computing is emerging as the future Internet technology due to advantages such as sharing of IT resources, unlimited scalability and flexibility, and a high level of automation. Alongside this rapid growth, cloud computing also brings concerns about the security, trust and privacy of the applications and data hosted in the cloud environment. With a large number of cloud service providers available, determining which providers can be trusted for efficient operation of a service deployed in the provider’s environment is a key requirement for service consumers. In this thesis, we provide an approach to assessing the trustworthiness of cloud service providers. We propose a trust model that considers real-time cloud transactions to model the trustworthiness of cloud service providers. The trust model uses a representation of opinion that includes an explicit uncertainty component. The trustworthiness of a cloud service provider is modelled using opinions obtained from three different computations, namely (i) compliance with SLA (Service Level Agreement) parameters, (ii) service provider satisfaction ratings and (iii) service provider behaviour. In addition, the trust model is extended to encompass the essential cloud characteristics, credibility for weighting feedback, and filtering mechanisms for filtering out dubious feedback providers. The credibility function and the early filtering mechanisms in the extended trust model are shown to reduce the impact of malicious feedback providers.
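The uncertainty-aware opinion representation mentioned above resembles the standard evidence-to-opinion mapping of subjective logic, sketched below. This is an assumption-laden illustration, not the thesis's exact model: transaction counts, the non-informative prior weight `W`, and the base rate are all generic subjective-logic ingredients.

```python
def opinion(positive, negative, W=2.0):
    """Map transaction evidence to a (belief, disbelief, uncertainty)
    opinion, subjective-logic style: uncertainty shrinks as evidence
    accumulates, so a provider with little history stays uncertain."""
    total = positive + negative + W
    return (positive / total, negative / total, W / total)

def expected_trust(op, base_rate=0.5):
    """Expected trustworthiness: belief plus the base rate's share of the
    remaining uncertainty."""
    b, d, u = op
    return b + base_rate * u

# e.g. 8 SLA-compliant transactions, 2 SLA violations
op = opinion(8, 2)
```

Opinions derived from SLA compliance, satisfaction ratings and provider behaviour can then be fused into a single trust score, with feedback providers' opinions discounted by a credibility weight before fusion — the role of the credibility function in the extended model.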
167

Open component-oriented multimedia middleware for adaptive distributed applications

Fitzpatrick, Tom January 2000 (has links)
No description available.
168

Identifying and comparing opportunistic and social networks

Ali, Mona January 2013 (has links)
Recent developments in computation and communication technologies are making big changes to the way in which people communicate with each other. Online social networks, wireless technologies and smart-phones are very common and enable communication to be maintained while people are mobile. These types of communication are closely tied to the humans who carry the devices. In this research, we detect, analyse and compare opportunistic networks with human social networks. Opportunistic networking platforms are challenging to design and implement, and have not yet gone beyond the research and development stage. We therefore developed an indoor mobility tracking system to track participant movement inside buildings and to record the physical interaction between participants using Bluetooth technology. This system abstracts opportunistic networks from the experimental data in two different ways: mobility (device-to-building) and co-located (device-to-device) interactions. The mobility detection system has been studied using a volunteer group of students in the School of Computer Science and Informatics. The same group has also been studied, using an electronic survey, to understand its social network structure and characteristics. Different techniques have been used to derive the individuals’ networks from the survey and the mobility movements, and to compare the two. Using precision and recall, we find that 60-80% of the participants’ social network is embedded in the opportunistic network, but only a small proportion (10-20%) of the opportunistic network is embedded in the social network. This shows the presence of many weak links in the opportunistic network, which means that the opportunistic network’s connectivity requires only a very small number of key players to disseminate information throughout the network. We also examine both networks from the perspective of information dissemination.
We find that device-to-device detection creates many more weak links for disseminating information than server-based (device-to-building) detection; as a result, information quickly floods throughout the co-located network.
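The precision-and-recall comparison of the two networks can be sketched as follows, treating each network as a set of undirected edges. The toy edge lists are invented for illustration and chosen only so that recall is high while precision is low, mirroring the asymmetry the study reports.

```python
def edge_overlap(social, opportunistic):
    """Precision and recall between two undirected edge sets.

    recall: fraction of social ties also observed opportunistically
            (how much of the social network is embedded in the other);
    precision: fraction of opportunistic contacts that are social ties.
    """
    norm = lambda edges: {frozenset(e) for e in edges}  # ignore direction
    s, o = norm(social), norm(opportunistic)
    inter = s & o
    return len(inter) / len(o), len(inter) / len(s)  # (precision, recall)

# Hypothetical data: most social ties appear as co-location contacts,
# but co-location also produces many incidental (weak) links.
social = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d"), ("b", "d")]
opport = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d"), ("a", "c"),
          ("d", "e"), ("e", "f"), ("b", "e"), ("c", "e"), ("a", "e")]
precision, recall = edge_overlap(social, opport)
```

High recall with low precision is exactly the pattern described: the social network is largely embedded in the opportunistic one, while the many extra opportunistic links are the weak ties that accelerate information dissemination.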
169

A computer-based holistic approach to managing progress of distributed agile teams

Alyahya, Sultan January 2013 (has links)
One of the co-ordination difficulties of remote agile teamwork is managing the progress of development. Several technical factors affect agile development progress; hence, their impact on progress needs to be explicitly identified and co-ordinated. These factors include source code versioning, unit testing (UT), acceptance testing (AT), continuous integration (CI) and releasing. They play a role in determining whether the software produced for a user story (i.e. a feature or use case) is ‘working software’ (i.e. the user story is complete) or not, and one of the principles introduced by the Agile Manifesto is that working software is the primary measure of progress. In distributed agile teams, informal methods, such as video-conference meetings, can be used to raise awareness of how the technical factors affect development progress. However, with infrequent communication, it is difficult to understand how the work of one team member at one site influences the work progress of another team member at a different site. Formal methods, such as agile project management tools, are widely used to support managing the progress of distributed agile projects, but these tools rely on team members’ perceptions to understand change in progress; identifying and co-ordinating the impact of technical factors on development progress is not considered. This thesis supports the effective management of progress by providing a computer-based holistic approach to managing development progress that aims to explicitly identify and co-ordinate the effects of the various technical factors on progress. The holistic approach requires analysis of how the technical factors cause change in progress; for each progress change event, the co-ordination support necessary to manage the event has been explicitly identified. The holistic approach also requires the design of computer-based mechanisms that take into consideration the impact of technical factors on progress.
A progress tracking system has been designed that keeps track of the impact of the technical factors by placing them under the control of the tracking system. This has been achieved by integrating the versioning functionality into the progress tracking system and linking the UT, AT and CI tools with it. The approach has been evaluated through practical scenarios and validated through a research prototype. The results show that the holistic approach is achievable and helps raise the awareness of distributed agile teams regarding changes in progress as soon as they occur, overcoming the limitations of both the informal and the formal methods. Team members no longer need to spend time determining how their change will impact the work of other team members in order to notify the affected members; they are provided with a system that achieves this as they carry out their technical activities. In addition, they need not rely on static information about progress registered in a progress tracking system, but are updated continuously with relevant information about progress changes affecting their work.
170

Hybrid geo-spatial query processing on the semantic web

Younis, Eman January 2013 (has links)
Semantic Web data sources such as DBpedia are a rich resource of structured representations of knowledge about geographical features, and provide potential data for computing the results of question answering system queries that require geo-spatial computations. Retrieving from these resources all content relevant to a particular spatial query of, for example, containment, proximity or crossing is not always straightforward, as the geometry is usually confined to point representations and there is considerable inconsistency in the way geographical features are referenced to locations. In DBpedia, some geographical feature instances have point coordinates, others have qualitative properties that provide explicit or implicit spatial relationships between named places, and some have neither. This thesis demonstrates that structured geo-spatial query, a form of question answering, on DBpedia can be performed with a hybrid query method that exploits quantitative and qualitative spatial properties in combination with a high-quality reference geo-dataset. This combination can support a full range of geo-spatial query operators, such as proximity, containment and crossing, as well as vague directional queries such as ‘Find airports north of London’. A quantitative model based on the spatial directional relations in DBpedia has been used to assist in query processing. Evaluation experiments confirm the benefits of combining qualitative and quantitative methods for containment queries, and of employing high-quality spatial data, as opposed to DBpedia points, as reference objects for proximity queries, particularly for linear features. The high-quality geo-data also enabled answering questions impossible to answer with Semantic Web resources alone, such as finding geographic features within some distance of a region boundary.
The contributions were validated by a prototype geo-spatial query system that combines qualitative and quantitative processing and includes ranking of answers for directional queries, based on models derived from DBpedia-contributed data.
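A vague directional query like ‘Find airports north of London’ can be given a simple quantitative reading as below. This sketch is not the thesis's DBpedia-derived directional model: it uses a crude bearing cone over point coordinates (coordinates are approximate, real-world values) merely to illustrate how point geometry supports directional filtering.

```python
import math

def bearing(from_pt, to_pt):
    """Approximate compass bearing in degrees from one (lat, lon) point to
    another, using an equirectangular approximation that is adequate for
    the short distances in this illustration."""
    lat1, lon1 = from_pt
    lat2, lon2 = to_pt
    dx = (lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = lat2 - lat1
    return math.degrees(math.atan2(dx, dy)) % 360

def is_north_of(feature, reference, tolerance=45.0):
    """Accept `feature` as 'north of' `reference` when its bearing lies
    within +/- tolerance degrees of due north (0 degrees)."""
    b = bearing(reference, feature)
    return b <= tolerance or b >= 360 - tolerance

london = (51.5074, -0.1278)
luton = (51.8747, -0.3683)    # Luton Airport, roughly north of London
gatwick = (51.1537, -0.1821)  # Gatwick Airport, south of London
```

In the thesis, the acceptance model for such vague relations is instead derived empirically from directional relations contributed to DBpedia, and answers are ranked rather than filtered by a fixed cone.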
