571

Controlling light with photonic metamaterials

Zhang, Jianfa January 2013 (has links)
This thesis reports on my research efforts towards controlling light with photonic metamaterials for desired functionalities. I have demonstrated a new family of continuously metallic metamaterials: 'intaglio' and 'bas-relief' metamaterials. They are formed of indented or raised sub-wavelength patterns with a depth or height of the order of 100 nm and offer a robust and flexible paradigm for engineering the spectral response of metals in the visible to near-infrared (vis-NIR) domain. Controlling the colour of metals by intaglio/bas-relief metamaterials has been realized. I have also demonstrated the concept of 'dielectric loaded' metamaterials, where nanostructured dielectrics on unstructured metal surfaces work as optical frequency selective surfaces. I have demonstrated for the first time controlling light with light, without nonlinearity, using a plasmonic metamaterial. I have experimentally shown that the interference of two coherent beams can eliminate the plasmonic Joule losses of light energy in a metamaterial with thickness less than one tenth of the wavelength of light or, in contrast, can lead to almost total absorption of light. The phenomenon provides functionality that can be implemented freely across a broad visible to infrared range by varying the structural design. I have demonstrated for the first time that a strong light-driven force can be generated when a plasmonic metamaterial is illuminated in close proximity to a dielectric or metal surface. This near-field force can exceed radiation pressure to provide an optically controlled adhesion mechanism mimicking the gecko toe. I have demonstrated for the first time resonant optical forces which are tens of times stronger than radiation pressure within planar dielectric metamaterials, and I have introduced the concept of optomechanical metamaterials. An optomechanical metamaterial consisting of an array of dielectric meta-molecules supported on free-standing elastic beams has been designed. It presents a giant nonlinear optical response driven by resonant optomechanical forces and exhibits optical bistability and nonlinear asymmetric transmission at intensity levels of only a few hundred μW/μm². Furthermore, I have experimentally demonstrated optical magnetic resonances in all-dielectric metamaterials. I have demonstrated for the first time non-volatile, bi-directional, all-optical switching in a phase-change metamaterial. By functionalising a photonic metamaterial with a phase-change chalcogenide glass, phase transitions across a 2000 μm² area are initiated uniformly by a single laser pulse. Reversible switching in both the near- and mid-infrared spectral ranges, with a shift of optical resonance position of up to 500 nm, has been achieved at optical excitation levels of 0.25 mW/μm², leading to a reflection contrast ratio exceeding 4:1 and a transmission contrast of around 3.5:1.
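The coherent control of absorption described above follows from a standard standing-wave argument, sketched below for two counter-propagating coherent beams and a film much thinner than the wavelength placed at z = 0 (this is the generic interference picture, not the specific metamaterial design):

    % relative phase \phi between the two coherent beams
    E(z) = E_0 e^{ikz} + E_0 e^{i\phi} e^{-ikz}
         = 2 E_0 e^{i\phi/2} \cos\!\left(kz - \tfrac{\phi}{2}\right)
    % Joule dissipation in a film of thickness d \ll \lambda at z = 0
    P_{\mathrm{abs}} \propto |E(0)|^{2} = 4 |E_0|^{2} \cos^{2}\!\tfrac{\phi}{2}

Tuning the relative phase therefore moves the thin absorber between a node of the standing wave (dissipation suppressed) and an antinode (dissipation maximized), which is the handle used to switch between negligible and near-total absorption.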
572

eCert : a secure and user-centric eDocument transmission protocol : solving the digital signing practical issues

Chen-Wilson, Lisha January 2013 (has links)
Whilst our paper-based records and documents are gradually being digitized, security concerns about how such electronic data is stored, transmitted, and accessed have increased rapidly. Although the traditional digital signing method can be used to provide integrity, authentication, and non-repudiation for signed eDocuments, this method does not address all requirements, such as fine-grained access control and content status validation. What is more, information owners have increasing demands regarding their rights of ownership. Therefore, a secure user-centric eDocument management system is essential. Through a case study of a secure and user-centric electronic qualification certificate (eCertificate) system, this dissertation explores the issues and the technology gaps; it identifies existing services that can be re-used and the services that require further development; it proposes a new signing method and the corresponding system framework which solves the problems identified. In addition to tests carried out on the newly designed eCertificate system under the selected ePortfolio environments, the abstract protocol (named the eCert protocol) has also been applied and evaluated in two other eDocument transmission scenarios: Mobile eID and eHealthcare patient data. Preliminary results indicate that the recommendation from this research meets the design requirements, and could form the foundation of future eDocument transmission research and development.
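For orientation, the sketch below shows the conventional digital signing baseline that the eCert protocol builds on (integrity and non-repudiation only); it is not the eCert protocol itself, and the document content and variable names are hypothetical.

    # Illustrative only: conventional signing/verification with Ed25519 using
    # the "cryptography" package; eCert adds access control and status
    # validation on top of this kind of baseline.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    issuer_key = Ed25519PrivateKey.generate()      # e.g. the awarding institution
    ecertificate = b"student=Jane Doe;award=MSc;year=2013"

    signature = issuer_key.sign(ecertificate)      # integrity + non-repudiation

    verifier = issuer_key.public_key()
    try:
        verifier.verify(signature, ecertificate)   # raises if content was altered
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")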
573

Trustworthiness of Web information evaluation framework

Pattanaphanchai, Jarutas January 2014 (has links)
Assessing the quality of information on the Web is a challenging issue for at least two reasons. Firstly, there is little control over publishing quality. Secondly, when assessing the trustworthiness of Web pages, users tend to base their judgements upon subjective criteria such as the visual presentation of the website, rather than rigorous criteria such as the author's qualifications or the source's review process. As a result, Web users tend to make incorrect assessments of the trustworthiness of the Web information they are consuming, and they are uncertain of their ability to decide whether to trust information they are not familiar with. This research addresses this problem by collecting and presenting metadata based on trustworthiness criteria drawn from useful practice, in order to support users' evaluation of the trustworthiness of Web information during their information-seeking processes. In this thesis, we propose the Trustworthiness of Web Information Evaluation (TWINE) application framework, and present a prototype tool that employs this framework for a case study of academic publications. The framework gathers and provides useful information that can support users' judgments of the trustworthiness of Web information. The framework consists of two layers: the presentation layer and the logic layer. The presentation layer is composed of the input and output modules, which are the modules that interface with the users. The logic layer consists of the trustworthiness criteria and metadata creation modules. The trustworthiness criteria module is composed of four basic criteria, namely authority, accuracy, recency and relevance. Each criterion comprises items, called indicators, that indicate the trustworthiness of Web information with respect to that criterion. The metadata creation module gathers and integrates metadata based on the proposed criteria, which is then used in the output module to generate supportive information for users. The framework was evaluated through the tool in an empirical study. The study set a scenario in which new postgraduate students searched for publications to use in a report, using the developed tool. The students were then asked to complete a questionnaire, which was analysed using quantitative and qualitative methods. The results from the questionnaire show that the confidence level of users when evaluating the trustworthiness of Web information does increase if they obtain useful supportive information about that Web information: the mean confidence level of their judgments increases by 12.51 percentage points. Additionally, the number of selected pieces of Web information used in their work increases when supportive information is provided, although on average by less than one percentage point. Participating users were satisfied with the supportive information, insofar as it helps them to evaluate the trustworthiness of Web information, with a mean satisfaction level of 3.69 out of 5 points. Overall, the supportive information provided by the framework can help users to adequately evaluate the trustworthiness of Web information.
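A hypothetical sketch of how TWINE-style criteria and indicators could be organised and summarised for the user is given below; the class layout, indicator names and output format are illustrative assumptions, not taken from the thesis.

    # Toy organisation of criteria (authority, accuracy, recency, relevance)
    # and their indicators, plus a helper that builds supportive information.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str
        present: bool            # was this piece of metadata found?

    @dataclass
    class Criterion:
        name: str                # e.g. authority, accuracy, recency, relevance
        indicators: list

    def supportive_summary(criteria):
        """Collect the indicators found for each criterion to show to the user."""
        return {
            c.name: [i.name for i in c.indicators if i.present]
            for c in criteria
        }

    authority = Criterion("authority", [
        Indicator("author affiliation found", True),
        Indicator("publisher identified", True),
    ])
    recency = Criterion("recency", [Indicator("publication date found", False)])
    print(supportive_summary([authority, recency]))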
574

Inferring and exploiting compact models of evolutionary problem structure

Cox, Chris January 2015 (has links)
In both natural and artificial evolution, populations search a space of possibilities using the mechanisms of natural selection and random variation. However, not all variations are equally likely. The directions which variation can take are themselves a key part of the evolutionary machinery, determining the ability of evolution to create diversity whilst obeying the constraints of phenotype space. To be effective, they must reflect the structure of the selective environment in which they exist. Evolutionary algorithms are often designed with a priori assumptions about this structure, but it can also be learned on the fly using “model-building” algorithms. However, there are many open questions: what information do populations contain about their selective environment? How can it be extracted from a population and represented? And how can it be exploited to facilitate more effective evolutionary search? In this thesis, a novel type of lossless compact model called Schema Grammar is introduced. Schema Grammar overcomes the current limitations of compact models by enabling intrinsically non-sequential data to be compressed. It offers a number of advantages over existing model-building approaches. In particular, the model is able to infer a hierarchy of genetic schemata that is consistent with the compositional structure of the selective environment, and has a strong predictive quality with respect to fitness. By using this structure to facilitate variation at many different levels of scale, instances of well-known test problems are shown to be solvable in low-order polynomial time, matching the performance of state-of-the-art methods. The information-theoretic qualities of Schema Grammar also enable evolutionary information to be quantified in novel ways. Building on recent advances in information and complexity theory, the model is used to quantify mutual information between populations and their selective environment, including environments containing complex epistatic structure. It is also used to predict the fitness of individuals by measuring their information distance to a fit population.
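The idea of an information distance to a fit population can be illustrated with a generic compression-based proxy (normalized compression distance). The thesis uses its own Schema Grammar model rather than an off-the-shelf compressor, so the zlib-based sketch below is only a conceptual stand-in with toy genotype strings.

    # Normalized compression distance: shared structure makes the concatenation
    # compress almost as well as the larger part alone, so NCD is small.
    import zlib

    def c(x: bytes) -> int:
        return len(zlib.compress(x, 9))

    def ncd(x: bytes, y: bytes) -> float:
        cx, cy, cxy = c(x), c(y), c(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    fit_population = b"101101101101101101"      # toy genotypes
    candidate      = b"101101101111101101"
    print(ncd(fit_population, candidate))       # smaller = more shared structure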
575

Techniques and validation for protection of embedded processors

Kufel, Jedrzej January 2015 (has links)
Advances in technology scaling and miniaturization of on-chip structures have caused an increasing complexity of modern devices. Due to immense time-to-market pressures, the reusability of intellectual property (IP) sub-systems has become a necessity. With the resulting high risks involved with such a methodology, securing IP has become a major concern. Despite a number of proposed IP protection (IPP) techniques being available, securing an IP at the register transfer level (RTL) is not a trivial task, with many of the techniques presenting a number of shortfalls or design limitations. The most prominent and the least invasive solution is the integration of a digital watermark into an existing IP. In this thesis new techniques are proposed to address the implementation difficulties in constrained embedded IP processor cores. This thesis establishes the parameters of sequences used for digital watermarking and the tradeoffs between the hardware implementation cost, detection performance and robustness against IP tampering. A new parametric approach is proposed which can be implemented with any watermarking sequence. MATLAB simulations and experimental results of two fabricated silicon ASICs with a watermark circuit embedded in an ARM® Cortex®-M0 IP core and an ARM® Cortex®-A5 IP core demonstrate the tradeoffs between various sequences based on the final design application. The thesis further focuses on minimization of hardware costs of a watermark circuit implementation. A new clock-modulation based technique is proposed and reuses the existing circuit of an IP core to generate a watermark signature. Power estimation and experimental results demonstrate a significant area and power overhead reduction, when compared with the existing techniques. To further minimize the costs of a watermark implementation, a new technique is proposed which allows a non-deterministic and sporadic generation of a watermark signature. The watermark was embedded in an ARM® Cortex®-A5 IP core and was fabricated in silicon. Experimental silicon results have validated the proposed technique and have demonstrated the negligible hardware implementation costs of an embedded watermark.
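The detection-performance side of such a watermark can be pictured as correlating a known pseudo-random signature against a noisy side-channel measurement (for example a power trace). The sketch below is a generic illustration under assumed sequence length, mark strength and noise level, not the parameters or detector used in the thesis.

    # Correlation-based watermark detection: a known +/-1 signature is buried in
    # a noisy trace and recovered by correlating against it.
    import numpy as np

    rng = np.random.default_rng(0)
    signature = rng.choice([-1.0, 1.0], size=512)          # embedded watermark
    trace = 0.2 * signature + rng.normal(0.0, 1.0, 512)    # weak mark in noise

    score = float(np.dot(trace, signature)) / len(signature)
    threshold = 3.0 / np.sqrt(len(signature))              # ~3-sigma bound for pure noise
    print("watermark detected" if score > threshold else "not detected")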
576

Smartphone-powered citizen science for bioacoustic monitoring

Zilli, Davide January 2015 (has links)
Citizen science is the involvement of amateur scientists in research for the purpose of data collection and analysis. This practice, well known to different research domains, has recently received renewed attention through the introduction of new and easy means of communication, namely the internet and the advent of powerful “smart” mobile phones, which facilitate the interaction between scientists and citizens. This is appealing to the field of biodiversity monitoring, where traditional manual surveying methods are slow and time-consuming and rely on the expertise of the surveyor. This thesis investigates a participatory bioacoustic approach that engages citizens and their smartphones to map the presence of animal species. In particular, the focus is placed on the detection of the New Forest cicada, a critically endangered insect that emits a high-pitched call, difficult to hear for humans but easily detected by their mobile phones. To this end, a novel real-time acoustic cicada detector algorithm is proposed, which efficiently extracts three frequency bands through a Goertzel filter, and uses them as features for a hidden Markov model-based classifier. This algorithm has permitted the development of a cross-platform mobile app that enables citizen scientists to submit reports of the presence of the cicada. The effectiveness of this approach was confirmed for both the detection algorithm, which achieves an F1 score of 0.82 for the recognition of three acoustically similar insects in the New Forest; and for the mobile system, which was used to submit over 11,000 reports in the first two seasons of deployment, making it one of the largest citizen science projects of its kind. However, the algorithm, though very efficient and easily tuned to different microphones, does not scale effectively to many-species classification. Therefore, an alternative method is also proposed for broader insect recognition, which exploits the strong frequency features and the repeating phrases that often occur in insect songs. To express these, it extracts a set of modulation coefficients from the power spectrum of the call, and represents them compactly by sampling them in the log-frequency space, avoiding any bias towards the scale of the phrase. The algorithm reaches an F1 score of 0.72 for 28 species of UK Orthoptera over a small training set, and an F1 score of 0.92 for the three insects recorded in the New Forest, though with higher computational cost compared to the algorithm tailored to cicada detection. The mobile app, downloaded by over 3,000 users, together with the two algorithms, demonstrate the feasibility of real-time insect recognition on mobile devices and the potential of engaging a large crowd for the monitoring of the natural environment.
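A minimal sketch of the Goertzel-filter feature extraction step is shown below: the energy in a handful of target bands is computed per block and passed on as a feature vector. The band frequencies and sampling rate here are placeholders, not the values tuned for the cicada detector.

    # Goertzel algorithm: cheap single-bin spectral power, repeated per band.
    import math

    def goertzel_power(samples, fs, f_target):
        """Signal power near f_target (Hz) for one block sampled at fs."""
        n = len(samples)
        k = round(n * f_target / fs)
        coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
        s_prev, s_prev2 = 0.0, 0.0
        for x in samples:
            s = x + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

    def features(block, fs, bands=(8000.0, 14000.0, 20000.0)):  # placeholder bands
        """Three band energies used as the feature vector for the HMM classifier."""
        return [goertzel_power(block, fs, f) for f in bands]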
577

System-level design automation and optimisation of network-on-chips in terms of timing and energy

Qi, Ji January 2015 (has links)
As system complexity constantly increases, traditional bus-based architectures are less adaptable to the increasing design demands. Specifically in on-chip digital system designs, Network-on-Chip (NoC) architectures are promising platforms that have distributed multi-core co-operation and inter-communication. Since the design cost and time cycles of NoC systems are growing rapidly with higher integration, system-level Design Automation (DA) techniques are used to abstract models at early design stages for functional validation and performance prediction. Yet precise abstractions and efficient simulations are critical challenges for modern DA techniques to improve the design efficiency. This thesis makes several contributions to address these challenges. We have firstly extended a backbone simulator, NIRGAM, to offer accurate system-level models and performance estimates. A case study of developing a one-to-one transmission system using asynchronous FIFOs as buffers in both the NIRGAM simulator and a synthesised gate-level design is given to validate the model accuracy by comparing their power and timing performance. Then we have made a second contribution to improve DA techniques by proposing a novel method to efficiently emulate non-rectangular NoC topologies in NIRGAM and generate accurate energy and timing performance. Our proposed method uses time-regulated models to emulate virtual non-rectangular topologies based on a regular Mesh. The performance accuracy of virtual topologies is validated by comparing with corresponding real NoC topologies. The third contribution of our research is a novel task-mapping scheme that generates optimal mappings to tile-based NoC networks with accurate performance prediction and increased execution speed. A novel Non-Linear Programming (NLP) based mapping problem has been formulated and solved by a modified Branch and Bound (BB) algorithm. The proposed method predicts the performance of optimised mappings and compares it with NIRGAM simulations for accuracy validation.
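To illustrate the branch-and-bound flavour of the task-mapping problem, the toy sketch below maps four tasks onto a 2x2 mesh while minimising hop-weighted communication cost, pruning branches whose partial cost already exceeds the best complete mapping. The task graph, mesh size and cost function are assumptions for illustration; the thesis formulates a richer NLP model.

    # Toy branch-and-bound task-to-tile mapping for a 2x2 mesh NoC.
    TILES = [(x, y) for x in range(2) for y in range(2)]
    # COMM[i][j]: traffic volume from task i to task j (toy values)
    COMM = [[0, 4, 1, 0], [0, 0, 2, 3], [0, 0, 0, 5], [0, 0, 0, 0]]

    def hops(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])   # XY-routed Manhattan distance

    best = {"cost": float("inf"), "map": None}

    def bnb(task, placed, used, cost):
        if cost >= best["cost"]:
            return                                    # prune: bound exceeded
        if task == len(COMM):
            best["cost"], best["map"] = cost, dict(placed)
            return
        for tile in TILES:
            if tile in used:
                continue
            extra = sum(COMM[t][task] * hops(placed[t], tile) +
                        COMM[task][t] * hops(tile, placed[t]) for t in placed)
            placed[task] = tile
            bnb(task + 1, placed, used | {tile}, cost + extra)
            del placed[task]

    bnb(0, {}, set(), 0)
    print(best)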
578

Model-based 3D gait biometrics

Ariyanto, Gunawan January 2013 (has links)
Gait biometrics has attracted increasing interest in the computer vision and machine learning communities because of its unique advantages for recognition at a distance. However, there have as yet been few gait biometric approaches which use temporal three-dimensional (3D) data. Clearly, 3D gait data conveys more information than 2D gait data, and it is also the natural representation of human gait as perceived by humans. The University of Southampton has created a multi-biometric tunnel using twelve cameras to capture multiple gait images and reconstruct them into 3D volumetric gait data. Some analyses have been done using this 3D dataset, mainly to solve the view-dependent problem using model-free silhouette-based approaches. This thesis explores the potential of model-based methods in an indoor 3D volumetric gait dataset and presents a novel human gait feature extraction algorithm based on marionette and mass-spring principles. We have developed two different model-based approaches to extract human gait kinematics from 3D volumetric gait data. The first approach used a structural model of a human. This model contained four articulated cylinders and four joints with two degrees of rotational freedom at each joint to model the human lower legs. Human gait kinematic trajectories were extracted by fitting the gait model to the gait data. We proposed a simple yet effective model-fitting algorithm using a correlation filter and dynamic programming. To increase the fitting performance, we utilized a genetic algorithm on top of this structural model. The second approach was a novel 3D model-based approach using a marionette-based mass-spring model. To model the articulated human body, we used a stick-figure model which emulates a marionette's motion and joint structure. The stick-figure model had eleven nodes representing the human joints of the head, torso and lower legs. Each node was linked with at least one other node by a spring. The voxel data in the next frame acted as an attractor, generating forces on each node that iteratively warped the model onto the data. This process was repeated for successive frames. Our methods can extract both structural and dynamic gait features. Some of the extracted features were inherently unique to 3D gait data, such as footprint angle and pelvis rotation. Analysis on a database of 46 subjects shows an encouraging correct classification rate of up to 95.1% and suggests that model-based 3D gait analysis can contribute even more in gait biometrics.
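A toy sketch of the mass-spring fitting idea follows: stick-figure nodes are pulled by springs towards their linked neighbours and by an attractor force towards the voxel data of the next frame, and the model is warped iteratively. The 2D node layout, stiffness constants and step size are illustrative assumptions, not the thesis configuration.

    # Iterative spring + attractor update for a tiny stick-figure model.
    import numpy as np

    nodes = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 2D stand-in for joints
    springs = [(0, 1, 1.0), (1, 2, 1.0)]                      # (i, j, rest length)
    voxel_targets = np.array([[0.1, 0.1], [0.2, 1.1], [1.2, 0.9]])  # nearest data points

    k_spring, k_attract, step = 0.5, 0.3, 0.2
    for _ in range(100):                                      # iterative warping
        force = k_attract * (voxel_targets - nodes)           # attractor pull
        for i, j, rest in springs:
            d = nodes[j] - nodes[i]
            stretch = np.linalg.norm(d) - rest
            f = k_spring * stretch * d / (np.linalg.norm(d) + 1e-9)
            force[i] += f                                     # spring keeps limb lengths
            force[j] -= f
        nodes += step * force
    print(nodes)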
579

Colour object recognition using shape-based aspects

Taylor, Richard Ian January 1992 (has links)
No description available.
580

Design, fabrication, and characterization of magnetic nanostructures

Claudio Gonzalez, David January 2008 (has links)
For several years, thin films of ferromagnetic materials with metallic spacer layers showing giant magnetoresistance (GMR) were the technological basis used in the read-heads of hard disk drives. Similarly, tunnelling magnetoresistance (TMR), which is an effect typically larger than giant magnetoresistance, occurs when the metallic spacer layers are substituted by an insulating layer. Read-heads based on tunnelling magnetoresistance have been available to the consumer market for the last couple of years. Furthermore, non-volatile random access memories, also known as magnetic random access memory (MRAM), have also been possible thanks to the use of the tunnelling magnetoresistance effect and have recently been introduced to the consumer market. These technological advances have been possible thanks to years of extensive study and optimization devoted to such effects. Following this logic, we arrive at the conclusion that an effect that produces higher magnetoresistance ratios but uses lower magnetic fields, or even only electric currents, is highly desired and could be useful for the design and fabrication of the spintronic devices of tomorrow. In this thesis, the use of electron beam lithography (EBL) and a bilayer liftoff process to fabricate magnetic Ni nanostructures with constrictions in the range of 12 to 60 nm is reported. These structures were fabricated based upon the constricted nanowire (CNW) and nanobridge (NB) geometries. High control and reproducibility in the fabrication of such geometries have been achieved with the introduced bilayer liftoff process. This is important because it provides the opportunity to study the statistics of the domain wall magnetoresistance (DWMR) effect and assess its reproducibility. Additionally, micromagnetic simulations of the fabricated structures were carried out and it was found that domain walls (DWs) with reduced widths down to 48 and 42.5 nm can be achieved using the CNW and NB geometries, respectively. The magnetoresistance effect due to the presence of a DW has been estimated using dimensions achieved experimentally. Furthermore, the anisotropic magnetoresistance (AMR) effect was obtained numerically, and it was found to be smaller than DWMR. This opens the possibility of using the fabricated structures for more systematic studies of DWMR.
