61

Asymmetric Key Distribution

Sonalker, Anuja Anilkumar 12 April 2002 (has links)
ABSTRACT by Anuja A. Sonalker on Asymmetric Key Distribution (under the direction of Dr. Gregory T. Byrd). Currently, in threshold public key systems, key shares are generated uniformly and distributed in the same manner to every participant. We propose a new scheme, Asymmetric Key Distribution (AKD), in which one share server is provided with a larger, unequal chunk of the original secret key. Asymmetric Key Distribution is a unique scheme for generating and distributing unequal shares via a Trusted Dealer to all the registered peers in the system, such that no transaction can be completed without the single compulsory share from the Special Server. This scheme is aimed at circumstances where a single party needs to co-exist within a group of semi-trusted peers, or at a coalition in which every entity should have a choice to participate and one of the entities needs to be privileged with more power. This thesis presents the algorithm and security model for Asymmetric Key Distribution, along with the assumptions and dependencies within whose boundaries the algorithm is guaranteed to be secure. Its robustness lies in its simplicity and in its distributed nature. We address the security concerns related to the model, including compromised share servers and cryptanalytic attacks. A variation, called the Dual Threshold Scheme, is created to reduce the main vulnerability of the algorithm, namely the compromise of the Special Server and its secret share. In this scheme, a threshold number of Distributed Special Servers must combine to collectively generate a share equivalent to the Special Server's share. This flexibility allows us to adjust the threshold scheme to the environment. We describe a Java-based implementation of the AKD algorithm, using Remote Method Invocation (RMI) for communication among share servers. A typical scenario of a Trusted Dealer, a Special Server, and a number of Share Servers was created, in which timed asymmetric key generation and distribution was carried out, after which the servers initiated and completed certificate signing transactions in the appropriate manner. As an interesting exercise, the share servers were corrupted so that they would try to exclude the Special Server from transactions and form its share themselves; all such efforts were futile. Another interesting aspect was key generation timing. Key generation is known to be a very time-intensive process, but the key share reuse concept used in this implementation reduced key generation time by 66-90% relative to classical key generation.
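As a rough illustration of the compulsory-share idea described above (a minimal sketch, not the thesis's AKD construction), the following fragment uses simple additive secret sharing over a prime field: the Special Server's share is derived last, so that no coalition of peer shares alone reveals the secret. The modulus, share counts, and function names are illustrative assumptions.

```python
import secrets

P = 2**127 - 1  # prime modulus for the share arithmetic (illustrative choice)

def deal_shares(secret, num_peers):
    """Split `secret` into peer shares plus one compulsory 'special' share.

    Peer shares are uniformly random field elements; the special share is
    chosen so that secret = (special + sum(peers)) mod P, hence no subset
    of peer shares alone reveals anything about the secret.
    """
    peer_shares = [secrets.randbelow(P) for _ in range(num_peers)]
    special_share = (secret - sum(peer_shares)) % P
    return special_share, peer_shares

def reconstruct(special_share, peer_shares):
    """Recombine shares; omitting the special share yields a useless value."""
    return (special_share + sum(peer_shares)) % P

if __name__ == "__main__":
    secret = 123456789
    special, peers = deal_shares(secret, num_peers=5)
    assert reconstruct(special, peers) == secret
    # Colluding peers without the compulsory share recover only a random value.
    assert (sum(peers) % P) != secret
```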
62

SAFDetection: Sensor Analysis based Fault Detection in Tightly-Coupled Multi-Robot Team Tasks

Li, Xingyan 01 December 2008 (has links)
This dissertation addresses the problem of detecting faults based on sensor analysis for tightly-coupled multi-robot team tasks. The approach I developed is called SAFDetection, which stands for Sensor Analysis based Fault Detection, pronounced “Safe Detection”. When dealing with robot teams, it is challenging to detect all types of faults because of the complicated environment they operate in and the large spectrum of components used in the robot system. The SAFDetection approach provides a novel methodology for detecting robot faults in situations where motion models and models of multi-robot dynamic interactions are unavailable. The fundamental idea of SAFDetection is to build the robots' normal behavior model based on the robots' sensor data. This normal behavior model not only describes the motion pattern of a single robot, but also captures the interaction among the robots in the same team. Inspired by data mining theory, it combines data clustering techniques with the generation of a probabilistic state transition diagram to model the normal operation of the multi-robot system. The contributions of the SAFDetection approach include: (1) providing a way for a robot system to automatically generate a normal behavior model with little prior knowledge; (2) enabling a robot system to detect physical, logic and interactive faults online; (3) providing a way to build a fault detection capability that is independent of the particular type of fault that occurs; and (4) providing a way for a robot team to generate a normal behavior model for the team based on the individual robots' normal behavior models. SAFDetection has two implementation variants for multi-robot teams, a centralized approach and a distributed approach; the preferred approach depends on the size of the robot team, the robots' computational capability, and the network environment. The SAFDetection approach has been successfully implemented and tested in three robot task scenarios: box pushing with two robots, and follow-the-leader with two-robot and five-robot teams. These experiments have validated the SAFDetection approach and demonstrated its robustness, scalability, and applicability to a wide range of tightly-coupled multi-robot applications.
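A hedged sketch of the general idea (clustering sensor vectors from normal runs into discrete states, then learning a smoothed state-transition matrix and flagging transitions rarely seen during training); the cluster count, features, and threshold are illustrative assumptions, not the dissertation's actual design.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumes scikit-learn is available

def train_normal_model(sensor_data, n_states=8):
    """Cluster normal-operation sensor vectors into discrete states and
    estimate a (Laplace-smoothed) state-transition probability matrix."""
    km = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit(sensor_data)
    states = km.labels_
    trans = np.ones((n_states, n_states))          # smoothing prior
    for a, b in zip(states[:-1], states[1:]):
        trans[a, b] += 1
    trans /= trans.sum(axis=1, keepdims=True)
    return km, trans

def detect_faults(km, trans, sensor_stream, threshold=0.01):
    """Flag time steps whose state transition was rarely seen during training."""
    states = km.predict(sensor_stream)
    return [t for t, (a, b) in enumerate(zip(states[:-1], states[1:]), start=1)
            if trans[a, b] < threshold]

if __name__ == "__main__":
    # Two robots circling in formation: the team's "normal behavior".
    t = np.linspace(0, 20 * np.pi, 2000)
    normal = np.column_stack([np.cos(t), np.sin(t),
                              np.cos(t + 0.5), np.sin(t + 0.5)])
    km, trans = train_normal_model(normal)
    # Test run: normal motion, then a sudden jump away from the learned pattern.
    t2 = np.linspace(0, 2 * np.pi, 200)
    test = np.column_stack([np.cos(t2), np.sin(t2),
                            np.cos(t2 + 0.5), np.sin(t2 + 0.5)])
    test[150:] += 2.0                               # simulated drift/collision
    print(detect_faults(km, trans, test))           # flags steps near index 150
```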
63

Self-Certified Public Key Cryptographic Methodologies for Resource-Constrained Wireless Sensor Networks

Arazi, Ortal 01 December 2007 (has links)
As sensor networks become one of the key technologies to realize ubiquitous computing, security remains a growing concern. Although a wealth of key-generation methods have been developed during the past few decades, they cannot be directly applied to sensor network environments. The resource-constrained characteristics of sensor nodes, the ad-hoc nature of their deployment, and the vulnerability of wireless media pose a need for unique solutions. A fundamental requisite for achieving security is the ability to provide for data confidentiality and node authentication. However, the scarce resources of sensor networks have rendered the direct applicability of existing public key cryptography (PKC) methodologies impractical. Elliptic Curve Cryptography (ECC) has emerged as a suitable public key cryptographic foundation for constrained environments, providing strong security for relatively small key sizes. This work focuses on the clear need for resilient security solutions in wireless sensor networks (WSNs) by introducing efficient PKC methodologies, explicitly designed to accommodate the distinctive attributes of resource-constrained sensor networks. Primary contributions pertain to the introduction of light-weight cryptographic arithmetic operations, and the revision of self-certification (consolidated authentication and key-generation). Moreover, a low-delay group key generation methodology is devised and a denial of service mitigation scheme is introduced. The light-weight cryptographic methods developed pertain to a system-level efficient utilization of the Montgomery procedure and efficient calculations of modular multiplicative inverses. With respect to the latter, computational complexity has been reduced from O(m) to O(log m), with little additional memory cost. Complementing the theoretical contributions, practical computation off-loading protocols have been developed along with a group key establishment scheme. Implementation on state-of-the-art sensor node platforms has yielded a comprehensive key establishment process obtained in approximately 50 ns, while consuming less than 25 mJ. These exciting results help demonstrate the technology developed and ensure its impact on next-generation sensor networks.
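One of the building blocks mentioned above, the modular multiplicative inverse, can be computed in O(log m) iterations with the extended Euclidean algorithm rather than by O(m) search; the sketch below is a generic textbook version, not the dissertation's optimized sensor-node formulation. The modulus and operand are illustrative.

```python
def mod_inverse(a, m):
    """Return x such that (a * x) % m == 1, via the extended Euclidean
    algorithm; the loop runs in O(log m) iterations."""
    old_r, r = a % m, m
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a is not invertible modulo m")
    return old_s % m

if __name__ == "__main__":
    p = 2**255 - 19                 # prime modulus used by some elliptic-curve systems
    a = 31415926535897932384
    assert (a * mod_inverse(a, p)) % p == 1
```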
64

Personalized Health Monitoring Using Evolvable Block-based Neural Networks

Jiang, Wei 01 August 2007 (has links)
This dissertation presents personalized health monitoring using evolvable block-based neural networks. Personalized health monitoring plays an increasingly important role in modern society as the population enjoys longer lives. Personalization in health monitoring considers physiological variations brought about by temporal, personal, or environmental differences, and demands solutions capable of reconfiguring and adapting to specific requirements. Block-based neural networks (BbNNs) consist of 2-D arrays of modular basic blocks that can be easily implemented using reconfigurable digital hardware, such as field programmable gate arrays (FPGAs) that allow on-line partial reconfiguration. The modular structure of BbNNs enables easy expansion in size by adding more blocks. A computationally efficient evolutionary algorithm is developed that simultaneously optimizes the structure and weights of BbNNs. This evolutionary algorithm increases optimization speed by integrating a local search operator. An adaptive rate update scheme that removes manual tuning of operator rates enhances the fitness trend compared to pre-determined fixed rates. A fitness scaling with generalized disruptive pressure reduces the possibility of premature convergence. The BbNN platform promises an evolvable solution that changes structures and parameters for personalized health monitoring. A BbNN evolved with the proposed evolutionary algorithm, using the Hermite transform coefficients and the time interval between neighboring R peaks of the ECG signal, provides a patient-specific ECG heartbeat classification system. Experimental results using the MIT-BIH Arrhythmia database demonstrate a potential for significant performance enhancements over other major techniques.
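A drastically simplified sketch of co-evolving structure and weights, assuming a toy linear classifier whose connections are gated by a binary mask; the BbNN block topology, local search operator, adaptive rate updates, and fitness scaling described above are omitted. All parameters and names are illustrative.

```python
import numpy as np

def fitness(weights, mask, X, y):
    """Toy fitness: accuracy of a linear classifier whose connections are
    gated by a binary structure mask (structure and weights co-evolved)."""
    predictions = (X @ (weights * mask)) > 0
    return float(np.mean(predictions == y))

def evolve(X, y, pop_size=20, generations=100, seed=0):
    """(mu + lambda)-style loop mutating both weights and structure."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    pop = [(rng.normal(size=dim), rng.integers(0, 2, size=dim)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind[0], ind[1], X, y), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for w, m in parents:
            w_child = w + 0.1 * rng.normal(size=dim)                 # weight mutation
            m_child = np.where(rng.random(dim) < 0.05, 1 - m, m)     # structure mutation
            children.append((w_child, m_child))
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind[0], ind[1], X, y))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 10))
    y = (X[:, 0] + 0.5 * X[:, 3]) > 0          # only two informative features
    w_best, m_best = evolve(X, y)
    print(m_best, round(fitness(w_best, m_best, X, y), 3))
```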
65

Computationally Efficient Mixed Pixel Decomposition Using Constrained Optimizations

Miao, Lidan 01 December 2007 (has links)
Sensors with spatial resolution larger than targets yield mixed pixels, i.e., pixels whose measurements are composites of different sources (endmembers). The analysis of mixed pixels demands subpixel methods to perform source separation and quantification, which is a problem of blind source separation (BSS). Although various algorithms have been proposed, several important issues remain unresolved. First, assuming the endmembers are known, the abundance estimation is commonly performed by employing a least squares criterion, which, however, makes the estimation sensitive to noise and outliers and makes endmembers with very similar signatures difficult to differentiate. In addition, the nonnegativity constraints require iterative approaches that are more computationally expensive than direct methods. Second, to extract endmembers from the given image, most algorithms assume the presence of pure pixels, i.e., pixels containing only one endmember class, which is not realistic in real-world applications. This dissertation presents effective and computationally efficient source separation algorithms, which blindly extract constituent components and their fractional abundances from mixed pixels using constrained optimizations. When the image contains pure pixels, we develop a constrained maximum entropy (MaxEnt) approach to perform unmixing. The entropy formulation provides a natural way to incorporate the physical constraints, and yields an optimal solution that goes beyond least squares. However, the assumption of the presence of pure pixels is not always reliable. To solve this problem, we further develop a constrained nonnegative matrix factorization (NMF) method, which integrates the least squares analysis and the model of convex geometry. The constrained NMF approach exploits the important fact that the endmembers occupy the vertices of a simplex, and that the simplex volume determined by the actual endmembers is the minimum among all possible simplexes that circumscribe the data scatter space. Both methods blindly extract endmembers and abundances with strong robustness to noise and outliers, and admit a generalization to lower and higher dimensional spaces. For images containing pure pixels, the MaxEnt approach exhibits high estimation accuracy, while the constrained NMF method yields relatively stable performance for data with different endmember purities, showing improved performance over the MaxEnt approach when all image pixels are mixtures. The proposed algorithms are applied to the subject of hyperspectral unmixing. Comparative analyses with the state-of-the-art methods show their effectiveness and merits. To demonstrate the broad application domain of the unmixing schemes, we generalize the proposed idea to solve classic image processing problems, particularly blind image restoration. We reinvestigate the physical image formation process and interpret classic image restoration from a BSS perspective; that is, the observed image is considered as a linear combination of a set of shifted point spread functions (PSFs) with the weight coefficients determined by the actual image. A smoothness and block-decorrelation constrained NMF method is developed to estimate the source image.
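For orientation, a brief NumPy sketch of the underlying linear mixing model and a plain multiplicative-update NMF with only nonnegativity enforced; the dissertation's constrained MaxEnt and minimum-simplex-volume NMF formulations add terms and an explicit sum-to-one abundance constraint that are not reproduced here. Data sizes and names are illustrative.

```python
import numpy as np

def nmf_unmix(X, n_endmembers, n_iter=500, eps=1e-9, seed=0):
    """Factor X (pixels x bands) into abundances A >= 0 and endmember
    signatures S >= 0 via Lee-Seung multiplicative updates (plain NMF)."""
    rng = np.random.default_rng(seed)
    n_pixels, n_bands = X.shape
    A = rng.random((n_pixels, n_endmembers))
    S = rng.random((n_endmembers, n_bands))
    for _ in range(n_iter):
        A *= (X @ S.T) / (A @ S @ S.T + eps)      # update abundances
        S *= (A.T @ X) / (A.T @ A @ S + eps)      # update endmember signatures
    return A, S

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_S = rng.random((3, 20))                     # 3 endmembers, 20 bands
    true_A = rng.dirichlet(np.ones(3), size=200)     # fractional abundances
    X = true_A @ true_S + 0.01 * rng.normal(size=(200, 20))
    A, S = nmf_unmix(np.clip(X, 0.0, None), n_endmembers=3)
    print(np.round(A[:3] / A[:3].sum(axis=1, keepdims=True), 2))  # normalized abundances
```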
66

Integration of Spatial and Spectral Information for Hyperspectral Image Classification

Du, Zheng 01 August 2008 (has links)
Hyperspectral imaging has become a powerful tool in the biomedical and agricultural fields in recent years, and interest among researchers has increased immensely. Hyperspectral imaging combines conventional imaging and spectroscopy to acquire both spatial and spectral information from an object. Consequently, hyperspectral image data contain not only the spectral information of objects, but also the spatial arrangement of objects. Information captured in neighboring locations may provide useful supplementary knowledge for analysis. Therefore, this dissertation investigates the integration of information from both the spectral and spatial domains to enhance hyperspectral image classification performance. The major impediment to a combined spatial and spectral approach is that most spatial methods were developed only for a single image band. Building on the traditional single-image local Geary measure, this dissertation proposes a Multidimensional Local Spatial Autocorrelation (MLSA) measure for hyperspectral image data. Based on the proposed spatial measure, this research develops a collaborative band selection strategy that combines a spectral separability measure (divergence) and a spatial homogeneity measure (MLSA) for the hyperspectral band selection task. In order to calculate the divergence more efficiently, a set of recursive equations for the calculation of divergence with an additional band is derived to overcome the computational restrictions. Moreover, this dissertation proposes a collaborative classification method that integrates spectral distance and spatial autocorrelation during the decision-making process. This method therefore fully utilizes the spatial-spectral relationships inherent in the data, and thus improves classification performance. In addition, the usefulness of the proposed band selection and classification methods is evaluated with four case studies: detection and identification of tumors on poultry carcasses, fecal contamination on apple surfaces, cancer on mouse skin, and crops in agricultural fields using hyperspectral imagery. Through the case studies, the performance of the proposed methods is assessed; the results clearly show the necessity and efficiency of integrating spatial information for hyperspectral image processing.
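A hedged sketch of a multi-band local spatial homogeneity measure in the spirit of MLSA, assuming the per-pixel statistic is the mean squared spectral distance to its 4-connected neighbors (low values indicate spatially homogeneous regions); the dissertation's exact MLSA definition and weighting are not reproduced here.

```python
import numpy as np

def local_spectral_geary(cube):
    """For each pixel of a (rows, cols, bands) cube, return the mean squared
    spectral distance to its 4-connected neighbours; low values mark spatially
    homogeneous regions, high values mark edges or isolated pixels."""
    rows, cols, _ = cube.shape
    total = np.zeros((rows, cols))
    counts = np.zeros((rows, cols))
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted = np.roll(cube, shift=(dr, dc), axis=(0, 1))
        d2 = ((cube - shifted) ** 2).sum(axis=2)
        valid = np.ones((rows, cols), dtype=bool)   # mask np.roll wrap-around rows/cols
        if dr == 1:
            valid[0, :] = False
        elif dr == -1:
            valid[-1, :] = False
        if dc == 1:
            valid[:, 0] = False
        elif dc == -1:
            valid[:, -1] = False
        total += np.where(valid, d2, 0.0)
        counts += valid
    return total / counts

if __name__ == "__main__":
    cube = np.random.default_rng(0).random((64, 64, 30))   # 30-band toy image
    print(local_spectral_geary(cube).mean())
```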
67

Automated Genome-Wide Protein Domain Exploration

Rekepalli, Bhanu Prasad 01 December 2007 (has links)
Exploiting the exponentially growing genomics and proteomics data requires high quality, automated analysis. Protein domain modeling is a key area of molecular biology as it unravels the mysteries of evolution, protein structures, and protein functions. A plethora of sequences exist in protein databases with incomplete domain knowledge. Hence this research explores automated bioinformatics tools for faster protein domain analysis. Automated tool chains described in this dissertation generate new protein domain models thus enabling more effective genome-wide protein domain analysis. To validate the new tool chains, the Shewanella oneidensis and Escherichia coli genomes were processed, resulting in a new peptide domain database, detection of poor domain models, and identification of likely new domains. The automated tool chains will require months or years to model a small genome when executing on a single workstation. Therefore the dissertation investigates approaches with grid computing and parallel processing to significantly accelerate these bioinformatics tool chains.
68

Accelerating Quantum Monte Carlo Simulations with Emerging Architectures

Gothandaraman, Akila 01 August 2009 (has links)
Scientific computing applications demand ever-increasing performance while traditional microprocessor architectures face limits. Recent technological advances have led to a number of emerging computing platforms that provide one or more of the following advantages over their predecessors: increased energy efficiency, programmability/flexibility, different granularities of parallelism, and higher numerical precision support. This dissertation explores emerging platforms such as reconfigurable computing using field-programmable gate arrays (FPGAs) and graphics processing units (GPUs) for quantum Monte Carlo (QMC), a simulation method widely used in physics and physical chemistry. This dissertation makes the following significant contributions to computational science. First, we develop an open-source, user-friendly, hardware-accelerated simulation framework using reconfigurable computing. This framework demonstrates a significant performance improvement over the optimized software implementation on the Cray XD1 high performance reconfigurable computing (HPRC) platform. We use novel techniques to approximate the kernel functions, pipelining strategies, and a customized fixed-point representation that guarantees the accuracy required for our simulation. Second, we exploit the enormous amount of data parallelism on GPUs to accelerate the computationally intensive functions of the QMC application using NVIDIA's Compute Unified Device Architecture (CUDA) paradigm. We experiment with single-, double-, and mixed-precision arithmetic for the CUDA implementation. Finally, we present analytical performance models to help validate, predict, and characterize the application performance on these architectures. Together, this work, which combines novel algorithms and emerging architectures along with performance models, will serve as a starting point for investigating related scientific applications on present and future heterogeneous architectures.
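To ground the discussion, a tiny pure-Python variational Monte Carlo example for the 1-D harmonic oscillator; its inner loop of local-energy evaluations is, loosely, the kind of kernel computation that the dissertation offloads to FPGA pipelines and GPU threads. The trial wavefunction, step size, and sample counts are illustrative assumptions, not the dissertation's simulation.

```python
import math
import random

def local_energy(x, alpha):
    """Local energy for the 1-D harmonic oscillator (hbar = m = omega = 1)
    with trial wavefunction psi(x) = exp(-alpha * x**2)."""
    return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

def vmc_energy(alpha=0.45, steps=200_000, delta=1.0, seed=1):
    """Metropolis sampling of |psi|^2; returns the variational energy estimate."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(steps):
        x_new = x + delta * (rng.random() - 0.5)
        # Acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < math.exp(-2.0 * alpha * (x_new * x_new - x * x)):
            x = x_new
        e_sum += local_energy(x, alpha)
    return e_sum / steps

if __name__ == "__main__":
    print(vmc_energy())   # approaches 0.5 as alpha approaches the optimum 0.5
```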
69

Fabric-on-a-Chip: Toward Consolidating Packet Switching Functions on Silicon

Matthews, William B. 01 December 2007 (has links)
The switching capacity of an Internet router is often dictated by the memory bandwidth required to buffer arriving packets. With the demand for greater capacity and improved service provisioning, inherent memory bandwidth limitations are encountered, rendering input-queued (IQ) switches and combined input and output queued (CIOQ) architectures more practical. Output-queued (OQ) switches, on the other hand, offer several highly desirable performance characteristics, including minimal average packet delay, controllable Quality of Service (QoS) provisioning, and work-conservation under any admissible traffic conditions. However, the memory bandwidth requirements of such systems are O(NR), where N denotes the number of ports and R the data rate of each port. Clearly, for high port densities and data rates, this constraint dramatically limits the scalability of the switch. In an effort to retain the desirable attributes of output-queued switches, while significantly reducing the memory bandwidth requirements, distributed shared memory architectures, such as the parallel shared memory (PSM) switch/router, have recently received much attention. The principal advantage of the PSM architecture is derived from the use of slow-running memory units operating in parallel to distribute the memory bandwidth requirement. At the core of the PSM architecture is a memory management algorithm that determines, for each arriving packet, the memory unit in which it will be placed. However, to date, the computational complexity of this algorithm is O(N), thereby limiting the scalability of PSM switches. In an effort to overcome the scalability limitations, it is the goal of this dissertation to extend existing shared-memory architecture results while introducing the notion of Fabric on a Chip (FoC). Taking advantage of recent advancements in integrated circuit technologies, FoC aims to facilitate the consolidation of as many packet switching functions as possible on a single chip. Accordingly, this dissertation introduces a novel pipelined memory management algorithm, which plays a key role in the context of on-chip output-queued switch emulation. We discuss in detail the fundamental properties of the proposed scheme, along with hardware-based implementation results that illustrate its scalability and performance attributes. To complement the main effort and further support the notion of FoC, we provide performance analysis of output queued cell switches with heterogeneous traffic. The result is a flexible tool for obtaining bounds on the memory requirements of output queued switches under a wide range of traffic scenarios. Additionally, we present a reconfigurable high-speed hardware architecture for real-time generation of packets for the various traffic scenarios. The work presented in this thesis aims at providing pragmatic foundations for designing next-generation, high-performance Internet switches and routers.
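For reference, a toy version of the classical O(N) per-cell placement rule for a parallel shared memory switch (scan the memory units and pick one that is free at both the arrival and departure time slots); the dissertation's pipelined memory management algorithm, which removes this linear scan, is not reproduced here. Slot bookkeeping and names are illustrative.

```python
class PSMScheduler:
    """Toy parallel shared memory placement: each slow memory unit performs at
    most one operation (read or write) per time slot, so an arriving cell needs
    a unit that is free both now (write) and at its departure slot (read)."""

    def __init__(self, num_units):
        self.busy = [set() for _ in range(num_units)]   # occupied time slots per unit

    def place(self, arrival_slot, departure_slot):
        """Return the chosen memory unit, or None if no unit is available.
        The linear scan makes this O(N) per cell, the bottleneck that a
        pipelined algorithm is designed to remove."""
        for unit, slots in enumerate(self.busy):
            if arrival_slot not in slots and departure_slot not in slots:
                slots.add(arrival_slot)
                slots.add(departure_slot)
                return unit
        return None

if __name__ == "__main__":
    sched = PSMScheduler(num_units=6)
    # Four cells arrive in the same slot; three of them share a departure slot.
    print([sched.place(arrival_slot=0, departure_slot=d) for d in (3, 3, 3, 4)])
```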
70

Automated System to Debug Under-performing Network Flows in Wide Area Networks

Tandra, Harika 01 December 2009 (has links)
Locating the cause of performance losses in large high performance Wide Area Networks (WANs) is an extremely challenging problem. This is because WANs comprise several distributed sub-networks (Autonomous Networks), each with its own independent network monitoring system. Each individual monitoring system has limited or no access to network devices outside its own network. Moreover, conventional network monitoring systems are designed only to provide information about the health of individual network devices, and do not provide sufficient information to monitor end-to-end performance, thus adding severe overhead to debugging end-to-end performance issues. In this thesis, an automated tool is designed that requires no special access to network devices and no special software installations on the network devices or end hosts. The system detects performance losses and locates the most likely problem nodes (routers/links) in the network. A key component of this system is the novel hybrid network monitoring/data collection sub-system, which is designed to obtain the best of both active and passive monitoring techniques. Pattern analysis algorithms are then designed to locate the causes of performance loss using the data collected by this sub-system. The system is being tested on the GLORIAD (Global Ring Network for Advanced Application Development) network. One of the future goals is to integrate this system into GLORIAD's network monitoring tool set, to provide end-to-end network monitoring and problem mitigation capabilities.
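A generic illustration of one way to localize likely problem nodes from end-to-end measurements (scoring routers/links by how often they appear on under-performing versus healthy paths); this common path-intersection heuristic is offered only as a sketch, not as the thesis's pattern analysis algorithms. Hop names, loss rates, and the threshold are illustrative.

```python
from collections import Counter

def rank_suspects(path_measurements, loss_threshold=0.05):
    """Count how often each router/link appears on under-performing paths,
    subtract its appearances on healthy paths, and rank by that score."""
    bad, good = Counter(), Counter()
    for hops, loss_rate in path_measurements:
        target = bad if loss_rate > loss_threshold else good
        target.update(hops)
    scores = {hop: bad[hop] - good[hop] for hop in bad}
    return sorted(scores.items(), key=lambda kv: -kv[1])

if __name__ == "__main__":
    measurements = [
        (["r1", "r2", "r3"], 0.20),   # hop lists and observed loss rates (illustrative)
        (["r1", "r4", "r5"], 0.00),
        (["r6", "r2", "r7"], 0.15),
    ]
    print(rank_suspects(measurements))   # r2 appears on every under-performing path
```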
