About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Relay Selection Strategies for Multi-hop Cooperative Networks

Sun, Hui 09 June 2016 (has links)
In this dissertation we consider several relay selection strategies for multi-hop cooperative networks. The strategies we propose do not require a central controller (CC); instead, relay selection proceeds on a hop-by-hop basis, so the strategies can be implemented in a distributed manner. Consequently, increasing the number of hops in the network does not increase the complexity of, or the time required for, the relay selection procedure at each hop. We first investigate the performance of a hop-by-hop relay selection strategy for multi-hop decode-and-forward (DF) cooperative networks. In each relay cluster, the relays that successfully receive and decode the message from the previous hop form a decoding set, and the relay in that set with the highest signal-to-noise ratio (SNR) link to the next hop is selected for retransmission. We analyze the performance of this method in terms of end-to-end outage probability, and we derive approximations for its ergodic capacity and effective ergodic capacity. Next we propose a novel hop-by-hop relay selection strategy in which the relay in the decoding set with the largest number of "good" channels to the next stage is selected for retransmission. We analyze the performance of this method in terms of end-to-end outage probability under both perfect and imperfect channel state information (CSI). We also investigate relay selection strategies in underlay spectrum-sharing cognitive relay networks. We consider a two-hop DF cognitive relay network with a constraint on the interference to the primary user. The outage probability of the secondary user and the interference probability at the primary user are analyzed under an imperfect-CSI scenario. Finally, we introduce a hop-by-hop relay selection strategy for underlay spectrum-sharing multi-hop relay networks, in which relay selection at each stage is based only on the CSI of that hop.
It is shown that in terms of outage probability, the performance of this method is nearly optimal.
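The first strategy above can be sketched in a few lines; the function name, the dB threshold, and the toy SNR values are illustrative, not taken from the dissertation:

```python
def select_relay(decode_snrs_db, forward_snrs_db, threshold_db):
    """Hop-by-hop DF relay selection (illustrative sketch).

    decode_snrs_db[i]: SNR of the link from the previous hop to relay i.
    forward_snrs_db[i]: SNR of the link from relay i to the next hop.
    Relays whose receive SNR meets the decoding threshold form the
    decoding set; among them, the relay with the best forward link is
    chosen. Returns None (outage at this hop) if the set is empty.
    """
    decoding_set = [i for i, snr in enumerate(decode_snrs_db)
                    if snr >= threshold_db]
    if not decoding_set:
        return None
    return max(decoding_set, key=lambda i: forward_snrs_db[i])

# One three-relay hop: relays 0 and 2 decode; relay 1 has the best
# forward link but failed to decode, so relay 2 is selected.
chosen = select_relay([12.0, 3.0, 10.0], [5.0, 20.0, 8.0], threshold_db=6.0)
```

Because the decision at each hop uses only that hop's SNRs, the selection cost per hop is independent of the total number of hops, which is the distributed property the abstract emphasizes.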
202

Low Cost Nano Patterned Template for Surface Enhanced Raman Scattering (SERS) for In-Vitro and In-Vivo Applications

Hou, Hsuan-Chao 30 May 2016 (has links)
Raman scattering is a well-known technique for detecting and identifying complex molecular-level samples. The weak Raman signals are enormously enhanced in the presence of a nano-patterned metallic surface next to the specimen. This dissertation describes a technique to fabricate a novel, low-cost, highly sensitive, disposable, and reproducible metallic nanostructure on a transparent substrate for Surface Enhanced Raman Scattering (SERS). Raman signals can be obtained from the surface of opaque specimens. Most importantly, the metallic nanostructure can be bonded to one end of a probe or needle, with the other end coupled to a distant spectrometer. This opens up Raman spectroscopy for use in a clinical environment, with the patient simply sitting or lying near a spectrometer. This SERS system, a molecular-level early-diagnosis technology, can be divided into four parts: SERS nanostructure substrates, reflection (in vitro) Raman signals, transmission (in vivo) Raman signals, and a probe or needle with a gradient-index (GRIN) lens in an articulated-arm system. In this work, aluminum was employed not only as a base substrate for a sputtered Au nanostructure (conventional view) but also as a sacrificial layer for the Au nanostructure on a transparent substrate (transmission view). The enhanced Raman signals from the reflection and transmission SERS substrates depended on the aluminum etching method, the Au deposition angle, and the Au deposition thickness. Rhodamine 6G (R6G) solutions on both sides of the SERS substrates were used to analyze and characterize them. Moreover, preliminary Raman spectra from R6G and a chicken specimen were obtained through a remote SERS probe head and an articulated-arm system. The diameter of the invasive probe head was reduced to 0.5 mm. The implication is that this system can be applied in medical applications.
203

Effective 3D Geometric Matching for Data Restoration and Its Forensic Application

Zhang, Kang 31 May 2016 (has links)
3D geometric matching is the technique of detecting similar patterns among multiple objects. It is an important and fundamental problem that can facilitate many tasks in computer graphics and vision, including shape comparison and retrieval, data fusion, scene understanding and object recognition, and data restoration. For example, 3D scans of an object from different angles are matched and stitched together to form the complete geometry. In medical image analysis, the motion of deforming organs is modeled and predicted by matching a series of CT images. This problem is challenging and remains unsolved, especially when the similar patterns are (1) small and lack geometric saliency, or (2) incomplete due to occlusion during scanning or damage to the data. We study reliable matching algorithms that can tackle these difficulties, and their application in data restoration. Data restoration is the problem of restoring a fragmented or damaged model to its original complete state. It is a new area with direct applications in scientific fields such as forensics and archeology. In this dissertation, we study novel, effective geometric matching algorithms, including curve matching, surface matching, pairwise matching, multi-piece matching, and template matching. We demonstrate their application in an integrated digital pipeline of skull reassembly, skull completion, and facial reconstruction, developed to support the state-of-the-art forensic skull/facial reconstruction pipeline in law enforcement.
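As a small illustration of pairwise matching, the following is a closed-form 2D rigid alignment (rotation plus translation) from known point correspondences, the kind of building block an ICP-style matching loop repeats; all names and numbers here are illustrative and not taken from the dissertation:

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rigid alignment of 2D correspondences src[i] -> dst[i].

    Centers both point sets, recovers the optimal rotation angle from the
    summed dot and cross products, then solves for the translation.
    Returns (theta, tx, ty) such that R(theta) * p + t maps src onto dst.
    """
    n = len(src)
    cxs = sum(p[0] for p in src) / n; cys = sum(p[1] for p in src) / n
    cxd = sum(q[0] for q in dst) / n; cyd = sum(q[1] for q in dst) / n
    s_cos = s_sin = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        px, py, qx, qy = px - cxs, py - cys, qx - cxd, qy - cyd
        s_cos += px * qx + py * qy      # sum of dot products
        s_sin += px * qy - py * qx      # sum of 2D cross products
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    tx = cxd - (c * cxs - s * cys)
    ty = cyd - (s * cxs + c * cys)
    return theta, tx, ty
```

In a full matching pipeline this closed-form step would alternate with re-estimating correspondences; the hard cases the dissertation targets (small, non-salient, incomplete patterns) are precisely where that correspondence step fails and more robust strategies are needed.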
204

An Architecture for Configuring an Efficient Scan Path for a Subset of Elements

Ashrafi, Arash 05 May 2016 (has links)
Field Programmable Gate Arrays (FPGAs) have many modern applications. A feature of FPGAs is that they can be reconfigured to suit the computation. One such form of reconfiguration, called partial reconfiguration (PR), allows part of the chip to be altered. The smallest part that can be reconfigured is called a frame. To reconfigure a frame, a fixed number of configuration bits are input (typically from outside) to the frame. Thus PR involves (a) selecting a subset C ⊆ S of k out of n frames to configure and (b) inputting the configuration bits for these k frames. The recently proposed MU-Decoder has made it possible to select the subset C quickly. This thesis concerns mechanisms to input the configuration bits to the selected frames. Specifically, we propose a class of architectures that, for any subset C ⊆ S (set of frames), constructs a path connecting only the k frames of C through which the configuration bits can be scanned in. We introduce a Basic Network that runs in Θ(k log n) time, where k is the number of frames selected out of the total number n of available frames; we assume the number of configuration bits per frame is constant. The Basic Network does not exploit any locality or other structure in the subset of frames selected. We show that for certain structures (such as frames that are relatively close to each other) the speed of reconfiguration can be improved. We introduce an addition to the Basic Network that suggests the fastest clock speed that can be employed for a given set of frames; this enhancement decreases configuration time to O(k log k) in certain cases. We then introduce a second enhancement, called shortcuts, that in certain cases reduces the time to an optimal O(k). All the proposed architectures require an optimal Θ(n) number of gates. We implement our networks with CAD tools and show that the theoretical predictions are a good reflection of the networks' performance. Our work, although directed to FPGAs, may also apply to other applications, for example hardware testing and novel memory accesses.
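A software model can make the scan-path idea concrete. The sketch below chains only the selected frames into a shift register and scans the configuration bits through it; the chaining order, the function names, and the constant-bits-per-frame assumption are simplifications for illustration, not the thesis architecture:

```python
def scan_in(selected, frame_bits, bits_per_frame):
    """Model of scanning configuration bits through a path that links only
    the selected frames. Frames are chained in index order; bits destined
    for the frame deepest in the chain are shifted in first (last bit of
    each frame first), so after k*b shift cycles every selected frame
    holds its own configuration. Returns ({frame: bits}, cycles)."""
    chain = sorted(selected)                 # scan-path order
    stream = [b for f in reversed(chain) for b in reversed(frame_bits[f])]
    registers = {f: [0] * bits_per_frame for f in chain}
    cycles = 0
    for bit in stream:                       # one bit per clock cycle
        carry = bit
        for f in chain:                      # ripple through the scan path
            registers[f].insert(0, carry)
            carry = registers[f].pop()
        cycles += 1
    return registers, cycles
```

The k·b shift cycles here model only the scan-in itself; the Θ(k log n) bound in the abstract also accounts for configuring the path through the network.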
205

Powers and Compensation in Three-Phase Systems with Nonsinusoidal and Asymmetrical Voltages and Currents

Bhattarai, Prashanna Dev 22 April 2016 (has links)
A contribution to the power theory of three-phase, three-wire systems with asymmetrical and nonsinusoidal supply voltages is presented in this dissertation. It includes contributions to the explanation of power-related phenomena and to methods of compensation. The power equation of unbalanced linear time-invariant (LTI) loads at sinusoidal but asymmetrical voltage is first presented. The current components of such a load and the phenomena associated with them are described. The load current decomposition is used for the design of reactive balancing compensators for power factor improvement. Next, the current of LTI loads operating at nonsinusoidal asymmetrical voltage is decomposed, and the power equation of such a load is developed. Methods for designing reactive compensators that completely compensate the reactive and unbalanced current components, as well as optimized compensators that minimize these currents, are also presented. Next, the power equation of harmonics-generating loads (HGLs) connected to a nonsinusoidal asymmetrical voltage is developed. The voltage and current harmonics are divided into two subsets: the harmonic orders originating in the supply and the harmonic orders originating in the load. The load current is decomposed based on the Currents' Physical Components (CPC) power theory, which is also used to generate reference signals for the control of switching compensators used for power factor improvement. Results of simulations in MATLAB Simulink are presented as well.
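The single-phase analogue of the current decomposition described above can be sketched as follows; the function and variable names are illustrative, and the full CPC theory for three-phase nonsinusoidal systems involves more components than this sketch shows:

```python
import cmath

def decompose_current(U, I):
    """Split the load current phasor I into an active component Ia, in
    phase with the voltage U and carrying all the active power P, and a
    residual component Ir = I - Ia that carries no active power.
    U and I are complex rms phasors (single-phase sketch)."""
    P = (U * I.conjugate()).real       # active power
    Ge = P / abs(U) ** 2               # equivalent conductance
    Ia = Ge * U                        # active current
    Ir = I - Ia                        # residual (non-active) current
    return Ia, Ir
```

By construction Ir carries no active power, and the two components are orthogonal, so |I|² = |Ia|² + |Ir|²; it is this kind of orthogonal decomposition that lets a compensator target the non-active components without disturbing the power delivery.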
206

Spectrum Allocation in Networks with Finite Sources and Data-Driven Characterization of Users' Stochastic Dynamics

Ali, Ahsan-Abbas 25 May 2015 (has links)
During emergency situations, public safety communication systems (PSCSs) become overloaded with high traffic; note that PSCSs are finite-source networks. The goal of our study is to propose techniques for efficient spectrum allocation in finite-source networks that can help alleviate this overloading. In a PSCS there are two system segments, one for system-access control and the other for communications, each with dedicated frequency channels. The first part of our research, consisting of three projects, is based on modeling and analyzing finite-source systems for optimal spectrum allocation, for both access control and communications. In the first project (Chapter 2), we study spectrum allocation based on the concept of cognitive radio systems. In the second project (Chapter 3), we study optimal communication-channel allocation by call admission and preemption control. In the third project (Chapter 4), we study the optimal joint allocation of frequency channels for access control and communications. These spectrum allocation techniques require knowledge of the call traffic parameters and the priority levels of the users in the system. For practical systems, this information is extracted from call-record metadata. A key fact to consider when analyzing call records is that the call arrival traffic and the users' priority levels change with events on the ground: a change in events affects the communication behavior of the users, which in turn affects the call arrival traffic and the priority levels. Thus, the first and foremost step in analyzing a given user's call records to extract call traffic information is to segment the data into time intervals of homogeneous, or stationary, communication behavior. Such a segmentation of the data from a practical PSCS is the goal of our fourth project (Chapter 5), which constitutes the second part of our study.
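For a finite-source system such as a PSCS, blocking is described by the classical Engset formula rather than the infinite-source Erlang-B formula. A minimal sketch (the parameter names are illustrative; this is background theory, not code from the dissertation):

```python
from math import comb

def engset_blocking(n_sources, n_channels, rho):
    """Engset blocking probability for a finite-source loss system:
    n_sources users contend for n_channels, and rho is the offered
    load per idle source. The finite population makes the arrival
    rate depend on how many sources are already busy, which is why
    the binomial coefficients use n_sources - 1."""
    terms = [comb(n_sources - 1, k) * rho ** k
             for k in range(n_channels + 1)]
    return terms[-1] / sum(terms)
```

With two sources, one channel, and unit load per idle source, the formula gives a blocking probability of exactly 1/2; as the per-source load grows, blocking grows, which is the overload regime the dissertation targets.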
207

SCALABLE TECHNIQUES FOR FAILURE RECOVERY AND LOCALIZATION

Cho, Sangman January 2011 (has links)
Failure localization and recovery is one of the most important issues in network management for providing continuous connectivity to users. In this dissertation, we develop several algorithms for network failure localization and recovery. First, to achieve resilient multipath routing, we introduce the concept of Independent Directed Acyclic Graphs (IDAGs). Link-independent (node-independent) DAGs satisfy the property that any path from a source to the root on one DAG is link-disjoint (node-disjoint) with any path from the source to the root on the other DAG. Given a network, we develop polynomial-time algorithms to compute link-independent and node-independent DAGs. The algorithm developed in this dissertation: (1) provides multipath routing; (2) utilizes all possible edges; (3) guarantees recovery from single-link failures; and (4) achieves all of this with at most one bit per packet of overhead when routing is based on the destination address and incoming edge. We show the effectiveness of the proposed IDAGs approach by comparing key performance indices to those of the independent-trees and multiple-pairs-of-independent-trees techniques through extensive simulations. Second, we introduce the concept of monitoring tours (m-tours) to uniquely localize all possible failures of up to k links in arbitrary all-optical networks. We establish paths and cycles that can traverse the same link at most twice (backward and forward) and call them m-tours. An m-tour differs from existing schemes such as m-cycles and m-trails, which traverse a link at most once. Closed (open) m-tours start and terminate at the same (distinct) monitor location(s). Each tour is constructed such that any shared risk link group (SRLG) failure results in the failure of a unique combination of closed and open m-tours. We prove that k-edge connectivity is a sufficient condition for localizing all SRLG failures involving up to k links when only one monitoring station is employed. We introduce an integer linear program (ILP) and a greedy scheme to find a placement of monitoring locations that uniquely localizes any SRLG failure involving up to k links, and we provide a heuristic scheme to compute m-tours for a given network. We demonstrate the validity of the proposed monitoring method through simulations, which show that the m-tour approach significantly reduces the number of required monitoring locations and thereby reduces monitoring cost and network-management complexity. Finally, this dissertation studies the problem of uniquely localizing single network-element failures involving a link or node using monitoring cycles, paths, and tours. A monitoring cycle starts and ends at the same monitoring node; a monitoring path starts and ends at distinct monitoring nodes; a monitoring tour starts and ends at a monitoring station but may traverse a link twice, once in each direction. The failure of any link or node results in the failure of a unique combination of cycles, paths, or tours. We develop the necessary theory for monitoring single-element (link/node) failures using only one monitoring station and cycles or tours, respectively, and show that the scheme employing monitoring tours can decrease the number of monitors required compared to the scheme employing monitoring cycles and paths. With this efficient tour-based monitoring approach, the problem of localizing up to k element (link/node) failures using a single monitor is also considered. Through simulations, we verify the effectiveness of our monitoring algorithms.
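The unique-syndrome idea behind m-tours can be illustrated for single-link failures: each link must be covered by a distinct combination of tours, so the set of failed tours identifies the failed link. The tour and link names below are invented for the example; the dissertation's SRLG case generalizes this to multi-link failures:

```python
def build_syndrome_table(tours):
    """tours: {tour_name: set of links the tour traverses}.
    For localization to work, every link's syndrome (the set of tours
    that fail when it fails) must be unique; we assert that here."""
    table = {}
    for link in set().union(*tours.values()):
        syndrome = frozenset(t for t, links in tours.items() if link in links)
        assert syndrome not in table, "syndromes must be unique"
        table[syndrome] = link
    return table

def localize(table, failed_tours):
    """Look up the failed link from the observed set of failed tours."""
    return table[frozenset(failed_tours)]

# Three tours over links a, b, c: every single-link failure kills a
# distinct pair of tours, so one monitor can localize any of them.
tours = {"t1": {"a", "b"}, "t2": {"b", "c"}, "t3": {"a", "c"}}
table = build_syndrome_table(tours)
```

The design problem the ILP and greedy scheme solve is choosing tours and monitor placements so that this uniqueness property holds for every failure scenario of interest, with as few monitors as possible.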
208

OPTIMIZATION OF THE GENETIC ALGORITHM IN THE SHEHERAZADE WARGAMING SIMULATOR

Momen, Faisal January 2011 (has links)
Stability and Support Operations (SASO) continue to play an important role in modern military exercises. The Sheherazade simulation system was designed to facilitate SASO-type mission-planning exercises by rapidly generating and evaluating hundreds of thousands of alternative courses of action (COAs). The system comprises a coevolution engine, which employs a Genetic Algorithm (GA) to generate the COAs for each side in a multi-sided conflict, and a wargamer, which models subjective factors such as regional attitudes and faction animosities to evaluate their effectiveness. This dissertation extends earlier work on Sheherazade in the following ways: (1) the GA and coevolution framework have been parallelized for improved performance on current multi-core platforms; (2) the effects of various algorithm parameters, both general and specific to Sheherazade, were analyzed; and (3) alternative search techniques reflecting recent developments in the field were evaluated for their capacity to improve the quality of the results.
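The parallelized evaluation step of contribution (1) might look like the following sketch; the function names and toy fitness are illustrative, not Sheherazade's actual code:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_population(population, fitness, workers=4):
    """Score every candidate COA concurrently. executor.map preserves
    the population order, so results line up with the input; for a
    CPU-bound wargamer, a ProcessPoolExecutor would replace the thread
    pool to sidestep the interpreter lock."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))

# Toy fitness: prefer bit-vectors with more ones.
population = [[0, 1, 1], [1, 1, 1], [0, 0, 0]]
scores = evaluate_population(population, sum)
```

Because GA fitness evaluations are independent of one another, this step parallelizes cleanly; the coevolutionary coupling between sides constrains only when generations may advance, not how individuals within a generation are scored.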
209

Improving the Error Floor Performance of LDPC Codes with Better Codes and Better Decoders

Nguyen, Dung Viet January 2012 (has links)
Error correcting codes are used in virtually all communication systems to ensure reliable transmission of information. In 1948, Shannon established an upper bound on the maximum rate at which information can be transmitted reliably over a noisy channel. Reliably transmitting information at a rate close to this theoretical limit, known as the channel capacity, has been the goal of channel coding scientists ever since. The rediscovery of low-density parity-check (LDPC) codes in the 1990s brought renewed excitement to the coding community. LDPC codes are interesting because they can approach channel capacity under sub-optimal decoding algorithms whose complexity is linear in the code length. Unsurprisingly, LDPC codes quickly gained popularity in practical applications such as magnetic storage and wireless and optical communications. One of the most important and challenging problems in LDPC code research is the study and analysis of the error floor phenomenon: an abrupt degradation in the frame error rate performance of LDPC codes in the high signal-to-noise ratio region. The error floor is harmful because its presence prevents the LDPC decoder from reaching very low probabilities of decoding failure, an important requirement for many applications. Not long after the rediscovery of LDPC codes, scientists established that the error floor is caused by certain harmful structures, most commonly known as trapping sets, in the Tanner representation of a code. Since then, the study of error floors has mostly consisted of three major problems: (1) estimating the error floor; (2) constructing LDPC codes with low error floors; and (3) designing decoders that are less susceptible to error floors.
Although some parts of this dissertation can be used as important elements in error floor estimation, our main contributions are a novel method for constructing LDPC codes with low error floor and a novel class of low complexity decoding algorithms that can collectively alleviate error floor. These contributions are summarized as follows. A method to construct LDPC codes with low error floors on the binary symmetric channel is presented. Codes are constructed so that their Tanner graphs are free of certain small trapping sets. These trapping sets are selected from the Trapping Set Ontology for the Gallager A/B decoder. They are selected based on their relative harmfulness for a given decoding algorithm. We evaluate the relative harmfulness of different trapping sets for the sum-product algorithm by using the topological relations among them and by analyzing the decoding failures on one trapping set in the presence or absence of other trapping sets. We apply this method to construct structured LDPC codes. To facilitate the discussion, we give a new description of structured LDPC codes whose parity-check matrices are arrays of permutation matrices. This description uses Latin squares to define a set of permutation matrices that have disjoint support and to derive a simple necessary and sufficient condition for the Tanner graph of a code to be free of four-cycles. A new class of bit flipping algorithms for LDPC codes over the binary symmetric channel is proposed. Compared to the regular (parallel or serial) bit flipping algorithms, the proposed algorithms employ one additional bit at a variable node to represent its "strength." The introduction of this additional bit allows an increase in the guaranteed error correction capability. An additional bit is also employed at a check node to capture information which is beneficial to decoding. A framework for failure analysis and selection of two-bit bit flipping algorithms is provided. 
The main component of this framework is the (re)definition of trapping sets, which are the most "compact" Tanner graphs that cause decoding failures of an algorithm. A recursive procedure to enumerate trapping sets is described. This procedure is the basis for selecting a collection of algorithms that work well together. It is demonstrated that decoders which employ a properly selected group of the proposed algorithms operating in parallel can offer high speed and low error floor decoding.
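A minimal version of the serial bit-flipping idea that the proposed two-bit algorithms extend can be sketched as follows, here on the (7,4) Hamming code for brevity rather than an LDPC code; the names and the tie-breaking rule are illustrative:

```python
def syndrome(H, word):
    """Parity-check syndrome of a binary word under matrix H."""
    return [sum(h * x for h, x in zip(row, word)) % 2 for row in H]

def serial_bit_flip(H, received, max_iters=20):
    """Serial bit-flipping decoding on the BSC: repeatedly flip the
    single bit involved in the most unsatisfied parity checks.
    Convergence is not guaranteed for every error pattern; failures on
    small subgraphs are exactly the trapping-set behavior the
    dissertation analyzes."""
    word = list(received)
    for _ in range(max_iters):
        s = syndrome(H, word)
        if not any(s):
            return word                        # valid codeword reached
        # unsat[b] = number of failed checks that involve bit b
        unsat = [sum(H[r][b] for r in range(len(H)) if s[r])
                 for b in range(len(word))]
        worst = max(range(len(word)), key=lambda b: unsat[b])
        word[worst] ^= 1                       # flip the worst bit
    return word                                # may still be in error

# Parity-check matrix of the (7,4) Hamming code.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]
```

The two-bit algorithms in the dissertation enrich this scheme with an extra "strength" bit per variable node and an extra bit per check node, raising the guaranteed error-correction capability over what a one-bit flipping rule like the above can achieve.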
210

Geometric Modeling and Optimization Over Regular Domains for Graphics and Visual Computing

Wan, Shenghua 09 September 2013 (has links)
The effective construction of parametric representations of complicated geometric objects can facilitate many design, analysis, and simulation tasks in Computer-Aided Design (CAD), Computer-Aided Manufacturing (CAM), and Computer-Aided Engineering (CAE). Given a 3D shape, the procedure of finding such a parametric representation upon a canonical domain is called geometric parameterization. Regular geometric regions, such as polycubes and spheres, are desirable domains for parameterization. Parametric representations defined upon regular geometric domains have many desirable mathematical properties and can facilitate or simplify various surface/solid modeling and processing computations. This dissertation studies the construction of parameterizations on regular geometric domains and explores their applications in shape modeling and computer-aided design. Specifically, we study (1) surface parameterization on the spherical domain for closed genus-zero surfaces; (2) surface parameterization on the polycube domain for general closed surfaces; and (3) volumetric parameterization for 3-manifolds embedded in 3D Euclidean space. We propose novel computational models to solve these geometric problems. Our computational models reduce to nonlinear optimizations with various geometric constraints, so we also explore effective optimization algorithms. The main contributions of this dissertation are threefold. (1) We develop an effective progressive spherical parameterization algorithm with an efficient nonlinear optimization scheme subject to the spherical constraint. Compared with state-of-the-art spherical mapping algorithms, our algorithm demonstrates greater efficiency, lower distortion, and guaranteed bijectiveness, and we show its applications in spherical harmonic decomposition and shape analysis. (2) We propose the first topology-preserving polycube-domain optimization algorithm, which simultaneously optimizes the polycube domain together with the parameterization to balance mapping distortion and domain simplicity. We develop effective nonlinear geometric optimization algorithms that handle variables with and without derivatives. This polycube parameterization algorithm can benefit regular quadrilateral mesh generation and cross-surface parameterization. (3) We develop a novel quaternion-based optimization framework for 3D frame-field construction and volumetric parameterization. We demonstrate that our constructed 3D frame field has better smoothness than those of state-of-the-art algorithms and is effective in guiding low-distortion volumetric parameterization and high-quality hexahedral mesh generation.
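A naive version of the project-and-smooth idea behind spherical parameterization can be sketched in a few lines; this illustrates only the spherical constraint, not the dissertation's progressive optimizer:

```python
import math

def normalize(v):
    """Project a 3D vector onto the unit sphere."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def spherical_smooth(vertices, neighbors, lam=0.5):
    """One step of a naive spherical-parameterization iteration: move
    each vertex toward the centroid of its mesh neighbors (a Laplacian
    smoothing step), then re-project onto the unit sphere so the
    spherical constraint is maintained throughout the optimization."""
    out = []
    for i, v in enumerate(vertices):
        nbrs = neighbors[i]
        cen = [sum(vertices[j][k] for j in nbrs) / len(nbrs)
               for k in range(3)]
        moved = [v[k] + lam * (cen[k] - v[k]) for k in range(3)]
        out.append(normalize(moved))
    return out
```

Iterations like this can lose bijectivity and accumulate distortion, which is precisely why the dissertation develops a constrained nonlinear optimization scheme with guaranteed bijectiveness instead of plain smoothing.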
