241

Mutually Exclusive Weighted Graph Matching Algorithm for Protein-Protein Interaction Network Alignment

Dunham, Brandan 20 October 2016 (has links)
No description available.
242

Modeling and Matching of Landmarks for Automation of Mars Rover Localization

Wang, Jue 05 September 2008 (has links)
No description available.
243

An Isometry-Invariant Spectral Approach for Macro-Molecular Docking

De Youngster, Dela 26 November 2013 (has links)
Proteins and the formation of large protein complexes are essential parts of living organisms. Proteins are present in all aspects of life processes, performing a multitude of functions ranging from being structural components of cells to facilitating the passage of certain molecules between various regions of cells. The 'protein docking problem' refers to the computational task of predicting the appropriate matching pair of a protein (receptor) with respect to another protein (ligand) when the two attempt to bind to one another to form a stable complex. Research shows that matching the three-dimensional (3D) geometric structures of candidate proteins plays a key role in determining a so-called docking pair, which is one of the key aspects of the Computer Aided Drug Design process. However, the active sites responsible for binding do not always present a rigid-body shape matching problem; rather, they may undergo significant deformation when docking occurs, which complicates the problem of finding a match. To address this issue, we present an isometry-invariant and topologically robust partial shape matching method for finding complementary protein binding sites, which we call the ProtoDock algorithm. The ProtoDock algorithm comes in two variations. The first version performs partial shape complementarity matching by initially segmenting the underlying protein object mesh into smaller portions using a spectral mesh segmentation approach. The Heat Kernel Signature (HKS), the underlying basis of our shape descriptor, is subsequently computed for the obtained segments. A final descriptor vector is constructed from the Heat Kernel Signatures and used as the basis for the segment matching. The three descriptor methods employed are the established Bag of Features (BoF) technique and our two novel approaches, Closest Medoid Set (CMS) and Medoid Set Average (MSA). The second variation of our ProtoDock algorithm performs the partial matching by utilizing pointwise HKS descriptors. The use of the pointwise HKS is mainly motivated by the observation that, at suitable diffusion times, the Heat Kernel Signature of a point on a surface sufficiently describes its neighbourhood. Hence, the HKS of a point may serve as the representative descriptor of the region of which it forms a part. We propose three sampling methods---Uniform, Random, and Segment-based Random sampling---for selecting these points for the partial matching. Random and Segment-based Random sampling both prove superior to the Uniform sampling method. Our experimental results, run against the Protein-Protein Benchmark 4.0, demonstrate the viability of our approach, in that it successfully returns known binding segments for known pairing proteins. Furthermore, our ProtoDock-1 algorithm still yields good results for low-resolution protein meshes, resulting in even faster processing and matching times with substantially reduced computational requirements when obtaining the HKS.
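As a concrete illustration of the descriptor underlying both ProtoDock variants, the sketch below computes pointwise Heat Kernel Signatures from the eigendecomposition of a mesh Laplacian. This is a minimal reading of the standard HKS formula, HKS(x, t) = Σᵢ exp(−λᵢt) φᵢ(x)², not the thesis code; the Laplacian L and the time scales are assumed inputs.

```python
# Minimal sketch of the pointwise Heat Kernel Signature (standard formula,
# not the thesis implementation):
#   HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2
# where (lambda_i, phi_i) are eigenpairs of the mesh Laplacian.
import numpy as np
from scipy.sparse.linalg import eigsh

def heat_kernel_signature(L, time_scales, k=100):
    """L: sparse symmetric (n x n) mesh Laplacian (assumed precomputed).
    Returns an (n x len(time_scales)) array of HKS values."""
    # The smallest eigenpairs capture the coarse, isometry-invariant geometry.
    eigvals, eigvecs = eigsh(L, k=k, which="SM")
    hks = np.empty((L.shape[0], len(time_scales)))
    for j, t in enumerate(time_scales):
        hks[:, j] = (np.exp(-eigvals * t) * eigvecs**2).sum(axis=1)
    return hks

# Typical usage samples t logarithmically between 4*ln(10)/lambda_k and
# 4*ln(10)/lambda_2, as in the original HKS paper.
```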
244

Weighted Granular Best Matching Algorithm For Context-aware Computing Systems

Kocaballi, Ahmet Baki 01 January 2005 (has links) (PDF)
A weighted granular best matching algorithm is proposed for the operation of context matching in context-aware computing systems. The new algorithm deals with the subjective, fuzzy, and multidimensional characteristics of contextual information by using weights and a granular structure for contextual information. The proposal is applied to a case study, the CAPRA – Context-Aware Personal Reminder Agent tool, to show the applicability of the new context matching algorithm. The results showed that the proposed algorithm produces matches that are more sensitive to the user's intention, more adaptive to the characteristics of the contextual information, and applicable to a current context-aware system.
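The abstract does not spell out the algorithm, so the following is only a hypothetical sketch of the two ingredients it names: per-dimension weights and a granular (hierarchical) structure for context values. The data layout, function names, and scoring scheme are illustrative assumptions, not the thesis specification.

```python
# Hypothetical sketch of weighted granular context matching; values are
# tuples ordered coarse-to-fine, with one weight per context dimension.
def granular_similarity(query, candidate):
    """Fraction of hierarchy levels that agree before the first mismatch,
    e.g. ('Ankara', 'METU', 'Lab-2') vs ('Ankara', 'METU', 'Lab-5') -> 2/3."""
    depth = max(len(query), len(candidate), 1)
    matched = 0
    for q, c in zip(query, candidate):
        if q != c:
            break
        matched += 1
    return matched / depth

def weighted_match_score(query_ctx, candidate_ctx, weights):
    """Weighted aggregate of per-dimension granular similarities in [0, 1]."""
    total = sum(weights.values())
    return sum(
        w * granular_similarity(query_ctx[dim], candidate_ctx[dim])
        for dim, w in weights.items()
    ) / total
```

Under this scheme a coarse agreement (same city, different building) still earns partial credit, which is one way the "granular" structure can make matching less brittle than exact equality.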
245

Generalizations Of The Popular Matching Problem

Nasre, Meghana 08 1900 (has links) (PDF)
Matching problems arise in several real-world scenarios like assigning posts to applicants, houses to trainees, and room-mates to one another. In this thesis we consider the bipartite matching problem where one side of the bipartition specifies preferences over the other side. That is, we are given a bipartite graph G = (A ∪ P,E) where A denotes the set of applicants, P denotes the set of posts, and the preferences of applicants are specified by ranks on the edges. Several notions of optimality like Pareto-optimality, rank-maximality, and popularity have been studied in the literature; we focus on the notion of popularity. A matching M is more popular than another matching M′ if the number of applicants that prefer M to M′ exceeds the number of applicants that prefer M′ to M. A matching M is said to be popular if there exists no matching that is more popular than M. Popular matchings have the desirable property that no applicant majority can force a migration to another matching. However, popular matchings do not provide a complete answer since there exist simple instances that do not admit any popular matching. Abraham et al. (SICOMP 2007) characterized instances that admit a popular matching and also gave efficient algorithms to find one when it exists. We present several generalizations of the popular matchings problem in this thesis. The majority of our work deals with instances that do not admit any popular matching. We propose three different solution concepts for such instances. A reasonable solution when an instance does not admit a popular matching is to output a matching that is least unpopular amongst the set of unpopular matchings. McCutchen (LATIN 2008) introduced and studied measures of unpopularity, namely the unpopularity factor and unpopularity margin. He proved that computing either a least unpopularity factor matching or a least unpopularity margin matching is NP-hard. We build upon this work and design an O(km√n) time algorithm which produces matchings with bounded unpopularity provided a certain subgraph of G admits an A-complete matching (a matching that matches all the applicants). Here n and m denote the number of vertices and the number of edges in G respectively, and k, which is bounded by |A|, is the number of iterations taken by our algorithm to terminate. We also show that if a certain subgraph of G admits an A-complete matching, then we have computed a matching with the least unpopularity factor. Another feasible solution for instances without any popular matching is to output a mixed matching that is popular. A mixed matching is simply a probability distribution over the set of matchings. A mixed matching Q is popular if no mixed matching is more popular than Q. We seek to answer the existence and computation of popular mixed matchings in a given instance G. We begin with a linear programming formulation to compute a mixed matching with the least unpopularity margin. We show that although the linear program has exponentially many constraints, we have a polynomial time separation oracle and hence a least unpopularity margin mixed matching can be computed in polynomial time. By casting the popular mixed matchings problem as a two player zero-sum game, it is possible to prove that every instance of the popular matchings problem admits a popular mixed matching. Therefore, the matching returned by our linear program is indeed a popular mixed matching. Finally, we propose augmenting the input graph for instances that do not admit any popular matching.
Assume that we are dealing with a set of items B (say, DVDs/books) instead of posts and it is possible to make duplicates of these items. Our goal is to make duplicates of appropriate items such that the augmented graph admits a popular matching. However, since allowing arbitrarily many copies of items is not feasible in practice, we impose restrictions in two forms: (i) associating costs with items, and (ii) bounding the number of copies. In the first case, we assume that we pay a price of cost(b) for every extra copy of b that we make; the first copy is assumed to be given to us for free. The total cost of the augmented instance is the sum of the costs of all the extra copies that we make. Our goal is to compute a minimum cost augmented instance which admits a popular matching. In the second case, along with the input graph G = (A ∪ B,E), we are given a vector ⟨c1, c2, ..., c|B|⟩ denoting upper bounds on the number of copies of every item. We seek to answer whether there exists a vector ⟨x1, x2, ..., x|B|⟩ such that having xi copies of item bi, where 1 ≤ xi ≤ ci, enables the augmented graph to admit a popular matching. We prove that several problems under both these models turn out to be NP-hard – in fact, they remain NP-hard even under severe restrictions on the preference lists. Our final results deal with instances that admit popular matchings. When the input instance admits a popular matching, there may be several popular matchings – in fact, there may be several maximum cardinality popular matchings. Hence one may not be content with returning any maximum cardinality popular matching and may instead ask for an optimal popular matching. Assuming that the notion of optimality is specified as a part of the problem, we present an O(m + n1²) time algorithm for computing an optimal popular matching in G. Here m denotes the number of edges in G and n1 denotes the number of applicants. We also consider the problem of computing a minimum cost popular matching where a price cost(p) and a capacity cap(p) are associated with every post p. A post with capacity cap(p) can be matched with up to cap(p) many applicants. We present an O(mn1) time algorithm to compute a minimum cost popular matching in such instances. We believe that this work provides interesting insights into the popular matchings problem and its variants.
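All of the notions above reduce to pairwise popularity comparisons between matchings. The sketch below, with an assumed data layout (not taken from the thesis), counts the applicant votes that decide whether one matching is more popular than another; a matching M is popular exactly when no M′ wins this vote against it.

```python
# Assumed layout: pref[a] maps each post to applicant a's rank for it (lower
# is better); M1, M2 map applicants to their assigned posts, with unmatched
# applicants simply absent from the dict.
def compare_matchings(pref, M1, M2):
    """Counts the applicants preferring M1 and those preferring M2.
    M1 is more popular than M2 iff the first count exceeds the second."""
    votes_m1 = votes_m2 = 0
    for a, ranks in pref.items():
        r1 = ranks.get(M1.get(a), float("inf"))  # rank of a's partner in M1
        r2 = ranks.get(M2.get(a), float("inf"))  # being unmatched is worst
        if r1 < r2:
            votes_m1 += 1
        elif r2 < r1:
            votes_m2 += 1
    return votes_m1, votes_m2
```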
247

Registration algorithm optimized for simultaneous localization and mapping

Pomerleau, François January 2008 (has links)
Building maps of an unknown environment while keeping track of the current position is a major step towards safe and autonomous robot navigation. Within the last 20 years, Simultaneous Localization And Mapping (SLAM) became a topic of great interest in robotics. The basic idea of this technique is to combine proprioceptive robot motion information with external environmental information to minimize global positioning errors. Because the robot is moving in its environment, exteroceptive data comes from different points of view and must be expressed in the same coordinate system to be combined; the latter process is called registration. Iterative Closest Point (ICP) is a registration algorithm with very good performance in several 3D model reconstruction applications, and was recently applied to SLAM. However, SLAM has specific real-time and robustness requirements compared to 3D model reconstruction, leaving room for registration optimizations specialized for robot mapping. After reviewing existing SLAM approaches, this thesis introduces a new registration variant called Kd-ICP. This registration technique iteratively decreases the error between misaligned point clouds without extracting specific environmental features. Results demonstrate that the new rejection technique used to achieve map registration is more robust to large initial positioning errors. Experiments with simulated and real environments suggest that Kd-ICP is more robust than other ICP variants. Moreover, Kd-ICP is fast enough for real-time applications and is able to deal with sensor occlusions and partially overlapping maps. Realizing fast and robust local map registrations opens the door to new opportunities in SLAM. It becomes feasible to minimize the accumulation of robot positioning errors, to fuse local environmental information, and to reduce memory usage when the robot revisits the same location. It is also possible to evaluate the network constraints needed to minimize global mapping errors.
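For readers unfamiliar with ICP, the following is a compact, illustrative sketch of one point-to-point ICP iteration with kd-tree correspondence search, in the spirit of (but not reproducing) the Kd-ICP variant: match each source point to its nearest target point, then solve for the rigid transform in closed form.

```python
# One point-to-point ICP iteration (illustrative sketch, not the thesis code).
# source and target are (n x 3) and (m x 3) numpy arrays of points.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """Match each source point to its nearest target point, then solve the
    rigid transform (R, t) in closed form (Horn/Kabsch via SVD)."""
    _, idx = cKDTree(target).query(source)    # nearest-neighbour matching
    matched = target[idx]
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return source @ R.T + t, R, t             # iterate until error converges
```

A full ICP loop repeats this step on the returned points until the alignment error stops decreasing; the rejection of bad correspondences studied in the thesis would go between the matching and the closed-form solve.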
248

Impedance matching and DC-DC converter designs for tunable radio frequency based mobile telecommunication systems

Wong, Yan Chiew January 2014 (has links)
Tunability and adaptability for radio frequency (RF) front-ends are highly desirable because they not only enhance functionality and performance but also reduce the circuit size and cost. This thesis presents a number of novel design strategies in DC-DC converters, impedance networks and adaptive algorithms for tunable and adaptable RF based mobile telecommunication systems. Specifically, the studies are divided into three major directions: (a) high voltage switch controller based DC-DC converters for RF switch actuation; (b) impedance network designs for impedance transformation of RF switches; and (c) adaptive algorithms for determining the required impedance states at the RF switches. In the first stage, two-phase step-up switched-capacitor (SC) DC-DC converters are explored. The SC converter has a simple control method and a reduced physical volume. The research investigations started with the linear and the non-linear voltage gain topologies. The non-linear voltage gain topology provides a higher voltage gain in a smaller number of stages compared to the linear voltage gain topology. Amongst the non-linear voltage gain topologies, a Fibonacci SC converter has been identified as having lower losses and a higher conversion ratio compared to other topologies. However, the implementation of a high voltage (HV) gain Fibonacci SC converter is complex due to the requirement of widely different gate voltages for the transistors in the Fibonacci converter. Gate driving strategies have been proposed that only require a few auxiliary transistors in order to provide the required boosted voltages for switching the transistors on and off. This technique reduces the design complexity and increases the reliability of the HV Fibonacci SC converter. For the linear voltage gain topology, a high performance complementary metal-oxide-semiconductor (CMOS) based SC DC-DC converter has been proposed in this work. The HV SC DC-DC converter has been designed in low-voltage (LV) transistor technology in order to achieve higher voltage gain. Adaptive biasing circuits have been proposed to eliminate the leakage current, thereby avoiding the latch-up that normally occurs when low-voltage transistors are used in a high-voltage design. Thus, the SC DC-DC converter achieves more than 25% higher boosted voltage compared to converters that use HV transistors. The proposed design provides a 40% power reduction through the charge recycling circuit that reduces the effect of non-ideality in integrated HV capacitors. Moreover, the SC DC-DC converter achieves a 45% smaller area than the conventional converter through optimising the design parameters. In the second stage, the impedance network designs for transforming the impedance of RF switches to the maximum achievable impedance tuning region are investigated. The maximum achievable tuning region is bounded by the fundamental properties of the selected impedance network topology and by the tunable values of the RF switches, which vary over a limited range. A novel design technique has been proposed to achieve the maximum impedance tuning region by identifying the optimum electrical distance between the RF switches in the impedance network. By varying the electrical distance between the RF switches, high impedance tuning regions are achieved across multiple frequency standards. This technique reduces the cost and the insertion loss of an impedance network as the required number of RF switches is reduced.
The prototype demonstrates high impedance coverage at LTE (700MHz), GSM (900MHz) and GPS (1575MHz). Integration of a tunable impedance network with an antenna for frequency agility at the RF front-end has also been discussed in this work. The integrated system enlarges the bandwidth of a patch antenna to four times the original bandwidth and also improves the antenna return loss. The prototype achieves frequency agility from 700MHz to 3GHz. This work demonstrates that a single transceiver supporting multiple frequency standards can be realised by using a tunable impedance network. In the final stage, an improvement to an adaptive algorithm for determining the impedance states at the RF switches has been proposed. The work has resulted in a novel design technique that reduces the search time of the algorithm, thus minimising the risk of data loss during the impedance tuning process. The approach reduces the search time by more than an order of magnitude by exploiting the relationships among the mass-spring coefficient values derived from the impedance network parameters, thereby significantly reducing the convergence time of the algorithm. With the proposed technique, the algorithm converges in less than half the computational time of the conventional approach. The design strategies proposed in this work contribute towards the realisation of tunable and adaptable RF based mobile telecommunication systems.
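As a small worked illustration of the impedance transformation problem addressed in the second stage, the sketch below applies the textbook L-section matching formulas (Q = √(R_hi/R_lo − 1)); it is not the thesis design, and it assumes purely resistive terminations.

```python
# Textbook L-section match (illustrative, not the thesis design): transform a
# purely resistive load R_load to a source resistance R_src at frequency f.
import math

def l_match(R_src, R_load, f):
    """Returns (L_henries, C_farads) for a lowpass L-network: series inductor
    on the low-resistance side, shunt capacitor across the high side."""
    R_hi, R_lo = max(R_src, R_load), min(R_src, R_load)
    Q = math.sqrt(R_hi / R_lo - 1)   # network quality factor
    w = 2 * math.pi * f
    return Q * R_lo / w, Q / (w * R_hi)

# e.g. matching a 10-ohm load to 50 ohms at GSM 900 MHz:
# L, C = l_match(50, 10, 900e6)     # ~3.5 nH, ~7.1 pF
```

A tunable network of the kind the thesis studies effectively re-solves this problem on the fly by switching reactance values, which is why the achievable tuning region depends on both the topology and the switches' limited value range.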
249

Spoken Language Identification from Processing and Pattern Analysis of Spectrograms

Ford, George Harold 01 January 2014 (has links)
Prior speech and linguistics research has focused on the recognition of phonemes in speech, and their use in the formulation of recognizable words, to determine language identification. Some languages have additional phoneme sounds, which can help identify a language; however, most phonemes are common to a wide variety of languages. Legacy approaches recognize strings of phonemes as syllables, which are then used in dictionary queries to see whether a word can be found that uniquely identifies a language. This dissertation research considers an alternative means of determining the language of speech data based solely on analysis of frequency-domain data. An analytical comparison of three speech language identification techniques is performed. First, a character-based pattern analysis is performed using the Rix and Forster algorithm to replicate their research on language identification. Second, phoneme recognition techniques and the relative patterns of phoneme occurrence in speech samples are measured for language identification performance using the Rix and Forster approach. Finally, an experiment using statistical analysis of time-ensemble frequency spectrum data is assessed for its ability to establish spectral patterns for language identification, along with its performance. This novel approach applies pattern analysis techniques to spectrogram audio data for language identification, extending the Rix and Forster method to the ensemble of spectral frequencies used over the duration of a speech waveform. It is compared to the applications of the Rix and Forster algorithm to character-based and phoneme symbols for language identification on the basis of statistical accuracy, processing time requirements, and spatial processing resource needs. The audio spectrum analysis also demonstrates the ability to perform speaker identification using the same techniques performed for language identification. The results of this research demonstrate the efficacy of audio frequency-domain pattern analysis applied to speech waveform data. It provides an efficient technique for language identification without reliance upon linguistic approaches using phonemes or word derivations. This work also demonstrates a quick, automated means by which information gatherers, travelers, and diplomatic officials might obtain rapid language identification, supporting time-critical determination of appropriate translator resource needs.
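The dissertation's pipeline is not spelled out in the abstract, so the following is an assumed minimal sketch of the frequency-domain idea: reduce an utterance to a time-ensemble statistic of its spectrogram and compare such signatures across languages. The function names and band-averaging scheme are illustrative.

```python
# Assumed minimal sketch of the frequency-domain approach (illustrative,
# not the dissertation's code).
import numpy as np
from scipy.signal import spectrogram

def spectral_signature(samples, rate, n_bands=32):
    """Reduce an utterance to one time-ensemble statistic per frequency band:
    the mean log-power over the whole waveform."""
    _, _, Sxx = spectrogram(samples, fs=rate, nperseg=512)
    log_power = np.log1p(Sxx)                   # compress the dynamic range
    bands = np.array_split(log_power, n_bands, axis=0)
    return np.array([b.mean() for b in bands])  # one value per band

def signature_distance(sig_a, sig_b):
    """Smaller distance -> more likely the same language (or speaker)."""
    return float(np.linalg.norm(sig_a - sig_b))
```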
250

Parallelization of a software based intrusion detection system - Snort

Zhang, Huan January 2011 (has links)
Computer networks are already ubiquitous in people's lives and work, and network security is becoming a critical concern. A simple firewall, which can only scan the bottom four OSI layers, cannot satisfy all security requirements. An intrusion detection system (IDS) with deep packet inspection, which can filter all seven OSI layers, is becoming necessary for more and more networks. However, the processing throughput of IDSs lags far behind current network speeds. Researchers have begun to improve the performance of IDSs by implementing them on different hardware platforms, such as Field-Programmable Gate Arrays (FPGAs) or special network processors. Nevertheless, all of these options are either less flexible or more expensive to deploy. This research focuses on the possibilities of implementing a parallelized IDS in a general computing environment based on Snort, currently the most popular open-source IDS. In this thesis, several possible methods are analyzed for parallelizing the pattern-matching engine on a multicore computer. However, owing to the small granularity of network packets, the pattern-matching engine of Snort proves unsuitable for parallelization. In addition, a pipelined structure for Snort has been implemented and analyzed. The universal packet capture API, LibPCAP, has been modified with a new feature that can capture a packet directly into an external buffer. With this change, the performance of the pipelined Snort improves by up to 60% on an Intel i7 multicore computer for jumbo frames. A primary limitation is memory bandwidth; with higher bandwidth, the performance of the parallelization could be further improved.
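The sketch below is a toy model (not Snort's actual code) of the pipelined structure the thesis evaluates: a capture stage fills an external buffer, as in the modified LibPCAP, while a separate detection stage drains it, so packet I/O and pattern matching overlap.

```python
# Toy model of a two-stage capture/detection pipeline (illustrative only).
import queue
import threading

buf = queue.Queue(maxsize=1024)      # stand-in for the external packet buffer

def capture_stage(packets):
    for pkt in packets:
        buf.put(pkt)                 # blocks when the detection stage lags
    buf.put(None)                    # sentinel marks the end of the stream

def detection_stage(match):
    while (pkt := buf.get()) is not None:
        match(pkt)                   # the pattern-matching engine runs here

def run_pipeline(packets, match):
    producer = threading.Thread(target=capture_stage, args=(packets,))
    producer.start()
    detection_stage(match)
    producer.join()
```

The bounded queue mirrors the memory-bandwidth limitation noted above: once the buffer is full, the capture stage stalls until detection catches up.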
