291

Quantization Based Data Hiding Strategies With Visual Applications

Esen, Ersin 01 February 2010 (has links) (PDF)
The first contribution of this thesis is the proposed data hiding method, TCQ-IS. The method is based on Trellis Coded Quantization (TCQ), whose initial state selection is arbitrary. TCQ-IS exploits this fact to hide data. It is a practical multi-dimensional data hiding method that eliminates the prohibitive task of designing high-dimensional quantizers. The strengths and weaknesses of the method are demonstrated by various experiments. The second contribution is the proposed data hiding method, Forbidden Zone Data Hiding (FZDH), which relies on the concept of a "forbidden zone", in which the host signal is not altered. The main motive of FZDH is to introduce only as much distortion as needed, while keeping a range of the host signal intact depending on the desired level of robustness. FZDH is compared against Quantization Index Modulation (QIM) as well as Distortion-Compensated QIM (DC-QIM) and Spread-Transform QIM (ST-QIM). FZDH outperforms QIM even in 1-D, and DC-QIM in higher dimensions. Furthermore, FZDH is comparable with ST-QIM in certain operation regimes. The final contribution is a video data hiding framework that combines FZDH, selective embedding, and Repeat Accumulate (RA) codes. De-synchronization due to selective embedding is handled with RA codes. By means of simple rules applied to the embedded frame markers, a certain level of robustness against temporal attacks is introduced. Selected coefficients are used to embed message bits by employing multi-dimensional FZDH. The framework is tested on typical broadcast material against common video processing attacks. The results indicate that the framework can be utilized in real-life applications.
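
To make the quantization-based embedding concrete, here is a minimal 1-D Python sketch contrasting plain QIM with a forbidden-zone-style embedder. The parameterization (an allowed zone of half-width delta/4 − r around each lattice point, with robustness margin r) is an illustrative reading of the abstract, not the exact construction in the thesis.

```python
import numpy as np

def qim_embed(x, bit, delta=1.0):
    """Standard 1-D Quantization Index Modulation: quantize the host
    sample onto the lattice associated with the message bit."""
    offset = bit * delta / 2.0
    return np.round((x - offset) / delta) * delta + offset

def qim_detect(y, delta=1.0):
    """Minimum-distance decoding: pick the bit whose lattice is closest."""
    d0 = np.abs(y - qim_embed(y, 0, delta))
    d1 = np.abs(y - qim_embed(y, 1, delta))
    return int(d1 < d0)

def fzdh_embed(x, bit, delta=1.0, r=0.1):
    """Illustrative 1-D forbidden-zone embedder: leave the host sample
    intact if it already decodes to `bit` with margin r, otherwise move
    it only to the allowed-zone boundary, introducing distortion
    "as much as needed"."""
    center = qim_embed(x, bit, delta)           # nearest lattice point of this bit
    half = delta / 4.0 - r                      # allowed-zone half-width
    if abs(x - center) <= half:                 # already safely decodable: no change
        return x
    return center + np.sign(x - center) * half  # minimal move to the boundary
```

With r = 0, any host sample that already decodes correctly is left untouched; at r = delta/4 every sample is moved onto its lattice point and the sketch degenerates to plain QIM, which mirrors the distortion/robustness trade-off described above.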
292

Neural-Symbolic Integration / Neuro-Symbolische Integration

Bader, Sebastian 15 December 2009 (has links) (PDF)
In this thesis, we discuss different techniques to bridge the gap between two different approaches to artificial intelligence: the symbolic and the connectionist paradigm. Both approaches have quite contrasting advantages and disadvantages. Research in the area of neural-symbolic integration aims at bridging the gap between them. Starting from a human-readable logic program, we construct connectionist systems that behave equivalently. Afterwards, those systems can be trained, and the refined knowledge can later be extracted.
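
As a flavor of such constructions, below is a tiny Python sketch of the classic "core method" idea: a propositional logic program is read as a network of AND units (one per rule) feeding OR units (one per head), which together compute one step of the immediate-consequence operator T_P; iterating reaches the least fixed point. This plain-Python illustration is an assumed simplification, not Bader's actual construction, which handles first-order programs and trainable networks.

```python
# Program: each head maps to a list of rules (positive_body, negative_body).
program = {
    "a": [([], [])],               # a.            (a fact: empty body)
    "b": [(["a"], [])],            # b :- a.
    "c": [(["b"], ["d"])],         # c :- b, not d.
}

def tp_step(interp):
    """One step of T_P: each rule acts as an AND unit over its body
    literals, each head as an OR unit over its rule units."""
    out = set()
    for head, rules in program.items():
        for pos, neg in rules:
            if all(p in interp for p in pos) and all(n not in interp for n in neg):
                out.add(head)
    return out

I = set()
while True:                        # iterate T_P to its least fixed point
    J = tp_step(I)
    if J == I:
        break
    I = J
print(sorted(I))                   # -> ['a', 'b', 'c']
```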
293

Interposer platforms featuring polymer-enhanced through silicon vias for microelectronic systems

Thadesar, Paragkumar A. 08 June 2015 (has links)
Novel polymer-enhanced photodefined through-silicon via (TSV) and passive technologies have been demonstrated for silicon interposers to obtain compact heterogeneous computing and mixed-signal systems. These technologies include: (1) Polymer-clad TSVs with thick (~20 µm) liners to help reduce TSV losses and stress, and to obtain optical TSVs in parallel for interposer-to-interposer long-distance communication; (2) Polymer-embedded vias, with copper vias embedded in polymer wells, to significantly reduce TSV losses; (3) Coaxial vias in polymer wells to reduce TSV losses with controlled impedance; (4) Antennas over polymer wells to attain high radiation efficiency; and (5) High-Q inductors over polymer wells. Cleanroom fabrication and characterization of the technologies have been carried out. For the fabricated polymer-clad TSVs, resistance and synchrotron x-ray diffraction (XRD) measurements have been performed. High-frequency measurements up to 170 GHz and time-domain measurements up to 10 Gbps have been performed on the fabricated polymer-embedded vias. For the fabricated coaxial vias and inductors, high-frequency measurements extend up to 50 GHz. Lastly, the fabricated antennas have been measured in the W-band.
294

Large-scale network analytics

Song, Han Hee, 1978- 05 October 2012 (has links)
Scalable and accurate analysis of networks is essential to a wide variety of existing and emerging network systems. Specifically, network measurement and analysis helps to understand networks, improve existing services, and enable new data-mining applications. To support various services and applications in large-scale networks, network analytics must address the following challenges: (i) how to conduct scalable analysis in networks with a large number of nodes and links, (ii) how to flexibly accommodate various objectives from different administrative tasks, and (iii) how to cope with dynamic changes in the networks. This dissertation presents novel path analysis schemes that effectively address the above challenges in analyzing pair-wise relationships among networked entities. In doing so, we make the following three major contributions to large-scale IP networks, social networks, and application service networks. For IP networks, we propose an accurate and flexible framework for path property monitoring. Analyzing the performance side of paths between pairs of nodes, our framework incorporates approaches that perform exact reconstruction of path properties as well as approximate reconstruction. Our framework is highly scalable, supporting measurement experiments that span thousands of routers and end hosts, and flexible enough to accommodate a variety of design requirements. For social networks, we present scalable and accurate graph embedding schemes. Aimed at analyzing the pair-wise relationships of social network users, we present three dimensionality reduction schemes leveraging matrix factorization, count-min sketch, and graph clustering paired with spectral graph embedding. As concrete applications showing the practical value of our schemes, we apply them to the important social analysis tasks of proximity estimation, missing link inference, and link prediction. The results clearly demonstrate the accuracy, scalability, and flexibility of our schemes for analyzing social networks with millions of nodes and tens of millions of links. For application service networks, we provide a proactive service quality assessment scheme. Analyzing the relationship between the satisfaction level of subscribers of an IPTV service and network performance indicators, our proposed scheme proactively (i.e., before IPTV subscribers complain) assesses user-perceived service quality using performance metrics collected from the network. In our evaluation using network data collected from a commercial IPTV service provider, we show that our scheme is able to predict 60% of the service problems reported by customers, with a false-positive rate of only 0.1%.
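
As an illustration of the graph-embedding idea behind proximity estimation and missing-link inference, here is a small numpy sketch: a partially observed, symmetric proximity matrix is factored into a rank-k embedding, and the low-rank reconstruction scores unobserved pairs. The setup (random low-rank ground truth, a ~30% observation mask, rescaling by the observation rate) is synthetic and illustrative, not the dissertation's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 8
U_true = rng.normal(size=(n, k))
P = U_true @ U_true.T                        # ground-truth pairwise proximity

obs = np.triu(rng.random((n, n)) < 0.3, 1)   # observe ~30% of pairs
obs = obs | obs.T                            # keep the mask symmetric
P_obs = np.where(obs, P, 0.0)

# Rank-k spectral embedding of the observed matrix via truncated SVD
U, s, Vt = np.linalg.svd(P_obs)
P_hat = (U[:, :k] * s[:k]) @ Vt[:k, :]       # proximity scores for ALL pairs

i, j = 3, 7                                  # score an (possibly unobserved) pair;
print(P[i, j], P_hat[i, j] / 0.3)            # rescale by the observation rate
```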
295

Discriminative object categorization with external semantic knowledge

Hwang, Sung Ju 25 September 2013 (has links)
Visual object category recognition is one of the most challenging problems in computer vision. Even assuming that we can obtain a near-perfect instance-level representation with the advances in visual input devices and low-level vision techniques, object categorization still remains a difficult problem because it requires drawing boundaries between instances in a continuous world, where the boundaries are solely defined by human conceptualization. Object categorization is essentially a perceptual process that takes place in a human-defined semantic space. In this semantic space, the categories reside not in isolation, but in relation to others. Some categories are similar, grouped, or co-occurring, and some are not. However, despite this semantic nature of object categorization, most of today's automatic visual category recognition systems rely only on the category labels for training discriminative recognition with statistical machine learning techniques. In many cases, this can result in the recognition model being misled into learning incorrect associations between visual features and the semantic labels, essentially overfitting to training-set biases. This limits the model's prediction power when new test instances are given. Using semantic knowledge has great potential to benefit object category recognition. First, semantic knowledge can guide the training model to learn correct associations between visual features and the categories. Second, semantics provide much richer information beyond the membership information given by the labels, in the form of inter-category and category-attribute distances, relations, and structures. Finally, semantic knowledge scales well, as the number of relations between categories grows with an increasing number of categories. My goal in this thesis is to learn discriminative models for categorization that leverage semantic knowledge for object recognition, with a special focus on the semantic relationships among different categories and concepts. To this end, I explore three semantic sources, namely attributes, taxonomies, and analogies, and I show how to incorporate them into the original discriminative model as a form of structural regularization. In particular, for each form of semantic knowledge I present a feature learning approach that defines a semantic embedding to support the object categorization task. The regularization penalizes models that deviate from the known structures according to the semantic knowledge provided. The first semantic source I explore is attributes, which are human-describable semantic characteristics of an instance. While existing work treated them as mid-level features that did not introduce new information, I focus on their potential as a means to better guide the learning of object categories, by requiring the object category classifiers to share features with attribute classifiers in a multitask feature learning framework. This approach essentially discovers the common low-dimensional features that support predictions in both semantic spaces. Then, I move on to the semantic taxonomy, which is another valuable source of semantic knowledge. The merging and splitting criteria for the categories on a taxonomy are human-defined, and I aim to exploit this implicit semantic knowledge.
Specifically, I propose a tree of metrics (ToM) that learns metrics capturing granularity-specific similarities at different nodes of a given semantic taxonomy, using a regularizer to isolate granularity-specific disjoint features. This approach captures the intuition that the features used to discriminate the parent class should differ from the features used for the children classes. Such learned metrics can be used for hierarchical classification. The use of a single taxonomy can be limiting in that its structure is not optimal for hierarchical classification, and there may exist no single optimal semantic taxonomy that perfectly aligns with visual distributions. Thus, I next propose a way to overcome this limitation by leveraging multiple taxonomies as semantic sources, combining the complementary information acquired across multiple semantic views and granularities. This allows us, for example, to synthesize semantics from both 'Biological' and 'Appearance'-based taxonomies when learning the visual features. Finally, moving beyond the pairwise similarities exploited by the previous two models, I exploit analogies, which encode the relational similarities between two related pairs of categories. Specifically, I use analogies to regularize a discriminatively learned semantic embedding space for categorization, such that the displacement between the two category embeddings in one pair of the analogy is enforced to match the displacement in the other pair. Such a constraint allows a more confusable pair of categories to benefit from the clear separation in the matched pair of categories that shares the same relation. All of these methods are evaluated on challenging public datasets, and are shown to effectively improve recognition accuracy over purely discriminative models, while also guiding the recognition to be more consistent with human semantic perception. Further, the applications of the proposed methods are not limited to visual object categorization in computer vision; they can be applied to any classification problem where there exists some domain knowledge about the relationships or structures between the classes. Possible applications of my methods outside the visual recognition domain include document classification in natural language processing, and gene-based animal or protein classification in computational biology.
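
To make one of these regularizers concrete, the sketch below implements an assumed form of the analogy constraint: for an analogy (a : b :: c : d) over category embeddings E, the penalty ||(E[a] − E[b]) − (E[c] − E[d])||² is driven to zero by gradient descent. The regularizer-only objective and all names are illustrative; in the thesis the penalty augments a discriminative categorization loss.

```python
import numpy as np

def analogy_penalty_grad(E, a, b, c, d):
    """Penalty R = ||(E[a]-E[b]) - (E[c]-E[d])||^2 and its gradient in E."""
    diff = (E[a] - E[b]) - (E[c] - E[d])
    grad = np.zeros_like(E)
    grad[a] += 2 * diff
    grad[b] -= 2 * diff
    grad[c] -= 2 * diff
    grad[d] += 2 * diff
    return float(diff @ diff), grad

rng = np.random.default_rng(1)
E = rng.normal(size=(4, 16))              # embeddings of 4 categories
lam, lr = 0.5, 0.1
for _ in range(200):                      # descend on the regularizer alone
    R, g = analogy_penalty_grad(E, 0, 1, 2, 3)
    E -= lr * lam * g
print(R)                                  # -> essentially 0: displacements aligned
```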
296

A Combinatorial Algorithm for Minimizing the Maximum Laplacian Eigenvalue of Weighted Bipartite Graphs

Helmberg, Christoph, Rocha, Israel, Schwerdtfeger, Uwe 13 November 2015 (has links) (PDF)
We give a strongly polynomial time combinatorial algorithm to minimise the largest eigenvalue of the weighted Laplacian of a bipartite graph. This is accomplished by solving the dual graph embedding problem which arises from a semidefinite programming formulation. In particular, the problem for trees can be solved in time cubic in the number of vertices.
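
For readers who want to see the objective numerically, here is a small numpy sketch (illustrative only; the paper's contribution is a combinatorial algorithm that avoids dense eigensolvers and semidefinite programming) that builds the weighted Laplacian L = D − W of a bipartite graph and evaluates the largest eigenvalue being minimized. The example graph and weights are arbitrary.

```python
import numpy as np

edges = [(0, 3), (0, 4), (1, 3), (2, 4)]   # bipartition {0,1,2} x {3,4}
w = np.array([1.0, 2.0, 0.5, 1.5])         # edge weights

n = 5
L = np.zeros((n, n))
for (i, j), wij in zip(edges, w):          # L = D - W, assembled edge by edge
    L[i, i] += wij
    L[j, j] += wij
    L[i, j] -= wij
    L[j, i] -= wij

lam_max = np.linalg.eigvalsh(L)[-1]        # largest Laplacian eigenvalue
print(lam_max)
```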
297

A Complexity Theory for VLSI

Thompson, C. D. 01 August 1980 (has links)
The established methodologies for studying computational complexity can be applied to the new problems posed by very large-scale integrated (VLSI) circuits. This thesis develops a "VLSI model of computation" and derives upper and lower bounds on the silicon area and time required to solve the problems of sorting and discrete Fourier transformation. In particular, the area A and time T taken by any VLSI chip using any algorithm to perform an N-point Fourier transform must satisfy AT^2 ≥ cN^2 log^2 N, for some fixed c > 0. A more general result for both sorting and Fourier transformation is that AT^(2x) = Ω(N^(1+x) log^(2x) N) for any x in the range 0 < x < 1. Also, the energy dissipated by a VLSI chip during the solution of either of these problems is at least Ω(N^(3/2) log N). The tightness of these bounds is demonstrated by the existence of nearly optimal circuits for both sorting and Fourier transformation. The circuits based on the shuffle-exchange interconnection pattern are fast but large: T = O(log^2 N) for Fourier transformation, T = O(log^3 N) for sorting; both have area A of at most O(N^2 / log^(1/2) N). The circuits based on the mesh interconnection pattern are slow but small: T = O(N^(1/2) log log N), A = O(N log^2 N).
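
A quick worked calculation, using only the bounds quoted above, shows how close the two constructions come to the AT^2 lower bound (this is arithmetic on the stated bounds, not additional results from the thesis):

```latex
\begin{align*}
\text{mesh:} \quad AT^2 &= O(N \log^2 N) \cdot O\big(N^{1/2} \log\log N\big)^2
                         = O\big(N^2 \log^2 N \,(\log\log N)^2\big),\\
\text{shuffle-exchange:} \quad AT^2 &= O\big(N^2 / \log^{1/2} N\big) \cdot O\big(\log^2 N\big)^2
                         = O\big(N^2 \log^{7/2} N\big),
\end{align*}
```

i.e. within (log log N)^2 and log^(3/2) N factors, respectively, of the AT^2 ≥ cN^2 log^2 N lower bound.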
298

Characterization and modeling of the high-frequency performance of interconnect networks in advanced 3D circuits: application to the realization of next-generation imagers

Fourneaud, Ludovic 11 December 2012 (has links) (PDF)
This doctoral work studies the new types of interconnects found in advanced 3D microelectronic integration, such as TSVs (Through Silicon Vias), redistribution lines (RDL), and copper pillars (Cu-Pillar), for example in "imager" applications where an optical-sensor die is stacked on a processor die. To understand and quantify the electrical behavior of these new interconnect components, the first problem addressed in the thesis is their electrical characterization over a very wide frequency band (10 MHz - 60 GHz), with the components embedded in their complex integration environments, and in particular the analysis of the impact of losses in silicon substrates with conductivities ranging from very low (0 S/m) to very high (10,000 S/m). This leads to a second problem: the need to develop mathematical models capable of predicting the electrical behavior of 3D interconnects. The electrical models developed must account for losses, coupling, and certain phenomena that appear as frequency increases (eddy currents), as functions of material properties, dimensions, and architectures (high to low integration density). Finally, building on the developed models, the last part presents a study of routing strategies in 3D die stacks based on a signal-integrity analysis. By contrasting different environments, binary signal rates, and TSV and RDL dimensions, conclusions emerge on the strategies to adopt to improve the performance of circuits designed in 3D integration.
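
To give a feel for the lumped elements behind such TSV models, here is a back-of-the-envelope Python sketch computing the copper core's DC resistance and the coaxial capacitance across a thick polymer liner. All dimensions and material values are assumed for illustration and are not taken from the thesis.

```python
import math

rho_cu  = 1.7e-8        # copper resistivity, ohm*m
eps_0   = 8.854e-12     # vacuum permittivity, F/m
eps_r   = 2.7           # assumed relative permittivity of the polymer liner
L       = 100e-6        # via height, m (assumed)
r_cu    = 5e-6          # copper core radius, m (assumed)
t_liner = 20e-6         # thick polymer liner, m

R_dc = rho_cu * L / (math.pi * r_cu**2)        # R = rho * L / A
# Coaxial capacitor across the liner: C = 2*pi*eps*L / ln(r_out / r_in)
C_liner = 2 * math.pi * eps_0 * eps_r * L / math.log((r_cu + t_liner) / r_cu)
print(f"R_dc = {R_dc * 1e3:.1f} mOhm, C_liner = {C_liner * 1e15:.1f} fF")
```

With these numbers the liner capacitance lands near 10 fF: the thicker the dielectric, the smaller the capacitance to the substrate, which is precisely why thick polymer liners reduce TSV losses.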
299

Simple, Faster Kinetic Data Structures

Rahmati, Zahed 28 August 2014 (has links)
Proximity problems and point set embeddability problems are fundamental and well-studied in computational geometry and graph drawing. Examples of such problems that are of particular interest to us in this dissertation include: finding the closest pair among a set P of points, finding the k-nearest neighbors to each point p in P, answering reverse k-nearest neighbor queries, computing the Yao graph, the Semi-Yao graph and the Euclidean minimum spanning tree of P, and mapping the vertices of a planar graph to a set P of points without inducing edge crossings. In this dissertation, we consider the so-called kinetic versions of these problems, that is, the points are allowed to move continuously along known trajectories, which are subject to change. We design a set of data structures and a mechanism to efficiently update them. These updates occur at critical, discrete times. Also, a query may arrive at any time. We want to answer queries quickly without solving problems from scratch, so we maintain solutions continuously. We present new techniques that give kinetic solutions with better performance for some of these problems, and we provide the first kinetic results for others. In particular, we provide:
• A simple kinetic data structure (KDS) to maintain all the nearest neighbors and the closest pair. Our deterministic kinetic approach for maintenance of all the nearest neighbors improves on the previous randomized kinetic algorithm.
• An exact KDS for maintenance of the Euclidean minimum spanning tree, which improves on the previous KDS.
• The first KDSs for maintenance of the Yao graph and the Semi-Yao graph.
• The first KDS to consider maintaining plane graphs on moving points.
• The first KDS for maintenance of all the k-nearest neighbors, for any k ≥ 1.
• The first KDS to answer reverse k-nearest neighbor queries, for any k ≥ 1 in any fixed dimension, on a set of moving points.
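
To illustrate the KDS pattern these results build on (certificates, an event queue, and local rescheduling when a certificate fails), here is a minimal 1-D kinetic sorted-order sketch in Python for linearly moving points. It is a toy under those assumptions, far simpler than any structure in the dissertation.

```python
import heapq

pts = [(0.0, 2.0), (1.0, 0.5), (3.0, -1.0)]        # (a_i, b_i): x_i(t) = a_i + b_i*t

def cross_time(i, j, now):
    """Earliest t > now at which points i and j coincide, or None."""
    (ai, bi), (aj, bj) = pts[i], pts[j]
    if bi == bj:
        return None
    t = (aj - ai) / (bi - bj)
    return t if t > now else None

order = sorted(range(len(pts)), key=lambda i: pts[i][0])   # sorted order at t = 0
events = []                                                # certificate failure queue
for k in range(len(order) - 1):
    t = cross_time(order[k], order[k + 1], 0.0)
    if t is not None:
        heapq.heappush(events, (t, k, order[k], order[k + 1]))

while events:
    t, k, i, j = heapq.heappop(events)
    if k >= len(order) - 1 or order[k] != i or order[k + 1] != j:
        continue                                           # stale certificate: skip
    order[k], order[k + 1] = j, i                          # neighbors swap at time t
    print(f"t = {t:.3f}: order -> {order}")
    for m in (k - 1, k, k + 1):                            # reschedule only local certs
        if 0 <= m < len(order) - 1:
            tm = cross_time(order[m], order[m + 1], t)
            if tm is not None:
                heapq.heappush(events, (tm, m, order[m], order[m + 1]))
```

Running it prints the three swap events (t ≈ 0.667, 1.000, 1.333) that take the order from [0, 1, 2] to [2, 1, 0]; the key KDS property is that each event triggers only O(1) local certificate updates rather than a recomputation from scratch.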
300

Digital image watermarking methods for copyright protection and authentication

Woo, Chaw-Seng January 2007 (has links)
The ease of digital media modification and dissemination necessitates content protection beyond encryption. Information hidden as digital watermarks in multimedia enables protection mechanisms in decrypted content. The aims of this research are three-fold: (i) to investigate the strengths and limitations of current watermarking schemes, (ii) to design and develop new schemes that overcome those limitations, and (iii) to evaluate the new schemes using application scenarios of copyright protection, tamper detection, and authentication. We focus on geometrically robust watermarking and semi-fragile watermarking for digital images. Additionally, hybrid schemes that combine the strengths of both robust and semi-fragile watermarks are studied. Robust watermarks are well suited for copyright protection because they stay intact with the image under various manipulations. We investigated two major approaches to robust watermarking. In the synchronization approach, we employed motion estimation for watermark resynchronization. We also developed a novel watermark resynchronization method with low computational cost, based on scale normalization and flowline curvature. In the second approach, we first analyzed and improved a blind watermark detection method; the improved method significantly reduces the computational cost of watermark embedding. We then created a geometrically invariant domain using a combination of transforms and adapted the improved blind watermark detection method to it. This entirely eliminates the need for resynchronization in watermark detection, a very desirable property rarely found in existing schemes. On the other hand, semi-fragile watermarks are good at content authentication because they can differentiate minor image enhancements from major manipulations. New capabilities of semi-fragile watermarks are identified. We then developed a semi-fragile watermarking method in the wavelet domain that offers content authentication and tamper localization. Unlike others, our scheme overcomes a major challenge known as the cropping attack and provides approximate content recovery without resorting to the original image. Hybrid schemes combine robust and semi-fragile watermarks to offer deductive information in digital media forensics. We first carried out a pilot study combining robust and fragile watermarks. We then performed a comparative analysis of two implementation methods of a hybrid watermarking scheme: the first overlaps the robust watermark and the fragile watermark, while the second uses non-overlapping robust and fragile watermarks. Based on the results of this analysis, we merged our geometrically invariant domain with our semi-fragile watermark to produce a hybrid scheme. The hybrid scheme fulfilled the copyright protection, tamper detection, and content authentication objectives when evaluated in an investigation scenario.
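
As a concrete, hedged illustration of the semi-fragile idea (an assumed toy scheme, not the thesis' method), the sketch below uses the PyWavelets package to embed bits by QIM-quantizing first-level wavelet detail coefficients; mild processing leaves the bits readable, while local tampering destroys them in the affected region, enabling tamper localization.

```python
import numpy as np
import pywt

def embed(img, bits, delta=8.0):
    """Quantize one horizontal-detail coefficient per message bit (QIM)."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), 'haar')
    flat = cH.reshape(-1)                            # view into cH
    for k, b in enumerate(bits):
        off = b * delta / 2.0
        flat[k] = np.round((flat[k] - off) / delta) * delta + off
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')

def extract(img, nbits, delta=8.0):
    """Read each bit from the coefficient's distance to the coarse lattice."""
    _, (cH, _, _) = pywt.dwt2(img.astype(float), 'haar')
    flat = cH.reshape(-1)
    return [int(abs(flat[k] - np.round(flat[k] / delta) * delta) > delta / 4)
            for k in range(nbits)]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
wm = embed(img, bits)
print(extract(wm, len(bits)))                        # -> [1, 0, 1, 1, 0, 0, 1, 0]
```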
