291
On Graph Embeddings and a new Minor Monotone Graph Parameter associated with the Algebraic Connectivity of a Graph. Wappler, Markus, 07 June 2013.
We consider the problem of maximizing the second smallest eigenvalue of the weighted Laplacian of a (simple) graph over all nonnegative edge weightings with bounded total weight.
We generalize this problem by introducing node significances and edge lengths.
We give a formulation of this generalized problem as a semidefinite program.
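For readers who want to experiment, the following is a minimal sketch of the basic problem (without node significances and edge lengths) as a semidefinite program, assuming CVXPY with an SDP-capable solver and a small toy graph; it illustrates the formulation rather than reproducing the thesis' exact program.

```python
# Sketch: maximize the second smallest eigenvalue of the weighted Laplacian
# over nonnegative edge weights with total weight at most 1 (toy graph).
import numpy as np
import cvxpy as cp

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # example graph on 4 nodes
n = 4

w = cp.Variable(len(edges), nonneg=True)            # edge weights
t = cp.Variable()                                    # lower bound on lambda_2

# Weighted Laplacian L(w) = sum_e w_e (e_i - e_j)(e_i - e_j)^T
L = 0
for k, (i, j) in enumerate(edges):
    b = np.zeros((n, 1)); b[i], b[j] = 1.0, -1.0
    L = L + w[k] * (b @ b.T)

P = np.eye(n) - np.ones((n, n)) / n                  # projector onto the complement of the all-ones vector
S = cp.Variable((n, n), symmetric=True)              # symmetric slack so the PSD constraint is well-posed
constraints = [S == L - t * P, S >> 0, cp.sum(w) <= 1]

prob = cp.Problem(cp.Maximize(t), constraints)
prob.solve()
print("maximal lambda_2:", round(t.value, 4))
print("optimal edge weights:", np.round(w.value, 4))
```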
The dual program can be equivalently written as an embedding problem: find an embedding of the n nodes of the graph in n-space such that their barycenter is at the origin, the distance between adjacent nodes is bounded by the respective edge length, and the embedded nodes are spread as much as possible, i.e., the sum of the squared norms is maximized.
We prove the following necessary condition for optimal embeddings. For any separator of the graph, at least one of the components has the property that each straight-line segment between the origin and an embedded node of the component intersects the convex hull of the embedded nodes of the separator.
There always exists an optimal embedding of the graph whose dimension is bounded by the tree-width of the graph plus one.
We define the rotational dimension of a graph: the minimal dimension k such that, for all choices of the node significances and edge lengths, an optimal embedding of the graph can be found in k-space.
The rotational dimension of a graph is a minor monotone graph parameter.
We characterize the graphs with rotational dimension up to two.
292
Financial time series analysis: Chaos and neurodynamics approach. Sawaya, Antonio, January 2010.
This work aims at combining the postulates of Chaos theory with the classification and predictive capability of Artificial Neural Networks in the field of financial time series prediction. Chaos theory provides valuable qualitative and quantitative tools to decide on the predictability of a chaotic system. Quantitative measurements based on Chaos theory are used to decide a priori whether a time series, or a portion of a time series, is predictable, while qualitative tools based on Chaos theory provide further observations and analysis on the predictability in cases where the measurements give negative answers. Phase space reconstruction is achieved by time-delay embedding, resulting in multiple embedded vectors. The cognitive approach suggested is inspired by the capability of some chartists to predict the direction of an index by looking at the price time series. Thus, in this work, the calculation of the embedding dimension and the separation in Takens' embedding theorem for phase space reconstruction is not limited to False Nearest Neighbor, Differential Entropy, or any other specific method; rather, this work is interested in all embedding dimensions and separations, regarded as different ways of looking at a time series by different chartists based on their expectations. Prior to prediction, the embedded vectors of the phase space are classified with Fuzzy-ART; then, for each class, a back-propagation Neural Network is trained to predict the last element of each vector, with all previous elements of a vector used as features.
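As an illustration of the phase space reconstruction step, the following is a minimal sketch of time-delay embedding in plain NumPy; the function name and the embedding dimension and delay chosen here are illustrative, and the Fuzzy-ART classification and back-propagation prediction stages are not shown.

```python
# Sketch: build delay vectors [x(t), x(t+tau), ..., x(t+(dim-1)tau)];
# the last element of each vector is the prediction target, the rest are features.
import numpy as np

def delay_embed(series, dim, tau):
    """Return the matrix of delay vectors for a scalar time series."""
    n = len(series) - (dim - 1) * tau
    return np.array([series[i : i + dim * tau : tau] for i in range(n)])

prices = np.cumsum(np.random.randn(500))      # stand-in for an index series
vectors = delay_embed(prices, dim=5, tau=2)
features, targets = vectors[:, :-1], vectors[:, -1]
print(features.shape, targets.shape)           # (492, 4) (492,)
```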
293
Control Of Hexapedal Pronking Through A Dynamically Embedded Spring Loaded Inverted Pendulum Template. Ankarali, Mustafa Mert, 01 February 2010.
Pronking is a legged locomotion gait in which all legs are used in synchrony, usually resulting in slow speeds but long flight phases and large jumping heights that may potentially be useful for mobile robots locomoting in cluttered natural environments. Instantiations of this gait for robotic systems suffer from severe pitch instability, either due to underactuated leg designs or the open-loop nature of proposed controllers. Nevertheless, both the kinematic simplicity of this gait and its dynamic nature suggest that the Spring-Loaded Inverted Pendulum (SLIP) model, a very successful predictive model for both natural and robotic runners, would be a good basis for more robust and maneuverable robotic pronking. In the scope of this thesis, we describe a novel controller to achieve stable and controllable pronking for a planar, underactuated hexapod model, based on the idea of "template-based control", a controller structure based on the embedding of a simple dynamical template within a more complex anchor system. In this context, high-level control of the gait is regulated through speed and height commands to the SLIP template, while the embedding controller, based on approximate inverse dynamics and carefully designed passive robot morphology, ensures the stability of the remaining degrees of freedom. We show through extensive simulation experiments that, unlike existing open-loop alternatives, the resulting control structure provides stability, explicit maneuverability, and significant robustness against sensor noise.
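For context, the following is a minimal sketch of the stance-phase dynamics of the planar SLIP template, assuming SciPy and hypothetical parameter values; the hexapod anchor, flight phase, and the embedding controller itself are not reproduced here.

```python
# Sketch: point-mass body on a massless spring leg; radial spring force plus gravity.
import numpy as np
from scipy.integrate import solve_ivp

m, k, r0, g = 2.0, 400.0, 0.3, 9.81          # mass, spring stiffness, rest length, gravity

def stance_dynamics(t, state, foot_x):
    x, y, vx, vy = state
    dx, dy = x - foot_x, y                    # leg vector from toe to hip
    r = np.hypot(dx, dy)
    f = k * (r0 - r)                          # radial spring force
    ax = f * dx / (m * r)
    ay = f * dy / (m * r) - g
    return [vx, vy, ax, ay]

# integrate the stance dynamics over a short window starting at touchdown
sol = solve_ivp(stance_dynamics, [0.0, 0.2], [0.05, 0.28, 1.0, -0.5],
                args=(0.0,), max_step=1e-3)
print(sol.y[:, -1])                           # body state at the end of the window
```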
294
Quantization Based Data Hiding Strategies With Visual Applications. Esen, Ersin, 01 February 2010.
The first area explored in this thesis is the proposed data hiding method, TCQ-IS. The method is based on Trellis Coded Quantization (TCQ), whose initial state selection is arbitrary. TCQ-IS exploits this fact to hide data. It is a practical multi-dimensional method that eliminates the prohibitive task of designing high-dimensional quantizers. The strengths and weaknesses of the method are demonstrated by various experiments.
The second contribution is the proposed data hiding method, Forbidden Zone Data Hiding (FZDH), which relies on the concept of a "forbidden zone", where the host signal is not altered. The main motive of FZDH is to introduce only as much distortion as needed, while keeping a range of the host signal intact depending on the desired level of robustness. FZDH is compared against Quantization Index Modulation (QIM) as well as DC-QIM and ST-QIM. FZDH outperforms QIM even in 1-D, and outperforms DC-QIM in higher dimensions. Furthermore, FZDH is comparable with ST-QIM for certain operation regimes.
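For comparison purposes, the following is a minimal sketch of the scalar QIM baseline mentioned above (not FZDH itself), with an illustrative step size; each host sample is quantized with one of two dithered quantizers to embed one bit, and decoding picks the quantizer whose reconstruction is closest.

```python
# Sketch: scalar Quantization Index Modulation embedding and decoding.
import numpy as np

def qim_embed(host, bits, delta=1.0):
    dither = np.where(bits == 1, delta / 2.0, 0.0)
    return np.round((host - dither) / delta) * delta + dither

def qim_decode(received, delta=1.0):
    d0 = np.abs(received - np.round(received / delta) * delta)
    d1 = np.abs(received - (np.round((received - delta / 2) / delta) * delta + delta / 2))
    return (d1 < d0).astype(int)

host = np.random.randn(8)
bits = np.random.randint(0, 2, 8)
watermarked = qim_embed(host, bits, delta=0.5)
print(np.array_equal(qim_decode(watermarked, delta=0.5), bits))   # True in the noise-free case
```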
The final contribution is a video data hiding framework that combines FZDH, selective embedding, and Repeat Accumulate (RA) codes. De-synchronization due to selective embedding is handled with RA codes. By means of simple rules applied to the embedded frame markers, a certain level of robustness against temporal attacks is introduced. Selected coefficients are used to embed message bits by employing multi-dimensional FZDH. The framework is tested with typical broadcast material against common video processing attacks. The results indicate that the framework can be utilized in real-life applications.
295
Neural-Symbolic Integration / Neuro-Symbolische Integration. Bader, Sebastian, 15 December 2009.
In this thesis, we discuss different techniques to bridge the gap between two approaches to artificial intelligence: the symbolic and the connectionist paradigm. The two approaches have quite contrasting advantages and disadvantages, and research in the area of neural-symbolic integration aims at bridging the gap between them.
Starting from a human-readable logic program, we construct connectionist systems that behave equivalently. Afterwards, those systems can be trained, and the refined knowledge can later be extracted.
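The following is a minimal sketch of this idea for the propositional case, with a tiny hypothetical program and hand-set weights rather than the constructions developed in the thesis: each rule becomes a hidden threshold unit, and iterating the two-layer network computes the immediate-consequence operator T_P.

```python
# Sketch: a logic program realized as a feedforward threshold network.
import numpy as np

atoms = ["a", "b", "c"]                    # program: a :- b, c.    b.
rules = [("a", ["b", "c"]), ("b", [])]

def t_p(interpretation):
    """One application of T_P, computed by a two-layer threshold network."""
    x = np.array([interpretation[atom] for atom in atoms], dtype=float)
    hidden = []
    for _, body in rules:
        w = np.array([1.0 if atom in body else 0.0 for atom in atoms])
        hidden.append(float(w @ x >= len(body)))      # AND over the rule body
    out = {}
    for atom in atoms:
        incoming = [hidden[j] for j, (head, _) in enumerate(rules) if head == atom]
        out[atom] = int(sum(incoming) >= 1.0)          # OR over supporting rules
    return out

state = {"a": 0, "b": 0, "c": 0}
for _ in range(3):                                     # iterate toward the fixed point
    state = t_p(state)
print(state)                                           # {'a': 0, 'b': 1, 'c': 0}
```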
296
Interposer platforms featuring polymer-enhanced through silicon vias for microelectronic systems. Thadesar, Paragkumar A., 08 June 2015.
Novel polymer-enhanced photodefined through-silicon via (TSV) and passive technologies have been demonstrated for silicon interposers to obtain compact heterogeneous computing and mixed-signal systems. These technologies include: (1) Polymer-clad TSVs with thick (~20 µm) liners to help reduce TSV losses and stress, and obtain optical TSVs in parallel for interposer-to-interposer long-distance communication; (2) Polymer-embedded vias with copper vias embedded in polymer wells to significantly reduce the TSV losses; (3) Coaxial vias in polymer wells to reduce the TSV losses with controlled impedance; (4) Antennas over polymer wells to attain a high radiation efficiency; and (5) High-Q inductors over polymer wells.
Cleanroom fabrication and characterization of these technologies have been demonstrated. For the fabricated polymer-clad TSVs, resistance and synchrotron X-ray diffraction (XRD) measurements have been performed. For the fabricated polymer-embedded vias, high-frequency measurements up to 170 GHz and time-domain measurements up to 10 Gbps have been demonstrated. For the fabricated coaxial vias and inductors, high-frequency measurements up to 50 GHz have been demonstrated. Lastly, for the fabricated antennas, measurements in the W-band have been demonstrated.
297
Large-scale network analytics. Song, Han Hee, 05 October 2012.
Scalable and accurate analysis of networks is essential to a wide variety of existing and emerging network systems. Specifically, network measurement and analysis help to understand networks, improve existing services, and enable new data-mining applications. To support various services and applications in large-scale networks, network analytics must address the following challenges: (i) how to conduct scalable analysis in networks with a large number of nodes and links, (ii) how to flexibly accommodate various objectives from different administrative tasks, and (iii) how to cope with dynamic changes in the networks. This dissertation presents novel path analysis schemes that effectively address the above challenges in analyzing pair-wise relationships among networked entities. In doing so, we make three major contributions to large-scale IP networks, social networks, and application service networks.

For IP networks, we propose an accurate and flexible framework for path property monitoring. Analyzing the performance side of paths between pairs of nodes, our framework incorporates approaches that perform exact reconstruction of path properties as well as approximate reconstruction. Our framework is highly scalable, allowing measurement experiments that span thousands of routers and end hosts, and flexible enough to accommodate a variety of design requirements.

For social networks, we present scalable and accurate graph embedding schemes. Aimed at analyzing the pair-wise relationships of social network users, we present three dimensionality reduction schemes leveraging matrix factorization, count-min sketch, and graph clustering paired with spectral graph embedding. As concrete applications showing the practical value of our schemes, we apply them to the important social analysis tasks of proximity estimation, missing link inference, and link prediction. The results clearly demonstrate the accuracy, scalability, and flexibility of our schemes for analyzing social networks with millions of nodes and tens of millions of links.

For application service networks, we provide a proactive service quality assessment scheme. Analyzing the relationship between the satisfaction level of subscribers of an IPTV service and network performance indicators, our proposed scheme proactively assesses user-perceived service quality (i.e., it detects issues before IPTV subscribers complain) using performance metrics collected from the network. In our evaluation using network data collected from a commercial IPTV service provider, we show that our scheme is able to predict 60% of the service problems reported by customers with only 0.1% false positives.
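As a toy illustration of the embedding-based proximity estimation used in the social network part (not the dissertation's exact factorization, sketching, or datasets), a truncated SVD of a small adjacency matrix yields low-dimensional node vectors whose inner products score candidate links.

```python
# Sketch: low-rank node embeddings from a toy adjacency matrix for link scoring.
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

k = 2                                        # embedding dimensionality
U, s, Vt = np.linalg.svd(A)
embedding = U[:, :k] * np.sqrt(s[:k])        # one k-dimensional vector per node
scores = embedding @ embedding.T             # approximate proximity matrix

candidate = (0, 3)                           # a node pair with no observed link
print("link score for pair", candidate, "=", round(scores[candidate], 3))
```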
298
Discriminative object categorization with external semantic knowledge. Hwang, Sung Ju, 25 September 2013.
Visual object category recognition is one of the most challenging problems in computer vision. Even assuming that we can obtain a near-perfect instance-level representation with the advances in visual input devices and low-level vision techniques, object categorization still remains a difficult problem because it requires drawing boundaries between instances in a continuous world, where the boundaries are solely defined by human conceptualization. Object categorization is essentially a perceptual process that takes place in a human-defined semantic space. In this semantic space, the categories reside not in isolation but in relation to others. Some categories are similar, grouped, or co-occur, and some are not. However, despite this semantic nature of object categorization, most of today's automatic visual category recognition systems rely only on the category labels for training discriminative recognition with statistical machine learning techniques. In many cases, this could result in the recognition model being misled into learning incorrect associations between visual features and the semantic labels, essentially from overfitting to training set biases. This limits the model's prediction power when new test instances are given.

Using semantic knowledge has great potential to benefit object category recognition. First, semantic knowledge could guide the training model to learn a correct association between visual features and the categories. Second, semantics provide much richer information beyond the membership information given by the labels, in the form of inter-category and category-attribute distances, relations, and structures. Finally, semantic knowledge scales well, since the relations between categories grow richer as the number of categories increases. My goal in this thesis is to learn discriminative models for categorization that leverage semantic knowledge for object recognition, with a special focus on the semantic relationships among different categories and concepts. To this end, I explore three semantic sources, namely attributes, taxonomies, and analogies, and I show how to incorporate them into the original discriminative model as a form of structural regularization. In particular, for each form of semantic knowledge I present a feature learning approach that defines a semantic embedding to support the object categorization task. The regularization penalizes models that deviate from the known structures according to the semantic knowledge provided.

The first semantic source I explore is attributes, which are human-describable semantic characteristics of an instance. While existing work treated them as mid-level features that did not introduce new information, I focus on their potential as a means to better guide the learning of object categories, by enforcing the object category classifiers to share features with attribute classifiers in a multitask feature learning framework. This approach essentially discovers the common low-dimensional features that support predictions in both semantic spaces.

Then, I move on to the semantic taxonomy, another valuable source of semantic knowledge. The merging and splitting criteria for the categories on a taxonomy are human-defined, and I aim to exploit this implicit semantic knowledge. Specifically, I propose a tree of metrics (ToM) that learns metrics capturing granularity-specific similarities at different nodes of a given semantic taxonomy, and uses a regularizer to isolate granularity-specific disjoint features. This approach captures the intuition that the features used for the discrimination of the parent class should be different from the features used for the children classes. Such learned metrics can be used for hierarchical classification. The use of a single taxonomy can be limited in that its structure is not optimal for hierarchical classification, and there may exist no single optimal semantic taxonomy that perfectly aligns with visual distributions. Thus, I next propose a way to overcome this limitation by leveraging multiple taxonomies as semantic sources and combining the acquired complementary information across multiple semantic views and granularities. This allows us, for example, to synthesize semantics from both 'Biological'- and 'Appearance'-based taxonomies when learning the visual features.

Finally, as a further exploration of more complex semantic relations beyond the previous two pairwise similarity-based models, I exploit analogies, which encode the relational similarities between two related pairs of categories. Specifically, I use analogies to regularize a discriminatively learned semantic embedding space for categorization, such that the displacements between the two category embeddings in both category pairs of the analogy are enforced to be the same. Such a constraint allows a more confusing pair of categories to benefit from a clear separation in the matched pair of categories that shares the same relation.

All of these methods are evaluated on challenging public datasets and are shown to effectively improve recognition accuracy over purely discriminative models, while also guiding the recognition to be more aligned with human semantic perception. Further, the applications of the proposed methods are not limited to visual object categorization in computer vision; they can be applied to any classification problem where there exists some domain knowledge about the relationships or structures between the classes. Possible applications outside the visual recognition domain include document classification in natural language processing, and gene-based animal or protein classification in computational biology.
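To make the analogy constraint concrete, the following is a minimal sketch with random toy embeddings and a single hypothetical analogy; the thesis couples such a penalty with a discriminative objective, whereas here plain gradient descent on the penalty alone is shown.

```python
# Sketch: for an analogy (a : b :: c : d), push the displacement between the
# embeddings of a and b to match the displacement between c and d.
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(6, 8))                       # 6 category embeddings, 8-dimensional
analogies = [(0, 1, 2, 3)]                        # a : b :: c : d

def analogy_penalty(E, analogies):
    return sum(np.sum(((E[a] - E[b]) - (E[c] - E[d])) ** 2)
               for a, b, c, d in analogies)

lr = 0.1
for _ in range(200):                              # gradient descent on the penalty
    grad = np.zeros_like(E)
    for a, b, c, d in analogies:
        diff = (E[a] - E[b]) - (E[c] - E[d])
        grad[a] += 2 * diff; grad[b] -= 2 * diff
        grad[c] -= 2 * diff; grad[d] += 2 * diff
    E -= lr * grad
print(round(analogy_penalty(E, analogies), 6))    # penalty driven toward zero
```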
299
A Combinatorial Algorithm for Minimizing the Maximum Laplacian Eigenvalue of Weighted Bipartite Graphs. Helmberg, Christoph; Rocha, Israel; Schwerdtfeger, Uwe, 13 November 2015.
We give a strongly polynomial time combinatorial algorithm to minimise the largest eigenvalue of the weighted Laplacian of a bipartite graph. This is accomplished by solving the dual graph embedding problem which arises from a semidefinite programming formulation. In particular, the problem for trees can be solved in time cubic in the number of vertices.
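For concreteness, the following small sketch (toy star graph, illustrative weights) computes the objective being minimised: the largest eigenvalue of the weighted Laplacian of a bipartite graph, evaluated for two weightings of the same total weight.

```python
# Sketch: largest Laplacian eigenvalue of a weighted star K_{1,3}.
import numpy as np

edges = [(0, 1), (0, 2), (0, 3)]                 # bipartition {0} versus {1, 2, 3}

def lambda_max(weights):
    n = 4
    L = np.zeros((n, n))
    for (i, j), w in zip(edges, weights):
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return np.linalg.eigvalsh(L).max()

print(lambda_max([1.0, 1.0, 1.0]))               # uniform weighting
print(lambda_max([1.5, 1.0, 0.5]))               # same total weight, larger lambda_max
```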
300
A Complexity Theory for VLSI. Thompson, C. D., 01 August 1980.
The established methodologies for studying computational complexity can be applied to the new problems posed by very large-scale integrated (VLSI) circuits. This thesis develops a "VLSI model of computation" and derives upper and lower bounds on the silicon area and time required to solve the problems of sorting and discrete Fourier transformation. In particular, the area A and time T taken by any VLSI chip using any algorithm to perform an N-point Fourier transform must satisfy AT^2 ≥ c N^2 log^2 N, for some fixed c > 0. A more general result for both sorting and Fourier transformation is that AT^(2x) = Ω(N^(1+x) log^(2x) N) for any x in the range 0 < x < 1. Also, the energy dissipated by a VLSI chip during the solution of either of these problems is at least Ω(N^(3/2) log N). The tightness of these bounds is demonstrated by the existence of nearly optimal circuits for both sorting and Fourier transformation. The circuits based on the shuffle-exchange interconnection pattern are fast but large: T = O(log^2 N) for Fourier transformation and T = O(log^3 N) for sorting; both have area A of at most O(N^2 / log^(1/2) N). The circuits based on the mesh interconnection pattern are slow but small: T = O(N^(1/2) log log N), A = O(N log^2 N).