About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

A scalable resource management framework for QoS-enabled multidomain networks.

Mantar, Haci Ali; Chapin, Stephen J.; Hwang, Junseok. January 2003
Thesis (Ph.D.)--Syracuse University, 2003. "Publication number AAT 3099739."
2

Congestion driven global routing and cross point assignment

Krishna, Bharat. January 2005
Thesis (Ph.D.)--Syracuse University, 2005. "Publication number AAT 3177003."
3

Acquisition of 3D models from a set of 2D images

Cheng, Yong-Qing. 01 January 1997
The acquisition of accurate 3D models from a set of images is an important and difficult problem in computer vision. The general problems considered in this thesis are how to compute the camera parameters and build 3D models given a set of 2D images. The first set of algorithms presented in this thesis deals with the problem of camera calibration, in which some or all of the camera parameters must be determined. A new analytical technique is derived to find relative camera poses for three images, given only calibrated 2D image line correspondences across three images. Then, a general non-linear algorithm is developed to estimate relative camera poses over a set of images. Finally, the presented algorithms are extended to simultaneously compute the intrinsic camera parameters and relative camera poses from 2D image line correspondences over multiple uncalibrated images. To reconstruct and refine 3D lines of the models, a multi-image and multi-line triangulation method using known correspondences is presented. A novel non-iterative line reconstruction algorithm is proposed. Then, a robust algorithm is presented to simultaneously estimate a model consisting of a set of 3D lines while satisfying object-level constraints such as angular, coplanar, and other geometric 3D constraints. Finally, to make the proposed approach widely applicable, an integrated approach to matching and triangulation from noisy 2D image points across two images is first presented by introducing an affinity measure between image point features, based on their distance from a hypothetical projected 3D pseudo-intersection point. A similar approach to matching and triangulation from noisy 2D image line segments across three images is proposed by introducing an affinity measure among 2D image line segments via a 3D pseudo-intersection line.
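
To make the pseudo-intersection idea in this abstract concrete, here is a rough Python sketch (not code from the thesis; the camera matrices, the Gaussian weighting, and the function names are invented for illustration): a candidate point match across two calibrated views is triangulated to a 3D pseudo-intersection point, and its affinity is scored by how far the reprojected point falls from each image feature.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point correspondence -> 3D point."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X / X[3]                      # back to inhomogeneous scale

    def project(P, X):
        x = P @ X
        return x[:2] / x[2]

    def match_affinity(P1, P2, x1, x2, sigma=2.0):
        """Affinity in (0, 1]: high when both features lie close to the
        reprojection of the triangulated pseudo-intersection point."""
        X = triangulate(P1, P2, x1, x2)
        d = np.linalg.norm(project(P1, X) - x1) + np.linalg.norm(project(P2, X) - x2)
        return float(np.exp(-(d ** 2) / (2 * sigma ** 2)))

    # Toy cameras: identity view and a unit baseline along x, observing the point (0, 0, 5).
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    print(match_affinity(P1, P2, np.array([0.0, 0.0]), np.array([-0.2, 0.0])))  # ~1.0
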
4

Matching affine-distorted images

Manmatha, Raghavan. 01 January 1997
Many visual tasks involve the matching of image patches derived from imaging a scene from different viewpoints. Matching two image patches can, however, be a difficult task. This is because changes in the relative orientation of a surface with respect to a camera cause deformations in the image of the surface. Thus, this deformation needs to be taken into account when matching or registering two image patches of an object under changes in viewpoint. Up to first order, these deformations can be described using an affine transform. Here, a computational scheme to match two image patches under an affine transform is presented. The two image patches are filtered with Gaussian and derivative-of-Gaussian filters. The problem of matching the two image patches is then recast as one of finding the amount by which these filters must be deformed so that the filtered outputs from the two images are equal. For robustness, it is necessary to use the filter outputs from many points in a small region to obtain an overconstrained system of equations. The resulting equations are linearized with respect to the affine transforms and then iteratively solved for the affine transforms. The method is local and can match image patches in situations where other algorithms fail. It is also shown that the same framework may be used to match points and lines.
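
The scheme above is specific to the thesis, but the flavor of "linearize in the affine parameters and solve an overconstrained system from filter outputs at many points" can be sketched with a Lucas-Kanade-style affine step using Gaussian-derivative filters. Everything below (function names, scales, the toy patches) is an assumption for illustration, not the author's implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter, affine_transform

    def affine_step(patch1, patch2, sigma=1.5):
        """One linearized least-squares update for the displacement field
        u(x, y) = D [x, y]^T + t, with p = (d11, d12, d21, d22, tx, ty)."""
        Ix = gaussian_filter(patch1, sigma, order=(0, 1))   # d/dx (columns)
        Iy = gaussian_filter(patch1, sigma, order=(1, 0))   # d/dy (rows)
        It = gaussian_filter(patch2 - patch1, sigma)        # image difference
        ys, xs = np.mgrid[0:patch1.shape[0], 0:patch1.shape[1]]
        # Per-pixel Jacobian of Ix*u_x + Iy*u_y with respect to the six parameters.
        J = np.stack([Ix * xs, Ix * ys, Iy * xs, Iy * ys, Ix, Iy], axis=-1).reshape(-1, 6)
        p, *_ = np.linalg.lstsq(J, -It.reshape(-1), rcond=None)
        return p            # in practice: warp patch1 by p and iterate to convergence

    # Toy usage: patch2 is patch1 translated by one pixel along x (columns).
    rng = np.random.default_rng(0)
    patch1 = gaussian_filter(rng.standard_normal((64, 64)), 2.0)
    patch2 = affine_transform(patch1, np.eye(2), offset=(0.0, -1.0), mode="nearest")
    print(np.round(affine_step(patch1, patch2), 2))   # tx (5th entry) should come out near +1
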
5

Data flow analysis for verification of application-specific properties of concurrent software

Naumovich, Gleb N. 01 January 1999
With the proliferation of concurrent software systems, automated finite state verification techniques for checking that a software system conforms to a behavior specification become extremely important in improving software quality. Such techniques can be used both for detecting faults of certain kinds and proving that such faults are absent from the given software system. In this thesis, we adapt the promising approach of FLAVERS, a data flow analysis-based finite state verification technique, to the analysis of concurrent Java programs. We investigate two alternative approaches to modeling Java concurrency with FLAVERS and demonstrate experimentally that one of these two approaches is more efficient. In addition, we present three general optimizations of the FLAVERS approach. One of these optimizations improves the space requirements of the FLAVERS analysis by about an order of magnitude, and all three optimizations combined reduce the analysis time by approximately half. Finally, we describe three case studies evaluating the applicability of FLAVERS to several application domains: communication protocols, high-level software architectures, and user interfaces. We demonstrate that FLAVERS is an efficient tool for detecting faults or proving the absence of faults of certain kinds in these domains. We also describe two polynomial data flow algorithms for computing a conservative estimate of which pairs of statements may execute in parallel in concurrent programs. One of these algorithms computes such pairs for concurrent Ada programs and the other algorithm computes such pairs for concurrent Java programs. The empirical comparison of each of the algorithms with a precise exponential-time algorithm shows that our algorithms are very precise in practice. In addition, we compare our algorithm for Ada with the most precise of the previously proposed approaches. It turns out that our algorithm tends to be more precise in practice.
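
As a toy illustration of what a conservative "may execute in parallel" (MHP) estimate means (this is not FLAVERS and not the polynomial algorithms from the thesis; the thread model, statement labels, and start/join handling are invented), the sketch below marks two statements as possibly parallel unless program-order and start/join edges order them.

    from itertools import combinations

    # A 'main' thread that starts and joins one child thread; statements are labels.
    threads = {
        "main": ["m1", "start(t1)", "m2", "join(t1)", "m3"],
        "t1":   ["a1", "a2"],
    }

    def happens_before(threads):
        """Program order within each thread plus start/join edges, closed transitively."""
        hb = set()
        for stmts in threads.values():
            for i, j in combinations(range(len(stmts)), 2):
                hb.add((stmts[i], stmts[j]))
        main = threads["main"]
        for k, s in enumerate(main):
            if s.startswith("start("):
                child = s[6:-1]
                hb.update((b, c) for b in main[:k + 1] for c in threads[child])
            if s.startswith("join("):
                child = s[5:-1]
                hb.update((c, a) for c in threads[child] for a in main[k:])
        changed = True
        while changed:                      # brute-force transitive closure (tiny example)
            changed = False
            for (a, b) in list(hb):
                for (c, d) in list(hb):
                    if b == c and (a, d) not in hb:
                        hb.add((a, d))
                        changed = True
        return hb

    def mhp_pairs(threads):
        hb = happens_before(threads)
        stmts = [s for ss in threads.values() for s in ss]
        return {(a, b) for a, b in combinations(stmts, 2)
                if (a, b) not in hb and (b, a) not in hb}

    print(sorted(mhp_pairs(threads)))       # only m2 overlaps t1: [('m2', 'a1'), ('m2', 'a2')]
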
6

Scalable reliable multicast in wide area networks

Kasera, Sneha Kumar. 01 January 1999
Many applications including one-to-many file transfer, information dissemination (e.g., stock quotes and web cache updates), distance learning, shared whiteboards, multi-party games, and distributed computing can benefit from reliable multicast. The goal of this dissertation is to design and evaluate large scale multicast loss recovery architectures and protocols for IP multicast-capable networks that make efficient use of both network and end-system resources and scale to applications that can have thousands of receivers spanning wide-area networks. One of the important problems in multicast loss recovery is that of a receiver receiving unwanted retransmissions of packets lost at other receivers. We present a new approach to scoping retransmissions in which a single multicast channel is used for the original transmission of packets and separate multicast channels are used for scoping retransmissions to “interested” receivers only. We find that a small to moderate number of multicast channels can be recycled to achieve almost perfect retransmission scoping for a wide range of system parameters. We also propose two mechanisms for implementing retransmission channels, one using multiple IP multicast groups and the other using a single IP multicast group in conjunction with additional router support. We show that the second approach reduces both host processing costs and network bandwidth usage. Another problem arises when the sender alone bears the burden of handling loss feedback and supplying retransmissions to a large group of receivers. Local recovery approaches, in which entities other than the sender aid in loss recovery, distribute the loss recovery burden and also reduce network bandwidth consumption and recovery latency. We propose a new local recovery approach that co-locates designated repair servers with routers at strategic locations inside the network. We demonstrate the superior performance of our approach over traditional approaches. We address the important issues of repair server placement, repair server resource requirements, and performance degradation due to insufficient resources. We also demonstrate how the repair server functionality can be provided as a dynamically invocable/revocable service with minimal router support.
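
A back-of-the-envelope sketch of the retransmission-scoping idea (the loss rates, channel count, and the hash-style channel mapping are invented here, not taken from the thesis): each lost sequence number is mapped onto one of a small pool of retransmission channels, and a receiver subscribes only to the channels covering its own losses, so it hears far fewer unwanted retransmissions than with a single channel.

    import random

    random.seed(1)
    NUM_CHANNELS = 16                     # pool of retransmission channels

    def retx_channel(seq):
        """Map a lost packet's sequence number onto a retransmission channel."""
        return seq % NUM_CHANNELS         # stand-in for a recycled channel pool

    # 1000 receivers, each independently losing roughly 0.5% of 300 packets.
    losses = [{s for s in range(300) if random.random() < 0.005} for _ in range(1000)]
    all_lost = set().union(*losses)       # every packet retransmitted at least once

    # Single channel: every receiver hears every retransmission.
    unwanted_single = sum(len(all_lost - lost) for lost in losses)

    # Scoped channels: a receiver hears only the channels covering its own losses.
    unwanted_scoped = 0
    for lost in losses:
        subscribed = {retx_channel(s) for s in lost}
        heard = {s for s in all_lost if retx_channel(s) in subscribed}
        unwanted_scoped += len(heard - lost)

    print("unwanted, one channel:", unwanted_single)
    print("unwanted, 16 scoped channels:", unwanted_scoped)   # roughly an order of magnitude lower
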
7

Flow and congestion control for reliable multicast communication in wide-area networks

Bhattacharyya, Supratik. 01 January 2000
Applications involving the reliable transfer of large volumes of data from a source to multiple destinations across wide-area networks are expected to become increasingly important in the near future. A few examples are point-to-multipoint FTP, news distribution, Web caching, and software updates. Multicasting technology promises to enhance the capabilities of wide-area networks for supporting these applications. Flow and congestion control has emerged as one of the biggest challenges in the large-scale deployment of reliable multicast in the Internet. This thesis addresses two specific problems in the design of transport-level flow/congestion control schemes for reliable multicast data transfer: (1) How should feedback-based congestion control schemes be designed for regulating the rate of transmission of data to a single multicast group? (2) How can multiple multicast groups be used to improve the efficiency of bulk data delivery? The first part of this thesis focuses on the design and evaluation of multicast congestion control schemes in which a source regulates its transmission rate in response to packet loss indications from its receivers. We identify and analyze an important problem that arises because a transmitted packet may get lost on one or more of the many end-to-end paths in a multicast tree, and also study its impact on fair bandwidth sharing among co-existing multicast and unicast sessions. An outcome of this work is a fair congestion control approach that scales well to large multicast groups. We also design and examine a prototype protocol that is “TCP-friendly”. The second part of this thesis considers the problem of efficiently transferring data to a large number of destinations in the presence of heterogeneous bandwidth constraints in different parts of a network. We propose a novel approach in which the sender stripes data across multiple multicast groups and transmits it to different sub-groups of receivers at different rates. We also design and evaluate simple and bandwidth-efficient algorithms for determining the transmission rates associated with each multicast group.
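
As a loose illustration of choosing per-group transmission rates for heterogeneous receivers (a brute-force toy, not the bandwidth-efficient algorithms of the thesis; the bandwidth values and group count are made up), the sketch below picks k rates so that each receiver joins the fastest group its bottleneck can sustain and the aggregate received rate is maximized.

    from itertools import combinations

    def best_group_rates(bandwidths, k):
        """Exhaustively pick k group rates from the observed receiver bandwidths,
        maximizing the aggregate received rate (only feasible for small inputs)."""
        def aggregate(rates):
            return sum(max([r for r in rates if r <= b], default=0.0) for b in bandwidths)
        return max(combinations(sorted(set(bandwidths)), k), key=aggregate)

    receivers = [0.5, 0.5, 1.0, 1.5, 2.0, 8.0, 10.0]   # bottleneck bandwidths in Mbps
    print(best_group_rates(receivers, k=2))             # -> (1.0, 8.0) for this toy input
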
8

On multi-scale differential features and their representations for image retrieval and recognition

Ravela, Srinivas S. 01 January 2003
Visual appearance is described as a cue with which we discriminate images. It has been conjectured that appearance similarity emerges from similarity between features of image surfaces. However, the design of effective appearance features and their efficient representations is an open problem. In this dissertation, appearance features are developed by decomposing image brightness surfaces differentially in space, and in scale. Image representations constructed from multi-scale differential features are compared to determine appearance similarity. The first part of this thesis explores image structure in scale and space. Multi-scale differential features are generated by filtering images with Gaussian derivatives at multiple scales (GMDFs). This provides a robust local characterization of the brightness surface; filtered outputs can be transformed to seek rotation, illumination, view and scale tolerance. Differential features are also shown to be descriptive; both local and global representations of images can be composed from them. The second part of this thesis begins by illustrating local and global representations including feature-templates, -graphs, -ensembles and -distributions. It continues by developing one algorithm, CO-1, in detail. In this algorithm, two robust differential features, the orientation of the local gradient and the shape-index, are selected for constructing representations. GMDF distributions of the first type are used to represent images and a Euclidean distance measure is used to determine similarity between representations. The first application of CO-1 is to image retrieval, a task central to developing search and organization tools for digital multimedia collections. CO-1 is applied to example-based browsing of image collections and trademark retrieval, where appearance similarity can be important for adjudicating relevance. The second application of this work is to image-based and view-based object recognition. Results are demonstrated for face recognition using several standard collections. The central contribution of this work, in the words of a reviewer, is “… in the simplicity and elegance of the approach of using low-level multi-scale differential image structure.” We posit that this thesis highlights the utility of exploring differential image structure to synthesize features effective in a wide range of appearance-based retrieval and recognition tasks.
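
A loose sketch of the kind of representation CO-1 is described as using (the scales, bin counts, and exact feature definitions below are assumptions, not the thesis parameters): gradient orientation and shape index are computed from Gaussian derivatives at a few scales, pooled into a joint histogram per image, and two images are compared by the Euclidean distance between their normalized histograms.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def appearance_histogram(img, scales=(1.0, 2.0, 4.0), bins=(8, 8)):
        """Joint histogram of gradient orientation and shape index over several scales."""
        samples = []
        for s in scales:
            gx = gaussian_filter(img, s, order=(0, 1))
            gy = gaussian_filter(img, s, order=(1, 0))
            gxx = gaussian_filter(img, s, order=(0, 2))
            gyy = gaussian_filter(img, s, order=(2, 0))
            gxy = gaussian_filter(img, s, order=(1, 1))
            orient = np.arctan2(gy, gx)
            disc = np.sqrt(((gxx - gyy) / 2) ** 2 + gxy ** 2)
            k1 = (gxx + gyy) / 2 + disc                  # principal curvatures, k1 >= k2
            k2 = (gxx + gyy) / 2 - disc
            shape_index = (2 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
            samples.append(np.stack([orient.ravel(), shape_index.ravel()], axis=1))
        h, _ = np.histogramdd(np.concatenate(samples), bins=bins,
                              range=[(-np.pi, np.pi), (-1.0, 1.0)])
        h = h.ravel()
        return h / h.sum()

    def appearance_distance(img_a, img_b):
        return float(np.linalg.norm(appearance_histogram(img_a) - appearance_histogram(img_b)))

    rng = np.random.default_rng(0)
    a = gaussian_filter(rng.standard_normal((96, 96)), 3.0)
    print(appearance_distance(a, a), appearance_distance(a, a.T))   # 0.0 vs a small positive value
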
9

Toward quantified control for organizationally situated agents

Wagner, Thomas Anderson. 01 January 2000
Software agents are situated in an environment that they can sense and affect; flexible, having choices and responding to the environment; and autonomous, making choices independently. This dissertation focuses on the issue of choice in autonomous agents. We use the expression agent control to denote the choice process—the process of deciding which tasks to perform, when to perform them, and possibly with whom a given agent should cooperate to perform a given task. In this dissertation, we explore a detailed approach to local agent control called Design-to-Criteria Scheduling and then move agent control to a higher level of abstraction where agents reason about their larger organizational context and the motivations for performing particular tasks. This higher reasoning level is called the Motivational Quantities level.
10

Equivalence checking of arithmetic expressions with applications in DSP synthesis

Zhou, Zheng. 01 January 1996
Numerous formal verification systems have been proposed and developed for FSM-based control units (notably SMV (71), as well as others). However, most research on the equivalence checking of datapaths is still confined to the bit- or word-level. Formal verification of arithmetic expressions and synthesized datapaths, especially considering finite word-length computation, has not been addressed. Thus formal verification techniques have been prohibited from more extensive applications in numerical and Digital Signal Processing. In this dissertation a formal system, called Conditional Term Rewriting on Attribute Syntax Trees (ConTRAST), is developed and demonstrated for verifying the equivalence between two differently synthesized datapaths. This result arises from a sophisticated integration of three key techniques: attribute grammars, which contribute expressive data structures for syntactic and semantic information about designed datapaths; term rewrite systems, which transform functionally equivalent datapaths into the same canonical form; and LR parsing, which provides an efficient tool for integrating the attribute grammar and the term rewriting system. Unlike other canonical representations, such as BDD (15) and BMD* (17), ConTRAST achieves canonicity by manipulating symbolic expressions instead of enumerating values of expressions at the bit- or word-level. Furthermore, the effect of finite word lengths and their associated arithmetic precision is also considered in the definition of equivalence classes. As a particular application of ConTRAST, a DSP design verification tool called Fixed-Point Verifier (FPV) has been developed. Similar to present DSP hardware design tools, FPV allows users to describe filters in the form of arithmetic expressions and to specify arbitrary fixed-point word lengths on various signals. However, unlike simulation-based verification methods like Cadence/Alta's Fixed Point Optimizer and Mentor's DSPstation, FPV can automatically perform correctness-checking and equivalence-checking for a given filter design under the effect of finite word length.
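
A toy canonicalizer in the spirit of rewriting functionally equivalent expressions to a common form (far simpler than ConTRAST: only associativity and commutativity of + and * are handled, and finite word-length effects are not modeled; all names below are invented):

    def canon(expr):
        """Rewrite an expression tree ('op', arg1, arg2, ...) into a canonical form."""
        if not isinstance(expr, tuple):
            return expr                        # variable name or constant
        op, *args = expr
        args = [canon(a) for a in args]
        if op in ("+", "*"):
            flat = []
            for a in args:                     # associativity: flatten nested sums/products
                if isinstance(a, tuple) and a[0] == op:
                    flat.extend(a[1:])
                else:
                    flat.append(a)
            flat.sort(key=repr)                # commutativity: order the operands
            return (op, *flat)
        return (op, *args)

    def equivalent(e1, e2):
        return canon(e1) == canon(e2)

    e1 = ("*", ("+", ("+", "a", "b"), "c"), "x")         # ((a + b) + c) * x
    e2 = ("*", "x", ("+", "c", ("+", "b", "a")))         # x * (c + (b + a))
    print(equivalent(e1, e2))                            # True
    print(equivalent(("-", "a", "b"), ("-", "b", "a")))  # False: '-' is not commutative
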
