
Error Control for Network Coding

Silva, Danilo 03 March 2010
Network coding has emerged as a new paradigm for communication in networks, allowing packets to be algebraically combined at internal nodes, rather than simply routed or replicated. The very nature of packet-mixing, however, makes the system highly sensitive to error propagation. Classical error correction approaches are therefore insufficient to solve the problem, which calls for novel techniques and insights. The main portion of this work is devoted to the problem of error control assuming an adversarial or worst-case error model. We start by proposing a general coding theory for adversarial channels, whose aim is to characterize the correction capability of a code. We then specialize this theory to the cases of coherent and noncoherent network coding. For coherent network coding, we show that the correction capability is given by the rank metric, while for noncoherent network coding, it is given by a new metric, called the injection metric. For both cases, optimal or near-optimal coding schemes are proposed based on rank-metric codes. In addition, we show how existing decoding algorithms for rank-metric codes can be conveniently adapted to work over a network coding channel. We also present several speed improvements that make these algorithms the fastest known to date. The second part of this work investigates a probabilistic error model. Upper and lower bounds on capacity are obtained for any channel parameters, and asymptotic expressions are provided in the limit of long packet length and/or large field size. A simple coding scheme is presented that achieves capacity in both limiting cases. The scheme has fairly low decoding complexity and a probability of failure that decreases exponentially both in the packet length and in the field size in bits. Extensions of the scheme are provided for several variations of the channel. 
A final contribution of this work is to apply rank-metric codes to a closely related problem: securing a network coding system against an eavesdropper. We show that the maximum possible rate can be achieved with a coset coding scheme based on rank-metric codes. Unlike previous schemes, our scheme has the distinctive property of being universal: it can be applied on top of any communication network without requiring knowledge of or any modifications on the underlying network code. In addition, the scheme can be easily combined with a rank-metric-based error control scheme to provide both security and reliability.
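The rank metric that underpins these codes measures the distance between two matrices X and Y over a finite field as rank(X − Y). Below is a minimal illustrative sketch over GF(2), where subtraction is XOR and each matrix row is encoded as an integer bitmask; the matrices shown are made-up examples, not data from the thesis:

```python
def gf2_rank(rows):
    """Rank of a binary matrix over GF(2).

    Each row is an int bitmask; Gaussian elimination via XOR.
    """
    rank = 0
    rows = [r for r in rows if r]
    while rows:
        pivot = rows.pop()
        rank += 1
        lead = pivot.bit_length() - 1
        # Eliminate the pivot's leading bit from every remaining row.
        rows = [r ^ pivot if (r >> lead) & 1 else r for r in rows]
        rows = [r for r in rows if r]
    return rank

def rank_distance(x, y):
    """Rank distance between matrices over GF(2): rank(X - Y) = rank(X XOR Y)."""
    return gf2_rank([a ^ b for a, b in zip(x, y)])

# Two 3x3 binary matrices that differ in a single row.
X = [0b101, 0b010, 0b111]
Y = [0b101, 0b010, 0b110]
print(rank_distance(X, Y))  # -> 1
```

Rank-metric codes keep codewords far apart in exactly this distance, which is what makes them suited to the row-space perturbations a network coding channel introduces.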

Kerr Black Holes and Their Generalizations

Cebeci, Hakan 01 October 2003
The scalar tensor theory of gravitation is constructed in D dimensions in all possible geometries of spacetime. In Riemannian geometry, the theory of gravitation involves a spacetime metric g with a torsion-free, metric-compatible connection structure. If the geometry is non-Riemannian, then the gauge theory of gravitation can be constructed with a spacetime metric g and a connection structure with torsion. In the non-Riemannian theory, connections may be metric compatible or non-metric compatible. It is shown that a theory of gravitation involving a non-metric-compatible connection and torsion can be rewritten as a torsion-free theory. It is also shown that the scalar tensor theory can be reformulated in the Einstein frame by applying a conformal transformation. By adding an antisymmetric axion field, the axi-dilaton theory is studied in Riemannian and non-Riemannian geometries. The motion of massive test particles is examined in all these geometries. The static, spherically symmetric and stationary, Kerr-type axially symmetric solutions of the scalar tensor and axi-dilaton theories are presented. As an application, the geodesic elliptical orbits based on a torsion-free connection and the autoparallel orbits based on a connection with torsion are examined in Kerr Brans-Dicke geometry. The perihelion shift of the elliptical orbit is calculated in both cases and the results are compared.
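For orientation, the perihelion shift being compared has a well-known weak-field expression, and in Brans-Dicke theory it is rescaled by a factor depending on the coupling ω. These are standard textbook results quoted for context, not formulas taken from the thesis:

```latex
\Delta\phi_{\mathrm{GR}} = \frac{6\pi G M}{c^{2}\, a\,(1 - e^{2})},
\qquad
\Delta\phi_{\mathrm{BD}} \simeq \frac{3\omega + 4}{3\omega + 6}\,\Delta\phi_{\mathrm{GR}},
```

where a is the orbit's semi-major axis and e its eccentricity; the Brans-Dicke value recovers the general-relativistic one in the limit ω → ∞.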

De Rham Theory and Semialgebraic Geometry

Shartser, Leonid 31 August 2011
This thesis consists of six chapters and deals with four topics related to De Rham Theory on semialgebraic sets. The first topic deals with L-infinity cohomology on semialgebraic sets. We introduce smooth L-infinity differential forms on a singular (semialgebraic) space X in Rn. Roughly speaking, a smooth L-infinity differential form is a collection of smooth forms on disjoint smooth subsets (stratification) of X with matching tangential components on the adjacent strata and of bounded size (in the metric induced from Rn). We identify the singular homology of X as the homology of the chain complex generated by semialgebraic singular simplices, i.e. continuous semialgebraic maps from the standard simplex into X. Singular cohomology of X is defined as the homology of the Hom dual to the chain complex of the singular chains. Finally, we prove a De Rham type theorem establishing a natural isomorphism between the singular cohomology and the cohomology of smooth L-infinity forms. The second topic is a construction of a Lipschitz deformation retraction on a neighborhood of a point in a semialgebraic set, with estimates on its derivatives. Such a deformation retraction is the key to the results of the first and the third topics. The third topic is related to the Poincare inequality on a semialgebraic set. We study a Poincare type Lp inequality for differential forms on a compact semialgebraic subset of Rn for p >> 1. First we derive a local inequality by using a Lipschitz deformation retraction with estimates on its derivatives from the second topic, and then we extend it to a global inequality by employing a technique developed in the appendix. As a consequence we obtain an isomorphism between Lp cohomology and singular cohomology of a normal compact semialgebraic set. The final topic is in the appendix. It deals with an explicit proof of a Poincare type inequality for differential forms on compact manifolds.
We prove the latter inequality by means of a constructive 'globalization' method of a local Poincare inequality on convex sets. The appendix serves as a model case for the results of the third topic in Chapter 5.
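One common way to state a Poincare type inequality for forms, given here for orientation only (the precise semialgebraic statement proved in the thesis may differ in its hypotheses): for every exact k-form ω on X there exists a (k−1)-form η with

```latex
d\eta = \omega
\qquad\text{and}\qquad
\|\eta\|_{L^{p}(X)} \;\le\; C\,\|\omega\|_{L^{p}(X)},
```

where the constant C depends on X and p but not on ω; the 'globalization' step extends such a bound from local pieces, e.g. convex sets, to the whole set.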

Adaptive Monitoring of Complex Software Systems using Management Metrics

Munawar, Mohammad Ahmad 30 September 2009
Software systems supporting networked, transaction-oriented services are large and complex; they comprise a multitude of inter-dependent layers and components, and they implement many dynamic optimization mechanisms. In addition, these systems are subject to workload that is hard to predict. These factors make monitoring these systems as well as performing problem determination challenging and costly. In this thesis we tackle these challenges with the goal of lowering the cost and improving the effectiveness of monitoring and problem determination by reducing the dependence on human operators. Specifically, this thesis presents and demonstrates the effectiveness of an efficient, automated monitoring approach which enables detection of errors and failures, and which assists in localizing faults. Software systems expose various types of monitoring data; this thesis focuses on the use of management metrics to monitor a system's health. We devise a system modeling approach which entails modeling stable, statistical correlations among management metrics; these correlations characterize a system's normal behaviour. This approach allows a system model to be built automatically and efficiently using the monitoring data alone. In order to control the monitoring overhead, and yet allow a system's health to be assessed reliably, we design an adaptive monitoring approach. This adaptive capability builds on the flexible nature of our system modeling approach, which allows the set of monitored metrics to be altered at runtime. We develop methods to automatically select management metrics to collect at the minimal monitoring level, without any domain knowledge. In addition, we devise an automated fault localization approach, which leverages the ability of the monitoring system to analyze individual metrics.
Using a realistic, multi-tier software system, including different applications based on Java Enterprise Edition and industrial-strength products, we evaluate our system modeling approach. We show that stable metric correlations exist in complex software systems and that many of these correlations can be modeled using simple, efficient techniques. We investigate the effect of the collection of management metrics on system performance. We show that the monitoring overhead can be high and thus needs to be controlled. We employ fault injection experiments to evaluate the effectiveness of our adaptive monitoring and fault localization approach. We demonstrate that our approach is cost-effective, has high fault coverage and, in the majority of the cases studied, provides pertinent diagnosis information. The main contribution of this work is to show how to monitor complex software systems and determine problems in them automatically and efficiently. Our solution approach has wide applicability and the techniques we use are simple and yet effective. Our work suggests that the cost of monitoring software systems is not necessarily a function of their complexity, providing hope that the health of increasingly large and complex systems can be tracked with a limited amount of human resources and without sacrificing much system performance.
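The core modeling idea above — stable pairwise correlations among metrics whose violation signals a fault — can be sketched as a least-squares model between two metric streams. The metric names and data below are hypothetical, and the thesis's actual modeling techniques may differ:

```python
import statistics

def fit_model(xs, ys):
    """Fit y ~ slope*x + intercept by least squares; return the model
    and the standard deviation of its residuals on the training data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    intercept = my - slope * mx
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    return (slope, intercept), statistics.stdev(residuals)

def is_anomalous(model, sigma, x, y, k=3.0):
    """Flag a new sample whose residual exceeds k standard deviations:
    the learned correlation between the two metrics no longer holds."""
    slope, intercept = model
    return abs(y - (slope * x + intercept)) > k * max(sigma, 1e-9)

# Hypothetical training data: request rate vs. CPU usage under normal operation.
reqs = [100, 200, 300, 400, 500]
cpu = [11, 19, 31, 39, 52]
model, sigma = fit_model(reqs, cpu)

print(is_anomalous(model, sigma, 300, 30))  # small residual: correlation holds
print(is_anomalous(model, sigma, 300, 90))  # large residual: possible fault
```

Models this cheap to fit can be re-fit as the monitored metric set is altered at runtime, which is what makes the adaptive scheme practical.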

The Proper Metric of Justice in Justice as Fairness

Carmichael, Charles Benjamin 08 May 2009
I explore the problem of using primary goods as the index for determining the least-advantaged members in a society in Rawls’s theory of justice. I look at the problems presented to Rawls by Amartya Sen and his capabilities approach. I discuss the solutions to Sen’s problems given by Norman Daniels, who argues that primary goods are able to take capabilities into account. Finally, I supplement Daniels, arguing that the parameters Rawls uses to define his theory limit Sen’s objection and that primary goods are the appropriate metric of justice in Rawls’s theory.

Examining Scholarly Influence: A Study in Hirsch Metrics and Social Network Analysis

Takeda, Hirotoshi 06 January 2011
This dissertation research is focused on how we, as researchers, ‘influence’ other researchers. In particular, I am concerned with the notion of what constitutes the ‘influence’ of a scholar and how ‘influence’ is conferred upon scholars. This research is concerned with the construct called ‘scholarly influence’. Scholarly influence is of interest because a clear “theory of scholarly influence” does not yet exist. Rather, a number of variable surrogate measures and concepts are used to evaluate the value of one’s academic work. ‘Scholarly influence’ is broken down into ‘ideational influence’, the influence that one has through publication and the uptake of the ideas presented in the publication, and ‘social influence’, the influence that one has through working with other researchers. Finally, using this definition of ‘scholarly influence’, the dissertation begins to develop a definition of ‘quality’ in scholarly work.
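For orientation, the basic Hirsch index behind the Hirsch metrics named in the title is the largest h such that at least h of an author's papers have at least h citations each. A small sketch, with made-up citation counts:

```python
def h_index(citations):
    """h-index: largest h such that at least h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    # After sorting, paper i (1-based) counts toward h while it has >= i citations.
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers with at least 4 citations
print(h_index([25, 8, 5, 3, 3]))  # -> 3: one highly cited paper cannot raise h alone
```

The second example illustrates why such indices are only surrogate measures: a single influential paper barely moves h.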

Adaptive Similarity Measures for Material Identification in Hyperspectral Imagery

Bue, Brian 16 September 2013
Remotely-sensed hyperspectral imagery has become one of the most advanced tools for analyzing the processes that shape the Earth and other planets. Effective, rapid analysis of high-volume, high-dimensional hyperspectral image data sets demands efficient, automated techniques to identify signatures of known materials in such imagery. In this thesis, we develop a framework for automatic material identification in hyperspectral imagery using adaptive similarity measures. We frame the material identification problem as a multiclass similarity-based classification problem, where our goal is to predict material labels for unlabeled target spectra based upon their similarities to source spectra with known material labels. As differences in capture conditions affect the spectral representations of materials, we divide the material identification problem into intra-domain (i.e., source and target spectra captured under identical conditions) and inter-domain (i.e., source and target spectra captured under different conditions) settings. The first component of this thesis develops adaptive similarity measures for intra-domain settings that measure the relevance of spectral features to the given classification task using small amounts of labeled data. We propose a technique based on multiclass Linear Discriminant Analysis (LDA) that combines several distinct similarity measures into a single hybrid measure capturing the strengths of each of the individual measures. We also provide a comparative survey of techniques for low-rank Mahalanobis metric learning, and demonstrate that regularized LDA yields competitive results to the state-of-the-art, at substantially lower computational cost. The second component of this thesis shifts the focus to inter-domain settings, and proposes a multiclass domain adaptation framework that reconciles systematic differences between spectra captured under similar, but not identical, conditions.
Our framework computes a similarity-based mapping that captures structured, relative relationships between classes shared between source and target domains, allowing us to apply a classifier trained using labeled source spectra to classify target spectra. We demonstrate improved domain adaptation accuracy in comparison to recently-proposed multitask learning and manifold alignment techniques in several case studies involving state-of-the-art synthetic and real-world hyperspectral imagery.
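As a concrete illustration of similarity-based material identification, the sketch below labels a target spectrum with the label of its nearest reference spectrum under the spectral angle, a scale-invariant measure widely used in hyperspectral analysis. It is illustrative only, with made-up spectra, and not necessarily one of the measures combined in the thesis:

```python
import math

def spectral_angle(a, b):
    """Angle between two spectra; invariant to overall illumination scaling."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def classify(target, labeled_sources):
    """Predict the material label of `target` as that of the most similar source."""
    return min(labeled_sources, key=lambda lv: spectral_angle(target, lv[1]))[0]

# Hypothetical labeled reference spectra and an unlabeled target spectrum.
sources = [
    ("quartz",   [0.9, 0.8, 0.2, 0.1]),
    ("hematite", [0.1, 0.3, 0.7, 0.9]),
]
target = [0.45, 0.4, 0.1, 0.05]  # quartz-like shape at half the brightness
print(classify(target, sources))  # -> quartz
```

Because the angle ignores overall magnitude, the half-brightness target still matches the quartz reference exactly — one simple way a measure can absorb a difference in capture conditions.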
