131

Scheduling and Optimization of Fault-Tolerant Distributed Embedded Systems

Izosimov, Viacheslav January 2009
Safety-critical applications have to function correctly and deliver a high level of quality-of-service even in the presence of faults. This thesis deals with techniques for tolerating the effects of transient and intermittent faults. Re-execution, software replication, and rollback recovery with checkpointing are used to provide the required level of fault tolerance at the software level. Hardening is used to increase the reliability of hardware components. These techniques are considered in the context of distributed real-time systems with static and quasi-static scheduling. Many safety-critical applications also have strict time and cost constraints, which means that not only must faults be tolerated but the constraints must also be satisfied. Hence, efficient system design approaches with careful consideration of fault tolerance are required. This thesis proposes several design optimization strategies and scheduling techniques that take fault tolerance into account. The design optimization tasks addressed include, among others, process mapping, fault tolerance policy assignment, checkpoint distribution, and trading off between hardware hardening and software re-execution. Particular optimization approaches are also proposed to consider the debuggability requirements of fault-tolerant applications. Finally, quality-of-service aspects have been addressed in the thesis for fault-tolerant embedded systems with soft and hard timing constraints. The proposed scheduling and design optimization strategies have been thoroughly evaluated with extensive experiments. The experimental results show that considering fault tolerance during system-level design optimization is essential when designing cost-effective and high-quality fault-tolerant embedded systems.
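As a concrete illustration of the re-execution versus checkpointing trade-off discussed above, here is a minimal worst-case execution time (WCET) sketch in Python. The uniform cost model (checkpoint overhead `chi`, recovery overhead `mu`) and the function names are simplifying assumptions for illustration, not the thesis's exact formulation.

```python
def wcet_reexecution(c: float, k: int, mu: float) -> float:
    """Pure re-execution: each of k transient faults forces the whole
    process (execution time c) to run again after a recovery overhead mu."""
    return c + k * (c + mu)

def wcet_checkpointing(c: float, k: int, n: int, chi: float, mu: float) -> float:
    """Rollback recovery with n equidistant checkpoints (overhead chi each):
    only the current execution segment (c / n) is repeated after a fault."""
    return c + n * chi + k * (c / n + mu)

if __name__ == "__main__":
    c, k, mu, chi = 100.0, 2, 5.0, 3.0
    print(wcet_reexecution(c, k, mu))        # 310.0
    best_n = min(range(1, 11), key=lambda n: wcet_checkpointing(c, k, n, chi, mu))
    print(best_n, wcet_checkpointing(c, k, best_n, chi, mu))  # 8 159.0
```

The small search over `n` mirrors the checkpoint distribution task mentioned in the abstract: too few checkpoints make recovery expensive, too many make the checkpointing overhead dominate.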
132

Aspects of a Constraint Optimisation Problem

Thapper, Johan January 2010
In this thesis we study a constraint optimisation problem called the maximum solution problem, henceforth referred to as Max Sol. It is defined as the problem of optimising a linear objective function over a constraint satisfaction problem (Csp) instance on a finite domain. Each variable in the instance is given a non-negative rational weight, and each domain element is also assigned a numerical value, for example taken from the natural numbers. From this point of view, the problem is seen to be a natural extension of integer linear programming over a bounded domain. We study both the time complexity of approximating Max Sol, and the time complexity of obtaining an optimal solution. In the latter case, we also construct some exponential-time algorithms. The algebraic method is a powerful tool for studying Csp-related problems. It was introduced for the decision version of Csp, and has been extended to a number of other problems, including Max Sol. With this technique we establish approximability classifications for certain families of constraint languages, based on algebraic characterisations. We also show how the concept of a core for relational structures can be extended in order to determine when constant unary relations can be added to a constraint language without changing the computational complexity of finding an optimal solution to Max Sol. Using this result we show that, in a specific sense, when studying the computational complexity of Max Sol, we only need to consider constraint languages with all constant unary relations included. Some optimisation problems are known to be approximable within some constant ratio, but are not believed to be approximable within an arbitrarily small constant ratio. For such problems, it is of interest to find the best ratio within which the problem can be approximated, or at least to give some bounds on this constant. We study this aspect of the (weighted) Max Csp problem for graphs. In this optimisation problem, the number of satisfied constraints is to be maximised. We introduce a method for studying approximation ratios which is based on a new parameter on the space of all graphs. Informally, we think of this parameter as an approximation distance; knowing the distance between two graphs, we can bound the approximation ratio of one of them, given a bound for the other. We further show how the basic idea can also be applied to the Max Sol problem.
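To make the problem definition concrete, here is a small brute-force sketch of Max Sol in Python. The function and variable names are illustrative; the thesis is concerned with complexity classifications and non-trivial algorithms, not with this naive enumeration.

```python
from itertools import product

def max_sol(n_vars, domain, weights, constraints, value=lambda d: d):
    """Maximise sum(weights[i] * value(x[i])) over all assignments that
    satisfy every constraint. A constraint is a pair (scope, relation),
    where relation is the set of allowed tuples over the scope."""
    best, best_assign = None, None
    for assign in product(domain, repeat=n_vars):
        if all(tuple(assign[v] for v in scope) in rel
               for scope, rel in constraints):
            obj = sum(w * value(assign[i]) for i, w in enumerate(weights))
            if best is None or obj > best:
                best, best_assign = obj, assign
    return best, best_assign

# Example: two variables over {0, 1, 2} with the disequality constraint.
neq = {(a, b) for a in range(3) for b in range(3) if a != b}
print(max_sol(2, range(3), [1.0, 2.0], [((0, 1), neq)]))  # (5.0, (1, 2))
```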
133

A Tensor Framework for Multidimensional Signal Processing

Westin, Carl-Fredrik January 1994
This thesis deals with filtering of multidimensional signals. A large part of the thesis is devoted to a novel filtering method termed "normalized convolution". The method performs a local expansion of a signal in a chosen filter basis which does not necessarily have to be orthonormal. A key feature of the method is that it can deal with uncertain data when additional certainty statements are available for the data and/or the filters. It is shown how false operator responses due to missing or uncertain data can be significantly reduced or eliminated using this technique. Perhaps the most well-known of such effects are the various "edge effects" which invariably occur at the edges of the input data set. The method is an example of the signal/certainty philosophy, i.e. the separation of both data and operator into a signal part and a certainty part. An estimate of the certainty must accompany the data. Missing data are simply handled by setting the certainty to zero. Localization or windowing of operators is done using an applicability function, the operator equivalent of certainty, not by changing the actual operator coefficients. Spatially or temporally limited operators are handled by setting the applicability function to zero outside the window. The use of tensors in estimation of local structure and orientation using spatiotemporal quadrature filters is reviewed and related to dual tensor bases. The tensor representation conveys the degree and type of local anisotropy. For image sequences, the shape of the tensors describes the local structure of the spatiotemporal neighbourhood and provides information about local velocity. The tensor representation also conveys information for deciding if true flow or only normal flow is present. It is shown how normal flow estimates can be combined into a true flow using averaging of this tensor field description. Important aspects of representation and techniques for grouping local orientation estimates into global line information are discussed. The uniformity of some standard parameter spaces for line segmentation is investigated. The analysis shows that, to avoid discontinuities, great care should be taken when choosing the parameter space for a particular problem. A new parameter mapping well suited for line extraction, the Möbius strip parameterization, is defined. The method has similarities to the Hough transform. Estimation of local frequency and bandwidth is also discussed. Local frequency is an important concept which provides an indication of the appropriate range of scales for subsequent analysis. One-dimensional and two-dimensional examples of local frequency estimation are given. The local bandwidth estimate is used for defining a certainty measure. The certainty measure enables the use of a normalized averaging process, increasing the robustness and accuracy of the frequency statements.
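The simplest instance of normalized convolution, normalized averaging with a constant basis function, can be sketched as follows. Missing samples get certainty zero, exactly as the abstract describes; the small triangular applicability window and the helper name are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import convolve

def normalized_average(signal, certainty, applicability):
    """Divide the certainty-weighted local average by the local certainty
    mass, so that missing data do not drag the estimate towards zero."""
    num = convolve(certainty * signal, applicability, mode="constant")
    den = convolve(certainty, applicability, mode="constant")
    out = np.zeros_like(num)
    np.divide(num, den, out=out, where=den > 1e-12)  # avoid 0/0 with no data
    return out

# 1D example with a missing sample in the middle:
x = np.array([1.0, 2.0, 0.0, 4.0, 5.0])  # the 0.0 is unknown, not a value
c = np.array([1.0, 1.0, 0.0, 1.0, 1.0])  # certainty: 0 marks missing data
a = np.array([1.0, 2.0, 1.0])            # applicability window
print(normalized_average(x, c, a))       # middle sample is filled in as 3.0
```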
134

Signal Representation and Processing using Operator Groups

Nordberg, Klas January 1994
This thesis presents a signal representation in terms of operators. The signal is assumed to be an element of a vector space and subject to transformations by operators. The operators form continuous groups, so-called Lie groups. The representation can be used for signals in general, in particular when spatial relations are undefined, and it does not require a basis of the signal space to be useful. Special attention is given to orthogonal operator groups which are generated by anti-Hermitian operators by means of the exponential mapping. It is shown that the eigensystem of the group generator is strongly related to properties of the corresponding operator group. For one-parameter orthogonal operator groups, a phase concept is introduced. This phase can, for instance, be used to distinguish between spatially even and odd signals and, therefore, corresponds to the usual phase for multi-dimensional signals. Given one operator group that represents the variation of the signal and one operator group that represents the variation of a corresponding feature descriptor, an equivariant mapping maps the signal to the descriptor such that the two operator groups correspond. Sufficient conditions are derived for a general mapping to be equivariant with respect to a pair of operator groups. These conditions are expressed in terms of the generators of the two operator groups. As a special case, second order homogeneous mappings are considered, and examples of how second order mappings can be used to obtain different types of feature descriptors are presented, in particular for operator groups that are homomorphic to rotations in two and three dimensions, respectively. A generalization of directed quadrature filters is made. All feature extraction algorithms that are presented are discussed in terms of phase invariance. Simple procedures that estimate group generators corresponding to one-parameter groups are derived and tested on an example. The resulting generator is evaluated by using its eigensystem in implementations of two feature extraction algorithms. It is shown that the resulting feature descriptor has good accuracy with respect to the corresponding feature value, even in the presence of signal noise.
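A quick numeric illustration of the property the abstract builds on: the exponential mapping applied to an anti-Hermitian (here, real skew-symmetric) generator yields an orthogonal operator, and varying the parameter traces out a one-parameter orthogonal group. The specific random generator is an arbitrary example, not one from the thesis.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M - M.T                              # skew-symmetric: A^T = -A

for t in (0.5, 1.0, 2.0):
    U = expm(t * A)                      # group element exp(t A)
    print(t, np.allclose(U.T @ U, np.eye(4)))  # orthogonality: U^T U = I

# One-parameter group property: exp(s A) exp(t A) = exp((s + t) A)
print(np.allclose(expm(0.5 * A) @ expm(1.0 * A), expm(1.5 * A)))
```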
135

Adaptive Multidimensional Filtering

Haglund, Leif January 1991
This thesis contains a presentation and an analysis of adaptive filtering strategies for multidimensional data. The size, shape and orientation of the filter are signal controlled and thus adapted locally to each neighbourhood according to a predefined model. The filter is constructed as a linear weighting of fixed oriented bandpass filters having the same shape but different orientations. The adaptive filtering methods have been tested on both real data and synthesized test data in 2D, e.g. still images, and 3D, e.g. image sequences or volumes, with good results. In 4D, e.g. volume sequences, the algorithm is given in its mathematical form. The weighting coefficients are given by the inner products of a tensor representing the local structure of the data and the tensors representing the orientations of the filters. The procedure and filter design for estimating the representation tensor are described. In 2D, the tensor contains information about the local energy, the optimal orientation and a certainty of the orientation. In 3D, the information in the tensor is the energy, the normal to the best fitting local plane and the tangent to the best fitting line, together with certainties of these orientations. In the case of time sequences, a quantitative comparison of the proposed method and other (optical flow) algorithms is presented. The estimation of control information is made at different scales. There are two main reasons for this. First, a single filter has a particular limited passband which may or may not be tuned to the differently sized objects to be described. Second, size or scale is a descriptive feature in its own right. All of this requires the integration of measurements from different scales. The increasing interest in wavelet theory supports the idea that a multiresolution approach is necessary. Hence the resulting adaptive filter will also adapt in size and to different orientations at different scales.
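The weighting step described above can be sketched as follows: each fixed filter's orientation is encoded as an outer-product tensor, and its weight is the inner product of that tensor with the local structure tensor. The 2D setting and the three filter orientations are illustrative assumptions, not the thesis's exact filter set.

```python
import numpy as np

angles = np.deg2rad([0.0, 60.0, 120.0])          # fixed filter orientations
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
filter_tensors = [np.outer(d, d) for d in dirs]  # N_k = n_k n_k^T

def filter_weights(T):
    """Inner products <T, N_k> (full double contraction) give the
    coefficients for blending the fixed oriented bandpass filters."""
    return np.array([np.tensordot(T, N) for N in filter_tensors])

# Structure tensor of a strongly oriented (horizontal) neighbourhood:
T = np.array([[1.0, 0.0],
              [0.0, 0.1]])
print(filter_weights(T))  # the 0-degree filter receives the largest weight
```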
136

Flexible Interleaving Sub–systems for FEC in Baseband Processors

Asghar, Rizwan January 2010
Interleaving is always used in combination with error control coding. It spreads burst noise, changing it into white noise so that the noise-induced bit errors can be corrected. With the advancement of communication systems and the substantial increase in bandwidth requirements, the use of coding for forward error correction (FEC) has become an integral part of modern communication systems. Dividing the FEC sub-systems into two categories, i.e. channel coding/decoding and interleaving/de-interleaving, the latter appears to be more varied in permutation functions, block sizes and throughput requirements. Interleaving/de-interleaving consumes more silicon due to the cost of the permutation tables used in conventional LUT-based approaches. For devices with multi-standard support, the silicon cost of the permutation tables can grow much higher, resulting in an inefficient solution. Therefore, hardware re-use among different interleaver modules to support a multimode processing platform is significant. The broadness of the interleaving algorithms gives rise to many challenges when considering a true multimode interleaver implementation. The main challenges include real-time low-latency computation of different permutation functions, managing a wide range of interleaving block sizes, higher throughput, low cost, fast and dynamic reconfiguration for different standards, and introducing parallelism wherever necessary. It is difficult to merge all currently used interleavers into a single architecture because of different algorithms and throughputs; however, the fact that multimode coverage does not require multiple interleavers to work at the same time provides opportunities to use hardware multiplexing. The multimode functionality is then achieved by fast switching between different standards. We used algorithmic-level transformations, such as 2-D transformation and the realization of recursive computations, which appear to be the key to bringing different interleaving functions to the same level. In general, the work focuses on function-level hardware re-use, but it also utilizes classical data-path-level optimizations for efficient hardware multiplexing among different standards. The research has resulted in multiple flexible architectures supporting multiple standards. These architectures target both channel interleaving and turbo-code interleaving. The presented architectures can support both types of communication systems, i.e. single-stream and multi-stream systems. Introducing the algorithmic-level transformations and then applying the hardware re-use methodology has resulted in lower silicon cost while supporting sufficient throughput. According to a database search in March 2010, we have the first multimode interleaver core covering WLAN (802.11a/b/g and 802.11n), WiMAX (802.16e), 3GPP-WCDMA, 3GPP-LTE, and DVB-T/H on a single architecture with minimum silicon cost. The research also provides support for parallel interleaver address generation using different architectures. It provides the algorithmic modifications and architectures to generate up to 8 addresses in parallel and to handle the memory conflicts on-the-fly. One of the vital requirements for multimode operation is fast switching between different standards, which is supported by the presented architectures with minimal cycle-cost overheads.
Fast switching between different standards allows the baseband processor to re-configure the interleaver architecture on-the-fly and re-use the same hardware for another standard. Lower silicon cost, maximum flexibility and fast switchability among multiple standards at run time make the proposed research a good choice for radio baseband processing platforms.
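As a minimal illustration of the burst-spreading principle stated in the opening sentences, here is a toy row-column block interleaver in Python. The 4x5 block size is arbitrary, and real standards use the far more involved permutation functions this thesis targets.

```python
def interleave(bits, rows, cols):
    """Write row by row, read column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse permutation: write column by column, read row by row."""
    assert len(bits) == rows * cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(20))
tx = interleave(data, 4, 5)
tx[3:7] = ["X"] * 4          # a burst of 4 corrupted symbols on the channel
rx = deinterleave(tx, 4, 5)
print(rx)                    # the burst is scattered into isolated errors
assert deinterleave(interleave(data, 4, 5), 4, 5) == data  # round trip
```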
137

On the Quality of Feature Models

Thörn, Christer January 2010
Variability has become an important aspect of modern software-intensive products and systems. In order to reach new markets and utilize existing resources through reuse, it is necessary to have effective management of variants, configurations, and reusable functionality. The topic of this thesis is the construction of feature models that document and describe variability and commonality. The work aims to contribute to methods for creating feature models that are of high quality, suitable for their intended purpose, correct, and usable. The thesis suggests an approach, complementing existing feature modeling methodologies, that contributes to arriving at a satisfactory modeling result. The approach builds on established quality-improvement practices from other research areas and targets shortcomings in existing feature modeling methods. The requirements for such an approach were derived from an industrial survey and a case study in the automotive domain. The approach was refined and tested in a second case study in the mobile applications domain. The main contributions of the thesis are a quality model for feature models, procedures for prioritizing and evaluating quality in feature models, and an initial set of empirically grounded development principles for reaching certain qualities in feature models. The principal findings of the thesis are that feature models exhibit different qualities depending on certain characteristics and properties of the model. Such properties can be identified, formalized, and influenced in order to guide the development of feature models, and thereby promote certain quality factors of feature models.
138

Antenna array signal processing for high rank data models

Bengtsson, Mats January 2000
139

Fatigue crack initiation and propagation in sandwich structures

Burman, Magnus January 1998
The focus throughout this thesis is on the fatigue characteristics of core materials used in sandwich structures. Three sandwich configurations are investigated, two with cellular foams and one with honeycomb core material. These correspond to typical materials and dimensions used in the marine and aeronautical industry.

A modified four-point bending rig, which enables reversed loading, is successfully used for constant-amplitude fatigue tests of all material configurations. The core materials are tested as used in composite sandwich beams, and through the design of the specimens the desired failure is in shear of the core. Analyses and inspections during and after the tests support the theory that fracture initiation and fatigue failure occur in a large zone of the core with well-distributed micro cracks rather than in a single propagating crack. The fatigue test results are plotted in stress-life diagrams including a Weibull-type function which provides a good-accuracy curve fit to the results. The fatigue life of the core materials is found to be reduced with an increased load ratio, R.

The influence on the strength and fatigue performance of sandwich beams with two types of core damage, an interfacial disbond and a flawed butt-joint, is experimentally investigated. The fatigue failure initiates at the stress-intensity locations which are present due to the pre-damage. The specimens with flawed butt-joints display fatigue crack propagation in the interface between the core and face of the sandwich, while the crack propagates through the thickness of the beams where an initial interface flaw is present. A fatigue failure prediction model is suggested which utilises the fatigue performance of undamaged beams and the strength reduction due to the damages. The approach is correlated with results from fatigue testing and satisfactory correlation is found.

A uni-axial fatigue test method is developed which simplifies the rig and specimens compared to the four-point bend method. A comparison between the results from uni-axial tension/compression fatigue tests and shear fatigue tests shows good correlation, although the R-dependency differs in some cases.

The fatigue crack propagation rates are investigated for two configurations: cracks propagating in pure foam core material and cracks propagating in the core material near and along a sandwich face/core interface. The rate at which a crack propagates stably in the so-called Paris regime is extracted for both Mode I and Mode II loading. The agreement between the Mode I crack propagation rate in the pure foam and in the core/face sandwich interface layer supports the theory that the crack actually propagates in the sandwich core beneath a stiffened resin-rich layer present in the face/core interface. The stress-intensity thresholds and the limits at which the crack growth becomes unstable are further established.

Acoustic Emission (AE) is used to monitor crack initiation and growth in the core, during both static and fatigue loading. It is found that the approximate location of AE hits can be determined, which demonstrates that AE has potential both as a non-destructive testing tool and for studying the failure process of non-visible sub-surface damage in sandwich structures.
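For reference, stable crack growth in the Paris regime mentioned above is conventionally described by the power law below; the material constants C and m are fitted from test data, and the thesis's fitted values are not reproduced here.

```latex
\[
  \frac{da}{dN} = C \,(\Delta K)^{m}, \qquad \Delta K = K_{\max} - K_{\min},
\]
```

where da/dN is the crack growth per load cycle and ΔK the stress-intensity range.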
140

Multi-scale feature tracking and motion estimation

Bretzner, Lars January 1999
This thesis studies the problems of feature tracking and motion estimation and presents an application of these concepts to human-computer interaction. The presentation is divided into three parts. The first part addresses feature tracking in a multi-scale context. Features in an image appear at different scales, and these scales can be expected to change over time due to the size variations that occur when objects move relative to the camera. A scheme for feature tracking is presented which incorporates a mechanism for automatic scale selection, and it is argued that such a mechanism is necessary to handle size variations over time. Experiments demonstrate how the proposed scheme is robust to size variations in situations where a traditional fixed-scale tracker fails. This leads to extended feature trajectories, which are valuable for motion and structure estimation. It is also shown how an object representation suitable for tracking can be built in a conceptually simple way as a multi-scale feature hierarchy with qualitative relations between features at different scales. Experiments illustrate the capability of the proposed hierarchy to handle occlusions and semi-rigid objects. The second part of the thesis develops a geometric framework for computing estimates of 3D structure and motion from sparse feature correspondences in monocular sequences. A tool is presented, called the centered affine trifocal tensor, for motion estimation from three affine views. Moreover, a factorization approach is developed which simultaneously handles point and line correspondences in multiple affine views. Experiments show the influence of several factors on the accuracy of the structure and motion estimates, including noise in the feature localization, perspective effects and the number of feature correspondences. This motion estimation framework is also applied to feature correspondences obtained from the above-mentioned feature tracker. The last part integrates the functionalities from the first two parts into a pre-prototype system which explores new principles for human-computer interaction. The idea is to transfer 3D orientation to a computer using no other equipment than the operator's hand.
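Automatic scale selection of the kind the tracker relies on is commonly implemented by maximizing a scale-normalized derivative response over scales. Below is a minimal sketch using the scale-normalized Laplacian on a synthetic blob; the image, the scale range, and this particular operator choice are illustrative assumptions rather than the thesis's exact scheme.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

size, r = 129, 10.0
y, x = np.mgrid[:size, :size] - size // 2
img = (x**2 + y**2 <= r**2).astype(float)  # a bright disc blob of radius r

scales = np.linspace(1.0, 20.0, 40)        # sigma values to probe
c = size // 2
responses = [abs(s**2 * gaussian_laplace(img, sigma=s)[c, c]) for s in scales]
print(scales[int(np.argmax(responses))])   # peaks near r / sqrt(2) ~ 7.07
```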
