  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Rate-distortion optimal vector selection in frame based compression

Ryen, Tom January 2005 (has links)
In signal compression we distinguish between lossless and lossy compression. In lossless compression, the encoded signal is more bit efficient than the original signal and decodes to exactly the original. In lossy compression, the encoded signal represents an approximation of the original signal, but requires fewer bits. In the latter situation, the major issue is to find the best possible rate-distortion (RD) tradeoff. The rate-distortion function (RDF) represents the theoretical lower bound on the distortion between the original and the reconstructed signal, subject to a given total bit rate for the compressed signal; this bound holds for any compression scheme. If the compression scheme is given, we can find its operational RDF (ORDF). The main contribution of this dissertation is a method that finds the operational rate-distortion optimal solution for an overcomplete signal decomposition. The idea of using overcomplete dictionaries, or frames, is to obtain a sparse representation of the signal. Traditionally, suboptimal algorithms, such as Matching Pursuit (MP), are used for this purpose. Given the frame and the Variable Length Codeword (VLC) table embedded in the entropy coder, establishing the best RD trade-off is a problem of very high complexity. The proposed method reduces this complexity significantly by structuring the solution approach such that the dependent quantizer allocation problem reduces to an independent one. In addition, the use of a solution tree further reduces the complexity. It is important to note that this large reduction in complexity is achieved without sacrificing optimality. The optimal rate-distortion solution depends on the frame selection and the VLC table embedded in the entropy coder; thus, frame design and VLC optimization are part of this work.
Extensive coding experiments are presented, where Gaussian AR(1) processes and various electrocardiogram (ECG) signals are used as input signals. The experiments demonstrate that the new approach outperforms Rate-Distortion Optimized (RDO) Matching Pursuit, previously proposed in [17], in the rate-distortion sense.
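The suboptimal baseline named in the abstract, Matching Pursuit, greedily picks one frame atom at a time. As a rough illustration only (this is the generic MP algorithm, not the thesis's optimal method; the random frame and signal are invented for the demo):

```python
import numpy as np

def matching_pursuit(signal, frame, n_atoms):
    """Greedy sparse approximation of `signal` over an overcomplete
    dictionary (`frame`; columns are unit-norm atoms)."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(frame.shape[1])
    for _ in range(n_atoms):
        # Pick the atom most correlated with the current residual.
        correlations = frame.T @ residual
        k = int(np.argmax(np.abs(correlations)))
        coeffs[k] += correlations[k]
        residual -= correlations[k] * frame[:, k]
    return coeffs, residual

# Tiny demo: a 4-sample signal over a 6-atom random frame.
rng = np.random.default_rng(0)
frame = rng.standard_normal((4, 6))
frame /= np.linalg.norm(frame, axis=0)   # normalize atoms to unit norm
signal = rng.standard_normal(4)
coeffs, residual = matching_pursuit(signal, frame, n_atoms=3)
```

Each iteration shrinks the residual, but because the choices are greedy the resulting coefficient selection is in general not RD-optimal, which is the gap the dissertation's method closes.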
12

Path planning using probabilistic cell decomposition

Lingelbach, Frank January 2005 (has links)
<p>The problem of path planning occurs in many areas, such as computational biology, computer animations and computer-aided design. It is of particular importance in the field of robotics. Here, the task is to find a feasible path/trajectory that the robot can follow from a start to a goal configuration. For the basic path planning problem it is often assumed that a perfect model of the world surrounding the robot is known. In industrial robotics, such models are often based on, for example, CAD models. However, in applications of autonomous service robotics less knowledge about the environment is available. Efficient and robust path planning algorithms are therefore of major importance. To be truly autonomous, a robot should be able to plan all motions on its own. Furthermore, it has to be able to plan and re-plan in real time, which puts hard constraints on the acceptable computation time.</p><p>This thesis presents a novel path planning method called Probabilistic Cell Decomposition (PCD). This approach combines the underlying method of cell decomposition with the concept of probabilistic sampling. The cell decomposition is iteratively refined until a collision-free path is found. In each intermediate step the current cell decomposition is used to guide probabilistic sampling to important areas.</p><p>The basic PCD algorithm can be decomposed into a number of components such as graph search, local planning, cell splitting and probabilistic sampling. For each component different approaches are discussed. The performance of PCD is then tested on a set of benchmark problems. The results are compared to those obtained by one of the most commonly used probabilistic path planning methods, namely Rapidly-exploring Random Trees. It is shown that PCD efficiently solves various kinds of path planning problems.</p><p>Planning for autonomous manipulation often involves additional path constraints beyond collision avoidance. 
This thesis presents an application of PCD to path planning for a mobile manipulator. The robot has to fetch a carton of milk from the refrigerator and place it on the kitchen table. Here, opening the refrigerator involves motion with a pre-specified end-effector path. The results show that planning the different motions for the high-level task takes less time than actually executing them. The whole series of subtasks takes about 1.5 seconds to compute.</p>
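Two of the PCD components named above, probabilistic sampling and cell splitting, can be sketched in a toy 2-D setting. Everything here (the unit-square world, the single rectangular obstacle, the 1/8 size threshold) is invented for illustration; the thesis itself targets high-dimensional configuration spaces:

```python
import random

random.seed(0)

# Toy 2D world on the unit square: one rectangular obstacle.
def in_collision(x, y):
    return 0.4 <= x <= 0.6 and y <= 0.7

def classify(cell, n_samples=50):
    """Probabilistic sampling: label a cell 'free', 'occupied' or 'mixed'
    based on uniform random samples drawn inside it."""
    x0, y0, x1, y1 = cell
    hits = sum(in_collision(random.uniform(x0, x1), random.uniform(y0, y1))
               for _ in range(n_samples))
    if hits == 0:
        return 'free'
    if hits == n_samples:
        return 'occupied'
    return 'mixed'

def split(cell):
    """Cell splitting: halve a cell along its longer side, refining the
    decomposition exactly where it is still ambiguous."""
    x0, y0, x1, y1 = cell
    if x1 - x0 >= y1 - y0:
        xm = (x0 + x1) / 2
        return [(x0, y0, xm, y1), (xm, y0, x1, y1)]
    ym = (y0 + y1) / 2
    return [(x0, y0, x1, ym), (x0, ym, x1, y1)]

# Iteratively refine: split mixed cells until all are smaller than 1/8.
cells = [(0.0, 0.0, 1.0, 1.0)]
for _ in range(200):
    mixed = [c for c in cells
             if max(c[2] - c[0], c[3] - c[1]) > 0.125
             and classify(c) == 'mixed']
    if not mixed:
        break
    cells.remove(mixed[0])
    cells.extend(split(mixed[0]))
```

In the full algorithm the refinement would be driven by a graph search over possibly-free cells (the channel from start to goal) rather than by cell size, and a local planner would validate the final path; those components are omitted here.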
13

Modelling and control of auxiliary loads in heavy vehicles

Pettersson, Niklas January 2004 (has links)
No description available.
14

Multiuser diversity orthogonal frequency division multiple access systems

Svedman, Patrick January 2004 (has links)
<p>Multiuser diversity can be used to significantly increase system throughput in wireless communication systems. The idea is to schedule users when they experience good channel conditions and let them wait when the channels are weak. In this thesis, several aspects of multiuser diversity OFDMA systems are investigated. An adaptive reduced feedback scheme for multiuser diversity OFDMA is proposed. It significantly reduces the total feedback overhead while maintaining a multiuser diversity gain. The scheme uses clusters of sub-carriers as feedback units and only feeds back information about the fading peaks. Furthermore, an opportunistic beamforming scheme for clustered OFDM is presented and evaluated. A key aspect of the opportunistic beamforming scheme is that it increases the frequency fading of users with relatively flat channels, which increases their likelihood of being scheduled. Scheduling is an important aspect of multiuser diversity. A modified proportional fair scheduler is proposed in this thesis. It incorporates user individual target bit-rates and delays and a tunable fairness level. These features make the scheduler more attractive for future mixed-service wireless systems. The use of the feedback information in the opportunistic beamforming process is discussed and evaluated. This extra information can help to increase the performance of unfairly treated users in the system. Several aspects of the proposed system are evaluated by means of simulation, using the 3GPP spatial channel model. In the simulations, the clustered beamforming performs better than three comparison systems. The modified proportional fair scheduler manages to divide the resources according to the user targets, while at the same time exploiting the multiuser diversity as well as the standard proportional fair algorithm. The thesis also includes results on coded packet error rate estimation from a channel realization by means of a two-dimensional table. 
This can be useful in large network simulations as well as in designing adaptive modulation schemes.</p>
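The standard proportional fair rule that the modified scheduler is compared against is compact enough to sketch: in each slot, serve the user maximizing instantaneous rate over exponentially averaged throughput. This is the textbook baseline only (the thesis's modified scheduler additionally handles targets, delays and a fairness knob); the two-user fading trace is invented for the demo:

```python
import random

random.seed(2)

def proportional_fair(rates_over_time, tc=10.0):
    """Serve, per slot, the user maximizing rate / average throughput,
    then update the exponential throughput averages (window `tc` slots)."""
    n_users = len(rates_over_time[0])
    avg = [1e-6] * n_users          # tiny init so everyone is served early
    schedule = []
    for rates in rates_over_time:
        k = max(range(n_users), key=lambda i: rates[i] / avg[i])
        schedule.append(k)
        for i in range(n_users):
            served = rates[i] if i == k else 0.0
            avg[i] = (1 - 1 / tc) * avg[i] + (1 / tc) * served
    return schedule, avg

# Two users with independent fading; user 1 is stronger on average.
slots = [[random.random(), 2 * random.random()] for _ in range(500)]
schedule, avg = proportional_fair(slots)
```

Because the metric is scale-invariant, the stronger user does not monopolize the channel: each user tends to be served near its own fading peaks, which is exactly the multiuser diversity effect described above.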
15

Design and analysis of feedback structures in chemical plants and biochemical systems

Schmidt, Henning January 2004 (has links)
<p>This thesis deals with modelling, analysis, and design of interactions between subsystems in chemical process plants and intracellular biochemical processes. In the first part, the focus is on the selection of decentralized feedback control structures for plants in the chemical process industry, with the aim of achieving a desired performance in the presence of interactions. The second part focuses on modelling and analysis of complex biochemical networks, with the aim of unravelling the impact of interactions between genes, proteins, and metabolites on cell functions. </p><p>Decentralized control is almost the de-facto standard for control of large-scale systems, and in particular for systems in the process industry. An important task in the design of a decentralized control system is the selection of the control configuration, the so-called input-output pairing, which effectively decides the subsystems. Previous research addressing this problem has primarily focused on the effect of interactions on stability. In this thesis, the problem of selecting control configurations that can deliver a desired control performance is addressed. It is shown that existing measures of interactions, such as the relative gain array (RGA), are poor for selecting configurations for performance due to their inherent assumption of perfect control. Furthermore, several model based tools for the selection of control configurations based on performance considerations are proposed. </p><p>Central functions in the cell are often linked to complex dynamic behaviors, such as sustained oscillations and multistability, in a biochemical reaction network. Determination of the specific interactions underlying such behaviors is important, for example, to determine sensitivity, robustness, and modelling requirements of given cell functions. 
A method for identifying the feedback connections and involved subsystems, within a biochemical network, that are the main sources of a complex dynamic behavior is proposed. The effectiveness of the method is illustrated on examples involving cell cycle control, circadian rhythms and glycolytic oscillations. Also, a method for identifying structured dynamic models of biochemical networks, based on experimental data, is proposed. The method is based on results from system identification theory, using time-series measurement data of expression profiles and concentrations of the involved biochemical components. Finally, in order to reduce the complexity of obtained network models, a method for decomposing large-scale networks into biologically meaningful subnetworks is proposed.</p>
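The relative gain array criticized above has a simple closed form, Lambda = G ∘ (G⁻¹)ᵀ, which makes the thesis's point easy to see numerically. A minimal sketch; the 2×2 gain matrix is a classic distillation-type example from the interaction-analysis literature, not taken from this thesis:

```python
import numpy as np

def relative_gain_array(G):
    """RGA of a square steady-state gain matrix G:
    Lambda = G .* (inv(G))^T, elementwise product."""
    return G * np.linalg.inv(G).T

# Classic 2x2 distillation-type gain matrix with strong interaction.
G = np.array([[0.878, -0.864],
              [1.082, -1.096]])
rga = relative_gain_array(G)
```

Rows and columns of the RGA always sum to one; a diagonal element far from 1 (here around 35) signals severe interaction under the perfect-control assumption — the very assumption the thesis argues makes the RGA a poor guide for performance-based configuration selection.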
19

Shortest path methods in representation and compression of signals and image contours

Nygaard, Ranveig January 2000 (has links)
<p>Signal compression is an important problem encountered in many applications, and various techniques have been proposed over the years for addressing it. The focus of the dissertation is on signal representation and compression by use of optimization theory, more specifically shortest path methods.</p><p>Several new signal compression algorithms are presented. They are based on the coding of line segments which are used to approximate, and thereby represent, the signal. These segments are fitted in a way that is optimal given some constraints on the solution. By formulating the compression problem as a graph theory problem, shortest path methods can be applied in order to yield optimal compression with respect to the given constraints.</p><p>The approaches focused on in this dissertation mainly have their origin in ECG compression and are often referred to as time domain compression methods. Coding by time domain methods is based on the idea of extracting a subset of <i>significant</i> signal samples to represent the signal. The key to a successful algorithm is a good rule for determining the most significant samples. Between any two succeeding samples in the extracted sample set, different functions are applied in reconstruction of the signal. These functions are fitted in a way that guarantees minimal reconstruction error under the given constraints. Two main categories of compression schemes are developed:</p><p>1. Interpolating methods, in which equality between the original and reconstructed signal is insisted on at the points of extraction.</p><p>2. Non-interpolating methods, where the interpolation restriction is relaxed.</p><p>Both first and second order polynomials are used in reconstruction of the signal. An approach is also developed where multiple error measures are applied within one compression algorithm. 
</p><p>The approach of extracting the most significant samples is further developed by measuring the samples in terms of the number of bits needed to encode them. In this way an approach is obtained which is optimal in the rate-distortion sense. </p><p>Although the approaches developed are applicable to any type of signal, the focus of this dissertation is on the compression of electrocardiogram (ECG) signals and image contours. ECG signal compression has traditionally been </p>
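The graph formulation described above can be sketched concretely: let every sample index be a node and every candidate line segment (i, j) an edge, then run a shortest-path / dynamic-programming pass over the resulting DAG. This toy version minimizes the number of retained samples under a maximum-error constraint with first-order interpolation — one of several possible formulations, not necessarily the dissertation's exact objective, and the test signal is invented:

```python
import math

def max_interp_error(signal, i, j):
    """Max abs. error when samples i..j are replaced by the straight line
    through (i, signal[i]) and (j, signal[j])."""
    err = 0.0
    for k in range(i + 1, j):
        interp = signal[i] + (signal[j] - signal[i]) * (k - i) / (j - i)
        err = max(err, abs(signal[k] - interp))
    return err

def optimal_segments(signal, tol):
    """Shortest path in a DAG whose nodes are sample indices and whose
    edges (i, j) are segments with max error <= tol; minimizes the
    number of retained samples."""
    n = len(signal)
    cost = [math.inf] * n
    prev = [None] * n
    cost[0] = 0
    for j in range(1, n):
        for i in range(j):
            if cost[i] + 1 < cost[j] and max_interp_error(signal, i, j) <= tol:
                cost[j] = cost[i] + 1
                prev[j] = i
    # Backtrack the retained sample indices.
    path, k = [], n - 1
    while k is not None:
        path.append(k)
        k = prev[k]
    return path[::-1]

# A piecewise-linear signal compresses exactly to its breakpoints.
sig = [0, 1, 2, 3, 2, 1, 0, 1, 2]
print(optimal_segments(sig, tol=0.0))   # → [0, 3, 6, 8]
```

Replacing the unit edge cost with the bit cost of encoding each segment turns the same shortest-path machinery into the rate-distortion-optimal variant mentioned in the abstract.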
20

Signal analysis of out-of-hospital cardiac arrest electrocardiograms for decision support during cardiopulmonary resuscitation

Eftestøl, Trygve C. January 2000 (has links)
<p>This thesis focuses on signal analysis of electrocardiograms (ECG) from patients in out-of-hospital cardiac arrest. The application of such methods may eventually contribute to guiding therapy towards improved survival rates, which in general are dismally low but vary among ambulance systems depending on the time from arrest to the first electrical defibrillation given to the patient.</p><p>One of the possible reasons for this is that a large part of the valuable therapy time is wasted in futile attempts to restart the heart by electrical defibrillation. Using this time to provide precordial compressions and ventilations to establish and keep up an artificial supply of oxygenated blood would serve the patient better. It would improve the heart condition and thus increase the chance of a successful defibrillation outcome.</p><p>By predicting defibrillation outcome, the ratio of failed defibrillations can be decreased. The ability to monitor therapeutic efficacy can help the rescuer to optimise performance. A decision support system involving ECG signal analysis to extract descriptive features has been investigated for these purposes in earlier work. We propose to use a pattern recognition framework for the decision support system. In contrast to most earlier work with one-dimensional features, this allows analysis of multivariate information. In our experiments we demonstrate that performance both in outcome prediction and monitoring is better when the analysis is based on combined rather than one-dimensional features. We also propose and experimentally verify methods firstly to control performance and secondly to ensure that performance results are reliable.</p><p>Another problem also has to be resolved. The precordial compressions and ventilations cause artefacts in the ECG, so that treatment has to be stopped during analysis. We propose using adaptive filters for removing such artefacts. 
These filters use reference signals providing information correlated to components in the artefacts. In one of our experiments we mix human ECG with artefacts from animal ECG and show that the adaptive filter is successful in restoring the original human ECG signal.</p>
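The adaptive-filter idea described above — estimate the artefact from a correlated reference channel and subtract it — can be sketched with a basic LMS noise canceller. This is a generic textbook LMS, not the thesis's actual filters; the "ECG", the artefact model and the compression-depth-style reference are all synthetic:

```python
import math

def lms_filter(primary, reference, n_taps=4, mu=0.05):
    """LMS adaptive noise canceller: predict the artefact component of
    `primary` from delayed `reference` samples; the prediction error is
    the cleaned signal."""
    w = [0.0] * n_taps
    cleaned = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))   # artefact estimate
        e = primary[n] - y                         # cleaned output
        w = [wk + 2 * mu * e * xk for wk, xk in zip(w, x)]  # LMS update
        cleaned.append(e)
    return cleaned

# Synthetic demo: slow "ECG" sine buried under an artefact that is a
# scaled copy of the (faster) reference signal.
ecg = [math.sin(0.1 * n) for n in range(2000)]
ref = [math.sin(0.7 * n) for n in range(2000)]   # e.g. a compression-depth channel
primary = [s + 2.0 * r for s, r in zip(ecg, ref)]
cleaned = lms_filter(primary, ref)
```

Because the filter can only synthesize signal content present in the reference, the uncorrelated ECG passes through while the compression artefact is cancelled — the same principle as the human/animal mixing experiment described in the abstract.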
