1

Parallel computing techniques for computed tomography

Deng, Junjun 01 May 2011 (has links)
X-ray computed tomography (CT) is a widely adopted medical imaging method that uses projections to recover the internal image of a subject. Since the invention of X-ray CT in the 1970s, several generations of CT scanners have been developed. As 3D image reconstruction grows in popularity, the long processing times associated with these machines must be significantly reduced before they can be practically employed in everyday applications. Parallel computing is a technique that uses multiple computing resources to process a computational task simultaneously; each resource computes only a part of the whole task, thereby greatly reducing computation time. In this thesis, we use parallel computing to speed up reconstruction while preserving image quality. Three representative reconstruction algorithms, namely the Katsevich, EM, and Feldkamp algorithms, are investigated. For the Katsevich algorithm, a distributed-memory PC cluster is used to conduct the experiment: the parallel algorithm partitions the projection data and distributes them to different compute nodes, and upon completion of each sub-task the results are collected by the master node to produce the final image. Because this parallel algorithm uses the same reconstruction formula as its sequential counterpart, it yields an identical image. The parallelization of the iterative (EM) CT algorithm uses the same PC cluster; however, because it is based on a local CT reconstruction algorithm, which differs from the sequential EM algorithm, its image results differ from those of the sequential counterpart. Moreover, a special strategy using inhomogeneous resolution was used to further speed up the computation; the results showed that image quality was largely preserved while computation time was greatly reduced. Unlike the two previous approaches, the third parallel implementation uses a shared-memory computer. Three major acceleration methods, SIMD (single instruction, multiple data), multi-threading, and OS (ordered subsets), were employed to speed up the computation. Initial investigations showed that image quality was comparable to that of the conventional approach while computation speed was significantly increased.
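A minimal illustration (not code from the thesis) of the partition/distribute/collect pattern described above, using Python's multiprocessing on a toy 2-D parallel-beam backprojection; the thesis itself targets cone-beam algorithms (Katsevich, Feldkamp, EM) on a distributed-memory cluster, and all sizes and data below are assumed placeholders.

```python
# Illustrative sketch: projection-domain parallelism for backprojection-style
# CT reconstruction. Each worker backprojects only its subset of projection
# angles; the master sums the partial images, mirroring the
# partition/distribute/collect pattern described in the abstract.
import numpy as np
from multiprocessing import Pool

N = 128  # image is N x N pixels (assumed toy size)

def backproject_subset(args):
    """Unfiltered 2D parallel-beam backprojection of one subset of angles."""
    sinogram_subset, angles_subset = args
    xs, ys = np.meshgrid(np.arange(N) - N / 2, np.arange(N) - N / 2)
    partial = np.zeros((N, N))
    for proj, theta in zip(sinogram_subset, angles_subset):
        # Detector coordinate of each pixel for this view.
        t = xs * np.cos(theta) + ys * np.sin(theta) + N / 2
        idx = np.clip(t.astype(int), 0, proj.size - 1)
        partial += proj[idx]
    return partial

if __name__ == "__main__":
    angles = np.linspace(0, np.pi, 180, endpoint=False)
    sinogram = np.random.rand(len(angles), N)      # placeholder projection data
    chunks = 4                                      # e.g. 4 compute nodes/processes
    tasks = list(zip(np.array_split(sinogram, chunks),
                     np.array_split(angles, chunks)))
    with Pool(chunks) as pool:
        partial_images = pool.map(backproject_subset, tasks)
    image = sum(partial_images)                     # master collects and merges
```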
2

Numerical Solution of the coupled algebraic Riccati equations

Rajasingam, Prasanthan 01 December 2013 (has links)
In this paper we develop new and improved results for the numerical solution of coupled algebraic Riccati equations. First, we provide improved matrix upper bounds on the positive semidefinite solution of the unified coupled algebraic Riccati equations. Our approach is largely inspired by recent results established by Liu and Zhang; our main results tighten the estimates of the relevant dominant eigenvalues, and by relaxing a key restriction our upper bound applies to a larger class of problems. We also present an iterative algorithm to refine the new upper bounds and the lower bounds and to numerically compute solutions of the unified coupled algebraic Riccati equations. This construction follows the approach of Gao, Xue, and Sun, but we use different bounds, which leads to a different convergence analysis. In addition, we provide new matrix upper bounds for the positive semidefinite solution of the continuous coupled algebraic Riccati equations: using an alternative primary assumption, we present a new upper bound, following the idea of Davies, Shi, and Wiltshire for the uncoupled equation and extending their results to the coupled case. We also present an iterative algorithm to improve our upper bounds. Finally, we improve the classical Newton's method with a line-search technique to compute solutions of the continuous coupled algebraic Riccati equations. Newton's method for coupled Riccati equations is attributed to Salama and Gourishankar, but we construct the algorithm in a different way, using the Fréchet derivative, and we incorporate a line search. Our algorithm converges faster than the classical scheme, and numerical evidence is provided to illustrate its performance.
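For illustration, a simple fixed-point scheme for the continuous coupled algebraic Riccati equations, in which each mode's equation is solved as a standard CARE with the coupling term frozen; this is a generic sketch under assumed toy data, not the bound-refinement or Newton-with-line-search algorithms developed in the thesis.

```python
# Hedged sketch: a Gauss-Seidel style fixed-point iteration for the continuous
# coupled algebraic Riccati equations
#   A_i^T X_i + X_i A_i - X_i B_i R_i^{-1} B_i^T X_i + Q_i + sum_j pi_{ij} X_j = 0,
# solving one standard CARE per mode with the coupling term frozen.
import numpy as np
from scipy.linalg import solve_continuous_are

def solve_coupled_care(A, B, Q, R, Pi, iters=50, tol=1e-10):
    """A, B, Q, R: lists of per-mode matrices; Pi: transition-rate (coupling) matrix."""
    N = len(A)
    n = A[0].shape[0]
    X = [np.zeros((n, n)) for _ in range(N)]
    for _ in range(iters):
        X_old = [Xi.copy() for Xi in X]
        for i in range(N):
            # Fold the diagonal coupling into the drift and the rest into Q.
            A_i = A[i] + 0.5 * Pi[i, i] * np.eye(n)
            Q_i = Q[i] + sum(Pi[i, j] * X[j] for j in range(N) if j != i)
            X[i] = solve_continuous_are(A_i, B[i], Q_i, R[i])
        if max(np.linalg.norm(X[i] - X_old[i]) for i in range(N)) < tol:
            break
    return X

# Toy two-mode example (assumed data, for illustration only).
A = [np.array([[0.0, 1.0], [-1.0, -2.0]]), np.array([[0.0, 1.0], [-2.0, -1.0]])]
B = [np.eye(2), np.eye(2)]
Q = [np.eye(2), 2 * np.eye(2)]
R = [np.eye(2), np.eye(2)]
Pi = np.array([[-1.0, 1.0], [1.0, -1.0]])
X = solve_coupled_care(A, B, Q, R, Pi)
```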
3

Relay Selection and Resource Allocation in One-Way and Two-Way Cognitive Relay Networks

Alsharoa, Ahmad M. 08 May 2013 (has links)
In this work, the problem of relay selection and resource allocation in one-way and two-way cognitive relay networks using half-duplex channels with different relaying protocols is investigated. Optimization problems are formulated, for both single and multiple relay selection, that maximize the sum rate of the secondary network without degrading the quality of service of the primary network, by respecting a tolerated interference threshold. Single relay selection and optimal power allocation for two-way relaying cognitive radio networks are studied using the decode-and-forward and amplify-and-forward protocols, with dual decomposition and subgradient methods used to find the optimal power allocation. In the two-way relaying technique, the exchange of two different messages between two transceivers takes place in two time slots: in the first slot the transceivers transmit their signals simultaneously to the relay, and in the second slot the relay broadcasts its signal to the terminals. This improves both spectral and energy efficiency compared with one-way relaying. As an extension, multiple relay selection for both one-way and two-way relaying under a cognitive radio scenario using amplify-and-forward is discussed. A strong optimization tool based on genetic and iterative algorithms is employed to solve the formulated optimization problems for both single and multiple relay selection, where discrete relay power levels are considered. Simulation results show that the practical, low-complexity heuristic approaches achieve almost the same performance as the optimal relay selection schemes, with either discrete or continuous power distributions, while providing considerable savings in computational complexity.
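As a rough illustration of the single relay selection step, the sketch below caps each transmit power by the primary user's tolerated interference threshold and picks the relay maximizing an approximate two-way amplify-and-forward sum rate; the rate expression is the common two-hop AF approximation, and all channel gains and parameters are made up, so this is not the dual-decomposition or genetic-algorithm machinery of the thesis.

```python
# Hedged sketch of single relay selection for underlay cognitive two-way
# amplify-and-forward relaying with an interference constraint at the primary user.
import numpy as np

def capped_power(p_max, g_to_primary, i_th):
    """Largest power <= p_max keeping the interference g*p below the threshold i_th."""
    return min(p_max, i_th / g_to_primary)

def af_two_hop_snr(p_src, g1, p_relay, g2):
    """Approximate end-to-end SNR of a two-hop amplify-and-forward link."""
    return (p_src * g1 * p_relay * g2) / (p_src * g1 + p_relay * g2 + 1.0)

def select_relay(g_a, g_b, g_rp, g_ap, g_bp, p_max, i_th):
    """Pick the relay index maximizing the approximate two-way AF sum rate."""
    p_a = capped_power(p_max, g_ap, i_th)
    p_b = capped_power(p_max, g_bp, i_th)
    best_rate, best_r = -np.inf, None
    for r in range(len(g_a)):
        p_r = capped_power(p_max, g_rp[r], i_th)
        snr_ab = af_two_hop_snr(p_a, g_a[r], p_r, g_b[r])   # A -> relay r -> B
        snr_ba = af_two_hop_snr(p_b, g_b[r], p_r, g_a[r])   # B -> relay r -> A
        rate = 0.5 * (np.log2(1 + snr_ab) + np.log2(1 + snr_ba))
        if rate > best_rate:
            best_rate, best_r = rate, r
    return best_r, best_rate

rng = np.random.default_rng(0)
K = 8                                   # candidate relays (toy setting)
g_a, g_b = rng.exponential(1.0, K), rng.exponential(1.0, K)   # |h|^2 to the terminals
g_rp = rng.exponential(0.1, K)          # relay -> primary-receiver gains
relay, rate = select_relay(g_a, g_b, g_rp, g_ap=0.1, g_bp=0.1, p_max=10.0, i_th=1.0)
```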
4

Inferences for the Weibull parameters based on interval-censored data and its application

Huang, Jinn-Long 19 June 2000 (has links)
In this article, we make inferences for the Weibull parameters and propose two test statistics for comparing two Weibull distributions based on interval-censored data. The distributions of the two statistics are unknown and not easy to obtain, so a simulation study is necessary. An urn model for simulating interval-censored data was proposed by Lee (1999) to select random intervals; we propose a simulation procedure based on this urn model to approximate the quantiles of the two statistics. We demonstrate an example from an AIDS study to illustrate how the tests can be applied to the infection-time distributions of AIDS.
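A hedged sketch of the likelihood ingredient behind such inferences: maximum-likelihood estimation of Weibull parameters from interval-censored observations, where each event time is only known to fall in an interval (l_i, r_i]; the urn-model simulation and the two test statistics of the paper are not reproduced, and the data below are synthetic.

```python
# Interval-censored Weibull MLE: the log-likelihood sums log[F(r_i) - F(l_i)]
# with F the Weibull CDF; parameters are optimized on the log scale for positivity.
import numpy as np
from scipy.optimize import minimize

def weibull_cdf(t, shape, scale):
    return 1.0 - np.exp(-(t / scale) ** shape)

def neg_log_likelihood(log_params, left, right):
    shape, scale = np.exp(log_params)
    prob = weibull_cdf(right, shape, scale) - weibull_cdf(left, shape, scale)
    return -np.sum(np.log(np.clip(prob, 1e-300, None)))

# Toy interval-censored sample: true shape 1.5, scale 2.0 (assumed data).
rng = np.random.default_rng(1)
times = rng.weibull(1.5, 200) * 2.0
left = np.floor(times)                 # each time observed only to lie in (floor, floor+1]
right = left + 1.0

fit = minimize(neg_log_likelihood, x0=np.log([1.0, 1.0]), args=(left, right))
shape_hat, scale_hat = np.exp(fit.x)
```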
5

Modeling and Predicting Taxi Times at Airports

Chauhan, Arjun 29 October 2010 (has links)
This research provides methods for analyzing and estimating aircraft taxi times at airports, which are expected to be an important element in reducing taxiing delay and the consequent excess fuel consumption and environmental costs. The proposed model involves a set of regression equations for the taxi-out and taxi-in times at airports. The estimated results can be used to calculate nominal taxi times, which are essential measures for evaluating taxiing delays at airports. Given the outcomes of the regression model, an iterative algorithm is developed to predict taxi times. A case study at LGA shows that the proposed algorithm achieves higher accuracy than other algorithms in the existing literature.
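A minimal sketch of the kind of regression described (not the thesis's actual model): taxi-out time expressed as a linear function of surface-traffic covariates and fitted by least squares; the covariates, coefficients, and data are illustrative assumptions.

```python
# Toy taxi-out regression fitted by ordinary least squares.
import numpy as np

rng = np.random.default_rng(2)
n = 500
departure_queue = rng.poisson(8, n)          # aircraft ahead in the departure queue
arrivals_on_surface = rng.poisson(5, n)      # arriving aircraft taxiing in
runway_config = rng.integers(0, 2, n)        # 0/1 indicator for runway configuration
taxi_out = 8 + 1.2 * departure_queue + 0.4 * arrivals_on_surface \
             + 3.0 * runway_config + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), departure_queue, arrivals_on_surface, runway_config])
coef, *_ = np.linalg.lstsq(X, taxi_out, rcond=None)

# The intercept plays the role of a nominal (unimpeded) taxi-out time;
# predictions for new surface states come from X_new @ coef.
nominal_taxi_out = coef[0]
```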
6

Electrical Capacitance Volume Tomography Of High Contrast Dielectrics Using A Cuboid Geometry

Nurge, Mark 01 January 2007 (has links)
An Electrical Capacitance Volume Tomography system has been created for use with a new image-reconstruction algorithm capable of imaging high-contrast dielectric distributions. The electrode geometry consists of two parallel 4 x 4 planes of copper conductors connected through custom-built switch electronics to a commercially available capacitance-to-digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual-capacitance readings to reconstruct images of dielectric distributions; this dissertation presents a method of reconstructing images of high-contrast dielectric materials using only the self-capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined, and resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed with this method from both synthetic and real data acquired using an aluminum structure inserted at different positions within the sensing region. Comparisons with standard two-dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.
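To illustrate the binary-constraint idea in the abstract, the sketch below poses a linearized toy inversion c ≈ S x and recovers a two-valued distribution by relaxing to least squares and rounding; the sensitivity matrix, grid, and data are synthetic, and the dissertation's actual sensor model and reconstruction algorithm are not reproduced.

```python
# Relax-and-round sketch of a binary-constrained capacitance inversion on toy data.
import numpy as np

rng = np.random.default_rng(3)
n_measurements, n_voxels = 32, 24
S = rng.random((n_measurements, n_voxels))          # toy sensitivity matrix

x_true = (rng.random(n_voxels) > 0.7).astype(float) # two-valued (0/1) distribution
c = S @ x_true + rng.normal(0, 1e-3, n_measurements)

# Relax to a continuous least-squares problem, then round back to the binary set.
x_relaxed, *_ = np.linalg.lstsq(S, c, rcond=None)
x_binary = (x_relaxed > 0.5).astype(float)
```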
7

Optimisation perceptive de la restitution sonore multicanale par une analyse spatio-temporelle des premières réflexions / Perceptual optimization of multichannel sound reproduction through a spatio-temporal analysis of early reflections

Deprez, Romain 07 December 2012 (has links)
The goal of this Ph.D. thesis is to optimize the perceived quality of multichannel sound reproduction systems in the context of a domestic listening room. The research work presented follows two directions. The first deals with room effect, and more particularly with the physical and perceptual aspects of the first reflections within a room. These reflections are described in detail, and a psychoacoustic experiment was carried out to extend the available data on their perceptibility, i.e., their ability to alter the perception of the direct sound, whether in its timbral or spatial attributes. The results show how the threshold varies with the type of stimulus and with the spatial configuration of the direct sound and the reflection. For a given condition, the perceptibility threshold is described as a directivity function depending on the direction of incidence of the reflection. The second direction concerns methods for correcting the effect of the reproduction room. Classical digital methods are studied first; their main drawback is that they do not account for the specific role of the temporal and spatial properties of first reflections. A new correction method is therefore proposed. It uses an iterative algorithm derived from FISTA, modified to take the perceptibility of the reflections into account. The processing is carried out in a spatial sound representation in which the spatial information is analyzed on the basis of spherical harmonics.
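For context, a plain FISTA iteration for l1-regularized least squares, the algorithm family the thesis modifies; the perceptual weighting of reflections and the spherical-harmonic representation are not included, and A, b, and the regularization weight below are synthetic.

```python
# Plain FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1 on toy data.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, iters=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(120, 300))            # toy dictionary (e.g. delayed impulses)
x_true = np.zeros(300); x_true[[10, 55, 200]] = [1.0, -0.6, 0.3]
b = A @ x_true + 0.01 * rng.normal(size=120)
x_hat = fista(A, b, lam=0.1)
```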
8

Low-density Parity-Check decoding Algorithms / Low-density Parity-Check avkodare algoritm

Pirou, Florent January 2004 (has links)
Recently, low-density parity-check (LDPC) codes have attracted much attention because of their excellent error-correcting performance and highly parallelizable decoding scheme. However, the effective VLSI implementation of an LDPC decoder remains a big challenge and is a crucial issue in determining how well the benefits of LDPC codes can be exploited in real applications. In this master's thesis report, following a background on error-control coding, we describe low-density parity-check codes and their decoding algorithm, as well as the requirements and architectures of LDPC decoder implementations.
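As a concrete illustration of iterative LDPC decoding, the sketch below implements the simplest hard-decision bit-flipping decoder on a toy parity-check matrix; practical decoders described in such reports use soft-decision belief propagation or min-sum, and H and the channel model here are assumptions for illustration only.

```python
# Hard-decision bit-flipping decoder for a tiny parity-check code.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],          # toy parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def bit_flip_decode(received, H, max_iters=20):
    word = received.copy()
    for _ in range(max_iters):
        syndrome = (H @ word) % 2
        if not syndrome.any():
            break                            # all parity checks satisfied
        # Count, for each bit, how many failing checks it participates in.
        failures = H[syndrome == 1].sum(axis=0)
        word[failures == failures.max()] ^= 1   # flip the most-suspect bit(s)
    return word

codeword = np.zeros(6, dtype=int)            # the all-zero word is always a codeword
received = codeword.copy(); received[2] ^= 1  # single bit error on the channel
decoded = bit_flip_decode(received, H)
```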
9

Rate Distortion Theory for Causal Video Coding: Characterization, Computation Algorithm, Comparison, and Code Design

Zheng, Lin January 2012 (has links)
Due to the sheer volume of data involved, video coding is an important application of lossy source coding, and it has received wide industrial interest and support, as evidenced by the development and success of a series of video coding standards. All MPEG-series and H-series video coding standards proposed so far are based on a paradigm called predictive video coding, in which video source frames X_i, i = 1, 2, ..., N, are encoded frame by frame, and the encoder and decoder for each frame X_i enlist help only from all previously encoded frames S_j, j = 1, 2, ..., i-1. In this thesis, we look beyond all existing and proposed video coding standards and introduce a new coding paradigm called causal video coding, in which the encoder for each frame X_i can use all previous original frames X_j, j = 1, 2, ..., i-1, and all previous encoded frames S_j, while the corresponding decoder can use only the previous encoded frames. We consider all studies, comparisons, and designs of causal video coding from an information-theoretic point of view. Let R*_c(D_1, ..., D_N) (respectively, R*_p(D_1, ..., D_N)) denote the minimum total rate required to achieve a given distortion level D_1, ..., D_N > 0 in causal video coding (respectively, predictive video coding). A novel computation approach is proposed to analytically characterize, numerically compute, and compare the minimum total rate R*_c(D_1, ..., D_N) of causal video coding required to achieve a given distortion (quality) level D_1, ..., D_N > 0. Specifically, we first show that for jointly stationary and ergodic sources X_1, ..., X_N, R*_c(D_1, ..., D_N) is equal to the infimum over all n of the n-th order total rate-distortion function R_{c,n}(D_1, ..., D_N), where R_{c,n}(D_1, ..., D_N) itself is given by the minimum of an information quantity over a set of auxiliary random variables. We then present an iterative algorithm for computing R_{c,n}(D_1, ..., D_N) and demonstrate its convergence to the global minimum. The global convergence of the algorithm further enables us not only to establish a single-letter characterization of R*_c(D_1, ..., D_N) in a novel way when the N sources form an independent and identically distributed (IID) vector source, but also to demonstrate a somewhat surprising result (dubbed the more-and-less coding theorem): under some conditions on the source frames and distortion, the more frames that need to be encoded and transmitted, the less data actually has to be sent after encoding. With the help of the algorithm, it is also shown by example that R*_c(D_1, ..., D_N) is in general much smaller than the total rate offered by the traditional greedy coding method, in which each frame is encoded in a locally optimal manner based on all information available to its encoder. As a by-product, an extended Markov lemma is established for correlated ergodic sources. From an information-theoretic point of view, it is interesting to compare causal video coding with predictive video coding, on which all existing video coding standards are based. In this thesis, fixing N = 3, we first derive a single-letter characterization of R*_p(D_1, D_2, D_3) for an IID vector source (X_1, X_2, X_3) in which X_1 and X_2 are independent, and then demonstrate the existence of such X_1, X_2, X_3 for which R*_p(D_1, D_2, D_3) > R*_c(D_1, D_2, D_3) under some conditions on the source frames and distortion. This result makes causal video coding an attractive framework for future video coding systems and standards.
The design of causal video coding is also considered from an information-theoretic perspective by modeling each frame as a stationary information source. We first put forth a concept called causal scalar quantization and then propose an algorithm for designing optimal fixed-rate causal scalar quantizers for causal video coding that minimize the total distortion over all sources. Simulation results show that, in comparison with fixed-rate predictive scalar quantization, fixed-rate causal scalar quantization offers up to a 16% quality improvement (distortion reduction).
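For orientation, the classical single-source Blahut-Arimoto iteration for the rate-distortion function R(D) is sketched below; it is the kind of alternating-minimization building block that the thesis's algorithm for R*_c(D_1, ..., D_N) generalizes to multiple causally coded frames, and the source distribution and distortion matrix are toy assumptions.

```python
# Blahut-Arimoto iteration for a single discrete source and distortion matrix.
import numpy as np

def blahut_arimoto(p_x, dist, beta, iters=500):
    """Return a (rate, distortion) point on the R(D) curve for Lagrange multiplier beta."""
    q_y = np.full(dist.shape[1], 1.0 / dist.shape[1])       # output marginal
    for _ in range(iters):
        # Conditional q(y|x) minimizing rate + beta * distortion for the current marginal.
        q_y_given_x = q_y * np.exp(-beta * dist)
        q_y_given_x /= q_y_given_x.sum(axis=1, keepdims=True)
        q_y = p_x @ q_y_given_x                              # update the output marginal
    d = np.sum(p_x[:, None] * q_y_given_x * dist)
    r = np.sum(p_x[:, None] * q_y_given_x *
               np.log2(q_y_given_x / q_y[None, :]))
    return r, d

p_x = np.array([0.5, 0.5])                  # binary source
dist = 1.0 - np.eye(2)                      # Hamming distortion
rate, distortion = blahut_arimoto(p_x, dist, beta=3.0)
# For a Bernoulli(1/2) source, R(D) = 1 - h(D); the returned pair lies on that curve.
```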
