11

Detekce komplexů QRS s využitím vlnkové transformace / A Wavelet-Based QRS-Complex Detection

Kocian, Ondřej January 2009 (has links)
This project investigates methods for constructing a wavelet-based QRS-complex detector. QRS-complex detection is important because it enables automatic heart-rate calculation and, in some cases, supports ECG signal compression. A QRS detector can be designed in many ways; this project describes and tests only a few variants. The designed detector decomposes the original ECG signal into several frequency bands using the wavelet transform. These bands are then converted to absolute values, and the positions of presumed QRS complexes are marked with the help of a threshold. The presumed positions from all bands are then compared with one another: if a position is confirmed in at least one neighbouring band, it is marked as a true QRS complex. To increase the detector's efficiency, two modifications were additionally considered. The first, using the envelope of the signal, had a rather negative effect on the detector's performance. The second, using a combined signal from three pseudo-orthogonal leads, conversely improved the detector's performance considerably. Finally, the designed detector and all its modifications were tested on signals from the CSE database (specifically on leads II, V2, and V6).
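The cross-band confirmation idea can be sketched in a few lines of Python. This is a minimal illustration, not the thesis's detector: the wavelet (db4), number of levels, robust threshold rule, and 40 ms tolerance are all assumptions, and the PyWavelets `swt` call requires the signal length to be divisible by 2**levels (pad if necessary).

```python
import numpy as np
import pywt

def detect_qrs(ecg, fs, wavelet="db4", levels=4, k=3.0, tol_ms=40):
    """Mark QRS candidates confirmed in at least one adjacent wavelet band."""
    coeffs = pywt.swt(ecg, wavelet, level=levels)   # time-aligned (cA, cD) pairs
    tol = int(tol_ms * fs / 1000)
    band_peaks = []
    for _, detail in coeffs:
        mag = np.abs(detail)
        thr = k * np.median(mag) / 0.6745           # robust noise estimate
        # local maxima of |detail| above the threshold are candidate positions
        idx = [i for i in range(1, len(mag) - 1)
               if mag[i] > thr and mag[i] >= mag[i - 1] and mag[i] >= mag[i + 1]]
        band_peaks.append(np.array(idx))
    confirmed = []
    for i, peaks in enumerate(band_peaks):
        adjacent = [band_peaks[j] for j in (i - 1, i + 1)
                    if 0 <= j < len(band_peaks)]
        for p in peaks:
            # keep a candidate only if a nearby band saw it within `tol` samples
            if any(a.size and np.min(np.abs(a - p)) <= tol for a in adjacent):
                confirmed.append(p)
    return np.unique(confirmed)
```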
12

Three Essays on Analytical Models to Improve Early Detection of Cancer

Gopalappa, Chaitra 04 May 2010 (has links)
Development of approaches for early detection of cancer requires a comprehensive understanding of the cellular functions that lead to cancer, as well as strategies for population-wide early detection. Cell functions are supported by proteins that are produced by active or expressed genes. Identifying cancer biomarkers, i.e., the genes that are expressed and the corresponding proteins present only in a cancerous state of the cell, can enable early detection of cancer and the development of drugs. There are approximately 30,000 genes in the human genome producing over 500,000 proteins, which poses significant analytical challenges in linking specific genes to proteins and subsequently to cancer. Along with developing diagnostic strategies, effective population-wide implementation of these strategies depends on the behavior of, and interaction between, the entities that comprise the cancer care system, such as patients, physicians, and insurance policies. Hence, effective early cancer detection requires models for a systemic study of cancer care. In this research, we develop models to address some of the analytical challenges in three distinct areas of early cancer detection, namely proteomics, genomics, and disease progression. The specific research topics (and models) are: 1) identification and quantification of proteins for obtaining biomarkers for early cancer detection (mixed integer nonlinear programming (MINLP) and a wavelet-based model), 2) denoising of gene values for use in the identification of biomarkers (a wavelet-based multiresolution denoising algorithm), and 3) estimation of the disease progression time of colorectal cancer for developing early cancer intervention strategies (a computational probability model and an agent-based simulation).
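As a small illustration of topic 2, a standard wavelet multiresolution denoising step (soft thresholding with the Donoho-Johnstone universal threshold) can be sketched as follows; the wavelet, level, and threshold rule are generic choices, not necessarily those of the dissertation.

```python
import numpy as np
import pywt

def wavelet_denoise(values, wavelet="sym8", level=3):
    """Denoise a 1-D array of noisy gene values by soft-thresholding details."""
    coeffs = pywt.wavedec(values, wavelet, level=level)
    # noise scale estimated from the finest detail band
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(values)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(values)]
```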
13

Statistical methods for variant discovery and functional genomic analysis using next-generation sequencing data

Tang, Man 03 January 2020 (has links)
The development of high-throughput next-generation sequencing (NGS) techniques produces massive amounts of data, allowing the identification of biomarkers for early disease diagnosis and driving the transformation of most disciplines in biology and medicine. Greater effort is needed to develop novel, powerful, and efficient tools for NGS data analysis. This dissertation focuses on modeling "omics" data in various NGS applications, with the primary goal of developing novel statistical methods to identify sequence variants, find transcription factor (TF) binding patterns, and decode the relationship between TF binding and gene expression levels. Accurate and reliable identification of sequence variants, including single nucleotide polymorphisms (SNPs) and insertion-deletion polymorphisms (INDELs), plays a fundamental role in NGS applications. Existing methods for calling these variants often make the simplifying assumption of positional independence and fail to leverage the dependence of genotypes at nearby loci induced by linkage disequilibrium. We propose vi-HMM, a hidden Markov model (HMM)-based method for calling SNPs and INDELs in mapped short-read data. Simulation experiments show that, under various sequencing depths, vi-HMM outperforms existing methods in terms of sensitivity and F1 score. When applied to human whole-genome sequencing data, vi-HMM demonstrates higher accuracy in calling SNPs and INDELs. One important NGS application is chromatin immunoprecipitation followed by sequencing (ChIP-seq), which characterizes protein-DNA relations through genome-wide mapping of TF binding sites. Multiple TFs binding to DNA sequences often show complex binding patterns, which indicate how TFs with similar functionalities work together to regulate the expression of target genes. To help uncover the transcriptional regulation mechanism, we propose a novel nonparametric Bayesian method to detect the clustering pattern of multiple TF bindings from ChIP-seq datasets. A simulation study demonstrates that our method performs best with regard to precision, recall, and F1 score, in comparison to traditional methods. We also apply the method to real data and observe several TF clusters that have been recognized previously in mouse embryonic stem cells. Recent advances in ChIP-seq and RNA sequencing (RNA-seq) technologies provide more reliable and accurate characterization of TF binding sites and gene expression measurements, which serve as a basis for studying the regulatory functions of TFs on gene expression. We propose a log-Gaussian Cox process with a wavelet-based functional model to quantify the relationship between TF binding site locations and gene expression levels. Through a simulation study, we demonstrate that our method performs well, especially with large sample sizes and small variance. It also shows a remarkable ability to distinguish real local features in the function estimates. / Doctor of Philosophy / The development of high-throughput next-generation sequencing (NGS) techniques produces massive amounts of data and drives innovation in biology and medicine. Greater effort is needed to develop novel, powerful, and efficient tools for NGS data analysis. In this dissertation, we focus on three problems closely related to NGS and its applications: (1) how to improve variant calling accuracy, (2) how to model transcription factor (TF) binding patterns, and (3) how to quantify the contribution of TF binding to gene expression.
We develop novel statistical methods to identify sequence variants, find TF binding patterns, and explore the relationship between TF binding and gene expression. We expect our findings to be helpful in promoting a better understanding of disease causality and facilitating the design of personalized treatments.
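The dynamic-programming idea behind an HMM-based caller such as vi-HMM can be illustrated with a generic Viterbi decoder; the genotype states, transition matrix (which would encode the linkage-disequilibrium-induced dependence between nearby loci), and emission log-likelihoods below are placeholders, not the dissertation's actual model.

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """log_emit: (T, S) per-locus log-likelihoods; returns best hidden path."""
    T, S = log_emit.shape
    dp = np.empty((T, S))
    back = np.zeros((T, S), dtype=int)
    dp[0] = log_init + log_emit[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_trans        # [prev, cur]
        back[t] = np.argmax(scores, axis=0)
        dp[t] = scores[back[t], np.arange(S)] + log_emit[t]
    path = np.empty(T, dtype=int)
    path[-1] = np.argmax(dp[-1])
    for t in range(T - 2, -1, -1):                     # trace back
        path[t] = back[t + 1, path[t + 1]]
    return path
```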
14

Compressed Domain Processing of MPEG Audio

Anantharaman, B 03 1900 (has links)
MPEG audio compression techniques significantly reduce the storage and transmission requirements for high-quality digital audio. However, compression complicates the processing of audio in many applications. If a compressed audio signal is to be processed, a direct method would be to decode the compressed signal, process the decoded signal, and re-encode it. This is computationally expensive due to the complexity of the MPEG filter bank. This thesis deals with the processing of MPEG compressed audio. The main contributions of this thesis are: a) extracting wavelet coefficients in the MPEG compressed domain; b) wavelet-based pitch extraction in the MPEG compressed domain; c) time-scale modification of MPEG audio; d) watermarking of MPEG audio. The research contributions start with a technique for calculating several levels of wavelet coefficients from the output of the MPEG analysis filter bank. The technique exploits the Toeplitz structure which arises when the MPEG and wavelet filter banks are represented in matrix form. The computational complexity of extracting several levels of wavelet coefficients after decoding the compressed signal is compared with that of extracting them directly from the output of the MPEG analysis filter bank; the proposed technique is found to be computationally efficient for extracting higher levels of wavelet coefficients. Extracting pitch in the compressed domain becomes essential when large multimedia databases need to be indexed; for example, one may want to listen to a particular speaker, or to male/female audio segments, in a multimedia document. For this application, pitch information is one of the most basic and important features required. Pitch is essentially the time interval between two successive glottal closures. Glottal closures are accompanied by sharp transients in the speech signal, which in turn give rise to local maxima in the wavelet coefficients. Pitch can therefore be calculated by finding the time interval between two successive maxima in the wavelet coefficients. It is shown that the computational complexity of extracting pitch in the compressed domain is less than 7% of that of uncompressed-domain processing. An algorithm for extracting pitch in the compressed domain is proposed, and its results for synthetic signals and for words uttered by male/female speakers are reported. In a number of important applications, one needs to modify an audio signal to render it more useful than the original. Typical applications include changing the time evolution of an audio signal (increasing or decreasing the rate of articulation of a speaker), or adapting a given audio sequence to a given video sequence. In this thesis, time-scale modifications are obtained in the subband domain, such that when the modified subband signals are given to the MPEG synthesis filter bank, the desired time-scale modification of the decoded signal is achieved. This is done by making use of sinusoidal modeling [1]. Here, each of the subband signals is modeled in terms of parameters such as amplitude, phase, and frequency, and is subsequently synthesised using these parameters with Ls = k * La, where Ls is the length of the synthesis window, k is the time-scale factor, and La is the length of the analysis window. As the PCM version of the time-scaled signal is not available, psychoacoustic-model-based bit allocation cannot be used; hence a new bit allocation is performed using a subband coding algorithm.
This method has been satisfactorily tested for time-scale expansion and compression of speech and music signals. The recent growth of multimedia systems has increased the need for protecting digital media, and digital watermarking has been proposed as a method for protecting digital documents. The watermark needs to be added to the signal in such a way that it does not cause audible distortions. However, the idea behind lossy MPEG encoders is to remove, or render insignificant, those portions of the signal which do not affect human hearing. This renders the watermark insignificant, and hence proving ownership of the signal becomes difficult once an audio signal is compressed. Existing compressed-domain methods merely change the bits or the scale factors according to a key. Though simple, these methods are not robust to attacks; further, they require the original signal to be available in the verification process. In this thesis we propose a watermarking method based on the spread spectrum technique which does not require the original signal during verification, and which is shown to be more robust than existing methods. In our method the watermark is spread across many subband samples. Here two factors need to be considered: a) the watermark should be embedded only in those subbands where the added noise will be inaudible; b) the watermark should be added to subbands which have sufficient bit allocation, so that the watermark does not become insignificant due to lack of bits. Embedding the watermark in the lower subbands would cause distortion, and embedding it in the higher subbands would prove futile, as the bit allocation there is practically zero. Considering all these factors, noise can be introduced to samples across many frames corresponding to subbands 4 to 8. In the verification process, it is sufficient to have the key/code and the possibly attacked signal. This method has been satisfactorily tested for robustness to scale-factor changes, LSB changes, and MPEG decoding and re-encoding.
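The blind spread-spectrum idea can be sketched as follows. This is an illustrative toy, not the thesis's algorithm: the key-seeded +/-1 chip sequence, the embedding strength alpha, and the detection threshold are assumptions, and a real embedder would restrict itself to suitably chosen subbands (e.g. 4 to 8 above) with adequate bit allocation.

```python
import numpy as np

def embed(subband, key, alpha=0.01):
    """Add a low-amplitude, key-derived +/-1 sequence to subband samples."""
    chips = np.random.default_rng(key).choice([-1.0, 1.0], size=subband.shape)
    return subband + alpha * chips

def verify(subband, key, thresh=0.005):
    """Blind detection: correlate with the same key-derived sequence."""
    chips = np.random.default_rng(key).choice([-1.0, 1.0], size=subband.shape)
    # correlation is ~alpha when the watermark is present, ~0 otherwise;
    # only the key and the (possibly attacked) signal are needed
    return float(np.mean(subband * chips)) > thresh
```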
15

Simulations numériques d’écoulements incompressibles interagissant avec un corps déformable : application à la nage des poissons / Numerical simulation of incompressible flows interacting with forced deformable bodies : Application to fish swimming

Ghaffari Dehkharghani, Seyed Amin 15 December 2014 (has links)
We present an efficient algorithm for the simulation of deformable bodies interacting with two-dimensional incompressible flows. The temporal and spatial discretizations of the Navier-Stokes equations in vorticity-streamfunction formulation are based on the classical fourth-order Runge-Kutta scheme and compact finite differences, respectively. Using a uniform Cartesian grid, we benefit from a new fourth-order direct solver for the Poisson equation, ensuring the incompressibility constraint down to machine zero over an optimal grid. A deformable body is introduced into the fluid flow by means of the volume penalization method. A Lagrangian structured grid with prescribed motion covers the deformable body, which interacts with the surrounding fluid through the hydrodynamic forces and torque calculated on the Eulerian reference grid. An efficient law for controlling the curvature of an anguilliform fish swimming toward a prescribed goal is proposed, based on the geometrically exact theory of nonlinear beams and quaternions. Validation of the developed method shows the efficiency and expected accuracy of the algorithm for fish-like swimming, as well as for a variety of fluid/solid interaction problems.
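The volume penalization step itself is simple enough to sketch. In the sketch below (the names and the implicit-Euler treatment are illustrative, not the thesis's exact scheme), chi is the body mask on the Cartesian grid, u_body the prescribed body velocity, and eta a small penalization parameter.

```python
import numpy as np

def penalize(u, u_body, chi, eta, dt):
    """Implicit Euler update of du/dt = -(chi / eta) * (u - u_body)."""
    # Inside the body (chi = 1) the velocity relaxes strongly toward the
    # prescribed body motion; in the fluid (chi = 0) nothing changes.
    return (u + (dt * chi / eta) * u_body) / (1.0 + dt * chi / eta)
```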
16

On improving the accuracy and reliability of GPS/INS-based direct sensor georeferencing

Yi, Yudan 24 August 2007 (has links)
No description available.
17

Use of Coherent Point Drift in computer vision applications

Saravi, Sara January 2013 (has links)
This thesis presents the novel use of Coherent Point Drift (CPD) in improving the robustness of a number of computer vision applications. The CPD approach includes two methods for registering two point sets, rigid and non-rigid, according to the transformation model used. The key characteristic of a rigid transformation is that the distance between points is preserved, which means it can be used in the presence of translation, rotation, and scaling. Non-rigid transformations, such as affine transforms, make it possible to register under non-uniform scaling and skew. The idea is to move one point set coherently to align with the second point set. The CPD method finds both the non-rigid transformation and the correspondence between the two point sets at the same time, without requiring an a priori declaration of the transformation model used. The first part of this thesis focuses on speaker identification in video conferencing. A real-time, audio-coupled, video-based approach is presented, which concentrates on the video analysis side rather than the audio analysis that is known to be prone to errors. CPD is effectively utilised for lip movement detection, and a temporal face detection approach is used to minimise false positives if the face detection algorithm fails to perform. The second part of the thesis focuses on multi-exposure and multi-focus image fusion with compensation for camera shake. The Scale Invariant Feature Transform (SIFT) is first used to detect keypoints in the images being fused. This point set is then reduced to remove outliers using RANSAC (RANdom SAmple Consensus), and finally the point sets are registered using CPD with non-rigid transformations. The registered images are then fused with a Contourlet-based image fusion algorithm that makes use of a novel alpha blending and filtering technique to minimise artefacts. The thesis evaluates the performance of the algorithm in comparison to a number of state-of-the-art approaches, including the key commercial products available in the market at present, showing significantly improved subjective quality in the fused images. The final part of the thesis presents a novel approach to Vehicle Make and Model Recognition (VMMR) in CCTV video footage. CPD is used to effectively remove the skew of detected vehicles, as CCTV cameras are not specifically configured for the VMMR task and may capture vehicles at different approach angles. A LESH (Local Energy Shape Histogram) feature-based approach is used for vehicle make and model recognition, with the novelty that temporal processing is used to improve reliability. A number of further algorithms are used to maximise the reliability of the final outcome. Experimental results are provided to show that the proposed system achieves an accuracy in excess of 95% when tested on real CCTV footage with no prior camera calibration.
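For readers who want to experiment, non-rigid CPD registration is available in the third-party pycpd package. The snippet below is a hedged usage sketch: the constructor arguments and the return value of register() may differ between pycpd versions, and the thesis's own implementation is not shown.

```python
import numpy as np
from pycpd import DeformableRegistration

source = np.random.rand(40, 2)                    # e.g. reduced SIFT keypoints
target = source + 0.05 * np.random.randn(40, 2)   # the set to align to

reg = DeformableRegistration(X=target, Y=source)  # X: target, Y: moving set
moved, params = reg.register()                    # `moved` ~ source aligned to target
```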
