161

A design of text-independent medium-size speaker recognition system

Zheng, Shun-De 13 September 2002 (has links)
This paper presents text-independent speaker identification results for medium-size speaker populations of up to 400 speakers, using a TV-speech database and the TIMIT database. A system based on Gaussian mixture speaker models is used for speaker identification, and experiments are conducted on both databases. The TV-database results show medium-size population performance under TV conditions. These are believed to be the first speaker identification experiments on the complete 400-speaker TV database and the largest text-independent speaker identification task reported to date. Identification accuracies of 94.5% on the TV database and 98.5% on the TIMIT database are achieved.
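The Gaussian-mixture identification scheme described above can be sketched as follows. This is an illustrative toy, not the paper's system: synthetic 2-D features stand in for real speech features (e.g. cepstral coefficients), and the speaker labels "spk_a"/"spk_b" are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical 2-D "feature vectors" for two enrolled speakers
train = {
    "spk_a": rng.normal(loc=0.0, scale=1.0, size=(200, 2)),
    "spk_b": rng.normal(loc=4.0, scale=1.0, size=(200, 2)),
}
# Train one GMM per enrolled speaker
models = {s: GaussianMixture(n_components=2, random_state=0).fit(x)
          for s, x in train.items()}

def identify(features):
    # Closed-set decision: return the enrolled speaker whose model
    # gives the highest average log-likelihood of the test features
    return max(models, key=lambda s: models[s].score(features))

test_utt = rng.normal(loc=4.0, scale=1.0, size=(50, 2))
print(identify(test_utt))  # spk_b
```

In a real system each speaker's GMM would be trained on spectral features extracted from enrollment speech, and the test utterance would be scored frame by frame in the same way.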
162

Efficient image compression system using a CMOS transform imager

Lee, Jungwon 12 November 2009 (has links)
This research focuses on implementing an efficient image compression system, one of the many potential applications of a transform imager system. The study includes implementing the image compression system using a transform imager, developing a novel image compression algorithm for the system, and improving the system's performance through efficient encoding and decoding algorithms for vector quantization. A transform imaging system is implemented using a transform imager, and the baseline JPEG compression algorithm is implemented and tested to verify the functionality and performance of the transform imager system. The computational reduction in digital processing is investigated from two perspectives, algorithmic and implementational. Algorithmically, a novel wavelet-based embedded image compression algorithm using dynamic index reordering vector quantization (DIRVQ) is proposed for the system. DIRVQ enables the proposed algorithm to outperform both the embedded zerotree wavelet (EZW) algorithm and the successive approximation vector quantization (SAVQ) algorithm. However, because DIRVQ carries intensive computational complexity, additional focus is placed on its efficient implementation, and a highly efficient implementation is achieved without compromising performance.
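Vector quantization, the operation underlying DIRVQ and SAVQ, maps each image block to the index of its nearest codeword; only the indices are transmitted. A minimal sketch with a hypothetical four-entry codebook of flattened 2x2 blocks (not the codebook design of this thesis):

```python
import numpy as np

# Toy codebook: four 2x2 "image blocks", flattened to length-4 vectors
codebook = np.array([
    [0, 0, 0, 0],          # flat dark
    [255, 255, 255, 255],  # flat bright
    [0, 255, 0, 255],      # vertical edge
    [255, 0, 255, 0],      # opposite vertical edge
], dtype=float)

def vq_encode(blocks):
    # Index of the nearest codeword (squared Euclidean distance)
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices):
    # Reconstruction is simply a codebook lookup
    return codebook[indices]

blocks = np.array([[10, 5, 0, 8], [250, 251, 249, 255]], dtype=float)
idx = vq_encode(blocks)
print(idx)  # [0 1]
```

The compression gain comes from sending a short index per block instead of the block's pixels; the decoder needs only the shared codebook.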
163

Source and Channel Coding for Audiovisual Communication Systems

Kim, Moo Yound January 2004 (has links)
<p>Topics in source and channel coding for audiovisual communication systems are studied. The goal of source coding is to represent a source at the lowest possible rate for a given distortion, or with the lowest possible distortion at a given rate. Channel coding adds redundancy to the quantized source information so that channel errors can be recovered. This thesis consists of four topics.</p><p>Firstly, based on high-rate theory, we propose Karhunen-Loève transform (KLT)-based classified vector quantization (VQ) to efficiently exploit the advantages of optimal VQ over scalar quantization (SQ). Compared with code-excited linear predictive (CELP) speech coding, KLT-based classified VQ provides not only higher SNR and perceptual quality but also lower computational complexity. Further improvement is obtained by companding.</p><p>Secondly, we compare various transmitter-based packet-loss recovery techniques from a rate-distortion viewpoint for real-time audiovisual communication over the Internet. We conclude that, in most circumstances, multiple description coding (MDC) is the best packet-loss recovery technique. If the channel conditions are known, channel-optimized MDC yields even better performance.</p><p>Thirdly, compared with resolution-constrained quantization (RCQ), entropy-constrained quantization (ECQ) produces fewer distortion outliers but is more sensitive to channel errors. We apply a generalized γ-th power distortion measure to design a new RCQ algorithm that has fewer distortion outliers and is more robust against source mismatch than conventional RCQ methods.</p><p>Finally, we consider designing quantizers that effectively remove irrelevancy as well as redundancy. Taking into account the just noticeable difference (JND) of human perception, we design a new RCQ method with improved performance in terms of mean distortion and distortion outliers. Based on high-rate theory, the optimal centroid density and its corresponding mean distortion are also accurately predicted.</p><p>The latter two quantization methods can be combined with practical source coding systems such as KLT-based classified VQ and with joint source-channel coding paradigms such as MDC.</p>
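Resolution-constrained quantizer design of the kind discussed above is classically done with Lloyd's algorithm, alternating a nearest-neighbor partition with a centroid update. A minimal scalar sketch for a Gaussian source follows; it illustrates plain squared-error RCQ, not the thesis's γ-th power or JND-weighted variants.

```python
import numpy as np

def lloyd_quantizer(samples, levels, iters=50):
    """Design a fixed-rate (resolution-constrained) scalar quantizer
    by Lloyd's algorithm: partition, then centroid update, repeated."""
    reps = np.linspace(samples.min(), samples.max(), levels)
    for _ in range(iters):
        # Nearest-neighbor partition of the training samples
        idx = np.abs(samples[:, None] - reps[None, :]).argmin(axis=1)
        # Centroid update; keep the old point if a cell is empty
        for k in range(levels):
            cell = samples[idx == k]
            if cell.size:
                reps[k] = cell.mean()
    return np.sort(reps)

rng = np.random.default_rng(1)
x = rng.normal(size=5000)
reps = lloyd_quantizer(x, levels=4)
# For a zero-mean Gaussian source the four representation points
# come out roughly symmetric about zero
print(reps)
```

Entropy-constrained design would instead minimize distortion subject to the average codeword length, which is what makes ECQ's index lengths, and hence its channel sensitivity, variable.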
164

Low-delay sensing and transmission in wireless sensor networks

Karlsson, Johannes Unknown Date (has links)
<p>With the increasing popularity and relevance of ad-hoc wireless sensor networks, cooperative transmission is more relevant than ever. In this thesis, we consider methods for optimizing cooperative transmission schemes in wireless sensor networks. We are particularly interested in communication schemes for delay-critical applications, such as networked control, and propose suitable joint source-channel coding candidates. We show that, in many cases, there are significant gains if the parts of the system are jointly optimized for the current source and channel. We focus especially on two means of cooperative transmission, namely distributed source coding and relaying.</p><p>In the distributed source coding case, we consider the transmission of correlated continuous sources and propose an algorithm for designing simple and energy-efficient sensor nodes. In particular, the cases of the binary symmetric channel and the additive white Gaussian noise channel are studied. The system works on a sample-by-sample basis, yielding very low encoding complexity at an insignificant delay. Due to the source correlation, the resulting quantizers reuse the same indices for several separated intervals in order to reduce the quantization distortion.</p><p>For the case of relaying, we study the transmission of a continuous Gaussian source and of a uniformly distributed discrete source. In both situations, we propose algorithms to design low-delay source-channel and relay mappings. We show that there can be significant power savings if the optimized systems are used instead of more traditional systems. By studying the structure of the optimized source-channel and relay mappings, we provide useful insights into how the optimized systems work. Interestingly, the design algorithm generally produces relay mappings with a structure that resembles Wyner-Ziv compression.</p>
165

Enhancing Gene Expression Signatures in Cancer Prediction Models: Understanding and Managing Classification Complexity

Kamath, Vidya P. 29 July 2010 (has links)
Cancer can develop through a series of genetic events in combination with external influential factors that alter the progression of the disease. Gene expression studies are designed to provide an enhanced understanding of the progression of cancer and to develop clinically relevant biomarkers of disease, prognosis, and response to treatment. One of the main aims of microarray gene expression analyses is to develop signatures that are highly predictive of specific biological states, such as the molecular stage of cancer. This dissertation analyzes the classification complexity inherent in gene expression studies, proposing both techniques for measuring complexity and algorithms for reducing this complexity. Classifier algorithms that generate predictive signatures of cancer models must generalize to independent datasets for successful translation to clinical practice. The predictive performance of classifier models is shown to depend on the inherent complexity of the gene expression data. Three specific quantitative measures of classification complexity are proposed, and one measure (f) is shown to correlate highly (R² = 0.82) with classifier accuracy in experimental data. Three quantization methods are proposed to enhance contrast in gene expression data and reduce classification complexity. The accuracy of cancer prognosis prediction is shown to improve using quantization in the two datasets studied: from 67% to 90% in lung cancer and from 56% to 68% in colorectal cancer. A corresponding reduction in classification complexity is also observed. A random-subspace-based multivariable feature selection approach using cost-sensitive analysis is proposed to model the underlying heterogeneous cancer biology and to address complexity due to multiple molecular pathways and the unbalanced distribution of samples into classes. The technique is shown to be more accurate than the univariate t-test method; classifier accuracy improves from 56% to 68% for colorectal cancer prognosis prediction. A published gene expression signature for predicting the radiosensitivity of tumor cells is augmented with clinical indicators to enhance modeling of the data and represent the underlying biology more closely. Statistical tests and experiments indicate that the improvement in model fit results from modeling the underlying biology rather than statistical over-fitting, thereby accommodating classification complexity through the use of additional variables.
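The contrast-enhancing quantization of expression values described above can be illustrated by a simple ternary scheme that maps each value to under-, baseline, or over-expressed. The thresholds here are hypothetical, chosen only for the example, and this is not one of the dissertation's three methods.

```python
import numpy as np

def quantize_expression(x, low=-0.5, high=0.5):
    """Ternary quantization of (e.g. log-ratio) expression values:
    -1 = under-expressed, 0 = baseline, 1 = over-expressed."""
    q = np.zeros_like(x, dtype=int)
    q[x <= low] = -1
    q[x >= high] = 1
    return q

expr = np.array([-1.2, -0.1, 0.0, 0.7, 2.3])
print(quantize_expression(expr))  # [-1  0  0  1  1]
```

Collapsing the continuous measurements into a few levels suppresses small, noise-driven differences between samples, which is one plausible mechanism for the reduction in classification complexity reported above.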
166

Admissible Unbiased Quantizations: Distributions with Linear Components

Pötzelberger, Klaus January 2000 (has links) (PDF)
We show that results on the characterization of admissible quantizations, which were derived in Pötzelberger [3], have to be modified when the probability distribution has linear components. Furthermore, we provide an example where the limit of optimal quantizations is not admissible. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
167

Admissible Unbiased Quantizations: Distributions without Linear Components

Pötzelberger, Klaus January 2000 (has links) (PDF)
Let P be a Borel probability measure on R^d. We characterize the maximal elements p ∈ M(P, m) with respect to the Bishop-De Leeuw order, where p ∈ M(P, m) if and only if p ≺ P and #supp(p) ≤ m. The results obtained have important consequences for statistical inference, such as tests of homogeneity or multivariate cluster analysis, and for the theory of comparison of experiments. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
168

Telemetry Network Intrusion Detection System

Maharjan, Nadim, Moazzemi, Paria 10 1900 (has links)
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California / Telemetry systems are migrating from links to networks. Security solutions that simply encrypt radio links no longer protect the network of Test Articles or the networks that support them. The use of network telemetry is expanding dramatically, and new risks and vulnerabilities are challenging issues for telemetry networks. Most of these vulnerabilities are silent in nature and cannot be detected with simple tools such as traffic monitoring. An Intrusion Detection System (IDS) is a security mechanism suited to telemetry networks that can help detect abnormal behavior in the network. Our previous research in network intrusion detection systems focused on "Password" attacks and "Syn" attacks. This paper presents a generalized method that can detect both. First, a K-means clustering algorithm is used for vector quantization of network traffic, which reduces the scope of the problem by reducing the entropy of the network data. A Hidden Markov Model (HMM) is then employed to further characterize and analyze the behavior of the network into states that can be labeled as normal, attack, or anomaly. Our experiments show that the IDS can discover and expose telemetry network vulnerabilities using vector quantization and the Hidden Markov Model, providing a more secure telemetry environment. We also show how these techniques can be generalized into a network intrusion detection system that can be deployed on telemetry networks.
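The K-means vector-quantization step described above, which turns continuous traffic measurements into the discrete symbol alphabet consumed by the HMM, can be sketched as follows. The two features and the synthetic traffic windows are illustrative assumptions, not the paper's feature set.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical 2-D traffic features per time window,
# e.g. (packets per second, distinct destination ports)
normal = rng.normal(loc=[10, 3], scale=1.0, size=(100, 2))
attack = rng.normal(loc=[200, 40], scale=5.0, size=(100, 2))
traffic = np.vstack([normal, attack])

# Vector-quantize the traffic into a small symbol alphabet;
# the symbol sequence would then be fed to an HMM for state labeling
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(traffic)
symbols = kmeans.predict(traffic)
```

With well-separated behaviors, each cluster index becomes a stable observation symbol, which is exactly what reduces the entropy of the data before the HMM stage.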
169

Αυτόματη αναγνώριση ομιλητή χρησιμοποιώντας μεθόδους ταυτοποίησης κλειστού συνόλου / Automatic speaker recognition using closed-set recognition methods

Κεραμεύς, Ηλίας 03 August 2009 (has links)
The goal of an automatic speaker recognition system is inextricably linked to extracting, characterizing, and recognizing information about a speaker's identity. Speaker recognition refers either to speaker identification or to speaker verification. Depending on the form of the decision it returns, an identification system can be characterized as open-set or closed-set. If, given an unknown voice sample, the system returns a deterministic decision on whether the sample belongs to a particular speaker or to an unknown speaker, it is an open-set identification system. If, instead, the system returns the most likely speaker, among those already enrolled in the database, from whom the voice sample originates, it is a closed-set system. Closed-set identification can further be characterized as text-dependent or text-independent, depending on whether the system knows the spoken phrase or can recognize the speaker from any phrase the speaker may utter. This work examines and implements automatic speaker recognition algorithms based on closed-set, text-independent identification systems. Specifically, algorithms based on vector quantization, stochastic models, and neural networks are implemented.
170

Optimal Multiresolution Quantization for Broadcast Channels with Random Index Assignment

Teng, Fei 06 August 2010 (has links)
Shannon's classical separation result holds only in the limit of infinite source code dimension and infinite channel code block length. In addition, Shannon theory does not address the design of good source codes when the probability of channel error is nonzero, which is inevitable for finite-length channel codes. Thus, for practical systems, a joint source and channel code design can improve performance, as well as complexity and delay, for finite-dimension source codes and finite-block-length channel codes. Consider a multicast system over a broadcast channel, where different end users typically have different capacities. To support such user or capacity diversity, it is desirable to encode the source to be broadcast into a scalable bit stream from which multiple resolutions of the source can be reconstructed progressively from left to right. Such a source coding technique is called multiresolution source coding. In wireless communications, joint source-channel coding (JSCC) has attracted wide attention due to its adaptivity to time-varying channels. However, there are few works on joint source-channel coding for network multicast, especially on optimal source coding over broadcast channels. In this work, we aim at designing and analyzing optimal multiresolution vector quantization (MRVQ) in conjunction with the broadcast channel over which the coded scalable bit stream is transmitted. By adopting random index assignment (RIA) to link MRVQ for the source with superposition coding for the broadcast channel, we establish a closed-form formula for the end-to-end distortion (EED) of a tandem system of MRVQ and a broadcast channel. From this formula we analyze the intrinsic structure of the EED and derive two necessary conditions for optimal multiresolution vector quantization over broadcast channels with random index assignment.
According to these two necessary conditions, we propose a greedy iterative algorithm for designing MRVQ jointly with the channel conditions; the algorithm depends on the channel only through several types of average channel error probabilities rather than complete knowledge of the channel. Experiments show that MRVQ designed by the proposed algorithm significantly outperforms conventional MRVQ designed without channel information. The closed-form formula for the weighted EED under RIA also makes the computational complexity incurred during performance analysis feasible. In comparison with MRVQ design for a fixed index assignment, the computational complexity of quantizer design is significantly reduced by using random index assignment. In addition, simulations indicate that our proposed algorithm is more robust against channel mismatch than MRVQ designed with a fixed index assignment, simply because it uses only average channel information. Therefore, we conclude that our proposed algorithm is well suited to wireless communications and to applications where complete knowledge of the channel is hard to obtain. Furthermore, we propose two novel algorithms for MRVQ over broadcast channels: one optimizes the two quantizers at the two layers alternately and iteratively, and the other applies under the constraint that each encoding cell is convex and contains its reconstruction point. Finally, we analyze the asymptotic performance of the weighted EED for the optimal joint MRVQ. The asymptotic result provides a theoretically achievable quantizer performance level and sheds light on the design of optimal MRVQ over broadcast channels from a different aspect.
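The layered idea behind multiresolution quantization, a coarse base layer progressively refined by later layers, can be illustrated with a two-layer uniform scalar sketch. The step sizes are arbitrary, and this toy ignores the vector aspect, the channel, and the index assignment that the thesis optimizes jointly.

```python
import numpy as np

def mr_quantize(x, coarse_step=1.0, fine_step=0.25):
    """Two-layer multiresolution quantization: a coarse base layer,
    plus a refinement layer that quantizes the base-layer residual."""
    base = np.round(x / coarse_step) * coarse_step
    refine = np.round((x - base) / fine_step) * fine_step
    return base, base + refine

x = np.array([0.3, 1.7, -2.2])
base, full = mr_quantize(x)
print(base)  # base-layer reconstruction: [0. 2. -2.]
print(full)  # refined reconstruction:   [0.25 1.75 -2.25]
```

A receiver that decodes only the base layer gets a coarse reconstruction; a receiver with more capacity additionally decodes the refinement and gets a strictly finer one, which is the progressive property the scalable bit stream provides.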
