161

Testing for delay defects utilizing test data compression techniques

Putman, Richard Dean, 1970- 29 August 2008 (has links)
As technology shrinks, new types of defects are being discovered and new fault models are being created for those defects. Transition delay and path delay fault models are two such models, but they still fall short in that they are unable to obtain high test coverage of smaller delay defects; these defects can cause functional behavior to fail and also indicate potential reliability issues. The first part of this dissertation addresses these problems by presenting an enhanced timing-based delay fault testing technique that incorporates standard delay ATPG along with timing information gathered from standard static timing analysis. Utilizing delay fault patterns typically increases the test data volume by 3-5X compared to stuck-at patterns. Combined with the increase in test data volume associated with the growth in gate count that typically accompanies the miniaturization of technology, this adds up to a very large increase in test data volume that directly affects test time and thus manufacturing cost. The second part of this dissertation presents a technique for improving test compression and reducing test data volume by using multiple expansion ratios, determining the configuration of the scan chains for each expansion ratio with a dependency analysis procedure that accounts for structural dependencies as well as free-variable dependencies to improve the probability of detecting faults. Finally, this dissertation addresses the problem of unknown values (X's) in the output response data corrupting the data and degrading the performance of the output response compactor, and thus the overall amount of test compression. Four techniques are presented that focus on handling response data with large percentages of X's. The first uses an X-canceling MISR architecture based on deterministically observing scan cells, and the second is a hybrid approach that combines a simple X-masking scheme with the X-canceling MISR for further gains in test compression. The third and fourth techniques revolve around reiterative LFSR X-masking, which takes advantage of LFSR-encoded masks that can be reused for multiple scan slices in novel ways. / text
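As a hedged illustration of the general X-masking idea referenced above (not the dissertation's hybrid or LFSR-based schemes), the sketch below gates unknown (X) scan-cell positions to a known constant before a simple parity (XOR) compactor so that X's cannot corrupt the compacted output. The single-bit parity compactor, the mask representation, and the function name are illustrative assumptions.

```python
import numpy as np

def masked_parity_compact(scan_slice, x_mask):
    """Gate X positions to 0, then compact the slice with a single XOR (parity) tree.

    scan_slice: 0/1 response bits captured across the scan chains in one shift cycle.
    x_mask:     1 where simulation predicts an unknown (X) value; those bits are
                forced to a known constant so they never reach the compactor.
    """
    scan_slice = np.asarray(scan_slice)
    x_mask = np.asarray(x_mask)
    gated = np.where(x_mask == 1, 0, scan_slice)
    return int(gated.sum() % 2)  # parity of the observable (non-X) bits

# Example: the bit from chain 2 is unknown; masking it makes the signature bit
# deterministic regardless of what value the X takes on silicon.
slice_bits = [1, 0, 1, 1]
x_mask     = [0, 0, 1, 0]
print(masked_parity_compact(slice_bits, x_mask))  # 0, independent of the X value
```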
162

An Autonomous Machine Learning Approach for Global Terrorist Recognition

Hill, Jerry L., Mora, Randall P. 10 1900 (has links)
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California / A major intelligence challenge we face in today's national security environment is the threat of terrorist attack against our national assets, especially our citizens. This paper addresses global reconnaissance that incorporates an autonomous Intelligent Agent/Data Fusion solution for recognizing the potential risk of terrorist attack. The approach identifies and reports imminent persona-oriented terrorist threats based on data reduction/compression of a large volume of low-latency data, possibly from hundreds or even thousands of data points.
163

Development of a new image compression technique using a grid smoothing technique.

Bashala, Jenny Mwilambwe. January 2013 (has links)
M. Tech. Electrical Engineering. / Aims to implement a lossy image compression scheme that uses a graph-based approach. On the one hand, this new method should reach high compression rates with good visual quality; on the other hand, it raises the following sub-problems: efficient classification of image data using bilateral mesh filtering; transformation of the image into a graph with grid smoothing; reduction of the graph by means of mesh decimation techniques; reconstruction of the reduced graph into an image; and quality analysis of the reconstructed images.
164

Real-time scheduling techniques with QoS support and their applications in packet video transmission

Tsoi, Yiu-lun, Kelvin., 蔡耀倫. January 1999 (has links)
published_or_final_version / Electrical and Electronic Engineering / Master / Master of Philosophy
165

Data Compression for Helioseismology

Löptien, Björn 29 July 2015 (has links)
Efficient data compression will play an important role for several upcoming and planned space missions that will carry out helioseismology, such as Solar Orbiter. Solar Orbiter is the next mission to include helioseismology and is scheduled for launch in October 2018. The main characteristic of Solar Orbiter is its orbit. The spacecraft's orbit will be inclined relative to the ecliptic, so that the spacecraft will reach solar latitudes of up to 33 degrees. This will make it possible for the first time to study the poles of the Sun using local helioseismology. In addition, combined observations from Solar Orbiter and another instrument can be used to probe the deep layers of the Sun by means of stereoscopic helioseismology. The maps of Doppler velocity and continuum intensity required for helioseismology will be provided by the Polarimetric and Helioseismic Imager (PHI). Major obstacles for helioseismology with Solar Orbiter are the low telemetry rate and the (probably) short observing times. Moreover, studying the poles of the Sun requires observations close to the solar limb, even from Solar Orbiter's inclined orbit. This can lead to systematic errors. In this thesis, I give a first assessment of how strongly helioseismology is affected by lossy data compression. My focus is on the Solar Orbiter mission, but the results I obtain are also transferable to other planned missions. First, I tested the suitability of the PHI instrument for helioseismology using synthetic data based on simulations of near-surface solar convection and a model of PHI. I generated a six-hour time series of synthetic data with the same properties as the data expected from PHI, focusing on the influence of the point spread function, spacecraft jitter, and photon noise. The power spectrum of the solar oscillations derived from these data suggests that PHI will be suitable for helioseismology. Because of Solar Orbiter's low telemetry rate, the data obtained by PHI for helioseismology must be compressed heavily. I tested the influence of compression using data from the Helioseismic and Magnetic Imager (HMI), an instrument on board the Solar Dynamics Observatory (SDO), which was launched in 2010. HMI produces maps of the continuum intensity, the Doppler velocity, and the full magnetic field vector for the entire Earth-facing hemisphere of the Sun at high cadence. Using Doppler velocity maps recorded by HMI, I was able to show that the signal-to-noise ratio of supergranulation in time-distance helioseismology is not strongly affected by data compression. I also demonstrated that the accuracy and precision of measurements of solar rotation by local correlation tracking of granulation are not significantly degraded by lossy data compression. These results indicate that Solar Orbiter's low telemetry rate does not necessarily have to be a major obstacle for helioseismology.
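As a generic, hedged sketch of one basic lossy-compression building block relevant in this context (coarse uniform quantization of a Dopplergram before downlink); the bit depth, reconstruction scheme, and toy data are illustrative assumptions, not PHI's actual on-board pipeline.

```python
import numpy as np

def quantize_dopplergram(v, n_bits=5):
    """Uniformly quantize a velocity map to 2**n_bits levels and reconstruct it.

    Returns the lossy reconstruction; the integer codes are what would actually
    be downlinked, typically after an additional entropy-coding step.
    """
    vmin, vmax = float(v.min()), float(v.max())
    levels = 2 ** n_bits
    step = (vmax - vmin) / (levels - 1)
    codes = np.round((v - vmin) / step)      # integers in [0, levels - 1]
    return codes * step + vmin               # reconstructed velocities

# Toy example: the compression error stays below half a quantization step.
rng = np.random.default_rng(1)
dopplergram = rng.normal(0.0, 400.0, size=(128, 128))  # m/s, synthetic signal
recon = quantize_dopplergram(dopplergram, n_bits=5)
print(np.abs(recon - dopplergram).max())                # <= step / 2
```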
166

Dvejetainės informacijos kodavimo naudojant bazinius skaidinius analizė: teoriniai ir praktiniai aspektai / Compression of binary data using identifying decomposition sequences: theoretical and practical aspects

Smolinskas, Mindaugas 13 January 2005 (has links)
A new approach to the compression of finite binary data, designed to lower computational complexity and based on the application of the "exclusive-or" operation, is presented in the paper. The new concept of an identifying sequence, associated with a particular decomposition scheme, is introduced. A new heuristic algorithm for calculating the identifying sequences has been developed. Two robust algorithms – one for compressing and streaming the sequences (using a priori compiled decomposition tables) and one for real-time decoding of the compressed streams – were proposed and implemented. The experimental results confirmed that the developed approach achieved a data compression effect on an ordinary computer system.
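The identifying-sequence construction itself is not spelled out in the abstract; as a loosely related, hedged sketch of the underlying "exclusive-or against a precompiled table" idea, one can XOR each data block against the closest basis block from an a priori table so that the residual is sparse and cheap to encode, while decoding remains a single XOR. The table contents, block size, and selection criterion below are illustrative assumptions, not the thesis's algorithm.

```python
def xor_residual(block: bytes, basis: bytes) -> bytes:
    """XOR a block against a table entry; a good match yields a sparse residual."""
    return bytes(a ^ b for a, b in zip(block, basis))

def encode_block(block: bytes, table: list[bytes]) -> tuple[int, bytes]:
    """Pick the basis whose residual has the fewest set bits (hypothetical criterion)."""
    def weight(b: bytes) -> int:
        return sum(bin(x).count("1") for x in b)
    idx = min(range(len(table)), key=lambda i: weight(xor_residual(block, table[i])))
    return idx, xor_residual(block, table[idx])

def decode_block(idx: int, residual: bytes, table: list[bytes]) -> bytes:
    """Decoding is a single XOR, which keeps real-time decompression cheap."""
    return xor_residual(residual, table[idx])

# Round trip with a toy two-entry decomposition table.
table = [bytes([0xFF] * 4), bytes([0x0F] * 4)]
idx, res = encode_block(b"\xfe\xff\xfb\xff", table)
assert decode_block(idx, res, table) == b"\xfe\xff\xfb\xff"
```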
167

Compressed-domain processing of MPEG audio signals

Lanciani, Christopher A. 06 1900 (has links)
No description available.
168

Multivariate permutation tests for the k-sample problem with clustered data

Rahnenführer, Jörg January 1999 (has links) (PDF)
The present paper deals with the choice of clustering algorithms before treating a k-sample problem. We investigate multivariate data sets that are quantized by algorithms that define partitions by maximal support planes (MSP) of a convex function. These algorithms belong to a wide class containing as special cases both the well-known k-means algorithm and the Kohonen (1985) algorithm, and they have been thoroughly investigated by Pötzelberger and Strasser (1999). For computing the test statistics for the k-sample problem, we replace the data points by their conditional expectations with respect to the MSP partition. We present Monte Carlo simulations of power functions of different tests for the k-sample problem, where the tests are carried out as multivariate permutation tests to ensure that they hold the nominal level. The results presented show that there seems to be a vital and decisive connection between the optimal choice of the clustering algorithm and the tails of the probability distribution of the data. Especially for distributions with heavy tails, such as the exponential distribution, the performance of tests based on a quadratic convex function with k-means-type partitions breaks down completely. (author's abstract) / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
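A minimal, hedged sketch of the overall procedure described above (quantize the pooled data with a k-means-type partition, replace each point by its cluster centroid, then run a k-sample permutation test on a quadratic statistic). The specific convex function, test statistic, and cluster count are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def quantized_permutation_test(groups, n_clusters=4, n_perm=999, seed=0):
    """k-sample permutation test on data quantized by a k-means-type partition.

    groups: list of (n_i, d) arrays, one array per sample. Each point is replaced
    by the centroid of its cluster (its conditional expectation given the
    partition) before the test statistic is computed.
    """
    rng = np.random.default_rng(seed)
    pooled = np.vstack(groups)
    labels = np.concatenate([np.full(len(g), i) for i, g in enumerate(groups)])

    # Plain Lloyd iterations stand in for the MSP-type quantizer.
    centroids = pooled[rng.choice(len(pooled), n_clusters, replace=False)]
    for _ in range(50):
        assign = np.argmin(((pooled[:, None, :] - centroids) ** 2).sum(-1), axis=1)
        centroids = np.array([pooled[assign == c].mean(0) if np.any(assign == c)
                              else centroids[c] for c in range(n_clusters)])
    quantized = centroids[assign]

    def statistic(lab):
        # Weighted between-group sum of squares of the quantized data.
        grand = quantized.mean(0)
        return sum(np.sum(lab == i) * np.sum((quantized[lab == i].mean(0) - grand) ** 2)
                   for i in range(len(groups)))

    observed = statistic(labels)
    perm_stats = [statistic(rng.permutation(labels)) for _ in range(n_perm)]
    p_value = (1 + sum(s >= observed for s in perm_stats)) / (n_perm + 1)
    return observed, p_value

# Toy usage: two shifted Gaussian samples in two dimensions.
rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=(60, 2))
b = rng.normal(0.7, 1.0, size=(60, 2))
print(quantized_permutation_test([a, b]))
```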
169

Progressive and Random Accessible Mesh Compression

Maglo, Adrien, Enam 10 July 2013 (has links) (PDF)
Previous work on progressive mesh compression focused on triangle meshes, but meshes containing other types of faces are commonly used. We therefore propose a new progressive mesh compression method that can efficiently compress meshes with arbitrary face degrees. Its compression performance is competitive with approaches dedicated to progressive triangle mesh compression. Progressive mesh compression is linked to mesh decimation because both applications generate levels of detail. Consequently, we propose a new, simple volume metric to drive polygon mesh decimation. We apply this metric to the progressive compression and the simplification of polygon meshes. We then show that the features offered by progressive mesh compression algorithms can be exploited for 3D adaptation by proposing a new framework for remote scientific visualization. Progressive random-accessible mesh compression schemes can better adapt 3D mesh data to the various constraints by taking into account regions of interest. We therefore propose two new progressive random-accessible algorithms. The first is based on an initial segmentation of the input model; each generated cluster is compressed independently with a progressive algorithm. The second is based on the hierarchical grouping of vertices obtained by decimation. The advantage of this second method is that it offers high random-accessibility granularity and generates one-piece decompressed meshes with smooth transitions between parts decompressed at low and high levels of detail. Experimental results demonstrate the compression and adaptation efficiency of both approaches.
170

Privacy Preserving Data Mining using Unrealized Data Sets: Scope Expansion and Data Compression

Fong, Pui Kuen 16 May 2013 (has links)
In previous research, the author developed a novel PPDM method – Data Unrealization – that preserves both the data privacy and the utility of discrete-value training samples. That method transforms original samples into unrealized ones and guarantees 100% accurate decision tree mining results. This dissertation extends that research scope and achieves the following accomplishments: (1) it expands the application of Data Unrealization to other data mining algorithms, (2) it introduces data compression methods that reduce storage requirements for unrealized training samples and increase data mining performance, and (3) it adds a second-level privacy protection that works perfectly with Data Unrealization. From an application perspective, this dissertation proves that statistical information (i.e., counts, probability, and information entropy) can be retrieved precisely from unrealized training samples, so that Data Unrealization is applicable to all counting-based, probability-based, and entropy-based data mining models with 100% accuracy. For data compression, this dissertation introduces a new number sequence – the J-Sequence – as a means to compress training samples through the J-Sampling process. J-Sampling converts the samples into a list of numbers with many replications. Applying run-length encoding to the resulting list can further compress the samples into a constant storage space regardless of the sample size. In this way, the storage requirement of the sample database becomes O(1) and the time complexity of a statistical database query becomes O(1). J-Sampling is used as an encryption approach for the unrealized samples already protected by Data Unrealization; meanwhile, data mining can be performed on these samples without decryption. In order to retain privacy preservation and to handle data compression internally, a column-oriented database management system is recommended for storing the encrypted samples. / Graduate / 0984 / fong_bee@hotmail.com
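The J-Sequence construction itself is not described in the abstract; the run-length-encoding step it relies on, however, is standard. A minimal sketch of that step follows (the generic encoder/decoder pair below illustrates why a heavily replicated list shrinks to a handful of pairs; it is not the dissertation's implementation).

```python
def run_length_encode(values):
    """Collapse consecutive repeats into (value, count) pairs."""
    encoded = []
    for v in values:
        if encoded and encoded[-1][0] == v:
            encoded[-1] = (v, encoded[-1][1] + 1)
        else:
            encoded.append((v, 1))
    return encoded

def run_length_decode(pairs):
    """Expand (value, count) pairs back into the original sequence."""
    return [v for v, count in pairs for _ in range(count)]

# A heavily replicated sequence compresses to three pairs regardless of its length.
samples = [7] * 1000 + [3] * 500 + [7] * 250
pairs = run_length_encode(samples)
assert run_length_decode(pairs) == samples
print(pairs)   # [(7, 1000), (3, 500), (7, 250)]
```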
