441

Low cost algorithms for image/video coding and rate control

Grecos, Christos January 2001 (has links)
No description available.
442

Automated Attacks on Compression-Based Classifiers

Burago, Igor 29 September 2014 (has links)
Methods of compression-based text classification have proven their usefulness for various applications. However, in some classification problems, such as spam filtering, a classifier confronts one or many adversaries willing to induce errors in the classifier's judgment on certain kinds of input. In this thesis, we consider the problem of finding thrifty strategies for character-based text modification that allow an adversary to revert the classifier's verdict on a given family of input texts. We propose three statistical statements of the problem that can be used by an attacker to obtain transformation models that are optimal in some sense. Evaluating these three techniques on a realistic spam corpus, we find that an adversary can transform a spam message (detectable as such by an entropy-based text classifier) into a legitimate one by generating and appending, in some cases, as few additional characters as 20% of the original message length.
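For readers unfamiliar with the technique being attacked, a minimal sketch of compression-based text classification (in Python, using zlib as the compressor) is shown below. This is a generic illustration, not the classifier studied in the thesis, and all names and corpora are placeholders.

```python
import zlib

def compressed_size(data: bytes) -> int:
    # Size of the zlib-compressed byte string (level 9 for consistency).
    return len(zlib.compress(data, 9))

def classify(message: str, spam_corpus: bytes, ham_corpus: bytes) -> str:
    """Label a message by which training corpus it compresses better with.

    The score approximates the extra cost of encoding the message given each
    corpus; the smaller increase wins.
    """
    msg = message.encode("utf-8")
    spam_cost = compressed_size(spam_corpus + msg) - compressed_size(spam_corpus)
    ham_cost = compressed_size(ham_corpus + msg) - compressed_size(ham_corpus)
    return "spam" if spam_cost < ham_cost else "ham"
```

An attacker in the setting above would append characters to the message until the spam-side cost exceeds the ham-side cost, which is exactly the kind of transformation the thesis formalises.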
443

Real-time loss-less data compression

Toufie, Moegamat Zahir January 2000 (has links)
Thesis (MTech (Information Technology))--Cape Technikon, Cape Town, 2000 / Data stored on disks generally contains significant redundancy. A mechanism or algorithm that recodes the data to lessen its size could possibly double or triple the effective data that could be stored on the media. One mechanism for doing this is data compression. Many compression algorithms currently exist, but each one has its own advantages as well as disadvantages. The objective of this study is to formulate a new compression algorithm that could be implemented in a real-time mode in any file system. The new compression algorithm should also execute as fast as possible, so as not to cause a lag in the file system's performance. This study focuses on binary data of any type, whereas previous articles such as (Huffman, 1952:1098), (Ziv & Lempel, 1977:337; 1978:530), (Storer & Szymanski, 1982:928) and (Welch, 1984:8) have placed particular emphasis on text compression in their discussions of compression algorithms for computer data. The compression algorithm formulated by this study is Lempel-Ziv-Toufie (LZT). LZT is basically an LZ77 (Ziv & Lempel, 1977:337) encoder with a buffer size equal to that of the data block of the file system in question. Unlike LZ77, however, LZT discards the sliding-buffer principle and uses each data block of the input stream as one big buffer on which compression is performed. LZT also handles the encoding of a match slightly differently from LZ77. An LZT match is encoded by two bit streams, the first specifying the position of the match and the other specifying the length of the match. This combination is commonly referred to as a <position, length> pair. To encode the position portion of the <position, length> pair, we make use of a sliding-scale method, which works as follows. Let the position in the input buffer of the current character to be compressed be held by inpos, where inpos is initially set to 3. It is then only possible for a match to occur at position 1 or 2. Hence the position of a match will never be greater than 2, and the position portion can therefore be encoded using only 1 bit. As inpos is incremented as each character is encoded, the match-position range increases and more bits are required to encode the match position. The reason why a decimal 2 can be encoded using only 1 bit is as follows. When decimal values are converted to binary, decimal 0 is binary 0, decimal 1 is binary 1, decimal 2 is binary 10, and so on. As a position of 0 will never be used, it is possible to develop a coding scheme in which a decimal value of 1 is represented by the binary value 0, and a decimal value of 2 by the binary value 1. Only 1 bit is therefore needed to encode match positions 1 and 2. In general, any decimal value n can be represented by the binary equivalent of (n - 1), and the number of bits needed to encode (n - 1) indicates the number of bits needed to encode the match position. The length portion of the <position, length> pair is encoded using a variable-length coding (VLC) approach. The VLC method performs its encoding using binary blocks. The first binary block is 3 bits long, where binary values 000 through 110 represent decimal values 1 through 7.
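The sliding-scale position encoding described above can be sketched in a few lines of Python. This is an illustration of the rule as stated in the abstract (a match position p is written as the binary value p - 1, using as many bits as the largest possible position requires), not code from the thesis; the function name is illustrative.

```python
def encode_position(pos: int, inpos: int) -> str:
    """Encode a match position on the sliding scale.

    pos   : 1-based match position, 1 <= pos < inpos
    inpos : position of the character currently being encoded

    A position p is written as the binary value (p - 1); the bit width grows
    with inpos, since the largest possible position is inpos - 1.
    """
    assert 1 <= pos < inpos
    nbits = max(1, (inpos - 2).bit_length())  # bits needed for the largest (p - 1)
    return format(pos - 1, f"0{nbits}b")

# With inpos == 3 only positions 1 and 2 exist, so a single bit suffices.
assert encode_position(1, 3) == "0"
assert encode_position(2, 3) == "1"
assert encode_position(3, 7) == "010"         # inpos == 7 needs 3 bits
```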
444

Effect of Photoacoustic Radar Chirp Parameters on Profilometric Information

Sun, Zuwen January 2018 (has links)
Photoacoustic imaging for biomedical application has attracted much research in recent years. To date, most of the work has focused on pulsed photoacoustics. Recent developments have seen the implementation of a radar pulse compression methodology into continuous wave photoacoustic modality, however very little theory has been developed in support of this approach. In this thesis, the one-dimensional theory of radar photoacoustics for pulse-compressed linear frequency modulated continuous sinusoidal laser photoacoustics is developed. The effect of the chirp parameters on the corresponding photoacoustic signal is investigated, and guidelines for choosing the chirp parameters for absorber profilometric detection are given based on the developed theory and simulations. Simulated results are also compared to available experimental results and show a good agreement.
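As a rough illustration of the radar-style pulse-compression idea referred to above, the following Python sketch generates a linear frequency modulated chirp and recovers a delayed, noisy echo by cross-correlation (matched filtering). The sample rate, chirp bounds and delay are arbitrary demo values, not parameters from the thesis.

```python
import numpy as np

# Illustrative linear-frequency-modulated (chirp) excitation.
fs = 20e6                      # sample rate (Hz), placeholder value
T = 1e-3                       # chirp duration (s)
f0, f1 = 1e6, 5e6              # chirp start / end frequencies (Hz)
t = np.arange(0, T, 1 / fs)
k = (f1 - f0) / T              # sweep rate
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

# A toy "received" signal: the chirp delayed by an absorber, buried in noise.
delay = int(0.2e-3 * fs)
rx = np.zeros(2 * len(chirp))
rx[delay:delay + len(chirp)] += 0.1 * chirp
rx += 0.05 * np.random.randn(len(rx))

# Pulse compression: cross-correlate with the transmitted chirp; the peak
# location recovers the delay, and the peak width (~1/bandwidth) sets the
# axial resolution, which is why the chirp parameters matter.
compressed = np.correlate(rx, chirp, mode="valid")
print("estimated delay (samples):", int(np.argmax(np.abs(compressed))))
```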
445

Combined speech and audio coding with bit rate and bandwidth scalability

Farrugia, Maria January 2001 (has links)
The past two decades have witnessed a rapid expansion within the telecommunications industry. This growth has been primarily motivated by the proliferation of digital communication systems and services which have become easily available through wired and wireless systems. Current research trends involve the integration of speech, audio, video and data channels into true multimedia communications over fixed and mobile networks. However, while the available bandwidth in wired terrestrial networks is relatively cheap and expandable, it becomes a limited resource in satellite and cellular-radio systems. In order to accommodate an ever growing number of users while maintaining high quality and low operational costs, it is necessary to maximise spectral efficiency. This has given rise to the development of high rate compression techniques with the ability to adapt to a broad class of input signals and to varying network resources. The research carried out in this thesis has mainly focused on the design of a single algorithm for compressing speech and audio signals sampled at different rates. The algorithms are based on the analysis-by-synthesis linear prediction coding (AbS-LPC) scheme, which has been widely employed in various speech coding standards. However, this bit rate reduction technique is based on the speech production mechanism and as such provides a rigid structure which presents a major limitation for audio coding. In order to improve the audio quality at low rates and to compensate for the errors incurred by the linear prediction during segments of high transitions, the algorithms employ an efficient pulse excitation structure which represents the short innovation sequences with sparse unit magnitude pulses. The scheme proposed for the compression of telephone bandwidth speech and audio signals at 12kb/s achieves similar quality to the G.728 coder at 16kb/s and higher audio quality than the GSM-EFR standard at 12.2kb/s. Wideband speech and audio coding schemes have been designed using both the fullband approach at bit rates of 17 and 19kb/s and also the split band technique at a bit rate of 20kb/s. The perceptual quality is comparable to the G.722 coder operating at 48kb/s. The subband decomposition technique is also adapted to code speech and audio signals sampled at 32kHz. The quality of the coder at 28kb/s is similar to the quality achieved by the MP3 coder at 32kb/s. The algorithm also provides bandwidth and bit rate scalability ranging from 12 to 64kb/s, making it ideal for deployment in rate-adaptive communication systems.
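For context, the sketch below shows the short-term linear-prediction analysis that AbS-LPC coders build on: predictor coefficients are estimated from a frame, and the prediction residual is the signal that a sparse, unit-magnitude pulse excitation would then approximate. It is a generic illustration with an arbitrary frame length and order, not the coder developed in the thesis.

```python
import numpy as np

def lpc_coefficients(frame: np.ndarray, order: int = 10) -> np.ndarray:
    """Short-term predictor coefficients via the autocorrelation (Yule-Walker) method."""
    r = np.array([np.dot(frame[: len(frame) - k], frame[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])  # Toeplitz
    return np.linalg.solve(R, r[1 : order + 1])            # a_1 ... a_p

def residual(frame: np.ndarray, a: np.ndarray) -> np.ndarray:
    """Prediction error e[n] = s[n] - sum_k a_k * s[n-k]; in an AbS-LPC coder this
    is the innovation that the sparse pulse excitation approximates."""
    p = len(a)
    pred = np.zeros_like(frame)
    for n in range(p, len(frame)):
        pred[n] = np.dot(a, frame[n - p:n][::-1])           # s[n-1] ... s[n-p]
    return frame - pred

# Toy usage on a synthetic voiced-like frame (values are arbitrary).
frame = np.sin(0.3 * np.arange(240)) + 0.01 * np.random.randn(240)
a = lpc_coefficients(frame)
e = residual(frame, a)
print("prediction gain (dB):", 10 * np.log10(np.sum(frame ** 2) / np.sum(e ** 2)))
```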
446

Knowledge based image sequence compression

Zhang, Kui January 1998 (has links)
In this thesis, the most commonly encountered video compression techniques and international coding standards are studied. The study leads to the idea of a reconfigurable codec that can adapt itself to the specific requirements of diverse applications so as to achieve improved performance. Firstly, we propose a multiple-layer affine motion compensated codec which acts as a basic building block of the reconfigurable multiple-tool video codec. A detailed investigation of the properties of the proposed codec is carried out. The experimental results reveal that the gain in coding efficiency from improved motion prediction and segmentation is proportional to the spatial complexity of the sequence being encoded. Secondly, a framework for the reconfigurable multiple-tool video codec is developed and its key parts are discussed in detail. Two important concepts, the virtual codec and the virtual tool, are introduced. A prototype of the proposed reconfigurable multiple-tool video codec is implemented. The codec structure and the constituent tools included in the prototype are extensively tested and evaluated to prove the concept. The results confirm that different applications require different codec configurations to achieve optimum performance. Thirdly, a knowledge-based tool selection system for the reconfigurable codec is proposed and developed. Human knowledge as well as sequence properties are taken into account in the tool selection procedure. It is shown that the proposed tool selection mechanism gives promising results. Finally, concluding remarks are offered and future research directions are suggested.
447

Monitoring of multicomponent pharmaceutical powders in a compression process: development of robust real-time monitoring tools

Marchao Palmeiro Durao, Pedro Filipe January 2017 (has links)
The way the pharmaceutical industry develops and manufactures its products has been changing in recent years. The regulatory environment with which it must comply has been pushing this change in order to endow the activity with state-of-the-art technology. The encouragement to use process analytical technology (PAT) to build quality in from the design stage (Quality by Design, QbD) is perhaps the most significant example of the new paradigm. Manufacturers are implementing this technology in new and existing products and benefiting from its advantages. To implement PAT in a process, many steps must be taken, from the feasibility study of the instruments through to regulatory approval.
This thesis describes the initial study (feasibility and model development), prior to any submission for authorization, of the use of PAT tools (near-infrared (NIR) spectroscopy, a red-green-blue (RGB) camera and light-induced fluorescence (LIF)) to monitor the compression process of a commercial multi-component blend. After the potential of these tools was assessed, quantitative partial least squares (PLS) models were developed to monitor components at concentrations as low as 0.1 w/w % with an R2 of 0.95. It was also shown that combining data from more than one tool benefited the accuracy of the model. The tools were also evaluated for their specificity using a full factorial design in which the models were built with simultaneous variations in the concentrations of some of the components. Even in this challenging case, the models retained acceptable accuracy, considering the acceptance criteria used for dietary products such as multivitamins. The work developed in this thesis contributed to the publication of 3 articles and 3 communications. Along with the proof of concept it provided, which broadens the opportunities for testing other probes, it also showed that it is possible to monitor the components in-line in the feed frame. In this latter case, all the tools were accurate enough to monitor at least one component, even when present at low concentration and as part of a multi-component blend. The industry can therefore use this knowledge to monitor the compression process more adequately, increasing the range of tools used for this purpose. Fundamental research also benefits, as phenomena such as segregation can be identified more accurately.
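As an illustration of the kind of PLS calibration model described above, the following Python sketch fits a PLS regression to synthetic "spectra" and reports the R2 on held-out samples. The data, component count and concentration range are invented for the example; the thesis's models were built on real NIR, RGB and LIF measurements.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for NIR spectra: each row is a spectrum, y is the
# concentration (w/w %) of one minor component of the blend.
rng = np.random.default_rng(0)
n_samples, n_wavelengths = 120, 400
y = rng.uniform(0.05, 0.5, n_samples)                    # e.g. 0.05-0.5 w/w %
pure_spectrum = np.exp(-((np.arange(n_wavelengths) - 150) / 40.0) ** 2)
X = np.outer(y, pure_spectrum) + 0.01 * rng.standard_normal((n_samples, n_wavelengths))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5)                      # number of latent variables
pls.fit(X_train, y_train)
print("R2 on held-out spectra:", r2_score(y_test, pls.predict(X_test).ravel()))
```

Combining data from several probes, as the thesis does, would amount to concatenating their measurement vectors column-wise into X before fitting.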
448

Study of the influence of synthetic fuel characteristics on advanced homogeneous and partially homogeneous diesel combustion

Ben Houidi, Moez 16 June 2014 (has links)
Advanced combustion strategies such as Homogeneous Charge Compression Ignition (HCCI) usually enable cleaner combustion with lower NOx and particulate matter emissions compared to conventional Diesel combustion. However, these strategies are difficult to implement due to difficulties related to combustion timing and burn rate control. Lately, various studies have focused on extending the operating range of advanced combustion modes with new technologies and on identifying the fuel properties that enable such combustion modes. This study focuses on the impact of fuel Cetane Number, volatility and chemical composition on ignition delay, heat release rate and pressure rise rate. The study is based on three complementary experiments. First, several synthetic fuels were tested on a research engine, with the analysis focused on the heat release rate. Secondly, experiments on a Rapid Compression Machine were performed to study auto-ignition phenomena under homogeneous conditions with surrogate fuels (blends of n-Heptane and Methyl-Cyclohexane). Analysis of the combustion regimes was supported by a study of the temperature field based on a toluene laser-induced fluorescence experiment in an inert (N2, CO2, Ar) mixture. Finally, the RCM was adapted to allow direct injection of fuel in order to study auto-ignition under less homogeneous conditions. The results showed the limits of conventional fuel properties for describing an adequate fuel formulation for the HCCI combustion mode. A new criterion based on the dependency of ignition delays on temperature and air-fuel ratio variations is proposed.
449

Buffering strategies and bandwidth renegotiation for MPEG video streams

Schonken, Nico January 1999 (has links)
This paper confirms the existence of short-term and long-term variation in the required bandwidth for MPEG video streams. We show how the use of a small amount of buffering and GOP grouping can significantly reduce the effect of the short-term variation. By introducing a number of bandwidth renegotiation techniques, which can be applied to MPEG video streams in general, we are able to reduce the effect of long-term variation. These techniques include those that need a priori knowledge of frame sizes as well as one that can renegotiate dynamically. A costing algorithm has also been introduced in order to compare the various proposals against each other.
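A minimal sketch of the a-priori, GOP-based smoothing idea is given below: averaging frame sizes over a GOP yields a piecewise-constant bandwidth schedule that a renegotiation scheme could request. The GOP length, frame rate and headroom factor are assumptions for the example, not values from the thesis.

```python
from statistics import mean

def gop_bandwidth_schedule(frame_sizes, gop_length=12, fps=25, headroom=1.05):
    """Per-GOP bandwidth requests from a priori frame sizes (in bits).

    Averaging over a GOP absorbs the short-term I/P/B frame-size variation;
    the returned list is a piecewise-constant schedule, one rate per GOP,
    with a small headroom factor.
    """
    schedule = []
    for start in range(0, len(frame_sizes), gop_length):
        gop = frame_sizes[start:start + gop_length]
        schedule.append(headroom * mean(gop) * fps)       # bits per second
    return schedule

# Toy example: large I frames every 12 frames, smaller P/B frames in between.
sizes = ([120_000] + [40_000] * 11) * 4
print(gop_bandwidth_schedule(sizes))
```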
450

Multiple transforms for video coding

Arrufat Batalla, Adrià 11 December 2015 (has links)
State-of-the-art video codecs use transforms to ensure a compact signal representation. The transform stage is where compression takes place; however, little variety is observed in the types of transforms used in standardised video coding schemes: often a single transform is considered, usually the Discrete Cosine Transform (DCT).
Recently, other transforms have started being considered in addition to the DCT. For instance, in the latest video coding standard, High Efficiency Video Coding (HEVC), 4x4 blocks can make use of the Discrete Sine Transform (DST) and, in addition, it is also possible not to transform them at all. This reveals an increasing interest in considering a plurality of transforms to achieve higher compression rates. This thesis focuses on extending HEVC through the use of multiple transforms. After a general introduction to video compression and transform coding, two transform designs are studied in detail: the Karhunen Loève Transform (KLT) and a rate-distortion optimised transform. These two methods are compared against each other by replacing the transforms in HEVC, and the experiment validates the appropriateness of the designs. A coding scheme that incorporates and boosts the use of multiple transforms is then introduced: several transforms are made available to the encoder, which chooses the one providing the best rate-distortion trade-off. A design method for building systems using multiple transforms is also described. With this coding scheme, significant bit-rate savings are achieved over HEVC, especially when many complex transforms are used. However, these improvements come at the expense of increased complexity in terms of encoding, decoding and storage requirements. As a result, simplifications are considered that limit the impact on bit-rate savings. A first approach is introduced in which incomplete transforms are used. Transforms of this kind use a single basis vector and are conceived to work as companions to the HEVC transforms. This technique is evaluated and provides significant complexity reductions over the previous system, although the bit-rate savings are modest. A systematic method is then designed that specifically determines the best trade-offs between the number of transforms and bit-rate savings. This method uses two different types of transform, based on separable orthogonal transforms and Discrete Trigonometric Transforms (DTTs) in particular. Several designs are presented, allowing for different complexity and bit-rate savings trade-offs. These systems reveal the interest of using multiple transforms for video coding.
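The two central ideas, deriving a KLT from training blocks and letting the encoder pick among several transforms by rate-distortion cost, can be sketched as follows. The quantiser step, lambda and rate proxy (a count of non-zero coefficients) are simplifications chosen for the example and do not reproduce the thesis's encoder.

```python
import numpy as np

def klt_from_blocks(blocks: np.ndarray) -> np.ndarray:
    """Karhunen-Loeve transform learned from training blocks (one block per row)."""
    cov = np.cov(blocks, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, ::-1].T            # rows = basis vectors, by decreasing variance

def rd_cost(block: np.ndarray, T: np.ndarray, q: float = 8.0, lam: float = 10.0) -> float:
    """Toy rate-distortion cost J = D + lambda * R with a crude rate proxy."""
    coeffs = T @ block
    quant = np.round(coeffs / q)
    rec = T.T @ (quant * q)              # T is orthogonal, so its inverse is T^T
    distortion = np.sum((block - rec) ** 2)
    rate = np.count_nonzero(quant)       # proxy: number of non-zero coefficients
    return distortion + lam * rate

def best_transform(block: np.ndarray, transforms) -> int:
    """Pick the transform with the lowest RD cost, as a multi-transform encoder would."""
    return int(np.argmin([rd_cost(block, T) for T in transforms]))

# Toy usage: an identity "no transform" competes with a KLT trained on smooth rows.
rng = np.random.default_rng(1)
train = np.cumsum(rng.standard_normal((500, 8)), axis=1)
transforms = [np.eye(8), klt_from_blocks(train)]
block = np.cumsum(rng.standard_normal(8))
print("chosen transform index:", best_transform(block, transforms))
```

In a real codec the chosen transform index would itself be signalled to the decoder, which is part of the storage and complexity overhead the thesis discusses.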
