
Optimal parsing for dictionary-based text compression

Langiu, Alessio 03 April 2012 (has links) (PDF)
Dictionary-based data compression algorithms include a parsing strategy that transforms the input text into a sequence of dictionary phrases. For a given text, this process is generally not unique and, for compression purposes, it makes sense to find, among the possible parsings, the one that minimizes the final compression ratio. This is known as the parsing problem. An optimal parsing is a parsing strategy, or parsing algorithm, that solves this problem while accounting for all the constraints of a compression algorithm or of a homogeneous class of compression algorithms. The constraints of the compression algorithm include, for example, the dictionary itself, i.e. the dynamic set of available phrases, and how much a phrase weighs in the compressed text, i.e. the length of the codeword that represents the phrase, also called the encoding cost of a dictionary pointer. Over more than 30 years of dictionary-based text compression, a large number of algorithms, variants, and extensions have appeared. Yet while this approach to text compression has become one of the most appreciated and widely used in almost every storage and communication process, only a few optimal parsing algorithms have been presented. Many compression algorithms still lack an optimal parsing, or at least a proof of optimality. This is because there is no general model of the parsing problem covering all dictionary-based algorithms, and because existing optimal parsings work under overly restrictive assumptions. This work focuses on the parsing problem and presents both a general model for dictionary-based text compression, called the Dictionary-Symbolwise theory, and a general parsing algorithm that is proved optimal under some realistic assumptions. This algorithm, called Dictionary-Symbolwise Flexible Parsing, covers practically all dictionary-based text compression algorithms, as well as the large class of their variants in which the text is decomposed into a sequence of symbols and dictionary phrases. In this work we also consider the case of a free mixture of a dictionary compressor and a symbolwise compressor; Dictionary-Symbolwise Flexible Parsing covers this case as well. We thus obtain an optimal parsing algorithm for Dictionary-Symbolwise compression where the dictionary is prefix-closed and the encoding cost of dictionary pointers is variable. The symbolwise compressor is any classical symbolwise compressor running in linear time, as many common variable-length encoders do. Our algorithm works under the assumption that a special graph, described later, is well defined. Even when this condition is not met, the same method can be used to obtain near-optimal parsings. In detail, when the dictionary is LZ78-like, we show how to implement our algorithm in linear time. When the dictionary is LZ77-like, our algorithm can be implemented in O(n log n) time, where n is the length of the text. In both cases, the space complexity is O(n).
Although the main goal of this work is theoretical, experimental results are presented to highlight some practical effects of parsing optimality on compression performance, and more detailed experimental results are given in an appendix.
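The shortest-path view of the parsing problem underlying the abstract above can be sketched in a few lines: text positions are vertices, dictionary phrases are weighted edges, and an optimal parsing is a minimum-cost path from position 0 to the end of the text. This is an illustrative simplification, not the thesis's Dictionary-Symbolwise Flexible Parsing: it assumes a static dictionary with fixed codeword costs, whereas the thesis handles dynamic, prefix-closed dictionaries with variable pointer costs.

```python
# Sketch: optimal parsing as a shortest path over text positions.
# Hypothetical static dictionary with fixed per-phrase codeword costs.

def optimal_parse(text, dictionary):
    """dictionary: {phrase: codeword cost in bits}. Returns (phrases, total cost)."""
    n = len(text)
    INF = float("inf")
    cost = [0] + [INF] * n        # cost[i] = cheapest encoding of text[:i]
    back = [None] * (n + 1)       # back[i] = phrase ending at i on the best path
    for i in range(n):
        if cost[i] == INF:
            continue
        for phrase, c in dictionary.items():
            if text.startswith(phrase, i) and cost[i] + c < cost[i + len(phrase)]:
                cost[i + len(phrase)] = cost[i] + c
                back[i + len(phrase)] = phrase
    # Recover the parsing by walking back from the end of the text.
    phrases, i = [], n
    while i > 0:
        phrases.append(back[i])
        i -= len(back[i])
    return list(reversed(phrases)), cost[n]
```

For example, with phrases "a", "b" costing 2 bits and "ab", "ba" costing 3 bits, the text "abab" is best parsed as two "ab" phrases (6 bits) rather than four single symbols (8 bits).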

Measurability Aspects of the Compactness Theorem for Sample Compression Schemes

Kalajdzievski, Damjan 31 July 2012 (has links)
In 1998, Ben-David and Litman proved that a concept space has a sample compression scheme of size $d$ if and only if every finite subspace has a sample compression scheme of size $d$. This compactness theorem does not guarantee measurability of the hypotheses of the resulting sample compression scheme; at the same time, measurability of the hypotheses is a necessary condition for learnability. In this thesis we discuss when a sample compression scheme, created from compression schemes on finite subspaces via the compactness theorem, has measurable hypotheses. We show that if $X$ is a standard Borel space with a $d$-maximum and universally separable concept class $\mathcal{C}$, then $(X, \mathcal{C})$ has a sample compression scheme of size $d$ with universally Borel measurable hypotheses. Additionally, we introduce a new variant of compression scheme called a copy sample compression scheme.
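As a toy illustration of the definitions (not taken from the thesis): the class of threshold concepts h_t(x) = 1 iff x >= t on the real line admits a sample compression scheme of size 1. Compression keeps the smallest positively labeled example (or the largest example when none is positive); reconstruction produces a hypothesis consistent with the whole sample.

```python
# Toy sample compression scheme of size 1 for thresholds h_t(x) = [x >= t].
# Illustrative only; the thesis concerns measurability of such schemes
# on general standard Borel spaces.

def compress(sample):
    """sample: list of (x, label) pairs consistent with some threshold."""
    positives = [x for x, y in sample if y == 1]
    if positives:
        return (min(positives), 1)             # smallest positive point
    return (max(x for x, _ in sample), 0)      # largest (negative) point

def reconstruct(kept):
    """Rebuild a hypothesis from the single retained labeled point."""
    x0, y0 = kept
    if y0 == 1:
        return lambda x: 1 if x >= x0 else 0
    return lambda x: 1 if x > x0 else 0

sample = [(-1.0, 0), (0.5, 1), (2.0, 1)]
h = reconstruct(compress(sample))
assert all(h(x) == y for x, y in sample)       # consistent on the sample
```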

Alkane fluids confined and compressed by two smooth gold crystalline surfaces: pure liquids and mixtures

Merchan Alvarez, Lina Paola 17 January 2012 (has links)
Using grand canonical molecular dynamics, we studied the slow compression (0.01 m/s) of very thin liquid films made of equimolar mixtures of short and long alkane chains (hexane and hexadecane) and of branched and unbranched alkanes (phytane and hexadecane). Besides comparing how these mixtures behave under constant-speed compression, we compare their properties with the behavior and structure of the pure systems undergoing the same slow compression. To understand the arrangement of the molecules inside the confinement, we present segmental and molecular density profiles, and the average length and orientation of the molecules inside well-layered gaps. To observe the effects of the compression on the fluids, we present the number of confined molecules, the in-layer orientation, the solvation force, and the in-layer diffusion coefficient versus the thickness of the gap. We observe that pure hexadecane, although liquid at this temperature, starts presenting strong solid-like behavior when compressed to thicknesses under 3 nm, while pure hexane and pure phytane continue to behave liquid-like except at 1.3 nm, where they show some weak solid-like features. When hexadecane is mixed with the short, straight hexane, the mixture remains liquid down to 2.8 nm, at which point it behaves solid-like with an enhanced alignment of the long molecules not seen in pure hexadecane; but when hexadecane is mixed with the branched phytane, the system does not present the solid-like features seen when hexadecane is compressed pure.

Statistical data compression by optimal segmentation. Theory, algorithms and experimental results.

Steiner, Gottfried 09 1900 (has links) (PDF)
The work deals with statistical data compression, or data reduction, by a general class of classification methods. The data compression results in a representation of the data set by a partition or by some typical points (called prototypes). The optimization problems are related to minimum-variance partitions and principal point problems. A fixpoint method and an adaptive approach are applied to solve these problems. The work presents the theoretical background of the optimization problems and lists pseudo-codes for the numerical solution of the data compression. The main part of the work concentrates on practical questions in carrying out a data compression: determining a suitable number of representing points, choosing an objective function, establishing an adjacency structure, and improving the fixpoint algorithm. The performance of the proposed methods and algorithms is compared and evaluated experimentally, and numerous examples deepen the understanding of the applied methods. (author's abstract)
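The fixpoint idea for principal points can be sketched as a Lloyd-style alternation (an assumption on my part; the thesis's exact variant and its improvements may differ): assign each data point to its nearest prototype, move each prototype to the mean of its cell, and repeat until the prototypes stop changing.

```python
# Sketch of a fixpoint iteration for principal points / minimum-variance
# partitions on 1-D data. Lloyd-style alternation is assumed here; the
# thesis's improved fixpoint algorithm is not reproduced.
import random

def principal_points(data, k, iters=100, seed=0):
    rng = random.Random(seed)
    protos = rng.sample(data, k)               # initial prototypes
    for _ in range(iters):
        # Assignment step: partition the data by nearest prototype.
        cells = [[] for _ in range(k)]
        for x in data:
            j = min(range(k), key=lambda j: (x - protos[j]) ** 2)
            cells[j].append(x)
        # Update step: move each prototype to the mean of its cell.
        new = [sum(c) / len(c) if c else protos[j]
               for j, c in enumerate(cells)]
        if new == protos:                      # fixpoint reached
            break
        protos = new
    return sorted(protos)
```

On two well-separated clusters, the iteration converges to the cluster means, which minimize the within-cell variance.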

Error resilient video coding for wireless applications

Jung, Kyunghun 01 December 2003 (has links)
No description available.

Video analysis and abstraction in the compressed domain

Lee, Sangkeun 01 December 2003 (has links)
No description available.

Efficient image compression system using a CMOS transform imager

Lee, Jungwon 12 November 2009 (has links)
This research focuses on implementing an efficient image compression system, one of the many potential applications of a transform imager system. The study includes implementing the image compression system using a transform imager, developing a novel image compression algorithm for the system, and improving the system's performance through efficient encoding and decoding algorithms for vector quantization. A transform imaging system is implemented using a transform imager, and the baseline JPEG compression algorithm is implemented and tested to verify the functionality and performance of the transform imager system. The computational reduction in digital processing is investigated from two perspectives, algorithmic and implementation. Algorithmically, a novel wavelet-based embedded image compression algorithm using dynamic index reordering vector quantization (DIRVQ) is proposed for the system. DIRVQ enables the proposed algorithm to outperform both the embedded zero-tree wavelet (EZW) algorithm and the successive approximation vector quantization (SAVQ) algorithm. However, because DIRVQ carries intensive computational complexity, additional focus is placed on its efficient implementation, and a highly efficient implementation is achieved without compromising performance.
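For background, plain vector quantization — the operation that DIRVQ builds on — maps each image block to the index of its nearest codebook vector; the decoder simply looks the vectors back up. This is a generic sketch only; the dynamic index reordering step is specific to the thesis and not reproduced here.

```python
# Background sketch of plain vector quantization (VQ) encoding/decoding.
# Blocks and codewords are equal-length tuples of pixel values.

def vq_encode(blocks, codebook):
    """Map each block to the index of its nearest codeword (squared error)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: dist2(b, codebook[i]))
            for b in blocks]

def vq_decode(indices, codebook):
    """Reconstruct blocks by codebook lookup (lossy: nearest codeword only)."""
    return [codebook[i] for i in indices]
```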

Effects of image compression on data interpretation for telepathology

Williams, Saunya Michelle 25 August 2011 (has links)
When geographical distance poses a barrier, telepathology offers pathologists the opportunity to replicate their normal activities through an alternative means of practice. Rapid technological progress has greatly increased the appeal of telepathology and its use in multiple domains. Telepathology systems help provide teleconsultation services to remote locations, improve workload distribution in clinical environments, support quality assurance, and enhance educational programs. While telepathology is attractive to many potential users, the resource requirements for digitizing microscopic specimens have hindered widespread adoption, and image compression is therefore critical to advancing the pervasiveness of digital images in pathology. In this research, we characterize two different methods for assessing compression of pathology images. In the first, image quality is judged by humans and is completely subjective in terms of interpretation. In the second, machine-based image analysis provides objective results; these objective outcomes may also be used to help confirm tumor classification. With these two methods in mind, the purpose of this dissertation is to quantify the effects of image compression on data interpretation as seen by human experts and by a computerized algorithm for use in telepathology.
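As an illustration of machine-based, objective assessment (not necessarily the dissertation's metric), peak signal-to-noise ratio (PSNR) is a standard way to quantify the degradation a lossy codec introduces relative to the original image.

```python
# Illustration: PSNR, a standard objective metric for lossy-compression
# degradation. Images are flat sequences of 8-bit pixel values.
import math

def psnr(original, compressed, max_val=255):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)
```

Higher PSNR means less distortion; a uniform error of one gray level, for instance, yields roughly 48 dB for 8-bit images.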

Advanced wavelet image and video coding strategies for multimedia communications

Vass, Jozsef January 2000 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 2000. / Typescript. Vita. Includes bibliographical references (leaves 202-221). Also available on the Internet.

Behaviour of unconfined cemented materials under dynamic loading.

Matheba, Mokgele Johannes. January 2013 (has links)
M. Tech. Engineering: Civil. / Aims to investigate the response of cement-stabilised sub-base layers to dynamic loading by evaluating the changes in stiffness at known strain levels, and to compare the stiffness obtained under dynamic loads with that derived from the Unconfined Compressive Strength (UCS) test.
