
On structural studies of high-density potassium and sodium

McBride, Emma Elizabeth January 2014 (has links)
The alkali elements at ambient conditions are well described by the nearly-free-electron (NFE) model, yet show a remarkable departure from this “simple” behaviour with increasing pressure. Low-symmetry complex structures are observed in all of them, and anomalous melting has been observed in lithium (Li), sodium (Na), rubidium (Rb), and caesium (Cs). In this Thesis, static and dynamic compression techniques have been used to investigate the high-pressure, high-temperature behaviour of the alkali elements potassium (K) and Na. Utilising diamond anvil pressure cells and external resistive heating, both in air and in vacuum, the melting curve of K has been determined to 24 GPa and 750 K, and is found to be remarkably similar to that of Na, but strikingly different from that reported previously. Furthermore, there is some evidence to suggest that a change in the compressibility of liquid K occurs at lower pressures than the solid-solid phase transitions, perhaps indicating structural transitions in the liquid phase similar to those in the underlying solid. This could suggest a mechanism to explain the anomalous melting behaviour observed. Previous ab initio computational studies indicate that the unusual melting curve of Na arises from structural and electronic transitions occurring in the liquid, mirroring those found in the underlying solid at higher pressures. The discovery that the melting curve of K is very similar to that of Na suggests that the same physical phenomena predicted for Na could be responsible for the high-pressure melting behaviour observed in K. The tI19 phase of K, observed above 20 GPa at 300 K, is a composite incommensurate host-guest structure consisting of 1D chains of guest atoms surrounded by a tetragonal host framework. Along the unique c-axis, the host and guest are incommensurate with each other. During the melting studies described above, it was observed that with increasing temperature the more weakly bonded guest chains become more disordered while the host structure remains unchanged. To investigate and characterise this order-disorder transition, in situ synchrotron X-ray diffraction studies were conducted on single-crystal and quasi-single-crystal samples of tI19-K. An order-disorder phase line has been mapped out to 50 GPa and 650 K. Perhaps the most striking departure from NFE behaviour in the alkali elements is observed in Na at pressures above 200 GPa, where it transforms to a transparent electrical insulator. This phase is a so-called elemental “electride”, which may be thought of as being pseudo-ionically bonded. Electrides are predicted to exist in many elements, but at pressures far beyond the current capabilities of static pressure techniques. Utilising laser-driven quasi-isentropic compression techniques, dynamic compression experiments were performed on Na to see whether it is possible to observe this electride phase on the timescales of a dynamic compression experiment (nanoseconds). Optical velocimetry and reflectivity of the sample were measured directly to determine the pressure and to monitor the onset of the transparent phase, respectively.

Humidity’s effect on strength and stiffness of containerboard materials : A study in how the relative humidity in the ambient air affects the tensile and compression properties in linerboard and fluting mediums

Strömberg, Frida January 2016 (has links)
The aim of this thesis was to investigate the difference between containerboard materials' strength and stiffness properties in tension and compression, how the mechanisms behind the compressive and tensile properties are affected by the relative humidity of the ambient air, and how the relative humidity affects the compressive response of the fibre network. These properties are used to predict the lifetime performance of corrugated boxes and to prevent early collapse of the boxes, and thereby damage to the transported goods inside. The work also discusses the methods used to evaluate the different properties and how reliable the results are. The experimental part includes testing of linerboard and fluting materials made from both virgin and recycled fibres, conditioned at 50% and 90% relative humidity. The compression tests were filmed to evaluate whether different compression failure modes can be related to the strength and stiffness of the material. The results indicated that the compressive strength and stiffness differ from the tensile strength and stiffness values at 90% relative humidity. The compressive strength is lower than the tensile strength at both 50% and 90% relative humidity. However, the compressive stiffness is higher than the tensile stiffness at 90% relative humidity. The study of the method for evaluating the compressive behaviour of the paper does not give a complete picture of the type of failure the paper actually experiences.

Compression guidée par automate et noyaux rationnels / Compression guided by automata and rational kernels

Amarni, Ahmed 11 May 2015 (has links)
Due to the expansion of data, compression algorithms are now crucial. We address here the problem of finding compression algorithms that are optimal with respect to a given Markov source. To this end, we extend the classical Huffman algorithm. First, Huffman coding is applied locally to each state of the Markovian source, and the efficiency obtained for this algorithm is given. To further improve the efficiency and bring it close to optimal, we give another algorithm, still applied locally to each state of the Markovian source, but this time encoding the factors leaving these states so that the probability of each factor is a power of 1/2 (the Huffman algorithm being optimal if and only if all the symbols to be coded have probabilities that are powers of 1/2). As a perspective of this chapter, we give another algorithm (restricted to compression of the star) for coding a weighted expression, with the longer-term aim of coding a complete expression. Kernels are popular methods to measure the similarity between words for classification and learning. We generalize the definition of rational kernels in order to apply kernels to the comparison of languages. We study this generalization for the factor and subsequence kernels and prove that these kernels are well defined for parameters chosen in an appropriate interval. We give different methods to build the weighted transducers which compute these kernels.
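To make the first step above concrete, here is a minimal sketch (not the thesis's own code; the state names, probabilities, and the next_state rule are illustrative assumptions) of building one Huffman code per state of a Markov source and encoding with the code book of the current state:

```python
import heapq

def huffman_code(probs):
    """Standard Huffman coding for a dict {symbol: probability}."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

# One code book per state; transitions[state][symbol] = P(symbol | state) (made-up numbers).
transitions = {
    "A": {"a": 0.5, "b": 0.25, "c": 0.25},
    "B": {"a": 0.9, "b": 0.1},
}
codes = {state: huffman_code(dist) for state, dist in transitions.items()}

def encode(symbols, start_state, next_state):
    """Encode a symbol sequence, switching code books as the source state evolves."""
    state, out = start_state, []
    for s in symbols:
        out.append(codes[state][s])
        state = next_state(state, s)
    return "".join(out)

# Illustrative state-update rule, assumed for this sketch only.
def next_state(state, symbol):
    return "A" if symbol == "a" else "B"

print(encode("aab", start_state="A", next_state=next_state))
```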

Perceptual Image Compression using JPEG2000

Oh, Han January 2011 (has links)
Image sizes have increased exponentially in recent years. The resulting high-resolution images are typically encoded in a lossy fashion to achieve high compression ratios. Lossy compression can be categorized into visually lossless and visually lossy compression, depending on the visibility of compression artifacts. This dissertation proposes visually lossless coding methods as well as a visually lossy coding method with perceptual quality control. All resulting codestreams are JPEG2000 Part-I compliant. Visually lossless coding is increasingly considered as an alternative to numerically lossless coding. In order to hide compression artifacts caused by quantization, visibility thresholds (VTs) are measured and used for quantization of subbands in JPEG2000. In this work, VTs are experimentally determined from statistically modeled quantization distortion, which is based on the distribution of wavelet coefficients and the dead-zone quantizer of JPEG2000. The resulting VTs are adjusted for locally changing backgrounds through a visual masking model, and then used to determine the minimum number of coding passes to be included in a codestream for visually lossless quality under the desired viewing conditions. The proposed coding scheme successfully yields visually lossless images at bitrates competitive with those of numerically lossless coding and of visually lossless algorithms in the literature. This dissertation also investigates changes in VTs as a function of display resolution and proposes a method which effectively incorporates multiple VTs for various display resolutions into the JPEG2000 framework. The proposed coding method allows for visually lossless decoding at resolutions natively supported by the wavelet transform, as well as at arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely, this method can significantly reduce bandwidth usage. Contrary to images encoded in the visually lossless manner, highly compressed images inevitably have visible compression artifacts. To minimize these artifacts, many compression algorithms exploit the varying sensitivity of the human visual system (HVS) to different frequencies, which is typically measured at the near-threshold level where distortion is just noticeable. However, it is unclear whether the same frequency sensitivity applies at the supra-threshold level where distortion is highly visible. In this dissertation, the sensitivity of the HVS at several supra-threshold distortion levels is measured based on the JPEG2000 quantization distortion model. A low-complexity JPEG2000 encoder using the measured sensitivity is then described. The proposed visually lossy encoder significantly reduces encoding time while maintaining superior visual quality compared with conventional JPEG2000 encoders.
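As background for the quantization step described above, the sketch below (illustrative only, not the dissertation's implementation) applies a JPEG2000-style dead-zone scalar quantizer to a subband; in the scheme described, the step size would be derived from a measured visibility threshold, whereas here it is simply an assumed value:

```python
import numpy as np

def deadzone_quantize(coeffs, step):
    """Dead-zone scalar quantization: q = sign(c) * floor(|c| / step)."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step)

def deadzone_dequantize(indices, step, r=0.5):
    """Reconstruct nonzero bins at (|q| + r) * step; JPEG2000 decoders commonly use r near 0.375-0.5."""
    return np.sign(indices) * (np.abs(indices) + r) * step

# Example: the step size would come from a measured visibility threshold (assumed here).
rng = np.random.default_rng(0)
subband = rng.laplace(scale=4.0, size=1000)  # wavelet coefficients are roughly Laplacian
vt_step = 2.0                                # assumed visibility-threshold-derived step size
q = deadzone_quantize(subband, vt_step)
rec = deadzone_dequantize(q, vt_step)
print("max abs error:", np.max(np.abs(subband - rec)))  # bounded by about one step size
```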

Measurability Aspects of the Compactness Theorem for Sample Compression Schemes

Kalajdzievski, Damjan 31 July 2012 (has links)
In 1998, it was proved by Ben-David and Litman that a concept space has a sample compression scheme of size $d$ if and only if every finite subspace has a sample compression scheme of size $d$. In this compactness theorem, measurability of the hypotheses of the resulting sample compression scheme is not guaranteed; at the same time, measurability of the hypotheses is a necessary condition for learnability. In this thesis we discuss when a sample compression scheme, created from compression schemes on finite subspaces via the compactness theorem, has measurable hypotheses. We show that if $X$ is a standard Borel space with a $d$-maximum and universally separable concept class $\mathcal{C}$, then $(X,\mathcal{C})$ has a sample compression scheme of size $d$ with universally Borel measurable hypotheses. Additionally, we introduce a new variant of compression scheme called a copy sample compression scheme.
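For readers unfamiliar with the notion, a classical example (not taken from the thesis) is the concept class of intervals on the real line, which admits a sample compression scheme of size 2: keep the leftmost and rightmost positively labelled points, and reconstruct the interval they span. A minimal sketch:

```python
def compress(labelled_sample):
    """Keep at most two points: the leftmost and rightmost positively labelled examples."""
    positives = [x for x, label in labelled_sample if label == 1]
    if not positives:
        return []                 # empty compression set: the "all negative" hypothesis
    return [min(positives), max(positives)]

def reconstruct(compression_set):
    """Hypothesis: label 1 inside the closed interval spanned by the kept points."""
    if not compression_set:
        return lambda x: 0
    lo, hi = min(compression_set), max(compression_set)
    return lambda x: 1 if lo <= x <= hi else 0

# For any sample consistent with some interval concept, the reconstructed
# hypothesis agrees with the sample, so this is a compression scheme of size 2.
sample = [(0.5, 0), (1.2, 1), (2.0, 1), (3.1, 0)]
h = reconstruct(compress(sample))
assert all(h(x) == y for x, y in sample)
```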

A Study of Perceptually Tuned, Wavelet Based, Rate Scalable, Image and Video Compression

Wei, Ming 05 1900 (has links)
In this dissertation, first, we propose and implement a new perceptually tuned, wavelet-based, rate-scalable color image encoding/decoding system based on a human perceptual model. It builds on state-of-the-art research on embedded wavelet image compression and on the Contrast Sensitivity Function (CSF) of the Human Visual System (HVS), and extends this scheme to handle optimal bit allocation among multiple bands, such as Y, Cb, and Cr. Our experimental image codec shows very good results in compression performance and visual quality compared to the new wavelet-based international still image compression standard, JPEG 2000. Our codec also shows significantly better speed performance and comparable visual quality in comparison to the best available rate-scalable color image codec, CSPIHT, which is based on Set Partitioning In Hierarchical Trees (SPIHT) and the Karhunen-Loeve Transform (KLT). Secondly, a novel wavelet-based interframe compression scheme has been developed and put into practice. It is based on the Flexible Block Wavelet Transform (FBWT) that we have developed. FBWT-based interframe compression is efficient in both compression and speed performance. The compression performance of our video codec is compared with H.263+. At the same bit rate, our encoder is comparable to the H.263+ scheme: with a slightly lower Peak Signal-to-Noise Ratio (PSNR) value, it produces a more visually pleasing result. This implementation also preserves the scalability of the wavelet embedded coding technique. Thirdly, the scheme for optimal bit allocation among color bands for still imagery has been modified and extended to accommodate the spatio-temporal sensitivity of the HVS model. The bit allocation among color bands based on Kelly's spatio-temporal CSF model is designed to achieve the perceptual optimum for human eyes. A perceptually tuned, wavelet-based, rate-scalable video encoding/decoding system has been designed and implemented based on this new bit allocation scheme. Finally, to demonstrate potential applications of our rate-scalable video codec, a prototype system for rate-scalable video streaming over the Internet has been designed and implemented to deal with the bandwidth unpredictability of the Internet.
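Since the video comparison above is stated in terms of PSNR, the standard definition is sketched below for reference (generic code, not taken from the dissertation):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB (peak = 255 for 8-bit images)."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")       # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: an 8-bit "frame" and a slightly perturbed reconstruction of it.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(frame.astype(int) + rng.integers(-3, 4, size=frame.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(frame, noisy):.2f} dB")
```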

Hierarchická komprese / Hierarchical compression

Kreibichová, Lenka January 2011 (has links)
Most existing text compression methods are based on the same basic concept. First, the input text is divided into a sequence of text units. These text units can be single symbols, syllables, or words. When compressing large text files, searching for redundancies over longer text units is usually more effective than searching over shorter ones. But if we choose words as the base units, we can no longer capture redundancies over symbols and syllables. In this work we propose a new text compression method called hierarchical compression. It constructs a hierarchical grammar that stores redundancies over syllables, words, and higher levels of the text; the code of the text then consists of the code of this grammar. We propose a strategy for constructing the hierarchical grammar for a concrete input text and an effective way to encode it. The proposed method is compared with several other common text compression methods.
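The flavour of grammar-based text compression can be illustrated with a Re-Pair-style sketch, a related well-known technique rather than the specific hierarchical method proposed in the thesis: the most frequent adjacent pair of units is repeatedly replaced by a fresh non-terminal, and the grammar rules together with the reduced sequence form the code.

```python
from collections import Counter

def grammar_compress(units, min_count=2):
    """Repeatedly replace the most frequent adjacent pair with a new non-terminal."""
    seq = list(units)
    rules = {}
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < min_count:
            break
        nonterminal = f"R{next_id}"
        next_id += 1
        rules[nonterminal] = pair
        # Rewrite the sequence, replacing non-overlapping occurrences of the pair.
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nonterminal)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return rules, seq

def expand(symbol, rules):
    """Recursively expand a symbol back into the original units."""
    if symbol not in rules:
        return [symbol]
    left, right = rules[symbol]
    return expand(left, rules) + expand(right, rules)

text = list("abcabcabcxyz")
rules, seq = grammar_compress(text)
assert [u for s in seq for u in expand(s, rules)] == text
```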

Compréhension et modélisation du comportement du clinker de ciment lors du broyage par compression / Understanding and modeling the behaviour of cement clinker during compressive grinding

Esnault, Vivien 19 June 2013 (has links)
Clinker is the material obtained by firing a mix of limestone and clay, and it is the main ingredient of Portland cement, an essential component of most of the concrete produced around the world. This clinker must be finely ground before it shows sufficient reactivity. Mastering the grinding process is a major issue for the cement industry: it is the largest item in a plant's electricity consumption, partly because of the inefficiency of the processes used. Compressive grinding techniques, which appeared during the 1980s, brought major progress in energy efficiency, but their widespread adoption has been held back by process-control problems, particularly at high fineness. The goal of this thesis is a better understanding of the phenomena at play during compressive grinding of clinker, with a view to better control of industrial installations when producing fine products. We studied in particular, from a fundamental point of view, the behaviour of a granular material undergoing grain fragmentation, relying on the numerical simulation of a Representative Elementary Volume of material by the Discrete Element Method (DEM). We also sought a constitutive law linking stress, strain and particle-size evolution for the ground material, relying both on micromechanics and homogenisation techniques and on a semi-empirical mass-balance model. Finally, a first step towards modelling the industrial process, notably its simulation by the Finite Element Method (FEM), was sketched out in order to address the difficulties encountered in practice by industrial operators.
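The semi-empirical mass-balance idea mentioned above can be illustrated with a classical batch population-balance sketch, in which each size class loses mass at a rate given by a selection function and redistributes it to finer classes according to a breakage distribution. The parameters below are made up for illustration and are not the model calibrated in the thesis:

```python
import numpy as np

def grind_step(mass, selection, breakage, dt):
    """One explicit Euler step of dm_i/dt = -S_i m_i + sum_{j<i} b_{i,j} S_j m_j."""
    broken = selection * mass * dt                 # mass leaving each size class
    return mass - broken + breakage @ broken       # redistribute it to finer classes

n = 5                                              # size classes, 0 = coarsest
selection = np.array([0.30, 0.20, 0.12, 0.06, 0.0])    # finest class does not break
breakage = np.zeros((n, n))                        # b[i, j]: fraction of class j going to class i
for j in range(n - 1):
    breakage[j + 1:, j] = 1.0 / (n - j - 1)        # uniform redistribution, for illustration

mass = np.array([1.0, 0.0, 0.0, 0.0, 0.0])         # start with all mass in the coarsest class
for _ in range(200):
    mass = grind_step(mass, selection, breakage, dt=0.1)
print(mass, mass.sum())                            # total mass is conserved (sums to 1)
```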

Filtrage Stochastique et amélioration des performances des systèmes de positionnement d’engins sous-marins en milieu bruyant / Stochastic filtering and improvement of underwater vehicle positioning systems in noisy environments

Julien, Grégory 05 December 2012 (has links)
The positioning of an underwater vehicle relies on so-called "acoustic" systems, which give the relative position of the immersed vehicle with respect to the support ship. The performance of these systems is defined in terms of range limit and accuracy. Their principle rests on the notions of distance measurement and goniometry, both of which are based on estimating the propagation time, and hence the time of arrival, of the useful signal. This is classically done by a Pulse Compression operation. This technique, widely used in SONAR, RADAR and biomedical imaging, relies on a sub-optimal application of Matched Filtering. Indeed, the Matched Filter is an optimal estimation or detection technique when the noise is white and Gaussian and the useful signal is deterministic, that is, when the received signal is well known. However, it is well known that in the underwater environment the noise is not white and not always Gaussian. Moreover, the useful signal is distorted either by the propagation medium or by physical phenomena such as the Doppler effect, so it is not deterministic. The noise can therefore be considered coloured and the useful signal a realisation of a random process. Thus, in order to extend the assumptions under which classical Pulse Compression applies, we propose to build a new form of Pulse Compression based on the Stochastic Matched Filter, a natural extension of the Matched Filter to coloured noise and random signals. However, the Stochastic Matched Filter assumes the signals to be second-order stationary. This is not always the case for noise in the marine environment, and it is never the case for frequency-modulated signals such as those used by acoustic positioning systems. We therefore propose a new Pulse Compression technique combining the qualities of the Stochastic Matched Filter with those of time-frequency techniques. The latter, in particular the Wigner-Ville transform, make it possible to circumvent the stationarity assumption imposed by the Stochastic Matched Filter. In addition, to counter the interference generated by these techniques, we develop an approach based on atomic decomposition over a DCT basis. These three years of thesis work have thus given rise to new Pulse Compression methods that improve the performance of underwater acoustic positioning systems.
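The classical pulse compression baseline that the thesis extends can be sketched as follows (a generic illustration with an assumed linear chirp and white noise, not the stochastic or time-frequency variants developed in the thesis): the received signal is cross-correlated with a replica of the emitted pulse, and the correlation peak gives the estimated time of arrival.

```python
import numpy as np

fs = 100_000.0                                   # sample rate (Hz), assumed
t = np.arange(0, 0.01, 1 / fs)                   # 10 ms linear chirp from 10 to 20 kHz
chirp = np.sin(2 * np.pi * (10_000 * t + 0.5 * 1_000_000 * t ** 2))

# Received signal: delayed, attenuated replica buried in white noise
rng = np.random.default_rng(0)
delay_samples = 2500
received = np.zeros(10_000)
received[delay_samples:delay_samples + chirp.size] += 0.2 * chirp
received += rng.normal(scale=0.5, size=received.size)

# Pulse compression = cross-correlation with the replica (matched filter for white noise)
correlation = np.correlate(received, chirp, mode="valid")
toa_estimate = np.argmax(np.abs(correlation))
print("estimated delay (samples):", toa_estimate)   # close to 2500
```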

Transformations compactes de triangulations surfaciques par bascule d'arête / Compact transformation for 2-dimensional triangulations with edge flip

Espinas, Jérémy 24 October 2013 (has links)
The development of systematic 3D shape digitisation (national heritage conservation, e-commerce, reverse engineering, integration of real objects into virtual-reality environments) and the ever-growing need for such geometric objects in many applications (computer-aided design, finite element simulations, geographic information systems, digital entertainment) have led to a dramatic increase in the volume of data to be processed and to the emergence of many compression methods for 3D models. This volume of data becomes even more difficult to manage when the temporal aspect comes into play. Meshes are the model classically used to represent digitised shapes, and some compression approaches exploit the property that a good estimate of the connectivity can be deduced from the sampling when the latter is sufficiently dense. Compressing the connectivity of a mesh then amounts to coding the difference between two close connectivities. In this thesis, we focus on the compact coding of this difference for surface meshes. Our work is based on the use of the edge flip and the study of its properties. Our contributions are the following. First, given two connected triangulations sharing the same number of vertices and the same topological genus, we propose a direct and efficient algorithm to generate a sequence of edge flips transforming one mesh into the other. We rely on a correspondence between the vertices of the two meshes which, if not provided, may be chosen completely at random; the validity of the algorithm rests on working in a class of triangulations different from those generally used. We then generalise edge flips to triangulations in which each edge carries a label, and show that a sequence of edge flips can be used to transpose two labels under certain conditions. From this result, the edge flip can be generalised to meshes whose faces are not necessarily triangular, which allowed us to develop an algorithm for reducing sequences of edge flips. Finally, we present a compact coding approach for a sequence of edge flips and determine under what conditions it is better to use this compact transformation between two connectivities rather than coding them independently with a static algorithm.
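A minimal sketch of the basic operation, with illustrative data structures rather than the thesis's implementation: an edge flip removes the edge shared by two triangles and replaces it with the opposite diagonal of the quadrilateral they form (geometric validity of the flip is not checked here).

```python
def flip_edge(triangles, edge):
    """Flip the edge (u, v) shared by exactly two triangles given as vertex triples."""
    u, v = edge
    adjacent = [t for t in triangles if u in t and v in t]
    if len(adjacent) != 2:
        raise ValueError("edge is not flippable: it must be shared by exactly two triangles")
    t1, t2 = adjacent
    a = next(x for x in t1 if x not in (u, v))   # apex of the first triangle
    b = next(x for x in t2 if x not in (u, v))   # apex of the second triangle
    rest = [t for t in triangles if t not in (t1, t2)]
    # The shared edge (u, v) is replaced by the opposite diagonal (a, b).
    return rest + [(a, b, u), (a, b, v)]

# Two triangles sharing edge (1, 2) inside the quadrilateral 0-1-3-2
tris = [(0, 1, 2), (1, 2, 3)]
print(flip_edge(tris, (1, 2)))   # -> [(0, 3, 1), (0, 3, 2)]
```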
