151 |
Higher Compression from the Burrows-Wheeler Transform with New Algorithms for the List Update Problem
Chapin, Brenton 08 1900 (has links)
Burrows-Wheeler compression is a three-stage process in which the data is transformed with the Burrows-Wheeler Transform, then transformed with Move-To-Front, and finally encoded with an entropy coder. Move-To-Front, Transpose, and Frequency Count are some of the many algorithms used for the List Update problem. In 1985, competitive analysis first showed the superiority of Move-To-Front over Transpose and Frequency Count on the List Update problem with arbitrary data. Earlier studies due to Bitner assumed independent, identically distributed data and showed that while Move-To-Front adapts to a distribution faster, incurring less overwork, the asymptotic costs of Frequency Count and Transpose are lower. The improvements to Burrows-Wheeler compression this work covers are increases in the amount, not the speed, of compression. Best x of 2x-1 is a new family of algorithms created to improve on Move-To-Front's processing of the output of the Burrows-Wheeler Transform, which resembles piecewise independent, identically distributed data. Several variations of Move One From Front and part of the randomized algorithm Timestamp are also analyzed, for both the middle stage of Burrows-Wheeler compression and the List Update problem, in terms of overwork, asymptotic cost, and competitive ratio. The Best x of 2x-1 family includes Move-To-Front, the part of Timestamp of interest, and Frequency Count. Lastly, a greedy choosing scheme, Snake, switches back and forth between two List Update algorithms as the amount of compression each achieves fluctuates, to increase overall compression. The Burrows-Wheeler Transform is based on sorting of contexts. The other improvements are better sorting orders, such as “aeioubcdf...” instead of the standard alphabetical “abcdefghi...” for English text, together with an algorithm for computing orders for any data, and Gray code sorting instead of standard sorting. Both techniques lessen the overwork incurred by whatever List Update algorithm is used by reducing the difference between adjacent sorted contexts.
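For illustration, a minimal sketch of the Move-To-Front middle stage on byte data; the function name and test string are illustrative, not taken from the dissertation.

```python
def move_to_front_encode(data: bytes) -> list[int]:
    """Encode a byte sequence by emitting each symbol's current list position,
    then moving that symbol to the front of the list."""
    symbol_list = list(range(256))  # initial list order: 0, 1, ..., 255
    output = []
    for byte in data:
        index = symbol_list.index(byte)   # position of the symbol in the list
        output.append(index)
        symbol_list.pop(index)            # move the accessed symbol ...
        symbol_list.insert(0, byte)       # ... to the front
    return output

# BWT output has long runs of few symbols, so MTF yields many small indices,
# which the final entropy-coding stage compresses well.
print(move_to_front_encode(b"bbbaaabbb"))  # [98, 0, 0, 98, 0, 0, 1, 0, 0]
```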
|
152 |
Algorithms for compression of high dynamic range images and video
Dolzhenko, Vladimir January 2015 (has links)
The recent advances in sensor and display technologies have brought about High Dynamic Range (HDR) imaging capability. Modern multiple-exposure HDR sensors can achieve a dynamic range of 100-120 dB, and LED and OLED display devices have contrast ratios of 10^5:1 to 10^6:1. Despite these advances, image/video compression algorithms and the associated hardware are still based on Standard Dynamic Range (SDR) technology, i.e. they operate within an effective dynamic range of up to 70 dB for 8-bit gamma-corrected images. Further, the existing infrastructure for content distribution is also designed for SDR, which creates interoperability problems with true HDR capture and display equipment. Current solutions to this problem include tone mapping the HDR content to fit SDR; however, this approach leads to image quality problems when strong dynamic range compression is applied. Even though some HDR-only solutions have been proposed in the literature, they are not interoperable with the current SDR infrastructure and are thus typically used in closed systems. Given the above observations, a research gap was identified: the need for efficient algorithms for the compression of still images and video that are capable of storing the full dynamic range and colour gamut of HDR images while remaining backward compatible with the existing SDR infrastructure. To improve the usability of the SDR content, it is vital that any such algorithms accommodate different tone mapping operators, including those that are spatially non-uniform. In the course of the research presented in this thesis, a novel two-layer CODEC architecture is introduced for both HDR image and video coding. Further, a universal and computationally efficient approximation of the tone mapping operator is developed and presented. It is shown that the use of perceptually uniform colourspaces for the internal representation of pixel data improves the compression efficiency of the algorithms. Novel approaches to the compression of metadata for the tone mapping operator are shown to improve compression performance for low-bitrate video content. Multiple compression algorithms are designed, implemented and compared, and quality-complexity trade-offs are identified. Finally, practical aspects of implementing the developed algorithms are explored by automating the design space exploration flow and integrating the high-level systems design framework with domain-specific tools for synthesis and simulation of multiprocessor systems. Directions for further work are also presented.
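As an illustration of tone mapping in a two-layer scheme, a sketch of the global Reinhard operator follows. This is a standard textbook operator, not the approximation developed in the thesis; in a two-layer codec the base layer would carry the tone-mapped SDR image and the enhancement layer the data needed to recover the HDR original.

```python
import numpy as np

def reinhard_tonemap(hdr_luminance: np.ndarray, key: float = 0.18) -> np.ndarray:
    """Global Reinhard operator: map scene luminance into [0, 1) display range."""
    eps = 1e-6
    log_avg = np.exp(np.mean(np.log(hdr_luminance + eps)))  # geometric mean
    scaled = key * hdr_luminance / log_avg                  # expose to mid-grey
    return scaled / (1.0 + scaled)                          # compress highlights

# Two-layer idea: base layer = encoded SDR (tone-mapped) image;
# enhancement layer = residual needed to reconstruct the HDR original.
hdr = np.random.lognormal(mean=0.0, sigma=2.0, size=(4, 4))
sdr = reinhard_tonemap(hdr)
```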
|
153 |
Perceived audio quality of compressed audio in game dialogue
Ahlberg, Anton January 2016 (has links)
A game can contain thousands of sound assets; to fit them all into a manageable amount of storage, the files usually have to be compressed. One type of sound that often takes up a lot of disc space (because there is so much of it) is dialogue. In the popular game engine Unreal Engine 4 (UE4), audio is compressed to Ogg Vorbis with a default bit rate of 104 kbit/s. The goal of this paper is to see whether untrained listeners find dialogue compressed in Ogg Vorbis at 104 kbit/s good enough, or whether they prefer higher bit rates. A game was made in UE4 to act as a listening test. Dialogue audio was recorded with a male and a female voice actor and compressed in UE4 at six different bit rates. Twenty-four untrained subjects were asked to play the game and identify which two of the six robots they thought sounded best. The results show that the subjects preferred the higher bit rates tested. The results were analyzed with a chi-squared test, which showed that the null hypothesis could be rejected. Only 21% of the answers favoured UE4's default bit rate of 104 kbit/s or lower. The results suggest that subjects prefer dialogue at higher bit rates and that UE4 should raise its default bit rate.
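For illustration, a chi-squared test of preference counts of the kind described; the vote tallies below are invented placeholders, not the study's data.

```python
from scipy.stats import chisquare

# Hypothetical tally of which bit-rate variants the 24 subjects picked
# (2 picks each, 48 votes over 6 robots); NOT the actual study data.
observed = [3, 4, 5, 7, 13, 16]          # votes per bit rate, low -> high
expected = [48 / 6] * 6                  # uniform under the null hypothesis

stat, p_value = chisquare(observed, expected)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")  # small p -> reject the null
```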
|
154 |
Theoretical study of flux compression for the conceptual design of a non-explosive FCG
Dickson, Andrew Stuart 31 October 2006 (has links)
Student Number: 9608998A -
MSc dissertation -
School of Electrical and Information Engineering -
Faculty of Engineering and the Built Environment / The history of flux compression is relatively short. One of its founders, the Russian physicist Sakharov, developed the idea of compressing a magnetic field to generate high magnetic fields, and from this he also developed a generator to produce current impulses. Most of this initial work was performed in military research laboratories. The first open literature became available in the 1960s, and from there it has become an international research arena. There are two types of flux compression generators: field generators and current generators. These are discussed along with the basic theory of flux compression generators and the related physics. The efficiency of generators is often quite low. However, many generators use high explosives, and because of their high energy density the current or field strength produced is substantially greater than that of the initial source. This of course limits the locations possible for experimental work and consequently limits the industrial applications of flux compression generators.
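The basic theory referred to above rests on flux conservation; as a sketch of that standard argument (textbook physics, not equations taken from the dissertation):

```latex
% Ideal flux compression: a perfectly conducting circuit conserves the
% enclosed magnetic flux while its inductance is mechanically reduced,
% so both the current and the stored magnetic energy are amplified.
\begin{align}
  \Phi = L I = \mathrm{const}
    \quad\Rightarrow\quad
  I_{\mathrm{final}} = I_0 \, \frac{L_0}{L_{\mathrm{final}}}, \\
  W = \tfrac{1}{2} L I^2
    \quad\Rightarrow\quad
  \frac{W_{\mathrm{final}}}{W_0} = \frac{L_0}{L_{\mathrm{final}}}.
\end{align}
```

The energy gain equals the inductance ratio; the extra energy is supplied by the mechanical work done against magnetic pressure, which is why explosive drivers achieve such large amplification.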
This research presents a theoretical design for a non-explosive flux compression generator. The generator is designed to produce a current impulse for tests in laboratory and remote locations. The generator has the advantage of being non-destructive, thereby reducing costs and allowing for repeatable experiments. The design also reduces the possibility of many of the loss mechanisms.
|
155 |
Contribution à l’optimisation de densité de code pour Processeur Embarqué / Contribution to the optimization of embedded processor code density
Fahmi, Youssef 13 June 2013 (links)
Embedded systems occupy an ever larger share of today's market, with devices built around systems-on-chip. These embedded systems face very strong constraints on cost, size, power consumption, reliability and dimensions. In this context, the code density of a processor becomes an important criterion. The idea of this thesis was to take a RISC processor (the APS3 from Cortus), which performs well in the embedded world, and to increase its code density. Several methods were tested:
– Huffman-based compression.
– Dictionary-based compression.
– Modification of the instruction set.
The compression methods showed their limits in our case: either they were incompatible with our objectives, or the gain they offered was too small compared with the overheads in code size and extra execution cycles. This pushed us towards modifying the instruction set. The result is a 25% improvement in code density in the research phase and 20.8% in the final version of the processor, since a compromise had to be made to keep a small size and good performance. The APS3CD is the result of this thesis. It has an area of 49605 µm², a maximum frequency of 444 MHz, a score of 2.16 DMIPS/MHz and a power consumption of 12 µW/MHz (UMC90). It offers a 20.8% gain over the APS3 and 40% over the Cortex-M3 (with gcc), which is a market reference for code density. The gain could nevertheless be increased by working on the compiler, since the current compiler (gcc) does not fully exploit the added complex instructions in some cases. A possible continuation would be to work on a compiler better suited than gcc, which was not originally designed for embedded systems with code density requirements. An example is the code size difference between gcc and IAR or Keil for ARM processors. / Since the market is moving toward portable devices built around a single System-on-Chip (SoC), the code density of a processor becomes an important criterion. The idea of this thesis was to improve the code density of the Cortus APS3, an embedded RISC processor with good performance. Several methods were tried:
– Huffman compression.
– Dictionary-based compression.
– Instruction set modification.
The compression methods showed their limits in this case, either because they were not compatible with our goals or because they did not provide a gain large enough compared to the overheads in size and cycle count when running. This prompted us to modify the instruction set. The result was a 25% code density improvement in the research phase and a 20.8% improvement in the final version of the processor, because we had to preserve the good performance and small size of the APS3. The APS3CD is the result of this thesis. It has an area of 49605 µm², a maximum frequency of 444 MHz, a score of 2.16 DMIPS/MHz and a consumption of 12 µW/MHz (UMC90). It offers a 20.8% gain over the APS3 and 40% compared to the Cortex-M3 (with gcc), which is a reference in terms of code density in the market. However, the gain can be increased by working on the compiler, because the current compiler (gcc) does not fully utilize the added complex instructions (in some cases). A possible continuation would be to work on a compiler better than gcc, which was not designed for embedded applications with code density demands. An example is the code size difference between gcc and Keil or IAR for ARM processors.
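For illustration, a minimal sketch of the Huffman idea evaluated in the thesis, applied to a toy opcode-frequency table; the opcodes and counts are invented for the example, not taken from the APS3.

```python
import heapq
from collections import Counter

def huffman_code(freqs: dict[str, int]) -> dict[str, str]:
    """Build a prefix code: frequent symbols get short bit strings."""
    heap = [[weight, [sym, ""]] for sym, weight in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]   # left branch of the code tree
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]   # right branch of the code tree
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

# Toy instruction mix: a handful of opcodes with skewed frequencies.
opcode_freqs = Counter({"ld": 40, "st": 25, "add": 20, "bra": 10, "mul": 5})
print(huffman_code(opcode_freqs))
```

The drawback the abstract alludes to is visible here: variable-length codewords complicate instruction fetch and add decode cycles, which is why instruction-set modification won out.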
|
156 |
Reeb Graph Modeling of 3-D Animated Meshes and its Applications to Shape Recognition and Dynamic Compression / Modélisation des maillages animés 3D par Reeb Graph et son application à l'indexation et la compression
Hachani, Meha 19 December 2015 (links)
The rapid development of computer networks has led to the appearance of various multimedia applications that employ 3D data in multiple contexts. While the majority of research on these data has relied on static models, it is now necessary to turn to dynamic mesh models. However, the triangular mesh is an extrinsic representation, sensitive to affine and isometric transformations; it therefore needs an intrinsic structural descriptor. To address these challenges, we focus on intrinsic topological modeling based on Reeb graphs. Our main contribution consists in defining a new continuous function based on heat diffusion properties, computed as the diffusion distance from a surface point to the points located at the extremities of the 3D model, which represent the local extrema of the object. This Reeb graph construction approach can be extremely useful as a local shape descriptor for 3D shape recognition, and it can also be introduced into a segmentation-based dynamic compression scheme. In a second part, we propose to exploit the Reeb graph construction method in a recognition system for non-rigid 3D shapes. The objective is to segment the Reeb graph into Reeb charts, defined as charts of controlled topology. Each Reeb chart is projected onto the canonical planar domain. This unfolding into the canonical planar domain introduces area and angle distortions. Based on a distortion estimate, feature vector extraction is performed: for each chart we compute a pair of signatures, subsequently used to match Reeb charts. In a third part, we propose a segmentation technique for 3D dynamic meshes. The segmentation process is driven by the values of the scalar function proposed in the first part. The principle is to derive a purely topological segmentation that partitions the mesh into rigid regions while estimating the motion of each region over time. To obtain a good distribution of the vertices located on region boundaries, we add a refinement step based on curvature information. Each region boundary is associated with a function value corresponding to a critical point; the aim is to find the optimal value of this function, which determines the boundary profile. The developed segmentation technique is exploited in a lossless compression system for 3D dynamic meshes. The first frame of the sequence is partitioned; each region is modeled by an affine transform and its associated animation weights. The partition vector, associating with each vertex the index of the region to which it belongs, is compressed with an arithmetic coder. The two sets of affine transforms and animation weights are uniformly quantized and compressed with an arithmetic coder. The first frame of the sequence is compressed with a static mesh coder. The quantization of the temporal prediction error is optimized by minimizing the reconstruction error. This process operates on the prediction error data, divided into three sub-bands corresponding to the prediction errors of the x, y and z coordinates. The distortion introduced is controlled by computing, for each sub-band, the quantization step required to reach the target bit rate. / In the last decade, the technological progress in telecommunication, hardware design and multimedia has allowed access to an ever finer three-dimensional (3-D) modeling of the world. While most researchers have focused on the field of 3D objects, it is now necessary to turn to the 3D time domain (3D+t). 3D dynamic meshes are becoming a medium of increasing importance. This 3D content is subject to various processing operations such as indexation, segmentation or compression. However, a surface mesh is an extrinsic shape representation. Therefore, it suffers from important variability under different sampling strategies and canonical shape-non-altering surface transformations, such as affine or isometric transformations. Consequently it needs an intrinsic structural descriptor before being processed by one of the aforementioned operations. The research topic of this thesis is topological modeling based on Reeb graphs. Specifically, we focus on 3D shapes represented by triangulated surfaces. Our objective is to propose a new approach to Reeb graph construction that exploits temporal information. The main contribution consists in defining a new continuous function based on heat diffusion properties, computed from the discrete representation of the shape to obtain a topological structure. The restriction of the heat kernel to the temporal domain makes the proposed function intrinsic and stable against transformations. Due to the presence of neighborhood information in the heat kernel, the proposed Reeb graph construction approach can be extremely useful as a local shape descriptor for non-rigid shape retrieval. It can also be introduced into a segmentation-based dynamic compression scheme in order to infer the functional parts of a 3D shape by decomposing it into parts of uniform motion. In this context, we apply the concept of Reeb graphs in two widely used applications: pattern recognition and compression. The Reeb graph has been known as an interesting candidate for intrinsic structural representation of 3D shapes, and we propose a 3D non-rigid shape recognition approach built on it. The main contribution consists in defining a new scalar function, computed from the diffusion distance, to construct the Reeb graph. For matching purposes, the constructed Reeb graph is segmented into Reeb charts, each associated with a pair of geometrical signatures. The matching between two Reeb charts is performed based on the distances between their corresponding signatures; the global similarity is then estimated from the minimum distance between Reeb chart pairs. Skeletonisation and segmentation tasks are closely related, and mesh segmentation can be formulated as graph clustering. We first propose an implicit segmentation method that partitions mesh sequences with constant connectivity based on the Reeb graph construction method: regions are separated according to the values of the proposed continuous function, with a refinement step based on curvature and boundary information. Intrinsic mesh surface segmentation has been studied in the field of computer vision, especially for compression and simplification purposes.
Therefore we present a segmentation-based compression scheme for animated sequences of meshes with constant connectivity. The proposed method exploits the temporal coherence of the geometry component by using the heat diffusion properties during the segmentation process. The motion of the resulting regions is accurately described by 3D affine transforms, computed at the first frame to match the subsequent ones. In order to improve the performance of our coding scheme, the quantization of temporal prediction errors is optimized using a bit allocation procedure. The objective is to control the compression rate while minimizing the reconstruction error.
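As an aside, a toy sketch of the level-set idea behind Reeb graph construction: vertices are binned by the value of a scalar function, and each connected component within a bin approximates one chart. The adjacency structure and function here are placeholders, not the heat-diffusion function defined in the thesis.

```python
from collections import defaultdict

def reeb_charts(adjacency: dict[int, set[int]], f: dict[int, float],
                n_levels: int = 4) -> list[tuple[int, set[int]]]:
    """Bin vertices by scalar-function value, then split each bin into
    connected components; each component approximates one Reeb chart."""
    lo, hi = min(f.values()), max(f.values())
    width = (hi - lo) / n_levels or 1.0          # guard against a flat function
    bins = defaultdict(set)
    for v, value in f.items():
        bins[min(int((value - lo) / width), n_levels - 1)].add(v)

    charts = []
    for level, verts in sorted(bins.items()):
        unvisited = set(verts)
        while unvisited:                          # flood-fill one component
            stack, comp = [unvisited.pop()], set()
            while stack:
                v = stack.pop()
                comp.add(v)
                nbrs = adjacency[v] & unvisited   # neighbours in the same bin
                unvisited -= nbrs
                stack.extend(nbrs)
            charts.append((level, comp))
    return charts
```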
|
157 |
Stereoscopic video coding.
January 1995 (has links)
by Roland Siu-kwong Ip.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1995.
Includes bibliographical references (leaves 101-[105]).
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Motivation --- p.1
Chapter 1.2 --- Image Compression --- p.2
Chapter 1.2.1 --- Classification of Image Compression --- p.2
Chapter 1.2.2 --- Lossy Compression Approaches --- p.3
Chapter 1.3 --- Video Compression --- p.4
Chapter 1.3.1 --- Video Compression System --- p.5
Chapter 1.4 --- Stereoscopic Video Compression --- p.6
Chapter 1.5 --- Organization of the thesis --- p.6
Chapter 2 --- Motion Video Coding Theory --- p.8
Chapter 2.1 --- Introduction --- p.8
Chapter 2.2 --- Representations --- p.8
Chapter 2.2.1 --- Temporal Processing --- p.13
Chapter 2.2.2 --- Spatial Processing --- p.19
Chapter 2.3 --- Quantization --- p.25
Chapter 2.3.1 --- Scalar Quantization --- p.25
Chapter 2.3.2 --- Vector Quantization --- p.27
Chapter 2.4 --- Code Word Assignment --- p.29
Chapter 2.5 --- Selection of Video Coding Standard --- p.31
Chapter 3 --- MPEG Compatible Stereoscopic Coding --- p.34
Chapter 3.1 --- Introduction --- p.34
Chapter 3.2 --- MPEG Compatibility --- p.36
Chapter 3.3 --- Stereoscopic Video Coding --- p.37
Chapter 3.3.1 --- Coding by Stereoscopic Differences --- p.37
Chapter 3.3.2 --- I-pictures only Disparity Coding --- p.40
Chapter 3.4 --- Stereoscopic MPEG Encoder --- p.44
Chapter 3.4.1 --- Stereo Disparity Estimator --- p.45
Chapter 3.4.2 --- Improved Disparity Estimation --- p.47
Chapter 3.4.3 --- Stereo Bitstream Multiplexer --- p.49
Chapter 3.5 --- Generic Implementation --- p.50
Chapter 3.5.1 --- Macroblock Converter --- p.54
Chapter 3.5.2 --- DCT Functional Block --- p.55
Chapter 3.5.3 --- Rate Control --- p.57
Chapter 3.6 --- Stereoscopic MPEG Decoder --- p.58
Chapter 3.6.1 --- Mono Playback --- p.58
Chapter 3.6.2 --- Stereo Playback --- p.60
Chapter 4 --- Performance Evaluation --- p.63
Chapter 4.1 --- Introduction --- p.63
Chapter 4.2 --- Test Sequences Generation --- p.63
Chapter 4.3 --- Simulation Environment --- p.64
Chapter 4.4 --- Simulation Results --- p.65
Chapter 4.4.1 --- Objective Results --- p.65
Chapter 4.4.2 --- Subjective Results --- p.72
Chapter 5 --- Conclusions --- p.80
Chapter A --- MPEG - An International Standard --- p.83
Chapter A.1 --- Introduction --- p.83
Chapter A.2 --- Preprocessing --- p.84
Chapter A.3 --- Data Structure of Pictures --- p.85
Chapter A.4 --- Picture Coding --- p.86
Chapter A.4.1 --- Coding of Motion Vectors --- p.90
Chapter A.4.2 --- Coding of Quantized Coefficients --- p.94
References --- p.101
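The "Stereo Disparity Estimator" of Chapter 3.4.1 refers to block-based disparity search between the two views; a generic block-matching sketch follows (a common formulation, not the thesis's implementation; block size and search range are arbitrary).

```python
import numpy as np

def block_disparity(left: np.ndarray, right: np.ndarray,
                    block: int = 8, max_disp: int = 16) -> np.ndarray:
    """For each block of the left view, find the horizontal shift into the
    right view that minimizes the sum of absolute differences (SAD)."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block]
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x) + 1):   # candidate shifts
                cand = right[y:y + block, x - d:x - d + block]
                sad = np.abs(ref.astype(int) - cand.astype(int)).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d                      # per-block disparity
    return disp
```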
|
158 |
Wavelet Compression for Visualization and Analysis on High Performance Computers
Li, Shaomeng 31 October 2018 (links)
As HPC systems move towards exascale, the discrepancy between computational power and I/O transfer rate is only growing larger. Lossy in situ compression is a promising solution to address this gap, since it alleviates I/O constraints while still enabling traditional post hoc analysis. This dissertation explores the viability of such a solution with respect to a specific kind of compressor: wavelets. We examine three aspects of concern regarding the viability of wavelets: 1) information loss after compression, 2) their capability to fit within in situ constraints, and 3) the compressor's capability to adapt to HPC architectural changes. Findings from this dissertation inform the in situ use of wavelet compressors on HPC systems, demonstrate their viability, and argue that this viability will only increase as exascale computing becomes a reality.
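For illustration, a minimal sketch of the lossy wavelet idea examined here (one level of a 1-D Haar transform with coefficient thresholding); production in situ compressors are far more sophisticated, so this is purely illustrative.

```python
import numpy as np

def haar_compress(signal: np.ndarray, keep_ratio: float = 0.25) -> np.ndarray:
    """One-level Haar transform; zero out the smallest coefficients (lossy),
    then invert. Storing only nonzero coefficients gives the compression."""
    avg = (signal[0::2] + signal[1::2]) / 2.0       # low-pass half
    diff = (signal[0::2] - signal[1::2]) / 2.0      # high-pass half
    coeffs = np.concatenate([avg, diff])

    cutoff = np.quantile(np.abs(coeffs), 1.0 - keep_ratio)
    coeffs[np.abs(coeffs) < cutoff] = 0.0           # information loss happens here

    avg, diff = np.split(coeffs, 2)
    out = np.empty_like(signal, dtype=float)
    out[0::2], out[1::2] = avg + diff, avg - diff   # inverse transform
    return out

data = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.05 * np.random.randn(64)
print(np.abs(data - haar_compress(data)).max())    # reconstruction error
```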
|
159 |
Attractor image coding with low blocking effects.
January 1997 (has links)
by Ho, Hau Lai.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1997.
Includes bibliographical references (leaves 97-103).
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Overview of Attractor Image Coding --- p.2
Chapter 1.2 --- Scope of Thesis --- p.3
Chapter 2 --- Fundamentals of Attractor Coding --- p.6
Chapter 2.1 --- Notations --- p.6
Chapter 2.2 --- Mathematical Preliminaries --- p.7
Chapter 2.3 --- Partitioned Iterated Function Systems --- p.10
Chapter 2.3.1 --- Mathematical Formulation of the PIFS --- p.12
Chapter 2.4 --- Attractor Coding using the PIFS --- p.16
Chapter 2.4.1 --- Quadtree Partitioning --- p.18
Chapter 2.4.2 --- Inclusion of an Orthogonalization Operator --- p.19
Chapter 2.5 --- Coding Examples --- p.21
Chapter 2.5.1 --- Evaluation Criterion --- p.22
Chapter 2.5.2 --- Experimental Settings --- p.22
Chapter 2.5.3 --- Results and Discussions --- p.23
Chapter 2.6 --- Summary --- p.25
Chapter 3 --- Attractor Coding with Adjacent Block Parameter Estimations --- p.27
Chapter 3.1 --- δ-Minimum Edge Difference --- p.29
Chapter 3.1.1 --- Definition --- p.29
Chapter 3.1.2 --- Theoretical Analysis --- p.31
Chapter 3.2 --- Adjacent Block Parameter Estimation Scheme --- p.33
Chapter 3.2.1 --- Joint Optimization --- p.34
Chapter 3.2.2 --- Predictive Coding --- p.36
Chapter 3.3 --- Algorithmic Descriptions of the Proposed Scheme --- p.39
Chapter 3.4 --- Experimental Results --- p.40
Chapter 3.5 --- Summary --- p.50
Chapter 4 --- Attractor Coding using Lapped Partitioned Iterated Function Systems --- p.51
Chapter 4.1 --- Lapped Partitioned Iterated Function Systems --- p.53
Chapter 4.1.1 --- Weighting Operator --- p.54
Chapter 4.1.2 --- Mathematical Formulation of the LPIFS --- p.57
Chapter 4.2 --- Attractor Coding using the LPIFS --- p.62
Chapter 4.2.1 --- Choice of Weighting Operator --- p.64
Chapter 4.2.2 --- Range Block Preprocessing --- p.69
Chapter 4.2.3 --- Decoder Convergence Analysis --- p.73
Chapter 4.3 --- Local Domain Block Searching --- p.74
Chapter 4.3.1 --- Theoretical Foundation --- p.75
Chapter 4.3.2 --- Local Block Searching Algorithm --- p.77
Chapter 4.4 --- Experimental Results --- p.79
Chapter 4.5 --- Summary --- p.90
Chapter 5 --- Conclusion --- p.91
Chapter 5.1 --- Original Contributions --- p.91
Chapter 5.2 --- Subjects for Future Research --- p.92
Chapter A --- Fundamental Definitions --- p.94
Chapter B --- Appendix B --- p.96
Bibliography --- p.97
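For orientation, the fixed-point principle behind attractor (PIFS) coding in its standard form; the collage theorem bound shown here is textbook material, not a result from the thesis.

```latex
% Contractive decoding: if the PIFS operator T satisfies
% d(T(x), T(y)) <= s d(x, y) with contractivity s < 1, iterating T from any
% starting image converges to a unique attractor x_T, and the collage theorem
% bounds the distance between that attractor and the encoded image mu:
\begin{equation}
  d(\mu, x_T) \;\le\; \frac{1}{1 - s}\, d\bigl(\mu, T(\mu)\bigr)
\end{equation}
```

Encoding therefore reduces to finding, for each range block, a domain block and transform that make the collage error d(μ, T(μ)) small; the decoder simply iterates T.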
|
160 |
Étude numérique de l’interaction choc/couche limite en géométrie de révolution / Numerical Study of Shock/Boundary Layer Interaction on a Cylindrical Configuration
Nakano, Tamon 12 September 2018 (links)
Shock/boundary-layer interaction phenomena are design drivers for many aeronautical and space applications. They can be associated with the formation of low-frequency unsteady separations, which until now have only been studied in planar geometries. The present study aims to characterize this type of interaction in a cylindrical configuration. A direct numerical simulation tool, based on the extension of high-accuracy hybrid finite-difference schemes (optimized 6th-order centered/5th-order WENO) to curvilinear geometries, was developed and validated on various standard test cases. The first part of the study focuses on the influence of transverse curvature on the development of a supersonic boundary layer at Mach 3. It is shown that increasing the relative curvature of the boundary layer tends to reduce the low-frequency fluctuation energy near the wall, while reinforcing high-frequency perturbations in the outer zone of the boundary layer. Compared with the planar case, transverse curvature induces a notable reorganization of the boundary-layer structures and a different behavior of the stress anisotropy invariants, but leads only to a slight modification of the stress distributions and of the overall turbulent kinetic energy balance. The second part of the study concentrates on the interaction zone with a compression ramp and the unsteady shock motion in a full cylindrical geometry. The azimuthal deformation of the shock during its motion is characterized; it appears essentially associated with the fluctuation of the separation line and the organization of the upstream vortical structures. It is shown that the energy of the azimuthal modes of the fluctuating wall pressure is more amplified for higher-order modes. The contribution to the lateral force associated with mode 1 appears particularly marked at low frequencies upstream of the separation point and at medium frequencies downstream of the reattachment zone on the ramp, where the highest fluctuation levels are observed. Low-frequency fluctuations are, in contrast, shown to be carried by azimuthal modes of increasingly high order across the interaction zone. / Shock wave/boundary layer interactions (SWBLI) are present in various aerospace engineering applications. They can be associated with separated regions yielding low-frequency unsteadiness, which have mainly been studied in planar geometries. The present study aims at characterizing this type of interaction in a cylindrical configuration. A direct numerical simulation solver has been developed and validated on various test cases. It is based on high-order finite-difference hybrid schemes (6th-order centered scheme/5th-order WENO), extended to curvilinear geometries. Transverse curvature effects on the properties of a spatially developing supersonic boundary layer at Mach 3 are first examined. It is shown that the increase of the relative curvature of the boundary layer tends to reduce the fluctuation energy at lower frequencies near the wall, while reinforcing the perturbations at higher frequencies in the upper zone of the boundary layer. In comparison with the planar case, the transverse curvature leads to a significant re-organization of the boundary layer structures and a correspondingly modified behavior of the invariants of the anisotropy of the turbulent stress tensor.
It however leads only to slightly modified distributions of Reynolds stress and a rather similar overall balance of turbulent kinetic energy through the boundary layer. The second part of this study is dedicated to the unsteady motions of the shock/separation zone in a cylinder/compression flare configuration for which the full cylindrical geometry is taken into account. The shock distortions in the azimuthal direction appear to be mainly associated with the organization of the upstream vortex structures and the subsequent azimuthal fluctuations of the separation line. It is shown that the energy of the fluctuating wall pressure is more amplified for higher-order azimuthal modes. The contributions to lateral forces, associated with the first mode, are dominated by low frequencies only upstream of the separation line in the intermittent region. They become more dominant in the middle frequency range downstream of the reattachment zone on the ramp. It is also shown that the low-frequency activity at the wall is progressively due to higher-order azimuthal modes through the interaction zone.
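As a reference point for the schemes named above, the standard sixth-order centered first-derivative stencil is shown below; the optimized scheme used in the thesis tunes these coefficients for spectral resolution, so the values here are the Taylor-exact ones, not the thesis's.

```latex
% Standard sixth-order centered first-derivative stencil on a uniform grid
% of spacing h (the hybrid solver switches to WENO 5 near discontinuities).
\begin{equation}
  \left.\frac{\partial f}{\partial x}\right|_{i}
  \approx \frac{1}{h}\left[
      \tfrac{3}{4}\,(f_{i+1}-f_{i-1})
    - \tfrac{3}{20}\,(f_{i+2}-f_{i-2})
    + \tfrac{1}{60}\,(f_{i+3}-f_{i-3})
  \right] + \mathcal{O}(h^{6})
\end{equation}
```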
|