About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
101

[en] MULTI-RESOLUTION OF OUT-OF-CORE TERRAIN GEOMETRY / [pt] MULTI-RESOLUÇÃO DE GEOMETRIA DE TERRENOS ARMAZENADOS EM MEMÓRIA SECUNDÁRIA

Magalhaes, Luiz Gustavo Bustamante 07 March 2006 (has links)
[pt] Visualização de grandes terrenos é um assunto desafiador em computação gráfica. O número de polígonos necessário para representar fielmente a geometria de um terreno pode ser muito alto para ser processado em tempo real. Para resolver tal problema, utiliza-se um algoritmo de multi-resolução, que envia para o processador gráfico (GPU) somente os polígonos mais importantes, sem que haja uma perda na qualidade visual. A quantidade de dados é um outro grande problema, pois facilmente excede a quantidade de memória RAM do computador. Desta forma, um sistema de gerenciamento de dados que não estão em memória principal também é necessário. Este trabalho propõe uma solução simples e escalável para visualizar a geometria de grandes terrenos baseada em três pontos chaves: uma estrutura de dados para representar o terreno em multi-resolução; um sistema eficiente de visualização; e um sistema de paginação e predição dos dados. A estrutura de dados utilizada, assim como em outros trabalhos similares, é a quadtree. Esta escolha justifica-se pela simplicidade, além da eficiência e baixo consumo de memória de uma implementação em vetor. Cada nó da quadtree representa um ladrilho do terreno. A implementação é dividida em duas linhas de execução (threads), uma para gerenciamento dos ladrilhos e outra para visualização. A linha de execução de gerenciamento de ladrilhos é responsável por carregar/remover ladrilhos para/da memória. Esta linha de execução utiliza um mecanismo de predição de movimento da câmera para carregar ladrilhos que possam ser utilizados em um futuro próximo, e remover ladrilhos que provavelmente não serão necessários. A linha de execução de visualização é responsável por visualizar o terreno, fazendo cálculo do erro projetado, eliminando ladrilhos não visíveis e balanceando a estrutura de quadtree para eliminar buracos ou vértices T na superfície do terreno. A visualização pode ser feita de duas formas distintas: baseada no erro máximo tolerado ou na quantidade máxima de polígonos a ser processado. / [en] The visualization of large terrains is a challenging computer graphics problem. The number of polygons required to faithfully represent a terrain's geometry can be too high for real-time visualization. To solve this problem, a multi-resolution algorithm is used to feed the graphics processor (GPU) with only the most important polygons, without loss of visual quality. The amount of data is another major problem, as it can easily exceed a computer's RAM. Thus, a system to manage out-of-core data is also required. The present work proposes a simple and scalable solution to visualize the geometry of large terrains based on three key points: a data structure to represent the terrain in multi-resolution, an efficient visualization system, and a data paging and prediction system. As in other similar works, the system uses a quadtree data structure, chosen for its simplicity along with the efficiency and low memory use of an array-based implementation. Each node of the quadtree represents a tile of the terrain. The implementation is divided into two threads, one to manage the tiles and the other for visualization. The tile-management thread is responsible for loading/unloading tiles into/from memory. This thread uses a camera-movement prediction mechanism to load tiles that may be needed in the near future and to remove tiles that probably will not be necessary. The visualization thread is responsible for rendering the terrain: computing the projected error, culling tiles that are not visible, and balancing the quadtree structure in order to eliminate cracks or T-vertices on the terrain's surface. The visualization can be driven in two distinct ways: by a maximum tolerated error or by a maximum polygon budget.
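The error-driven refinement the abstract describes — descend the quadtree while a tile's projected screen-space error exceeds a tolerance — can be sketched as follows. The formula and parameter names here are common illustrative choices, not the thesis's actual implementation:

```python
import math

def projected_error(geometric_error, distance, screen_width_px, fov_rad):
    """Project a tile's world-space geometric error onto the screen, in pixels."""
    # Perspective scaling factor: pixels per world unit at the given distance.
    k = screen_width_px / (2.0 * math.tan(fov_rad / 2.0))
    return geometric_error / distance * k

def should_refine(geometric_error, distance, screen_width_px, fov_rad, tolerance_px):
    # Descend to the tile's children while the projected error exceeds the tolerance.
    return projected_error(geometric_error, distance, screen_width_px, fov_rad) > tolerance_px
```

The budget-based alternative mentioned at the end of the abstract would instead refine tiles in decreasing order of projected error until the polygon budget is exhausted.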
102

Etude en vue de la multirésolution de l’apparence

Hadim, Julien 11 May 2009 (has links)
Les fonctions de texture directionnelle "Bidirectional Texture Function" (BTF) ont rencontrés un certain succès ces dernières années, notamment pour le rendu temps-réel d'images de synthèse, grâce à la fois au réalisme qu'elles apportent et au faible coût de calcul nécessaire. Cependant, un inconvénient de cette approche reste la taille gigantesque des données : de nombreuses méthodes ont été proposées afin de les compresser. Dans ce document, nous proposons une nouvelle représentation des BTFs qui améliore la cohérence des données et qui permet ainsi une compression plus efficace. Dans un premier temps, nous étudions les méthodes d'acquisition et de génération des BTFs et plus particulièrement, les méthodes de compression adaptées à une utilisation sur cartes graphiques. Nous réalisons ensuite une étude à l'aide de notre logiciel "BTFInspect" afin de déterminer parmi les différents phénomènes visuels dans les BTFs, ceux qui influencent majoritairement la cohérence des données par texel. Dans un deuxième temps, nous proposons une nouvelle représentation pour les BTFs, appelées Flat Bidirectional Texture Function (Flat-BTFs), qui améliore la cohérence des données d'une BTF et donc la compression des données. Dans l'analyse des résultats obtenus, nous montrons statistiquement et visuellement le gain de cohérence obtenu ainsi que l'absence d'une perte significative de qualité en comparaison avec la représentation d'origine. Enfin, dans un troisième temps, nous démontrons l'utilisation de notre nouvelle représentation dans des applications de rendu en temps-réel sur cartes graphiques. Puis, nous proposons une compression de l'apparence grâce à une méthode de quantification sur GPU et présentée dans le cadre d'une application de diffusion de données 3D entre un serveur contenant des modèles 3D et un client désirant visualiser ces données. 
/ In recent years, the Bidirectional Texture Function (BTF) has emerged as a flexible solution for realistic, real-time rendering of materials with complex appearance at low computational cost. However, one drawback of this approach is the resulting huge amount of data: several methods have been proposed to compress and manage it. In this document, we propose a new BTF representation that improves data coherency and thus allows better compression. In the first part, we study acquisition and digital generation methods for BTFs, and more particularly compression methods suitable for GPU rendering. We then conduct a study with our software BTFInspect to determine which of the visual phenomena present in BTFs most influence per-texel data coherence. In the second part, we propose a new BTF representation, named Flat Bidirectional Texture Function (Flat-BTF), which improves data coherency and thus compression. The analysis of the results shows, statistically and visually, the gain in coherency as well as the absence of a noticeable loss of quality compared to the original representation. In the third and last part, we demonstrate how our new representation may be used in real-time rendering applications on GPUs. Finally, we introduce a compression of the appearance data using a GPU quantization method, presented in the context of streaming 3D data between a server holding 3D models and a client that wants to visualize them.
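The Flat-BTF construction itself is specific to the thesis, but the general idea it builds on — reordering a BTF so that each texel's responses across view/light conditions sit contiguously, which tends to improve coherence for block-based compressors — can be sketched as follows. This is a hypothetical layout for illustration, not the actual Flat-BTF representation:

```python
import numpy as np

def texel_major_layout(btf):
    """Reorder a condition-major BTF array of shape (n_conditions, H, W)
    into a texel-major array of shape (H*W, n_conditions), so that each
    texel's responses across all view/light conditions are contiguous."""
    n_cond, h, w = btf.shape
    return btf.reshape(n_cond, h * w).T
```

A per-texel variance over the condition axis of the result then gives a simple proxy for the kind of coherence measurement discussed in the abstract.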
103

Segmentação multiresolução variográfica ótima. / Optimal variographic multiresolution segmentation.

Costa, Wilian França 12 August 2016 (has links)
O desenvolvimento de soluções que auxiliem na extração de informações de dados oriundos de sistemas de sensoriamento remoto e outras geotecnologias são essenciais em diversas atividades, por exemplo, a identificação de requisitos para o monitoramento ambiental; a definição de regiões de conservação; o planejamento e execução de atividades de verificação quanto ao cumprimento e uso do espaço; o gerenciamento de recursos naturais; a definição de áreas protegidas e ecossistemas; e o planejamento para aplicação e reposição de insumos agrícolas. Neste contexto, o presente trabalho apresenta um método para parametrizar um algoritmo segmentador Multiresolution, de forma que os segmentos obtidos sejam os maiores possíveis dentro de limites pré-estabelecidos de heterogeneidade para os dados avaliados. O método faz uso de variografia, uma ferramenta geoestatística que apresenta uma estimativa de quanto duas amostras variam em uma região espacial, de acordo com a distância relativa entre elas. Mostra-se também como a avaliação de múltiplos variogramas pode ser empregada na delimitação de regiões quando combinada a este algoritmo de segmentação, desde que os dados estejam dispostos em uma grade amostral regularmente espaçada. O método desenvolvido utiliza o efeito pepita estimado para os atributos dispostos em camadas sobrepostas e quantifica a segmentação em dois momentos (ou médias) para identificar o valor do parâmetro espacial ótimo a ser aplicado no segmentador. Apresenta-se, como exemplos de aplicabilidade do método, três casos típicos desta área: (i) definição de zonas de manejo para agricultura de precisão; (ii) seleção de regiões para estimativas de degradação ambiental na vizinhança de ponto de coleta/observação de espécies; e (iii) a identificação de regiões bioclimáticas que compõem uma Unidade de Conservação da biodiversidade. 
/ Information extraction from data derived from remote sensing and other geotechnologies is important for many activities, e.g., the identification of environmental requirements, the definition of conservation areas, the planning and implementation of activities regarding compliance with correct land use, the management of natural resources, the definition of protected ecosystem areas, and the spatial planning of agricultural input replacement. This thesis presents a parameter optimisation method for the Multiresolution segmentation algorithm. The goal of the method is to obtain maximum-sized segments within established heterogeneity limits. The method makes use of variography, a geostatistical tool that gives a measure of how much two samples vary in a region depending on the distance between them. The variogram nugget effect is measured for each attribute layer and then averaged to obtain the optimal value for spatial segmentation with the Multiresolution algorithm. The segments thus obtained are superimposed on a regularly spaced sample grid of georeferenced data to divide the region under study. To show the usefulness of this method, three case studies were performed: (i) the delineation of precision-farming management zones; (ii) the selection of regions for environmental degradation estimates in the neighbourhood of species occurrence points; and (iii) the identification of bioclimatic regions present in biodiversity conservation units.
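The variography the method relies on can be illustrated with the classical (Matheron) estimator of the empirical semivariogram; the nugget effect discussed in the abstract is then read off by extrapolating the fitted curve to lag zero. This is a generic textbook sketch for a 1D transect, not the thesis's implementation:

```python
def empirical_semivariogram(values, positions, lags, tol):
    """Classical (Matheron) semivariance estimate:
    gamma(h) = (1 / 2N(h)) * sum over pairs at lag ~h of (z_i - z_j)^2."""
    gammas = []
    n = len(values)
    for h in lags:
        acc, count = 0.0, 0
        for i in range(n):
            for j in range(i + 1, n):
                d = abs(positions[j] - positions[i])
                if abs(d - h) <= tol:  # pair falls in this lag bin
                    acc += (values[i] - values[j]) ** 2
                    count += 1
        gammas.append(acc / (2 * count) if count else float("nan"))
    return gammas
```

For gridded attribute layers the same pairwise computation runs over 2D distances; the O(n²) loop here is only for clarity.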
104

Analyse harmonique sur graphes dirigés et applications : de l'analyse de Fourier aux ondelettes / Harmonic Analysis on directed graphs and applications : From Fourier analysis to wavelets

Sevi, Harry 22 November 2018 (has links)
La recherche menée dans cette thèse a pour but de développer une analyse harmonique pour des fonctions définies sur les sommets d'un graphe orienté. À l'ère du déluge de données, de nombreuses données sont sous forme de graphes et de données sur ces graphes. Afin d'analyser et d'exploiter ces données de graphes, nous avons besoin de développer des méthodes mathématiques et numériquement efficientes. Ce développement a conduit à l'émergence d'un nouveau cadre théorique appelé le traitement de signal sur graphe, dont le but est d'étendre les concepts fondamentaux du traitement de signal classique aux graphes. Inspirées par l'aspect multi-échelle des graphes et des données sur graphes, de nombreuses constructions multi-échelles ont été proposées. Néanmoins, elles s'appliquent uniquement dans le cadre non orienté. L'extension d'une analyse harmonique au graphe orienté, bien que naturelle, s'avère complexe. Nous proposons donc une analyse harmonique en utilisant l'opérateur de marche aléatoire comme point de départ de notre cadre. Premièrement, nous proposons des bases de type Fourier formées des vecteurs propres de l'opérateur de marche aléatoire. De ces bases de Fourier, nous déterminons une notion fréquentielle en analysant la variation de leurs vecteurs propres. La détermination d'une analyse fréquentielle à partir de la base des vecteurs propres de l'opérateur de marche aléatoire nous amène aux constructions multi-échelles sur graphes orientés. Plus particulièrement, nous proposons une construction en trames d'ondelettes ainsi qu'une construction d'ondelettes décimées sur graphes orientés. Nous illustrons notre analyse harmonique par divers exemples afin d'en montrer l'efficience et la pertinence. / The research conducted in this thesis aims to develop a harmonic analysis for functions defined on the vertices of a directed graph. In the era of the data deluge, much data takes the form of graphs, or of data defined on graphs. In order to analyze and exploit such graph data, we need to develop mathematically sound and numerically efficient methods. This need has led to the emergence of a new theoretical framework called graph signal processing, which aims to extend the fundamental concepts of conventional signal processing to graphs. Inspired by the multi-scale nature of graphs and graph data, many multi-scale constructions have been proposed. However, they apply only to the undirected setting. The extension of harmonic analysis to directed graphs, although natural, is complex. We therefore propose a harmonic analysis that uses the random walk operator as the starting point of our framework. First, we propose Fourier-type bases formed by the eigenvectors of the random walk operator. From these Fourier bases, we derive a notion of frequency by analyzing the variation of the eigenvectors. This frequency analysis, built from the eigenvectors of the random walk operator, leads us to multi-scale constructions on directed graphs. More specifically, we propose a wavelet frame construction as well as a decimated wavelet construction on directed graphs. We illustrate our harmonic analysis with various examples to show its efficiency and relevance.
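The starting point described above — Fourier-type bases formed by the eigenvectors of the random walk operator — can be sketched with NumPy. On a directed graph the operator is generally non-symmetric, so eigenvalues may be complex; this is only a minimal illustration of the construction, not the thesis's frequency analysis:

```python
import numpy as np

def random_walk_fourier_basis(adjacency):
    """Eigen-decomposition of the random walk operator P = D^{-1} A,
    whose eigenvectors serve as a Fourier-type basis on a directed graph."""
    deg = adjacency.sum(axis=1)
    P = adjacency / deg[:, None]          # row-stochastic random walk operator
    eigvals, eigvecs = np.linalg.eig(P)   # complex in general (P non-symmetric)
    order = np.argsort(-eigvals.real)     # "low-frequency" modes first
    return eigvals[order], eigvecs[:, order]
```

The constant vector is always an eigenvector with eigenvalue 1 (since P is row-stochastic), playing the role of the DC component.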
105

Un système intégré d'acquisition 3D multispectral : acquisition, codage et compression des données / A 3D multispectral integrated acquisition system : acquisition, data coding and compression

Delcourt, Jonathan 29 October 2010 (has links)
Nous avons développé un système intégré permettant l'acquisition simultanée de la forme 3D ainsi que de la réflectance des surfaces des objets scannés. Nous appelons ce système un scanner 3D multispectral du fait qu'il combine, dans un couple stéréoscopique, une caméra multispectrale et un système projecteur de lumière structurée. Nous voyons plusieurs possibilités d'application pour un tel système mais nous mettons en avant des applications dans le domaine de l'archivage et la diffusion numériques des objets du patrimoine. Dans le manuscrit, nous présentons d'abord ce système ainsi que tous les calibrages et traitements nécessaires à sa mise en oeuvre. Ensuite, une fois que le système est fonctionnel, les données qui en sont générées sont riches d'informations, hétérogènes (maillage + réflectances, etc.) et surtout occupent beaucoup de place. Ce fait rend problématiques le stockage et la transmission, notamment pour des applications en ligne de type musée virtuel. Pour cette raison, nous étudions les différentes possibilités de représentation et de codage des données acquises par ce système pour en adopter la plus pertinente. Puis nous examinons les stratégies les plus appropriées à la compression de telles données, sans toutefois perdre la généralité sur d'autres données (type satellitaire). Nous réalisons un benchmark des stratégies de compression en proposant un cadre d'évaluation et des améliorations sur les stratégies classiques existantes. Cette première étude nous permettra de proposer une approche adaptative qui se révélera plus efficace pour la compression et notamment dans le cadre de la stratégie que nous appelons Full-3D. / We have developed an integrated system permitting the simultaneous acquisition of the 3D shape and the spectral reflectance of scanned object surfaces. We call this system a 3D multispectral scanner because it combines, within a stereo pair, a multispectral video camera and a structured-light projector. We see several possible applications for such an acquisition system, but we highlight applications in the field of digital archiving and dissemination of heritage objects. In the manuscript, we first introduce the acquisition system and the calibrations and processing steps required for its use. Once the acquisition system is functional, the data it generates are rich in information, heterogeneous (mesh + reflectance, etc.) and, above all, occupy a large amount of storage. This makes storage and transmission problematic, especially for online applications such as virtual museums. For this reason, we study the different possibilities for representing and coding the data acquired by this system in order to adopt the most appropriate one. We then examine the strategies best suited to compressing such data, without losing generality with respect to other data (e.g. satellite imagery). We benchmark compression strategies by providing an evaluation framework and improvements to existing conventional strategies. This first study allows us to propose an adaptive approach that proves more effective for compression, particularly within the strategy that we call Full-3D.
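Benchmarking compression strategies, as the abstract describes, requires a fidelity metric; PSNR is the standard choice for image-type data such as reflectance maps. A minimal version is sketched below — the thesis's actual evaluation framework is necessarily richer:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    Higher is better; identical inputs give infinity."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

For multispectral data the same metric can be applied per band and averaged, or replaced by a spectral-angle measure when perceptual weighting across bands matters.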
107

A Novel Framework to Determine Physiological Signals From Blood Flow Dynamics

Chetlur Adithya, Prashanth 03 April 2018 (has links)
The Centers for Disease Control and Prevention (CDC) estimates that more than 11.2 million people require critical and emergency care in the United States per year. Optimizing and improving patient morbidity and mortality outcomes are the primary objectives of monitoring in critical and emergency care. Patients in need of critical or emergency care are in general at risk of single or multiple organ failures occurring due to a traumatic injury, a surgical event, or an underlying pathology that results in severe hemodynamic instability. Hence, continuous monitoring of fundamental cardiovascular hemodynamic parameters, such as heart rate, respiratory rate, blood pressure, blood oxygenation and core temperature, is essential to accomplish diagnostics in critical and emergency care. Today's standard of care measures these critical parameters using multiple monitoring technologies. Though it is possible to measure all the fundamental cardiovascular hemodynamic parameters using the blood flow dynamics, their use is currently limited to measuring continuous blood pressure. No other comparable studies in the literature were successful in quantifying other critical parameters from the blood flow dynamics, for a few reasons. First, the blood flow dynamics exhibit a complicated and sensitive dynamic pressure field. Existing blood-flow-based data acquisition systems are unable to detect these sensitive variations in the pressure field. Further, the pressure field is also influenced by the presence of background acoustic interference, resulting in a noisy pressure profile. Thus, in order to extract critical parameters from this dynamic pressure field with fidelity, there is a need for an integrated framework composed of a highly sensitive data acquisition system and advanced signal processing. In addition, existing state-of-the-art technologies require expensive instrumentation and complex infrastructure.
The information sensed using these multiple monitoring technologies is integrated and visualized using a clinical information system. This process of integration and visualization creates the need for functional interoperability within the multiple monitoring technologies. Limited functional interoperability not only results in diagnostic errors but also their complexity makes it impossible to use such technologies to accomplish monitoring in low resource settings. These multiple monitoring technologies are neither portable nor scalable, in addition to inducing extreme patient discomfort. For these reasons, existing monitoring technologies do not efficiently meet the monitoring and diagnostic requirements of critical and emergency care. In order to address the challenges presented by existing blood flow based data acquisition systems and other monitoring systems, a point of care monitoring device was developed to provide multiple critical parameters by means of uniquely measuring a physiological process. To demonstrate the usability of this novel catheter multiscope, a feasibility study was performed using an animal model. The corresponding results are presented in this dissertation. The developed measurement system first acquires the dynamics of blood flow through a minimally invasive catheter. Then, a signal processing framework is developed to characterize the blood flow dynamics and to provide critical parameters such as heart rate, respiratory rate, and blood pressure. The framework used to extract the physiological data corresponding to the acoustic field of the blood flow consisted of a noise cancellation technique and a wavelet based source separation. The preliminary results of the acoustic field of the blood flow revealed the presence of acoustic heart and respiratory pulses. A unique and novel framework was also developed to extract continuous blood pressure from the pressure field of the blood flow. 
Finally, the computed heart and respiratory rates and systolic and diastolic pressures were benchmarked against values measured using conventional devices to validate the measurements of the catheter multiscope. In summary, the results of the feasibility study showed that the novel catheter multiscope can provide critical parameters such as heart rate, respiratory rate and blood pressure with clinical accuracy. In addition, this dissertation highlights the diagnostic potential of the developed catheter multiscope by presenting preliminary results of proof-of-concept studies for applications such as sinus-rhythm pattern recognition and fetal monitoring through phonocardiography.
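The dissertation's signal-processing framework (noise cancellation plus wavelet-based source separation) is considerably more elaborate, but the basic idea of recovering a rate such as heart rate from a periodic component of a flow signal can be illustrated with a simple spectral-peak estimate on synthetic data:

```python
import numpy as np

def dominant_rate_bpm(signal, fs):
    """Estimate the dominant periodic rate (e.g. heart rate) in beats per
    minute from the largest FFT magnitude peak of a zero-mean signal."""
    sig = np.asarray(signal, dtype=float) - np.mean(signal)  # remove DC offset
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    peak_hz = freqs[np.argmax(spectrum)]
    return peak_hz * 60.0
```

Real pressure recordings would first need the noise cancellation and source separation steps described above before such a peak is meaningful.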
108

關於週期性波包近似值的理論與應用 / On the Theory and Applications of Periodic Wavelet Approximation

鄧起文, Deng, Qi Wen Unknown Date (has links)
在本篇論文裏,我們將使用所謂的週期化(periodization)的裝置作用於Daubechies' compactly supported wavelets上而得到一族構成L²([0,1])和H^s-periodic (the space of periodic functions locally in H^s)基底的正交的週期性波包(orthonormal periodic wavelets)。然後我們給出了對於一函數的波包近似值的誤差估計(參閱定理6)以及對於週期性邊界值的常微分方程問題的解的波包近似值的誤差估計(參閱定理7)。對於Burgers equation的數值解也當作一個應用來討論。 / In this thesis, we construct a family of orthonormal periodic wavelets that form a basis of L²([0,1]) and of H^s-periodic (the space of periodic functions locally in H^s) by applying a device called periodization ([10,7]) to Daubechies' compactly supported wavelets. We then give error estimates for the wavelet approximation to a given function (see Theorem 6) and to the solution of a periodic boundary value problem for an ordinary differential equation (see Theorem 7). The numerical solution of Burgers' equation is also discussed as an application.
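The periodization device can be demonstrated numerically: wrap a compactly supported function onto the unit interval by summing its integer translates, f_per(x) = Σ_k f(x + k). In the sketch below a hat function stands in for a Daubechies scaling function (which has no closed form); its partition-of-unity property makes the result easy to check:

```python
import numpy as np

def periodize(f, x, support, period=1.0):
    """Periodize a compactly supported function f onto points x:
    f_per(x) = sum over k of f(x + k*period), summing only the
    finitely many k for which f(x + k*period) can be nonzero."""
    a, b = support
    kmin = int(np.floor(a - x.max()))
    kmax = int(np.ceil(b - x.min()))
    total = np.zeros_like(x, dtype=float)
    for k in range(kmin, kmax + 1):
        total += f(x + k * period)
    return total
```

Applied to Daubechies wavelets at increasing scales, this construction yields the orthonormal periodic wavelet bases of L²([0,1]) described in the abstract.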
109

Non-parametric synthesis of volumetric textures from a 2D sample

Urs, Radu Dragos 29 March 2013 (has links) (PDF)
This thesis deals with the synthesis of anisotropic volumetric textures from a single 2D observation. We present variants of non-parametric, multi-scale algorithms. Their main specificity lies in the fact that the 3D synthesis process relies on the sampling of a single 2D input sample, ensuring consistency between the different views of the 3D texture. Two types of approaches are investigated, both multi-scale and based on a Markovian hypothesis. The first category brings together a set of algorithms based on fixed-neighbourhood search, adapted from existing algorithms for texture synthesis from multiple 2D sources. The principle is that, starting from a random initialisation, the 3D texture is modified, voxel by voxel, in a deterministic manner, ensuring that the local grey-level configurations on the orthogonal slices containing the voxel are similar to configurations of the input image. The second category introduces an original probabilistic approach that aims to reproduce, in the textured volume, the interactions between pixels learned from the input image. The learning is done by non-parametric Parzen windowing. Optimization is handled voxel by voxel by a deterministic ICM-type algorithm. Several variants are proposed regarding the strategies used for the simultaneous handling of the orthogonal slices containing the voxel. These synthesis methods are first applied to a set of structured textures of varied regularity and anisotropy. A comparative study and a sensitivity analysis are carried out, highlighting the strengths and weaknesses of the different algorithms. Finally, they are applied to the simulation of volumetric textures of carbon composite materials, using nanometric-scale snapshots obtained by transmission electron microscopy. The proposed experimental benchmark allows a quantitative and objective evaluation of the performance of the different methods.
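The fixed-neighbourhood search at the heart of the first family of algorithms can be sketched as an exhaustive sum-of-squared-differences match of a voxel's slice neighbourhood against the 2D input sample. Real implementations accelerate this search and reconcile three orthogonal slices per voxel; this toy version handles a single slice:

```python
import numpy as np

def best_match(sample, patch):
    """Exhaustively find the pixel of `sample` whose surrounding window
    best matches `patch` (fixed neighbourhood, SSD cost). Returns the
    centre coordinates of the best window and its cost."""
    ph, pw = patch.shape
    H, W = sample.shape
    best, best_cost = None, np.inf
    for i in range(H - ph + 1):
        for j in range(W - pw + 1):
            window = sample[i:i + ph, j:j + pw]
            cost = np.sum((window - patch) ** 2)
            if cost < best_cost:
                best_cost, best = cost, (i + ph // 2, j + pw // 2)
    return best, best_cost
```

In the volumetric setting, the synthesized voxel is assigned the grey level of the best-matching sample pixel, and the pass is repeated until the volume stabilizes.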
110

Fluxes and Mixing Processes in the Marine Atmospheric Boundary Layer

Nilsson, Erik Olof January 2013 (has links)
Atmospheric models are strongly dependent on the turbulent exchange of momentum, sensible heat and moisture (latent heat) at the surface. Oceans cover about 70% of the Earth's surface, and understanding the processes that control air-sea exchange is of great importance in order to predict weather and climate. In the atmosphere, for instance, hurricane development and cyclone intensity and track depend on these processes. Ocean waves constitute an obvious example of air-sea interaction and can cause the air flow over sea to depend on surface conditions in uniquely different ways compared to boundary layers over land. When waves are generated by wind they are called wind sea or growing sea, and when they leave their generation area or propagate faster than the generating wind they are called swell. The air-sea exchange is mediated by turbulent eddies occurring on many different scales. Field measurements and high-resolution turbulence-resolving numerical simulations have here been used to study these processes. The standard method to measure turbulent fluxes is the eddy covariance method. A spatial separation is often used between instruments when measuring scalar flux; this causes an error, which was investigated for the first time over sea. The error is typically smaller over ocean than over land, possibly indicating changes in turbulence structure over sea. Established and extended analysis methods to determine the dominant scales of momentum transfer were used to interpret how reduced drag, and sometimes net upward momentum flux, can persist in the boundary layer indirectly affected by swell. A changed turbulence structure with increased turbulence length scales and more effective mixing was found for swell. A study using a coupled wave-atmosphere regional climate model gave a first indication of the impact that wave mixing has on atmospheric and wave parameters. Near-surface wind speed and wind gradients were affected, especially for shallow boundary layers, which typically increased in height due to the introduced wave mixing. A large impact may be expected in regions of the world with predominant swell. The impact of swell waves on air-sea exchange and mixing should be taken into account in order to develop more reliable coupled Earth system models.
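The eddy covariance method mentioned above computes a turbulent flux as the covariance of vertical velocity and the transported quantity, after a Reynolds decomposition into mean and fluctuating parts. A minimal sketch follows; operational processing adds despiking, detrending, coordinate rotation and density corrections:

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Kinematic turbulent flux as the covariance of vertical wind (w) and a
    scalar or velocity component (c): flux = mean(w' c'), where primes denote
    fluctuations about the averaging-period mean (Reynolds decomposition)."""
    w_prime = w - np.mean(w)
    c_prime = c - np.mean(c)
    return np.mean(w_prime * c_prime)
```

With c as temperature the result is proportional to the sensible heat flux; with c as humidity, the latent heat flux; with c as horizontal wind, the momentum flux whose sign reversal under swell is discussed in the abstract.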
