301

Similaridade em big data / Similarity in big data

Santos, Lúcio Fernandes Dutra 19 July 2017 (has links)
The data being collected and generated nowadays increase not only in volume but also in complexity, requiring new query operators. Health care centers collecting image exams, and remote sensing from satellites and from earth-based stations, are examples of application domains where more powerful and flexible operators are required. Storing, retrieving and analyzing data that are huge in volume, structure, complexity and distribution are now referred to as big data. Representing and querying big data using only the traditional scalar data types is no longer enough. Similarity queries are the most pursued resources to retrieve complex data, but until recently they were not available in Database Management Systems. Now that they are starting to become available, their first uses in real systems make it clear that the basic similarity query operators are not enough to meet the requirements of the target applications. The main reason is that similarity is a concept usually formulated considering only small amounts of data elements.
Nowadays, researchers handling big data mainly target retrieval efficiency through parallelism, and only a few studies address the efficacy of the query answers. This Ph.D. work develops variations of the basic similarity operators that are better suited to big data, presenting a more holistic view of the database and increasing the effectiveness of the provided answers, without considerably impacting the efficiency of the search algorithms and while enabling scalable execution over large data volumes. To achieve this goal, four main contributions are presented. The first is a result diversification model that can be applied with any comparison criterion and similarity search operator. The second defines sampling and grouping techniques based on the proposed diversification model, speeding up the analysis of result sets. The third develops evaluation methods for measuring the quality of diversified result sets. Finally, the last contribution presents an approach that integrates visual data mining with similarity-with-diversity searches in content-based retrieval systems, providing a better understanding of how the diversity property is applied in the query process.
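The abstract does not spell out the diversification model itself. As a rough illustration of the kind of operator it describes, the sketch below greedily balances closeness to the query against distance to already-selected results; this is an MMR-style heuristic chosen purely for illustration, not the method proposed in the thesis, and the distance function, the weighting parameter `lam` and the data are assumptions.

```python
import numpy as np

def diversified_knn(query, candidates, dist, k, lam=0.7):
    """Greedily pick k results, trading closeness to the query (relevance)
    against distance to the results already selected (diversity).
    `dist` can be any comparison criterion; lam weights relevance vs. diversity."""
    selected, remaining = [], list(range(len(candidates)))
    while remaining and len(selected) < k:
        best, best_score = None, -np.inf
        for i in remaining:
            relevance = -dist(query, candidates[i])
            diversity = (min(dist(candidates[i], candidates[j]) for j in selected)
                         if selected else 0.0)
            score = lam * relevance + (1.0 - lam) * diversity
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return [candidates[i] for i in selected]

# Example: Euclidean distance over random feature vectors
rng = np.random.default_rng(0)
data, q = rng.normal(size=(100, 8)), rng.normal(size=8)
top5 = diversified_knn(q, list(data), lambda a, b: float(np.linalg.norm(a - b)), k=5)
```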
302

A Document Similarity Measure and Its Applications

Gan, Zih-Dian 07 September 2011 (has links)
In this paper, we propose a novel similarity measure for document data processing and apply it to text classification and clustering. For two documents, the proposed measure distinguishes three cases for each feature: (a) the feature appears in both documents, (b) the feature appears in only one document, and (c) the feature appears in neither document. In the first case, the contribution has a lower bound and decreases with the difference between the two feature values. In the second case, a fixed value is assigned, regardless of the magnitude of the feature value. In the last case, the feature is ignored. We apply the measure to the similarity-based single-label classifier k-NN and the multi-label classifier ML-KNN, and extend it to measure the similarity between a document and a set of documents for clustering with a k-means-like algorithm, comparing its effectiveness with that of other measures. Experimental results show that the proposed method works more effectively than the others.
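A minimal sketch of how such a three-case, feature-wise measure could be computed is shown below; the lower bound, the exponential decay, the fixed single-occurrence value and the treatment of feature vectors as non-negative weights are illustrative assumptions, not the thesis' exact formula.

```python
import numpy as np

def doc_similarity(a, b, lower_bound=0.5, single_value=0.0):
    """Feature-wise similarity following the three cases in the abstract.
    The lower bound, exponential decay and fixed single-occurrence value
    are illustrative choices, not the exact formula from the thesis."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    contrib, count = 0.0, 0
    for x, y in zip(a, b):
        if x > 0 and y > 0:
            # (a) present in both: lower-bounded, decreasing with |x - y|
            contrib += lower_bound + (1 - lower_bound) * np.exp(-abs(x - y))
            count += 1
        elif x > 0 or y > 0:
            # (b) present in exactly one document: fixed value
            contrib += single_value
            count += 1
        # (c) present in neither document: ignored
    return contrib / count if count else 0.0

# Example: tf-idf-like weight vectors over a shared vocabulary
print(doc_similarity([0.9, 0.0, 0.3], [0.7, 0.0, 0.0]))
```

Such a measure could be plugged into k-NN or a k-means-like clustering loop as the similarity kernel in place of cosine similarity.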
303

The influence of contagion information and behavior on older adolescents' perceptions of peers with chronic illness

Grizzle, Jonhenry Cordell 01 November 2005 (has links)
To explore attributions about chronically ill peers, 545 older adolescents ages 17-26 read a short vignette describing a brief social encounter with a hypothetical peer suffering from a medical condition, and then responded to a series of questionnaires to assess their perceptions of that peer. Nine measures intended to assess perceptions of ill peers were developed and empirically validated. Test-retest reliability and internal consistency were moderate to good for all measures. The component structures of the Peer Acceptance Questionnaire (PAQ), the Peer Acceptance Questionnaire - 3rd Person (PAQ-F), and the Perceived Similarity Questionnaire (PSQ) were also evaluated. Principal components analysis yielded a 2-factor structure of Openness and Egalitarianism for both the PAQ and PAQ-F. A 6-factor structure of (a) Familial/Spiritual, (b) General Health, (c) Social, (d) Behavioral, (e) Physical, and (f) Educational was suggested for the PSQ. Results indicated an interaction between illness type and behavior on acceptance ratings, such that behavior potentiated the effect of illness type on acceptance. In addition, vignette characters with contagious illnesses were rated less favorably than those with noncontagious illnesses, and vignette characters displaying typical behavior were rated more favorably than either withdrawn or aggressive vignette characters. Illness-specific knowledge, ratings of perceived similarity, and ratings of assigned blame predicted acceptance ratings, whereas illness-specific knowledge and acceptance ratings predicted ratings of assigned blame. Finally, significant differences were observed between first- and third-person ratings of both acceptance and assigned blame.
304

Komparativer Ähnlichkeitsalgorithmus / Comparative similarity algorithm

Schwartz, Eva-Maria 15 January 2010 (has links) (PDF)
The need to use software that is not individually developed arises in business and working environments as a result of developments in this area. Companies face constantly changing requirements in their business environment. With ever-increasing competition, it becomes necessary to concentrate on one's own core competencies and to enter into temporary cooperations and relationships with other organizations. To meet these relationships and requirements, software and software components must be obtained flexibly and temporarily. To give the users of such software the best possible support in selecting the components that fit their needs, they should be offered suggestions for objects based on the decisions of existing customers. Depending on the system, these objects can be, for example, configuration properties, content modules, or layout presentations. The underlying assumption is that similar users also need similar objects; for this reason, users are to be compared with one another. The difficulty here lies in describing a user: a user can be characterized by a large number of features, which carry different importance for the decision depending on the object. The individual features must therefore be considered independently of one another, and when an object is evaluated, corresponding weights for each feature are to be incorporated. The comparison only becomes possible once the context, and thus the user's task, is known; only with this information can targeted recommendations be generated. A procedure is presented that incorporates the prioritized weighting of individual features. Building on this procedure, an algorithm is presented that compares users by their features and, from this, produces recommendations for objects. The algorithm is intended to be integrated into a recommender system.
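As a rough sketch of the kind of weighted, feature-wise user comparison the abstract describes, the snippet below compares users over per-feature weights and recommends objects chosen by the most similar users; the dictionary representation, the equality-based feature matching and the recommendation loop are illustrative assumptions, not the algorithm developed in the thesis.

```python
def user_similarity(user_a, user_b, weights):
    """Weighted per-feature similarity between two users (dicts of features).
    `weights` assigns each feature an importance that may differ per
    object/context; missing features are skipped (illustrative only)."""
    total_w, score = 0.0, 0.0
    for feature, w in weights.items():
        va, vb = user_a.get(feature), user_b.get(feature)
        if va is None or vb is None:
            continue
        total_w += w
        score += w * (1.0 if va == vb else 0.0)
    return score / total_w if total_w else 0.0

def recommend(target, other_users, past_choices, weights, top_n=3):
    """Suggest objects chosen by the users most similar to `target`.
    Assumes each user dict carries an 'id' key mapping into `past_choices`."""
    ranked = sorted(other_users,
                    key=lambda u: user_similarity(target, u, weights),
                    reverse=True)
    suggestions = []
    for u in ranked:
        for obj in past_choices.get(u["id"], []):
            if obj not in suggestions:
                suggestions.append(obj)
            if len(suggestions) >= top_n:
                return suggestions
    return suggestions
```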
305

Effiziente MapReduce-Parallelisierung von Entity Resolution-Workflows / Efficient MapReduce parallelization of entity resolution workflows

Kolb, Lars 11 December 2014 (has links) (PDF)
In recent years, the newly emerged paradigm of Infrastructure as a Service has massively changed the IT world. The provision of computing infrastructure by external providers makes it possible to acquire, on demand and at short notice, large amounts of computing power, storage and bandwidth without upfront investment. At the same time, both the amount of freely available data and the amount of data to be managed within companies are growing dramatically. The need to manage and analyze these data volumes efficiently required further development of existing IT technologies and led to new research fields and a multitude of innovative systems. A typical characteristic of these systems is distributed storage and processing in large clusters of commodity hardware. The MapReduce programming model in particular has gained importance over the past ten years: it enables distributed processing of large data volumes and abstracts from the details of distributed computing and the handling of hardware failures. This dissertation focuses on using the MapReduce concept for the automatic parallelization of computationally intensive entity resolution tasks. Entity resolution is an important subfield of information integration whose goal is to discover records, within one or several data sources, that describe the same real-world object. The dissertation presents, step by step, techniques that solve the various subproblems of MapReduce-based execution of entity resolution workflows.
To detect duplicates, entity resolution approaches usually compare pairs of records using several similarity measures. Evaluating the Cartesian product of n records leads to a quadratic complexity of O(n²) and is therefore practical only for small to medium-sized data sources; for sources with more than 100,000 records, runtimes of several hours arise even with distributed execution. So-called blocking techniques are therefore used to reduce the search space, under the assumption that records falling below a certain minimum similarity need not be compared with each other. The thesis presents a MapReduce-based implementation of the evaluation of the Cartesian product as well as of several well-known blocking techniques. After the records have been compared, the candidate pairs are finally classified as match or non-match. With a growing number of attribute values and similarity measures, manually defining a high-quality strategy for combining the resulting similarity values becomes hardly manageable, so the thesis also investigates the integration of machine learning methods into MapReduce-based entity resolution workflows. Implementing blocking with MapReduce requires partitioning the set of pairs to be compared and assigning the partitions to the available processes. The assignment is based on a semantic key derived from the records' attribute values according to the concrete blocking strategy; when deduplicating product records, for example, one might compare only products of the same manufacturer.
Having all records with the same key processed by a single process leads, under data skew, to severe load balancing problems, which are aggravated by the inherent quadratic complexity. This drastically reduces the runtime efficiency and scalability of the corresponding MapReduce programs, since a large part of a cluster's resources sits idle while a few processes do most of the work. Providing techniques for evenly utilizing the available resources is therefore another focus of the thesis. Blocking strategies must always trade efficiency against data quality: a large reduction of the search space promises a significant speedup but causes similar records, e.g. due to erroneous attribute values, not to be compared at all. It is therefore helpful to generate, for each record, several semantic keys derived from different attributes, which in turn causes similar records to be compared redundantly under different keys; the thesis presents algorithms to avoid such redundant similarity computations. As a result of this work, the entity resolution framework Dedoop is presented, which abstracts from the developed MapReduce algorithms and enables a high-level specification of complex entity resolution workflows. Dedoop combines all techniques and optimizations presented in this thesis in a user-friendly system. The prototype automatically translates user-defined workflows into a set of MapReduce jobs and manages their parallel execution on MapReduce clusters. Through the full integration of the cloud services Amazon EC2 and Amazon S3 into Dedoop, and by making the system available, end users without MapReduce knowledge can execute complex entity resolution workflows on private or dynamically provisioned external MapReduce clusters.
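A minimal, single-machine imitation of the blocking step described above (map: derive a semantic key such as the manufacturer; reduce: compare all pairs within a block and classify them by a similarity threshold) might look as follows. The record layout, the Jaccard-on-title similarity and the threshold are assumptions made for illustration; this is not Dedoop's actual implementation.

```python
from collections import defaultdict
from itertools import combinations

def map_phase(records, blocking_key):
    """Map step: group records by a semantic blocking key (e.g. manufacturer)."""
    buckets = defaultdict(list)
    for rec in records:
        buckets[blocking_key(rec)].append(rec)
    return buckets

def reduce_phase(buckets, similarity, threshold):
    """Reduce step: compare all pairs within each block (quadratic per block)
    and classify them as match / non-match by a similarity threshold."""
    matches = []
    for block in buckets.values():
        for a, b in combinations(block, 2):
            if similarity(a, b) >= threshold:
                matches.append((a["id"], b["id"]))
    return matches

if __name__ == "__main__":
    records = [
        {"id": 1, "maker": "acme", "title": "usb cable 2m black"},
        {"id": 2, "maker": "acme", "title": "usb cable 2m"},
        {"id": 3, "maker": "other", "title": "hdmi cable"},
    ]
    def title_jaccard(a, b):
        ta, tb = set(a["title"].split()), set(b["title"].split())
        return len(ta & tb) / len(ta | tb)
    print(reduce_phase(map_phase(records, lambda r: r["maker"]), title_jaccard, 0.5))
    # -> [(1, 2)]
```

The load-balancing and redundancy problems discussed in the abstract arise exactly here: skewed blocks make single reducers the bottleneck, and multiple blocking keys per record would produce the same pair in several blocks.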
306

Explorácia multimediálnych kolekcií / Exploration of Multimedia Collections

Moško, Juraj January 2016 (has links)
Multimedia retrieval systems are supposed to provide the method and the interface for users to retrieve particular multimedia data from multimedia collections. Although many different retrieval techniques have evolved since search in multimedia collections first appeared as a research task, not all of them can fulfill the specific requirements of multimedia exploration. Multimedia exploration aims at revealing the content of a whole multimedia collection, which is quite often entirely unknown to the users retrieving the data. Because of this, a multimedia exploration system has to solve problems such as how to visualize (usually multidimensional) multimedia data, how to scale data retrieval to arbitrarily large collections, and how to design an interface that users can use intuitively for exploration. Taking these problems into consideration, we proposed and evaluated ideas for building a system well suited for multimedia exploration. We outlined the overall architecture of a multimedia exploration system, created the Multi-Layer Exploration Structure (MLES) as an underlying index structure that should solve the problems of efficient and intuitive data retrieval, and we also proposed definitions of exploration operations as an interactive and...
307

Efficient similarity-driven emission angle selection for coherent plane-wave compounding

Akbar, Haroon Ali 09 October 2018 (has links)
Typical ultrafast plane-wave ultrasound imaging involves: 1) insonifying the medium with several plane-wave pulses emitted at different angles by a linear transducer array, 2) sampling the returning echo signals, after each plane-wave emission, with the same transducer array, 3) beamforming the recorded angle-specific raw data frames, and 4) compounding the beamformed data frames over all angles to form a final image. This thesis addresses the following question: given a set of available plane-wave emission angles, which ones should we select for acquisition (i.e., which angle-specific raw data frames should we sample) to achieve adequate image quality at low cost in both sampling and computation? We propose a simple similarity-driven angle selection scheme and evaluate several variants of it that rely on user-specified similarity thresholds guiding the recursive angle selection process. Our results show that the proposed scheme has a low computational overhead and can yield significant savings in the amount of sampled raw data.
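One plausible reading of such a recursive, threshold-guided selection is sketched below: start from the extreme angles and acquire an intermediate angle only when the frames produced by the two current endpoints are not yet similar enough. The interface (the acquire callback, the similarity function, the threshold) and the use of normalized cross-correlation are assumptions for illustration, not the thesis' exact scheme.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two frames (one possible
    similarity measure; the thesis evaluates its own variants)."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_angles(angles, acquire, similarity, threshold):
    """Recursive sketch: always keep the two endpoint angles; if the frames
    they produce are already similar enough, skip every angle in between,
    otherwise acquire the midpoint angle and recurse on both halves."""
    selected = {angles[0]: acquire(angles[0]),
                angles[-1]: acquire(angles[-1])}

    def recurse(lo, hi):
        if hi - lo < 2:
            return
        if similarity(selected[angles[lo]], selected[angles[hi]]) >= threshold:
            return  # endpoints agree; intermediate angles are not sampled
        mid = (lo + hi) // 2
        selected[angles[mid]] = acquire(angles[mid])
        recurse(lo, mid)
        recurse(mid, hi)

    recurse(0, len(angles) - 1)
    return sorted(selected)
```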
309

Junção de conjuntos por similaridade explorando paralelismo multinível em GPUs / Set similarity joins exploring multilevel parallelism on GPUs

Ribeiro Junior, Sidney 29 August 2017 (has links)
Similarity join is an important operation for information retrieval, near-duplicate detection, data analysis, etc. State-of-the-art algorithms for similarity join use a technique known as prefix filtering to reduce the number of sets that must be fully compared, by discarding dissimilar pairs early. However, prefix filtering is only effective when looking for very similar data. An alternative for speeding up the similarity join when prefix filtering is not efficient is to exploit parallelism. In this work we developed three multilevel, fine-grained parallel algorithms for many-core architectures (such as modern Graphics Processing Units) to solve the similarity join problem. On standard real text databases, the proposed algorithms showed speedups of up to 109x and 17x compared with the sequential (ppjoin) and parallel (fgssjoin) state-of-the-art solutions, respectively.
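For reference, a compact sequential sketch of the prefix-filtering idea for a Jaccard self-join is given below. This is only the standard candidate-pruning technique the abstract refers to, not ppjoin, fgssjoin, or the GPU algorithms developed in the dissertation; the token ordering and threshold are illustrative.

```python
import math
from collections import defaultdict

def jaccard(r, s):
    r, s = set(r), set(s)
    return len(r & s) / len(r | s)

def prefix_filter_join(sets, t):
    """Self-join: return index pairs (i, j) with Jaccard >= t, generating
    candidates only when the ordered prefixes of two sets share a token."""
    # Global token order (rarest first) keeps prefixes selective
    freq = defaultdict(int)
    for s in sets:
        for tok in set(s):
            freq[tok] += 1
    ordered = [sorted(set(s), key=lambda tok: (freq[tok], tok)) for s in sets]

    index = defaultdict(set)      # token -> ids whose prefix contains it
    candidates = set()
    for i, s in enumerate(ordered):
        # For Jaccard threshold t, prefix length = |s| - ceil(t * |s|) + 1
        prefix_len = len(s) - math.ceil(t * len(s)) + 1
        for tok in s[:prefix_len]:
            candidates.update((j, i) for j in index[tok])
            index[tok].add(i)
    # Verification step: compute the real similarity only for candidates
    return [(i, j) for (i, j) in candidates if jaccard(sets[i], sets[j]) >= t]

# Example: only the first two sets survive verification
print(prefix_filter_join([["a", "b", "c"], ["a", "b", "d"], ["x", "y"]], 0.5))
# -> [(0, 1)]
```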
310

Contributions à l'étude phénoménologique des impacts de vagues lors du ballottement de liquide dans une cuve modèle : physique associée à la variabilité de l’écoulement et effets d’échelle induits / Contributions to the phenomenological study of wave impacts created by the sloshing in a model tank : physics associated with the variability of the flow and induced scale effects.

Frihat, Mohamed 28 June 2018 (has links)
This work focuses on the sloshing problem encountered in the transport and storage of LNG by floating structures. The prediction of real sloshing loads is often based on small-scale experimental studies that respect the Froude similarity and the density ratio between the gas and the liquid. However, other similarities are biased, such as the Weber similarity and the Reynolds similarity. In addition, the recorded pressures show great variability when the same test is repeated.
In a first part, different physical sources responsible for this variability are discussed: free-surface instabilities, droplets and liquid jets falling back onto the free surface, and the production and entrainment of bubbles in the liquid. These phenomena are at the origin of the flow disturbances and the variability of the wave shape, and hence of the pressures it exerts on the wall. Other dissipation mechanisms are identified, related to wall friction and wave breaking. We show that this energy dissipation gives the flow a short-term memory, making it possible to reproduce, for each impact, the statistical distribution of pressure peaks with a short duration of excitation. These sources of variability and dissipation mechanisms depend on the surface tension and the viscosity of the liquid, so a second part studies these physical parameters. We show that the local wave shape depends on the surface tension, whereas its effect on the global wave shape is negligible. Moreover, when the surface tension is reduced, the pressure peaks are lower, owing to various phenomena related to the development of liquid ligaments, their fragmentation into drops, the generation of foam at the wave crest, and the entrainment of bubbles into the liquid. The viscosity of the liquid affects both the local and the global wave shapes, and again the pressures are changed. Based on this parametric study, a third part examines the effects of the Weber number and the Reynolds number by comparing results at two different scales, 1:40 and 1:20, with the same fluids. Moreover, by considering different cases of comparison under Reynolds similarity and/or Weber similarity, we show that both similarities are essential to obtain a wave shape before impact that is independent of scale. However, the statistical distribution of pressure peaks also depends on other dimensionless numbers, namely the Mach number of the liquid and the Mach number of the gas.
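For reference, the standard definitions of the dimensionless numbers invoked above are given below, with U a characteristic velocity, L a characteristic length, g the gravitational acceleration, rho the fluid density, sigma the surface tension, mu the dynamic viscosity and c the relevant speed of sound; this notation is assumed here, not taken from the thesis.

```latex
\mathrm{Fr} = \frac{U}{\sqrt{g\,L}}, \qquad
\mathrm{We} = \frac{\rho\,U^{2}\,L}{\sigma}, \qquad
\mathrm{Re} = \frac{\rho\,U\,L}{\mu}, \qquad
\mathrm{Ma} = \frac{U}{c}
```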
