1 |
The issues involving the implementation of a "virtual visit" on a construction site. Cody, Dale W., 17 January 2009
During the construction phase of a project it is vital to have good communication between all of the parties involved in the construction process. A construction project is dynamic in nature and is constantly being changed due to design and structural improvements, constructability issues, or simply alterations to the original design intent. All of the parties must act together to maintain a high quality of work and to initiate changes as they are needed, so that the whole project does not suffer from lost time and inefficiency.
When changes are initiated on a construction site, it is not always possible for the desired personnel to be on-site. In these instances a decision must be made whether to travel to the site or to allow the decision to be made sight unseen. This thesis offers an alternative solution to this dilemma: a “Virtual Visit” system. The idea of a “Virtual Visit” system is to allow personnel to view, evaluate, or clarify in their own minds what is occurring on the construction site. This is accomplished by augmenting typical telephone conversations with video, combined with the data storage and retrieval capabilities of a computer.
This research conceptualizes a “Virtual Visit” system and models aspects of the system in order to test its potential on a construction site. The “Virtual Visit” system is designed to store and retrieve data collected on the construction site and to preserve it permanently in an archival network. The data collected on the site can take the form of video, audio, and text. The combination of these three formats allows for the documentation of construction activities in a clearer, more readable format than traditional methods. / Master of Science
2 |
Mobility-Oriented Data Retrieval for Computation Offloading in Vehicular Edge Computing. Soto Garcia, Victor, 21 February 2019
Vehicular edge computing (VEC) brings the cloud paradigm to the edge of the network, allowing nodes such as Roadside Units (RSUs) and On-Board Units (OBUs) in vehicles to perform services with location awareness and low delay requirements. Furthermore, it alleviates the bandwidth congestion caused by the large number of data requests in the network. One of the major components of VEC, computation offloading, has gained increasing attention with the emergence of mobile and vehicular applications with high-computing and low-latency demands, such as Intelligent Transportation Systems and IoT-based applications. However, existing challenges need to be addressed for vehicles' resources to be used efficiently. The primary challenge is the mobility of the vehicles, followed by intermittent connectivity or the lack of it. The MPR (Mobility Prediction Retrieval) data retrieval protocol proposed in this work therefore allows VEC to efficiently retrieve the processed output data of an offloaded application by using both vehicles and roadside units as communication nodes. The protocol uses geo-location information about the network infrastructure and the users to accomplish efficient data retrieval in a Vehicular Edge Computing environment. Moreover, the proposed MPR protocol relies on both Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication to achieve reliable retrieval of data, giving it a higher retrieval rate than methods that use V2I or V2V only. Finally, the experiments performed show that the proposed protocol achieves more reliable data retrieval with lower communication delay than related techniques.
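As a rough illustration of the mobility-aware decision such a protocol has to make, the Python sketch below predicts where the requesting vehicle will be when the offloaded result is ready and picks the RSU or neighbouring vehicle expected to be within radio range at that moment. The node model, the linear-motion prediction and the range threshold are illustrative assumptions, not the MPR protocol's actual specification.

```python
import math
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    x: float          # position in metres
    y: float
    vx: float = 0.0   # velocity in m/s; RSUs are static
    vy: float = 0.0

def position_at(node: Node, t: float) -> tuple[float, float]:
    """Naive linear-motion prediction of a node's position t seconds ahead."""
    return node.x + node.vx * t, node.y + node.vy * t

def pick_delivery_node(requester: Node, candidates: list[Node],
                       ready_in_s: float, radio_range_m: float = 300.0) -> Node | None:
    """Choose the candidate expected to be closest to the requester (and within
    radio range) when the offloaded result becomes available."""
    rx, ry = position_at(requester, ready_in_s)
    best, best_d = None, float("inf")
    for c in candidates:
        cx, cy = position_at(c, ready_in_s)
        d = math.hypot(rx - cx, ry - cy)
        if d <= radio_range_m and d < best_d:
            best, best_d = c, d
    return best  # None means no V2V/V2I contact is predicted

# Example: an RSU and a following vehicle compete to deliver the result in 10 s.
car = Node("requester", 0, 0, vx=20)            # heading east at 72 km/h
rsu = Node("RSU-7", 250, 5)                     # static roadside unit
neighbour = Node("vehicle-42", -50, 0, vx=22)   # following vehicle
print(pick_delivery_node(car, [rsu, neighbour], ready_in_s=10.0).name)
```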
3 |
The Study of Marshalling in Android: Case Implementation of Data Retrieval from Cloud Database Service. Jhan, Bo-Chao, 18 November 2011
With the spread of smart handheld devices and the rapid development of network applications, data exchange between devices has become a primary problem. There are many ways information can be transmitted from one end to the other, but which one is best?
This paper examines several common data marshalling (packaging) methods, compares their features, advantages and disadvantages, and tests their effectiveness in terms of the size of the packaged data and the time needed to package it.
To demonstrate the practicality of these packaging methods, a "file synchronization system" was designed, using Protocol Buffers as the data exchange format, and implemented on the Android system.
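The kind of comparison described above can be prototyped in a few lines. The Python sketch below measures packaged size and packing time for two standard-library formats (JSON and pickle) on a sample record; the thesis's Protocol Buffers case would plug in the same way through classes generated from a .proto schema, which are not reproduced here, so this is only an illustrative benchmark harness.

```python
import json, pickle, time

record = {"user_id": 1234, "name": "alice",
          "files": [{"path": f"/sync/doc{i}.txt", "size": i * 1024} for i in range(100)]}

def benchmark(name, pack, iterations=1000):
    """Time repeated packing of the same record and report the packaged size."""
    start = time.perf_counter()
    for _ in range(iterations):
        blob = pack(record)
    elapsed = time.perf_counter() - start
    print(f"{name:8s} size={len(blob):6d} bytes  time={elapsed * 1000:.1f} ms "
          f"for {iterations} packs")

benchmark("json",   lambda r: json.dumps(r).encode("utf-8"))
benchmark("pickle", lambda r: pickle.dumps(r, protocol=pickle.HIGHEST_PROTOCOL))
```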
4 |
View-Dependent Visualization for Analysis of Large Datasets. Overby, Derek Robert, December 2011
Due to the impressive capabilities of human visual processing, interactive visualization methods have become essential tools for scientists to explore and analyze large, complex datasets. However, traditional approaches do not account for the increased size or latency of data retrieval when interacting with these often remote datasets. In this dissertation, I discuss two novel design paradigms, based on accepted models of the information visualization process and graphics hardware pipeline, that are appropriate for interactive visualization of large remote datasets. In particular, I discuss novel solutions aimed at improving the performance of interactive visualization systems when working with large numeric datasets and large terrain (elevation and imagery) datasets by using data reduction and asynchronous retrieval of view-prioritized data, respectively.
First I present a modified version of the standard information visualization model that accounts for the challenges presented by interacting with large, remote datasets. I also provide the details of a software framework implemented using this model and discuss several different visualization applications developed within this framework.
Next I present a novel technique for leveraging the hardware graphics pipeline to provide asynchronous, view-prioritized data retrieval to support interactive visualization of remote terrain data. I provide the results of statistical analysis of performance metrics to demonstrate the effectiveness of this approach.
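As a rough sketch of view-prioritized, asynchronous retrieval, the Python fragment below queues terrain tiles by their distance from the view centre and fetches them on a background thread, so a render loop would never block on the network. The tile grid, the fetch stub and the priority rule are illustrative assumptions, not the dissertation's implementation.

```python
import heapq
import threading
import time

def fetch_tile(tile):
    """Stand-in for a remote request; a real system would hit a tile server here."""
    time.sleep(0.05)                        # simulate network latency
    return f"terrain data for {tile}"

def view_prioritized_fetch(tiles, view_center, cache):
    """Queue tiles by squared distance to the current view centre and fetch them
    asynchronously, nearest (most visible) tiles first."""
    queue = [((tx - view_center[0]) ** 2 + (ty - view_center[1]) ** 2, (tx, ty))
             for tx, ty in tiles]
    heapq.heapify(queue)

    def worker():
        while queue:
            _, tile = heapq.heappop(queue)
            cache[tile] = fetch_tile(tile)  # the render loop reads the cache when ready

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t

cache = {}
visible = [(x, y) for x in range(4) for y in range(4)]   # a 4x4 window of tiles
thread = view_prioritized_fetch(visible, view_center=(1.5, 1.5), cache=cache)
thread.join()
print(sorted(cache)[:3], "...", len(cache), "tiles loaded")
```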
Finally I present the details of two novel visualization techniques, and the results of evaluating these systems using controlled user studies and expert evaluation. The results of these qualitative and quantitative evaluation mechanisms demonstrate improved visual analysis task performance for large numeric datasets.
5 |
Drowning in Data, Starving for Knowledge: OMEGA Data Environment. Coble, Keith, October 2003
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The quantity of T&E data has grown in step with the increase in computing power and digital storage. T&E data management and exploitation technologies have not kept pace with this exponential growth. New approaches to the challenges posed by this data explosion must provide for continued growth while offering seamless integration with the existing body of work. Object-Oriented Data Management provides a framework that handles the continued rapid growth in computing speed and in the amount of data gathered, as well as integration with legacy systems. The OMEGA Data Environment is one of the first commercially available examples of this emerging class of OODM applications.
6 |
Προσωποποιημένη προβολή περιεχομένου του διαδικτύου σε desktop εφαρμογή με τεχνικές ανάκτησης δεδομένων, προεπεξεργασίας κειμένου, αυτόματης κατηγοριοποίησης και εξαγωγής περίληψης / Personalized presentation of web content in a desktop application using data retrieval, text preprocessing, automatic categorization and summarization techniques. Τσόγκας, Βασίλειος (Tsogkas, Vasileios), 15 June 2009
With the reality of the enormous and ever-growing text sources on the web, mechanisms that help users obtain quick answers to their questions have become necessary. Presenting personalized, summarized and pre-categorized content to users is essential, dictated by the combinatorial explosion of information visible in every "corner" of the web. Immediate and effective solutions are needed to "tame" this chaos of information on the World Wide Web, solutions that are feasible only through analysis of the problems and the application of modern mathematical and computational methods to address them.
As part of this work, an integrated mechanism was built that can automatically analyse web texts in order to extract keywords. This analysis yields the most important sentences of a text, the ones that characterise it, which can be joined together to form a short summary of the text. The mechanism exploits knowledge about the text's category, as well as about the preferences of its users, in order to improve and filter the results presented. The system that was built has the following basic subsystems: a mechanism for data retrieval and useful-text extraction from the web, a mechanism for extracting keywords from the source text, a text categorization mechanism, which can take part in the summarization process and strengthen its results, mechanisms for personalizing content to the user and, of course, a summarization mechanism. These mechanisms are integrated into a news-clipping system, PeRSSonal, which is used for the retrieval / preprocessing / categorization / personalization and summarization of articles from news sites on the web. / The aim of the current thesis is the enhancement of the existing procedures of this mechanism with better and more effective methods and algorithms, as well as the development of a desktop application that exploits to the maximum the presentation capabilities of the system through the classic client-server model.
More specifically, all of the mechanism's operating stages are upgraded. The data retrieval stage is improved with a new, more effective web crawler. The algorithm implemented at this stage takes into consideration, among other things, the modification rate of the RSS feeds it analyses in order to decide whether an article's page should be fetched. In this manner, unneeded crawling runs are avoided and system resources are conserved. Furthermore, the useful-text recognition and extraction algorithms are enhanced so that they run faster and return with higher precision the content that corresponds to the useful text of an article's page.
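A minimal sketch of such a change-rate test, assuming a simple bookkeeping object rather than PeRSSonal's actual data structures: estimate the feed's average publication interval and skip re-crawling until roughly one interval has elapsed.

```python
import time

class FeedState:
    """Tracks when new items were last observed in an RSS feed."""
    def __init__(self):
        self.last_item_times = []   # timestamps of recently observed new items
        self.last_crawl = 0.0

    def record_new_items(self, timestamps):
        self.last_item_times = (self.last_item_times + list(timestamps))[-50:]

    def mean_update_interval(self, default=3600.0):
        t = sorted(self.last_item_times)
        if len(t) < 2:
            return default
        return (t[-1] - t[0]) / (len(t) - 1)

def should_crawl(feed: FeedState, now=None) -> bool:
    """Fetch the feed's article pages only if roughly one average publication
    interval has passed since the last crawl."""
    now = now or time.time()
    return (now - feed.last_crawl) >= feed.mean_update_interval()

feed = FeedState()
feed.record_new_items([0, 600, 1200, 1800])   # a new item every 10 minutes
feed.last_crawl = 1800
print(should_crawl(feed, now=2100))   # False: only 5 minutes have elapsed
print(should_crawl(feed, now=2500))   # True: about 12 minutes have elapsed
```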
The text preprocessing and keyword extraction procedures are also significantly improved. The algorithms are now parametrized and adjust according to the text and its origin. Moreover, the system can recognize a text's language through a modular architecture. In addition, the keyword extraction procedure is enhanced with noun retrieval capabilities: nouns, which usually carry most of the semantic meaning of a sentence, are now identified and can be weighted accordingly. This subsystem is also designed to support multimedia content, which will be correlated with keywords.
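A minimal sketch of noun-centred keyword extraction, assuming NLTK with its tokenizer and POS-tagger models installed (resource names vary slightly across NLTK versions); PeRSSonal's own part-of-speech pipeline is not reproduced here.

```python
from collections import Counter

import nltk

# One-time model downloads; newer NLTK releases may use different resource
# names (e.g. "punkt_tab"), so adjust these if the download fails.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def noun_keywords(text: str, top_n: int = 10) -> list[str]:
    """Return the most frequent nouns in the text as candidate keywords."""
    tagged = nltk.pos_tag(nltk.word_tokenize(text))   # Penn Treebank POS tags
    nouns = [word.lower() for word, tag in tagged
             if tag.startswith("NN") and word.isalpha()]
    return [word for word, _ in Counter(nouns).most_common(top_n)]

article = ("The crawler fetches news articles from RSS feeds and the summarizer "
           "selects the sentences that best describe each article.")
print(noun_keywords(article, top_n=5))
```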
One step further, the categorization and summarization mechanisms are improved with heuristics that deliver better results than the initial version of the system. The summarization procedure has improved significantly through techniques that exploit the system's knowledge not only of the text itself but also of the user requesting the summary. The categorization procedure also benefits from the text's summary, using it as a shorter, more meaningful version of the initial text in order to decide in cases where categorization of the full text does not give clear results.
The procedure concludes with the personalized presentation of the results on the user's side. The personalization algorithm takes many parameters into consideration, among them the browsing history, the time the user spends on a text's summary or full body, and the user's choices in the application, in order to build the user's profile. The algorithm effectively "learns" from the user's choices and adjusts itself to the user's real preferences as time passes. Thus the system can respond to continually changing user preferences.
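A minimal sketch of such an adaptive profile, with illustrative signals and weights rather than the system's actual ones: category preferences are reinforced by implicit feedback (reading time, click-through to the full article) and decayed over time so that the profile follows the user's current interests.

```python
class UserProfile:
    """Keeps a per-category preference score updated from implicit feedback."""
    def __init__(self, decay: float = 0.9):
        self.decay = decay              # how quickly old interests are forgotten
        self.scores: dict[str, float] = {}

    def observe(self, category: str, seconds_read: float, clicked_full_text: bool):
        # Turn the implicit signals into a bounded reward in [0, 1].
        reward = min(seconds_read / 120.0, 1.0) + (0.5 if clicked_full_text else 0.0)
        reward = min(reward, 1.0)
        # Exponential forgetting: decay every category, then reinforce this one.
        for cat in self.scores:
            self.scores[cat] *= self.decay
        self.scores[category] = self.scores.get(category, 0.0) + reward

    def ranked_categories(self):
        return sorted(self.scores, key=self.scores.get, reverse=True)

profile = UserProfile()
profile.observe("sports", seconds_read=30, clicked_full_text=False)
profile.observe("politics", seconds_read=180, clicked_full_text=True)
profile.observe("politics", seconds_read=90, clicked_full_text=False)
print(profile.ranked_categories())   # politics now ranks above sports
```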
In the final stage of the information flow, the results are returned to the application that the user runs on his or her desktop, whose development is part of this thesis. The aim of the client-side application is to exploit and properly present the information that the system judges to be of interest to the user, formatting it suitably so that it is genuinely useful and readable. The goal is not to "flood" the user with even more information than he or she could find alone on the web, but on the contrary to filter it so that it truly represents the user's interests. The developed application is based on standard protocols for both the transmission and the formatting of information and is easily configurable by the user, while it also offers many functions that make it able to replace the common methods web users rely on for their everyday news reading.
7 |
A HYBRID APPROACH TO RETRIEVING WEB DOCUMENTS AND SEMANTIC WEB DATA. Immaneni, Trivikram, 18 January 2008
No description available.
8 |
[en] TEXT MINING AT THE INTELLIGENT WEB CRAWLING PROCESS / [pt] MINERAÇÃO DE TEXTOS NA COLETA INTELIGENTE DE DADOS NA WEB. FABIO DE AZEVEDO SOARES, 31 March 2009
This dissertation presents a study of the application of Text Mining to the intelligent web crawling process. The most common way of gathering data on the Web is to use web crawlers: programs that, once provided with an initial set of URLs (seeds), begin the methodical procedure of visiting a site, storing it on disk and extracting from it the hyperlinks that will be used for the next visits. Seeking content in this way, however, is an exhausting and costly task. An intelligent web crawling process, rather than collecting and storing every accessible web document, analyses the available crawling options to find links that are likely to provide content highly relevant to a topic defined a priori. In the intelligent data-gathering approach proposed in this work, topics are defined not by keywords but by the use of text documents as examples. Pre-processing techniques from Text Mining, among them the use of a thesaurus, then analyse semantically the document given as an example. Based on this analysis, the web crawler is guided towards its objective: retrieving information relevant to that document. Starting from seeds, or from automatic queries to the available search engines, the crawler analyses, exactly as in the previous step, every document retrieved from the Web. A comparison is then made between each retrieved document and the example document. Once the similarity level between the two has been obtained, the hyperlinks of the retrieved document are analysed and queued, to be dequeued later according to their respective probable levels of importance. At the end of the data-gathering process another Text Mining technique, Document Clustering, is applied in order to select the most representative documents of the collected texts. The implementation of a tool incorporating the researched heuristics made it possible to obtain practical results, to evaluate the performance of the developed techniques and to compare the results with other ways of retrieving data from the Web. This work shows that the use of Text Mining is a path worth exploring in the process of retrieving relevant information from the Web.
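A minimal sketch of the core loop described above: pages are scored by cosine similarity against a bag-of-words vector built from the example document, and their outgoing links are queued with a priority derived from that score. The fetch_page and extract_links helpers are assumed stand-ins, and the thesaurus expansion and the final Document Clustering step are omitted.

```python
import heapq
import math
import re
from collections import Counter

def bag_of_words(text: str) -> Counter:
    return Counter(re.findall(r"[a-zà-ú]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def focused_crawl(example_text, seeds, fetch_page, extract_links, max_pages=50):
    """Greedy best-first crawl: visit the queued URL whose parent page was most
    similar to the example document."""
    target = bag_of_words(example_text)
    frontier = [(-1.0, url) for url in seeds]      # max-heap via negated priority
    heapq.heapify(frontier)
    seen, results = set(seeds), []
    while frontier and len(results) < max_pages:
        _, url = heapq.heappop(frontier)
        text = fetch_page(url)                      # assumed: returns page text
        score = cosine(target, bag_of_words(text))
        results.append((score, url))
        for link in extract_links(text, url):       # assumed: returns absolute URLs
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-score, link))
    return sorted(results, reverse=True)

# Tiny demonstration with in-memory "pages" standing in for real HTTP fetches.
pages = {"seed": "mineração de textos e recuperação de informação na web link-a",
         "link-a": "documento sobre clusterização de documentos e text mining"}
links = {"seed": ["link-a"], "link-a": []}
ranking = focused_crawl("text mining na web", ["seed"],
                        fetch_page=lambda u: pages.get(u, ""),
                        extract_links=lambda text, u: links.get(u, []))
print(ranking)
```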
9 |
Vulnerability in online social network profiles. A Framework for Measuring Consequences of Information Disclosure in Online Social Networks. Alim, Sophia, January 2011
The increase in online social network (OSN) usage has led to personal details, known as attributes, being readily displayed in OSN profiles. This can leave profile owners vulnerable to privacy and social engineering attacks, including identity theft, stalking and re-identification by linking.
Due to the need to address privacy in OSNs, this thesis presents a framework to quantify the vulnerability of a user's OSN profile. Vulnerability is defined as the likelihood that the personal details displayed on an OSN profile will spread due to the actions of the profile owner and their friends with regard to information disclosure.
The vulnerability measure consists of three components. The individual vulnerability is calculated by allocating weights to the disclosed profile attribute values and to neighbourhood features that may contribute to the personal vulnerability of the profile user. The relative vulnerability is the collective vulnerability of the profile's friends. The absolute vulnerability is the overall profile vulnerability, which combines the individual and relative vulnerabilities.
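A minimal sketch of how the three components could fit together; the attribute weights, the neighbourhood factor and the combination rule are illustrative assumptions, not the weights or normalisation defined in the thesis.

```python
# Illustrative attribute weights: more identifying details weigh more.
ATTRIBUTE_WEIGHTS = {"full_name": 0.9, "birth_date": 0.8, "home_town": 0.6,
                     "school": 0.5, "relationship_status": 0.3}

def individual_vulnerability(disclosed, friend_count, weights=ATTRIBUTE_WEIGHTS):
    """Weight the disclosed attributes, then scale by a simple neighbourhood
    feature (the audience grows with the number of friends)."""
    attribute_score = sum(weights.get(a, 0.1) for a in disclosed) / sum(weights.values())
    neighbourhood_factor = min(friend_count / 500.0, 1.0)
    return min(attribute_score * (0.5 + 0.5 * neighbourhood_factor), 1.0)

def relative_vulnerability(friend_scores):
    """Collective vulnerability of the profile's friends (mean of their scores)."""
    return sum(friend_scores) / len(friend_scores) if friend_scores else 0.0

def absolute_vulnerability(individual, relative, alpha=0.6):
    """Overall score combining the profile's own and its friends' vulnerability."""
    return alpha * individual + (1 - alpha) * relative

me = individual_vulnerability({"full_name", "birth_date", "home_town"}, friend_count=320)
friends = [0.2, 0.7, 0.4]
print(round(absolute_vulnerability(me, relative_vulnerability(friends)), 3))
```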
The first part of the framework details a data retrieval approach for extracting MySpace profile data in order to test the vulnerability algorithm on real cases. The profile structure presented significant extraction problems because of the dynamic nature of the OSN. Issues around the usability of a standard dataset, including ethical concerns, are discussed. Applying the vulnerability measure to the extracted data emphasised that so-called 'private profiles' are not immune to vulnerability issues, because some profile details can still be displayed on private profiles.
The second part of the framework presents the normalisation of the measure within a formal approach that includes the development of axioms and the validation of the measure on a larger dataset of profiles. The axioms highlight that changes in the presented list of profile attributes, and in the weights those attributes carry in making the profile vulnerable, affect the individual vulnerability of a profile.
Validation of the measure showed that vulnerability involving OSN profiles does occur, and this provides a good basis for other researchers to build on the measure further. The novelty of this vulnerability measure is that it takes into account not just the attributes presented on each individual profile but also features of the profile's neighbourhood.
10 |
Assimilation des observations satellitaires de l'interféromètre atmosphérique de sondage infrarouge (IASI) dans un modèle de chimie-transport pour des réanalyses d'ozone à l'échelle globale / Satellite data assimilation of the Infrared Atmospheric Sounding Interferometer (IASI) in a chemistry transport model for ozone reanalyses at global scale. Peiro, Hélène, 12 January 2018
Human activity produces gases that affect the climate and air quality, with important economic and social consequences. Tropospheric ozone (O3) is created by chemical reactions from primary pollutants such as nitrogen oxides. O3 is the third most important greenhouse gas after carbon dioxide and methane, and it is one of the main pollutants because of its oxidant effects on biological tissue. To meet the need for continuous measurement of the ozone concentration, several satellites carry sounders that measure the atmospheric signal in the ultraviolet, visible or infrared domains of the Earth's radiance. In particular, the French space agency CNES (Centre National d'Etudes Spatiales) has developed the infrared sounder IASI on board the polar meteorological satellites METOP. IASI, in orbit for several years, makes it possible to estimate the concentration of several atmospheric gases, notably O3, with a spatio-temporal coverage never reached before. Every day IASI measures the infrared spectrum of the atmosphere between 650 and 2700 cm⁻¹ with a horizontal resolution of 12 km, producing tens of gigabytes of geolocated data per day. These observations form an ideal dataset for the validation of the chemistry-transport models (CTMs) that underlie air-quality monitoring and forecasting systems. These models can take satellite observations into account through a mathematical procedure called data assimilation. This technique makes it possible to fill gaps in the satellite information (for instance due to clouds, or during the night for UV-visible sensors) and to obtain global 3D fields of the concentrations of certain chemical species on an hourly basis. In this context, it is very important to develop reliable and efficient algorithms for assimilating IASI data into CTMs. To this end, UMR/CECI (CERFACS) develops, in collaboration with CNRM/Météo-France, an assimilation tool (VALENTINA) for the CTM MOCAGE, which has global- and regional-scale applications for climate and air-quality studies, notably within the European Copernicus project on atmospheric composition (CAMS). It also collaborates with the Laboratoire d'Aérologie, which has for several years developed the SOFRID algorithm for the retrieval of IASI O3 vertical profiles, based on the RTTOV radiative transfer code. This thesis concerns the development and production of three-dimensional tropospheric ozone analyses through the assimilation of satellite observations (MLS, IASI) in the CTM MOCAGE. The main objective is to build a new database for the study of ozone variability from the daily to the decadal scale. The analyses using IASI data are shown to reproduce the response of tropospheric O3 to ENSO (El Niño Southern Oscillation) at low latitudes, providing new information on the vertical distribution of the associated anomalies. A large part of this work also consisted in analysing the biases between the analyses and independent ozone-sounding data. One possible reason for these biases is the use of a strongly biased climatological a priori and associated error covariances (particularly around the tropopause) in the retrieval procedure of the IASI O3 products. The second part of the thesis therefore consisted in implementing a method to prescribe a priori profiles closer to the real situations, thereby improving the retrieved O3 profiles. In conclusion, this thesis represents a significant step towards improving the tropospheric O3 products derived from the IASI instrument, allowing a long-term monitoring that the operational nature of the METOP satellites will facilitate.
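For reference, the analysis step underlying this kind of assimilation can be written in its standard best-linear-unbiased-estimate form (the VALENTINA/MOCAGE configuration may use an equivalent variational formulation; the notation below is generic rather than taken from the thesis):

\[
\mathbf{x}_a = \mathbf{x}_b + \mathbf{K}\left(\mathbf{y} - H(\mathbf{x}_b)\right),
\qquad
\mathbf{K} = \mathbf{B}\mathbf{H}^{\mathsf T}\left(\mathbf{H}\mathbf{B}\mathbf{H}^{\mathsf T} + \mathbf{R}\right)^{-1},
\]

where \(\mathbf{x}_b\) is the background (model) ozone field, \(\mathbf{y}\) the assimilated retrievals (MLS, IASI), \(H\) the observation operator with Jacobian \(\mathbf{H}\), \(\mathbf{B}\) and \(\mathbf{R}\) the background- and observation-error covariance matrices, and \(\mathbf{x}_a\) the resulting 3D analysis carried forward by the CTM.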