  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Nonparametric adaptive estimation for discretely observed Lévy processes

Kappus, Julia Johanna 30 October 2012 (has links)
This thesis deals with nonparametric estimation methods for discretely observed Lévy processes. A Lévy process X having finite variation on compact sets and finite second moments is observed at low frequency. The jump dynamics are fully described by the finite signed measure μ(dx) = x ν(dx). The goal is to estimate, nonparametrically, some linear functional of μ. In the first part, kernel estimators are constructed and upper bounds on the corresponding risk are provided. From these, rates of convergence are derived under regularity assumptions on the Lévy measure. For particular cases, minimax lower bounds are proved, so the rates of convergence are shown to be minimax optimal. The focus lies on the data-driven choice of the smoothing parameter, which is considered in the second part. Since nonparametric estimation methods for Lévy processes have strong structural similarities with nonparametric density deconvolution with unknown error density, both fields are discussed in parallel and the concepts are developed in generality, for Lévy processes as well as for density deconvolution. The bandwidth is chosen using techniques of model selection via penalization. The principle of model selection via penalization usually relies on the fact that the fluctuation of certain stochastic quantities can be controlled by penalizing with a deterministic term. In contrast, the variance is unknown in the setting investigated here, so the penalty term is itself a stochastic quantity. The main concern of this thesis is to develop strategies for dealing with this stochastic penalty term. The most important step in this direction is a modified estimator of the unknown characteristic function in the denominator, which allows the pointwise control of this object to be made uniform on the real line. The main technical tools in the arguments are concentration inequalities of Talagrand type for empirical processes.
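The deconvolution structure described in this abstract can be sketched in a few lines: a spectral-cutoff estimator divides the empirical characteristic function of the observations by the error characteristic function and inverts the truncated Fourier integral. The sketch below assumes a known Gaussian error density and a fixed cutoff, with all numbers invented; the thesis's contribution concerns the harder cases of an unknown (estimated) denominator and a data-driven cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated deconvolution data: Y = X + eps, X ~ N(2, 1), eps ~ N(0, 0.5^2).
n = 5000
x = rng.normal(2.0, 1.0, n)
y = x + rng.normal(0.0, 0.5, n)

def ecf(u, sample):
    """Empirical characteristic function of the sample at frequencies u."""
    return np.exp(1j * np.outer(u, sample)).mean(axis=1)

def deconv_density(grid, sample, noise_cf, cutoff, du=0.05):
    """Spectral-cutoff deconvolution density estimator (illustrative sketch).

    Approximates (1/2pi) * integral over |u| <= cutoff of
    exp(-iux) * ecf(u) / noise_cf(u) du by a Riemann sum.
    """
    u = np.arange(-cutoff, cutoff, du)
    ratio = ecf(u, sample) / noise_cf(u)
    vals = (np.exp(-1j * np.outer(grid, u)) * ratio).sum(axis=1) * du / (2 * np.pi)
    return vals.real

noise_cf = lambda u: np.exp(-0.5 * (0.5 * u) ** 2)  # CF of N(0, 0.5^2)
grid = np.linspace(-2.0, 6.0, 81)
fhat = deconv_density(grid, y, noise_cf, cutoff=3.0)
peak = grid[np.argmax(fhat)]   # should sit near the true mode of X
```

Choosing `cutoff` from the data via a penalized criterion, when `noise_cf` itself must be estimated and appears in the denominator, is precisely the model-selection problem the abstract describes.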
252

Central limit theorems and confidence sets in the calibration of Lévy models and in deconvolution

Söhl, Jakob 03 May 2013 (has links)
Central limit theorems and confidence sets are studied in two different but related nonparametric inverse problems, namely in the calibration of an exponential Lévy model and in the deconvolution model. In the first setup, an asset is modeled by an exponential of a Lévy process, option prices are observed and the characteristic triplet of the Lévy process is estimated. We show that the estimators are almost surely well defined. To this end, we prove an upper bound for hitting probabilities of Gaussian random fields and apply it to a Gaussian process related to the estimation method for Lévy models. We prove joint asymptotic normality for the estimators of the volatility, the drift and the intensity, and for pointwise estimators of the jump density. Based on these results, we construct confidence intervals and sets for the estimators. We show that the confidence intervals perform well in simulations and apply them to option data on the German DAX index. In the deconvolution model, we observe independent, identically distributed random variables with additive errors and estimate linear functionals of the density of the random variables. We consider deconvolution models with ordinary smooth errors, for which the ill-posedness of the problem is given by the polynomial decay rate of the characteristic function of the errors. We prove a uniform central limit theorem for estimators of translation classes of linear functionals, which includes the estimation of the distribution function as a special case. Our results hold in situations where a root-n rate can be obtained, more precisely, when the Sobolev smoothness of the functionals is larger than the ill-posedness of the problem.
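As a minimal illustration of building confidence intervals from asymptotic normality, the direct (noise-free) case of the distribution function reads as follows; the deconvolution analogue in this thesis replaces the empirical CDF and its variance by their deconvolution counterparts. All numbers are illustrative.

```python
import numpy as np
from math import sqrt

rng = np.random.default_rng(1)

# Direct observations; estimate F(t) and a CLT-based ~95% confidence interval.
sample = rng.normal(0.0, 1.0, 2000)
t = 0.0
n = sample.size

f_hat = float(np.mean(sample <= t))        # empirical CDF at t
se = sqrt(f_hat * (1.0 - f_hat) / n)       # plug-in asymptotic standard error
z = 1.96                                   # ~95% standard normal quantile
ci = (f_hat - z * se, f_hat + z * se)
```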
253

Adaptive and efficient quantile estimation / From deconvolution to Lévy processes

Trabs, Mathias 07 July 2014 (has links)
The estimation of quantiles and related functionals is studied in two inverse problems: the classical deconvolution model and the Lévy model, where a Lévy process is observed and functionals of its jump measure are to be estimated. From a more abstract perspective, we study semiparametric efficiency in the sense of Hájek-Le Cam for functional estimation in regular indirect models. A general convolution theorem is proved which applies to a large class of statistical inverse problems. In particular, we consider the deconvolution model, where we prove that our plug-in estimators of the distribution function and of the quantiles are efficient. In the nonlinear Lévy model, based on low-frequency discrete observations of the Lévy process, we derive an information bound for the estimation of functionals of the jump measure. The strong relationship between the Lévy model and the deconvolution model is given a precise meaning. Quantile estimation in deconvolution problems is studied comprehensively; in particular, the more realistic setup of unknown error distributions is covered. Under minimal and natural conditions we show that the plug-in method is minimax optimal. A data-driven bandwidth choice yields optimal adaptive estimation. The concept of quantiles is generalized to possibly infinite Lévy measures by considering left and right tail integrals. Based on equidistant discrete observations of the process, we construct a nonparametric estimator of the generalized quantiles and derive minimax convergence rates. As a motivating financial example for inverse problems, we empirically study the calibration of an exponential Lévy model for asset prices. The estimators of the generalized quantiles are adapted to this model. We construct an optimal adaptive quantile estimator and apply the procedure to real data on DAX options.
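The plug-in principle for quantiles can be sketched in the direct-observation case: estimate the distribution function, then take a generalized inverse. In the thesis, the empirical CDF used below is replaced by a deconvolution estimator of F; this toy version only illustrates the inversion step.

```python
import numpy as np

rng = np.random.default_rng(2)

# Direct observations from an Exp(1) distribution; its median is log(2).
sample = np.sort(rng.exponential(1.0, 4000))

def plugin_quantile(tau, sorted_sample):
    """Generalized inverse of the empirical CDF at level tau in (0, 1)."""
    n = sorted_sample.size
    k = int(np.ceil(tau * n)) - 1          # smallest k with F_hat >= tau
    return float(sorted_sample[max(k, 0)])

q_hat = plugin_quantile(0.5, sample)       # should lie near log(2) ~ 0.693
```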
254

Caractérisation des aérosols par inversion des données combinées des photomètres et lidars au sol.

Nassif Moussa Daou, David January 2012 (has links)
Aerosols are small, micrometer-sized particles whose optical effects, coupled with their impact on cloud properties, are a source of large uncertainty in climate models. While their radiative forcing is largely of a cooling nature, the degree of their impact can vary significantly depending on the size and nature of the aerosols. The radiative and optical impact of aerosols depends, first and foremost, on their concentration or number density (an extensive parameter) and secondly on the size and nature of the aerosols (intensive, per-particle parameters). We employed passive (sunphotometry) and active (backscatter lidar) measurements to retrieve extensive optical signals (aerosol optical depth, or AOD, and the backscatter coefficient, respectively) and semi-intensive optical signals (fine- and coarse-mode OD and fine- and coarse-mode backscatter coefficients, respectively) and compared the optical coherency of these retrievals over a variety of aerosol and thin cloud events (pollution, dust, volcanic, smoke, and thin-cloud dominated). The retrievals were performed using an existing spectral deconvolution method applied to the sunphotometry data (SDA) and a new retrieval technique for the lidar based on colour-ratio thresholding. The validation of the lidar retrieval was accomplished by comparing the vertical integrations of the fine-mode, coarse-mode and total backscatter coefficients of the lidar with their sunphotometry analogues, where the lidar ratios (the intensive parameter required to transform backscatter coefficients into extinction coefficients) were (a) computed independently using the SDA retrievals for fine-mode aerosols, or prescribed for coarse-mode aerosols and clouds, or (b) computed by forcing the computed (fine, coarse and total) lidar ODs to be equal to their analogous sunphotometry ODs.
Comparisons between cases (a) and (b), as well as the semi-qualitative verification of the derived fine- and coarse-mode vertical profiles against the expected backscatter coefficient behavior of fine- and coarse-mode aerosols, yielded satisfactory agreement (notably, the fine, coarse and total OD errors were less than or comparable to the sunphotometry instrument errors). Comparisons between cases (a) and (b) also showed a degree of optical coherency between the fine-mode lidar ratios.
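The colour-ratio thresholding step can be caricatured as follows. The profiles, the threshold value and the layer geometry are all invented for the example; only the logic (classify range bins by the two-wavelength backscatter ratio, then integrate each class vertically) reflects the retrieval described above.

```python
import numpy as np

# Synthetic two-wavelength backscatter profiles (units and values invented):
# a fine-mode background decaying with altitude plus a coarse-mode layer at 3 km.
z = np.linspace(0.0, 6.0, 61)                          # altitude, km
beta_532 = 1e-3 * np.exp(-z) + 5.0e-4 * (np.abs(z - 3.0) < 0.45)
beta_1064 = 3e-4 * np.exp(-z) + 4.5e-4 * (np.abs(z - 3.0) < 0.45)

# Large particles scatter 1064 nm relatively more, so a high colour ratio
# flags coarse-mode bins; the 0.6 threshold is purely illustrative.
colour_ratio = beta_1064 / beta_532
coarse = colour_ratio > 0.6
fine = ~coarse

dz = z[1] - z[0]
int_fine = beta_532[fine].sum() * dz       # integrated fine-mode backscatter
int_coarse = beta_532[coarse].sum() * dz   # integrated coarse-mode backscatter
# Multiplying each integral by a (fine/coarse) lidar ratio would give optical
# depths to compare with sunphotometer fine/coarse AODs, as in the validation.
```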
255

Predicting the sound field from aeroacoustic sources on moving vehicles: Towards an improved urban environment

Pignier, Nicolas January 2017 (has links)
In a society where environmental noise is becoming a major health and economic concern, sound emissions are an increasingly critical design factor for vehicle manufacturers. With about a quarter of the European population living close to roads with heavy traffic, traffic noise in urban landscapes has to be addressed first. The current introduction of electric vehicles on the market and the need for sound systems to alert others of their presence is causing a shift in mentalities, requiring engineering methods that treat noise management problems from a broader perspective, one in which noise emissions are considered not merely a by-product of the design but an integrated part of it. Developing more sustainable ground transportation will require a better understanding of the sound field emitted under various realistic operating conditions, beyond the current requirements set by the standard pass-by test, which is performed in a free field. A key aspect of improving this understanding is the development of efficient numerical tools to predict the generation and propagation of sound from moving vehicles. In the present thesis, a methodology is proposed for evaluating the pass-by sound field generated by vehicle acoustic sources in a simplified urban environment, with a focus on flow sound sources. Although the aerodynamic noise is arguably still a minor component of the total emitted noise in urban driving conditions, its share will certainly increase in the near future with the introduction of quiet electric engines and more noise-efficient tyres on the market. This work presents a complete modelling of the problem, from sound generation to sound propagation and pass-by analysis, in three steps.
Firstly, computation of the flow around the geometry of interest; secondly, extraction of the sound sources generated by the flow; and thirdly, propagation of the sound generated by the moving sources to observers, including reflections and scattering by nearby surfaces. In the first step, the flow is solved using compressible detached-eddy simulations. The identification of the sound sources in the second step is performed using direct numerical beamforming with linear-programming deconvolution, with the phased-array pressure data extracted from the flow simulations. The outcome of this step is a set of uncorrelated monopole sources. Step three uses this set as input to a propagation method based on a point-to-point moving-source Green's function and a modified Kirchhoff integral, under the Kirchhoff approximation, to compute reflections on built surfaces. The methodology is demonstrated on the example of the aeroacoustic noise generated by a NACA air inlet moving in a simplified urban setting. This methodology gives insights into the sound-generating mechanisms, the source characteristics and the sound field generated by the sources when moving in a simplified urban environment.
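The moving-source propagation step rests on evaluating a free-field Green's function at the retarded (emission) time. A minimal sketch for a harmonic monopole on a straight, constant-speed trajectory (all parameters invented) solves the retarded-time equation by fixed-point iteration and exhibits the expected Doppler shift at a fixed observer:

```python
import numpy as np

c = 340.0                            # speed of sound, m/s
v = 20.0                             # source speed, m/s
f0 = 500.0                           # source frequency, Hz
obs = np.array([0.0, 10.0, 0.0])     # fixed observer position, m

def retarded_time(t_obs, tol=1e-10):
    """Solve t_obs = tau + |x_obs - x_src(tau)| / c by fixed-point iteration
    (contracts since the source Mach number v/c < 1)."""
    tau = t_obs
    for _ in range(100):
        src = np.array([v * tau, 0.0, 0.0])    # source passes origin at tau = 0
        tau_new = t_obs - np.linalg.norm(obs - src) / c
        if abs(tau_new - tau) < tol:
            break
        tau = tau_new
    return tau_new

def pressure(t_obs):
    """Free-field monopole signal with 1/(4 pi r) spreading at emission time."""
    tau = retarded_time(t_obs)
    r = np.linalg.norm(obs - np.array([v * tau, 0.0, 0.0]))
    return np.sin(2 * np.pi * f0 * tau) / (4 * np.pi * r)

# Instantaneous received frequency f0 * d(tau)/dt: above f0 on approach,
# below f0 on recession (Doppler shift).
t = np.linspace(-1.0, 1.0, 2001)
tau = np.array([retarded_time(ti) for ti in t])
inst_freq = f0 * np.gradient(tau, t)
```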
256

Méthodes modernes d'analyse de données en biophysique analytique : résolution des problèmes inverses en RMN DOSY et SM / New methods of data analysis in analytical biophysics : solving the inverse ill-posed problems in DOSY NMR and MS

Cherni, Afef 20 September 2018 (has links)
This thesis proposes new algorithmic approaches to solving inverse problems in biophysics. First, we study the DOSY NMR experiment: a new hybrid regularization approach is proposed with a novel algorithm, PALMA (http://palma.labo.igbmc.fr/). This algorithm enables the efficient analysis of real DOSY data with high precision, whatever their type. Second, we turn to the mass spectrometry application. We propose a new dictionary-based approach dedicated to proteomic analysis, using the averagine model and a constrained minimization strategy with a sparsity-inducing penalty. In order to improve the accuracy of the retrieved information, we propose a new method, SPOQ, based on a new penalty function, solved with a new Forward-Backward algorithm with a locally adjusted variable metric. All our algorithms benefit from sound theoretical convergence guarantees and have been validated experimentally on synthetic spectra and real data.
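The forward-backward scheme underlying the sparsity-penalized formulations above can be illustrated with its simplest instance, ISTA for an l1-penalized least-squares problem. This is a generic sketch on synthetic data, not the thesis's algorithm: PALMA and SPOQ use more elaborate penalties and variable metrics.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparse recovery toy problem: min_x 0.5*||Ax - y||^2 + lam*||x||_1,
# with a random dictionary A and a k-sparse ground truth (noiseless data).
m, n, k = 60, 120, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(3.0, 0.5, k)
y = A @ x_true

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz const. of grad

def soft_threshold(v, t):
    """Proximity operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for _ in range(1000):
    grad = A.T @ (A @ x - y)                           # forward (gradient) step
    x = soft_threshold(x - step * grad, lam * step)    # backward (prox) step

obj = lambda v: 0.5 * np.linalg.norm(A @ v - y) ** 2 + lam * np.abs(v).sum()
err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```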
257

Correction des effets de volume partiel en tomographie d'émission

Le Pogam, Adrien 29 April 2010 (has links)
Partial Volume Effects (PVE) designate the blur commonly found in nuclear medicine images, and this PhD work is dedicated to their correction with the objectives of qualitative and quantitative improvement of such images. PVE arise from the limited spatial resolution of functional imaging with either Positron Emission Tomography (PET) or Single Photon Emission Computed Tomography (SPECT). They can be defined as a signal loss in tissues of size similar to the Full Width at Half Maximum (FWHM) of the point spread function (PSF) of the imaging device. In addition, PVE induce activity cross-contamination between adjacent structures with different tracer uptakes. This can lead to under- or over-estimation of the real activity of the analyzed regions. Various methodologies currently exist to compensate or even correct for PVE, and they may be classified depending on their place in the processing chain (before, during or after the image reconstruction process) as well as their dependency on co-registered anatomical images with higher spatial resolution, for instance Computed Tomography (CT) or Magnetic Resonance Imaging (MRI).
The voxel-based, post-reconstruction approach was chosen for this work to avoid regions-of-interest definition and dependency on the proprietary reconstruction developed by each manufacturer, in order to improve the PVE correction. Two different contributions were carried out in this work: the first one is based on a multi-resolution methodology in the wavelet domain, using the higher-resolution details of a co-registered anatomical image associated with the functional dataset to correct. The second one is the improvement of iterative-deconvolution-based methodologies by using tools such as directional wavelets and their curvelet extensions. These approaches were applied and validated using synthetic, simulated and clinical images, with neurology and oncology applications in mind. Finally, as currently available PET/CT scanners incorporate more and more spatial resolution corrections in their implemented reconstruction algorithms, we have compared such approaches in SPECT and PET to an iterative deconvolution methodology that was developed in this work.
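A minimal 1-D sketch of post-reconstruction, voxel-based deconvolution, using Richardson-Lucy iterations as a stand-in for the iterative schemes discussed above (geometry and PSF width invented): the measured profile is the true activity blurred by the scanner PSF, and deconvolution restores the peak amplitude lost to partial volume.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Normalized, truncated Gaussian point spread function."""
    x = np.arange(size) - size // 2
    psf = np.exp(-0.5 * (x / sigma) ** 2)
    return psf / psf.sum()

def richardson_lucy(measured, psf, n_iter=200):
    """Multiplicative Richardson-Lucy deconvolution (noiseless sketch)."""
    est = np.full_like(measured, measured.mean())
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = measured / np.maximum(conv, 1e-12)   # guard against division by 0
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

true_activity = np.zeros(64)
true_activity[28:36] = 1.0                   # small hot structure, 8 voxels wide
psf = gaussian_psf(21, sigma=3.0)            # blur comparable to structure size
measured = np.convolve(true_activity, psf, mode="same")

recovered = richardson_lucy(measured, psf)   # peak moves back toward 1.0
```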
258

Localisation et contribution de sources acoustiques de navire au passage par traitement d’antenne réduite / Array processing for the localization and the contribution of acoustic sources of a passing-by ship using a short antenna

Oudompheng, Benoit 03 November 2015 (has links)
Since surface-ship radiated noise is the main contribution to underwater acoustic noise in coastal waters, the Marine Strategy Framework Directive of the European Commission promotes the development of methods for monitoring and reducing the impact of shipping traffic noise. The need for an industrial system for mapping the noise radiated by surface ships motivated this study; such a system will allow the naval industry to identify which parts of a ship radiate the most noise. In this context, this research work deals with the development of passive underwater noise-mapping methods for a surface ship passing above a fixed linear array with a reduced number of hydrophones. Two aspects of noise mapping are considered: the localization of acoustic sources and the identification of the relative contribution of each source to the ship's acoustic signature. First, a bibliographical study of the acoustic radiation of a passing surface ship is conducted in order to identify the main acoustic sources and then to simulate sources representative of a ship. The acoustic propagation is simulated using ray theory and takes the source motion into account. This simulator of the acoustic radiation of a passing ship is built in order to validate the proposed noise-mapping methods and to design an experimental set-up. A study of the influence of source motion on the noise-mapping methods led to the use of a beamforming method for moving sources for source localization, and of a deconvolution method for identifying the source contributions. The performance of both methods is assessed in the presence of measurement noise and of uncertainties in the propagation model, in order to determine their limitations. A first improvement of the beamforming method is a passive synthetic-aperture array algorithm that exploits the relative motion between the ship and the array, notably to improve the spatial resolution at low frequencies. An acoustic trajectory-correction algorithm is then proposed to compensate for the often uncertain trajectography of the passing ship. Finally, the last part of this thesis concerns a pass-by measurement campaign with a towed surface-ship model in a lake; these measurements allowed us to validate the proposed noise-mapping methods and their improvements in a real but controlled environment.
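As a rough illustration of the source-localization step, the sketch below implements conventional (delay-and-sum) beamforming for a single stationary narrowband source above a short linear hydrophone array; the thesis extends this principle to moving sources by tracking the delays along the ship trajectory. All geometry and signal parameters here are illustrative assumptions, not the configuration used in the thesis.

```python
import numpy as np

c = 1500.0            # speed of sound in water (m/s)
f = 2000.0            # source tone frequency (Hz)
fs = 8000.0           # sampling frequency (Hz)
n_hydro = 8           # number of hydrophones (illustrative)
spacing = 1.0         # hydrophone spacing (m)
depth = 10.0          # source height above the array plane (m)

hydro_x = np.arange(n_hydro) * spacing       # hydrophone x-positions
src_x = 3.2                                  # true source x-position (m)

# Simulate the delayed, spherically attenuated tone at each hydrophone:
t = np.arange(0, 0.5, 1.0 / fs)
signals = np.zeros((n_hydro, t.size))
for m in range(n_hydro):
    r = np.hypot(hydro_x[m] - src_x, depth)
    signals[m] = np.sin(2 * np.pi * f * (t - r / c)) / r

# Demodulate to baseband once, then steer toward each candidate position:
baseband = signals * np.exp(-2j * np.pi * f * t)
scan_x = np.linspace(0.0, 7.0, 141)
power = np.zeros(scan_x.size)
for i, x in enumerate(scan_x):
    r = np.hypot(hydro_x - x, depth)                       # ranges, shape (n_hydro,)
    steer = (r * np.exp(2j * np.pi * f * r / c))[:, None]  # undo delay and spreading
    power[i] = np.abs((baseband * steer).sum())

x_hat = scan_x[np.argmax(power)]   # estimated source x-position
```

At the true position the per-hydrophone phases align coherently, so the scanned output power peaks near `src_x`; the moving-source beamformer of the thesis replaces the fixed ranges with time-varying ones along the measured trajectory.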
259

Mass Spectrometric Deconvolution of Libraries of Natural Peptide Toxins

Gupta, Kallol January 2013 (has links) (PDF)
This thesis deals with the analysis of natural peptide libraries using mass spectrometry. In the course of the study, both ribosomal and non-ribosomal classes of peptides have been investigated. Microheterogeneity, post-translational modifications (PTMs), isobaric amino acids and disulfide crosslinks present critical challenges in routine mass spectral structure determination of natural peptides. These problems form the core of this thesis. Chapter 2 describes an approach where chemical derivatization, in unison with high-resolution LC-MSn experiments, resulted in the deconvolution of a microheterogeneous peptide library of B. subtilis K1. Chapter 3 describes an approach for distinguishing between the isobaric amino acids Leu/Ile/Hyp by the use of combined ETD-CID fragmentation, through characteristic side-chain losses. Chapters 4-6 address a long-standing problem in the structure elucidation of peptide toxins: the determination of disulfide connectivity. Through the use of direct mass spectral CID fragmentation, a methodology has been proposed for determining the S-S pairing schemes in polypeptides. Further, an algorithm, DisConnect, has been developed for a rapid and robust solution to the problem. This general approach is applicable to both peptides and proteins, irrespective of the size and the number of disulfide bonds present. The method has been successfully applied to a large number of peptide toxins from marine cone snails (conotoxins), as well as to synthetic foldamers and proteins. Chapter 7 describes an attempt to integrate next-generation sequencing (NGS) data with mass spectrometric analysis of the crude venom. This approach couples rapidly generated cDNA sequences with high-throughput LC-ESI-MS/MS analysis, which provides mass spectral fragmentation information. An algorithm has been developed that allows the construction of a putative Conus peptide database from the NGS data, followed by a protocol that permits rapid annotation of tandem MS data. The approach is exemplified by an analysis of the peptide components present in the venom of Conus amadis, yielding 225 chemically unique sequences, with identification of more than 150 sites of PTMs. In summary, this thesis presents different methodologies that address the existing limitations of de novo mass spectral structure determination of natural peptides, and new methodologies that permit rapid and efficient analysis of complex mixtures.
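The combinatorial core of disulfide-connectivity determination of this kind can be sketched as follows: a peptide with 2n cysteines admits (2n-1)!! = 1·3·5·…·(2n-1) candidate pairing schemes, each of which would then be scored against the observed CID fragment masses (the scoring step, which DisConnect performs, is omitted here). The function name and cysteine positions below are illustrative.

```python
from itertools import islice

def pairings(cys):
    # Enumerate all perfect matchings (candidate disulfide connectivities)
    # over a list of cysteine positions: pair the first cysteine with each
    # remaining one, then recurse on the rest.
    if not cys:
        yield []
        return
    first, rest = cys[0], cys[1:]
    for i, partner in enumerate(rest):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + tail

# A toxin with 6 cysteines (positions are hypothetical) has 5*3*1 = 15
# candidate connectivities; fragment-mass matching then singles out the
# native pairing among them.
schemes = list(pairings([1, 4, 10, 15, 21, 26]))
```

The double-factorial growth ((2n-1)!! is 105 for 8 cysteines, 945 for 10) is why an automated, robust matching algorithm becomes essential for larger polypeptides.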
260

Implementace rekonstrukčních metod pro čtení čárového kódu / Implementation of restoring method for reading bar code

Kadlčík, Libor January 2013 (has links)
A bar code stores information as a series of bars and gaps of various widths, and can therefore be considered an example of a bilevel (square) signal. Magnetic bar codes are created by applying a slightly ferromagnetic material to a substrate. Sensing is done by a reading oscillator whose frequency is modulated by the presence of this ferromagnetic material; the signal from the oscillator is then subjected to frequency demodulation. Due to the temperature drift of the reading oscillator, the demodulated signal is accompanied by a DC drift. A method for removing this drift is introduced, along with a drift-insensitive method for detecting the presence of a bar code. Reading bar codes is further complicated by convolutional distortion, which results from the spatially dispersed sensitivity of the sensor. The effect of convolutional distortion is analogous to low-pass filtering: edges are smoothed and overlap, making their detection difficult. The characteristics of the convolutional distortion can be summarized in a point-spread function (PSF). In the case of magnetic bar codes, the shape of the PSF can be known in advance, but not its width or DC transfer; methods for estimating these parameters are discussed. The signal needs to be reconstructed into its original bilevel form before decoding can take place. Variational methods provide an effective way to do this: their core idea is to reformulate reconstruction as an optimization problem of functional minimization. The functional can be extended by further functionals (regularizations) that considerably improve the reconstruction results. The principle of variational methods is shown, including examples of the use of various regularizations. All algorithms and methods (including the frequency demodulation of the signal from the reading oscillator) are digital. They are implemented as a program for a microcontroller from the PIC32 family, which offers enough computing power that even blind deconvolution (where the real PSF must also be estimated) finishes in a few seconds. The microcontroller is part of a magnetic bar-code reader whose hardware allows the read information to be transferred to a personal computer via the PS/2 interface or USB (by emulating key presses on a virtual keyboard), or shown on a display.
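A minimal sketch of the variational idea, assuming a known Gaussian PSF and a simple double-well regularization x²(x-1)² that pushes each sample toward the two bar-code levels; the actual regularizations, PSF model and solver used in the thesis may differ.

```python
import numpy as np

def gauss_psf(width, n=15):
    # Illustrative Gaussian point-spread function, normalized to unit sum.
    t = np.arange(n) - n // 2
    h = np.exp(-0.5 * (t / width) ** 2)
    return h / h.sum()

def deconvolve(y, h, lam=0.05, lr=0.5, iters=2000):
    # Minimize ||h * x - y||^2 + lam * sum(x^2 (x-1)^2) by gradient descent.
    # The double-well term is zero exactly at the two levels 0 and 1,
    # so the true bilevel signal is a global minimizer of the functional.
    x = y.copy()
    for _ in range(iters):
        r = np.convolve(x, h, mode="same") - y            # data residual
        grad_data = np.convolve(r, h[::-1], mode="same")  # adjoint (correlation)
        grad_reg = lam * 2 * x * (x - 1) * (2 * x - 1)    # d/dx of x^2 (x-1)^2
        x -= lr * (grad_data + grad_reg)
    return x

# Synthetic bilevel bar-code signal blurred by the PSF (parameters assumed):
true = np.repeat([0, 1, 0, 0, 1, 1, 0, 1, 0], 8).astype(float)
h = gauss_psf(2.0)
y = np.convolve(true, h, mode="same")
x_hat = deconvolve(y, h)
recovered = (x_hat > 0.5).astype(float)   # re-threshold to bilevel form
```

Blind deconvolution, as mentioned above, would additionally optimize the PSF width (and DC transfer) alongside `x`, typically by alternating the two minimizations.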
