61

Addressing Challenges with Big Data for Maritime Navigation: AIS Data within the Great Lakes System

Dhar, Samir January 2016 (has links)
No description available.
62

Enriquecimento de elementos pesados no aglomerado globular do bojo NGC 6522: traços da primeira geração de estrelas / Heavy elements enrichment in the bulge globular cluster NGC 6522: traces from the first stellar generation

Cantelli, Elvis William Carvalho dos Santos 13 August 2018 (has links)
There is a concentration of moderately metal-poor globular clusters in the Galactic bulge, and many of them show a Blue Horizontal Branch (BHB). Together, these characteristics point to an old age. To better understand the origin of these clusters, the study of their abundance pattern can help identify the kind of the earliest supernovae in the central parts of the Galaxy. NGC 6522, in Baade's Window, is a representative of this class of clusters. Abundance analyses of individual stars in this cluster have confirmed its metallicity of [Fe/H] ≈ -1.0 and its enhancement in α-elements, and detected a variation in the abundances of the s-process heavy elements. For the highest enhancements of Y and Ba, the usual explanation of mass transfer from an Asymptotic Giant Branch companion might not apply, and enrichment by early fast-rotating massive stars was suggested. To further study the abundances in NGC 6522, we obtained a run with FLAMES-UVES in 2012, from which we showed that the enhancement in s-elements could still be accommodated by the companion mass-transfer model. We then obtained new data with FLAMES in 2016. In the present work we analyze another 6 stars observed at high resolution with UVES, and 32 stars at medium-high resolution observed with GIRAFFE; the latter were selected from their radial velocities of -14.3 ± 15 km/s. The atmospheric parameters and the abundances of the light elements C, N, O, the odd-Z elements Na and Al, the α-elements Mg, Si, Ca, Ti, the iron-peak elements Mn, Cu, Zn, the s-process elements Y, Zr, Ba, La, Ce, Nd, and the r-process element Eu are derived for the UVES sample and, preliminarily, for the GIRAFFE sample. 
Among the UVES stars, two show a significant enrichment in s-process elements, and one shows high [Y/Ba] and [Zr/Ba] ratios, suggesting early enrichment by fast-rotating massive stars as a probable scenario.
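The bracket notation used throughout this abstract ([Fe/H], [Y/Ba], [Zr/Ba]) is the standard logarithmic abundance ratio relative to the Sun. A minimal sketch of the definition (function name and inputs are illustrative):

```python
import math

def bracket(n_x_star, n_y_star, n_x_sun, n_y_sun):
    """[X/Y] = log10(N_X / N_Y)_star - log10(N_X / N_Y)_sun:
    the number-density ratio of elements X and Y in a star,
    expressed logarithmically relative to the solar ratio."""
    return math.log10(n_x_star / n_y_star) - math.log10(n_x_sun / n_y_sun)
```

So [Fe/H] = -1.0 means the star's iron-to-hydrogen ratio is one tenth of the Sun's, and a high [Y/Ba] means yttrium is enhanced relative to barium compared to the solar mix.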
64

Développement et exploitation scientifique d’un nouvel instrument interférométrique visible en optique guidée / Development and scientific exploitation of a new guided-optics visible interferometric instrument

Martinod, Marc-Antoine 14 December 2018 (has links)
Long-baseline visible interferometry is an astronomical observing technique that probes objects at an angular resolution unreachable with a single telescope. Interferometric measurements with ground-based instruments are currently limited in sensitivity and precision by atmospheric turbulence. However, new astrophysical needs, in particular the determination of fundamental parameters and the study of the close environment and the surface of stars, require observing fainter objects with better precision than is currently possible in visible interferometry. To overcome atmospheric turbulence, multispeckle interferometry was developed by adapting the speckle-imaging techniques used on single telescopes. Today, to further improve the performance of future combiners, the instrumentation is moving toward a new generation of detector, the Electron Multiplying Charge-Coupled Device (EMCCD), and toward optical fibers coupled with adaptive optics. This path was motivated by the success, in 2017, of combining adaptive optics with fringe tracking to partially compensate for turbulence in infrared interferometry with the GRAVITY instrument (Gravity Collaboration et al. 2017). The FRIEND prototype (Fibered and spectrally Resolved Interferometer - New Design) was designed to characterize and evaluate the performance of such a combination of technologies in the visible band. The improved precision of interferometric measurements comes from the optical fibers and from the dynamic range of the EMCCD. The drawback of using optical fibers in the visible is a loss of sensitivity, because atmospheric turbulence makes the rate of flux injection into the fibers very low. Sensitivity, however, is improved by adaptive optics and EMCCDs: adaptive optics maximizes the injection rate by reducing the influence of turbulence, and EMCCDs can detect low fluxes. 
FRIEND thus prepares the development of the future instrument SPICA, which should recombine up to six telescopes (Mourard et al. 2017, 2018). SPICA will also have to explore the stabilization of the interference through fringe tracking; that aspect is not addressed in this thesis. In this work, I present the FRIEND prototype, which can recombine up to three telescopes and operates in the R band with dispersed fringes. It uses Gaussian polarization-maintaining single-mode optical fibers and an EMCCD, and is installed at the Center for High Angular Resolution Astronomy (CHARA) array at Mount Wilson, California, which is currently being equipped with adaptive optics. I developed estimators of the visibility modulus and closure phase, the data-reduction method for this prototype, and an observing strategy. With these tools, I showed that adaptive optics improves the injection rate into the fibers, and that stabilizing the injection is important to maximize the signal-to-noise ratio (SNR) in each frame. The birefringence of the fibers degrades the performance of the instrument, but it could be compensated for. By developing and using an SNR model of FRIEND, I showed that an instrument based on its design can measure low visibilities with a precision unreachable by the current generation of instruments. Finally, FRIEND was tested end-to-end on the known binary system ζ Ori A. These observations demonstrate the reliability and accuracy of the interferometric measurements obtained with this prototype, showing the interest of this combination of technologies for future visible interferometers.
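The closure-phase estimator mentioned above exploits the fact that summing fringe phases around a triangle of telescopes cancels the per-telescope atmospheric piston errors. A minimal sketch of the principle (names and numbers are illustrative, not FRIEND's actual estimator):

```python
import cmath

def closure_phase(phi_12, phi_23, phi_31):
    """Closure phase (radians): the sum of the three measured fringe
    phases around a telescope triangle, wrapped to (-pi, pi].
    A piston error e_i at telescope i corrupts each baseline phase as
    phi_ij + (e_i - e_j), but the terms cancel in the closed sum."""
    return cmath.phase(cmath.exp(1j * (phi_12 + phi_23 + phi_31)))
```

For example, true phases (0.3, 0.5, -0.8) close to zero; adding pistons (0.2, -0.4, 1.0) changes the individual baseline phases to (0.9, -0.9, 0.0) but leaves the closure phase at zero, which is why the quantity survives atmospheric turbulence.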
65

Data Reduction based energy-efficient approaches for secure priority-based managed wireless video sensor networks / Approches écoénergétiques basées sur la réduction des données pour les réseaux de capteurs vidéo sans fil

Salim, Christian 03 December 2018 (has links)
The huge amount of data in Wireless Video Sensor Networks (WVSNs), relative to their tiny, resource-limited sensor nodes, increases the energy- and bandwidth-consumption challenges. Controlling the network is another challenge, because of the huge number of images sent at the same time from the sensors to the coordinator. This thesis makes several contributions to overcome these problems, each addressing one or two challenges, as follows. In the first contribution, to reduce energy consumption, a new approach for data aggregation in WVSNs based on shot-similarity functions is proposed. It is deployed on two levels: the video-sensor-node level and the coordinator level. At the sensor-node level, we propose a frame-rate adaptation technique and a similarity function to reduce the number of frames sensed by the sensor nodes and sent to the coordinator. At the coordinator level, after receiving shots from different neighboring sensor nodes, the similarity between these shots is computed to eliminate redundancies. In the second contribution, processing and analysis based on the similarity between frames are added at the sensor-node level, so that only the important frames are sent to the coordinator. Kinematic functions are defined to predict the next step of an intrusion and to schedule the monitoring system accordingly. In the third contribution, concerning the transmission phase, a new algorithm to extract the differences between two images is proposed at the sensor-node level. This contribution also addresses the security challenge by adapting an efficient ciphering algorithm at the sensor-node level. 
In the last contribution, to avoid slow intrusion detection leading to slow reactions from the coordinator, a MAC-layer protocol based on the S-MAC protocol is proposed to control the network. This solution consists of adding a priority bit to the S-MAC protocol to give priority to critical data.
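The frame-similarity idea in the first contribution can be sketched as a simple per-frame filter: transmit a frame only when it differs enough from the last transmitted one. The mean-absolute-difference metric and threshold below are illustrative assumptions, not the thesis's exact functions:

```python
def frame_similarity(a, b):
    """Similarity in [0, 1] between two equal-size grayscale frames
    (flat pixel lists, 0-255), from the mean absolute pixel difference."""
    assert len(a) == len(b)
    mad = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return 1.0 - mad / 255.0

def frames_to_send(frames, threshold=0.95):
    """Keep the first frame, then only frames whose similarity to the
    last transmitted frame falls below the threshold (i.e. that changed)."""
    if not frames:
        return []
    kept = [frames[0]]
    for f in frames[1:]:
        if frame_similarity(kept[-1], f) < threshold:
            kept.append(f)
    return kept
```

A sensor node running such a filter transmits only the kept frames, and the coordinator can apply the same test across shots from neighboring nodes to drop inter-node redundancy.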
66

Impact des raies d'absorption telluriques sur la vélocimétrie de haute précision / Impact of telluric absorption lines on high-precision radial velocimetry

Beauvais, Simon-Gabriel 05 1900 (has links)
In the search for Earth-like exoplanets in the habitable zone of their star, radial velocimetry has proved to be an important tool. 
The most promising candidates for this type of measurement are red dwarfs: their habitable zone lies close to the star, with short orbital periods (a few weeks), and because of their small masses, an Earth-like planet in that zone would induce a signal of about 1 m/s. The Doppler effect resulting from this reflex motion is within the capabilities of spectrographs optimized for high-precision radial velocimetry. However, since red dwarfs emit mostly in the infrared, where Earth's atmosphere has strong absorption lines, removing these telluric lines is crucial to minimize the velocity bias the atmosphere induces on the star's measured radial velocity. The goal of this master's work was to develop an algorithm for high-precision radial-velocity measurements in the infrared, to quantify the impact of telluric absorption lines, and to determine a target for the required level of line removal. A data-processing method based on the analysis of an adequately weighted segmented spectrum was also developed, to optimally extract the radial velocity in the presence of residual telluric lines. We note a correlation between the epoch of the measurements and the radial-velocity uncertainty associated with residual telluric lines, which underlines the importance of the choice of the observation window for reaching a precision of 1 m/s. From this analysis, we conclude that a mask at 80% transmission, coupled with a removal leaving at most 10% of the telluric lines, is required to reach performance better than 1 m/s.
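The weighted segmented-spectrum extraction described above can be illustrated as an inverse-variance weighted mean over per-segment velocities, so that segments carrying residual telluric lines (large uncertainty) contribute little. The weighting scheme here is an assumption for illustration, not the thesis's exact estimator:

```python
def combined_rv(segment_rvs, segment_sigmas):
    """Combine per-segment radial velocities (m/s) into one estimate
    by inverse-variance weighting; returns (velocity, uncertainty).
    Segments degraded by telluric residuals carry large sigma and
    are effectively masked out."""
    weights = [1.0 / s**2 for s in segment_sigmas]
    v = sum(w * rv for w, rv in zip(weights, segment_rvs)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5
    return v, sigma
```

With this scheme a telluric-contaminated segment whose uncertainty is 100 times larger than the others contributes only one ten-thousandth of the weight, which is the sense in which the weighting protects the final 1 m/s velocity.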
67

Caractérisation d’atmosphères d’exoplanètes à haute résolution à l’aide de l’instrument SPIRou et développement de méthodes d’extraction spectrophotométriques pour le télescope spatial James Webb / High-resolution characterization of exoplanet atmospheres with the SPIRou instrument and development of spectrophotometric extraction methods for the James Webb Space Telescope

Darveau-Bernier, Antoine 10 1900 (has links)
The study of exoplanets and their atmospheres has grown phenomenally over the last two decades. Spectrophotometric observations from space observatories such as Hubble have placed constraints on the physical processes and chemical composition of exoplanet atmospheres, notably through eclipse spectroscopy. These discoveries mainly concern the planets most favorable to the technique, among them the hot Jupiters. However, the conclusions drawn from such observations carry degeneracies, caused by their low spectral resolution, restricted wavelength coverage, and limited photometric precision. These shortcomings can be partly remedied by the complementary use of ground-based high-resolution spectrographs and of the new James Webb Space Telescope (JWST). This thesis first presents one of the first combined analyses of spectrophotometric observations taken with Hubble's Wide Field Camera 3 and high-resolution observations taken with the SPIRou instrument (SpectroPolarimètre InfraRouge) at the Canada-France-Hawaii Telescope. The analysis targeted the dayside of the ultra-hot Jupiter WASP-33 b, the second-hottest exoplanet known to date. At the temperatures found in WASP-33 b's atmosphere, around 3000 K, molecules such as water cannot remain stable; CO, which is much more resistant to thermal dissociation, remains observable. The SPIRou data thus confirmed the detection of CO emission lines, in agreement with two previous studies. Combining them with the Hubble data also yielded a first estimate of the CO abundance, with a volume mixing ratio of log CO = -4.07 (+1.51/-0.60). 
In addition, this analysis improved the constraints on the vertical temperature structure, confirming the presence of a stratosphere, and set upper limits on other molecules such as water, TiO, and VO. Second, this thesis presents a spectral-extraction algorithm called ATOCA (Algorithm to Treat Order ContAmination), dedicated to the SOSS (Single Object Slitless Spectroscopy) mode of the NIRISS instrument (Near InfraRed Imager and Slitless Spectrograph), the Canadian contribution to JWST. This observing mode covers the 0.6-2.8 µm wavelength range simultaneously thanks to the presence of the first two diffraction orders on the detector. The need for a new algorithm arises because these two orders, generated by the SOSS grism, overlap on a small portion of the detector: the region of order 1 covering the longest wavelengths (1.6-2.8 µm) is contaminated by order 2, which covers 0.85 to 1.4 µm. 
ATOCA decontaminates both orders by first building a linear model of each individual detector pixel, with the incident flux as the independent variable. This flux is then extracted simultaneously for the two orders by comparing the model to the observations and solving the system in the least-squares sense. This work showed that the contamination can be reduced below 10 ppm for each individual spectrum.
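ATOCA's central step, as described, models each detector pixel as a linear function of the incident flux in the two orders and solves the resulting system by least squares. A toy sketch of that idea (the 3x2 design matrix is a stand-in, not NIRISS's actual throughput model):

```python
import numpy as np

# Toy design matrix: one row per pixel, one column per flux unknown
# (order-1 flux, order-2 flux). Overlap pixels receive light from both.
A = np.array([
    [1.0, 0.0],   # pixel illuminated by order 1 only
    [0.7, 0.3],   # contaminated pixel: both orders contribute
    [0.0, 1.0],   # pixel illuminated by order 2 only
])
true_flux = np.array([10.0, 4.0])   # (order-1 flux, order-2 flux)
pixels = A @ true_flux              # simulated detector reads

# Least-squares solve recovers both orders simultaneously,
# i.e. the decontaminated spectra.
flux_hat, *_ = np.linalg.lstsq(A, pixels, rcond=None)
```

The real problem has one flux unknown per wavelength bin and tens of thousands of pixels, but the structure is the same: an overdetermined linear system solved jointly for both orders.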
68

[en] A NOVEL SELF-ADAPTIVE APPROACH FOR OPTIMIZING THE USE OF IOT DEVICES IN PATIENT MONITORING USING EWS / [pt] UMA NOVA ABORDAGEM AUTOADAPTÁVEL PARA OTIMIZAR O USO DE DISPOSITIVOS IOT NO MONITORAMENTO DE PACIENTES USANDO O EWS

ANTONIO IYDA PAGANELLI 15 May 2023 (has links)
The Internet of Things (IoT) connects the physical world to the Internet, opening the door to many applications, especially in healthcare. These applications require a huge number of sensors that collect information continuously, generating large data flows that are often excessive, redundant, or meaningless for the system's operations. This massive generation of sensor data wastes computational resources on acquiring, transmitting, storing, and processing information, eroding the efficiency of these systems over time. In addition, IoT devices are designed to be small, portable, and battery-powered, for increased mobility and minimal interference with the monitored environment. However, this design also constrains energy consumption, making battery lifetime a significant challenge that needs to be addressed. Furthermore, these systems often operate in unpredictable environments, which can generate redundant and insignificant alarms, rendering them ineffective. A self-adaptive system that identifies and predicts imminent risks using early-warning scores (EWS) can cope with these issues. Due to its low processing cost, an EWS can be embedded in wearable and sensor devices, allowing better management of sampling rates, transmissions, alarm production, and energy consumption. 
Following this idea, this thesis presents a solution that combines an EWS with a self-adaptive algorithm for IoT patient-monitoring applications, reducing data acquisition and transmission, decreasing non-actionable alarms, and saving energy on these devices. In addition, we designed and developed a hardware prototype capable of embedding our proposal, attesting to its technical feasibility. Using this wearable prototype, we collected real energy-consumption data for the hardware components and used it in simulations driven by real patient data from public datasets. Our experiments demonstrated substantial benefits from this approach: an 87 percent reduction in sampled data, a 99 percent reduction in the total payload transmitted from the monitoring device, 78 percent fewer alarms, and energy savings of almost 82 percent. The fidelity of monitoring the patients' clinical status showed a mean total absolute error of 6.8 percent (±5.5 percent), which dropped to 3.8 percent (±2.8 percent) in a configuration with smaller data-reduction gains. Depending on the configured frequencies and time windows, the total loss of alarm detection remained between 0.5 percent and 9.5 percent, with alarm-type accuracy between 89 percent and 94 percent. 
In conclusion, this work presents an approach for the more efficient use of computational, communication, and energy resources in IoT-based patient-monitoring applications.
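The score-driven adaptation described in this abstract can be sketched as follows. This is a minimal illustration, not the thesis's actual algorithm: the scoring bands loosely follow NEWS-style vital-sign ranges, and the sampling-interval bounds are hypothetical.

```python
# Sketch of EWS-driven adaptive sampling. The scoring thresholds and
# interval bounds below are illustrative assumptions, not taken from
# the thesis.

def ews_score(hr, spo2, resp_rate):
    """Score each vital sign 0-3 (higher = worse) and sum, loosely
    following NEWS-style bands; the thresholds are illustrative."""
    score = 0
    # Heart rate (beats per minute)
    if hr <= 40 or hr >= 131:
        score += 3
    elif 111 <= hr <= 130:
        score += 2
    elif hr <= 50 or 91 <= hr <= 110:
        score += 1
    # Oxygen saturation (percent)
    if spo2 <= 91:
        score += 3
    elif spo2 <= 93:
        score += 2
    elif spo2 <= 95:
        score += 1
    # Respiratory rate (breaths per minute)
    if resp_rate <= 8 or resp_rate >= 25:
        score += 3
    elif 21 <= resp_rate <= 24:
        score += 2
    return score

def next_sampling_interval(score, min_s=10, max_s=300):
    """Map the aggregate score to a sampling interval in seconds:
    high risk -> sample often, low risk -> back off to save energy
    and reduce transmissions."""
    if score >= 7:
        return min_s
    if score >= 5:
        return 60
    if score >= 1:
        return 120
    return max_s

# A stable patient is sampled at the slowest rate
print(next_sampling_interval(ews_score(hr=72, spo2=98, resp_rate=16)))
```

The same score can gate alarm emission and message batching, which is how a single cheap computation lets the device manage sampling, transmission, and alerting at once.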
69

Approximations de rang faible et modèles d'ordre réduit appliqués à quelques problèmes de la mécanique des fluides / Low rank approximation techniques and reduced order modeling applied to some fluid dynamics problems

Lestandi, Lucas 16 October 2018 (has links)
Numerical simulation has experienced tremendous improvements in recent decades, driven by the massive growth of computing power. Exascale computing has been achieved this year and will allow solving ever more complex problems. But such large systems produce colossal amounts of data, which leads to difficulties of its own. Moreover, many engineering problems, such as multiphysics or optimisation and control, require far more power than any computer architecture could achieve within the current scientific-computing paradigm. In this thesis, we propose to shift the paradigm in order to break the curse of dimensionality by introducing decompositions and building reduced-order models (ROM) for complex fluid flows. 
This manuscript is organized into two parts. The first proposes an extended review of data-reduction techniques and intends to bridge the applied-mathematics and computational-mechanics communities. Thus, the founding bivariate separation is studied, including a discussion of the equivalence of proper orthogonal decomposition (POD, continuous framework) and singular value decomposition (SVD, discrete matrices). Then a wide review of tensor formats and their approximation is proposed. Such work has already been provided in the literature, but either across separate papers or within a purely applied-mathematics framework. Here, we offer the data-enthusiast scientist a comparison of the Canonical, Tucker, Hierarchical, and Tensor-Train formats, including their approximation algorithms. Their relative benefits are studied both theoretically and numerically thanks to the Python library pydecomp developed during this thesis. A careful analysis of the link between continuous and discrete methods is performed. Finally, we conclude that for most applications ST-HOSVD is best when the number of dimensions $d$ is lower than four, and TT-SVD (or its POD equivalent) when $d$ grows larger. 
The second part is centered on a complex fluid-dynamics flow, in particular the singular lid-driven cavity at high Reynolds number. This flow exhibits a series of Hopf bifurcations that are known to be hard to capture accurately, which is why a detailed analysis was performed both with classical tools and with POD. Once this flow was characterized, "time-scaling", a new physics-based interpolation ROM, is presented on internal and external flows. This method gives encouraging results while excluding recent advanced developments in the area, such as EIM or Grassmann-manifold interpolation.
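The POD/SVD equivalence mentioned in this abstract can be sketched in a few lines: the POD modes of a snapshot matrix are its left singular vectors. The data below is a synthetic low-rank stand-in, not the lid-driven cavity fields, and the sketch does not use the thesis's pydecomp library.

```python
# Sketch of the POD <-> SVD link: POD modes of a snapshot matrix are
# the left singular vectors of its SVD. Synthetic low-rank data is
# used here as a stand-in for actual flow snapshots.
import numpy as np

rng = np.random.default_rng(0)
n_space, n_time, rank = 200, 50, 3

# Build an (almost exactly) rank-3 snapshot matrix plus tiny noise
U_true = rng.standard_normal((n_space, rank))
V_true = rng.standard_normal((rank, n_time))
snapshots = U_true @ V_true + 1e-6 * rng.standard_normal((n_space, n_time))

# POD = truncated SVD of the snapshot matrix
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
modes = U[:, :rank]                       # spatial POD modes
coeffs = np.diag(s[:rank]) @ Vt[:rank]    # temporal coefficients

# Rank-r reconstruction error is tiny because the data is nearly rank-r
err = np.linalg.norm(snapshots - modes @ coeffs) / np.linalg.norm(snapshots)
print(err < 1e-4)
```

For tensors of order $d > 2$ the same idea generalizes to the formats compared in the thesis: Tucker/ST-HOSVD applies SVDs along each mode, while TT-SVD chains SVDs of sequential matricizations so that storage grows linearly in $d$.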
70

Metody sumarizace dokumentů na webu / Methods of Document Summarization on the Web

Belica, Michal January 2013 (has links)
The work deals with the automatic summarization of documents in HTML format, with Czech chosen as the language of the web documents. The project focuses on text-summarization algorithms. The work also covers document preprocessing for summarization and the conversion of text into a representation suitable for the summarization algorithms. General text mining is briefly discussed, but the project concentrates on automatic document summarization. Two simple summarization algorithms are introduced; the main attention is then paid to an advanced algorithm that uses latent semantic analysis. The result of the work is the design and implementation of a summarization module for the Python language. The final part of the work contains an evaluation of the summaries generated by the implemented summarization methods and a subjective comparison by the author.
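The latent-semantic-analysis approach this abstract refers to can be sketched as follows: build a term-sentence matrix, take its SVD, and rank sentences by their weight in the leading latent topics. This is a generic illustration (with English toy sentences and a naive tokenizer), not the thesis's actual module.

```python
# Minimal LSA summarization sketch: score each sentence by its
# singular-value-weighted loading on the leading latent topics and
# pick the top-scoring sentence. Toy data; illustrative only.
import numpy as np

sentences = [
    "The cat sat on the mat.",
    "Dogs and cats are common pets.",
    "The stock market fell sharply today.",
    "Pets bring joy to many families.",
]

def tokens(text):
    """Naive tokenizer: lowercase words with trailing punctuation stripped."""
    return [w.strip(".,").lower() for w in text.split()]

token_lists = [tokens(s) for s in sentences]
vocab = sorted({w for ts in token_lists for w in ts})

# Term-sentence count matrix: rows are terms, columns are sentences
A = np.array([[ts.count(w) for ts in token_lists] for w in vocab], float)

# SVD: rows of Vt describe how strongly each sentence loads on each topic
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Score sentences over the k leading topics (Steinberger-Jezek style)
k = 2
scores = np.sqrt((np.square(Vt[:k]).T * np.square(s[:k])).sum(axis=1))
summary = sentences[int(np.argmax(scores))]
print(summary)
```

A real pipeline would replace the naive tokenizer with Czech-aware preprocessing (stemming or lemmatization and stop-word removal) before building the matrix, which is exactly the preprocessing stage the work describes.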
