  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Antal tvärsektioners påverkan på djupmodeller producerad av SeaFloor HydroLite™ enkelstråligt ekolod : En jämförelse mot djupmodeller producerad av Kongsberg EM 2040P MKII flerstråligt ekolod / The influence of the number of cross-sections on depth models produced by the SeaFloor HydroLite™ single beam echosounder : A comparison against depth models produced by the Kongsberg EM 2040P MKII multi beam echosounder

Hägg, Linnéa, Stenberg Jönsson, Simon January 2023 (has links)
Hydroacoustic measurements have been conducted for almost two hundred years. They can be compared to topographic measurements on land and show the appearance of lake or ocean floors. Today, echosounders are used: a technique that sends sound waves into the water and measures the time it takes for the sound to bounce off the bottom and return to the instrument. Sound velocity calculations can then be used to compute the depth. The use of cross-sections is recommended as a data control for single beam echosounder surveys. Multi beam echosounders, however, do not need cross-sections, since the overlap between survey lines is used as control. This study examines how the number of cross-sections affects depth maps created by the SeaFloor HydroLite™ single beam echosounder. It also investigates how depth maps produced by the SeaFloor HydroLite™ single beam echosounder differ from depth maps produced by the Kongsberg EM 2040P MKII multi beam echosounder. The study area covers 1,820 m² and is located at Forsbacka Harbor in Storsjön, Gävle municipality. A minimum overlap of 50% was used for the multi beam survey, and five main lines and seven cross-sections were measured with the single beam echosounder. Depth maps with different numbers of cross-sections were created in Surfer 10 from the single beam data, and these were compared to maps created from the multi beam data to assess the differences between the systems and the impact of the number of cross-sections on the single beam depth maps. Using the multi beam echosounder as a reference for the single beam depth maps, the results showed a decrease of 1 cm in RMS value and 2 cm in standard uncertainty. The comparison between the echosounder systems revealed a difference of around 10 cm in depth values. The conclusions from this study are that cross-sections only marginally improve the quality of depth maps over even and uniform bottom topography, but serve an important function in validating the quality of the survey data. Additionally, the SeaFloor HydroLite™ is capable of meeting Order 1b at depths of around one to four meters if the requirement for full bottom coverage is not considered. The SeaFloor HydroLite™ produces an overview depth map, while the depth models from the Kongsberg EM 2040P MKII show more detail.
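The two-way travel-time relation described in the abstract is simple enough to sketch. A minimal illustration, assuming a typical fresh-water sound speed (the value below is illustrative, not a figure from the thesis):

```python
# Hedged sketch of the depth computation behind echo sounding; the sound
# speed is an illustrative fresh-water value, not a figure from the thesis.
def depth_from_echo(travel_time_s: float, sound_speed_m_s: float = 1450.0) -> float:
    """Depth is half the two-way travel time times the speed of sound in water."""
    return 0.5 * travel_time_s * sound_speed_m_s

# A ping returning after 4 ms at ~1450 m/s implies a depth of 2.9 m.
print(depth_from_echo(0.004))
```

In practice the sound speed varies with temperature, salinity and depth, which is why surveys measure a sound velocity profile rather than assuming a constant.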
102

An Estimation Technique for Spin Echo Electron Paramagnetic Resonance

Golub, Frank 29 August 2013 (has links)
No description available.
103

Algoritmer och filterbubblors påverkan på sociala plattformars politiska innehåll : Hur ser filterbubblans livscykel ut? / Algorithms and filter bubbles' effect on the political content of social platforms : What does the filter bubble life cycle look like?

Brynjarsson, Aron Már, Hallberg Wotango, Lucas January 2022 (has links)
In recent years, filter bubbles have become a well-known concept, referring to the bubbles of content that users are thrown into on social media. Within these bubbles there is a concern that posts are one-sided and can lock us into, among other things, political opinions. Filter bubbles are therefore considered by many to contribute to increased polarization, not only because of the posts that are actually displayed to us, but also because of the posts that the algorithm chooses not to display in our news feed. Although awareness of filter bubbles has increased, there are few studies on what the path into a filter bubble looks like. Firstly, through a quantitative experiment, we provide users of social platforms with a clearer picture of the filter bubble's life cycle, i.e. how long it takes to end up in a filter bubble and how users can get out of it. Secondly, we test the theories and methodological difficulties in previous research on how users can potentially leave or change a filter bubble. Our results show that it is quick to get into a filter bubble, whereas getting out of a filter bubble or switching to another turns out to be more difficult than we expected and therefore requires more time. Since there is no previous research on what a filter bubble looks like, or on when something counts as a filter bubble, we have created a measure for this based on the results of our own experiment.
104

Ultrasound Imaging Velocimetry using Polyvinyl Alcohol Shelled Microbubbles / Ultrasound imaging velocimetry användande mikrobubblor med ett polyvinylalkoholskal

Johansson, Ida January 2022 (has links)
Current research within the field of ultrasound contrast agents (UCAs) aims at developing capsules which are not only acoustically active, but also have a chemically modifiable surface. This enables use in new areas, including targeted drug delivery and theranostics. For such purposes, air-filled microbubbles (MBs) with a polyvinyl alcohol (PVA) shell are being studied. Ultrasound imaging velocimetry (UIV) is a technique used to evaluate various types of liquid flows by tracking patterns caused by UCAs across ultrasound images, and has shown great potential for flow measurements in terms of accuracy. The aim of this thesis was to implement a basic UIV program in Matlab to investigate the flow behavior of air-filled PVA MBs being pumped through a phantom mimicking a blood vessel. The images were acquired using the programmable Verasonics research system by plane wave imaging with coherent compounding, and UIV was implemented as a post-processing technique. Three parameters were varied to study how the UIV performance and flow behavior of the MBs were affected: the concentration of MBs, the flow velocity, and the transducer voltage. The resulting velocity vector fields showed that it is possible to track PVA MBs using the implemented UIV program, and that a concentration of 5·10⁶ MBs/ml gave the best results out of the five concentrations tested. The generated velocity vector fields indicated a turbulent and pulsatile flow behavior, in line with the predicted flow behavior, although there was a disparity between the measured average flow velocity of the MBs and the predicted flow velocity. It was also observed that the MBs were increasingly pushed in the axial direction with increasing voltage, in accordance with theory. Even though a more advanced UIV algorithm could improve the accuracy of the velocity measurements, the results demonstrate the possible use of air-filled PVA MBs in combination with UIV.
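UIV implementations typically estimate local velocities by cross-correlating small interrogation windows between consecutive frames; the correlation peak's displacement, divided by the frame interval, gives a velocity vector. A minimal FFT-based sketch on synthetic data (the thesis program was written in Matlab; this NumPy version and its window sizes are illustrative, not the author's code):

```python
import numpy as np

def window_displacement(win_a: np.ndarray, win_b: np.ndarray) -> tuple:
    """Integer-pixel shift of the pattern in win_b relative to win_a,
    from the peak of their FFT-based (circular) cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.real(np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts beyond half the window wrap around and are reported as negative.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Synthetic check: a speckle-like pattern moved 3 px axially, 1 px laterally.
rng = np.random.default_rng(0)
frame_a = rng.random((32, 32))
frame_b = np.roll(frame_a, (3, 1), axis=(0, 1))
print(window_displacement(frame_a, frame_b))  # (3, 1)
```

Real UIV adds sub-pixel peak interpolation and outlier rejection on top of this basic step.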
105

T₂ mapping of the heart with a double-inversion radial fast spin-echo method with indirect echo compensation

Hagio, T., Huang, C., Abidov, A., Singh, J., Ainapurapu, B., Squire, S., Bruck, D., Altbach, M. I. January 2015 (has links)
BACKGROUND: The abnormal signal intensity in cardiac T₂-weighted images is associated with various pathologies including myocardial edema. However, the assessment of pathologies based on signal intensity is affected by the acquisition parameters and the sensitivities of the receiver coils. T₂ mapping has been proposed to overcome limitations of T₂-weighted imaging, but most methods are limited in spatial and/or temporal resolution. Here we present and evaluate a double inversion recovery radial fast spin-echo (DIR-RADFSE) technique that yields data with high spatiotemporal resolution for cardiac T₂ mapping. METHODS: DIR-RADFSE data were collected at 1.5 T on phantoms and subjects with echo train length (ETL) = 16, receiver bandwidth (BW) = ±32 kHz, TR = 1 RR, matrix size = 256 × 256. Since only 16 views per echo time (TE) are collected, two algorithms designed to reconstruct highly undersampled radial data were used to generate images for 16 time points: the Echo-Sharing (ES) algorithm and the CUrve Reconstruction via pca-based Linearization with Indirect Echo compensation (CURLIE) algorithm. T₂ maps were generated via least-squares fitting or Slice-resolved Extended Phase Graph (SEPG) model fitting. The CURLIE-SEPG algorithm accounts for the effect of indirect echoes. The algorithms were compared based on reproducibility, using Bland-Altman analysis on data from 7 healthy volunteers, and on T₂ accuracy (against a single-echo spin-echo technique) using phantoms. RESULTS: Both reconstruction algorithms generated in vivo images with high spatiotemporal resolution and showed good reproducibility. The mean T₂ difference between repeated measures and the coefficient of repeatability were 0.58 ms and 2.97 for ES and 0.09 ms and 4.85 for CURLIE-SEPG. In vivo T₂ estimates from ES were higher than those from CURLIE-SEPG.
In phantoms, CURLIE-SEPG yielded more accurate T₂s compared to reference values (error was 7.5-13.9% for ES and 0.6-2.1% for CURLIE-SEPG), consistent with the fact that CURLIE-SEPG compensates for the effects of indirect echoes. The potential of T₂ mapping with CURLIE-SEPG is demonstrated in two subjects with known heart disease. Elevated T₂ values were observed in areas of suspected pathology. CONCLUSIONS: DIR-RADFSE yielded TE images with high spatiotemporal resolution. Two algorithms for generating T₂ maps from highly undersampled data were evaluated in terms of accuracy and reproducibility. Results showed that CURLIE-SEPG yields T₂ estimates that are reproducible and more accurate than ES.
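Indirect-echo compensation aside, the baseline pixel-wise operation in T₂ mapping is a mono-exponential fit of the TE images, S(TE) = S₀·exp(−TE/T₂). A minimal least-squares sketch on noise-free synthetic data (the echo spacing and T₂ value below are illustrative, not the study's acquisition parameters):

```python
import numpy as np

def fit_t2(te_ms, signal):
    """Mono-exponential T2 fit, S(TE) = S0 * exp(-TE/T2), via linear
    least squares on log(signal). Returns (S0, T2_ms). Assumes S > 0."""
    te = np.asarray(te_ms, dtype=float)
    log_s = np.log(np.asarray(signal, dtype=float))
    # log S = log S0 - TE/T2 is a straight line in TE.
    slope, intercept = np.polyfit(te, log_s, 1)
    return float(np.exp(intercept)), float(-1.0 / slope)

# Synthetic 16-echo train (ETL = 16, 8 ms echo spacing), true T2 = 50 ms.
te = np.arange(1, 17) * 8.0
signal = 1000.0 * np.exp(-te / 50.0)
s0, t2 = fit_t2(te, signal)
print(round(t2, 1))  # 50.0
```

The SEPG model replaces this simple exponential with a slice-profile-aware echo-train signal model, which is what lets CURLIE-SEPG correct the bias that indirect echoes introduce into such fits.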
106

A General-Purpose GPU Reservoir Computer

Keith, Tūreiti January 2013 (has links)
The reservoir computer comprises a reservoir of possibly non-linear, possibly chaotic dynamics. By perturbing and taking outputs from this reservoir, its dynamics may be harnessed to compute complex problems at “the edge of chaos”. One of the first forms of reservoir computer, the Echo State Network (ESN), is a form of artificial neural network that builds its reservoir from a large and sparsely connected recurrent neural network (RNN). The ESN was initially introduced as an innovative way to train RNNs, which up until that point had been a notoriously difficult task. The innovation of the ESN is that, rather than training the RNN weights, only the output is trained. If this output is assumed to be linear, then linear regression may be used. This work presents an implementation of the Echo State Network, together with an offline linear regression training method based on Tikhonov regularisation, targeting the general-purpose graphics processing unit (GPU or GPGPU). The behaviour of the implementation was examined by comparing it with a central processing unit (CPU) implementation, and by assessing its performance on several studied learning problems. These assessments were performed using all 4 cores of the Intel i7-980 CPU and an Nvidia GTX480. When compared with the CPU implementation, the GPU ESN implementation demonstrated a speed-up starting from a reservoir size of between 512 and 1,024, with a maximum speed-up of approximately 6 observed at the largest reservoir size tested (2,048). The Tikhonov regularisation (TR) implementation was also compared with a CPU implementation. Unlike the ESN execution, the GPU TR implementation was largely slower than the CPU implementation; speed-ups were observed only at the largest reservoir and state history sizes, the largest being 2.6813. The learning behaviour of the GPU ESN was tested on three problems: a sinusoid, a Mackey-Glass time-series, and a multiple superimposed oscillator (MSO).
The normalised root-mean squared errors of the predictors were compared. The best observed sinusoid predictor outperformed the best MSO predictor by 4 orders of magnitude. In turn, the best observed MSO predictor outperformed the best Mackey-Glass predictor by 2 orders of magnitude.
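The training scheme described above, a fixed random reservoir with a linear readout fitted offline by Tikhonov-regularised regression, can be sketched compactly. All sizes and the sinusoid task below are illustrative (the thesis ran reservoirs up to 2,048 units on a GTX480); this is a plain NumPy sketch, not the GPU implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative sizes; the thesis used much larger reservoirs on a GPU.
n_res, washout, lam = 200, 100, 1e-6

# Sparse random reservoir, rescaled to spectral radius 0.9 (echo state property).
W = rng.uniform(-0.5, 0.5, (n_res, n_res)) * (rng.random((n_res, n_res)) < 0.05)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, n_res)

# Toy task: one-step-ahead prediction of a sinusoid.
u = np.sin(np.arange(1000) / 10.0)
x = np.zeros(n_res)
states = []
for t in range(len(u) - 1):
    x = np.tanh(W @ x + W_in * u[t])
    states.append(x.copy())
X = np.array(states[washout:]).T      # state matrix, shape (n_res, T)
y = u[washout + 1:]                   # teacher signal

# Offline readout training: Tikhonov-regularised linear regression,
# W_out = y X^T (X X^T + lam I)^(-1), solved without an explicit inverse.
W_out = np.linalg.solve(X @ X.T + lam * np.eye(n_res), X @ y)
rmse = np.sqrt(np.mean((W_out @ X - y) ** 2))
print(f"readout RMSE: {rmse:.2e}")
```

Only `W_out` is learned; `W` and `W_in` stay fixed, which is why the training reduces to one (GPU-friendly) dense linear-algebra solve.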
107

Mechanistic Studies on the Electrochemistry of Proton Coupled Electron Transfer and the Influence of Hydrogen Bonding

Alligrant, Timothy 30 June 2010 (has links)
This research has investigated proton-coupled electron transfer (PCET) in quinone/hydroquinone and other simple organic PCET species in order to further the knowledge of the thermodynamic and kinetic effects of reduction and oxidation in such systems. Each of these systems was studied by adding various acid/base chemistries to influence the thermodynamics and kinetics upon electron transfer. The expectation is that the knowledge of acid/base catalysis in electrochemistry gleaned from these studies might be applied in fuel cell research, chemical synthesis, and the study of enzymes within biological systems, or simply advance the understanding of acid/base catalysis in electrochemistry. Furthermore, it was the intention of this work to evaluate a system involving concerted proton-electron transfer (CPET), because this is the process by which enzymes are believed to catalyze PCET reactions. However, none of the investigated systems were found to transfer an electron and a proton by concerted means. Another goal of this work was to investigate a system where hydrogen bond formation could be controlled or studied via electrochemical methods, in order to understand the kinetic and thermodynamic effects complexation has on PCET systems. This goal was met, which allowed for the establishment of in situ studies of hydrogen bonding via 1H-NMR methods, a prospect that is virtually unknown in the study of PCET systems in electrochemistry, yet widely used in fields such as supramolecular chemistry. Initial studies involved the addition of Brønsted bases (amines and carboxylates) to hydroquinones (QH2's). The addition of the conjugate acids to quinone solutions was used to assist in determining the oxidation processes involved between the Brønsted bases and QH2's. Later work involved the study of systems initially believed to be less intricate in their oxidation/reduction than the quinone/hydroquinone system.
The addition of amines (pyridine, triethylamine and diisopropylethylamine) to QH2's in acetonitrile produced a thermodynamic shift of the voltammetric peaks of QH2 to more negative oxidation potentials, meaning that the oxidation of QH2 is thermodynamically more facile in the presence of amines. Conjugate acids were also added to quinone, which resulted in a shift of the reduction peaks to more positive potentials. To assist in the determination of the oxidation process, the six pKa's of the quinone nine-membered square scheme were determined. 1H-NMR spectra and diffusion measurements also helped establish that none of the added species hydrogen bond with the hydroquinones or quinone. The observed oxidation process of the amines with the QH2's was determined to be a CEEC process, while the observed reduction process upon addition of the conjugate acids to quinone was found to proceed via an ECEC process without the influence of a hydrogen bond interaction between the conjugate acid and quinone. Addition of carboxylates (trifluoroacetate, benzoate and acetate) to QH2's in acetonitrile resulted in a thermodynamic shift similar to that found with the amines. However, depending on the concentration of the added acetate and the QH2 being oxidized, either two oxidation peaks or one were found. Two acetate concentrations were studied, 10.0 mM and 30.0 mM. From 1H-NMR spectra and diffusion measurements, the addition of acetates to QH2 solutions causes the phenolic proton peak to shift from 6.35 ppm to as much as ~11 ppm, while the measured diffusion coefficient decreases by as much as 40%, relative to QH2 alone in deuterated acetonitrile (ACN-d3). From the phenolic proton peak shift caused by titration of each of the acetates, either a 1:1 or 1:2 binding equation could be applied and the association constants could be determined.
The oxidation process involved in the voltammetry of the QH2's with the acetates at both 10.0 and 30.0 mM was determined via voltammetric simulations. The oxidation process at 10.0 mM acetate is a mixed process involving both oxidation of QH2 complexes and proton transfer from an intermediate radical species. At 30.0 mM acetate, however, the oxidation of QH2-acetate complexes was observed to follow an ECEC process, while on the reverse (reduction) scan the process was determined to be a CECE process. Furthermore, the observed voltammetry was compared to that of the QH2's with amines. From this comparison it was determined that the presence of hydrogen bonds imparts a thermodynamic influence on the oxidation of QH2, oxidation via a hydrogen bond mechanism being slightly easier. In order to understand the proton transfer process observed at 10.0 mM acetate with 1,4-QH2, and the transition from a hydrogen-bond-dominated oxidation to a proton-transfer-dominated oxidation, conjugate acids were added directly to QH2 and acetate solutions. Two acetate/conjugate acid ratios were the focus of this study, one at 10.0 mM/25.0 mM and another at 30.0 mM/50.0 mM. Voltammetric and 1H-NMR studies showed that addition of the conjugate acids effects a transition from a hydrogen bond oxidation to a proton transfer oxidation; under these conditions the predominant oxidized species is uncomplexed QH2, and the proton acceptor is the homoconjugate of the particular acetate being studied. Furthermore, voltammetry of QH2 in these solutions resembles that measured with the QH2's and added amines, as determined by scan rate analysis. In an attempt to understand a less intricate redox-active system under aqueous conditions, two viologen-like molecules, which undergo a six-membered fence scheme reduction, were studied under buffered and unbuffered conditions.
One of these molecules, N-methyl-4,4'-bipyridyl chloride (NMBC+), was observed to be reduced reversibly, while the other, 1-(4-pyridyl)pyridinium chloride (PPC+), underwent irreversible reduction. The study of these molecules was accompanied by digital simulations of a hypothetical four-membered square scheme redox system. In unbuffered solutions each species, both experimental and hypothetical, was observed to be reduced at either less negative (low pH) or more negative (high pH) potentials, depending on the formal potentials, the pKa's of the particular species, and the solution pH. The presence of buffer components causes the voltammetric peaks to shift thermodynamically from a less negative potential (low pH buffer) to a more negative potential (high pH buffer). Both of these observations have been noted previously in the literature; however, to our knowledge there has been no mention of kinetic effects. In unbuffered solutions the reduction peaks were found to separate near pKa,1, while in buffered solutions there was a noted peak separation throughout the pH region defined by the first and second pKa's (pKa,1 and pKa,2) of the species under study. The cause of this kinetic influence was the transition from a CE reduction at low pH to an EC reduction process at high pH in both buffered and unbuffered systems. This effect was further amplified in the study of the hypothetical species by decreasing the rate of proton transfer. To extend this work, some preliminary studies involving the attachment of acid/base species at the electrode surface and the electromediated oxidation of phenol-acetate complexes have also been carried out. The attachment of acid/base species at the surface is expected to assist in the observation of heterogeneous acid/base catalysis, similar to that observed with homogeneous acid/base additions to quinone/hydroquinone systems.
Furthermore, future experiments involving the electromediated oxidation of phenol-acetate complexes by inorganic species will advance our efforts to visualize a concerted mechanism. It may be possible to interrogate the various intermediates more efficiently via homogeneous electron-proton transfer rather than heterogeneous electron transfer/homogeneous proton transfer.
108

Imagerie quantitative du dépôt d’aérosols dans les voies aériennes du petit animal par résonance magnétique / Quantitative imaging of aerosol deposition in small animal airways using magnetic resonance imaging

Wang, Hongchen 13 March 2015 (has links)
Cette thèse s’inscrit dans le projet OxHelease (ANR-TecSan 2011) qui vise à étudier l’impact de l’inhalation de l’hélium-oxygène sur la ventilation, l’oxygénation sanguine, le dépôt d’aérosol dans l’asthme et l’emphysème. Dans ce cadre, ce travail de thèse a consisté à mettre au point des méthodes d’imagerie par résonance magnétique pour quantifier les dépôts d’aérosols chez le rat. L’administration de médicaments par voie inhalée est une approche possible pour le traitement des maladies pulmonaires comme les broncho-pneumopathies chroniques obstructives. C’est également une voie intéressante pour l’administration systémique de médicaments en raison d’un transfert potentiellement rapide dans le sang. Néanmoins, le transport et les dépôts de particules dans les poumons sont complexes et difficiles à prédire, à cause de la dépendance de nombreux paramètres, tels que le protocole d’administration, la morphologie des voies aériennes, le profil respiratoire, ou encore les propriétés aérodynamiques du gaz et des particules. Pour mieux maîtriser cette voie d’administration de médicaments, des outils d’imagerie peuvent être utilisés. L’IRM est moins conventionnelle que d’autres approches pour caractériser le poumon, mais les progrès techniques et les multiples mécanismes de contraste exploitables peuvent être mis à profit pour ce faire.Pour obtenir un signal exploitable du parenchyme pulmonaire chez le rat, une séquence IRM à temps d’écho court a été mise en place sur un système clinique à 1,5 T. Cette technique a été combinée à une administration de courte durée d’un aérosol de chélate de Gadolinium en respiration spontanée. Le mécanisme de contraste principal utilisé est la modification du temps de relaxation longitudinale induisant un rehaussement du signal et qui permet d’estimer la concentration locale avec une résolution spatiale de (0,5 mm)3 et temporelle de 7,5 min permettant également de suivre l’élimination pulmonaire au cours du temps. 
La sensibilité de cette approche (seuil de détection de l’ordre de 20 µM) a été déterminée et pour cela des méthodes d’analyses spécifiques globales et locales incluant des segmentations, des analyses de distributions et des statistiques ont été développées. Après validation sur des rats sains, pour lesquels un rehaussement moyen de 50%, une distribution homogène de dépôt et une dose totale relativement faible (~1 µmol/kg de poids corporel) ont été observés, cette modalité d’imagerie a pu être appliquée chez des modèles asthmatiques et emphysémateux qui ont montrés des différences significatives de certains paramètres comme l’homogénéité des dépôts ou la cinétique d’élimination. Par ailleurs, des résultats préliminaires de mise en place d’une étude multimodale, où l’IRM est comparée à la tomodensitométrie et à l’imagerie nucléaire sur les mêmes animaux a été effectuée. Enfin, dans une optique d’évaluation de la faisabilité d’approches quantitatives par IRM, un système double noyaux proton-fluor pour déterminer la sensibilité de l’imagerie de gaz et d’aérosols fluorés a été implémenté et testé sur des rats.Ces approches par IRM ouvrent des perspectives pour permettre la caractérisation in vivo des dépôts de particules inhalées dans des conditions d’administration variées et leur sensibilité suggère un transfert potentiel chez l’homme / This PhD thesis is part of the OxHelease project (ANR-TecSan 2011) that aims to study the impact of helium-oxygen inhalation on ventilation, blood oxygenation, and aerosol deposition in chronic obstructive respiratory diseases, such as asthma and emphysema. In this context, this work consisted of developing magnetic resonance imaging methods to quantify aerosol deposition in rat lung.The inhalation of pharmaceutical aerosols is an attractive approach for the treatment of lung diseases such as chronic obstructive pulmonary diseases. 
This is also an interesting route for the treatment of systemic disorders with the potentially fast drug transfer into circulation. However, the transport and the deposition of particles within the lungs are complex and difficult to predict, since deposition patterns depend on a number of parameters, such as administration protocols, airway geometries, inhalation patterns, and gas and aerosol aerodynamic properties. Thus, understanding drug delivery through the lungs requires imaging methods to quantify particle deposition. MRI is less conventional than other approaches for lung characterization, but the technical advances and the multiple contrast mechanisms render lung imaging more feasible.To obtain exploitable signal from the lung parenchyma of the rat, an ultra-short echo (UTE) sequence was implemented on a 1.5 T clinical system. This technique was combined with a Gadolinium-based aerosol nebulization of short duration in spontaneously breathing rats. The main contrast mechanism used here is the modification of the longitudinal relaxation time yielding signal enhancement and allowing to assess the local concentration with a spatial resolution of (0.5 mm)3 and a temporal resolution of 7.5 min enabling to quantitatively follow up lung clearance. The sensitivity of this approach (with a detection limit close to 20 µM) was determined. To do so several specific processing methods were developed for local and total lung evaluation, including segmentation, distribution analysis and statistics. After validation in the healthy rats, for which a signal enhancement of 50% on average, a homogenous distribution of deposition and a relatively low total deposited dose (~1 µmol/kg body weight) were observed, this imaging modality could be applied in asthmatic and emphysematous animal models. Significant differences were obtained such as homogeneity of deposition or clearance. 
Moreover, preliminary results of a multimodal study, in which MRI was compared with computed tomography and with nuclear medicine imaging in the same animals, were obtained. Finally, in order to evaluate the feasibility of other potential quantitative MRI approaches, a dual-nucleus proton/fluorine system was implemented and tested in rats to determine the sensitivity of fluorine-based gas and aerosol imaging. These MRI strategies may be applied to the in vivo characterization of inhaled particle deposition under variable administration conditions, and their sensitivity suggests a feasible translation to humans.
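The T1-based quantification described above follows the standard fast-exchange relaxivity relation 1/T1,post = 1/T1,pre + r1·C. A minimal sketch in Python; the T1 and relaxivity values below are typical orders of magnitude assumed for illustration, not figures from this thesis:

```python
def gd_concentration_mM(t1_pre_s: float, t1_post_s: float, r1_per_s_mM: float) -> float:
    """Estimate contrast-agent concentration (mM) from T1 shortening.

    Uses the standard fast-exchange relaxivity relation:
        1/T1_post = 1/T1_pre + r1 * C
    """
    return (1.0 / t1_post_s - 1.0 / t1_pre_s) / r1_per_s_mM


# Assumed illustrative values: lung-tissue T1 ~ 1.2 s at 1.5 T, a Gd-chelate
# relaxivity r1 ~ 3.5 s^-1 mM^-1, and a post-nebulization T1 of 1.0 s.
c = gd_concentration_mM(1.2, 1.0, 3.5)
print(f"estimated local concentration: {c * 1e3:.0f} µM")  # ~48 µM, above the ~20 µM detection limit
```

With these assumed numbers, a modest T1 shortening of 0.2 s already corresponds to a concentration more than twice the reported detection threshold.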

Machine Learning for Air Flow Characterization : An application of Theory-Guided Data Science for Air Flow characterization in an Industrial Foundry / Maskininlärning för Luftflödeskarakterisering : En applikation för en Teorivägledd Datavetenskapsmodell för Luftflödeskarakterisering i en Industrimiljö

Lundström, Robin January 2019 (has links)
In industrial environments, operators are exposed to polluted air, which after constant exposure can cause irreversible lethal diseases such as lung cancer. Current air monitoring is carried out sparsely, either on a single day annually or at a few measurement positions for a few days. In this thesis a theory-guided data science (TGDS) model is presented. This hybrid model combines a steady-state Computational Fluid Dynamics (CFD) model with a machine learning model. Both the CFD model and the machine learning algorithm were developed in Matlab. The CFD model serves as a basis for the airflow, whereas the machine learning model addresses dynamical features in the foundry. Measurements had previously been made at a foundry where five stationary sensors and one mobile robot were used for data acquisition. An Echo State Network (ESN) was used as a supervised learning technique for airflow predictions at each robot measurement position, and Gaussian Processes (GP) were used as a regression technique to form an Echo State Map (ESM). The stationary sensor data were used as input for the echo state network, and the difference between the CFD predictions and the robot measurements was used as the teacher signal, forming a dynamic correction map that was added to the steady-state CFD. The proposed model combines the high spatio-temporal resolution of the echo state map with the physical consistency of the CFD. Initial applications of this novel hybrid model show that the best qualities of the two models can work in symbiosis to give enhanced characterizations. The proposed model could play an important role in future characterization of airflow, and more research on this and similar topics is encouraged to properly establish its potential.
/ Industrial workers are exposed to harmful airborne substances, which over time leads to a higher prevalence of lung diseases such as chronic obstructive pulmonary disease, silicosis and lung cancer. Current air monitoring methods are carried out annually in short sessions, often at only a few selected locations in the industrial premises. This master's thesis presents a theory-guided data science (TGDS) model that combines a steady-state computational fluid dynamics (CFD) model with a dynamic machine learning model. Both the CFD model and the machine learning algorithm were developed in Matlab. An Echo State Network (ESN) was used to train the machine learning model, and Gaussian Processes (GP) were used as the regression technique to map the airflow across the entire industrial premises. Combining an ESN with GP to estimate airflow in steel plants was first done in 2016, and this model is called the Echo State Map (ESM). The network uses data from five stationary sensors and was trained on the difference between the CFD model and measurements made with a mobile robot at various locations in the industrial area. The machine learning model thus captures the dynamic effects in the premises that the steady-state CFD model does not account for. The presented model exhibits the same high temporal and spatial resolution as the echo state map while also retaining the physical consistency of the CFD model. Initial applications of this model show that the best qualities of the echo state map and the CFD can work in symbiosis to give improved characterization ability. The presented model can play an important role in the future characterization of airflow in industrial premises, and further studies are needed before the potential of this model is fully understood.
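The residual-learning idea at the core of this thesis — train an echo state network on the difference between steady-state CFD predictions and measurements, then add the learned correction to the CFD field — can be sketched generically. The thesis implementation was in Matlab and used Gaussian-process regression for the spatial map; the Python sketch below shows only the ESN-on-residuals step, and all sizes, names and synthetic data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)


class EchoStateNetwork:
    """Minimal leaky-integrator echo state network with a ridge-regression readout."""

    def __init__(self, n_in, n_res=100, spectral_radius=0.9, leak=0.3):
        self.w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Scale the recurrent weights so the reservoir has the echo-state property.
        w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
        self.w, self.leak, self.n_res = w, leak, n_res
        self.w_out = None

    def _collect_states(self, inputs):
        x = np.zeros(self.n_res)
        states = np.empty((len(inputs), self.n_res))
        for t, u in enumerate(inputs):
            x = (1 - self.leak) * x + self.leak * np.tanh(self.w_in @ u + self.w @ x)
            states[t] = x
        return states

    def _design(self, inputs):
        # Reservoir states plus a bias column for the linear readout.
        states = self._collect_states(inputs)
        return np.hstack([states, np.ones((len(inputs), 1))])

    def fit(self, inputs, targets, ridge=1e-6):
        s = self._design(inputs)
        self.w_out = np.linalg.solve(s.T @ s + ridge * np.eye(s.shape[1]), s.T @ targets)

    def predict(self, inputs):
        return self._design(inputs) @ self.w_out


# Synthetic stand-in for the foundry data: 5 stationary sensors, and a
# "residual" target playing the role of (robot measurement - steady-state CFD).
T = 300
sensors = 0.3 * rng.standard_normal((T, 5))
residual = np.sin(sensors.sum(axis=1)) + 0.5 * np.cos(sensors[:, 0])

esn = EchoStateNetwork(n_in=5)
esn.fit(sensors, residual)
correction = esn.predict(sensors)  # dynamic correction to add to the CFD field
print("training MSE:", np.mean((correction - residual) ** 2))
```

In the thesis pipeline this per-position correction would then be interpolated across the premises with Gaussian-process regression to form the echo state map; here the GP step is omitted for brevity.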

Desenvolvimento de sequências de pulso de eco de spin de baixa potência para RMN on-line / Low-power spin echo pulse sequences for online NMR

Andrade, Fabiana Diuk de 20 May 2011 (has links)
Low-resolution Nuclear Magnetic Resonance (LR-NMR) has been applied in quality control and certification in industry. To speed up and automate the analyses, online NMR has been proposed, with the samples carried on a conveyor belt. Despite its great potential, the methodologies used can heat the sample, causing measurement errors and shortening the service life of the equipment. Thus, in this doctoral work, spin echo sequences for online LR-NMR measurements using low RF (radiofrequency) power were developed, based on the CP (Carr-Purcell) and CPMG (Carr-Purcell-Meiboom-Gill) techniques with low refocusing flip angles (LRFA). It was observed that an LRFA of 45° (CPMG45) can provide transverse relaxation time (T2) values with errors below 5% in magnetic fields (B0) with homogeneity equivalent to a linewidth of Δν ≤ 15 Hz. In less homogeneous fields (Δν ≥ 100 Hz), the choice of the LRFA must take into account the reduction of the signal and the increase of T2, which becomes dependent on the longitudinal relaxation time (T1). CP90 produces echoes between the refocusing pulses that decay to minimum values dependent on T2* (the time constant of the Free Induction Decay). The signal obtained with CP90 grows from these minimum values to the steady-state (SS) regime with time constant T* = 2T1T2/(T1+T2) and amplitude MCP90 = M0T2/(T1+T2), the same constant and amplitude observed for the Continuous Wave Free Precession (CWFP) signal; for this reason it was named CP-CWFP. The main advantage of CP-CWFP occurs for samples with T1 ~ T2. CP-CWFP shows an abrupt drop in signal amplitude before growing and reaching the SS. This difference in amplitude makes the fit of T* less dependent on the signal-to-noise ratio, being more efficient than CWFP for measuring T1 and T2 at low B0, where T1 and T2 tend to have similar values. With the phase alternation (AF) method, in CPMG90AF (y'/-y') and CP90AF (x'/-x') the signals reached an SS; in this case CPMG90AF and CP90AF behaved similarly to CP-CWFP, providing another alternative for obtaining T1 and T2. For CPMG, reducing the refocusing angle to 90° is equivalent to a 75% reduction of the power delivered to the sample. CP-CWFP may be a more robust technique for analyses on benchtop NMR spectrometers used in industrial quality control, with B0 < 0.5 T (20 MHz), and in NMR well-logging tools used in oil wells, with B0 ~ 0.05 T (2 MHz). Statistical analyses showed that CPMG and CPMG90, as well as CWFP and CP-CWFP, are highly correlated with one another, demonstrating that one technique can replace the other in the online analysis of oleaginous seeds. / Low Resolution Nuclear Magnetic Resonance (LR-NMR) has been applied in quality control and certification in industry. To speed up the analysis, online NMR has been proposed. Despite the potential of online LR-NMR, problems related to equipment overload occur: the methodologies used can increase the sample temperature, causing errors in the measurements as well as reducing equipment durability. Thus, Carr-Purcell (CP) and Carr-Purcell-Meiboom-Gill (CPMG) sequences using low refocusing flip angles (LRFA) were developed. It was observed that an LRFA as low as 45° (CPMG45) can provide transverse relaxation time (T2) values with errors below 5% for a homogeneous field (Δν ≤ 15 Hz). For a less homogeneous magnetic field (Δν ≥ 100 Hz), the choice of the LRFA has to take into account the reduction in the intensity of the CPMG signal and the increase in the time constant of the CPMG decay, which also becomes dependent on the longitudinal relaxation time (T1).
The Carr-Purcell (CP) pulse sequence, with LRFA, produces echoes midway between the refocusing pulses that decay to a minimum value dependent on T2* (the Free Induction Decay time constant). When τ > T2*, the signal increased after the minimum value to reach a steady-state free precession (SSFP) regime, composed of an FID signal after each pulse and an echo before the next pulse. The CP90 signal increased from the minimum value to the steady-state regime with a time constant T* = 2T1T2/(T1+T2), identical to the time constant observed in Continuous Wave Free Precession (CWFP). The steady-state amplitude obtained with CP90, MCP90 = M0T2/(T1+T2), was also identical to that of CWFP. Therefore, this sequence was named CP-CWFP, because it is a CP sequence that produces results similar to CWFP. However, CP-CWFP is a better sequence than CWFP for measuring the longitudinal and transverse relaxation times in a single scan when the sample exhibits T1 ~ T2. When phase alternation is applied in CPMG90AF (y'/-y') and CP90AF (x'/-x'), the signals reach a steady state; CPMG90AF and CP90AF showed behavior similar to CP-CWFP, providing one more alternative method to measure T1 and T2. Therefore, T2 measurements can be performed with 90° refocusing pulses (CPMG90), which use only 25% of the RF power used in conventional CPMG. This reduces the heating problem in the probe and reduces the power deposition in the samples. The CP-CWFP sequence can be a useful method in low-resolution NMR and could be widely used in the agriculture, food and petrochemical industries, because those samples tend to have similar relaxation times in low magnetic fields.
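The steady-state relations quoted above, T* = 2T1T2/(T1+T2) and MCP90 = M0T2/(T1+T2), can be checked and inverted numerically, which is what makes single-scan estimation of both relaxation times possible once T* and the steady-state amplitude are fitted. A small sketch (function and variable names are mine, not from the thesis):

```python
def cp_cwfp_steady_state(t1: float, t2: float, m0: float = 1.0):
    """Steady-state time constant and amplitude for CWFP / CP-CWFP.

    T* = 2*T1*T2 / (T1 + T2)   (time constant of the approach to steady state)
    M  = M0*T2 / (T1 + T2)     (steady-state amplitude)
    """
    t_star = 2.0 * t1 * t2 / (t1 + t2)
    m_ss = m0 * t2 / (t1 + t2)
    return t_star, m_ss


def relaxation_from_steady_state(t_star: float, m_ss: float, m0: float = 1.0):
    """Invert the relations above: recover (T1, T2) from a single-scan fit."""
    t1 = t_star * m0 / (2.0 * m_ss)
    t2 = t1 * m_ss / (m0 - m_ss)
    return t1, t2


# For T1 == T2 (the regime where CP-CWFP is most useful, e.g. at low B0),
# T* equals T1 and the steady-state amplitude is M0/2.
t_star, m_ss = cp_cwfp_steady_state(t1=0.15, t2=0.15)
print(f"{t_star:.3f} {m_ss:.3f}")  # 0.150 0.500
```

The inversion is exact algebra on the two quoted formulas, so a fitted (T*, M) pair determines T1 and T2 uniquely as long as M < M0.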
