  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Directional analysis of cardiac left ventricular motion from PET images. / Análise direcional do movimento do ventrículo esquerdo cardíaco a partir de imagens de PET.

Sims, John Andrew 28 June 2017 (has links)
Quantification of cardiac left ventricular (LV) motion from medical images provides a non-invasive method for diagnosing cardiovascular disease (CVD). The proposed study continues our group's line of research in quantification of LV motion by applying optical flow (OF) techniques to gated Rubidium Chloride-82Rb (82Rb) and Fluorodeoxyglucose-18F (FDG) PET image sequences. Two challenges arise from this work: (i) the motion vector field (MVF) should be made as accurate as possible to maximise sensitivity and specificity; (ii) the MVF is large and composed of 3D vectors in 3D space, making visual extraction of information for medical diagnosis difficult for human observers. Approaches to improve the accuracy of motion quantification were developed. While the volume of interest is the region of the MVF corresponding to the LV myocardium, non-zero motion values exist outside this volume due to artefacts in the motion detection method or to neighbouring structures, such as the right ventricle. Accuracy can be improved by segmenting the LV and setting the MVF to zero outside it. The LV myocardium was automatically segmented in short-axis slices using the Hough circle transform to initialise the distance regularised level set evolution algorithm. Our segmentation method attained a Dice similarity measure of 93.43% when tested over 395 FDG slices, compared with manual segmentation. Strategies for improving OF performance at motion boundaries were investigated using spatially varying averaging filters applied to synthetic image sequences; results showed improvements in motion quantification accuracy. The Kinetic Energy Index (KEf), an indicator of cardiac motility, was used to assess 63 individuals with normal and altered/low cardiac function from an 82Rb PET image database. Sensitivity and specificity tests were performed to evaluate the potential of KEf as a classifier of cardiac function, using LV ejection fraction as the gold standard; the resulting receiver operating characteristic curve gave an area under the curve of 0.906. Analysis of LV motion can be simplified by visualising directional motion field components, namely radial, rotational (or circumferential) and linear, obtained through automated decomposition. The Discrete Helmholtz-Hodge Decomposition (DHHD) was used to generate these components automatically, with validation performed using synthetic cardiac motion fields from the Extended Cardiac Torso phantom. Finally, the DHHD was applied to OF fields from gated FDG images, allowing an analysis of directional components from an individual with normal cardiac function and from a patient with low function and a fitted pacemaker. Motion field quantification from PET images allows the development of new indicators to diagnose CVDs. The ability of these motility indicators depends on the accuracy of the motion quantification, which in turn is affected by characteristics of the input images, such as noise. Motion analysis provides a promising new approach to the diagnosis of CVDs.
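The Dice similarity measure quoted above (93.43%) compares an automatic segmentation against a manual reference. A minimal NumPy sketch of how such an overlap score is computed on binary masks is shown below; the toy masks, sizes and variable names are illustrative only and are not taken from the thesis.

```python
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks."""
    auto = auto_mask.astype(bool)
    manual = manual_mask.astype(bool)
    intersection = np.logical_and(auto, manual).sum()
    denom = auto.sum() + manual.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy short-axis slice masks (1 = LV myocardium), purely illustrative.
auto = np.zeros((8, 8), dtype=int)
manual = np.zeros((8, 8), dtype=int)
auto[2:6, 2:6] = 1
manual[3:7, 2:6] = 1
print(f"Dice = {dice_coefficient(auto, manual):.3f}")  # 0.750 for these toy masks
```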
53

ALTERNATE POWER AND ENERGY STORAGE/REUSE FOR DRILLING RIGS: REDUCED COST AND LOWER EMISSIONS PROVIDE LOWER FOOTPRINT FOR DRILLING OPERATIONS

Verma, Ankit 2009 May 1900 (has links)
Diesel engines powering drilling rigs suffer from low efficiency and produce a large amount of emissions. In addition, rig power requirements vary considerably with time and with the ongoing operation. It is therefore in operators' best interest to investigate alternative energy sources that can make the entire drilling process more economical and environmentally friendly. One of the major ways to reduce the footprint of drilling operations is to provide more efficient power sources. There are various options for alternative energy storage and reuse. A quantitative comparison of physical size and economics shows that rigs powered by the electrical grid can provide lower-cost operations, emit fewer emissions, are quieter, and have a smaller surface footprint than conventional diesel-powered drilling. This thesis describes a study to evaluate the feasibility of adopting technology to reduce the size of the power generating equipment on drilling rigs and to provide "peak shaving" energy through new energy generating and energy storage devices such as flywheels. An energy audit was conducted on a new-generation lightweight Huisman LOC 250 rig drilling in South Texas to gather comprehensive time-stamped drilling data. A study of emissions during drilling operations was also conducted during the audit. The data were analyzed using MATLAB and compared to a theoretical energy audit. The study showed that it is possible to remove peaks in rig power requirements with a flywheel kinetic energy recovery and storage (KERS) system and that linking to the electrical grid would supply sufficient power to operate the rig normally. Both the link to the grid and the KERS system would fit within a standard ISO container. A cost-benefit analysis of the containerized system to transfer grid power to a rig, coupled with the KERS, indicated that such a design had the potential to save more than $10,000 per week of drilling operations with significantly lower emissions, quieter operation, and a smaller well pad.
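To make the "peak shaving" idea concrete, the sketch below clips a synthetic rig power demand profile at a grid import limit and lets a flywheel store cover the excess. All numbers (grid limit, flywheel capacity, the demand profile) are invented for illustration and are not the audited LOC 250 data.

```python
import numpy as np

# Synthetic one-minute power demand profile in kW (illustrative only).
demand = np.array([300, 450, 900, 1200, 500, 350, 1100, 400], dtype=float)
dt_h = 1.0 / 60.0            # time step in hours
grid_limit = 600.0           # assumed maximum grid import, kW
fw_capacity = 25.0           # assumed usable flywheel storage, kWh
fw_energy = fw_capacity      # start fully charged

grid_power = []
for p in demand:
    if p > grid_limit:
        # Flywheel covers the peak above the grid limit, if it has energy left.
        discharge = min(p - grid_limit, fw_energy / dt_h)
        fw_energy -= discharge * dt_h
        grid_power.append(p - discharge)
    else:
        # Spare grid capacity recharges the flywheel.
        charge = min(grid_limit - p, (fw_capacity - fw_energy) / dt_h)
        fw_energy += charge * dt_h
        grid_power.append(p + charge)

print("peak demand      :", demand.max(), "kW")
print("peak grid import :", max(grid_power), "kW")
```

With these assumed figures the grid connection never has to supply more than the 600 kW limit even though the rig momentarily demands twice that, which is the basis of the downsizing argument in the abstract.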
54

Experimental Study of Three-Dimensional Turbulent Offset Jets and Wall Jets

Agelin-Chaab, Martin 19 October 2010 (has links)
An experimental study was designed to examine and document the development and structures of turbulent 3D offset jets, using generic 3D wall jets at the same Reynolds numbers as the basis of comparison. Velocity measurements were performed using a high-resolution particle image velocimetry technique at three Reynolds numbers (based on the jet exit diameter and exit velocity) of 5000, 10000 and 20000 and four jet offset height ratios of 0.5, 1.0, 2.0 and 4.0. The measurements were performed in the streamwise/wall-normal plane from 0 to 120 jet exit diameters and in the streamwise/lateral plane from 10 to 80 jet exit diameters. The velocity data were analyzed using (i) mean velocities and one-point statistics such as turbulence intensities, Reynolds stresses, triple velocity products and some terms in the transport equations for the turbulence kinetic energy, (ii) two-point velocity correlations to study how the turbulence quantities are correlated as well as the length scale and angle of inclination of the hairpin-like vortex structures, and (iii) proper orthogonal decomposition to examine the energy distribution and the role of the large-scale structures in the turbulence intensities and Reynolds shear stresses. The decay of the maximum mean velocities and the spread of the jet half-widths became independent of Reynolds number much earlier in the generic wall jet than in the offset jets. Flow development is delayed with increasing offset height. The decay rate and wall-normal spread rate increased with offset height, whereas the lateral spread rate decreased, which is consistent with previous studies. The two-point auto-correlations and the proper orthogonal decomposition results indicate the presence of more large-scale structures in the outer and self-similar regions than in the inner and developing regions. The iso-contours of the streamwise autocorrelations in the inner regions were inclined at similar angles of β = 11.2 ± 0.6 degrees, in good agreement with values reported in boundary layer studies. The angles decrease with increasing distance from the wall.
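The proper orthogonal decomposition mentioned in item (iii) is commonly computed with the snapshot method via a singular value decomposition of the fluctuating velocity snapshots. The sketch below shows that procedure on random placeholder data standing in for PIV snapshots; it is a generic illustration of snapshot POD, not the thesis' processing chain.

```python
import numpy as np

def snapshot_pod(snapshots: np.ndarray):
    """Snapshot POD of fluctuating velocity fields.

    snapshots: array of shape (n_snapshots, n_points), one flattened
    velocity field per row.  Returns the spatial modes, the modal
    energies and the fraction of fluctuation energy per mode.
    """
    fluctuations = snapshots - snapshots.mean(axis=0)   # remove the mean field
    # Thin SVD: rows of vt are the spatial POD modes, s**2 the modal energies.
    u, s, vt = np.linalg.svd(fluctuations, full_matrices=False)
    energies = s**2 / snapshots.shape[0]
    return vt, energies, energies / energies.sum()

# Illustrative random "PIV" data: 200 snapshots of a 32 x 32 field.
rng = np.random.default_rng(0)
snaps = rng.normal(size=(200, 32 * 32))
modes, energy, frac = snapshot_pod(snaps)
print("energy captured by the first 5 modes: %.1f %%" % (100 * frac[:5].sum()))
```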
55

Quantification of 4D Left Ventricular Blood Flow in Health and Disease

Eriksson, Jonatan January 2013 (has links)
The main function of the heart is to pump blood throughout the cardiovascular system by generating pressure differences created through volume changes. Although the main purpose of the heart and vessels is to lead the flowing blood throughout the body, clinical assessments of cardiac function are usually based on morphology, approximating the flow features by viewing the motion of the myocardium and vessels. Measurement of three-directional, three-dimensional and time-resolved velocity (4D Flow) data is feasible using magnetic resonance (MR). The focus of this thesis is the development and application of methods that facilitate the analysis of larger groups of data in order to increase our understanding of intracardiac flow patterns and take the 4D Flow technique closer to the clinical setting. In the first studies underlying this thesis, a pathline-based method for the analysis of intraventricular blood flow patterns was implemented and applied. A pathline is integrated from the velocity data and shows the path an imaginary massless particle would take through the data volume. This method separates the end-diastolic volume (EDV) into four functional components, based on the position of each individual pathline at end-diastole (ED) and end-systole (ES). This approach enables tracking of the full EDV over one cardiac cycle and facilitates calculation of parameters such as volumes and kinetic energy (KE). Besides blood flow, pressure plays an important role in cardiac dynamics. In order to study this parameter in the left ventricle, the relative pressure field was computed using the pressure Poisson equation. A comprehensive presentation of the pressure data was obtained by dividing the LV blood pool into 17 pie-shaped segments based on a modification of the standard seventeen-segment model. Further insight into intracardiac blood flow dynamics was obtained by studying the turbulent kinetic energy (TKE) in the LV. The methods were applied to data from a group of healthy subjects and from patients with dilated cardiomyopathy (DCM), a pathological state in which cardiac function is impaired and the left ventricle or both ventricles are dilated. The validation study of the flow analysis method showed that a reliable, user-friendly tool for intraventricular blood flow analysis was obtained. The application of this tool also showed that roughly one third of the blood that enters the LV directly leaves the LV again in the same heartbeat. The distribution of the four LV EDV components was altered in the DCM group compared with the healthy group; the component that enters and leaves the LV during one cardiac cycle (Direct Flow) was significantly larger in the healthy subjects. Furthermore, when the kinetic energy was normalized by the volume of each component at the time of ED, the Direct Flow had the highest values in the healthy subjects, whereas in the DCM group the Retained Inflow and Delayed Ejection Flow had higher values. The relative pressure field was found to be highly heterogeneous in the healthy heart. During diastole the predominant pressure differences in the LV occur along the long axis from base to apex. The distribution and variability of 3D pressure fields differ between early and late diastolic filling phases, but common to both phases is a relatively lower pressure in the outflow segment. In the normal LV, TKE values are low; the highest TKE values are seen during early diastole and are regionally distributed near the basal LV regions. In contrast, in a heterogeneous group of DCM patients, total diastolic and late diastolic TKE values are higher than in healthy subjects and increase with LV volume. In conclusion, this thesis presents methods for the analysis of multidirectional intracardiac velocity data. These methods allow assessment of data quality, intracardiac blood flow patterns, relative pressure fields, and TKE. Using them, new insights have been obtained into intracardiac blood flow dynamics in health and disease. The work underlying this thesis facilitates assessment of data from a larger population of healthy subjects and patients, thus bringing the 4D Flow MRI technique closer to the clinical setting.
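The core operation described above is pathline integration through a time-resolved velocity field. The sketch below integrates a single pathline with a forward-Euler step through a synthetic rotating flow sampled on a regular (t, x, y, z) grid; the field, grid and step size are invented placeholders, not 4D Flow MRI data, and the thesis' actual integration scheme may differ.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative time-resolved 3D velocity field on a regular grid.
# Axes: time (s) and x, y, z (m); a simple rigid rotation, not MRI data.
t = np.linspace(0.0, 1.0, 21)
x = y = z = np.linspace(-0.05, 0.05, 16)
T, X, Y, Z = np.meshgrid(t, x, y, z, indexing="ij")
u = -Y                      # m/s, steady rotation about the z axis
v = X
w = np.zeros_like(X)

interp_u = RegularGridInterpolator((t, x, y, z), u)
interp_v = RegularGridInterpolator((t, x, y, z), v)
interp_w = RegularGridInterpolator((t, x, y, z), w)

def pathline(p0, t0, t1, dt=0.005):
    """Forward-Euler pathline of a massless particle seeded at p0."""
    p = np.asarray(p0, dtype=float)
    positions = [p.copy()]
    for tk in np.arange(t0, t1, dt):
        q = (tk, *p)
        vel = np.array([interp_u(q).item(), interp_v(q).item(), interp_w(q).item()])
        p = p + vel * dt
        positions.append(p.copy())
    return np.array(positions)

track = pathline([0.02, 0.0, 0.0], t0=0.0, t1=0.5)
print("start:", track[0], "end:", track[-1])
```

Seeding such pathlines throughout the LV blood pool at end-diastole and checking where each one sits at end-systole is what allows the EDV to be split into the four functional components named in the abstract.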
57

Paper-based Supercapacitors

Andres, Britta January 2014 (has links)
The growing market of mobile electronic devices, renewable off-grid energy sources and electric vehicles requires high-performance energy storage devices. Rechargeable batteries are usually the first choice due to their high energy density. However, supercapacitors have a higher power density and a longer lifetime than batteries, and for some applications they are more suitable; they can also be used to complement batteries in order to extend a battery's lifetime. The use of supercapacitors is, however, still limited by their high cost: most commercially available supercapacitors contain expensive electrolytes and costly electrode materials. In this thesis I present the concept of cost-efficient, paper-based supercapacitors. The idea is to produce supercapacitors with low-cost, green materials and inexpensive production processes. We show that supercapacitor electrodes can be produced by coating graphite on paper. Roll-to-roll techniques known from the paper industry can be employed to facilitate economic large-scale production. We investigated the influence of the paper on the supercapacitor's performance and discussed its role as a passive component. Furthermore, we used chemically reduced graphite oxide (CRGO) and a CRGO-gold nanoparticle composite to produce electrodes for supercapacitors. The highest specific capacitance was achieved with the CRGO-gold nanoparticle electrodes. However, materials produced by chemical synthesis and intercalation of nanoparticles are too costly for the large-scale production of inexpensive supercapacitor electrodes. Therefore, we introduced the idea of producing graphene and similar nano-sized materials in a high-pressure homogenizer: layered materials like graphite can be exfoliated when subjected to high shear forces. To form mechanically stable electrodes, binders need to be added. Nanofibrillated cellulose (NFC) can be used as a binder to improve the mechanical stability of the porous electrodes. Furthermore, NFC can itself be prepared in a high-pressure homogenizer, and we aim to produce both NFC and graphene simultaneously to obtain an NFC-graphene composite. The addition of 10% NFC relative to the amount of graphite increased the supercapacitor's capacitance, enhanced the dispersion stability of homogenized graphite and improved the mechanical stability of graphite electrodes in both dry and wet conditions. Scanning electron microscope images of the electrodes' cross-sections revealed that NFC changed the internal structure of the graphite electrodes depending on the type of graphite used. We therefore discuss the influence of NFC and the electrode structure on the capacitance of the supercapacitors.
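Specific (gravimetric) capacitance, the figure of merit compared between the electrode materials above, is commonly estimated from a galvanostatic discharge as C = I·Δt / (m·ΔV). The snippet below is a textbook-style sketch of that calculation with invented numbers; it is not a value or measurement procedure reported in the thesis.

```python
def specific_capacitance(current_a: float, discharge_time_s: float,
                         mass_g: float, voltage_window_v: float) -> float:
    """Gravimetric capacitance (F/g) from a galvanostatic discharge,
    C = I * dt / (m * dV)."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Illustrative numbers only: 2 mA discharge over 120 s, 5 mg electrode, 0.8 V window.
print(f"{specific_capacitance(0.002, 120.0, 0.005, 0.8):.1f} F/g")  # 60.0 F/g
```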
58

Etude multi-échelle de la granulométrie des particules fines générées par érosion hydrique : apports pour la modélisation / Multi-scale study of fine particle size generated by water erosion : contributions for modeling

Grangeon, Thomas 07 November 2012 (has links)
The suspended particles of catchment networks are dependent on both river and hillslope erosion processes. In this thesis, particle size dynamics were studied along this continuum in order to improve the understanding of particle delivery from hillslopes to the outlets of headwater catchments. Field measurements were conducted at the headwater catchment scale (~20 km²). Discharge displayed a positive correlation with particle size. An original measurement protocol was set up and demonstrated that particles were mostly aggregated. Inputs from hillslopes were possibly involved in some of the variations of the measured particle size. Laboratory experiments carried out using an annular flume demonstrated that part of these variations could be explained by disaggregation or flocculation within the flow. Important variations due to soil type were observed; however, they were less pronounced in the falling limbs of the schematic flood events, suggesting that flow conditions progressively became more important than the soil signature. This encouraged the analysis of hillslope processes, with particular attention given to rainfall effects. Rainfall simulation experiments (~1 m²) demonstrated, for two soils, that an increase in rainfall kinetic energy resulted in smaller aggregates being detached from the soil matrix. The importance of this mechanism at the hillslope scale (~100 m²) with regard to runoff selectivity was demonstrated by developing a size-dependent detachment parametrisation included in two physically based numerical models. Finally, the effects of rainfall kinetic energy on particle size were also observed during field measurements made at the plot scale, underlining the need to adequately describe the rainfall forcing field at this scale.
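As a rough illustration of what a size-dependent, rainfall-energy-driven detachment parametrisation can look like, the sketch below combines a Brown-and-Foster-type exponential relation for unit rainfall kinetic energy (the commonly quoted RUSLE coefficients, not those calibrated in the thesis) with a simple linear detachment law per size class. All size classes, mass fractions and erodibility coefficients are hypothetical.

```python
import numpy as np

def rainfall_kinetic_energy(intensity_mm_h: float) -> float:
    """Unit kinetic energy of rainfall (MJ ha^-1 mm^-1) from a Brown &
    Foster-type exponential relation with the usual RUSLE coefficients."""
    return 0.29 * (1.0 - 0.72 * np.exp(-0.05 * intensity_mm_h))

def size_dependent_detachment(intensity_mm_h, duration_h, fractions, erodibility):
    """Detached mass index per size class ~ erodibility * rainfall KE * class fraction.
    A deliberately simple linear parametrisation, for illustration only."""
    rain_depth_mm = intensity_mm_h * duration_h
    ke = rainfall_kinetic_energy(intensity_mm_h) * rain_depth_mm   # MJ/ha over the event
    return {d: k * f * ke for (d, f), k in zip(fractions.items(), erodibility)}

# Hypothetical size classes (microns), mass fractions and erodibility coefficients.
fractions = {20: 0.3, 100: 0.4, 500: 0.3}
detached = size_dependent_detachment(30.0, 1.0, fractions, erodibility=[1.2, 0.8, 0.4])
print(detached)  # finer classes are detached preferentially in this toy setup
```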
59

Determinação de elementos essenciais em vinhos por Análise por Ativação com Nêutrons / Determination of essential trace elements in wine by Neutron Activation Analysis

DANIELE, ANNA P. 10 March 2017 (has links)
Many studies have been conducted to determine essential elements in foods, among them wine, because of their important nutritional roles in functions of the human body. Studies indicate that moderate daily wine consumption contributes significantly to the body's requirements for essential elements such as Ca, Co, Cr, Fe, K, Mg, Mn, Zn and V, among others, and brings health benefits such as the prevention of numerous diseases and a longer life expectancy, related in particular to the intake of antioxidants such as polyphenolic compounds. On the other hand, other elements are good indicators of a wine's origin, and their concentrations can be used as criteria to guarantee the authenticity and quality of the wine and to assess whether the tolerance limits established by law were respected throughout the production process. However, although the Brazilian wine industry is among the 15 largest in the world, analytical studies of elements in wine are still scarce compared with other large producers. In this context, this study aimed to evaluate sample preparation procedures for determining essential elements in wine by Instrumental Neutron Activation Analysis (INAA) and to compare the results with the inductively coupled plasma optical emission spectrometry (ICP OES) technique. Three sample preparation procedures were studied: freeze-drying, evaporation and calcination. The parameters evaluated were precision, accuracy and detection limit. ANOVA and Tukey-Kramer statistical tests were applied to check for statistical differences between the means obtained by the three wine preparation procedures for INAA and the means obtained by ICP OES. About 60% of the results obtained by freeze-drying agreed with those obtained by ICP OES. / Dissertation (Master's degree in Nuclear Technology) / IPEN/D / Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
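The comparison of preparation procedures described above rests on a one-way ANOVA followed by pairwise Tukey comparisons. The sketch below runs that test chain on invented replicate concentrations (it requires SciPy ≥ 1.8 for tukey_hsd); the numbers and the choice of element are placeholders, not data from the dissertation.

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

# Hypothetical Fe concentrations (mg/L) for one wine measured after three
# INAA sample preparation procedures and by ICP OES -- invented numbers,
# used only to show the ANOVA / Tukey comparison described in the abstract.
freeze_dried = np.array([4.1, 4.3, 4.0, 4.2])
evaporated   = np.array([3.5, 3.7, 3.6, 3.4])
calcined     = np.array([3.0, 3.2, 2.9, 3.1])
icp_oes      = np.array([4.2, 4.1, 4.3, 4.0])

f_stat, p_value = f_oneway(freeze_dried, evaporated, calcined, icp_oes)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparison of all group means (Tukey HSD).
print(tukey_hsd(freeze_dried, evaporated, calcined, icp_oes))
```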
60

Three-dimensional computational investigations of flow mechanisms in compound meandering channels

Shukla, Deepak R. January 2006 (has links)
Flow mechanisms in compound meandering channels are recognised to be far more complicated than in compound straight channels. Compound meandering channels are mainly characterised by the continuous variation of mean and turbulent flow parameters along a meander wavelength, the existence of a horizontal shear layer at the bankfull level, and the presence of strong helical secondary flow circulations in the streamwise direction. The secondary flow circulations are very important as they govern the advection of flow momentum, distort isovels, and influence bed shear stress, thus producing complicated and fully three-dimensional turbulent flow structures. A large number of experiments have been conducted in the past that explain flow mechanisms, mixing patterns and the behaviour of secondary flow circulations. However, a complete understanding of secondary flow structures remains far from conclusive, mainly because the secondary flow structures are influenced by a host of geometrical and flow parameters that have yet to be investigated in detail. The three-dimensional Reynolds-averaged Navier-Stokes and continuity equations were solved using a standard Computational Fluid Dynamics solver to predict mean velocity, secondary flow and turbulent kinetic energy. Five different flow cases of various model scales and relative depths were considered. Detailed analyses of the measured and predicted flow variables were carried out to understand mean flow mechanisms and turbulent secondary flow structures in compound meandering channels. The streamwise vorticity equation was used to quantify the complex, three-dimensional behaviour of secondary flow circulations in terms of their generation, development and decay along the half-meander wavelength. The turbulent kinetic energy equation was used to understand the energy expenditure of secondary flow circulations. The strengths of secondary flow circulations were calculated and compared for the different flow cases considered. The main findings from this research are as follows. The shearing of the main channel flow as the floodplain flow plunges into and over the main channel influences the mean and turbulent flow structures, particularly in the crossover region. The horizontal shear layer at the inner bankfull level generates secondary flow circulations. As the depth of flow increases, the point of generation of secondary flow circulations moves downstream. The secondary shear stress contributes significantly to the generation of streamwise vorticity and the production of turbulent kinetic energy. The rate of turbulent kinetic energy production was found to be higher than the rate of its dissipation in the crossover region, which demonstrates that the turbulence extracts more energy from the mean flow than is actually dissipated. This also implies that, in the crossover region, turbulence is always advected downstream by the mean and secondary flows. The strength of geometry-induced secondary flow circulation increases with increasing relative depth.
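The production-versus-dissipation argument above hinges on estimating turbulent kinetic energy and its shear-production term from velocity statistics. The sketch below computes k and the dominant production term -⟨u'v'⟩ dU/dy from a synthetic single-point velocity record; the signal, mean gradient and correlation level are invented and stand in for probe or CFD output, they are not the compound-channel data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative velocity record at one point (m/s); synthetic data only.
n = 5000
U_mean, dUdy = 0.8, 2.5                                  # mean velocity and its gradient (1/s)
u = U_mean + 0.05 * rng.standard_normal(n)
v = 0.02 * rng.standard_normal(n) - 0.3 * (u - U_mean)   # v' partly anti-correlated with u'
w = 0.03 * rng.standard_normal(n)

up, vp, wp = u - u.mean(), v - v.mean(), w - w.mean()    # fluctuating components

k = 0.5 * (np.mean(up**2) + np.mean(vp**2) + np.mean(wp**2))   # turbulent kinetic energy, m^2/s^2
uv = np.mean(up * vp)                                          # Reynolds shear stress / density
production = -uv * dUdy                                        # dominant shear-production term, m^2/s^3

print(f"TKE          k = {k:.2e} m^2/s^2")
print(f"-<u'v'>        = {-uv:.2e} m^2/s^2")
print(f"production   P = {production:.2e} m^2/s^3")
```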
