131 |
Avaliação da radiação espalhada em mamografia como ferramenta diagnóstica utilizando simulações Monte Carlo / Evaluation of scattered radiation in mammography as a diagnostic tool using Monte Carlo simulations. Diego Merigue da Cunha. 31 March 2010
Neste trabalho, avaliou-se a quantidade de informação diagnóstica contida na distribuição da radiação espalhada em mamografia, através de simulações Monte Carlo (MC). Para isto, este trabalho consistiu de dois objetivos: o primeiro diz respeito ao desenvolvimento de um código MC para o transporte de fótons em radiodiagnóstico, com ênfase em mamografia. O segundo diz respeito ao estudo da distribuição da radiação espalhada pela mama, e seu potencial diagnóstico. O modelo geométrico adotado nas simulações consistiu de uma mama comprimida semi-infinita, com um nódulo esférico inserido, simulando um nódulo maligno. Um receptor plano ideal foi posicionado abaixo da mama, a uma distância h. A distribuição angular da radiação espalhada pela mama, e sua distribuição espacial sobre o receptor, foram obtidas para um feixe estreito incidindo perpendicularmente sobre a superfície da mama. Estas distribuições foram utilizadas para calcular valores de contraste (CS) e razão contraste-ruído (CNRS) dos fótons espalhados, comparando as distribuições provenientes de regiões da mama sem e com o nódulo. Valores de CS e CNRS foram estudados para diferentes energias do feixe incidente, tamanho e posição do nódulo, e espessura e composição da mama. A influência de feixes polienergéticos nos valores de CS e CNRS também foi investigada. Para a distribuição espacial da radiação espalhada, valores de CS e CNRS também foram estudados como função da distância do receptor à mama. Os resultados mostram que a distribuição da radiação espalhada apresenta picos de espalhamento, que são produzidos pelos fótons elasticamente espalhados, e estão relacionados com a composição da mama. As distribuições angulares de CS e CNRS mostraram que valores máximos destas distribuições ocorrem próximos ao primeiro pico de espalhamento. Valores de CS maiores que o contraste primário foram obtidos em todas as situações analisadas, embora o CNRS tenha se mostrado consideravelmente mais baixo que o CNR primário. As distribuições espaciais de CS e CNRS no receptor indicam que o uso de um receptor plano não reduz os valores de CS, comparados com os obtidos para a distribuição angular, embora o CNRS decresça à medida que h aumenta. Imagens planares, obtidas utilizando o feixe espalhado, mostram que, além de fornecer valores de contraste maiores que o contraste primário, a técnica permite realçar um determinado tipo de tecido na imagem, a partir da seleção de um determinado valor de momentum transferido x. Os resultados obtidos neste trabalho indicam que a radiação espalhada contém informação diagnóstica a respeito da presença de um nódulo na mama. Estudos futuros a respeito da otimização das condições de irradiação da mama e detecção da radiação espalhada devem ser realizados, a fim de se aumentar os valores de CNRS, sem comprometer os valores de CS. / In this work, the potential of forward x-ray scattering for contrast enhancement of malignant nodules in mammography was studied through MC simulations. This work consisted of two objectives: the first one refers to the development of a Monte Carlo (MC) code for the simulation of photon transport in diagnostic radiology, focusing on mammography. The second objective refers to the study of the distribution of the photons scattered by the breast, and its diagnostic potential. The geometric model adopted in the simulations consisted of a semi-infinite compressed breast, with a spherical nodule inserted within it, simulating a malignant nodule. A planar ideal receptor was positioned under the breast, at a distance h from it.
The angular distribution of scattered photons exiting the breast, and its spatial distribution on the receptor, were obtained for a pencil beam impinging normally on the breast surface. These distributions were used to compute values of scatter contrast (CS) and contrast-to-noise ratio (CNRS), by comparing the signal from regions of the breast without and with the nodule. Values of CS and CNRS were studied for different beam energies, nodule size and position, and breast thickness and composition. The influence of polyenergetic spectra on CS and CNRS was also investigated. For the spatial distribution of scattered photons on the receptor, values of CS and CNRS were also studied as a function of the distance from the receptor to the breast. Results show that the distributions of scattered photons present scattering peaks, which are produced by the elastically scattered photons, and are related to the breast composition. The angular distributions of CS and CNRS showed that maximum values occur close to the adipose scattering peak. Values of CS greater than primary contrast were obtained in all situations analyzed, although the CNRS was considerably lower than primary CNR. The spatial distributions of CS and CNRS indicate that the use of a planar receptor does not reduce the values of CS, compared with those from the angular distribution, although the CNRS decreases as h increases. Planar images, obtained for the scattered beam, showed that, in addition to contrast enhancement, this technique allows the accentuation of a given tissue in the image, by selecting a given value of momentum transfer x. The results obtained in this work indicate that scattered radiation contains diagnostic information about the presence of a nodule within the breast. Further studies regarding the optimization of breast irradiation and scattered-radiation detection conditions should be performed in order to increase CNRS values without reducing CS values.
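As an illustration of the contrast figures used in this abstract, a minimal sketch of how scatter contrast (CS) and contrast-to-noise ratio (CNRS) can be computed from scattered-photon counts is given below; the Poisson counting model and the numerical values are assumptions for illustration, not quantities taken from the thesis.

```python
import math

def scatter_contrast(counts_nodule, counts_background):
    """Scatter contrast CS between scatter signals recorded with and without the nodule."""
    return abs(counts_nodule - counts_background) / counts_background

def scatter_cnr(counts_nodule, counts_background):
    """Contrast-to-noise ratio CNRS, assuming Poisson counting noise on both signals."""
    noise = math.sqrt(counts_nodule + counts_background)
    return abs(counts_nodule - counts_background) / noise

# Hypothetical scattered-photon counts in one scatter-peak region (illustrative only)
cs = scatter_contrast(9_200, 10_000)    # ~0.08
cnrs = scatter_cnr(9_200, 10_000)       # ~5.8
print(f"CS = {cs:.3f}, CNRS = {cnrs:.1f}")
```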
|
132 |
Técnica de colimação para otimizar a aquisição e o processamento de imagens mamográficas / Collimation techniques for optimization of mammography image acquisition and processing. Ricardo Toshiyuki Irita. 16 May 2003
Para melhorar a visualização das pequenas estruturas anatômicas importantes para o diagnóstico do câncer de mama e otimizar o processamento das mamografias pelos sistemas computadorizados de auxílio ao diagnóstico (CAD), foi desenvolvido um dispositivo, baseado na tecnologia slit, que melhora a aquisição dos mamogramas. Este dispositivo reduz a radiação espalhada e o tamanho do foco e foi projetado a partir de um modelo computacional. O modelo adotado permite quantificar o valor desses parâmetros para qualquer sistema radiológico, qualquer espessura de tecidos moles radiografada e qualquer tensão aplicada ao tubo de raios-X. O dispositivo foi implementado e testado, quantificando as melhorias obtidas. As imagens geradas foram comparadas com as fornecidas pelos sistemas mamográficos convencionais. O modelo serviu para estudar também a interferência do espalhamento sobre o desempenho dos algoritmos usados nos sistemas de diagnóstico auxiliado por computador (CAD). / In order to improve the visualization of the small anatomical structures important for breast cancer diagnosis and to optimize the image processing of mammograms by computer-aided diagnosis (CAD) systems, a device based on slit technology was developed to improve the acquisition of mammograms. This device reduces the amount of scattered radiation and the focal spot size, and it was designed from a computational model. This model allows quantifying the value of those parameters, scattered radiation and focal spot size, for any radiological system, any X-rayed soft-tissue thickness and any voltage applied to the X-ray tube. The device was implemented and tested, quantifying the obtained improvements. The generated images were compared to those supplied by conventional mammography systems. The model was also used to study the interference of scatter on the performance of the algorithms used in computer-aided diagnosis (CAD) systems.
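For context, a common first-order model (not taken from the thesis) of how scattered radiation degrades subject contrast is C = C0 / (1 + SPR), where SPR is the scatter-to-primary ratio; a slit geometry lowers SPR and thus recovers contrast. A minimal sketch, with the SPR values chosen purely for illustration:

```python
def contrast_with_scatter(primary_contrast, scatter_to_primary_ratio):
    """First-order contrast degradation by scatter: C = C0 / (1 + SPR)."""
    return primary_contrast / (1.0 + scatter_to_primary_ratio)

# Hypothetical numbers for illustration only: an open-field SPR of 0.8 versus
# a slit-collimated SPR of 0.1, for the same 5% primary contrast.
open_field = contrast_with_scatter(0.05, 0.8)   # ~0.028
slit_beam = contrast_with_scatter(0.05, 0.1)    # ~0.045
```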
|
133 |
Aplicação da metaheurística busca dispersa ao problema do ajuste de histórico / Application of the scatter search metaheuristic to the history matching problem. Sousa, Sergio Henrique Guerra de. 13 August 2018
Orientadores: Denis Jose Schiozer, Celio Maschio / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecanica e Instituto de Geociencias
Previous issue date: 2007 / Resumo: O problema do ajuste de histórico é uma das tarefas que mais demandam tempo em um estudo de reservatório baseado em simulações de fluxo, porque é um problema inverso onde os resultados (dados de produção) são conhecidos, porém os valores de entrada (a caracterização do reservatório) não são integralmente conhecidos. Adicionalmente, as funções objetivo que medem a qualidade do ajuste costumam ser expressões compostas por uma série de componentes que tornam a topologia do espaço de soluções complexa e repleta de não linearidades. A metodologia adotada neste trabalho foi a modelagem do problema de ajuste de histórico como um problema de otimização combinatória de modo que ele pudesse ser abordado através de processos metaheurísticos. Em particular, a metaheurística Busca Dispersa (Scatter Search) foi acoplada a um algoritmo de Busca Direta baseado no método de Hooke e Jeeves para resolver o problema do ajuste de histórico. Reservatórios sintéticos de solução conhecida foram utilizados para fazer a validação da metodologia e, em seguida, ela foi aplicada a outro reservatório, também sintético, mas com características de reservatórios reais onde a solução do ajuste é desconhecida. São discutidos ao longo do texto o uso da metodologia de forma automática e assistida e também os benefícios do uso da computação distribuída na execução do método. As maiores contribuições deste trabalho em relação à questão do ajuste de histórico são: a introdução de uma nova metodologia versátil para uso automático ou assistido, a discussão de algumas características que dificultam o processo de ajuste e de que forma elas podem ser contornadas e também a abordagem do tema do ajuste automático vs. o ajuste assistido ilustrado com exemplos. / Abstract: The history matching problem is one of the most demanding tasks in a reservoir simulation study, because it is an inverse problem where the results (production data) are known but the input data (the reservoir characterization data) are not entirely known. Moreover, the objective function that guides the match is usually made up of a series of components that make the topology of the objective function both complex and full of non-linearities. The methodology adopted in this work was to model the history matching problem as a combinatorial optimization problem in order for it to be solved by metaheuristic processes. In particular, the Scatter Search metaheuristic was coupled with a direct search method based on the Hooke and Jeeves method to solve the history matching problem. Synthetic reservoirs with known solutions were used to validate the methodology, and then the methodology was applied to another reservoir, also synthetic, but with characteristics of real reservoirs where the solution is not known in advance. Throughout the text, the use of the methodology in both an assisted and an automatic fashion is discussed, along with the benefits attained by the use of distributed computing resources. The greatest contributions of this work related to the history matching problem are: the introduction of a new versatile methodology for both automatic and assisted matches, the discussion of some characteristics that burden the entire process and some ways to overcome the difficulties, and also the discussion of some tradeoffs between automatic and assisted history matching, with examples to illustrate the matter. / Mestrado / Reservatórios e Gestão / Mestre em Ciências e Engenharia de Petróleo
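To make the coupled direct-search step concrete, the sketch below outlines a Hooke-Jeeves-style pattern search of the kind that a Scatter Search outer loop can hand candidate solutions to; the objective here is a placeholder quadratic, not the thesis's simulator-based history-matching misfit.

```python
import numpy as np

def hooke_jeeves(objective, x0, step=0.5, shrink=0.5, tol=1e-4, max_iter=200):
    """Basic Hooke-Jeeves pattern search: exploratory moves along each coordinate,
    then a pattern move; the step is shrunk when no improvement is found."""
    x_base = np.asarray(x0, dtype=float)
    f_base = objective(x_base)
    for _ in range(max_iter):
        # Exploratory move: probe each coordinate in +/- step around the current point
        x_new, f_new = x_base.copy(), f_base
        for i in range(len(x_new)):
            for delta in (step, -step):
                trial = x_new.copy()
                trial[i] += delta
                f_trial = objective(trial)
                if f_trial < f_new:
                    x_new, f_new = trial, f_trial
                    break
        if f_new < f_base:
            # Pattern move: jump further along the improving direction
            x_pattern = x_new + (x_new - x_base)
            f_pattern = objective(x_pattern)
            x_base, f_base = x_new, f_new
            if f_pattern < f_base:
                x_base, f_base = x_pattern, f_pattern
        else:
            step *= shrink  # no improvement: refine the step size
            if step < tol:
                break
    return x_base, f_base

# Placeholder misfit (a simple quadratic); in history matching this would call the
# reservoir simulator and compare simulated with observed production data.
best_x, best_f = hooke_jeeves(lambda x: float(np.sum((x - 3.0) ** 2)), x0=[0.0, 0.0])
```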
|
134 |
Scattering of internal gravity waves. Leaman Nye, Abigail. January 2011
Internal gravity waves play a fundamental role in the dynamics of stably stratified regions of the atmosphere and ocean. In addition to the radiation of momentum and energy remote from generation sites, internal waves drive vertical transport of heat and mass through the ocean by wave breaking and the mixing subsequently produced. Identifying regions where internal gravity waves contribute to ocean mixing and quantifying this mixing are therefore important for accurate climate and weather predictions. Field studies report significantly enhanced measurements of turbulence near 'rough' ocean topography compared with those recorded in the ocean interior or near more gradually varying topography (e.g. Toole et al. 1997, J. Geophys. Res. 102). Such observations suggest that interaction of waves with rough topography may act to skew wave energy spectra to high wavenumbers and hence promote wave breaking and fluid mixing. This thesis examines the high wavenumber scatter and spatial partitioning of wave energy at 'rough' topography containing features that are of similar scales to those characterising incident waves. The research presented here includes laboratory experiments using synthetic schlieren and PIV to visualise two-dimensional wavefields produced by small amplitude oscillations of cylinders within linear salt-water stratifications. Interactions of wavefields with planar slopes and smoothly varying sinusoidal topography are compared with those with square-wave, sawtooth and pseudo knife-edge profiles, which have discontinuous slopes. Far-field structures of scattered wavefields are compared with linear analytical models. Scatter to high wavenumbers is found to be controlled predominantly by the relative slopes and characterising length scales of the incident wavefield and topography, as well as the shape and aspect ratio of the topographic profile. Wave energy becomes highly focused and the spectra skewed to higher wavenumbers by 'critical' regions, where the topographic slope is comparable with the slope of the incident wave energy vector, and at sharp corners, where topographic slope is not defined. Contrary to linear geometric ray tracing predictions (Longuet-Higgins 1969, J. Fluid Mech. 37), a significant back-scattered field can be achieved in near-critical conditions as well as a forward scattered wavefield in supercritical conditions, where the slope of the boundary is steeper than that of the incident wave. Results suggest that interaction with rough benthic topography could efficiently convert wave energy to higher wavenumbers and promote fluid mixing in such ocean regions.
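For reference, the quantities behind the 'critical slope' condition discussed above follow directly from the linear dispersion relation for internal gravity waves; the sketch below uses the standard textbook relations (with the Coriolis frequency set to zero for the laboratory case) rather than anything specific to this thesis.

```python
import numpy as np

def wave_slope(omega, N, f=0.0):
    """Slope of internal-wave characteristics (energy-propagation direction) from the
    linear dispersion relation: tan(theta) = sqrt((omega^2 - f^2) / (N^2 - omega^2)).
    omega: wave frequency, N: buoyancy frequency, f: Coriolis frequency (0 in the lab)."""
    return np.sqrt((omega**2 - f**2) / (N**2 - omega**2))

def criticality(topographic_slope, omega, N, f=0.0):
    """Ratio of topographic slope to wave slope: <1 subcritical, ~1 critical, >1 supercritical."""
    return topographic_slope / wave_slope(omega, N, f)

# Illustrative values only: a wave at half the buoyancy frequency over a 30-degree slope
gamma = criticality(np.tan(np.radians(30.0)), omega=0.5, N=1.0)
```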
|
135 |
Modeled Estimates of Solar Direct Normal Irradiance and Diffuse Horizontal Irradiance in Different Terrestrial Locations. Abyad, Emad. January 2017
The transformation of solar energy into electricity is starting to impact the overall worldwide energy production mix. Photovoltaic-generated electricity can play a significant role in minimizing the use of non-renewable energy sources. Sunlight consists of three main components: global horizontal irradiance (GHI), direct normal irradiance (DNI) and diffuse horizontal irradiance (DHI). Typically, these components are measured using specialized instruments in order to study solar radiation at any location. However, these measurements are not always available, especially in the case of the DNI and DHI components of sunlight. Consequently, many models have been developed to estimate these components from available GHI data. These models have their own merits. For this thesis, solar radiation data collected at four locations have been analyzed. The data come from Al-Hanakiyah (Saudi Arabia), Boulder (U.S.), Ma’an (Jordan), and Ottawa (Canada). The BRL, Reindl*, DISC, and Perez models have been used to estimate DNI and DHI data from the experimentally measured GHI data. The findings show that the Reindl* and Perez models offered similar accuracy in computing DNI and DHI values when compared with detailed experimental data for Al-Hanakiyah and Ma’an. For Boulder, the Perez and BRL models show similar ability in estimating DHI values, and the DISC and Perez models are better estimators of DNI. The Reindl* model performs better when modeling DHI and DNI for the Ottawa data. The BRL and DISC models show similar error metrics, except in the case of the Ma’an location, where the BRL model shows high error metric values in terms of MAE, RMSE, and standard deviation (σ). The Boulder and Ottawa datasets were not complete, which affected the outcomes with regard to the model performance metrics. Moreover, the metrics show very high, unreasonable values in terms of RMSE and σ. It is advised that a global model be developed by collecting data from many locations as a way to help minimize the error between the actual and modeled values, since the current models have their own limitations. Availability of multi-year data, of parameters such as albedo and aerosols, and of data at one-minute to hourly time steps could help minimize the error between measured and modeled data. In addition to having accurate data, analysis of spectral data is important to evaluate its impact on solar technologies.
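All of the separation models mentioned above ultimately rely on the closure relation GHI = DHI + DNI·cos(θz). A minimal sketch of that split, with the diffuse fraction treated as an input that a model such as BRL, Reindl*, DISC or Perez would supply, and of the RMSE metric used in the comparison, is given below; the guard value near sunrise/sunset is an assumption for illustration.

```python
import numpy as np

def split_ghi(ghi, diffuse_fraction, zenith_deg):
    """Split GHI into (DNI, DHI) via GHI = DHI + DNI * cos(zenith).
    'diffuse_fraction' (k_d = DHI/GHI) would come from a separation model;
    here it is simply an input."""
    dhi = diffuse_fraction * ghi
    cos_z = np.cos(np.radians(zenith_deg))
    dni = (ghi - dhi) / np.maximum(cos_z, 1e-6)   # guard against division near the horizon
    return dni, dhi

def rmse(measured, modeled):
    """Root-mean-square error, one of the metrics used to compare the models."""
    measured, modeled = np.asarray(measured), np.asarray(modeled)
    return np.sqrt(np.mean((modeled - measured) ** 2))

# Illustrative values only: 600 W/m^2 GHI, 30% diffuse fraction, 40-degree solar zenith
dni, dhi = split_ghi(600.0, 0.30, 40.0)
```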
|
136 |
Formulação e implementação da versão direta do método dos elementos de contorno para tratamento de problemas acústicos estacionários bidimensionais diretos e inversos / Formulation and implementation of a direct version of the boundary element method to describe stationary bidimensional direct and inverse acoustic problems. Menoni, Jose Antonio. 07 June 2004
Orientador: Euclides de Mesquita Neto / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecanica
Previous issue date: 2004 / Resumo: Este trabalho trata da formulação e da implementação da versão direta do Método dos Elementos de Contorno (MEC) para tratamento de problemas acústicos bidimensionais estacionários regidos pelo operador diferencial de Helmholtz. São abordados tanto problemas internos, associados a domínios limitados, quanto problemas externos, associados a domínios ilimitados. A tese ainda aborda a solução de problemas diretos e inversos. A transformação da equação de Helmholtz em Equação Integral de Contorno, bem como a síntese de sua Solução Fundamental, é recuperada de forma detalhada no texto. Para o caso de problemas internos, duas técnicas são estudadas para recuperação de grandezas modais de cavidades acústicas. A primeira é baseada na pesquisa direta das raízes do polinômio característico e a segunda é baseada na informação obtida a partir de Funções de Resposta em Freqüência sintetizadas pelo MEC. Os problemas da radiação e espalhamento acústico são formulados, implementados e validados. O trabalho apresenta ainda a solução de problemas inversos, nos quais as variáveis acústicas em um contorno geométrico conhecido são determinadas a partir de medições em uma superfície fechada que envolve o corpo radiante. Duas técnicas são utilizadas no processo inverso, a Decomposição em Valores Singulares e a técnica de regularização de Tikhonov. Discute-se a precisão e eficiência destas técnicas em função dos parâmetros que são variáveis presentes nestas técnicas. / Abstract: The present Thesis reports a formulation and an implementation of the direct version of the Boundary Element Method (BEM) to model direct and inverse bidimensional stationary acoustic problems governed by the Helmholtz differential operator. Both internal and external problems, associated, respectively, with bounded and unbounded domains, are treated in the analysis. The transformation of the Helmholtz differential equation into an equivalent Boundary Integral Equation (BIE) and the synthesis of its Fundamental Solution are recovered in detail. For internal problems, two techniques are employed to obtain modal quantities of acoustic cavities. The first is the direct search for the roots of the characteristic polynomial. The second strategy is based on numerical Frequency Response Functions, synthesized by the BEM. Radiation and scattering problems are formulated, implemented and validated within the realm of the Boundary Element Method. The present work also addresses the solution of an inverse problem. The inverse problem consists of determining the acoustic variables on the boundary of a radiating or scattering body of known geometry, based on the acoustic fields measured over a closed surface which encloses the analyzed body. Two techniques to solve the inversion problem are discussed. The first is the Singular Value Decomposition strategy and the other is the Tikhonov regularization strategy. The accuracy of these techniques is discussed as a function of the internal parameters which are intrinsic to those strategies. / Doutorado / Mecânica dos Sólidos e Projeto Mecânico / Mestre em Engenharia Mecânica
|
137 |
ANALYSIS OF LASER CLAD REPAIRED TI-6AL-4V FATIGUE LIFE. Samuel John Noone. 14 January 2021
Laser cladding is a more recent approach to the repair of aviation components within a damage-tolerant framework, with the ability to restore not simply the geometric shape but the static and fatigue strength as well. This research analysed the fatigue performance of Ti-6Al-4V that has undergone a laser clad repair, comparing baseline specimens with laser clad repaired specimens and with repaired and heat-treated specimens. First, an understanding of the microstructure was achieved by use of BSE imagery of the substrate, the clad-repaired region and the post-heat-treated regions. The substrate exhibited large grains, in contrast to the repaired clad region, which showed a much finer grain structure that did not change with heat treatment. Next, the performance of the specimens under tensile fatigue loading was assessed, with the clad specimens showing unexpectedly high fatigue performance compared to baseline samples; the post-heat-treated specimens lasted significantly longer than all other specimens. It is theorised that the clad may have contributed to an increase in fatigue resilience due to its fine microstructure, compared to the softer, coarser substrate. The heat treatment is likely to have relaxed any residual stresses in the specimens, reducing any potentially undesirable stresses without impacting the microstructure. Residual stress analysis using EDD was unproductive due to the unexpectedly coarse microstructure and did not provide meaningful results. Fractography using the marker-band technique was explored with some success, proving a feasible method for measuring fatigue crack growth through a specimen post failure. Unfortunately, tracking fatigue crack growth throughout the entire fatigue life was not possible due to the tortuous fracture surface and potentially due to the fine microstructure of the clad, resulting in interrupted marker-band formation. Future research shall expand on this work with a greater focus on residual stress analysis and its impact on fatigue.
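As a rough illustration of how marker-band measurements are typically turned into crack-growth information, the sketch below combines finite differences of successive crack lengths with a Paris-law growth rate; the constants are placeholders and none of this reproduces the thesis's actual data or analysis.

```python
import numpy as np

def growth_rates_from_marker_bands(crack_lengths_mm, cycles_between_bands):
    """Approximate da/dN from successive marker-band crack lengths measured on the
    fracture surface, assumed to be spaced a known number of cycles apart."""
    a = np.asarray(crack_lengths_mm, dtype=float)
    return np.diff(a) / cycles_between_bands

def paris_law_rate(delta_K, C=1e-11, m=3.0):
    """Paris-law crack growth rate da/dN = C * (delta_K)^m.
    C and m are placeholder values, not fitted to the Ti-6Al-4V data."""
    return C * delta_K**m

# Hypothetical marker-band crack lengths (mm) spaced 5000 cycles apart (illustrative only)
rates = growth_rates_from_marker_bands([1.0, 1.2, 1.5, 2.0], cycles_between_bands=5000)
```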
|
138 |
Development of methods for time efficient scatter correction and improved attenuation correction in time-of-flight PET/MR. Nikulin, Pavel. 06 November 2019
In der vorliegenden Dissertation wurden zwei fortdauernde Probleme der Bildrekonstruktion in der time-of-flight (TOF) PET bearbeitet: Beschleunigung der TOF-Streukorrektur sowie Verbesserung der emissionsbasierten Schwächungskorrektur. Aufgrund der fehlenden Möglichkeit, die Photonenabschwächung direkt zu messen, ist eine Verbesserung der Schwächungskorrektur durch eine gemeinsame Rekonstruktion der Aktivitäts- und Schwächungskoeffizienten-Verteilung mittels der MLAA-Methode von besonderer Bedeutung für die PET/MRT, während eine Beschleunigung der TOF-Streukorrektur gleichermaßen auch für TOF-fähige PET/CT-Systeme relevant ist.
Für das Erreichen dieser Ziele wurde in einem ersten Schritt die hochauflösende PET-Bildrekonstruktion THOR, die bereits zuvor in unserer Gruppe entwickelt wurde, angepasst, um die TOF-Information nutzen zu können, welche von allen modernen PET-Systemen zur Verfügung gestellt wird. Die Nutzung der TOF-Information in der Bildrekonstruktion führt zu reduziertem Bildrauschen und zu einer verbesserten Konvergenzgeschwindigkeit. Basierend auf diesen Anpassungen werden in der vorliegenden Arbeit neue Entwicklungen für eine Verbesserung der TOF-Streukorrektur und der MLAA-Rekonstruktion beschrieben. Es werden sodann Ergebnisse vorgestellt, welche mit den neuen Algorithmen am Philips Ingenuity PET/MRT-Gerät erzielt wurden, das gemeinsam vom Helmholtz-Zentrum Dresden-Rossendorf (HZDR) und dem Universitätsklinikum betrieben wird. Eine wesentliche Voraussetzung für eine quantitative TOF-Bildrekonstruktionen ist eine Streukorrektur, welche die TOF-Information mit einbezieht. Die derzeit übliche Referenzmethode hierfür ist eine TOF-Erweiterung des single scatter simulation Ansatzes (TOF-SSS). Diese Methode wurde im Rahmen der TOF-Erweiterung von THOR implementiert. Der größte Nachteil der TOF-SSS ist eine 3–7-fach erhöhte Rechenzeit für die Berechnung der Streuschätzung im Vergleich zur non-TOF-SSS, wodurch die Bildrekonstruktionsdauer deutlich erhöht wird. Um dieses Problem zu beheben, wurde eine neue, schnellere TOF-Streukorrektur (ISA) entwickelt und implementiert. Es konnte gezeigt werden, dass dieser neue Algorithmus eine brauchbare Alternative zur TOF-SSS darstellt, welche die Rechenzeit auf ein Fünftel reduziert, wobei mithilfe von ISA und TOF-SSS rekonstruierte Schnittbilder quantitativ ausgezeichnet übereinstimmen. Die Gesamtrekonstruktionszeit konnte mithilfe ISA bei Ganzkörperuntersuchungen insgesamt um den Faktor Zwei reduziert werden. Dies kann als maßgeblicher Fortschritt betrachtet werden, speziell im Hinblick auf die Nutzung fortgeschrittener Bildrekonstruktionsverfahren im klinischen Umfeld. Das zweite große Thema dieser Arbeit ist ein Beitrag zur verbesserten Schwächungskorrektur in der PET/MRT mittels MLAA-Rekonstruktion. Hierfür ist zunächst eine genaue Kenntnis der tatsächlichen Zeitauflösung in der betrachten PET-Aufnahme zwingend notwendig. Da die vom Hersteller zur Verfügung gestellten Zahlen nicht immer verlässlich sind und zudem die Zählratenabhängigkeit nicht berücksichtigen, wurde ein neuer Algorithmus entwickelt und implementiert, um die Zeitauflösung in Abhängigkeit von der Zählrate zu bestimmen. Dieser Algorithmus (MLRES) basiert auf dem maximum likelihood Prinzip und erlaubt es, die funktionale Abhängigkeit der Zeitauflösung des Philips Ingenuity PET/MRT von der Zählrate zu bestimmen. In der vorliegenden Arbeit konnte insbesondere gezeigt werden, dass sich die Zeitauflösung des Ingenuity PET/MRT im klinisch relevanten Zählratenbereich um mehr als 250 ps gegenüber der vom Hersteller genannten Auflösung von 550 ps verschlechtern kann, welche tatsächlich nur bei extrem niedrigen Zählraten erreicht wird. Basierend auf den oben beschrieben Entwicklungen konnte MLAA in THOR integriert werden. Die MLAA-Implementierung erlaubt die Generierung realistischer patientenspezifischer Schwächungsbilder. Es konnte insbesondere gezeigt werden, dass auch Knochen und Hohlräume korrekt identifiziert werden, was mittels MRT-basierter Schwächungskorrektur sehr schwierig oder sogar unmöglich ist. 
Zudem konnten wir bestätigen, dass es mit MLAA möglich ist, metallbedingte Artefakte zu reduzieren, die ansonsten in den MRT-basierten Schwächungsbildern immer zu finden sind. Eine detaillierte Analyse der Ergebnisse zeigte allerdings verbleibende Probleme bezüglich der globalen Skalierung und des lokalen Übersprechens zwischen Aktivitäts- und Schwächungsschätzung auf. Daher werden zusätzliche Entwicklungen erforderlich sein, um auch diese Defizite zu beheben. / The present work addresses two persistent issues of image reconstruction for time-of-flight (TOF) PET: acceleration of TOF scatter correction and improvement of emission-based attenuation correction. Due to the missing capability to measure photon attenuation directly, improving attenuation correction by joint reconstruction of the activity and attenuation coefficient distribution using the MLAA technique is of special relevance for PET/MR, while accelerating TOF scatter correction is of equal importance for TOF-capable PET/CT systems as well. To achieve the stated goals, in a first step the high-resolution PET image reconstruction THOR, previously developed in our group, was adapted to take advantage of the TOF information delivered by state-of-the-art PET systems. TOF-aware image reconstruction reduces image noise and improves convergence rate, both of which are highly desirable. Based on these adaptations, this thesis describes new developments for improvement of TOF scatter correction and MLAA reconstruction and reports results obtained with the new algorithms on the Philips Ingenuity PET/MR jointly operated by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and the University Hospital. A crucial requirement for quantitative TOF image reconstruction is TOF-aware scatter correction. The currently accepted reference method, the TOF extension of the single scatter simulation approach (TOF-SSS), was implemented as part of the TOF-related modifications of THOR. The major drawback of TOF-SSS is a 3–7-fold increase in the computation time required for the scatter estimation, compared to regular SSS, which in turn leads to a considerable image reconstruction slowdown. This problem was addressed by the development and implementation of a novel accelerated TOF scatter correction algorithm called ISA. This new algorithm proved to be a viable alternative that speeds up scatter correction by a factor of up to five in comparison to TOF-SSS. Images reconstructed using ISA are in excellent quantitative agreement with those obtained when using TOF-SSS, while overall reconstruction time is reduced by a factor of two in whole-body investigations. This can be considered a major achievement, especially with regard to the use of advanced image reconstruction in a clinical context. The second major topic of this thesis is a contribution to improved attenuation correction in PET/MR by utilization of MLAA reconstruction. First of all, knowledge of the actual time resolution operational in the considered PET scan is mandatory for a viable MLAA implementation. Since vendor-provided figures regarding the time resolution are not necessarily reliable and do not cover count-rate dependent effects at all, a new algorithm was developed and implemented to determine the time resolution as a function of count rate. This algorithm (MLRES) is based on the maximum likelihood principle and makes it possible to determine the functional dependency of the time resolution of the Philips Ingenuity PET/MR on the given count rate and to integrate this information into THOR.
Notably, the present work proves that the time resolution of the Ingenuity PET/MR can degrade by more than 250 ps over the clinically relevant range of count rates in comparison to the vendor-provided figure of 550 ps, which is only realized in the limit of extremely low count rates. Based on the previously described developments, MLAA could be integrated into THOR. The list-mode MLAA implementation is capable of deriving realistic, patient-specific attenuation maps. In particular, correct identification of osseous structures and air cavities could be demonstrated, which is very difficult or even impossible with MR-based approaches to attenuation correction. Moreover, we have confirmed that MLAA is capable of reducing metal-induced artifacts which are otherwise present in MR-based attenuation maps. However, the detailed analysis of the obtained MLAA results revealed remaining problems regarding the stability of global scaling as well as local cross-talk between activity and attenuation estimates. Therefore, further work beyond the scope of this thesis will be necessary to address these remaining issues.
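To give a feel for why the count-rate-dependent timing resolution matters, the sketch below converts a coincidence timing resolution into the spatial localization it implies along a line of response and builds the corresponding Gaussian TOF weights; this is a generic illustration, not THOR's actual TOF kernel.

```python
import numpy as np

C_LIGHT_MM_PER_PS = 0.299792458  # speed of light in mm/ps

def tof_localization_fwhm_mm(timing_resolution_ps):
    """Spatial localization FWHM along the line of response implied by a given
    coincidence timing resolution: dx = c * dt / 2 (550 ps corresponds to ~82 mm)."""
    return C_LIGHT_MM_PER_PS * timing_resolution_ps / 2.0

def tof_weights(voxel_positions_mm, tof_center_mm, timing_resolution_ps):
    """Normalized Gaussian TOF weights for voxels along an LOR, centered on the
    position implied by the measured arrival-time difference."""
    sigma = tof_localization_fwhm_mm(timing_resolution_ps) / 2.355  # FWHM -> sigma
    d = np.asarray(voxel_positions_mm, dtype=float) - tof_center_mm
    w = np.exp(-0.5 * (d / sigma) ** 2)
    return w / w.sum()

# Illustrative comparison: 550 ps (low count rate) versus a degraded 800 ps resolution
print(tof_localization_fwhm_mm(550.0), tof_localization_fwhm_mm(800.0))
```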
|
139 |
Erkundung taktiler Grafiken: Schulungsunterlagen für blinde und sehbehinderte Menschen / Exploring tactile graphics: training materials for blind and visually impaired people. Bornschein, Denise; Engel, Christin. 12 May 2020
This training course is designed primarily for self-study by blind and visually impaired people. In addition to the fundamentals and principles of tactile graphic exploration, it presents effective strategies for the systematic exploration of tactile graphics using concrete examples (bar charts, scatter plots and line charts).
Interested sighted readers can use the information as well: on the one hand, to better understand how blind readers work with tactile graphics; on the other hand, as a basis for training blind or visually impaired people themselves.
Contents:
1 Introduction
2 Fundamentals and principles of tactile graphic exploration
2.1 What is a tactile graphic?
2.2 What properties does a tactile graphic have?
2.3 What types of graphics are there?
2.4 What principles is tactile exploration based on?
3 Effective exploration strategies
3.1 Step 1 - Reading the image description
3.2 Step 2 - Getting a first overview
3.3 Step 3 - Exploring details
4 Applying the strategies to concrete examples
4.1 Exploring a bar chart
4.2 Exploring a scatter plot
4.3 Exploring a line chart
5 Summary
|
140 |
Studium dynamiky deformačních procesů ve slitinách Mg-RE pomocí in-situ experimentálních metod / Investigation of the dynamics of the deformation processes in Mg-RE alloys using in-situ experimental methods. Szabóová, Andrea. January 2021
In the present work, the dynamics of the deformation mechanisms activated in binary magnesium-gadolinium alloys was investigated with respect to the amount of Gd using in-situ experimental methods. The cast alloys are characterized by a random texture. Compression tests were performed at room temperature with simultaneous recording of the acoustic emission response. The acoustic emission signal was subsequently analysed using an advanced clustering method, providing information about the dominant deformation mechanisms. A high-speed camera was used to study the dynamics of twinning, including an estimation of the velocity of twin propagation with respect to Gd concentration. The deformation tests were repeated in the chamber of a scanning electron microscope (in-situ SEM), with the microstructure development concurrently followed using secondary electrons and electron backscatter diffraction (EBSD) at different stages of the deformation. The main goal of these measurements was to identify the active slip systems and follow the evolution of the twin volume fraction during deformation. Keywords: magnesium alloy, deformation tests, acoustic emission, high-speed camera, electron microscopy, twinning
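The abstract does not specify which clustering method was used; purely as a generic illustration of grouping acoustic-emission hits by simple waveform features (amplitude, energy, frequency), a k-means sketch with hypothetical feature values is shown below.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical AE hit features: [peak amplitude (dB), energy (a.u.), median frequency (kHz)]
# These numbers are illustrative only and are not the thesis data.
ae_features = np.array([
    [62.0, 1.5, 310.0],   # burst-like signal, e.g. twinning
    [45.0, 0.2, 140.0],   # continuous-like signal, e.g. dislocation slip
    [60.0, 1.3, 295.0],
    [47.0, 0.3, 150.0],
])

# Standardize features so no single quantity dominates the distance metric
z = (ae_features - ae_features.mean(axis=0)) / ae_features.std(axis=0)

# Two clusters as a stand-in for "twinning" vs "slip" dominated AE sources
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)
print(labels)
```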
|