31 |
Numerical Solution Of Nonlinear Reaction-diffusion And Wave Equations
Meral, Gulnihal, 01 May 2009
In this thesis, the two-dimensional initial and boundary value problems (IBVPs) and the one-dimensional Cauchy problems defined by the nonlinear reaction-diffusion and wave equations are solved numerically. The dual reciprocity boundary element method (DRBEM) is used for the spatial discretization of the IBVPs defined by single nonlinear reaction-diffusion equations, systems of such equations, and the nonlinear wave equation; the advantage of DRBEM for exterior regions is exploited for the latter problem. The differential quadrature method (DQM) is used for the spatial discretization of the IBVPs and Cauchy problems defined by the nonlinear reaction-diffusion and wave equations.
The DRBEM and DQM applications result in systems of first- and second-order ordinary differential equations in time. These systems are solved with three different time integration methods: the finite difference method (FDM), the least squares method (LSM) and the finite element method (FEM), and comparisons among the methods are made. In the FDM, a relaxation parameter is used to smooth the solution between consecutive time levels.
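Schematically (assumed notation, not the thesis's own), the spatial discretization reduces the reaction-diffusion and wave problems to semi-discrete systems of the form

\[
M\,\dot{\mathbf{u}}(t) + K\,\mathbf{u}(t) = \mathbf{F}\big(\mathbf{u}(t)\big),
\qquad
M\,\ddot{\mathbf{u}}(t) + K\,\mathbf{u}(t) = \mathbf{F}\big(\mathbf{u}(t)\big),
\]

where u(t) collects the nodal unknowns, M and K collect the DRBEM or DQM discretization coefficients, and F gathers the nonlinear reaction terms; the first-order form arises from the reaction-diffusion equations and the second-order form from the wave equation, and each is then advanced in time by the FDM, LSM or FEM.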
It is found that the DRBEM+FEM procedure gives better accuracy for the IBVPs defined by the nonlinear reaction-diffusion equation. The DRBEM+LSM procedure with exponential and rational radial basis functions is found suitable for the exterior wave problem. The same conclusions also hold when DQM is used for the spatial discretization instead of DRBEM, for the Cauchy problems and IBVPs defined by the nonlinear reaction-diffusion and wave equations.
|
32 |
An explorative study of consumers' attitudes towards generic medications
Tolken, Reinhard, 05 November 2012
Objective: To explore consumer attitudes towards generic medication.

Methods: A quantitative method was used in this explorative study to assess consumer attitudes towards generic medication. A survey design was utilized. A questionnaire was devised that comprised four sections assessing attitudes towards generic medication. A Living Standards Measure (LSM) assessed socio-economic status. Convenience sampling resulted in the recruitment of 266 respondents. Statistical analysis of the data included non-parametric statistics (chi-square and correlation analysis) and parametric statistics (factor analysis, MANOVA and regression analysis).

Results: More than half the respondents (54.8%) reported a preference for original medication over generic medication. A large percentage (88.9%) believe there is a place for generic medication. The majority (95%) indicate they would purchase generic medication if it proves to be just as effective as the original product. More respondents (91.2%) trust physician over pharmacist recommendations to purchase generics. More than half the respondents (57.9%) would purchase generic medication if recommended by friends, but they trust their family members more (68.6%). The findings indicate that respondents generally hold favourable attitudes towards the efficacy of generic medication, even though slightly more than half prefer original medication. Respondents indicate that pricing and branding influence their attitudes towards generic medication. Chi-square analyses indicated that more men would choose original medication and more women would choose generic medication. Age differences revealed that older consumers are more likely to choose generic medication. White respondents indicated a preference for generic medication, while Black respondents indicated a preference for original medication. Middle-class (LSM 5-8) and middle-upper-class (LSM 9) respondents prefer generic medication, while upper-class (LSM 10) respondents prefer original medication. The correlation analysis found no significant relationship between medical aid status and original or generic medication choice. A principal component factor analysis produced nine factors based on the items in the questionnaire, of which eight were subjected to further testing. These eight factors were entered into a MANOVA and tested against gender and race, with no significant differences found between men and women or between Black and White respondents. The eight factors were also subjected to regression analysis, where three of them were found to be statistically significant; these three factors can be productively explored in future research.

Implications: This explorative study focused on consumer attitudes towards generic medication; however, it was identified that consumers valued their physician's recommendation on the type of medication. For future studies, it would be beneficial to explore the attitudes of medical personnel (physicians and pharmacists) towards generic medication, as these individuals play an important role in product choice. Copyright / Dissertation (MA)--University of Pretoria, 2012. / Psychology / unrestricted
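As an illustration of the kind of non-parametric test reported above, a minimal sketch of a chi-square test of independence on a gender-by-choice contingency table (the counts are hypothetical, not the study's data):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table: rows = gender (men, women),
# columns = stated choice (original, generic). Not the study's data.
observed = [[78, 55],
            [58, 75]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p-value would indicate that choice of medication is
# associated with gender, as the dissertation reports.
```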
|
33 |
Quantitative Aspekte der Astrozyten von Wildtyp- und GFAP-/- VIM-/- Labormäusen / Quantitative aspects of astrocytes in wild-type and GFAP-/- VIM-/- laboratory mice
Tackenberg, Mark, 28 April 2011
Astrocytes perform indispensable tasks in the CNS. Under normal conditions they provide, among other things, balanced K+/H2O clearance, regulate vessel diameter, form the blood-brain barrier, carry out transmitter recycling, and modulate interneuronal signal transmission through pre- and postsynaptic mechanisms.
The functions and influences of these central nervous glial cells under pathological conditions in the CNS are far less well studied, but just as diverse. A decisive question, under physiological as well as pathological conditions, is whether the territories occupied by neighboring astrocytes overlap. If such an overlap factor were ≥ 1, several glial cells could support the same territory with their manifold functions even under pathological conditions. By contrast, the absence of a degree of overlap ≥ 1 would mean that certain regions of the neuropil are more vulnerable to noxious agents or degenerative changes.
To investigate this degree of overlap, brain sections from laboratory mice were examined with a suitable combination of methods: the average volume of the astrocytes in the motor cortex was determined by Golgi staining, their cell number per unit volume by immunohistochemical staining, and the results were documented by confocal laser scanning microscopy. From these parameters, the average overlap factor in the described area could then be calculated.
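A plausible reading of this calculation (the formula is an assumption here; the abstract does not state it explicitly) is that the overlap factor F is the mean territorial volume of a single astrocyte multiplied by the number of astrocytes per unit volume:

\[
F \;=\; \bar{V} \cdot N_V .
\]

For example, a mean volume of about 28,000 µm³ combined with a density of roughly 3.1 × 10⁻⁵ astrocytes/µm³ (about 31,000 cells/mm³) gives F ≈ 0.87, the value reported below for the young control animals; F < 1 implies gaps between territories, F > 1 implies shared coverage.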
Besides the difference in the astrocyte overlap factor between wild-type and GFAP-/- VIM-/- knockout mice, as an example of a genetically caused absence of these intermediate filaments, this work also considered the influence of advancing age, so that both young and old animals of each genotype were examined.
The following results can be summarized:
1. The presence of the intermediate filaments GFAP and vimentin appears to have no influence on the volume of the astrocytes in the motor cortex.
2. The age of the animals is significantly related to astrocyte volume. The volume occupied by the astrocyte processes of the nearly two-year-old animals, about 61,000 µm³ on average, was a good two times the volume found in young mice (about 28,000 µm³).
3. The number of astrocytes in the motor cortex is apparently not significantly influenced by age or by the presence of the intermediate filaments GFAP and vimentin.
4. The overlap factor of the astrocytes in the motor cortex was 0.87 in the young control animals and 0.96 in the young DKO animals.
5. The overlap factor of the astrocytes in the motor cortex was 2.22 in the old control animals and 2.10 in the old DKO animals.
The results show that the absence of the intermediate filaments GFAP and vimentin has no influence on the degree of astrocyte overlap in the motor cortex. The cause of phenotypically manifest diseases such as Alexander disease, which is brought about by faultily expressed GFAP in astrocytes, must accordingly be sought in other mechanisms.
Age, by contrast, had a large influence on the astrocyte overlap factor, which can be linked to recent findings on the function of astrocytes with regard to learning processes, but also to degenerative processes.

BIBLIOGRAPHIC DESCRIPTION
TABLE OF CONTENTS
LIST OF ABBREVIATIONS
1. INTRODUCTION AND RESEARCH QUESTION
1.1 The CNS / The cortex
1.2 Glial cells
1.2.1 Astrocytes
1.2.1.1 Morphology / morphometry
1.2.1.2 Functions
1.2.1.3 Reactive astrocytes
1.3 Intermediate filaments
1.3.1 Functions
1.3.2 Intermediate filaments and cell growth
1.4 Aim of the thesis
2. MATERIALS AND METHODS
2.1 The animals
2.2 Golgi staining
2.3 Immunohistochemical staining
2.3.1 The indirect detection method
2.4 The confocal microscope
2.5 Microscopic examination
2.5.1 Examination of the volumes
2.5.2 Examination of the cell numbers
2.6 Image analysis
2.6.1 Volume measurement / Golgi preparations
2.6.2 Cell counts / immunohistochemistry
2.7 Calculations / overlap factor / statistical analysis
3. RESULTS
3.1 Astrocyte volumes
3.2 Astrocyte cell numbers
3.3 The overlap factor
3.4 Summary
4. DISCUSSION
4.1 Critique of the methodology
4.1.1 Golgi staining for volume measurement
4.1.2 S100-β as a marker for determining cell numbers
4.1.3 Shrinkage of the preparations
4.2 Placement of the results in the literature / conclusions
5. SUMMARY
6. REFERENCES
DECLARATION OF AUTHORSHIP
CURRICULUM VITAE
ACKNOWLEDGEMENTS
|
34 |
EFFICIENT LSM SECONDARY INDEXING FOR UPDATE-INTENSIVE WORKLOADS
Jaewoo Shin (17069089), 29 September 2023
In recent years, massive amounts of data have been generated from various types of devices and services. For these data, update-intensive workloads, where the data update their status periodically and continuously, are common. The Log-Structured Merge tree (LSM, for short) is a widely used indexing technique in various systems, where index structures buffer insert operations in the memory layer and flush them to disk when the data size in memory exceeds a threshold. Despite its notable ability to handle write-intensive (i.e., insert-intensive) workloads, LSM suffers from degraded query performance due to its inefficient maintenance of secondary-key indexes under update-intensive workloads.

This dissertation focuses on the efficient support of update-intensive workloads for LSM-based indexes. First, the focus is on the optimization of LSM secondary-key indexes and their support for update-intensive workloads. A mechanism is introduced that enables the LSM R-tree to handle update-intensive workloads efficiently. The new LSM indexing structure is termed the LSM RUM-tree, an LSM R-tree with an Update Memo. The key insight is to reduce the maintenance cost of the LSM R-tree by leveraging an additional in-memory memo structure, while controlling the size of the memo so that it fits in memory. In the experiments, the LSM RUM-tree achieves up to 9.6x speedup on update operations and up to 2400x speedup on query operations.

Second, several significant advancements are offered in the context of the LSM RUM-tree. We provide an extended examination of LSM-aware Update Memo (UM) cleaning strategies, elucidating how effectively each strategy reduces the UM size and contributes to performance enhancements. Moreover, recognizing the need to support concurrent activity within the LSM RUM-tree, particularly in multi-threaded/multi-core environments, we introduce concurrency control for the Update Memo. A novel atomic operation, Compare and If Less than Swap (CILS), is introduced to enable seamless concurrent operations on the Update Memo. Experimental results attest to a notable 4.5x improvement in the speed of concurrent update operations compared to existing and baseline implementations.

Finally, we present a novel technique designed to improve query processing performance and optimize storage management in any secondary LSM tree. The proposed approach introduces a new framework and mechanisms aimed at addressing the specific challenges of secondary indexing in the LSM-tree structure, especially in the context of the secondary LSM B+-tree (LSM BUM-tree). Experimental results show that the LSM BUM-tree achieves up to 5.1x speedup on update-intensive workloads and 107x speedup on mixed update-and-query workloads over existing LSM B+-tree implementations.
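As an illustration (not the dissertation's implementation), the CILS semantics can be sketched as follows; the atomicity that a lock-free CILS instruction would provide is emulated here with a per-entry lock, and the memo layout is invented for the example:

```python
import threading

class UpdateMemoEntry:
    """One Update Memo entry: the latest-seen timestamp for a key.

    CILS ("Compare and If Less than Swap") atomically replaces the
    stored timestamp only if it is less than the incoming one, so
    concurrent updaters can never move an entry backwards in time.
    """
    def __init__(self, timestamp: int):
        self._timestamp = timestamp
        self._lock = threading.Lock()  # stand-in for hardware atomicity

    def cils(self, new_timestamp: int) -> bool:
        """Swap in new_timestamp iff current < new_timestamp."""
        with self._lock:
            if self._timestamp < new_timestamp:
                self._timestamp = new_timestamp
                return True   # swap happened: this update is the newest
            return False      # a newer update already won the race

# Example: two racing updates to the same key; only the later one sticks.
entry = UpdateMemoEntry(timestamp=10)
print(entry.cils(12))  # True  - 10 < 12, swapped
print(entry.cils(11))  # False - 12 is already newer
```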
|
35 |
A Runge Kutta Discontinuous Galerkin-Direct Ghost Fluid (RKDG-DGF) Method to Near-field Early-time Underwater Explosion (UNDEX) Simulations
Park, Jinwon, 22 September 2008
A coupled solution approach is presented for numerically simulating a near-field underwater explosion (UNDEX). An UNDEX consists of a complicated sequence of events over a wide range of time scales, and because of the complex physics, separate simulations for near/far-field and early/late-time are common in practice. This work focuses on near-field early-time UNDEX simulations. Under the assumption of compressible, inviscid and adiabatic flow, the fluid flow is governed by a set of Euler equations. In practical simulations, we often encounter computational difficulties that include large displacements, shocks, multi-fluid flows with cavitation, spurious waves reflecting from boundaries, and fluid-structure coupling. Existing methods and codes are not able to consider all of these characteristics simultaneously.
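For reference, the compressible Euler equations referred to above, written in two-dimensional conservation form (standard notation, assumed here rather than quoted from the dissertation):

\[
\frac{\partial}{\partial t}
\begin{pmatrix} \rho \\ \rho u \\ \rho v \\ E \end{pmatrix}
+ \frac{\partial}{\partial x}
\begin{pmatrix} \rho u \\ \rho u^{2}+p \\ \rho u v \\ (E+p)\,u \end{pmatrix}
+ \frac{\partial}{\partial y}
\begin{pmatrix} \rho v \\ \rho u v \\ \rho v^{2}+p \\ (E+p)\,v \end{pmatrix}
= \mathbf{0},
\]

where ρ is the density, (u, v) the velocity, p the pressure and E the total energy per unit volume. An equation of state closes the system; stiffened-gas (water) and JWL (explosive gas products) closures are common choices in UNDEX work.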
A robust numerical method that is capable of treating large displacements, capturing shocks, handling two-fluid flows with cavitation, imposing non-reflecting boundary conditions (NRBC) and allowing the movement of fluid grids is required. This method is developed by combining numerical techniques that include a high-order accurate numerical method with a shock capturing scheme, a multi-fluid method to handle explosive gas-water flows and cavitating flows, and an Arbitrary Lagrangian Eulerian (ALE) deformable fluid mesh. These combined approaches are unique for numerically simulating various near-field UNDEX phenomena within a robust single framework. A review of the literature indicates that a fully coupled methodology with all of these characteristics for near-field UNDEX phenomena has not yet been developed.
A set of governing equations in the ALE description is discretized by a Runge Kutta Discontinuous Galerkin (RKDG) method. For multi-fluid flows, a Direct Ghost Fluid (DGF) method coupled with the Level Set (LS) interface method is incorporated into the RKDG framework. The combination of the RKDG and DGF methods (RKDG-DGF) is the main contribution of this work, which improves the quality and stability of near-field UNDEX flow simulations. Unlike other methods, it is simpler to apply to various UNDEX applications and easier to extend to multiple dimensions. / Ph. D.
|
36 |
動態信用風險與PBJD模型下之可轉債評價 / Pricing Convertible Bonds under Dynamic Credit Risk and Pareto-Beta Jump-Diffusion Model
姚博文, Unknown Date
Convertible bonds are complex instruments that carry many risks, and in the Taiwanese convertible bond market credit risk is a very important part of their valuation. This thesis adopts a reduced-form valuation model that takes both credit risk and stock price jumps into account. The jumps are modeled with the Pareto-Beta Jump-Diffusion (PBJD) model, and a dynamic process for the credit spread is used in pricing the convertible bonds; to handle early conversion, the least squares Monte Carlo method is employed. Empirical studies are carried out on the convertible bonds of Acer and Shin Kong Financial Holding. The results show that incorporating stock price jumps indeed brings the theoretical prices closer to actual market prices.
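As an illustration (not the thesis's own code), the least squares Monte Carlo treatment of early exercise can be sketched on a plain Bermudan put under geometric Brownian motion; the PBJD jumps and dynamic credit spread of the thesis are omitted, and all parameter values are made up:

```python
import numpy as np

# Longstaff-Schwartz least squares Monte Carlo for a Bermudan put.
np.random.seed(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0
n_paths, n_steps = 50_000, 50
dt = T / n_steps

# Simulate GBM paths (the thesis would add PBJD jumps here).
z = np.random.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

# Backward induction: regress continuation value on in-the-money paths.
cashflow = np.maximum(K - S[:, -1], 0.0)           # payoff at maturity
for t in range(n_steps - 1, 0, -1):
    cashflow *= np.exp(-r * dt)                    # discount one step
    itm = K - S[:, t] > 0
    if itm.sum() > 0:
        # Quadratic polynomial basis in the stock price.
        coeffs = np.polyfit(S[itm, t], cashflow[itm], 2)
        continuation = np.polyval(coeffs, S[itm, t])
        exercise = K - S[itm, t]
        cashflow[itm] = np.where(exercise > continuation, exercise, cashflow[itm])

price = np.exp(-r * dt) * cashflow.mean()
print(f"Bermudan put price approx {price:.3f}")
```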
|
37 |
Evaluating enhanced hydrological representations in Noah LSM over transition zones: an ensemble-based approach to model diagnostics
Rosero Ramirez, Enrique Xavier, 03 June 2010
This work introduces diagnostic methods for land surface model (LSM) evaluation that enable developers to identify structural shortcomings in model parameterizations by evaluating model 'signatures' (characteristic temporal and spatial patterns of behavior) in feature, cost-function, and parameter spaces. The ensemble-based methods allow researchers to draw conclusions about hypotheses and model realism that are independent of parameter choice.

I compare the performance and physical realism of three versions of Noah LSM (a benchmark standard version [STD], a dynamic-vegetation enhanced version [DV], and a groundwater-enabled one [GW]) in simulating high-frequency near-surface states and land-to-atmosphere fluxes in situ and over a catchment at high resolution in the U.S. Southern Great Plains, a transition zone between humid and arid climates. Only at the more humid sites do the more conceptually realistic, hydrologically enhanced LSMs (DV and GW) ameliorate biases in the estimation of root-zone moisture change and evaporative fraction. Although the improved simulations support the hypothesis that groundwater and vegetation processes shape fluxes in transition zones, further assessment of the timing and partitioning of the energy and water cycles indicates that improvements to the movement of water within the soil column are needed. Distributed STD and GW underestimate the contribution of baseflow and simulate too-flashy streamflow.

This work challenges common practices and assumptions in LSM development and offers researchers more stringent model evaluation methods. I show that, because of equifinality, ad-hoc evaluation using single parameter sets provides insufficient information for choosing among competing parameterizations, for addressing hypotheses under uncertainty, or for guiding model development. Posterior distributions of physically meaningful parameters differ between models and sites, and relationships between the parameters themselves change. 'Plug and play' of modules and partial calibration likely introduce error and should be re-examined. Even though LSMs are 'physically based,' model parameters are effective and scale-, site- and model-dependent. Parameters are not functions of soil or vegetation type alone: they likely depend in part on climate and cannot be assumed to be transferable between sites with similar physical characteristics.

By helping bridge the gap between model identification and model development, this research contributes to the continued improvement of our understanding and modeling of environmental processes. / text
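A toy sketch of the ensemble-based evaluation idea (entirely illustrative: a made-up 'model' and metric stand in for Noah LSM and its signatures), showing why conclusions are drawn from distributions over parameter ensembles rather than from any single calibrated parameter set:

```python
import numpy as np

# Two competing "model structures" for the same process, each with a
# free parameter. Stand-ins for, e.g., STD vs. GW versions of an LSM.
def model_a(theta, x):
    return theta * x            # linear structure

def model_b(theta, x):
    return theta * np.sqrt(x)   # alternative structure

# Synthetic "observations" from an unknown true process.
rng = np.random.default_rng(42)
x = np.linspace(0.1, 10, 200)
obs = 1.3 * np.sqrt(x) + rng.normal(0, 0.1, x.size)

# Evaluate each structure over an ensemble of parameter draws and
# compare the *distributions* of the cost, not one tuned value.
thetas = rng.uniform(0.5, 2.5, 1000)
cost_a = [np.sqrt(np.mean((model_a(t, x) - obs) ** 2)) for t in thetas]
cost_b = [np.sqrt(np.mean((model_b(t, x) - obs) ** 2)) for t in thetas]

print(f"structure A: best RMSE {min(cost_a):.3f}, median {np.median(cost_a):.3f}")
print(f"structure B: best RMSE {min(cost_b):.3f}, median {np.median(cost_b):.3f}")
# If A's best tuned RMSE looked acceptable in isolation, a single-
# parameter-set evaluation could mask the structural advantage of B.
```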
|
38 |
Modelos numéricos aplicados à análise viscoelástica linear e à otimização topológica probabilística de estruturas bidimensionais: uma abordagem pelo Método dos Elementos de Contorno / Numerical models applied to the analysis of linear viscoelasticity and probabilistic topology optimization of two-dimensional structures: a Boundary Element Method approach
Oliveira, Hugo Luiz, 31 March 2017
The present work deals with the formulation and implementation of numerical models based on the Boundary Element Method (BEM). Inspired by engineering problems, a multidisciplinary combination is proposed as a more realistic approach.

There are common engineering materials that have a time-dependent response. In this thesis, time-dependent phenomena are approached through linear viscoelastic mechanics associated with rheological models. The formulation of Maxwell's constitutive model is derived for use with the BEM, and the resulting equations are verified on reference problems. The results show that the presented formulation can be used to represent composite structures, even in cases involving a junction between viscoelastic and non-viscoelastic materials. Additionally, the formulations remain stable in the presence of domain and edge cracks. It is found that the classical dual-BEM formulation can be used to simulate cracks with time-dependent behaviour. This result serves as the basis for further investigations in the field of fracture mechanics of viscoelastic materials.

Next, it is shown how the BEM can be combined with probabilistic concepts to make predictions of long-term behaviour. These predictions include the uncertainties inherent in engineering processes, involving the material, loading and geometry parameters. Using the concept of probability of failure, the results show that the uncertainties related to the estimation of the acting loads have the greatest impact on expected long-term performance. This finding supports studies that contribute to the improvement of structural design processes.

Another aspect of interest in this thesis is the search for optimized shapes through topology optimization. An alternative topology optimization algorithm is proposed, based on the coupling between the Level Set Method (LSM) and the BEM. The difference between the algorithm proposed here and others in the literature is the way the velocity field is obtained: the normal velocity fields are obtained by means of shape sensitivity. This change makes the algorithm well suited to the BEM, since the information necessary for computing the sensitivities resides exclusively on the boundary. It is found that the algorithm requires a particular velocity extension into the domain in order to remain stable. Restricted to two-dimensional cases, the algorithm is able to reproduce the known benchmark cases reported in the literature.

The last aspect addressed in this thesis concerns the way geometric uncertainties can influence the determination of optimized structures. Using the BEM, a probabilistic criterion is proposed that supports design choices by taking geometric sensitivity into account. The results show that deterministic criteria do not always lead to the most appropriate choices from an engineering point of view. In summary, this work contributes to the expansion and diffusion of BEM applications in structural engineering problems.
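For context, the Maxwell rheological model mentioned above, in its standard one-dimensional form (textbook notation, assumed here; the thesis derives its BEM counterpart):

\[
\dot{\varepsilon}(t) = \frac{\dot{\sigma}(t)}{E} + \frac{\sigma(t)}{\eta},
\qquad
\sigma(t) = E\,\varepsilon_0\, e^{-t/\tau}, \quad \tau = \frac{\eta}{E},
\]

where E is the spring stiffness and η the dashpot viscosity of the spring-dashpot pair in series; the second expression is the stress relaxation under a constant strain ε₀, with relaxation time τ.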
|
39 |
Estudo do comportamento de sinais OSL de BeO e Al2O3:C usando o Modelo OTOR Simplificado e Método dos Mínimos Quadrados / Study of the Behavior of OSL Signals of BeO and Al2O3:C using the Simplified OTOR Model and Least Square Method
Soares, Leonardo dos Reis Leano, 02 October 2018
The dosimetry of alpha, beta and gamma radiation is important in various applied areas; it is used in the radiation protection of patients and professionals exposed to these types of radiation. With dosimetric studies it is possible to obtain better estimates of absorbed dose and of population risks. Thermoluminescence (TL) and Optically Stimulated Luminescence (OSL) techniques are used for these dosimetric applications.

Recent studies have shown that some well-known dosimetric materials, such as carbon-doped aluminum oxide (Al₂O₃:C) and beryllium oxide (BeO), undergo changes in the observed shape of their OSL signals depending on dose rates and types of radiation. The main objective of this work was to analyse the shapes of these curves and to verify quantitatively whether or not the OSL signals of dosimeters irradiated with different types of radiation and dose rates change shape. Under the one trap, one recombination center (OTOR) model, the continuous-wave OSL (CW-OSL) signals were studied. The OTOR model is the simplest model, but it has no analytical solution, and computational solutions are costly owing to the large number of variables and parameters. In this work it was necessary to make some simplifications in order to obtain a model simple enough to fit to the data. The simplified OTOR model exhibits exponential decay behavior in the description of the CW-OSL signal. Another approach extending the simplified OTOR model was a model with two independent traps and one recombination center, which results in two exponential decays. To obtain the parameters that describe the CW-OSL signal with these models, the least squares method (LSM) was used, with parameter refinement by the Gauss method.

For both materials, the two-exponential-decay model proved superior in quality to the one-exponential-decay model, based on analysis of the χ² statistic and the behavior of the residuals. With the fits it was possible to verify differences in the behavior of the CW-OSL signal of samples irradiated in different situations; the observed differences appear in the decay or initial-signal parameters, or in the ratios between them. The fitted parameters show that the OSL signals from Al₂O₃:C and BeO irradiated with alpha, beta and gamma radiation exhibit significant differences in behavior. The differences revealed by the fits of the CW-OSL signals of the beta- and gamma-irradiated dosimeters may in part have been caused by fading, which affects the shapes of the curves and the fitted parameters in different ways. Gamma irradiations over dose and absorbed-dose-rate ranges of 22 to 122 mGy and 0.024 to 1.66 Gy/s, respectively, did not produce significant differences in the OSL signals.
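A minimal sketch of the kind of two-exponential fit described above (an illustration with synthetic data; the thesis refines parameters with the Gauss method, whereas scipy's curve_fit defaults to Levenberg-Marquardt):

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-exponential CW-OSL decay model: two independent traps, one
# recombination center, each trap contributing one exponential.
def cw_osl(t, a1, lam1, a2, lam2):
    return a1 * np.exp(-lam1 * t) + a2 * np.exp(-lam2 * t)

# Synthetic CW-OSL curve with Poisson-like noise (not measured data).
t = np.linspace(0, 60, 600)                      # stimulation time, s
true = cw_osl(t, 800.0, 0.50, 200.0, 0.05)
rng = np.random.default_rng(1)
signal = true + rng.normal(0, np.sqrt(true + 1))

# Least squares fit; p0 gives rough initial guesses for refinement.
popt, pcov = curve_fit(cw_osl, t, signal, p0=[500, 0.3, 100, 0.02])
a1, lam1, a2, lam2 = popt
residuals = signal - cw_osl(t, *popt)
chi2 = np.sum(residuals**2 / (true + 1))         # crude chi-square
print(f"fast: A={a1:.0f}, lambda={lam1:.3f}/s; "
      f"slow: A={a2:.0f}, lambda={lam2:.4f}/s; chi2/dof={chi2/(t.size-4):.2f}")
```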
|
40 |
Controle de qualidade no ajustamento de observações geodésicas / Quality control in the adjustment of geodetic observations
Klein, Ivandro, January 2012
After the adjustment of observations has been carried out by the Least Squares Method (LSM), it is possible to detect and identify non-random errors in the observations using statistical tests. Reliability theory makes use of appropriate measures to quantify the minimal detectable bias (error) in an observation, and its influence on the adjusted parameters if it goes undetected. Conventional reliability theory was developed for conventional testing procedures, such as data snooping, which assume that only one observation at a time is contaminated by gross errors. Recently, generalized measures of reliability have been developed, relating to statistical tests that assume the simultaneous existence of multiple observations with errors (outliers). Other approaches to the quality control of the adjustment, alternatives to these statistical tests, have also been proposed recently, such as the QUAD method (Quasi-Accurate Detection of outliers).

The goal of this research is to study the quality control of the adjustment of geodetic observations, by means of experiments on a GPS (Global Positioning System) network, using both conventional methods and the current state of the art. Comparisons were made between conventional reliability measures and generalized reliability measures for two simultaneous outliers, as well as between the data snooping procedure and statistical tests for the identification of multiple outliers. It was also investigated how the variances and covariances of the observations, as well as the geometry/configuration of the GPS network under study, can influence the reliability measures, both in the conventional and in the generalized approach. Finally, a comparison was made between the QUAD method and the statistical tests for the identification of errors (outliers).
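A minimal sketch of the conventional data snooping test mentioned above (an illustration on a toy linear model, with Baarda's w-test, a known a priori variance factor, and one gross error planted in the data):

```python
import numpy as np

# Toy Gauss-Markov model: y = A x + e, with known sigma0 = 1.
rng = np.random.default_rng(7)
n = 20
A = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
x_true = np.array([5.0, 0.5])
y = A @ x_true + rng.normal(0, 1.0, n)
y[12] += 6.0                       # plant a gross error in observation 12

# Least squares adjustment and residuals.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
v = y - A @ x_hat                  # residual vector

# Cofactor matrix of the residuals: Qv = I - A (A^T A)^{-1} A^T.
Qv = np.eye(n) - A @ np.linalg.inv(A.T @ A) @ A.T

# Data snooping: normalized residual w_i = v_i / (sigma0 * sqrt(Qv_ii)),
# tested one observation at a time against a normal critical value.
w = v / np.sqrt(np.diag(Qv))
suspect = np.argmax(np.abs(w))
print(f"max |w| = {abs(w[suspect]):.2f} at observation {suspect}")
print("flagged" if abs(w[suspect]) > 3.29 else "no outlier flagged")
# 3.29 is the two-sided normal critical value for alpha of about 0.001.
```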
|