511 |
Dosimetria por imagem para o planejamento específico por paciente em iodoterapia / Patient-Specific Imaging Dosimetry for Radioiodine Treatment Planning
Franzé, Daniel Luis 23 October 2015 (has links)
Radioiodine therapy is the main form of treatment for patients with thyroid diseases such as hyperthyroidism caused by Graves' disease or thyroid cancer. The treatment consists in the administration, orally or intravenously, of a radionuclide, the iodine isotope of atomic mass 131 (131I). Radioisotope therapy is applied to a variety of tumors and, because the patient receives a radioactive material, a certain amount of the radionuclide reaches organs and tissues other than those intended; even the material that accumulates in the region of interest contributes to the dose in healthy tissue. Prior treatment planning is therefore necessary. In about 80% of treatments, however, the administered activity is calculated from pre-determined quantities such as the patient's weight, age or height; only about 20% of therapies are planned individually for each patient. With this in mind, this work carries out an image-based dosimetric study intended, in the future, for use in clinical routines for patient-specific radioiodine therapy planning. SPECT-CT images were acquired of a thyroid phantom filled with 131I. The phantom was faithfully reproduced from the literature and improved to allow the insertion of thermoluminescent dosimeters (TLDs) into small cavities. The images were converted into a format readable by the GATE software, which is based on the GEANT4 toolkit and simulates the interaction of radiation with matter by the Monte Carlo method, and a command script was written to run simulations that estimate the dose in each region of the image. Because the dosimeters remained exposed to the radioactive material for several days, and to avoid excessive computational time, an equation was extrapolated so that the simulated dose could be estimated over the same exposure period. Two acquisitions were made, the first with an inhomogeneous source distribution and the second with a homogeneous one. For the inhomogeneous distribution, the simulated and TLD doses have the same order of magnitude and both vary in proportion to the distance from the source, with relative differences ranging from 1% to 39% depending on the dosimeter. For the homogeneous distribution the values also have the same order of magnitude but are well below expectation, with relative differences of up to 70%; most simulated doses are about half the measured values. The technique is not yet ready to be implemented in the clinical routine, but with studies of correction factors and new acquisitions it may be used in the near future.
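The dose extrapolation step can be illustrated with a minimal sketch. Assuming the simulation yields a dose rate at the start of the TLD exposure and that, in a sealed phantom, the time dependence is governed only by the physical decay of 131I (half-life about 8.02 days), the accumulated dose follows from integrating the decaying dose rate. The numbers are purely illustrative and this is not the thesis's actual extrapolation equation.

```python
import numpy as np

T_HALF_DAYS = 8.02                      # physical half-life of 131I
LAMBDA = np.log(2) / T_HALF_DAYS        # decay constant (1/day)

def cumulative_dose(dose_rate_0, exposure_days):
    """Integrate an exponentially decaying dose rate over the exposure time.

    D(T) = D'0 * (1 - exp(-lambda*T)) / lambda, where D'0 is the dose rate
    at the start of the TLD exposure (e.g. estimated from the simulation).
    """
    return dose_rate_0 * (1.0 - np.exp(-LAMBDA * exposure_days)) / LAMBDA

# Illustrative numbers only: 0.8 mGy/day at insertion, TLDs left in place for 5 days.
print(cumulative_dose(dose_rate_0=0.8, exposure_days=5.0))  # total dose in mGy
```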
|
512 |
Structural equation models : an application to Namibian macroeconomics
Haufiku, Stetson Homateni 31 January 2013 (has links)
Structural Equation Models (SEMs) are now widely used in almost every discipline of research. Most of the existing work on Namibian macroeconomic models follows the well-documented time series approach. In this study, we provide a statistical approach to modelling the Namibian macroeconomy for the real and fiscal sectors using SEMs. The approach is based on testing the theoretical specification laid down by the Namibian Macroeconometric Model (NAMEX) of 2004. The economic structure and the relationships among the variables are evaluated by means of exploratory and confirmatory analysis, and the results are congruent with the existing theory in terms of loading patterns. For the Maximum Likelihood (ML) and Generalized Least Squares (GLS) estimation methods, we compared the discrepancy of the parameter estimates under the commonly encountered problems of sample size, violation of underlying assumptions in the data and model misspecification. The GLS estimation method appears to provide better goodness-of-fit indices under those conditions. We have also shown that the fiscal sector is not well represented by our SEM. We recommend that further studies employ sufficiently large samples so that models are correctly specified.
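As a hedged illustration of the ML versus GLS comparison, the sketch below computes the two discrepancy (fit) functions that the estimators minimize, given a sample covariance matrix S and a model-implied covariance Sigma. The matrices are invented for illustration and are not taken from the NAMEX data.

```python
import numpy as np

def f_ml(S, Sigma):
    """Maximum-likelihood fit function: log|Sigma| + tr(S Sigma^-1) - log|S| - p."""
    p = S.shape[0]
    Sigma_inv = np.linalg.inv(Sigma)
    return (np.log(np.linalg.det(Sigma)) + np.trace(S @ Sigma_inv)
            - np.log(np.linalg.det(S)) - p)

def f_gls(S, Sigma):
    """Generalized least squares fit function: 0.5 * tr[(I - Sigma S^-1)^2]."""
    p = S.shape[0]
    R = np.eye(p) - Sigma @ np.linalg.inv(S)
    return 0.5 * np.trace(R @ R)

# Illustrative 3-variable example: S from data, Sigma implied by a fitted model.
S = np.array([[1.00, 0.45, 0.30],
              [0.45, 1.00, 0.35],
              [0.30, 0.35, 1.00]])
Sigma = np.array([[1.00, 0.40, 0.32],
                  [0.40, 1.00, 0.38],
                  [0.32, 0.38, 1.00]])
print(f_ml(S, Sigma), f_gls(S, Sigma))
```

In practice the discrepancy is minimized over the model parameters that generate Sigma; the two functions weight misfit differently, which is why their behaviour can diverge under small samples and assumption violations.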
|
513 |
Forensic and Anti-Forensic Techniques for OLE2-Formatted Documents
Daniels, Jason M. 01 December 2008 (has links)
Common office documents provide significant opportunity for forensic and anti-forensic work. The Object Linking and Embedding 2 (OLE2) specification, used primarily by Microsoft's Office Suite, contains unused or dead space regions that can be overwritten to hide covert channels of communication. This thesis describes a technique to detect those covert channels and also describes a different method of encoding that lowers the probability of detection.
The algorithm developed, called OleDetection, is based on the use of kurtosis and byte frequency distribution statistics to accurately identify OLE2 documents with covert channels. OleDetection correctly identifies 99.97 percent of documents containing a covert channel, with a false positive rate of only 0.65 percent.
The improved encoding scheme encodes the covert channel with patterns found in unmodified dead space regions. This anti-forensic technique allows the covert channel to masquerade as normal data, lowering the probability that any detection tool is able to detect its presence.
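A minimal sketch of the two statistics described above is given below. The thresholds, the decision rule and the way a dead-space region is obtained are illustrative assumptions, not the values or procedure used in the thesis.

```python
import numpy as np

def byte_frequency(region: bytes) -> np.ndarray:
    """Normalized frequency of each byte value (0-255) in a dead-space region."""
    counts = np.bincount(np.frombuffer(region, dtype=np.uint8), minlength=256)
    return counts / max(len(region), 1)

def excess_kurtosis(x: np.ndarray) -> float:
    """Excess kurtosis of a sample (0 for a normal distribution)."""
    x = np.asarray(x, dtype=float)
    m, s2 = x.mean(), x.var()
    return float(((x - m) ** 4).mean() / (s2 ** 2) - 3.0) if s2 > 0 else 0.0

def looks_like_covert_channel(region: bytes,
                              kurt_threshold: float = 5.0,
                              freq_threshold: float = 0.5) -> bool:
    """Illustrative rule: unmodified dead space is typically dominated by a
    single filler byte (a very peaked distribution); encoded data flattens it."""
    freq = byte_frequency(region)
    k = excess_kurtosis(np.frombuffer(region, dtype=np.uint8).astype(float))
    return freq.max() < freq_threshold and k < kurt_threshold

print(looks_like_covert_channel(b"\x00" * 512))          # filler bytes -> False
print(looks_like_covert_channel(bytes(range(256)) * 2))  # uniform bytes -> True
```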
|
514 |
Optimizing life-cycle maintenance cost of complex machinery using advanced statistical techniques and simulation.
El Hayek, Mustapha, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW January 2006 (has links)
Maintenance is constantly challenged with increasing productivity by maximizing up-time and reliability while at the same time reducing expenditure and investment. In the last few years it has become evident through the development of maintenance concepts that maintenance is more than just a non-productive support function; it is a profit-generating function. In the past decades, hundreds of models that address maintenance strategy have been presented. The vast majority of those models rely purely on mathematical modeling to describe the maintenance function. Due to the complex nature of the maintenance function, and its complex interaction with other functions, it is almost impossible to model maintenance accurately using mathematical modeling without sacrificing accuracy and validity through unfeasible simplifications and assumptions. Analysis presented as part of this thesis shows that stochastic simulation offers a viable alternative and a powerful technique for tackling maintenance problems. Stochastic simulation is a method of modeling a system or process (on a computer) based on random events generated by the software so that system performance can be evaluated without experimenting or interfering with the actual system. The methodology developed as part of this thesis addresses most of the shortcomings found in the literature, specifically by allowing the modeling of most of the complexities of an advanced maintenance system, such as one that is employed in the airline industry. This technique also allows sensitivity analysis to be carried out, resulting in an understanding of how critical variables may affect the maintenance and asset management decision-making process. In many heavy industries (e.g. airline maintenance) where high utilization is essential for the success of the organization, subsystems are often of a rotable nature, i.e. they rotate among different systems throughout their life-cycle. This causes a system to be composed of a number of subsystems of different ages, and therefore different reliability characteristics. This makes it difficult for analysts to estimate its reliability behavior, and therefore may result in a less-than-optimal maintenance plan. Traditional reliability models are based on detailed statistical analysis of individual component failures. For complex machinery, especially involving many rotable parts, such analyses are difficult and time-consuming. In this work, a model is proposed that combines the well-established Weibull method with discrete simulation to estimate the reliability of complex machinery with rotable subsystems or modules. Each module is characterized by an empirically derived failure distribution. The simulation model consists of a number of stages including operational up-time, maintenance down-time and a user interface allowing decisions on maintenance and replacement strategies as well as inventory levels and logistics. This enables the optimization of a maintenance plan by comparing different maintenance and removal policies using the Cost per Unit Time (CPUT) measure as the decision variable. Five different removal strategies were tested: on-failure replacements, block replacements, time-based replacements, condition-based replacements and a combination of time-based and condition-based strategies. Initial analyses performed on aircraft gas-turbine data yielded an optimal combination of modules out of a pool of multiple spares, resulting in a 16% increase in machine up-time.
In addition, it was shown that condition-based replacement is a cost-effective strategy; however, it was noted that the combination of time-based and condition-based strategies can produce slightly better results. Furthermore, a sensitivity analysis was performed to optimize the decision variables (module soft-time) and to provide an insight into the level of accuracy with which the soft-time has to be estimated. It is imperative as part of the overall reliability and life-cycle cost program to focus not only on reducing levels of unplanned (i.e. breakdown) maintenance through preventive and predictive maintenance tasks, but also on optimizing spare parts inventory management, sometimes called float hardware. It is well known that the unavailability of a spare part may result in loss of revenue, which is associated with an increase in system downtime. On the other hand, increasing the number of spares will lead to an increase in capital investment and holding cost. The results obtained from the simulation model were used in a discounted NPV (Net Present Value) analysis to determine the optimal number of spare engines. The benefits of this methodology are that it is capable of providing reliability trends and forecasts in a short time frame, based on available data. In addition, it takes into account the rotable nature of many components by tracking the life and service history of individual parts and allowing the user to simulate different combinations of rotables, operating scenarios, and replacement strategies. It is also capable of optimizing stock and spares levels as well as other related key parameters like the average waiting time, unavailability cost, and the number of maintenance events that result in extensive durations due to the unavailability of spare parts. Importantly, as more data becomes available or as greater accuracy is demanded, the model or database can be updated or expanded, thereby approaching the results obtainable by pure statistical reliability analysis.
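The policy comparison by Cost per Unit Time can be illustrated with a stripped-down stochastic simulation: a single module with Weibull-distributed failure times is renewed either on failure or at a fixed interval, and the resulting CPUT values are compared. The Weibull parameters, costs and horizon below are assumptions for illustration only; downtime, rotables and inventory are deliberately left out of this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)
SHAPE, SCALE = 2.5, 1000.0                   # Weibull failure model (hours), illustrative
C_PLANNED, C_FAILURE = 10_000.0, 60_000.0    # illustrative replacement costs

def weibull_failure_time():
    return SCALE * rng.weibull(SHAPE)

def cost_per_unit_time(replace_interval=None, horizon=1_000_000.0):
    """Simulate renewals over a long horizon and return cost per unit time.
    replace_interval=None -> on-failure policy; otherwise time-based policy."""
    t, cost = 0.0, 0.0
    while t < horizon:
        failure = weibull_failure_time()
        if replace_interval is not None and failure > replace_interval:
            t += replace_interval            # planned (preventive) replacement
            cost += C_PLANNED
        else:
            t += failure                     # run to failure
            cost += C_FAILURE
        # Simplification: replacement downtime is ignored in this sketch.
    return cost / t

print("on-failure CPUT:", cost_per_unit_time())
for interval in (300.0, 500.0, 700.0):
    print(f"time-based {interval:>5.0f} h CPUT:", cost_per_unit_time(interval))
```

Sweeping the replacement interval and picking the minimum CPUT mirrors, in miniature, the policy optimization described in the abstract.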
|
515 |
Design flood estimation for ungauged catchments in Victoria : ordinary and generalised least squares methods compared
Haddad, Khaled, University of Western Sydney, College of Health and Science, School of Engineering January 2008 (has links)
Design flood estimation in small to medium sized ungauged catchments is frequently required in hydrologic analysis and design and is of notable economic significance. For this task Australian Rainfall and Runoff (ARR) 1987, the national guideline for design flood estimation, recommends the Probabilistic Rational Method (PRM) for general use in South-East Australia. However, there have been recent developments that indicate significant potential to provide more meaningful and accurate design flood estimation in small to medium sized ungauged catchments. These include the L-moments based index flood method and a range of quantile regression techniques. This thesis focuses on the quantile regression techniques and compares two methods: ordinary least squares (OLS) and generalised least squares (GLS) based regression. It also makes a comparison with the currently recommended Probabilistic Rational Method. The OLS model is used by hydrologists to estimate the parameters of regional hydrological models. However, more recent studies have indicated that the parameter estimates are usually unstable and that the OLS procedure often violates the assumption of homoskedasticity. The GLS based regression procedure accounts for the varying sampling error, correlation between concurrent flows, correlations between the residuals and the fitted quantiles, and model error in the regional model; thus one would expect more accurate flood quantile estimation by this method. This thesis uses data from 133 catchments in the state of Victoria to develop prediction equations involving readily obtainable catchment characteristics data. The GLS regression procedure is explored further by carrying out a four-stage generalised least squares analysis in which the development of the prediction equations is based on relating hydrological statistics such as mean flows, standard deviations, skewness and flow quantiles to catchment characteristics. This study also presents the validation of the two techniques by carrying out a split-sample validation on a set of independent test catchments. The PRM is also tested by deriving an updated PRM technique with the new data set and carrying out a split-sample validation on the test catchments. The results show that GLS based regression provides more accurate design flood estimates than the OLS regression procedure and the PRM. Based on the average variance of prediction, the standard error of estimate, traditional and new statistics, rankings and the median relative error values, the GLS method provided more accurate flood frequency estimates, especially for the smaller catchments in the range of 1-300 km2. The predictive ability of the GLS model is also evident in the regression coefficient values when compared with the OLS method. However, the performance of the PRM, particularly for the larger catchments, appears to be satisfactory as well. / Master of Engineering (Honours)
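A compact sketch of the two estimators is shown below for a log-linear flood quantile model: OLS weights all catchments equally, while GLS weights them by an assumed error covariance. The synthetic data, the choice of explanatory variables and the purely diagonal covariance are illustrative simplifications of the four-stage GLS procedure described above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40                                        # gauged catchments (synthetic)
log_area = rng.uniform(0, np.log(300), n)     # ln(catchment area, km2)
log_rain = rng.normal(7.0, 0.2, n)            # ln(a design rainfall statistic)
X = np.column_stack([np.ones(n), log_area, log_rain])
beta_true = np.array([-8.0, 0.75, 1.2])
sampling_var = rng.uniform(0.05, 0.40, n)     # varies with record length (assumed known)
log_q20 = X @ beta_true + rng.normal(0, np.sqrt(sampling_var))   # ln(20-year flood quantile)

# Ordinary least squares: ignores the unequal sampling error.
beta_ols = np.linalg.lstsq(X, log_q20, rcond=None)[0]

# Generalized least squares: weight by the (assumed known) error covariance.
Lam_inv = np.diag(1.0 / sampling_var)         # diagonal here for simplicity
beta_gls = np.linalg.solve(X.T @ Lam_inv @ X, X.T @ Lam_inv @ log_q20)

print("OLS coefficients:", np.round(beta_ols, 3))
print("GLS coefficients:", np.round(beta_gls, 3))
```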
|
516 |
Lexical approaches to backoff in statistical parsing
Lakeland, Corrin, n/a January 2006 (has links)
This thesis develops a new method for predicting probabilities in a statistical parser so that more sophisticated probabilistic grammars can be used. A statistical parser uses a probabilistic grammar derived from a training corpus of hand-parsed sentences. The grammar is represented as a set of constructions - in a simple case these might be context-free rules. The probability of each construction in the grammar is then estimated by counting its relative frequency in the corpus.
A crucial problem when building a probabilistic grammar is to select an appropriate level of granularity for describing the constructions being learned. The more constructions we include in our grammar, the more sophisticated a model of the language we produce. However, if too many different constructions are included, then our corpus is unlikely to contain reliable information about the relative frequency of many constructions.
In existing statistical parsers two main approaches have been taken to choosing an appropriate granularity. In a non-lexicalised parser constructions are specified as structures involving particular parts-of-speech, thereby abstracting over individual words. Thus, in the training corpus two syntactic structures involving the same parts-of-speech but different words would be treated as two instances of the same event. In a lexicalised grammar the assumption is that the individual words in a sentence carry information about its syntactic analysis over and above what is carried by its part-of-speech tags. Lexicalised grammars have the potential to provide extremely detailed syntactic analyses; however, Zipf's law makes it hard for such grammars to be learned.
In this thesis, we propose a method for optimising the trade-off between informative and learnable constructions in statistical parsing. We implement a grammar which works at a level of granularity in between single words and parts-of-speech, by grouping words together using unsupervised clustering based on bigram statistics. We begin by implementing a statistical parser to serve as the basis for our experiments. The parser, based on that of Michael Collins (1999), contains a number of new features of general interest. We then implement a model of word clustering, which we believe is the first to deliver vector-based word representations for an arbitrarily large lexicon. Finally, we describe a series of experiments in which the statistical parser is trained using categories based on these word representations.
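The clustering step can be sketched as follows: each word is represented by a vector of bigram (left- and right-neighbour) counts, and the vectors are grouped with k-means. The toy corpus, the number of clusters and the use of k-means are illustrative assumptions; the thesis's clustering operates on an arbitrarily large lexicon and a full training corpus.

```python
import numpy as np
from sklearn.cluster import KMeans

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "a cat ate the fish . a dog ate the bone .").split()

vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# Bigram context vectors: counts of words seen immediately to the left and right.
vectors = np.zeros((len(vocab), 2 * len(vocab)))
for left, right in zip(corpus, corpus[1:]):
    vectors[idx[right], idx[left]] += 1                  # left-neighbour counts
    vectors[idx[left], len(vocab) + idx[right]] += 1     # right-neighbour counts

# Normalise rows so frequent and rare words are comparable, then cluster.
norms = np.linalg.norm(vectors, axis=1, keepdims=True)
vectors = vectors / np.where(norms == 0, 1, norms)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(vectors)
for c in range(4):
    print(c, [w for w, l in zip(vocab, labels) if l == c])
```

The resulting cluster labels sit between individual words and part-of-speech tags, which is the intermediate level of granularity the thesis argues for.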
|
517 |
Multi-purpose multi-way data analysis
Ebrahimi Mohammadi, Diako, Chemistry, Faculty of Science, UNSW January 2007 (has links)
In this dissertation, the application of multi-way analysis is extended into new areas of environmental chemistry, microbiology, electrochemistry and organometallic chemistry. Additionally, new practical aspects of some of the multi-way analysis methods are discussed. Parallel Factor Analysis Two (PARAFAC2) is used to classify a wide range of weathered petroleum oils using GC-MS data. Various chemical and data analysis issues that exist in the current methods of oil spill analysis are discussed, and the proposed method is demonstrated to have potential to be employed in the identification of the source of oil spills. Two important practical aspects of PARAFAC2 are exploited to deal with chromatographic shifts and non-diagnostic peaks. GEneralized Multiplicative ANalysis Of VAriance (GEMANOVA) is applied to assess the bactericidal activity of new natural antibacterial extracts on three species of bacteria in different structure and oxidation forms and different concentrations. In this work, while the applicability of traditional ANOVA is restricted due to the high interaction amongst the factors, GEMANOVA is shown to return robust and easily interpretable models which conform to the actual structure of the data. Peptide-modified electrochemical sensors are used to determine three metal cations, Cu2+, Cd2+ and Pb2+, simultaneously. Two sets of experiments are performed, one using a four-electrode system returning a three-way array of size (sample × current × electrode) and one using a single electrode resulting in a two-way data set of size (sample × current). The data of the former are modeled by N-PLS and those of the latter by PLS. Despite the presence of highly overlapped voltammograms and several sources of non-linearity, N-PLS returns reasonable models while PLS fails. An intramolecular hydroamination reaction is catalyzed by several organometallic catalysts to identify the most effective catalysts. The reaction of the starting material in the presence of 72 different catalysts is monitored by UV-Vis at two time points, before and after heating the mixtures in an oven. PARAFAC is applied to the three-way data set of (sample × wavelength × time) to resolve the overlapped UV-Vis peaks and to identify the effective catalysts using the estimated relative concentration of product (loadings plot of the sample mode).
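A minimal alternating-least-squares PARAFAC for a three-way array, written directly in numpy, is sketched below on synthetic data shaped like the (sample × wavelength × time) array described above. Dedicated multi-way software adds non-negativity constraints, convergence checks and handling of missing data that this sketch omits.

```python
import numpy as np

def khatri_rao(a, b):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum('ir,jr->ijr', a, b).reshape(a.shape[0] * b.shape[0], -1)

def parafac_als(X, rank, n_iter=500, seed=0):
    """Fit X[i,j,k] ~ sum_r A[i,r]*B[j,r]*C[k,r] by alternating least squares."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
    X1 = X.reshape(I, J * K)                        # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)     # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)     # mode-3 unfolding
    for _ in range(n_iter):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = X2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Synthetic rank-2 (sample x wavelength x time) array plus noise: 72 samples,
# 120 wavelengths, 2 time points, loosely mirroring the catalyst screening data.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.random((72, 2)), rng.random((120, 2)), rng.random((2, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.standard_normal((72, 120, 2))
A, B, C = parafac_als(X, rank=2)
print(A.shape, B.shape, C.shape)   # sample, wavelength and time loadings
```

The sample-mode loadings in A play the role of the estimated relative concentrations used above to rank catalysts.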
|
518 |
Multivariate control charts for nonconformities
Chattinnawat, Wichai 05 September 2003 (has links)
When the nonconformities are independent, a multivariate control chart for nonconformities, called a demerit control chart, is proposed using a distribution approximation technique called an Edgeworth expansion. For a demerit control chart, an exact control limit can be obtained in special cases, but not in general. The proposed demerit control chart uses an Edgeworth expansion to approximate the distribution of the demerit statistic and to compute the demerit control limits. A simulation study shows that the proposed method yields reasonably accurate results in determining the distribution of the demerit statistic and hence the control limits, even for small sample sizes. The simulation also shows that the performance of the demerit control chart constructed using the proposed method is very close to the advertised performance for all sample sizes.
Since the demerit control chart statistic is a weighted sum of the nonconformities, the performance of the demerit control chart naturally depends on the weights assigned to the nonconformities. How to select weights that give the best performance for the demerit control chart has not yet been addressed in the literature. A methodology is proposed to select the weights for a one-sided demerit control chart with an upper control limit using an asymptotic technique. The asymptotic technique does not restrict the nature of the types and classification scheme for the nonconformities and provides an optimal and explicit solution for the weights.
In the case presented so far, we assumed that the nonconformities are independent. When the nonconformities are correlated, a multivariate Poisson lognormal probability distribution is used to model the nonconformities. This distribution is able to model both positive and negative correlations among the nonconformities. A different type of multivariate control chart for correlated nonconformities is proposed. The proposed control chart can be applied to nonconformities that have any multivariate distribution, whether discrete, continuous, or something with characteristics of both, e.g., non-Poisson correlated random variables. The proposed method evaluates the deviation of the observed sample means from pre-defined targets in terms of the density function value of the sample means. The distribution of the control chart test statistic is derived using an approximation technique called a multivariate Edgeworth expansion. For small sample sizes, results show that the proposed control chart is robust to inaccuracies in assumptions about the distribution of the correlated nonconformities. / Graduation date: 2004
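A sketch of the demerit statistic and its control limit is given below: the cumulants of a weighted sum of independent Poisson counts are exact, and a one-term Edgeworth expansion corrects the normal approximation for skewness. The weights, Poisson rates and sample size are illustrative assumptions, and the thesis's full control-limit construction is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

# Illustrative demerit scheme: three defect classes with weights and Poisson rates.
w   = np.array([10.0, 5.0, 1.0])   # weights per nonconformity class
lam = np.array([0.05, 0.3, 2.0])   # mean counts per inspected unit
n   = 5                            # units per subgroup

# Cumulants of D = sum_i w_i * X_i with X_i ~ Poisson(n * lam_i), independent.
k1 = n * np.sum(w * lam)           # mean
k2 = n * np.sum(w**2 * lam)        # variance
k3 = n * np.sum(w**3 * lam)        # third cumulant
gamma1 = k3 / k2**1.5              # skewness

def edgeworth_cdf(d):
    """One-term Edgeworth approximation to P(D <= d)."""
    z = (d - k1) / np.sqrt(k2)
    return norm.cdf(z) - norm.pdf(z) * gamma1 / 6.0 * (z**2 - 1.0)

ucl_normal = k1 + 3.0 * np.sqrt(k2)            # usual 3-sigma limit
print("UCL (normal approx.):", round(ucl_normal, 2))
print("approx. false-alarm rate at that UCL:", round(1 - edgeworth_cdf(ucl_normal), 4))
# With positive skewness the exceedance probability is larger than the nominal
# 0.00135, which is why a skewness-corrected (Edgeworth-based) limit is useful.
```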
|
519 |
New non-parametric efficiency measures : an application to the U.S. brewing industry
Zelenyuk, Valentin 25 June 1999 (has links)
This study focuses on the development of new, non-parametric efficiency measures based on the idea of aggregation via merging functions. We use Shephard's (1970) axiomatic approach of distance functions as the basis for the theoretical methodology. In particular, this approach is the background for non-parametric efficiency measures defined on a linearly approximated technology set (Farrell, 1957; Charnes et al., 1987; Färe and Grosskopf, 1985).
Two new concerns are discussed: the ambiguity in Farrell efficiency measures and the inconsistency of aggregated industry efficiency measures with the constant returns to scale assumption. As a result, two types of new measures (based on the idea of aggregation) are developed: the average efficiency measures (which take into account both input- and output-oriented efficiency information) and the industry structural efficiency measures via Geometric Aggregation. The existing efficiency measures as well as the newly introduced measures are applied to a sample of the U.S. brewing industry. The data support the importance of the new measures, and the results obtained are consistent with previous studies that use similar and different (e.g., parametric) approaches. / Graduation date: 2000
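For a hedged illustration of efficiency measurement on a linearly approximated technology set, the sketch below solves the standard output-oriented Farrell/DEA linear program under constant returns to scale and then forms an output-share-weighted geometric mean of the firm scores as a stand-in for aggregation. The data and the specific aggregation weights are assumptions for illustration, not the thesis's actual measures.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: one input and one output per firm; rows = firms.
X = np.array([[2.0], [4.0], [3.0], [6.0]])   # inputs
Y = np.array([[1.0], [3.0], [2.0], [3.5]])   # outputs

def farrell_output_theta(x0, y0, X, Y):
    """Output-oriented Farrell measure on a CRS, linearly approximated technology:
    max theta s.t. sum_j lam_j x_j <= x0, sum_j lam_j y_j >= theta*y0, lam >= 0."""
    J = X.shape[0]
    c = np.r_[-1.0, np.zeros(J)]                          # maximise theta
    A_inp = np.hstack([np.zeros((X.shape[1], 1)), X.T])   # input constraints
    A_out = np.hstack([y0.reshape(-1, 1), -Y.T])          # theta*y0 - sum lam_j y_j <= 0
    A_ub = np.vstack([A_inp, A_out])
    b_ub = np.r_[x0, np.zeros(Y.shape[1])]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (J + 1), method="highs")
    return res.x[0]   # theta >= 1; 1/theta is the Farrell output efficiency score

thetas = np.array([farrell_output_theta(X[j], Y[j], X, Y) for j in range(len(X))])
scores = 1.0 / thetas
print("firm efficiency scores:", np.round(scores, 3))

# Aggregation idea (illustrative): an output-share-weighted geometric mean of scores.
shares = Y[:, 0] / Y[:, 0].sum()
print("geometric aggregate  :", round(float(np.prod(scores ** shares)), 3))
```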
|
520 |
Theory of dipole interaction in crystals
January 1946 (has links)
[by] J.M. Luttinger and L. Tisza. / "Reprinted from the Physical review, vol. 70, nos. 11 and 12, 954-964, Dec. 1 and 15, 1946." / Includes bibliographical references. / Contract OEMsr-262.
|