  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Computational techniques for fast Monte Carlo validation of proton therapy treatment plans

Green, Andrew January 2017 (has links)
Proton therapy is an established radiotherapy technique for the treatment of complex cancers. However, problems exist in the planning of treatments, where the use of inaccurate dose modelling may lead to treatments being delivered that are not optimal. Most of the problems with the dose modelling tools used in proton therapy treatment planning lie in their treatment of processes such as multiple Coulomb scattering, so a technique that accurately models such effects is preferable. Monte Carlo simulation alleviates many of the problems in current dose models but, at present, well-validated full-physics Monte Carlo simulations require more time than is practical in clinical use. Using the well-known and well-validated Monte Carlo toolkit Geant4, an application called PTMC has been developed for the simulation of proton therapy treatment plans. Using PTMC, several techniques to improve throughput were developed and evaluated, including changes to the tracking algorithm in Geant4 and the application of large-scale parallelism on novel computing architectures such as the Intel Xeon Phi co-processor. To quantify any differences in the dose distributions simulated when applying these changes, a new dose comparison tool was also developed which is better suited than current techniques to Monte Carlo simulated dose distributions. Using an implementation of the Woodcock algorithm developed in this work, it is possible to track protons through a water phantom up to eight times faster than with the PRESTA algorithm present in Geant4, with negligible loss of accuracy. When applied to a patient simulation, the Woodcock algorithm increases throughput by up to thirty percent, though step limitation was necessary to preserve simulation accuracy. Parallelism was implemented on an Intel Xeon Phi co-processor card, where PTMC was tested with up to 244 concurrent threads.
Difficulties imposed by the limited RAM available were overcome by modifying the Geant4 toolkit and by using a novel dose collation technique. Using a single Xeon Phi co-processor, it is possible to validate a proton therapy treatment plan in two hours; with two co-processors that simulation time is halved. For the treatment plan tested, two Xeon Phi co-processors were roughly equivalent to a single 48-core AMD Opteron machine. The relative costs of Xeon Phi co-processors and traditional machines have also been investigated; at present the Intel Xeon Phi co-processor is not cost-competitive with standard hardware, costing around twice as much as an AMD machine of comparable performance. Distributed parallelism was also implemented through the Google Compute Engine (GCE). A tool called PYPE has been developed which allows users to launch large clusters in the GCE to perform arbitrary compute-intensive work. PYPE was used with PTMC to perform rapid treatment plan validation in the GCE. Using a large cluster, it is possible to validate a proton therapy treatment plan in ten minutes at a cost of roughly $10; the same plan computed locally on a 24-thread Intel Xeon machine required five hours. As an example calculation using PYPE and PTMC, a robustness study was undertaken for a proton therapy treatment plan; it demonstrates the usefulness of Monte Carlo when computing dose distributions for robustness studies, and the utility of PYPE for running numerous full-physics Monte Carlo simulations quickly. Using the tools developed in this work, a complete treatment plan robustness study, with full-physics Monte Carlo used for the dose distribution calculations, can be performed in around 26 hours at a cost of less than $500.
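The Woodcock (delta-tracking) idea referred to above replaces geometry-limited stepping with sampling against a constant majorant cross-section; "virtual" collisions absorb the difference, so boundaries never need to be recomputed mid-flight. A minimal one-dimensional sketch of the sampling loop (an illustrative reconstruction, not the thesis's PTMC code; all names here are hypothetical):

```python
import math
import random

def woodcock_track(sigma_of_x, sigma_max, x_start=0.0, rng=random):
    """Sample the position of the next *real* interaction along x.

    sigma_of_x : callable giving the local interaction cross-section at x
    sigma_max  : a constant majorant with sigma_max >= sigma_of_x(x) everywhere
    """
    x = x_start
    while True:
        # Sample a free path against the constant majorant cross-section.
        x += -math.log(rng.random()) / sigma_max
        # Accept the collision as real with probability sigma(x)/sigma_max;
        # otherwise it is a virtual collision and tracking simply continues.
        if rng.random() < sigma_of_x(x) / sigma_max:
            return x

# In a uniform medium every sampled collision is real, and the sampled
# distances reduce to ordinary exponential free paths with mean 1/sigma.
rng = random.Random(0)
paths = [woodcock_track(lambda x: 2.0, 2.0, rng=rng) for _ in range(5000)]
mean_free_path = sum(paths) / len(paths)
```

The speed-up in heterogeneous geometries comes from the rejection step: the particle never has to stop at material boundaries, at the cost of wasted samples where the local cross-section is far below the majorant.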
2

Decision-making system for buying and selling financial assets using fuzzy logic

Pereira, Claudio Robinson Tapié 20 August 2008 (has links)
The Proteu Fuzzy System is a decision-making system for buying and selling financial assets that supports the technical analyst by issuing impartial, rational signals indicating when there is a good opportunity to buy or sell a given asset (e.g. equities). Its decisions are based on an artificial-intelligence technique (fuzzy logic) together with technical indicators (moving averages, MACD and RSI). Simulations show that, for some assets, the system was able to generate profits consistently and with lower volatility than the market.
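As a rough illustration of the kind of inference such a system performs (this is not the Proteu Fuzzy System's actual rule base; the membership breakpoints and rule outputs below are invented for the example), a single-indicator fuzzy signal built on RSI could look like:

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_signal(rsi):
    """Map an RSI reading (0-100) to a crisp signal in [-1, 1].

    Positive means buy pressure, negative means sell pressure.
    Uses weighted-average (centroid-style) defuzzification.
    """
    # Fuzzification: degrees of membership in three linguistic sets.
    oversold   = triangular(rsi, -1, 0, 40)      # low RSI -> buy
    neutral    = triangular(rsi, 30, 50, 70)     # mid RSI -> hold
    overbought = triangular(rsi, 60, 100, 101)   # high RSI -> sell
    # Rule outputs: oversold -> +1 (buy), neutral -> 0, overbought -> -1 (sell).
    rules = [(oversold, 1.0), (neutral, 0.0), (overbought, -1.0)]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0
```

A full system of this kind would combine several indicators (moving-average crossovers, MACD histogram, RSI) in a joint rule base rather than a single input, but the fuzzify / infer / defuzzify pipeline is the same.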
4

Parameter testing of the evolutionary search for cellular automata in the parity problem, with emphasis on rule evaluation methods

Silva, Paulo Sérgio Germano da 17 September 2007 (has links)
Problem solving through computation based on cellular automata rules remains a challenging subject, given the scarcity of knowledge in the area. Among the several problems reported in the literature, the parity problem has generated significant interest; its aim is to ascertain the parity of the number of 1s in a binary string supplied as the initial configuration of a binary, one-dimensional cellular automaton. Although a classic problem, no rule able to solve it has been found to date; indeed, it remains uncertain whether such a solution exists. In this context, this work presents the results of evolutionary searches for good rules of one-dimensional, radius-3, binary cellular automata in the parity problem. The main focus of the research was to verify the effect of varying the strategies used to generate the samples of initial test configurations, which implies different ways of evaluating the candidate rules. This approach extends the work in [de Oliveira e Vaiano, 2005] towards better understanding, and trying to reproduce, some of the results in [Wolz e de Oliveira, 2007], where rules with excellent performance on odd-sized lattices were reported, as well as rules with good, though not robust, performance on some even-sized lattices. Despite the range of approaches tested, none led to good rules for the problem, leading to the conclusion that the excellent results obtained in [Wolz e de Oliveira, 2007] were due to the other concepts used therein, distinct from those evaluated in the present study.
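The evaluation being varied here, scoring a candidate radius-3 rule against a sample of initial configurations, can be sketched as follows (a simplified fitness function for illustration, not the thesis's evaluator; the lattice length, sample size and step budget are arbitrary choices):

```python
import random

RADIUS = 3
NBHD = 2 * RADIUS + 1  # 7-cell neighbourhood -> rule table of 2**7 = 128 bits

def step(config, rule_table):
    """One synchronous update of a cyclic one-dimensional binary CA."""
    n = len(config)
    out = []
    for i in range(n):
        # Read the 7-cell neighbourhood around cell i as a binary number.
        idx = 0
        for j in range(-RADIUS, RADIUS + 1):
            idx = (idx << 1) | config[(i + j) % n]
        out.append(rule_table[idx])
    return out

def solves_parity(rule_table, config, max_steps=None):
    """True if the CA settles to all-1s for odd parity / all-0s for even."""
    target = sum(config) % 2
    max_steps = max_steps or 2 * len(config)
    for _ in range(max_steps):
        config = step(config, rule_table)
        if all(c == target for c in config):
            return True
    return False

def fitness(rule_table, lattice_len=9, samples=50, rng=random):
    """Fraction of sampled initial configurations classified correctly.

    Varying how these samples are drawn is exactly the kind of
    rule-evaluation strategy the thesis investigates.
    """
    hits = 0
    for _ in range(samples):
        config = [rng.randrange(2) for _ in range(lattice_len)]
        hits += solves_parity(rule_table, config)
    return hits / samples
```

The all-zeros rule, for instance, drives any lattice to the all-0s fixed point in one step, so it "solves" exactly the even-parity inputs, which is why sample composition matters so much when scoring candidates.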
5

Corrected LM goodness-of-fit tests with application to stock returns

Percy, Edward Richard, Jr. 05 January 2006 (has links)
No description available.
6

Studies Of Electronic, Magnetic And Entanglement Properties Of Correlated Models In Low-Dimensional Systems

Sahoo, Shaon 09 1900 (has links) (PDF)
This thesis consists of six chapters. The first chapter gives an introduction to the field of low-dimensional magnetic and electronic systems and the relevant numerical techniques. Recent developments in molecular magnets are highlighted, and the numerical techniques are reviewed along with their advantages and disadvantages from the present perspective. Studying the entanglement of a system can give great insight into it; the last part of this chapter gives a general overview of entanglement, its measures, and its significance in studying many-body systems. Chapter 2 deals with the technique that has been developed by us for the full symmetry adaptation of non-relativistic Hamiltonians. It is advantageous both computationally and physically/chemically to exploit both the spin and spatial symmetries of a system. It has been a long-standing problem to target a state which has definite total spin and also belongs to a definite irreducible representation of a point group, particularly for non-Abelian point groups. A very general technique is discussed in this chapter: a hybrid method based on the valence-bond basis and the basis of the z-component of the total spin. This technique is not only applicable to systems with arbitrary site spins belonging to any point-group symmetry, it is also quite easy to implement computationally. To demonstrate the power of the method, it is applied to the molecular magnetic system Cu6Fe8, which has cubic symmetry. In chapter 3, the extension of this hybrid technique to electronic systems is discussed. The power of the method is illustrated by applying it to a model icosahedral half-filled electronic system. This model spans a huge Hilbert space (dimension 1,778,966) and belongs to the largest non-Abelian point group. All the eigenstates of the model are obtained using our technique. Chapter 4 deals with the thermodynamic properties of an important class of single-chain magnets (SCMs).
This class of SCMs has alternating isotropic spin-1/2 units and anisotropic high-spin units whose anisotropy axes are non-collinear. Here the anisotropy is assumed to be large and negative; as a result, the anisotropic units behave like canted spins at low temperatures, but even then a simple Ising-type model does not capture the essential physics of the system, owing to the quantum mechanical nature of the isotropic units. A transfer matrix (TM) method is developed to study the statistical behavior of this class of SCMs. For the first time, it is also discussed in detail how weak inter-chain interactions can be treated by a TM method. The finite-size effect, which becomes important for low-temperature dynamics, is also discussed. This technique is applied to a real helical chain magnet which has been studied experimentally. In the fifth chapter, the bipartite entanglement entropy of finite systems is studied using exact diagonalization techniques to examine how the entanglement changes in the presence of long-range interactions. The Pariser-Parr-Pople (PPP) model with long-range interactions is used for this purpose, and the corresponding results are compared with those for the Hubbard and Heisenberg models with short-range interactions. This study helps to explain why the density matrix renormalization group (DMRG) technique is so successful even in the presence of the long-range interactions of the PPP model. It is also investigated whether the symmetry properties of a state vector have any significance for its entanglement. Finally, an interesting observation is made on the entanglement profiles of different states, across the full energy spectrum, in comparison with the corresponding profile of the density of states. Entanglement can be localized between two noncomplementary parts of a many-body system by performing local measurements on the rest of the system. This localized entanglement (LE) depends on the chosen basis set of measurement (BSM).
In chapter six, an optimality condition for the LE is derived, which is helpful in finding optimal values of the LE and can also be of use in studying mixed states of a general bipartite system. A canonical way of localizing entanglement is further discussed, where the BSM is not chosen arbitrarily but is fully determined by the properties of the system. The LE obtained in this way, called the localized entanglement by canonical measurement (LECM), is not only easy to calculate in practice, it also provides a natural way to define the entanglement length. For spin-1/2 systems, the LECM is shown to be optimal in some important cases. At the end of the chapter, some numerical results are presented for the j1-j2 spin model to demonstrate how the LECM behaves.
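For a pure state, the bipartite entanglement entropy studied in chapter five is just the von Neumann entropy of the reduced density matrix of one part. A minimal two-qubit sketch (not the thesis's exact-diagonalization code; it uses the closed-form eigenvalues of a 2x2 matrix and assumes real amplitudes):

```python
import math

def entanglement_entropy_2qubit(amps):
    """Bipartite von Neumann entropy, in bits, of a pure two-qubit state.

    amps : real amplitudes [a00, a01, a10, a11] of |psi> in the
           |q_A q_B> product basis, assumed normalized.
    """
    a00, a01, a10, a11 = amps
    # Reduced density matrix rho_A = Tr_B |psi><psi| (2x2, real symmetric).
    r00 = a00 ** 2 + a01 ** 2
    r11 = a10 ** 2 + a11 ** 2
    r01 = a00 * a10 + a01 * a11
    # Eigenvalues of a 2x2 matrix from its trace and determinant.
    trace = r00 + r11
    det = r00 * r11 - r01 ** 2
    disc = math.sqrt(max(trace ** 2 - 4 * det, 0.0))
    entropy = 0.0
    for lam in ((trace + disc) / 2, (trace - disc) / 2):
        if lam > 1e-12:
            entropy -= lam * math.log2(lam)
    return entropy
```

A maximally entangled state such as the singlet (|01> - |10>)/sqrt(2) gives 1 bit, while any product state gives 0; for larger systems the same quantity is obtained from the Schmidt (singular-value) spectrum of the amplitude matrix rather than a closed form.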
