11 |
Suppressing Discretization Error in Langevin Simulations of (2+1)-dimensional Field Theories. Wojtas, David Heinrich, January 2006.
Lattice simulations are a popular tool for studying the non-perturbative physics of nonlinear field theories. To perform accurate lattice simulations, a careful account of the discretization error is necessary. Spatial discretization error arising from lattice spacing dependence in Langevin simulations of anisotropic (2 + 1)-dimensional classical scalar field theories is studied. A transfer integral operator (TIO) method and a one-loop renormalization (1LR) procedure are used to formulate effective potentials. The effective potentials contain counterterms intended to suppress the lattice spacing dependence. The two effective potentials were tested numerically in the case of a phi-4 model. A high-accuracy modified Euler method was used to evolve a phenomenological Langevin equation, and large-scale Langevin simulations were performed in parameter ranges determined to be appropriate. Attempts at extracting correlation lengths as a means of determining the effectiveness of each method were not successful: the lattices used in this study were not large enough to obtain an accurate representation of thermal equilibrium. As an alternative, the initial behaviour of the ensemble field average was observed. Results showed that the TIO method was successful at suppressing lattice spacing dependence in a mean-field limit, while the 1LR method performed poorly.
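The abstract does not give the exact update scheme, so the following is only a minimal sketch of Langevin lattice evolution for a phi-4 potential, using a plain Euler–Maruyama step in place of the thesis's high-accuracy modified Euler method; the lattice size, spacing, time step, temperature, and noise normalization are all illustrative assumptions.

```python
import math
import random

def laplacian(phi, i, j, a):
    """Nearest-neighbour lattice Laplacian with periodic boundaries."""
    n = len(phi)
    return (phi[(i + 1) % n][j] + phi[(i - 1) % n][j]
            + phi[i][(j + 1) % n] + phi[i][(j - 1) % n]
            - 4.0 * phi[i][j]) / a ** 2

def langevin_step(phi, a, dt, temperature, rng):
    """One Euler-Maruyama update of an overdamped Langevin equation for
    V(phi) = -phi^2/2 + phi^4/4 (noise normalization is schematic)."""
    n = len(phi)
    noise_amp = math.sqrt(2.0 * temperature * dt)
    return [[phi[i][j]
             + dt * (laplacian(phi, i, j, a) + phi[i][j] - phi[i][j] ** 3)
             + noise_amp * rng.gauss(0.0, 1.0)
             for j in range(n)] for i in range(n)]

rng = random.Random(0)
n, a, dt, temperature = 8, 0.5, 1.0e-3, 0.1    # illustrative values
phi = [[rng.gauss(1.0, 0.1) for _ in range(n)] for _ in range(n)]
for _ in range(200):
    phi = langevin_step(phi, a, dt, temperature, rng)
# Early-time ensemble/field average, the observable used in the study.
mean_field = sum(map(sum, phi)) / n ** 2
```

Tracking `mean_field` over early times mirrors the abstract's fallback observable; a counterterm-corrected effective potential would simply replace the drift term above.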
|
12 |
Discretização e geração de gráficos de dados em aprendizado de máquina / Attribute discretization and graphics generation in machine learning. Voltolini, Richardson Floriani, 17 November 2006.
The great quantity and variety of information acquired and stored electronically, together with the lack of human capacity to analyze it, have motivated the development of Data Mining (DM), a process that attempts to extract new and useful knowledge from large databases. One of the steps of the DM process is data preprocessing. The main goals of this step are to give the user a better understanding of the data being used and to transform the data so that it is appropriate for the next step of the DM process, pattern extraction. A technique serving the first goal is the graphic representation of the records (examples) of a database; there are various methods to generate such graphic representations, each with its own characteristics and objectives. Also in the preprocessing step, in order to transform the raw data into a form more suitable for the next step of the DM process, various techniques can be applied. One of them is data discretization, which transforms a continuous database attribute into a discrete one. This work presents some frequently used methods of graph generation and data discretization. Regarding graph generation, we designed and implemented the DISCOVERGRAPHICS system, which offers different interfaces for generating data graphics; these interfaces allow advanced users, beginners, and other computational systems to access the system's facilities. Regarding the second subject, data discretization, we considered various frequently used supervised and unsupervised methods and proposed a new unsupervised method called K-MeansR. These methods were compared with each other experimentally, using several evaluation measures, and statistical tests were run to analyze the results. The results showed that the proposed method performed better than many of the other discretization methods considered in this work.
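The abstract does not define K-MeansR itself, so the sketch below shows only the generic idea of unsupervised discretization by one-dimensional k-means: cluster the continuous attribute and replace each value with the index of its nearest centroid. The function name, initialization, and data are hypothetical.

```python
def kmeans_discretize(values, k, iters=100):
    """Unsupervised discretization by 1-D k-means (k >= 2 assumed):
    each continuous value is mapped to the index of its nearest
    cluster centroid."""
    srt = sorted(values)
    # Initialize centroids at evenly spaced positions in the sorted data.
    centroids = [srt[int(i * (len(srt) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda c: abs(v - centroids[c]))].append(v)
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:   # converged
            break
        centroids = new
    labels = [min(range(k), key=lambda c: abs(v - centroids[c])) for v in values]
    return labels, centroids

# Hypothetical continuous attribute with three natural groups.
data = [0.1, 0.15, 0.2, 5.0, 5.1, 4.9, 10.0, 10.2, 9.8]
labels, centroids = kmeans_discretize(data, k=3)
```

The resulting `labels` form the discrete attribute handed to the next DM step; supervised methods would instead place cut points using the class attribute.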
|
13 |
Decision support using Bayesian networks for clinical decision making. Ogunsanya, Oluwole Victor, January 2012.
This thesis investigates the use of Bayesian Networks (BNs), augmented by the Dynamic Discretization Algorithm, to model a variety of clinical problems. In particular, it demonstrates four novel applications of BNs and dynamic discretization. Firstly, it demonstrates the flexibility of the Dynamic Discretization Algorithm in modeling existing medical knowledge using appropriate statistical distributions; many practical applications of BNs use the relative-frequency approach when translating existing medical knowledge into a prior distribution, and this approach does not capture the full uncertainty surrounding the prior knowledge. Secondly, it demonstrates a novel use of the multinomial BN formulation in learning the parameters of categorical variables: the traditional approach requires a fixed number of parameters during the learning process, whereas this framework allows an analyst to generate a multinomial BN model based on the number of parameters required. Thirdly, it presents a novel application of the multinomial BN formulation and dynamic discretization to learning causal relations between variables; the idea is to consider competing causal relations as hypotheses and use data to identify the best hypothesis. The results show that BN models can provide an alternative to conventional causal learning techniques. The fourth novel application is the use of Hierarchical Bayesian Network (HBN) models, augmented by dynamic discretization, for meta-analysis of clinical data; the results show that BN models can provide an alternative to classical meta-analysis techniques. The thesis presents two clinical case studies to demonstrate these novel applications. The first uses data from a multi-disciplinary team at the Royal London Hospital to demonstrate the flexibility of the multinomial BN framework in learning the parameters of a clinical model. 
The second demonstrates the use of BNs and dynamic discretization to solve a decision problem. In summary, the combination of the Junction Tree Algorithm and the Dynamic Discretization Algorithm provides a unified modeling framework for solving interesting clinical problems.
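As a sketch of the point about relative frequencies versus full uncertainty, the snippet below contrasts a relative-frequency estimate of a categorical node's parameters with a conjugate Dirichlet–multinomial posterior, which retains uncertainty in its parameters; the counts and uniform prior are hypothetical, and this is not the thesis's dynamic discretization algorithm.

```python
def dirichlet_posterior(prior_alphas, counts):
    """Conjugate update for a categorical variable: alpha_i' = alpha_i + n_i."""
    return [a + n for a, n in zip(prior_alphas, counts)]

def posterior_mean(alphas):
    """Posterior mean of each category probability under Dirichlet(alphas)."""
    total = sum(alphas)
    return [a / total for a in alphas]

counts = [8, 1, 1]                               # hypothetical observed counts
rel_freq = [c / sum(counts) for c in counts]     # relative-frequency estimate
alphas = dirichlet_posterior([1.0, 1.0, 1.0], counts)
post = posterior_mean(alphas)
# Unlike rel_freq, the posterior alphas also quantify uncertainty, e.g. via
# the Dirichlet variance alpha_i*(total-alpha_i) / (total**2 * (total+1)).
```

The point estimates differ only slightly (smoothing toward the prior), but only the Dirichlet form carries a credible spread into downstream inference.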
|
15 |
Modelling queueing networks with blocking using probability mass fitting. Tancrez, Jean-Sébastien, 18 March 2009.
In this thesis, we are interested in modelling queueing networks with finite buffers and general service time distributions. Queueing network models have proven to be very useful tools for evaluating the performance of complex systems in many application fields (manufacturing, communication networks, traffic flow, etc.). In order to analyze such networks, the original distributions are most often transformed into tractable distributions so that Markov theory can be applied. Our main originality lies in this step of the modelling process: we propose to discretize the original distributions by probability mass fitting (PMF). The PMF discretization is simple: the probability masses on regular intervals are computed and aggregated on a single value in the corresponding interval. PMF has the advantages of being simple and refinable, and of conserving the shape of the distribution. Moreover, we show that it does not require more phases, and thus more computational effort, than competing methods.
From the distributions transformed by PMF, the evolution of the system can be modelled by a discrete Markov chain, and the performance of the system can be evaluated from the chain. This global modelling method leads to various interesting results. First, we propose two methodologies leading to bounds on the cycle time of the system; in particular, a tight lower bound on the cycle time can be computed. Second, probability mass fitting leads to accurate approximations of the performance measures (cycle time, work-in-progress, flow time, etc.); together with the bounds, the approximations allow one to bracket the exact measure with certainty. Third, the cycle time distribution can be computed in the discretized time scale and proves to be a good approximation of the original cycle time distribution; the distribution provides more information on the behavior of the system than the isolated expectation (to which other methods are limited). Finally, in order to analyze larger networks, the decomposition technique can be applied after PMF. We show that the accuracy of the performance evaluation remains good, and that the ability of PMF to accurately estimate the distributions improves the application of the decomposition. In conclusion, we believe that probability mass fitting is a valuable alternative for building tractable distributions for the analytical modelling of queueing networks.
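The PMF step described above ("probability masses on regular intervals are computed and aggregated on a single value") can be sketched directly; the choice of the midpoint as the representative value, the exponential service-time example, and the lumping of the tail mass onto the last point are illustrative assumptions.

```python
import math

def pmf_fit(cdf, upper, n_intervals):
    """Probability mass fitting: the mass of each regular interval is
    computed from the CDF and aggregated on a single representative
    value (the midpoint here); tail mass beyond `upper` is lumped onto
    the last point."""
    h = upper / n_intervals
    points, masses = [], []
    for i in range(n_intervals):
        lo, hi = i * h, (i + 1) * h
        points.append(0.5 * (lo + hi))
        masses.append(cdf(hi) - cdf(lo))
    masses[-1] += 1.0 - cdf(upper)   # residual tail mass
    return points, masses

# Hypothetical exponential service time with rate 2 (mean 0.5).
rate = 2.0
cdf = lambda t: 1.0 - math.exp(-rate * t)
points, masses = pmf_fit(cdf, upper=3.0, n_intervals=12)
mean_disc = sum(p * m for p, m in zip(points, masses))
```

The discrete pair `(points, masses)` is what would feed the discrete Markov chain; refining `n_intervals` refines the fit while preserving the distribution's shape.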
|
16 |
Adaptive discrete-ordinates algorithms and strategies. Stone, Joseph Carlyle, 15 May 2009.
The principal approaches for discretizing the direction variable in particle transport calculations are the discrete-ordinates method and function-expansion methods. Both approaches are limited if the transport solution is not smooth.
Angular discretization errors in the discrete-ordinates method arise from the inability
of a given quadrature set to accurately perform the needed integrals over the direction
("angular") domain. We propose that an adaptive discrete-ordinate algorithm will be
useful in many problems of practical interest. We start with a "base quadrature set" and
add quadrature points as needed in order to resolve the angular flux function. We
compare an interpolated angular-flux value against a calculated value. If the values agree within a user-specified tolerance, the point is not added; otherwise it is. Upon the addition of a point we must recalculate the quadrature weights.
Our interpolatory functions map angular-flux values at the quadrature directions to a
continuous function that can be evaluated at any direction. We force our quadrature
weights to be consistent with these functions in the sense that the quadrature integral of
the angular flux is the exact integral of the interpolatory function (a finite-element methodology that determines coefficients by collocation instead of the usual weighted-residual procedure).
We demonstrate our approach in two-dimensional Cartesian geometry, focusing on the azimuthal direction. The interpolative methods we test are: simple linear; linear in sine and cosine; an Abu-Shumays “base” quadrature with a simple linear adaptive; and an Abu-Shumays “base” quadrature with a linear-in-sine-and-cosine adaptive. In the latter
two methods the local refinement does not reduce the ability of the base set to integrate
high-order spherical harmonics (important in problems with highly anisotropic
scattering).
We utilize a variety of one-group test problems to demonstrate that in all cases,
angular discretization errors (including "ray effects") can be eliminated to whatever
tolerance the user requests. We further demonstrate through detailed quantitative
analysis that local refinement does indeed produce a more efficient placement of
unknowns.
We conclude that this work introduces a very promising approach to a long-standing
problem in deterministic transport, and we believe it will lead to fruitful avenues of
further investigation.
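A minimal sketch of the adaptive idea, under simplifying assumptions: a base set of azimuthal directions, refinement wherever simple linear interpolation of the angular flux misses the computed value by more than a tolerance, and weights recomputed to be consistent with piecewise-linear interpolation (the trapezoidal rule on the periodic angular domain). The test flux is hypothetical.

```python
import math

def trapezoid_weights(angles, period=2 * math.pi):
    """Quadrature weights consistent with piecewise-linear interpolation
    on a periodic angular domain (trapezoidal rule, nonuniform spacing)."""
    n = len(angles)
    weights = []
    for i in range(n):
        left = angles[i] - angles[i - 1] if i > 0 else angles[0] + period - angles[-1]
        right = angles[i + 1] - angles[i] if i < n - 1 else angles[0] + period - angles[-1]
        weights.append(0.5 * (left + right))
    return weights

def refine(angles, flux, tol):
    """Add a midpoint direction wherever linear interpolation of the
    angular flux misses the computed value by more than tol."""
    out = []
    n = len(angles)
    for i in range(n):
        out.append(angles[i])
        a = angles[i]
        b = angles[i + 1] if i + 1 < n else angles[0] + 2 * math.pi
        mid = 0.5 * (a + b)
        if abs(0.5 * (flux(a) + flux(b)) - flux(mid)) > tol:
            out.append(mid % (2 * math.pi))
    return sorted(out)

flux = lambda t: 1.0 + math.cos(t) ** 4          # hypothetical angular flux
base = [2 * math.pi * i / 8 for i in range(8)]   # "base quadrature set"
refined = refine(base, flux, tol=0.05)
weights = trapezoid_weights(refined)
integral = sum(w * flux(t) for w, t in zip(weights, refined))
```

Repeating the refine-and-reweight cycle drives the quadrature error below any requested tolerance, placing directions only where the flux varies rapidly.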
|
17 |
A global optimization approach to pooling problems in refineries. Pham, Viet, 15 May 2009.
The pooling problem is an important optimization problem that is encountered in
operation and scheduling of important industrial processes within petroleum refineries.
The key objective of pooling is to mix various intermediate products to achieve desired
properties and quantities of products. First, intermediate streams from various processing
units are mixed and stored in intermediate tanks referred to as pools. The stored streams
in pools are subsequently allowed to mix to meet varying market demands. While these
pools enhance the operational flexibility of the process, they complicate the decision-making process needed for optimization. The problem of finding the least costly mixing recipe from intermediate streams to pools, and then from pools to salable products, is referred to as the pooling problem. The research objective is to contribute an approach to
solve this problem.
The pooling problem can be formulated as an optimization program whose objective is
to minimize cost or maximize profit while determining the optimal allocation of
intermediate streams to pools and the blending of pools to final products. Because of the
presence of bilinear terms, the resulting formulation is nonconvex which makes it very
difficult to attain the global solution. Consequently, there is a need to develop
computationally-efficient and easy-to-implement global-optimization techniques to solve
the pooling problem. In this work, a new approach is introduced for the global
optimization of pooling problems. The approach is based on three concepts: linearization
by discretizing nonlinear variables, pre-processing using implicit enumeration of the
discretization to form a convex-hull which limits the size of the search space, and
application of integer cuts to ensure compatibility between the original problem and the discretized formulation. The continuous quality variables contributing to bilinear terms
are first discretized. The discretized problem is a mixed integer linear program (MILP)
and can be globally solved in a computationally effective manner using a branch-and-bound method. The merits of the proposed approach are illustrated by solving test case studies from the literature and comparing with published results.
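The linearization-by-discretization idea can be illustrated on a toy single-pool blending problem: once the pool-quality variable is restricted to a grid, each candidate is a linear (here closed-form) subproblem, and the discretized problem can be enumerated, standing in for the branch-and-bound search over the MILP. All data below are hypothetical.

```python
def blend_cost(x, c1, c2, demand):
    """Linear cost of blending a fraction x of feed 1 with feed 2."""
    return demand * (x * c1 + (1 - x) * c2)

def discretized_pooling(c1, c2, q1, q2, qmax, demand, n_levels):
    """Linearization by discretizing the nonlinear variable: the pool
    composition x is restricted to a uniform grid, so the bilinear
    quality constraint becomes a family of linear candidates that can
    simply be enumerated for the global discretized optimum."""
    best = None
    for i in range(n_levels + 1):
        x = i / n_levels
        quality = x * q1 + (1 - x) * q2      # pool quality (bilinear source)
        if quality <= qmax + 1e-12:          # product specification
            cost = blend_cost(x, c1, c2, demand)
            if best is None or cost < best[1]:
                best = (x, cost)
    return best

# Hypothetical data: feed 1 is cheap but high in sulfur.
c1, c2 = 10.0, 16.0            # feed costs
q1, q2 = 3.0, 1.0              # feed qualities (e.g. sulfur %)
qmax, demand = 2.0, 100.0      # product spec and demand
x_star, cost = discretized_pooling(c1, c2, q1, q2, qmax, demand, 100)
```

Refining the grid (and, in the full approach, adding integer cuts) tightens the discretized optimum toward the continuous global solution.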
|
18 |
Discretization and Approximation Methods for Reinforcement Learning of Highly Reconfigurable Systems. Lampton, Amanda K., December 2009.
There are a number of techniques that are used to solve reinforcement learning
problems, but very few that have been developed for and tested on highly reconfigurable
systems cast as reinforcement learning problems. A reconfigurable system is a vehicle (air, ground, or water) or a collection of vehicles that can change its
geometrical features, i.e. shape or formation, to perform tasks that the vehicle could
not otherwise accomplish. These systems tend to be optimized for several operating
conditions, and then controllers are designed to reconfigure the system from one operating
condition to another. Q-learning, an unsupervised episodic learning technique
that solves the reinforcement learning problem, is an attractive control methodology
for reconfigurable systems. It has been successfully applied to a myriad of control
problems, and a number of variations were developed to avoid or alleviate limitations in earlier versions of the approach. This dissertation describes the
development of three modular enhancements to the Q-learning algorithm that solve
some of the unique problems that arise when working with this class of systems, such
as the complex interaction of reconfigurable parameters and computationally intensive
models of the systems. A multi-resolution state-space discretization method is developed
that adaptively rediscretizes the state-space by progressively finer grids around
one or more distinct Regions Of Interest within the state or learning space. A genetic
algorithm that autonomously selects the basis functions to be used in the approximation of the action-value function is applied periodically throughout the learning
process. Policy comparison is added to monitor the state of the policy encoded in the
action-value function to prevent unnecessary episodes at each level of discretization.
This approach is validated on several problems including an inverted pendulum, reconfigurable
airfoil, and reconfigurable wing. Results show that the multi-resolution state-space discretization method reduces the number of state-action pairs required to achieve a specific goal, often by an order of magnitude, and that policy comparison prevents unnecessary episodes once the policy has converged to a usable one. Results
also show that the genetic algorithm is a promising candidate for the selection
of basis functions for function approximation of the action-value function.
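A minimal sketch of the multi-resolution discretization idea on a one-dimensional state space: cells overlapping a region of interest are recursively split while the rest of the grid stays coarse. The splitting rule and parameters are illustrative assumptions, not the dissertation's algorithm.

```python
def refine_grid(cells, roi, depth):
    """Multi-resolution discretization: cells overlapping the region of
    interest (roi) are recursively halved; the rest stay coarse."""
    for _ in range(depth):
        new = []
        for lo, hi in cells:
            if hi > roi[0] and lo < roi[1]:   # cell overlaps the ROI
                mid = 0.5 * (lo + hi)
                new.extend([(lo, mid), (mid, hi)])
            else:
                new.append((lo, hi))
        cells = new
    return cells

base = [(float(i), float(i + 1)) for i in range(10)]   # coarse grid on [0, 10]
fine = refine_grid(base, roi=(4.0, 5.0), depth=3)
```

Instead of 80 uniformly fine cells, only the ROI is resolved finely, which is the source of the order-of-magnitude reduction in state-action pairs reported above.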
|
19 |
Long Characteristic Method in Space and Time for Transport Problems. Pandya, Tara M., December 2009.
Discretization and solution of the transport equation has been an area of extensive research in which many methods have been developed. Among deterministic transport methods, the method of characteristics (MOC) is one such method that has been applied to large-scale problems. Although MOC, specifically long characteristics (LC), has been thoroughly applied to discretize and solve transport problems in the spatial domain, there is a need for an equally adequate time-dependent discretization. A method has been developed that uses LC discretization
of the time and space variables in solving the transport equation. This space-time long
characteristic, STLC, method is a discrete ordinates method that applies LC
discretization in space and time and employs a least-squares approximation of sources
such as the scattering source in each cell. This method encounters the same problems
that previous spatial LC methods have dealt with concerning achieving all of the
following: particle conservation, exact solution along a ray, and smooth variation in
reaction rate for specific problems. However, the method can also produce quantities that preserve conservation in each cell; comparing these with its non-conservative results determines the extent to which the STLC method addresses the previous problems.
Results from several test problems show that this STLC method produces
conservative and non-conservative solutions that are very similar for most cases and the
difference between them vanishes as track spacing is refined. These quantities are also
compared to the results produced from a traditional linear discontinuous spatial
discretization with finite-difference time discretization. It is found that this STLC method is more accurate for streaming-dominated and scattering-dominated test problems.
Also, the solution from this STLC method approaches the steady-state diffusion limit
solution from a traditional LD method. Through asymptotic analysis and test problems,
this STLC method produces a time-dependent diffusion solution in the thick diffusive
limit that is accurate to O(E) and is similar to a continuous linear FEM discretization
method in space with time differencing. Parallel application of this method looks promising, largely because the rays along which the solution is computed are independent.
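The LC principle of an exact solution along a ray can be sketched for the simplest case: constant total cross section and a flat source in each cell the characteristic traverses. The data are hypothetical.

```python
import math

def sweep_ray(psi_in, sigma_t, q, segment_lengths):
    """Exact transport solution along one characteristic through cells
    with constant total cross section sigma_t and flat source q:
    psi_out = psi_in*exp(-sigma_t*s) + (q/sigma_t)*(1 - exp(-sigma_t*s))."""
    psi = psi_in
    trace = []
    for s in segment_lengths:
        att = math.exp(-sigma_t * s)
        psi = psi * att + (q / sigma_t) * (1.0 - att)
        trace.append(psi)
    return trace

# A ray crossing four equal cells; hypothetical data.
trace = sweep_ray(psi_in=1.0, sigma_t=2.0, q=0.5, segment_lengths=[0.5] * 4)
```

Because each segment is solved exactly and independently given its inflow, rays can be swept in parallel, which is the property the abstract highlights; the STLC extension applies the same construction to characteristics in space-time.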
|