21. Simulations of human movements through temporal discretization and optimization. Kaphle, Manindra, January 2007.
Study of physical phenomena by means of mathematical models is common in various branches of engineering and science. In biomechanics, modelling often involves studying human motion by treating the body as a mechanical system made of interconnected rigid links. Robotics deals with similar cases, as robots are often designed to imitate human behavior. Modelling human movements is a complicated task and therefore requires several simplifications and assumptions. Available computational resources often dictate the nature and complexity of the models. In spite of these factors, several meaningful results are still obtained from the simulations.
One problem commonly encountered in real life is movement between known initial and final states in a pre-specified time. This presents a problem of dynamic redundancy, as several different trajectories can achieve the target state. Movements are mathematically described by differential equations, so modelling a movement involves solving these differential equations, along with optimization to find a cost-effective trajectory and the forces or moments required for this purpose.
In this study, an algorithm developed in Matlab is used to study the dynamics of several common human movements. The main underlying idea is based on temporal finite element discretization together with optimization. The algorithm can deal with mechanical formulations of varying degrees of complexity and allows precise definitions of initial and target states and constraints. Optimization is carried out using different cost functions related to both kinematic and kinetic variables.
Simulations show that different optimization criteria generally give different results. To arrive at a definite conclusion about which criterion is superior to the others, it is necessary to include more detailed features in the models and incorporate more advanced anatomical and physiological knowledge. Nevertheless, the algorithm and the simplified models present a platform that can be built upon to study more complex and reliable models.
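To make the temporal-discretization-plus-optimization idea concrete, here is a minimal direct-transcription sketch in Python rather than the thesis's Matlab code; the single rigid link, its parameters, the squared-torque cost and the use of scipy.optimize are illustrative assumptions, not the models used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative single-link "arm": I * theta'' = tau - m*g*l*sin(theta)  (assumed model)
I, m, g, l = 0.5, 3.0, 9.81, 0.3           # assumed inertia, mass, gravity, length
T, N = 1.0, 40                              # movement time and number of time steps
dt = T / N
theta0, omega0 = 0.0, 0.0                   # known initial state
thetaT, omegaT = np.pi / 2, 0.0             # known target state at time T

def simulate(torques):
    """Explicit Euler integration of the link dynamics over the N steps."""
    theta, omega = theta0, omega0
    for tau in torques:
        alpha = (tau - m * g * l * np.sin(theta)) / I
        theta += dt * omega
        omega += dt * alpha
    return theta, omega

def effort(torques):
    """Cost function: integral of squared torque (one possible kinetic criterion)."""
    return dt * np.sum(np.asarray(torques) ** 2)

def terminal_error(torques):
    """Equality constraint: reach the target state at the final time."""
    theta, omega = simulate(torques)
    return [theta - thetaT, omega - omegaT]

result = minimize(effort, x0=np.zeros(N), method="SLSQP",
                  constraints=[{"type": "eq", "fun": terminal_error}])
print("optimal cost:", result.fun)
```

Different cost functions (for example, squared jerk or work-based criteria) can be swapped into `effort` to reproduce the kind of criterion comparison described above.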
22. Discrete approximations to continuous distributions in decision analysis. Hammond, Robert Kincaid, July 2014.
In decision analysis, continuous uncertainties (e.g., the volume of oil in a reservoir) must be approximated by discrete distributions for use in decision trees. Many methods for this process, called discretization, have been proposed and used in practice for decades. To the author's knowledge, few studies of the methods' accuracies exist, and those were of only limited scope. This work presents a broad and systematic analysis of the accuracies of various discretization methods across large sets of distributions. The results indicate the best methods to use for approximating the moments of different types and shapes of distributions. New, more accurate methods are also presented for a variety of distributional and practical assumptions. The first part of the work assumes perfect knowledge of the continuous distribution, which might not be the case in practice. The distributions are often elicited from subject matter experts and, because of issues such as cognitive biases, may contain assessment errors. The second part of this work examines the implications of such error and shows that the differences between some discretization methods' approximations are negligible under assessment error, whereas other methods' errors are significantly larger than those caused by imperfect assessments. The final part of this work extends the analysis of the previous sections to the Program Evaluation and Review Technique (PERT). The accuracies of several PERT formulae for approximating the mean and variance are analyzed, and several new formulae are presented. The new formulae provide significant accuracy improvements over the existing ones.
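As a rough illustration of the kind of accuracy comparison described above (not the thesis's own experiments), the sketch below checks how well two widely used three-point discretizations, extended Pearson-Tukey and extended Swanson-Megill, reproduce the mean and variance of an assumed continuous distribution.

```python
import numpy as np
from scipy import stats

# Two classic three-point discretizations: (percentiles, probability weights).
methods = {
    "extended Pearson-Tukey": ([0.05, 0.50, 0.95], [0.185, 0.630, 0.185]),
    "extended Swanson-Megill": ([0.10, 0.50, 0.90], [0.300, 0.400, 0.300]),
}

dist = stats.lognorm(s=0.6, scale=100.0)   # assumed "volume of oil"-like distribution

print(f"continuous: mean={dist.mean():.2f}  var={dist.var():.2f}")
for name, (percentiles, weights) in methods.items():
    x = dist.ppf(percentiles)              # discretization points from the quantiles
    w = np.array(weights)
    mean = np.sum(w * x)
    var = np.sum(w * (x - mean) ** 2)
    print(f"{name}: mean={mean:.2f}  var={var:.2f}")
```

For reference, the classical PERT formulae examined in the final part approximate the mean by (a + 4m + b)/6 and the variance by ((b - a)/6)^2, where a, m and b are the minimum, most likely and maximum values.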
23. Autonomous qualitative learning of distinctions and actions in a developing agent. Mugan, Jonathan William, November 2010.
How can an agent bootstrap up from a pixel-level representation to autonomously learn high-level states and actions using only domain-general knowledge? This thesis attacks a piece of that problem: it assumes that an agent has a set of continuous variables describing the environment and a set of continuous motor primitives, and it poses a solution to the problem of how such an agent can learn useful states and effective higher-level actions through autonomous experience with the environment. Methods exist for learning models of the environment, and methods exist for planning; however, for autonomous learning, these methods have been used almost exclusively in discrete environments.
This thesis proposes attacking the problem of learning high-level states and actions in continuous environments by using a qualitative representation to bridge the gap between continuous and discrete variable representations. In this approach, the agent begins with a broad discretization and initially can only tell whether the value of each variable is increasing, decreasing, or remaining steady. The agent then simultaneously learns a qualitative representation (discretization) and a set of predictive models of the environment, converts these models into plans that form actions, and uses those learned actions to explore the environment.
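A minimal sketch of this kind of qualitative discretization is shown below; the landmark values, thresholds and example trace are hypothetical and are not taken from the thesis.

```python
import numpy as np

def qualitative_direction(values, eps=1e-3):
    """Map a sampled continuous signal to qualitative changes: +1, -1, or 0 (steady)."""
    dv = np.diff(values)
    return np.where(dv > eps, 1, np.where(dv < -eps, -1, 0))

def qualitative_value(x, landmarks):
    """Discretize a continuous value into the interval index between learned landmarks."""
    return int(np.searchsorted(sorted(landmarks), x))

# Hypothetical trace of one environment variable (e.g., distance from hand to block).
trace = np.array([0.50, 0.48, 0.45, 0.45, 0.45, 0.30, 0.10])
print(qualitative_direction(trace))        # prints [-1 -1  0  0 -1 -1]
landmarks = [0.15, 0.40]                   # landmarks the agent might learn over time
print([qualitative_value(x, landmarks) for x in trace])
```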
The method is evaluated using a simulated robot with realistic physics. The robot sits at a table that contains one or two blocks, as well as other distractor objects that are out of reach. The agent autonomously explores the environment without being given a task. After learning, the agent is given various tasks to determine whether it has learned the necessary states and actions to complete them. The results show that the agent was able to use this method to autonomously learn to perform the tasks.
24. Value of information and the accuracy of discrete approximations. Ramakrishnan, Arjun, January 2011.
Value of information (VOI) is one of the key features of decision analysis. This work provides a consistent and functional methodology for determining the VOI of proposed well tests in the presence of uncertainties. It aims to show that VOI analysis based on discretized versions of continuous probability distributions in conventional decision trees can be very accurate, provided the best-performing method of discrete approximation is chosen, rather than resorting to methods such as Monte Carlo simulation to determine the VOI; simplifying the probability calculations need not come at the cost of accuracy. Both the prior and the posterior probability distributions are assumed to be continuous and are discretized to find the VOI, resulting in two stages of discretization in the decision tree. Another interesting feature is that a decision lies between the two discrete approximations. This sets the model apart from conventional discretized models: because of that intermediate decision, its accuracy does not follow the rules and conventions that ordinary discrete models obey.
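A toy version of this two-stage calculation is sketched below; the three-point prior, payoffs and test reliabilities are invented for illustration, and the decision (drill or do not drill) sits between the two discrete layers, as described above.

```python
import numpy as np

# Discretized prior over reservoir volume (three-point approximation of a continuous prior).
volumes = np.array([20.0, 60.0, 150.0])        # hypothetical volumes (million barrels)
prior = np.array([0.3, 0.4, 0.3])

payoff_drill = 4.0 * volumes - 200.0           # hypothetical profit if we drill
payoff_walk = 0.0                              # profit if we do not drill

# Value without information: choose the better alternative under the prior.
ev_no_info = max(prior @ payoff_drill, payoff_walk)

# Imperfect well test: P(signal | true volume class), also discretized (rows sum to 1).
likelihood = np.array([[0.7, 0.2, 0.1],        # signals when volume is "low"
                       [0.2, 0.6, 0.2],        # ... "medium"
                       [0.1, 0.2, 0.7]])       # ... "high"

signal_prob = prior @ likelihood               # marginal probability of each signal
ev_with_info = 0.0
for s in range(3):
    posterior = prior * likelihood[:, s] / signal_prob[s]   # Bayes update per signal
    ev_with_info += signal_prob[s] * max(posterior @ payoff_drill, payoff_walk)

print("VOI =", ev_with_info - ev_no_info)      # expected value of the imperfect test
```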
The initial part of the work varies the number of points chosen in the discrete model and tests the resulting accuracy against different correlation coefficients between the information and the actual values. The latter part compares existing discretization methods and establishes the conditions under which each is optimal. The problem is treated comprehensively for both a risk-neutral and a risk-averse decision maker.
25. Suppressing Discretization Error in Langevin Simulations of (2+1)-dimensional Field Theories. Wojtas, David Heinrich, January 2006.
Lattice simulations are a popular tool for studying the non-perturbative physics of nonlinear field theories. To perform accurate lattice simulations, a careful account of the discretization error is necessary. This work studies the spatial discretization error arising from lattice-spacing dependence in Langevin simulations of anisotropic (2+1)-dimensional classical scalar field theories. A transfer integral operator (TIO) method and a one-loop renormalization (1LR) procedure are used to formulate effective potentials containing counterterms intended to suppress the lattice-spacing dependence. The two effective potentials were tested numerically for a phi-4 model. A high-accuracy modified Euler method was used to evolve a phenomenological Langevin equation, and large-scale Langevin simulations were performed in parameter ranges determined to be appropriate. Attempts at extracting correlation lengths as a means of determining the effectiveness of each method were not successful, since the lattice sizes used in this study were not large enough to obtain an accurate representation of thermal equilibrium. As an alternative, the initial behaviour of the ensemble field average was observed. The results show that the TIO method successfully suppressed the lattice-spacing dependence in a mean-field limit, while the 1LR method performed poorly.
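The general shape of such a simulation can be sketched as follows: a Heun-type (modified Euler) update for an overdamped Langevin equation with a phi-4 potential on a 2D periodic lattice, monitoring the field average. The lattice size, couplings, temperature and the absence of any counterterm are illustrative choices, not the parameters or effective potentials studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
L, a, dt, T = 64, 1.0, 0.01, 1.0           # lattice size, spacing, time step, temperature
m2, lam = -1.0, 1.0                        # phi-4 parameters: V = m2/2 phi^2 + lam/4 phi^4
phi = 0.01 * rng.standard_normal((L, L))   # small random initial field

def drift(phi):
    """Deterministic force: lattice Laplacian minus dV/dphi."""
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / a**2
    return lap - (m2 * phi + lam * phi**3)

noise_amp = np.sqrt(2.0 * T * dt) / a      # discrete-delta normalization of the noise

for step in range(2000):
    xi = noise_amp * rng.standard_normal((L, L))
    predictor = phi + dt * drift(phi) + xi                          # Euler predictor step
    phi = phi + 0.5 * dt * (drift(phi) + drift(predictor)) + xi     # Heun-type corrector
    if step % 500 == 0:
        print(step, phi.mean())            # field average, the quantity monitored above
```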
26. Euler discretization for impulsive control (Discretização de Euler para controle impulsivo). Porto, Daniella, January 2012.
Advisor: Geraldo Nunes Silva / Committee: Orizon Pereira Ferreira / Committee: Fernando Manuel F. Lobo Pereira / Abstract: The aim of this work is to study the impulsive control system of [Wolenski and Zabic 2007] for the case in which the system is given by an equality and is modified by the addition of two abstract controls. The study was carried out using two approaches. In the first, we reparameterize the initial system using the distribution function associated with the atomic measure and, through an Euler discretization of the reparameterized system, obtain a sequence of solutions that graph-converges to the solution of the original system under certain hypotheses. In the second approach, we define a new system associated with a sequence of absolutely continuous measures that graph-converges to the atomic measure; from this new system we obtain a sequence of solutions with the graph-convergence property with respect to the solution of the original system. / Master's degree
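The reparameterization idea can be illustrated on a toy scalar system (the dynamics, the single atom of the measure and the step counts below are invented for illustration and are not the systems treated in the dissertation): physical time is frozen while the jump arc driven by the atomic measure is traversed, and the whole reparameterized system is advanced by Euler's method.

```python
import numpy as np

# Toy impulsive system: dx = f(x) dt + g(x) dmu, with mu = weight * delta at t = t_jump.
f = lambda x: -0.5 * x            # ordinary drift (assumed)
g = lambda x: 1.0 + 0.2 * x       # jump dynamics driven by the measure (assumed)
t_jump, weight, t_end = 1.0, 2.0, 2.0

def euler_graph_completion(x0, n_flow, n_jump):
    """Euler steps in reparameterized 'time': flow arcs with dt > 0, a jump arc with dt = 0."""
    x, t = x0, 0.0
    path = [(t, x)]
    for (duration, is_jump) in [(t_jump, False), (weight, True), (t_end - t_jump, False)]:
        steps = n_jump if is_jump else n_flow
        h = duration / steps
        for _ in range(steps):
            x += h * (g(x) if is_jump else f(x))   # dx/ds = g during the jump, f otherwise
            t += 0.0 if is_jump else h             # physical time stands still on the jump arc
            path.append((t, x))
    return path

coarse = euler_graph_completion(x0=1.0, n_flow=20, n_jump=20)
fine = euler_graph_completion(x0=1.0, n_flow=400, n_jump=400)
print(coarse[-1], fine[-1])
```

Refining both step counts gives a sequence of discrete solutions that approaches the limiting trajectory, which is the kind of convergence behaviour the first approach relies on.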
27. Normalization and statistical methods for cross-platform expression array analysis. Mapiye, Darlington S, January 2012.
Magister Scientiae - MSc / A large volume of gene expression data exists in public repositories such as the NCBI Gene Expression Omnibus (GEO) and the EBI ArrayExpress, and there is a significant opportunity to re-use these data in various combinations for novel in-silico analyses that would otherwise be too costly to perform, or for which equivalent sample numbers would be difficult to collect. For example, combining and re-analysing large numbers of datasets from the same cancer type would increase statistical power while weakening the effects of individual study-specific variability, resulting in more reliable gene expression signatures. Similarly, as the number of normal control samples associated with various cancer datasets is often limiting, datasets can be combined to establish a reliable baseline for accurate differential expression analysis. However, combining different microarray studies is hampered by the fact that different studies use different analysis techniques, microarray platforms and experimental protocols. We have developed and optimised a method that transforms gene expression measurements from continuous to discrete data points by grouping similarly expressed genes into quantiles on a per-sample basis, after cross-mapping each probe on each chip to the gene it represents; this enables us to integrate experiments across different platforms based on the genes they have in common. We optimised the quantile discretization method on previously published prostate cancer datasets produced on two different array technologies and then applied it to a larger breast cancer dataset of 411 samples from 8 microarray platforms. Statistical analysis of the breast cancer datasets identified 1371 differentially expressed genes. Cluster, gene set enrichment and pathway analysis identified functional groups previously described in breast cancer, and we also identified a novel module of genes encoding ribosomal proteins that has not been reported before but whose overall functions have been implicated in cancer development and progression. The former indicates that our integration method does not destroy the statistical signal in the original data, while the latter is strong evidence that the increased sample size increases the chances of finding novel gene expression signatures. Such signatures are also robust to inter-population variation and show promise for translational applications such as tumour grading, disease subtype classification, informing treatment selection and molecular prognostics.
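A compact sketch of the per-sample quantile discretization step is shown below; the simulated expression matrix, gene and sample names, and the choice of ten quantile bins are assumptions for illustration, and probes are assumed to have already been cross-mapped to genes.

```python
import numpy as np
import pandas as pd

# Rows: genes common to all platforms; columns: samples (possibly from different studies).
expr = pd.DataFrame(
    np.random.default_rng(1).lognormal(mean=2.0, sigma=1.0, size=(1000, 6)),
    index=[f"GENE{i}" for i in range(1000)],
    columns=[f"sample{j}" for j in range(6)],
)

def quantile_discretize(column, n_bins=10):
    """Replace each expression value by its quantile bin (0..n_bins-1) within the sample."""
    return pd.qcut(column.rank(method="first"), q=n_bins, labels=False)

discrete = expr.apply(quantile_discretize, axis=0)   # per-sample, so platforms become comparable
print(discrete.head())
```

Because the binning is done within each sample, the resulting values are comparable across platforms even when the raw intensity scales are not.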
29. DATA PREPROCESSING MANAGEMENT SYSTEM. Anumalla, Kalyani, January 2007.
No description available.
30. Nonlinear Vibrations of Doubly Curved Cross-Ply Shallow Shells. Alhazza, Khaled, 13 December 2002.
The objective of this work is to study the local and global nonlinear vibrations of isotropic single-layered and multi-layered cross-ply doubly curved shallow shells with simply supported boundary conditions. The study is based on the full nonlinear partial-differential equations of motion for shells. These equations of motion are based on the von Kármán-type geometric nonlinear theory and the first-order shear-deformation theory, and they are developed using a variational approach. Many approximate shell theories are presented.
We used two approaches to study the responses of shells to a primary resonance: a direct approach and a discretization approach. In the discretization approach, the nonlinear partial-differential equations are discretized using the Galerkin procedure to reduce them to an infinite system of nonlinearly coupled second-order ordinary-differential equations. An approximate solution of this set is then obtained by using the method of multiple scales for the case of primary resonance. The resulting equations describing the modulations of the amplitude and phase of the excited mode are used to generate frequency- and force-response curves. The effect of the number of modes retained in the approximation on the predicted responses is discussed and the shortcomings of using low-order discretization models are demonstrated. In the direct approach, the method of multiple scales is applied directly to the nonlinear partial-differential equations of motion and associated boundary conditions for the same cases treated using the discretization approach. The results obtained from these two approaches are compared.
For the global analysis, a finite number of equations are integrated numerically to calculate the limit cycles and their stability, and hence their bifurcations, using Floquet theory. The use of this theory requires integrating 2n + (2n)^2 nonlinear first-order ordinary-differential equations simultaneously, where n is the number of modes retained in the discretization. A convergence study is conducted to determine the number of modes needed to obtain robust results.
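As an illustration of why 2n + (2n)^2 equations appear, the sketch below integrates a generic two-mode (n = 2) nonlinear modal system, standing in for the discretized shell equations, together with its monodromy matrix over one forcing period and reads off the Floquet multipliers; the modal equations and parameters are placeholders, not the shell model of this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 2                                # number of retained modes (illustrative)
dim = 2 * n                          # first-order state dimension
Omega = 1.0                          # forcing frequency; assumed period of the limit cycle
period = 2 * np.pi / Omega

def modal_rhs(t, q):
    """Generic weakly nonlinear modal equations standing in for the discretized shell."""
    x, v = q[:n], q[n:]
    acc = -np.array([1.0, 2.25]) * x - 0.05 * v - 0.5 * x**3 + \
          np.array([0.3, 0.0]) * np.cos(Omega * t)
    return np.concatenate([v, acc])

def augmented_rhs(t, y):
    """State (2n values) plus monodromy matrix ((2n)^2 values), integrated together."""
    q, Phi = y[:dim], y[dim:].reshape(dim, dim)
    # Jacobian of modal_rhs with respect to the state, evaluated along the orbit.
    J = np.zeros((dim, dim))
    J[:n, n:] = np.eye(n)
    J[n:, :n] = np.diag(-np.array([1.0, 2.25]) - 1.5 * q[:n]**2)
    J[n:, n:] = -0.05 * np.eye(n)
    return np.concatenate([modal_rhs(t, q), (J @ Phi).ravel()])

q0 = 0.1 * np.ones(dim)              # a point assumed to lie near a periodic orbit
y0 = np.concatenate([q0, np.eye(dim).ravel()])
sol = solve_ivp(augmented_rhs, (0.0, period), y0, rtol=1e-9, atol=1e-9)
multipliers = np.linalg.eigvals(sol.y[dim:, -1].reshape(dim, dim))
print("Floquet multipliers:", multipliers)
print("stable limit cycle:", np.all(np.abs(multipliers) < 1.0))
```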
The discretized system of equations is used to study the nonlinear vibrations of shells subjected to subharmonic resonances of order one-half. The effect of the number of modes retained in the approximation is presented, as is the effect of the number of layers on the shell parameters.
Modal interaction between the first and second modes in the case of a two-to-one internal resonance is investigated. We use the method of multiple scales to determine the modulation equations that govern the slow dynamics of the response. A pseudo-arclength scheme is used to determine the fixed points of the modulation equations, and the stability of these fixed points is investigated. In some cases, the fixed points undergo Hopf bifurcations, which result in dynamic solutions. A combination of long-time integration and Floquet theory is used to determine the detailed solution branches and chaotic solutions and their stability. The limit cycles may undergo symmetry-breaking, saddle-node, and period-doubling bifurcations. / Ph. D.