621 |
Investigation of Discontinuous Deformation Analysis for Application in Jointed Rock Masses. Khan, Mohammad S. 13 August 2010 (has links)
The Distinct Element Method (DEM) and Discontinuous Deformation Analysis (DDA) are the two most commonly used discrete element methods in rock mechanics. Discrete element
approaches are computationally expensive as they involve the interaction of multiple discrete bodies with continuously changing contacts. Therefore, it is very important to ensure that the method selected for the analysis is computationally efficient. In this research, a general assessment of DDA and DEM is performed from a computational efficiency perspective, and relevant enhancements to DDA are developed.
The computational speed of DDA is observed to be considerably slower than that of DEM. To identify the factors affecting the computational efficiency of DDA, fundamental aspects of DDA and DEM are compared; the comparison suggests that they differ mainly in the contact mechanics and the time integration scheme used. An in-depth evaluation of these aspects revealed that the open-close iterative procedure used in DDA, which exhibits highly nonlinear behavior, is one of the main reasons DDA slows down. To improve the computational efficiency of DDA, an alternative approach based on more realistic rock joint behavior is developed in this research. In this approach, contacts are assumed to be deformable, i.e., interpenetration of the blocks in contact is permitted. This eliminates the computationally expensive open-close iterative procedure adopted in DDA-Shi and increases its speed by up to a factor of four.
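As an illustration of the deformable-contact idea described above, the following minimal sketch converts a small interpenetration directly into normal and shear contact forces, with no open-close iteration over contact states. The stiffnesses, friction coefficient and block representation are illustrative assumptions, not the formulation implemented in the thesis.

```python
import numpy as np

def deformable_contact_force(gap, rel_tangential_disp, k_n=1e8, k_s=1e7, mu=0.6):
    """Penalty-style contact: a negative gap (interpenetration) is allowed
    and converted directly into normal/shear forces, so no open-close
    iteration over contact states is needed.  Stiffnesses k_n, k_s and the
    friction coefficient mu are illustrative values only."""
    if gap >= 0.0:                     # blocks separated: no contact force
        return 0.0, 0.0
    f_n = -k_n * gap                   # normal force proportional to overlap
    f_s = k_s * rel_tangential_disp    # trial shear force
    f_s_max = mu * f_n                 # Coulomb friction cap
    f_s = float(np.clip(f_s, -f_s_max, f_s_max))
    return f_n, f_s

# Example: 0.1 mm interpenetration, 0.05 mm relative shear displacement
print(deformable_contact_force(-1e-4, 5e-5))
```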
Several approaches have been reported for considering the deformability of the blocks in DDA. The hybrid DDA-FEM approach is one of them; although it captures block deformability quite effectively, it becomes computationally expensive for large-scale problems. An alternative, simplified uncoupled DDA-FEM approach is developed in this research. The main idea of this approach is to model the rigid-body movement and the internal deformation of each block separately. The efficiency and simplicity of the approach lie in keeping the DDA and FEM algorithms separate and solving the FEM equations individually for each block.
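The uncoupled idea can be pictured with a schematic sketch: the rigid-body motion is advanced on the DDA side, and a small, independent FEM system is then solved block by block for the internal deformation. The two-degree-of-freedom stiffness matrices and nodal loads below are invented placeholders for real element matrices.

```python
import numpy as np

def solve_block_deformation(K, f):
    """Solve the block-local FEM system K u = f for internal deformation.
    Each block has its own (small) stiffness matrix, so the systems are
    independent and can be solved one block at a time."""
    return np.linalg.solve(K, f)

# Illustrative loop: rigid-body update (DDA side) followed by an
# independent per-block FEM solve (deformation side).
blocks = [
    {"K": np.array([[2.0, -1.0], [-1.0, 2.0]]) * 1e6,   # assumed 2-DOF stiffness
     "f": np.array([150.0, -40.0])},                     # assumed nodal contact loads
    {"K": np.array([[3.0, -1.5], [-1.5, 3.0]]) * 1e6,
     "f": np.array([0.0, 220.0])},
]
for b in blocks:
    # ... rigid-body translation/rotation of the block would be updated here by DDA ...
    b["u"] = solve_block_deformation(b["K"], b["f"])     # internal deformation only
    print(b["u"])
```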
Based on a number of numerical examples presented in this dissertation, it is concluded that, from a computational efficiency standpoint, the implicit solution scheme may not be appropriate for discrete element modelling. Although implicit schemes have been used successfully for linear analyses of quasi-static problems, where inertia effects are insignificant, they do not prove advantageous for contact-type problems even in the quasi-static mode, owing to the highly nonlinear behavior of contacts.
622 |
Dynamic Load Balancing Schemes for Large-scale HLA-based Simulations. De Grande, Robson E. 26 July 2012 (has links)
Dynamic balancing of computation and communication load is vital for the execution stability and performance of distributed, parallel simulations deployed on the shared, unreliable resources of large-scale environments. High Level Architecture (HLA) based simulations can experience a decrease in performance due to imbalances that are produced initially and/or during run-time. These imbalances are generated by the dynamic load changes of distributed simulations or by unknown, non-managed background processes resulting from the non-dedication of shared resources. Because of the dynamic execution characteristics of the elements that compose distributed simulation applications, the computational load and interaction dependencies of each simulation entity change during run-time. These dynamic changes lead to an irregular load and communication distribution, which increases resource overhead and execution delays. A static partitioning of load is limited to deterministic applications and is incapable of predicting the dynamic changes caused by distributed applications or by external background processes. Given the relevance of dynamically balancing load for distributed simulations, many balancing approaches have been proposed, but they offer only sub-optimal balancing solutions: they are limited to certain simulation aspects, specific to particular applications, or unaware of HLA-based simulation characteristics. Therefore, schemes for balancing the communication and computational load during the execution of distributed simulations are devised here, adopting a hierarchical architecture. First, to enable the development of such balancing schemes, a migration technique is employed to perform reliable, low-latency simulation load transfers. Then, a centralized balancing scheme is designed; this scheme employs local and cluster monitoring mechanisms to observe the distributed load changes and identify imbalances, and it uses load reallocation policies to determine a distribution of load that minimizes imbalances. To overcome the drawbacks of this scheme, such as bottlenecks, overheads, global synchronization, and a single point of failure, a distributed redistribution algorithm is designed. Extensions of the distributed balancing scheme are also developed to improve the detection of and reaction to load imbalances. These extensions introduce communication delay detection, migration latency awareness, self-adaptation, and load oscillation prediction into the load redistribution algorithm. The balancing systems developed here successfully improved the use of shared resources and increased the performance of distributed simulations.
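As a toy illustration of the kind of reallocation policy a centralized balancer might apply (not the scheme developed in the thesis, which also accounts for communication load, migration latency and other factors), the sketch below migrates load units from the most loaded to the least loaded node until the excess over the cluster mean falls below a threshold. Node names and load values are made up.

```python
def rebalance(loads, threshold=0.15):
    """Toy centralized reallocation policy: while the most loaded node
    exceeds the cluster mean by more than `threshold`, migrate one unit of
    simulation load to the least loaded node.  Returns a migration plan as
    (source, destination) pairs."""
    plan = []
    mean = sum(loads.values()) / len(loads)
    while True:
        src = max(loads, key=loads.get)
        dst = min(loads, key=loads.get)
        if loads[src] - mean <= threshold * mean or src == dst:
            break
        loads[src] -= 1
        loads[dst] += 1
        plan.append((src, dst))
    return plan

print(rebalance({"node-a": 12, "node-b": 4, "node-c": 5}))
```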
623 |
Adapting Evolutionary Approaches for Optimization in Dynamic Environments. Younes, Abdunnaser January 2006 (has links)
Many important real-world applications that can be modelled as combinatorial optimization problems are actually dynamic in nature. However, research on dynamic optimization focuses on continuous optimization problems and rarely targets combinatorial ones. Moreover, dynamic combinatorial problems, when addressed, are typically tackled within an application context.

In this thesis, dynamic combinatorial problems are addressed collectively by adopting an evolutionary algorithmic approach. On the plus side, their ability to manipulate several solutions at a time, their robustness, and their potential for adaptability make evolutionary algorithms a good choice for solving dynamic problems. However, their tendency to converge prematurely, the difficulty of fine-tuning their search, and their lack of diversity when tracking optima that shift in dynamic environments are drawbacks in this regard.

Developing general methodologies to tackle these conflicting issues constitutes the main theme of this thesis. First, definitions and measures of algorithm performance are reviewed. Second, methods of benchmark generation are developed under a generalized framework. Finally, methods to improve the ability of evolutionary algorithms to efficiently track optima that shift due to environmental changes are investigated. These methods include adapting genetic parameters to population diversity and environmental changes, the use of multiple populations as an additional means of controlling diversity, and the incorporation of local search heuristics to fine-tune the search process efficiently.

The methodologies developed for algorithm enhancement and benchmark generation are used to build and test evolutionary models for dynamic versions of the travelling salesman problem and the flexible manufacturing system. Experimental results demonstrate that the methods are effective on both problems and hence hold great potential for other dynamic combinatorial problems as well.
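One of the ideas listed above, adapting genetic parameters to population diversity and environmental change, can be sketched as follows. This is an illustrative stand-in, not the thesis's actual operators or diversity measure: the mutation rate is boosted when the mean pairwise Hamming distance of a permutation-encoded population drops, or when a change in the environment is detected.

```python
import random

def population_diversity(pop):
    """Mean pairwise Hamming distance of permutation-encoded individuals,
    normalised to [0, 1]; a simple stand-in for a diversity measure."""
    n = len(pop)
    if n < 2:
        return 0.0
    total = sum(sum(a != b for a, b in zip(p, q)) / len(p)
                for i, p in enumerate(pop) for q in pop[i + 1:])
    return total / (n * (n - 1) / 2)

def adaptive_mutation_rate(pop, base=0.02, low_diversity=0.2, boost=5.0,
                           change_detected=False):
    """Increase mutation when diversity is low or the environment changed."""
    rate = base
    if population_diversity(pop) < low_diversity:
        rate *= boost
    if change_detected:
        rate *= boost
    return min(rate, 0.5)

pop = [random.sample(range(10), 10) for _ in range(20)]
print(adaptive_mutation_rate(pop, change_detected=True))
```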
624 |
LES modelling of non-premixed and partially premixed turbulent flames. Sadasivuni, S. K. January 2009 (has links)
A large eddy simulation (LES) model has been developed and validated for turbulent non-premixed and partially premixed combustion systems. The LES-based combustion modelling strategy has the ability to capture the detailed structure of turbulent flames and to account for the effects of radiation heat loss. The effects of radiation heat loss are modelled by employing an enthalpy-defect based non-adiabatic flamelet model (NAFM) in conjunction with a steady non-adiabatic flamelet approach. The steady laminar flamelet model (SLFM) is used with multiple flamelet solutions through the development of pre-integrated look-up tables. The performance of the non-adiabatic model is assessed against experimental measurements of turbulent CH4/H2 bluff-body-stabilized and swirl-stabilized jet flames carried out by the University of Sydney combustion group. Significant enhancements in the predictions of the mean thermal structure are observed for both bluff-body and swirl-stabilized flames when radiation heat loss is considered through the non-adiabatic flamelet model. In particular, the mass fractions of product species such as CO2 and H2O are improved by the consideration of radiation heat loss. From the Sydney University data, the HM3e flame was also investigated with the SLFM using the multiple-flamelet strategy, with a reasonable degree of success. In this work, a combustion model based on the unsteady flamelet/progress variable (UFPV) approach, which has the potential to describe both non-premixed and partially premixed combustion, has been developed and incorporated into an in-house LES code. The probability density functions (PDFs) for the reaction progress variable and the scalar dissipation rate are assumed to follow delta distributions, while the mixture fraction takes the shape of a beta PDF. The performance of the developed model in predicting the thermal structure of a partially premixed lifted turbulent jet flame in a vitiated co-flow has been evaluated. The UFPV model is found to successfully predict the flame lift-off, whereas the SLFM results in a falsely attached flame. The mean lift-off height is, however, over-predicted by the UFPV-δ function model by ~20% for the methane-based flame and under-predicted by ~50% for the hydrogen-based flame. The form of the PDF for the reaction progress variable and the inclusion of the scalar dissipation rate thus seem to have a strong influence on the predictions of the gross characteristics of the flame. Including the scalar dissipation rate in the calculations appears to be successful in predicting flame extinction and re-ignition phenomena. A beta PDF for the reaction progress variable would be a promising extension of the current simulations, allowing the flame characteristics to be predicted more accurately.
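The presumed-PDF idea referred to above can be illustrated with a minimal numerical sketch: a flamelet quantity tabulated against mixture fraction is averaged with an assumed beta PDF whose shape parameters come from the resolved mean and variance. The "flamelet" temperature profile below is a made-up placeholder, not a real chemistry table.

```python
import numpy as np
from math import gamma

def beta_pdf(z, mean, var):
    """Presumed beta PDF of mixture fraction with given mean and variance
    (requires 0 < var < mean * (1 - mean))."""
    a = mean * (mean * (1.0 - mean) / var - 1.0)
    b = (1.0 - mean) * (mean * (1.0 - mean) / var - 1.0)
    norm = gamma(a + b) / (gamma(a) * gamma(b))
    return norm * z ** (a - 1.0) * (1.0 - z) ** (b - 1.0)

def pdf_weighted_mean(flamelet_profile, z_mean, z_var, n=2000):
    """Integrate a tabulated flamelet variable against the presumed PDF."""
    z = np.linspace(1e-6, 1.0 - 1e-6, n)
    dz = z[1] - z[0]
    pdf = beta_pdf(z, z_mean, z_var)
    pdf /= pdf.sum() * dz                   # renormalise after discretisation
    return float(np.sum(flamelet_profile(z) * pdf) * dz)

def T_flamelet(z):
    """Placeholder flamelet temperature peaking near stoichiometric Z ~ 0.055."""
    return 300.0 + 1700.0 * np.exp(-((z - 0.055) / 0.05) ** 2)

print(pdf_weighted_mean(T_flamelet, z_mean=0.07, z_var=0.002))
```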
625 |
A theoretical framework for hybrid simulation in modelling complex patient pathways. Zulkepli, Jafri January 2012 (has links)
Providing care services across several departments and caregivers makes patient pathways complex, as they involve different departments, policies, professionals, regulations and more. One example of a complex patient pathway (CPP) is found in integrated care, which most of the literature relates to the integration of health and social care. The world population and the demand for care services have increased; therefore, action is needed to improve the services given to patients in order to maintain their quality of life. As the complexity arises from the differing needs of stakeholders, it creates many problems, especially where CPPs are involved. To reduce these problems, many researchers have tried several decision tools, such as Discrete Event Simulation (DES), System Dynamics (SD), Markov models and tree diagrams, as well as direct experimentation, one of the Lean Thinking techniques, in their efforts to simplify the system's complexity and provide decision support. However, these CPP models were developed using a single tool, which limits them and leaves them unable to cover all the needs and features of a CPP system: for example, they lack individual-level analysis, feedback loops, and experimentation prior to real implementation, resulting in ineffective and inefficient decision making. The researcher argues that combining the DES and SD techniques, termed hybrid simulation, would enhance the CPP model, in turn providing better decision support and reducing the problems in CPPs to a minimum. As there is no standard framework, a framework for hybrid simulation in modelling CPP systems is proposed in this research. The concern here is with the framework development rather than with the CPP model itself, as there is no standard model that can represent every type of CPP, since they differ in their regulations, policies, governance and more. The framework is developed from the literature, drawing on frameworks and models that have combined DES and SD techniques and have been applied to large systems or in the healthcare sector, since a CPP system is itself a large healthcare system. The proposed framework is divided into three phases (Conceptual, Modelling, and Models Communication), each of which is decomposed into several steps. To validate the suitability of the proposed framework in guiding the development of CPP models using hybrid simulation, an inductive research methodology is used with case studies as the research strategy. Two approaches are used to test the suitability of the framework: practical and theoretical. The practical approach involves developing a CPP model (within health and social care settings), assisted by SD and DES simulation software, based on several case studies of health and social care systems that used single modelling techniques. The theoretical approach involves applying several case studies from different care settings without developing the model. Four case studies covering different areas and care settings have been selected and applied to the framework. Based on the suitability tests, the framework is modified accordingly.
As this framework provides guidance on how to develop CPP models using hybrid simulation, it is argued that it will serve as a benchmark for researchers and academics, as well as decision and policy makers, in developing such models.
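A toy sketch of the hybrid DES/SD idea (not the proposed framework itself): an aggregate system-dynamics stock of care demand is updated with a fixed time step, and its outflow feeds individual patient events handled by a discrete-event queue. All parameter values are assumed.

```python
import heapq

# --- SD side: aggregate demand stock updated with a fixed time step ---
def sd_step(stock, inflow_rate, referral_fraction, dt=1.0):
    referrals = stock * referral_fraction * dt        # flow out of the stock
    stock += inflow_rate * dt - referrals
    return stock, referrals

# --- DES side: individual patients queued as timed events ---
events = []          # (time, patient_id)
stock, clock, pid = 50.0, 0.0, 0
for day in range(5):
    stock, referrals = sd_step(stock, inflow_rate=8.0, referral_fraction=0.1)
    for _ in range(int(referrals)):                   # SD flow feeds DES entities
        pid += 1
        heapq.heappush(events, (clock + 2.0, pid))    # assumed 2-day assessment delay
    clock += 1.0
    while events and events[0][0] <= clock:           # process due patient events
        t, patient = heapq.heappop(events)
        print(f"day {clock:.0f}: patient {patient} assessed (scheduled day {t:.0f})")
print(f"remaining demand stock: {stock:.1f}")
```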
626 |
Simulation modelling of distributed-shared memory multiprocessors. Marurngsith, Worawan January 2006 (has links)
Distributed shared memory (DSM) systems have been recognised as a compelling platform for parallel computing due to their programming advantages and scalability. DSM systems allow applications to access data in a logically shared address space by abstracting away the distinction of physical memory location. As the location of data is transparent, the sources of overhead caused by accessing remote memories are difficult to analyse. This memory locality problem has been identified as crucial to DSM performance. Many researchers have investigated the problem using simulation as a tool for conducting experiments, resulting in the progressive evolution of DSM systems. Nevertheless, both the diversity of architectural configurations and the rapid advance of DSM implementations impose constraints on simulation model design in two respects: the limited extensibility of the simulation framework, and the lack of verification applicability during a simulation run, which delays the verification process. This thesis studies simulation modelling techniques for memory locality analysis of various DSM systems implemented on top of a cluster of symmetric multiprocessors. The thesis presents a simulation technique to promote model extensibility and proposes a technique for verification applicability, called Specification-based Parameter Model Interaction (SPMI). The proposed techniques have been implemented in a new interpretation-driven simulation called DSiMCLUSTER, on top of a discrete event simulation (DES) engine known as HASE. Experiments have been conducted to determine which factors are most influential on the degree of locality and to determine the possibility of maximising the stability of performance. DSiMCLUSTER has been validated against a SunFire 15K server and achieved close agreement on cache-miss results, within ±6% on average and with a worst case of less than 15% difference. These results confirm that the techniques used in developing DSiMCLUSTER can contribute to achieving both (a) a highly extensible simulation framework that keeps up with the ongoing innovation in DSM architectures, and (b) verification applicability, resulting in an efficient framework for memory-analysis experiments on DSM architectures.
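The validation criterion quoted above can be illustrated with a small sketch; the cache-miss counts below are invented placeholders, not the actual DSiMCLUSTER or SunFire 15K measurements.

```python
def percent_difference(simulated, measured):
    """Relative difference of simulated vs. measured cache-miss counts."""
    return 100.0 * (simulated - measured) / measured

# Invented placeholder values for a handful of benchmark runs; the real
# validation compared DSiMCLUSTER against a SunFire 15K server.
runs = {"bench-a": (1.02e6, 1.00e6),
        "bench-b": (9.10e5, 9.80e5),
        "bench-c": (2.05e6, 1.95e6)}

diffs = [percent_difference(s, m) for s, m in runs.values()]
mean_abs = sum(abs(d) for d in diffs) / len(diffs)
print(f"mean |diff| = {mean_abs:.1f}%, worst = {max(abs(d) for d in diffs):.1f}%")
```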
627 |
Interactions between fine particles. Li, Fan January 2009 (has links)
Computer simulation using the Discrete Element Method (DEM) has emerged as a powerful tool for studying the behaviour of particulate systems during powder flow and compaction. The contact law between particles is the most important input to a Discrete Element simulation. However, most present simulations employ over-simplistic contact laws which cannot capture the real behaviour of particulate systems. For example, plastic yielding, material brittleness, sophisticated particle geometry, surface roughness, and particle adhesion are all vitally important factors affecting particle interactions, but they have been largely ignored in most DEM simulations. This is because it is very difficult to account for these factors in an analytical contact law, which has been the characteristic approach in DEM simulations. This thesis presents a strategy for obtaining contact laws numerically, and a comprehensive study of all these factors using the numerical approach. A numerical method, known in the literature as the Material Point Method (MPM), is selected and shown to be ideal for studying particle interactions. The method is further developed in this work to take into account all the factors listed above. For example, to study brittle failure during particle impact, Weibull's theory is incorporated into the material point method; to study the effect of particle adhesion, inter-atomic forces are borrowed from the Molecular Dynamics model and incorporated into the method. These developments themselves represent major progress in the numerical technique, enabling the method to be applied to a much wider range of problems. The focus of the thesis is, however, on the contact laws between extremely fine particles. Using the numerical technique as a tool, the entire existing theoretical framework for particle contact is re-examined. It is shown that, whilst the analytical framework struggles to capture real particle behaviour, numerical contact laws can be used in its place.
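The use of Weibull's theory mentioned above can be pictured with a small sketch in which a weakest-link failure probability decides whether a material point fails under its current tensile stress. The reference strength, Weibull modulus and volumes are illustrative assumptions, not the values or the exact failure criterion used in the thesis.

```python
import math
import random

def weibull_failure_probability(sigma, sigma_0, m, volume, v_0=1.0):
    """Weakest-link failure probability for a material point of given volume
    under tensile stress sigma, with reference strength sigma_0 (at volume
    v_0) and Weibull modulus m.  Illustrative parameter values only."""
    if sigma <= 0.0:
        return 0.0
    return 1.0 - math.exp(-(volume / v_0) * (sigma / sigma_0) ** m)

def point_fails(sigma, sigma_0=60e6, m=6.0, volume=1e-12, v_0=1e-12):
    """Stochastic failure decision for one material point per time step."""
    return random.random() < weibull_failure_probability(sigma, sigma_0, m, volume, v_0)

random.seed(0)
print(weibull_failure_probability(55e6, 60e6, 6.0, 1e-12, 1e-12))  # ~0.45
print(point_fails(55e6))
```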
628 |
Essays on long memory time series and fractional cointegration. Algarhi, Amr Saber Ibrahim January 2013 (has links)
The dissertation considers an indirect approach to the estimation of the cointegrating parameters, in the sense that the estimators are constructed jointly with estimates of other nuisance parameters. This approach was proposed by Robinson (2008), where a bivariate local Whittle estimator was developed to jointly estimate a cointegrating parameter along with the memory parameters and the phase parameters (discussed in chapter 2). The main contribution of this dissertation is to establish, following Robinson (2008), the joint estimation of the memory, cointegrating and phase parameters in stationary and non-stationary fractionally cointegrated models in a multivariate framework. To accomplish this, a general shape of the spectral density matrix, first noted in Davidson and Hashimzade (2008), is utilised to cover multivariate jointly dependent stationary long memory time series, allowing more than one cointegrating relation (discussed in chapter 3). Consequently, the notion of the extended discrete Fourier transform is adopted, based on the work of Phillips (1999), to allow the multivariate estimation to cover the non-stationary region (explained in chapter 4). Overall, the estimation methods adopted in this dissertation follow the semiparametric approach, in that the spectral density is only specified in a neighbourhood of zero frequency. The dissertation is organised into four self-contained but interconnected chapters, in addition to this introductory chapter:

• Chapter 1 discusses univariate long memory time series analysis, covering different definitions, models and estimation methods. Parametric and semiparametric estimation methods are applied to a univariate series of daily Egyptian stock returns to examine the presence of long memory properties. The results show strong and significant evidence of long memory in the Egyptian stock market, which refutes the hypothesis of market efficiency.

• Chapter 2 expands the analysis of the first chapter using the bivariate framework first introduced by Robinson (2008) for long memory time series in a stationary system. The bivariate model presents four unknown parameters, including two memory parameters, a phase parameter and a cointegration parameter, which are jointly estimated. The estimation analysis is applied to a bivariate system comprising the US and Canada inflation rates, where a linear combination of the two rates with less long memory than the two individual series is detected.

• Chapter 3 introduces a semiparametric local Whittle (LW) estimator for general multivariate stationary fractional cointegration, using the general shape of the spectral density matrix first introduced by Davidson and Hashimzade (2008). The proposed estimator is used to jointly estimate the memory parameters along with the cointegrating and phase parameters. The consistency and asymptotic normality of the proposed estimator are proved. In addition, a Monte Carlo study is conducted to examine the performance of the new estimator for different sample sizes. The multivariate local Whittle estimation analysis is applied to three relevant examples to examine the presence of fractional cointegration relationships.

• In the first three chapters the estimation procedures focus on the stationary case, where the memory parameter lies between zero and one half. Chapter 4, a natural progression from chapter 3, adjusts the estimation procedures to cover non-stationary values of the memory parameters, using the extended discrete Fourier transform and periodogram to extend the local Whittle estimation to non-stationary multivariate systems. The resulting extended local Whittle (XLW) estimator can be applied throughout the stationary and non-stationary regions, and is identical to the LW estimator of chapter 3 in the stationary region. An application to a trivariate series of US money aggregates is employed.
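For orientation, here is a minimal univariate local Whittle sketch in the spirit of the semiparametric approach described above (the dissertation develops multivariate and extended versions); the simulated fractionally integrated series and the bandwidth choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m):
    """Univariate local Whittle estimate of the memory parameter d, using the
    first m Fourier frequencies (semiparametric: only the behaviour of the
    spectrum near frequency zero is used)."""
    n = len(x)
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n           # Fourier frequencies
    w = np.fft.fft(x)[1:m + 1] / np.sqrt(2.0 * np.pi * n)
    I = np.abs(w) ** 2                                    # periodogram ordinates

    def R(d):                                             # concentrated objective
        return np.log(np.mean(lam ** (2.0 * d) * I)) - 2.0 * d * np.mean(np.log(lam))

    return minimize_scalar(R, bounds=(-0.49, 0.49), method="bounded").x

# Illustrative long-memory series: fractionally integrated noise with d = 0.3,
# generated from truncated ARFIMA(0,d,0) moving-average weights (demo only).
rng = np.random.default_rng(1)
d_true, n = 0.3, 4096
k = np.arange(1, n)
psi = np.concatenate(([1.0], np.cumprod((k - 1 + d_true) / k)))
eps = rng.standard_normal(2 * n)
x = np.array([psi @ eps[t + n - 1::-1][:n] for t in range(n)])
print(local_whittle_d(x, m=int(n ** 0.65)))               # should be near 0.3
```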
629 |
Emergence at the Fundamental Systems Level: Existence Conditions for Iterative Specifications. Zeigler, Bernard; Muzy, Alexandre 09 November 2016 (has links)
Conditions under which compositions of component systems form a well-defined system-of-systems are here formulated at a fundamental level. Statement of what defines a well-defined composition and sufficient conditions guaranteeing such a result offers insight into exemplars that can be found in special cases such as differential equation and discrete event systems. For any given global state of a composition, two requirements can be stated informally as: (1) the system can leave this state, i.e., there is at least one trajectory defined that starts from the state; and (2) the trajectory evolves over time without getting stuck at a point in time. Considered for every global state, these conditions determine whether the resultant is a well-defined system and, if so, whether it is non-deterministic or deterministic. We formulate these questions within the framework of iterative specifications for mathematical system models that are shown to be behaviorally equivalent to the Discrete Event System Specification (DEVS) formalism. This formalization supports definitions and proofs of the afore-mentioned conditions. Implications are drawn at the fundamental level of existence where the emergence of a system from an assemblage of components can be characterized. We focus on systems with feedback coupling where existence and uniqueness of solutions is problematic.
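The two informal conditions quoted above can be pictured with a toy sketch (an assumed encoding, not the paper's iterative-specification formalism): from each state there must be a defined next step, and the accumulated time advances must not converge to a finite point, which would leave the trajectory stuck in time.

```python
def check_progressiveness(time_advance, next_state, s0, horizon=100.0, max_steps=100_000):
    """Check, by bounded simulation, that a trajectory starting at s0
    (1) always has a defined next step and (2) advances past `horizon`
    without taking an unbounded number of steps (a Zeno-like symptom).
    A toy stand-in for the existence conditions discussed in the paper."""
    t, s = 0.0, s0
    for _ in range(max_steps):
        ta = time_advance(s)
        if ta is None or ta < 0.0:
            return False, f"no defined transition from state {s!r}"
        t += ta
        if t >= horizon:
            return True, f"reached t = {t:.3f}"
        s = next_state(s)
    return False, f"stuck near t = {t:.3f} after {max_steps} steps"

# Well-behaved example: constant time advance of 1.0
print(check_progressiveness(lambda s: 1.0, lambda s: s + 1, s0=0))
# Zeno-like example: time advances 1/2, 1/4, 1/8, ... accumulate below t = 1
print(check_progressiveness(lambda s: 2.0 ** (-s - 1), lambda s: s + 1, s0=0))
```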
630 |
The Discrete Logarithm Problem in Finite Fields of Small Characteristic / Das diskrete Logarithmusproblem in endlichen Körpern kleiner Charakteristik. Zumbrägel, Jens 14 March 2017 (has links) (PDF)
Computing discrete logarithms is a long-standing algorithmic problem, whose hardness forms the basis for numerous current public-key cryptosystems. In the case of finite fields of small characteristic, however, there has been tremendous progress recently, by which the complexity of the discrete logarithm problem (DLP) is considerably reduced.
This habilitation thesis on the DLP in such fields deals with two principal aspects. On one hand, we develop and investigate novel efficient algorithms for computing discrete logarithms, where the complexity analysis relies on heuristic assumptions. In particular, we show that logarithms of factor base elements can be computed in polynomial time, and we discuss practical impacts of the new methods on the security of pairing-based cryptosystems.
While a heuristic running time analysis of algorithms is common practice for concrete security estimations, this approach is insufficient from a mathematical perspective. Therefore, on the other hand, we focus on provable complexity results, for which we modify the algorithms so that any heuristics are avoided and a rigorous analysis becomes possible. We prove that for any prime field there exist infinitely many extension fields in which the DLP can be solved in quasi-polynomial time.
Despite the two aspects looking rather independent of each other, it turns out, as illustrated in this thesis, that progress on practical algorithms and record computations can lead to advances in the theoretical running-time analysis, and vice versa.
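For context, the following sketch shows a generic discrete-logarithm algorithm, baby-step giant-step, which runs in time O(√p); it is emphatically not one of the quasi-polynomial small-characteristic algorithms developed in this thesis, only a baseline illustrating the problem being solved.

```python
from math import isqrt

def baby_step_giant_step(g, h, p):
    """Solve g^x = h (mod p) for x, assuming g generates a group of order
    p - 1.  Generic O(sqrt(p)) method; the algorithms for fields of small
    characteristic are vastly faster but far more involved."""
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}      # baby steps: g^j
    factor = pow(g, -m, p)                          # g^(-m) mod p
    gamma = h
    for i in range(m):                              # giant steps: h * g^(-im)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * factor) % p
    return None

p, g = 1019, 2                                      # 2 is a primitive root mod 1019
x = 777
h = pow(g, x, p)
print(baby_step_giant_step(g, h, p))                # recovers 777
```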