211 |
A comparison of computational cognitive models: agent-based systems versus rule-based architectures. Oeltjen, Craig L. 03 1900
Approved for public release; distribution is unlimited / Increased operational costs and reductions in force size are two of the major factors driving the need for improved computer simulations within the military community. Human performance models are used in various aspects of simulation, including controlling computer-generated forces, tactical decision aids, intelligent tutoring systems and new system design. This research makes a comparison between two categories of human performance models: multi-agent systems and rule-based architectures. Each type of model has its own strengths and weaknesses, and is therefore better suited for certain applications. Complex military simulations need human performance models that take advantage of the strengths of more than one type of model. The purpose of this research is to compare the implementation and performance of these two models, and to demonstrate the need for hybrid systems that employ the best aspects of each model for a given situation. / Lieutenant Commander, United States Navy
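As a rough illustration of the contrast this abstract draws, the sketch below (hypothetical code, not taken from the thesis; the percept fields and behaviours are invented) sets a fixed rule-based controller, whose condition-action productions always map the same percept to the same action, against a simple agent that carries internal state and can change its choice as that state evolves.

    # Hypothetical sketch contrasting the two modelling styles compared in the thesis.

    def rule_based_controller(percept):
        """Fixed condition-action productions: behaviour is scripted in advance."""
        if percept["enemy_in_range"] and percept["ammo"] > 0:
            return "engage"
        if percept["enemy_in_range"]:
            return "withdraw"
        return "patrol"

    class SimpleAgent:
        """Agent keeps internal state and adapts its choice to what it has observed."""
        def __init__(self):
            self.morale = 1.0

        def act(self, percept):
            if percept["under_fire"]:
                self.morale *= 0.9              # state changes with experience
            if percept["enemy_in_range"] and self.morale > 0.5:
                return "engage"
            return "seek_cover" if percept["under_fire"] else "patrol"

    if __name__ == "__main__":
        percept = {"enemy_in_range": True, "ammo": 3, "under_fire": True}
        print(rule_based_controller(percept))                  # always the same answer for this percept
        agent = SimpleAgent()
        print([agent.act(percept) for _ in range(8)])          # answer can change as morale decays

The point of the sketch is only the structural difference: the hybrid systems the thesis argues for would combine the predictability of the first style with the adaptivity of the second.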
|
212 |
FLOW SIMULATION OF AN OUTPUT SHAFT LINE: Capacity analysis and optimization of the overall process efficiency. Åström, Hannah; Sjöholm, Paulina. January 2019
Scania CV AB is a world-leading provider of sustainable transport solutions. This includes trucks and buses, as well as an extensive offering of product-related services. This project takes place at the Södertälje plant and is carried out under the Transmissions Manufacturing department, with the output shaft line as the main focus. Due to a higher scale of product demand, the department aims to extend the line's production capacity to include both additional components and a higher production volume. The line under investigation consists of eleven serial processes. The project sets out to investigate the line's dynamics, find a theoretical maximum capacity for different product types and derive a reasonable OPE goal. Moreover, the project's final objective is to reveal where un-utilized capacity can be found. The project's results are delivered in three distinct parts. Firstly, a complex and thorough simulation model is delivered to the department alongside usage instructions. The model in its entirety is found in appendix A3. It is crucial to ensure that the model describes the real system; therefore, the second result is an extensive model verification and validation. Lastly, results on where un-utilized capacity can be found are presented. The general finding is that reducing the cycle times to the times stated in the buy-in contract, together with reducing the length (but not the frequency) of shutdowns, gives a considerable capacity enhancement. All further optimization endeavours assume that the line's cycle times have already been reduced to the purchased times. Moreover, it is possible to increase product type one's manufacturing capacity by 11.4%, product type two's capacity by 13.1% and product types three and four's capacity by 7.2%. The capacity enhancements depend on several parameters for the different product types. / Scania CV AB is a world-leading provider of sustainable transport solutions. These include trucks and buses, together with a large range of product-related services. This project was carried out at Scania's production unit in Södertälje, more specifically at Transmission Machining, with a focus on the machining line for the output shaft. Because of higher product demand, the department wishes to extend the line's capacity by manufacturing more product types and increasing the production volume. The machining line under investigation consists of eleven serial processes. The project sets out to investigate the line's dynamics, find a theoretical maximum for the different product types and present a reasonable OPE goal for them. Finally, the project aims to investigate how un-utilized capacity can be found. The project's results are presented in three parts. First, the simulation model is handed over to the department together with a thorough walkthrough of its structure and function. The model in its entirety is found in appendix A3. It is crucial to ensure that the model describes the real system; the second part of the results therefore contains an extensive verification and validation. Finally, results on where un-utilized capacity can be found are presented. The project's overall finding is that reducing the cycle times to the machines' purchased cycle times, together with reducing the length (but not the frequency) of machine stops, would increase the line's capacity considerably. All further optimization attempts described assume that the line's cycle times have already been reduced to the purchased times.
With this, the capacity for product type one can be increased by a total of 11.4%, product type two by 13.1% and product types three and four by 7.9%. The capacity increases depend on several different parameters for the different product types.
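As a hedged illustration of the capacity reasoning described above (all cycle times, failure and stop parameters below are invented, not Scania's, and the real study uses a full discrete-event simulation rather than this closed-form estimate), a serial line with no buffers can at best run at the rate of its slowest station, and that rate is set jointly by the station's cycle time and by how long its stops last:

    def station_rate(cycle_s, mtbf_min, mttr_min):
        """Effective output rate (parts/hour) of one station: ideal rate scaled by availability."""
        availability = mtbf_min / (mtbf_min + mttr_min)
        return 3600.0 / cycle_s * availability

    def line_capacity(stations):
        """A serial line with no buffers can at best run at the rate of its slowest station."""
        return min(station_rate(**s) for s in stations)

    # Invented data for an eleven-station serial line.
    base = [dict(cycle_s=55 + 2 * i, mtbf_min=240, mttr_min=15) for i in range(11)]
    print("current capacity      : %.1f parts/h" % line_capacity(base))

    # Reduce cycle times to the (hypothetical) purchased times and halve stop length,
    # keeping stop frequency unchanged, mirroring the report's two main levers.
    improved = [dict(cycle_s=s["cycle_s"] * 0.9,
                     mtbf_min=s["mtbf_min"],
                     mttr_min=s["mttr_min"] * 0.5) for s in base]
    print("with both improvements: %.1f parts/h" % line_capacity(improved))

Running the sketch shows the same qualitative effect the report finds: shortening cycle times towards the purchased times and shortening (but not eliminating) stops both raise the line's capacity.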
|
213 |
Crowd behavioural simulation via multi-agent reinforcement learning. Lim, Sheng Yan. January 2016
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2015. / Crowd simulation can be thought of as a group of entities interacting with one another. Traditionally, an animated entity would require precise scripts so that it could function in a virtual environment autonomously. Methods from previous studies on crowd simulation have been used in real-world applications, but they do not use learning agents and are therefore unable to adapt and change their behaviours. State-of-the-art crowd simulation methods include flow-based, particle-based and strategy-based models. A reinforcement learning agent could learn how to navigate, behave and interact in an environment without explicit design, and a group of reinforcement learning agents should then be able to act in a way that simulates a crowd. This thesis investigates the believability of crowd behavioural simulation via three multi-agent reinforcement learning methods: Q-learning in a multi-agent Markov decision process model, joint state-action Q-learning and a joint state-value iteration algorithm. The three learning methods are able to produce believable and realistic crowd behaviours.
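A minimal sketch of the simplest ingredient here, independent tabular Q-learning, is given below. It is not the dissertation's code; the grid, rewards and hyperparameters are invented, and each agent learns alone rather than over a joint state-action space.

    import random
    from collections import defaultdict

    # Independent tabular Q-learning agents on a small grid, each learning to reach an exit cell.
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    SIZE, EXIT = 5, (4, 4)
    ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

    def step(pos, a):
        r, c = pos[0] + a[0], pos[1] + a[1]
        nxt = (min(max(r, 0), SIZE - 1), min(max(c, 0), SIZE - 1))
        reward = 1.0 if nxt == EXIT else -0.01      # small cost per move
        return nxt, reward, nxt == EXIT

    def train_agent(episodes=500):
        Q = defaultdict(float)                      # Q[(state, action)]
        for _ in range(episodes):
            s, done = (0, 0), False
            while not done:
                if random.random() < EPS:
                    a = random.choice(ACTIONS)
                else:
                    a = max(ACTIONS, key=lambda act: Q[(s, act)])
                s2, r, done = step(s, a)
                best_next = max(Q[(s2, act)] for act in ACTIONS)
                Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
                s = s2
        return Q

    # Several independently trained agents form the "crowd".
    crowd = [train_agent() for _ in range(3)]
    print("greedy first moves:", [max(ACTIONS, key=lambda a: Q[((0, 0), a)]) for Q in crowd])

Each trained agent ends up walking towards the exit, and a group of such agents gives crowd-like motion; the joint-state methods in the dissertation are what let agents account for one another.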
|
214 |
Instrumentos de avaliação e gestão de impactos gerados por rupturas de barragens. / Instruments for assessment and management of dambreak impacts. Uemura, Sandra. 26 May 2009
Dams are structures generally built across a river, with the objectives of generating electricity, capturing water for public supply, controlling floods and enabling navigation. To meet these objectives, dams raise the water level upstream of their axis and, in some works, accumulate a significant volume of water to guarantee the regularization of the affected water body. Given the large dimensions involved, the impacts caused and the investments required, dams must always be safe, since accidents related to them, generally involving the release of the stored volumes of water, strongly affect the environment and society in general, including human lives. Thus, tools that allow these impacts to be predicted, and the subsequent organization of preventive and emergency action plans, are part of the design, construction and operation routines of these undertakings. This work presents a methodological study, applied to the Guarapiranga Dam, aimed at the management of emergencies caused by dam breaks, seeking to establish routines for impact assessment through tools capable of simulating the outflow from a hydrologic or structural accident and its development as a flood wave propagating along the downstream valley, and finally proposing a sequence of activities related to the interpretation of the simulation results that supports the preparation of preventive and emergency action plans. / Dams are structures usually built across a river with goals that include the generation of electricity, public water supply, flood control and navigation. In order to achieve these goals, dams raise the water level upstream of their axis and, in some cases, accumulate a significant volume of water to ensure the regularization of the affected water body. Due to their large dimensions, the associated impacts and the necessary investments, dams must be safe, because accidents usually release large amounts of stored water and strongly affect the environment and society, including human lives. Thus, tools that are able to predict such impacts and then construct plans for preventive and emergency actions must be part of the routine of design, construction and operation of those dams. This work presents a methodological study, applied to the Guarapiranga Dam, for the management of emergencies caused by dam breaks, with the purpose of establishing routines to evaluate the impacts, by using tools that are able to simulate the discharges related to a hydrologic or structural failure and the propagation of the resulting flood wave through the downstream valley. Finally, this work proposes a sequence of activities related to the interpretation of the simulation results that allows the construction of preventive and emergency action plans.
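A minimal sketch of the first modelling step the abstract mentions, the outflow released through a breach, is given below. The reservoir area, initial head and breach width are invented round numbers, not Guarapiranga data, and a real analysis would also grow the breach over time and route the resulting flood wave down the valley.

    # Level-pool routing of a breach outflow with a broad-crested-weir formula, Q = Cw * b * H^1.5.
    AREA_M2 = 25e6          # reservoir surface area, assumed constant (prismatic reservoir)
    H0_M = 8.0              # initial head of water above the breach invert
    BREACH_WIDTH_M = 50.0   # breach assumed fully formed at t = 0
    CW = 1.7                # broad-crested weir coefficient in SI units
    DT_S = 60.0

    h, t = H0_M, 0.0
    hydrograph = []
    while h > 0.05:
        q = CW * BREACH_WIDTH_M * h ** 1.5     # breach outflow (m^3/s)
        hydrograph.append((t, q))
        h -= q * DT_S / AREA_M2                # mass balance: reservoir level drops
        t += DT_S

    peak_t, peak_q = max(hydrograph, key=lambda p: p[1])
    print("peak outflow %.0f m^3/s at t = %.0f min; drained in %.1f h" %
          (peak_q, peak_t / 60.0, t / 3600.0))

The resulting hydrograph is the input that a downstream flood-wave propagation model, of the kind the study applies to the valley below the dam, would then route to estimate the impacted areas.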
|
215 |
Generation of the steady state for Markov chains using regenerative simulation. January 1993
by Yuk-ka Chung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 73-74). / Chapter Chapter 1 --- Introduction --- p.1 / Chapter Chapter 2 --- Regenerative Simulation --- p.5 / Chapter § 2.1 --- Discrete time discrete state space Markov chain --- p.5 / Chapter § 2.2 --- Discrete time continuous state space Markov chain --- p.8 / Chapter Chapter 3 --- Estimation --- p.14 / Chapter § 3.1 --- Ratio estimators --- p.14 / Chapter § 3.2 --- General method for generation of steady states from the estimated stationary distribution --- p.17 / Chapter § 3.3 --- Bootstrap method --- p.22 / Chapter § 3.4 --- A new approach: the scoring method --- p.26 / Chapter § 3.4.1 --- G(0) method --- p.29 / Chapter § 3.4.2 --- G(1) method --- p.31 / Chapter Chapter 4 --- Bias of the Scoring Sampling Algorithm --- p.34 / Chapter § 4.1 --- General form --- p.34 / Chapter § 4.2 --- Bias of G(0) estimator --- p.36 / Chapter § 4.3 --- Bias of G(l) estimator --- p.43 / Chapter § 4.4 --- Estimation of bounds for bias: stopping criterion for simulation --- p.51 / Chapter Chapter 5 --- Simulation Study --- p.54 / Chapter Chapter 6 --- Discussion --- p.70 / References --- p.73
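A minimal sketch of the regenerative idea behind this thesis (a toy three-state chain, not one of the thesis examples) is given below: the simulated path is cut into independent, identically distributed cycles at returns to a fixed regeneration state, and a steady-state mean is estimated with a ratio estimator, the expected cycle sum of the function divided by the expected cycle length.

    import random

    P = {0: [(0, 0.5), (1, 0.5)],
         1: [(0, 0.3), (1, 0.2), (2, 0.5)],
         2: [(0, 0.4), (2, 0.6)]}
    f = {0: 0.0, 1: 1.0, 2: 2.0}               # we estimate the stationary mean of f

    def next_state(s):
        u, acc = random.random(), 0.0
        for t, p in P[s]:
            acc += p
            if u <= acc:
                return t
        return P[s][-1][0]

    def simulate_cycles(n_cycles, regen_state=0):
        sums, lengths = [], []
        s = regen_state
        for _ in range(n_cycles):
            total, length = 0.0, 0
            while True:
                total += f[s]
                length += 1
                s = next_state(s)
                if s == regen_state:           # cycle ends when the chain regenerates
                    break
            sums.append(total)
            lengths.append(length)
        return sums, lengths

    sums, lengths = simulate_cycles(20000)
    ratio = sum(sums) / sum(lengths)           # ratio estimator of the steady-state mean
    print("estimated stationary mean of f: %.3f" % ratio)   # exact value is about 0.91 here

The thesis goes further than this sketch: it studies how to generate observations from the estimated stationary distribution itself, via bootstrap and scoring methods, and analyses the bias of those procedures.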
|
216 |
Computational study of type II supernova explosion. / 第二類超新星爆發之電算模擬硏究 / Di er lei chao xin xing bao fa zhi dian suan mo ni yan jiu. January 2000
Lee Tak Wah = 第二類超新星爆發之電算模擬硏究 / 李德華. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaf 78). / Text in English; abstracts in English and Chinese. / Lee Tak Wah = Di er lei chao xin xing bao fa zhi dian suan mo ni yan jiu / Li Dehua. / Abstract --- p.i / Acknowledgments --- p.iii / Contents --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- A brief history of supernova --- p.1 / Chapter 1.2 --- Simulation of type II supernova --- p.3 / Chapter 2 --- Hydrodynamics --- p.6 / Chapter 2.1 --- Lagrangian hydrodynamics --- p.6 / Chapter 2.2 --- Finite difference representation --- p.7 / Chapter 2.3 --- Shock handling --- p.10 / Chapter 2.4 --- Stability and gravitation --- p.11 / Chapter 2.5 --- Test of the hydrodynamic code --- p.12 / Chapter 3 --- Equation of state --- p.17 / Chapter 3.1 --- Electrons and photons --- p.17 / Chapter 3.2 --- Nuclear equation of state --- p.20 / Chapter 3.2.1 --- Bulk energy --- p.21 / Chapter 3.2.2 --- Surface and Coulomb energy --- p.22 / Chapter 3.2.3 --- Nuclei to bubbles phase --- p.23 / Chapter 3.2.4 --- Translational energy --- p.24 / Chapter 3.2.5 --- Nucleons outside nuclei and α particles --- p.24 / Chapter 3.2.6 --- Nuclear statistical equilibrium --- p.25 / Chapter 3.3 --- Results along the isentropes --- p.26 / Chapter 4 --- Electrons and neutrinos behaviour --- p.40 / Chapter 4.1 --- Electron capture --- p.40 / Chapter 4.1.1 --- Electron capture on heavy nuclei --- p.41 / Chapter 4.1.2 --- Electron capture on free protons --- p.42 / Chapter 4.2 --- Neutrino leakage scheme --- p.43 / Chapter 5 --- Core collapse simulation --- p.45 / Chapter 5.1 --- Initial models --- p.45 / Chapter 5.2 --- General features of core collapse --- p.52 / Chapter 5.3 --- After the core bounce --- p.63 / Chapter 6 --- Discussion --- p.72 / Chapter 6.1 --- Conclusion --- p.72 / Chapter 6.2 --- Further development --- p.73 / Chapter A --- Artificial viscosity --- p.75 / Bibliography --- p.78
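The hydrodynamic scheme listed in Chapter 2 can be illustrated with a short, self-contained sketch (not the thesis code; ideal-gas equation of state, no gravity or neutrino physics, and invented shock-tube initial data) of a 1D Lagrangian update with von Neumann-Richtmyer artificial viscosity, the shock-handling device referred to in Chapter 2.3 and Appendix A:

    import numpy as np

    GAMMA, CQ = 1.4, 2.0
    N = 100                                    # number of zones
    x = np.linspace(0.0, 1.0, N + 1)           # node positions
    u = np.zeros(N + 1)                        # node velocities
    rho = np.where(np.arange(N) < N // 2, 1.0, 0.125)   # Sod-like initial state
    p = np.where(np.arange(N) < N // 2, 1.0, 0.1)
    e = p / ((GAMMA - 1.0) * rho)              # specific internal energy
    m = rho * np.diff(x)                       # fixed zone masses (Lagrangian grid)

    def viscosity(rho, du):
        """q_j = Cq^2 * rho_j * (du_j)^2 in compressing zones, zero otherwise."""
        return np.where(du < 0.0, CQ**2 * rho * du**2, 0.0)

    dt = 1e-3
    for _ in range(200):
        q = viscosity(rho, np.diff(u))
        ptot = p + q
        # momentum equation at interior nodes; boundary nodes held fixed (rigid walls)
        u[1:-1] -= dt * (ptot[1:] - ptot[:-1]) / (0.5 * (m[1:] + m[:-1]))
        x += dt * u
        rho_new = m / np.diff(x)
        e -= ptot * (1.0 / rho_new - 1.0 / rho)   # first law: de = -(p + q) d(1/rho)
        rho = rho_new
        p = (GAMMA - 1.0) * rho * e

    print("shock has formed: max artificial viscosity = %.3f" % q.max())

In the thesis the same Lagrangian framework is coupled to a nuclear equation of state, self-gravity and a neutrino leakage scheme, which this toy sketch deliberately omits.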
|
217 |
A Building Evaluation Technique for Fire Department Suppression. Till, Robert. 20 December 2000
"Building design and site features have an influence on helping or hindering fire fighting operations. Traditional studies relating to building performance evaluation for fire department operations do not address the influence of building site and architectural design on local fire department suppression techniques. These studies also do not relate fire fighting analysis to anticipated fire size. The goal of this dissertation is to develop an analytical procedure by which the size of a specified design fire can be predicted for the time at which fire fighting attack water application is likely to occur. The delays encountered due to building configuration and specified design fire conditions are incorporated in the analysis. Discrete Event Simulation is used to compute time durations for fire fighting operations. The results of this dissertation may be used as a stand alone technical analysis for any office building or as a part of a more complete building performance evaluation. "
|
218 |
An Affordable Portable Obstetric Ultrasound Simulator for Synchronous and Asynchronous Scan Training. Liu, Li. 13 January 2016
The increasing use of Point of Care (POC) ultrasound presents a challenge in providing efficient training to new POC ultrasound users. In response to this need, we have developed an affordable, compact, laptop-based obstetric ultrasound training simulator. It offers freehand ultrasound scanning on an abdomen-sized scan surface with a 5-degree-of-freedom sham transducer and utilizes 3D ultrasound image volumes as training material. On the simulator user interface is rendered a virtual torso, whose body surface models the abdomen of a particular pregnant scan subject. A virtual transducer scans the virtual torso by following the sham transducer movements on the scan surface. The obstetric ultrasound training is self-paced and guided by the simulator using a set of tasks, which are focused on three broad areas, referred to as modules: 1) medical ultrasound basics, 2) orientation to obstetric space, and 3) fetal biometry. A learner completes the scan training through the following three steps: (i) watching demonstration videos, (ii) practicing scan skills by sequentially completing the tasks in Modules 2 and 3, with scan evaluation feedback and help functions available, and (iii) completing a final scan exercise on new image volumes to assess the acquired competency. After each training task has been completed, the simulator evaluates whether the task has been carried out correctly or not, by comparing anatomical landmarks identified and/or measured by the learner to reference landmark bounds created by algorithms or pre-inserted by experienced sonographers. Based on the simulator, an ultrasound E-training system has been developed for medical practitioners for whom ultrasound training is not accessible at the local level. The system, composed of a dedicated server and multiple networked simulators, provides synchronous and asynchronous training modes, and is able to operate at a very low bit rate. The synchronous (or group-learning) mode allows all training participants to observe the same 2D image in real time, such as a demonstration by an instructor or the scanning performed by a chosen learner. The synchronization of 2D images on the different simulators is achieved by directly transmitting the position and orientation of the sham transducer, rather than the ultrasound image, and results in system performance that is independent of network bandwidth. The asynchronous (or self-learning) mode is described in the previous paragraph. However, the E-training system allows all training participants to stay networked and communicate with each other via a text channel. To verify the simulator performance and training efficacy, we conducted several performance experiments and clinical evaluations. The performance experiment results indicated that the simulator was able to generate more than 30 2D ultrasound images per second with acceptable image quality on medium-priced computers. In our initial experiment investigating the simulator's training capability and feasibility, three experienced sonographers individually scanned two image volumes on the simulator. They agreed that the simulated images and the scan experience were adequately realistic for ultrasound training, and that the training procedure followed standard obstetric ultrasound protocol. They further noted that the simulator had the potential to become a good supplemental training tool for medical students and resident doctors.
A clinical study investigating the simulator's training efficacy was integrated into the clerkship program of the Department of Obstetrics and Gynecology, University of Massachusetts Memorial Medical Center. A total of 24 third-year medical students were recruited, and each of them was directed to scan six image volumes on the simulator in two 2.5-hour sessions. The study results showed that the time needed to complete the training tasks successfully decreased significantly as the training progressed. A post-training survey answered by the students found that they considered the simulator-based training useful and suitable for medical students and resident doctors. The experiment to validate the performance of the E-training system showed that the average transmission bit rate was approximately 3-4 kB/s; the data loss was less than 1% and no loss of 2D images was visually detected. The results also showed that the 2D images on all networked simulators could be considered synchronous even when the communication was inter-continental.
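A minimal sketch of why transmitting the transducer pose keeps the bit rate so low is given below. The record layout, field names and rates are assumptions for illustration, not the system's actual protocol; the idea is only that each networked simulator re-slices its own local 3D image volume from the received pose, so the 2D images themselves never cross the network.

    import struct

    def encode_pose(x_mm, y_mm, rot_deg, tilt_deg, roll_deg, t_ms):
        """Pack a 5-DOF sham-transducer pose plus a timestamp into a fixed 24-byte record."""
        return struct.pack("<5fI", x_mm, y_mm, rot_deg, tilt_deg, roll_deg, t_ms)

    def decode_pose(payload):
        x, y, rot, tilt, roll, t = struct.unpack("<5fI", payload)
        return {"x_mm": x, "y_mm": y, "rot_deg": rot, "tilt_deg": tilt, "roll_deg": roll, "t_ms": t}

    RATE_HZ = 30                               # one pose per rendered frame
    record = encode_pose(102.5, 44.0, 15.0, -5.0, 2.0, 123456)
    print("bytes per pose record:", len(record))
    print("pose stream          : %.1f kB/s" % (len(record) * RATE_HZ / 1024))
    # Streaming the rendered frames instead (e.g. 640x480 8-bit images at 30 fps) would need
    # roughly 8.8 MB/s before compression:
    print("raw image stream     : %.1f MB/s" % (640 * 480 * RATE_HZ / 1024 / 1024))

Even with protocol overhead on top of a record like this, the pose stream stays in the low kB/s range reported for the E-training system, which is why its performance is insensitive to network bandwidth.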
|
219 |
Simulation results of a sequential fixed-width confidence interval for a function of parameters. Paik, Chang Soo. January 2010
Photocopy of typescript. / Digitized by Kansas Correctional Industries
|
220 |
Efficient cross-architecture hardware virtualisation. Spink, Thomas. January 2017
Hardware virtualisation is the provision of an isolated virtual environment that represents real physical hardware. It enables operating systems, or other system-level software (the guest), to run unmodified in a “container” (the virtual machine) that is isolated from the real machine (the host). There are many use-cases for hardware virtualisation that span a wide range of end-users. For example, home-users wanting to run multiple operating systems side-by-side (such as running a Windows® operating system inside an OS X environment) will use virtualisation to accomplish this. In research and development environments, developers building experimental software and hardware want to prototype their designs quickly, and so will virtualise the platform they are targeting to isolate it from their development workstation. Large-scale computing environments employ virtualisation to consolidate hardware, enforce application isolation, migrate existing servers or provision new servers. However, the majority of these use-cases call for same-architecture virtualisation, where the architecture of the guest and the host machines match, a situation that can be accelerated by the hardware-assisted virtualisation extensions present on modern processors. But there is significant interest in virtualising the hardware of different architectures on a host machine, especially in the architectural research and development worlds. Typically, the instruction set architecture of a guest platform will be different to the host machine, e.g. an ARM guest on an x86 host will use an ARM instruction set, whereas the host will be using the x86 instruction set. Therefore, to enable this cross-architecture virtualisation, each guest instruction must be emulated by the host CPU, a potentially costly operation. This thesis presents a range of techniques for accelerating this instruction emulation, improving over a state-of-the-art instruction set simulator by 2.64x. But emulation of the guest platform's instruction set is not enough for full hardware virtualisation. In fact, this is just one challenge in a range of issues that must be considered. Specifically, another challenge is efficiently handling the way external interrupts are managed by the virtualisation system. This thesis shows that when employing efficient instruction emulation techniques, it is not feasible to arbitrarily divert control-flow without consideration being given to the state of the emulated processor. Furthermore, it is shown that it is possible for the virtualisation environment to behave incorrectly if particular care is not given to the point at which control-flow is allowed to diverge. To solve this, a technique is developed that maintains efficient instruction emulation and correctly handles external interrupt sources. Finally, modern processors have built-in support for hardware virtualisation in the form of instruction set extensions that enable the creation of an abstract computing environment, indistinguishable from real hardware. These extensions enable guest operating systems to run directly on the physical processor, with minimal supervision from a hypervisor. However, these extensions are geared towards same-architecture virtualisation, and as such are not immediately well-suited for cross-architecture virtualisation.
This thesis presents a technique for exploiting these existing extensions and using them in a cross-architecture virtualisation setting, improving the performance of a novel cross-architecture virtualisation hypervisor over the state of the art by 2.5x.
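A minimal sketch of the interrupt-handling point made in the abstract is given below (an invented toy guest ISA, not the thesis's binary translation engine): an external interrupt is only latched when it arrives and is acted upon at an instruction boundary, where the emulated register state is architecturally consistent, rather than diverting control flow arbitrarily in the middle of emulating an instruction.

    import threading

    class EmulatedCPU:
        def __init__(self, program):
            self.regs = {"r0": 0, "r1": 0}
            self.pc = 0
            self.program = program
            self.pending_irq = threading.Event()   # set asynchronously by device models

        def raise_irq(self):
            self.pending_irq.set()                 # safe from any thread: just latch it

        def dispatch_irq(self):
            self.regs["saved_pc"] = self.pc        # state is consistent at this point
            self.pc = 100                          # jump to a (hypothetical) handler address
            self.pending_irq.clear()

        def step(self):
            op, *args = self.program[self.pc]
            if op == "addi":
                reg, imm = args
                self.regs[reg] += imm              # partial state changes happen in here...
            elif op == "jmp":
                self.pc = args[0] - 1
            self.pc += 1                           # ...and are complete by the end of step()

        def run(self, max_steps=10):
            for _ in range(max_steps):
                if self.pc >= len(self.program):
                    break
                self.step()
                if self.pending_irq.is_set():      # check only at instruction boundaries
                    self.dispatch_irq()
                    break

    cpu = EmulatedCPU([("addi", "r0", 1), ("addi", "r1", 2), ("addi", "r0", 3)])
    cpu.raise_irq()                                # interrupt arrives while the guest is running
    cpu.run()
    print(cpu.regs, "resume at pc", cpu.regs["saved_pc"])

In an efficient emulator the boundary check is amortised, for example once per translated block rather than per instruction, which is exactly the kind of trade-off between speed and interrupt latency the thesis addresses.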
|