  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Chaos and adaptation in duopolistic competition

Whitby, Simon Mark January 1999 (has links)
No description available.
2

Aplikace diskrétní simulace v oblasti podpory projektování dopravníkové techniky / Application of discrete simulation in support of the conveyor technology design

Prokop, Aleš January 2009 (has links)
This diploma thesis analyses the current state of the paint shops M11A and M11B in Mladá Boleslav and describes the subsequent creation of a universal simulation model of the paint shop M11A at Škoda Auto a.s., which is used to verify whether daily production targets can be met under different input parameters. The model of the paint shop M11A is created in SimPro, a discrete-event simulation system, and its input parameters are defined in Excel. Experiments with the model make a detailed analysis of the simulation results possible.
3

Odvození vhodných informací o surface shader a displacement map ze simulací eroze terénu / Deriving suitable surface shader and displacement map information from terrain erosion simulations

Lanza, Dario January 2020 (has links)
Realistic models of landscapes are frequently needed for 3D renderings, VFX work, or video games. However, modelling landscapes can be a complicated and labour-intensive task, and for this reason many algorithms have been proposed to automate the process. Among the many possible ways to create a synthetic landscape, the most common is to simulate the various types of erosion (e.g. erosion caused by glaciers and rivers) that create real eroded landscapes, like the Grand Canyon. Many solutions have been published to simulate such terrain erosion processes in computer graphics. However, the authors usually focus only on recreating a landscape at the geometry level and ignore the shading level. But surface colours and textures that match the coarse geometric features created by the erosion simulation are also essential ingredients for a believable result. And obtaining detailed surface textures by running a simulation able to capture all the micro-details involved is usually technically infeasible due to the complexity involved. The method that we propose attempts to get around this barrier by applying suitable detail shaders to the results of a coarse-grid erosion simulation. Specifically, we will work with a dictionary of pre-generated shaders for landscape appearance: these will be both "plain" colour...
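The "dictionary of pre-generated shaders" idea in this abstract can be illustrated with a minimal sketch: given the coarse per-cell quantities an erosion simulation typically outputs (slope, deposited sediment depth), look up a detail shader from a small table. The shader names and thresholds below are invented for illustration, not taken from the thesis.

```python
def pick_detail_shader(slope_deg, sediment_depth_m):
    """Look up a detail shader for one terrain cell from a small
    dictionary, based on coarse erosion-simulation outputs.
    Thresholds and shader names are illustrative assumptions."""
    if sediment_depth_m > 0.5:
        return "sediment"          # thick deposits hide the bedrock
    if slope_deg > 45:
        return "cliff_rock"        # steep faces keep bare rock exposed
    return "weathered_rock"        # gentle, thinly covered slopes
```

A real pipeline would evaluate the chosen shader per shading sample rather than per cell, but the lookup structure is the same.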
4

Investigating Attacks on Industrial Control Systems Using Deterministic Replay Simulation

Gregory Walkup (6623090) 10 June 2019 (has links)
From factories to power grids, industrial systems are increasingly being digitally controlled and networked. While networking these systems together improves their efficiency and convenience, it also opens them up to attack by malicious actors. When these attacks occur, forensic investigators need to quickly be able to determine what was compromised and which corrective actions should be taken. In this thesis, a method is proposed for investigating attacks on industrial control systems by simulating the logged inputs of the system over time using a model constructed from the control programs that make up the system. When evaluated, this led to the detection of attacks which perturbed the normal operation of the system by comparing the simulated output to the actual output. It also allowed for dependency tracing between the inputs and outputs of the system, so that attacks could be traced from their unwanted effects to their source and vice-versa. This method can thus greatly aid investigators in recovering the complete attack story using only logs of inputs and outputs to an industrial control system.
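The replay-and-compare idea this abstract describes can be sketched in a few lines: re-simulate the logged inputs through a model of the control logic and flag timesteps where the simulated output diverges from the logged output. The toy tank-level controller, the drain rate, and the tolerance below are illustrative assumptions, not the thesis's model.

```python
def replay(inputs, initial_level=0.0):
    """Re-simulate a toy tank controller: each step adds the logged
    inflow and, if the valve command is open, drains up to 2 units."""
    level, outputs = initial_level, []
    for inflow, valve_open in inputs:
        level += inflow
        if valve_open:
            level -= min(level, 2.0)
        outputs.append(level)
    return outputs

def find_divergences(inputs, logged_outputs, tol=0.5):
    """Return indices where the replayed output disagrees with the log
    beyond a tolerance -- candidate points of attack or tampering."""
    simulated = replay(inputs)
    return [i for i, (s, a) in enumerate(zip(simulated, logged_outputs))
            if abs(s - a) > tol]
```

The same comparison generalizes to any deterministic control program: the model consumes only logged inputs, so an attack that perturbed the real outputs shows up as a simulated-versus-actual mismatch.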
5

From genotypes to phenotypes and back again: modelling the interaction between individual behaviour and evolution

Watson, J. R. Unknown Date (has links)
No description available.
6

Software-centric and interaction-oriented system-on-chip verification.

Xu, Xiao Xi January 2009 (has links)
As the complexity of very-large-scale integrated circuits (VLSI) soars, the complexity of verifying them increases even faster. Design verification has become the biggest bottleneck in VLSI design, consuming around 70% of the effort and time in a typical design cycle. The problem is even more severe as the system-on-chip (SoC) design paradigm gains popularity. Unfortunately, the development of verification techniques has not kept up with the growth of design capability, and is being left further behind in the SoC era. In recent years, a new generation of hardware modelling languages, alongside best practices for using them, has emerged and evolved in an attempt to productively build an intelligent stimulation-observation environment referred to as the test-bench. Ironically, as test-benches become more powerful and sophisticated under these best practices, known as verification methodologies, the overall verification approaches of today are still described as ad hoc and experimental and are in great need of a methodological breakthrough. Our research was carried out to seek this methodological breakthrough, and this thesis presents the outcome: a novel and holistic methodology that offers an opportunity to address the SoC verification problems. Furthermore, our methodology is completely independent of the underlying simulation technologies and could therefore extend its applicability to future VLSI designs. Our methodology presents two ideas. (a) We propose that system-level verification should resort to the SoC-native languages rather than the test-bench construction languages; the software native to the SoC should take more critical responsibilities than the test-benches.
(b) We challenge the fundamental assumption that “objects-under-test” and “tests” are distinct entities; instead, they should be understood as one type of entity – interactions; interactions, together with the interference between interactions, i.e., parallelism and resource competition, should be treated as the focus of system-level verification. These two ideas, namely software-centric verification and interaction-oriented verification, have yielded practical techniques. This thesis elaborates on these techniques, including the transfer-resource-graph based test-generation method targeting parallelism, coverage measures of concurrency completeness using Petri nets, the automation of test-programs that can execute smartly in an event-driven manner, and a software observation mechanism that gives insights into system-level behaviours. / Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2009
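The interaction-oriented idea above centres on resource competition between interactions. A minimal sketch (not the thesis's transfer-resource-graph method) of the first step such a test generator needs: given interactions annotated with the resources they use, find the pairs that compete for a shared resource and are therefore worth scheduling concurrently. The interaction and resource names are invented for illustration.

```python
def conflicting_pairs(interactions):
    """interactions: dict mapping an interaction name to the set of
    shared resources it uses (buses, caches, IRQ lines, ...).
    Return the pairs that compete for at least one resource -- the
    cases an interaction-oriented test would exercise in parallel."""
    names = sorted(interactions)
    return [(a, b)
            for i, a in enumerate(names)
            for b in names[i + 1:]
            if interactions[a] & interactions[b]]  # non-empty overlap
```

A full generator would then order these conflicting interactions in test programs so that the overlapping resource accesses actually coincide at run time.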
7

AGENT-BASED MODELING TO ASSESS THE EFFECTIVENESS OF RUN HIDE FIGHT

Jae Yong Lee (6618806) 15 May 2019 (has links)
The 1999 Columbine High School shooting was a bold reminder of the importance of active shooter preparedness for first responder communities and the general public. Since Columbine, preparedness for active shooter incidents (ASIs) has been proactively pursued in both the public and private sectors. Currently, the RUN.HIDE.FIGHT.® (RHF) response for unarmed individuals is implemented as part of emergency response plans throughout the United States. Despite RHF's nationwide implementation, there is a lack of literature supporting its effectiveness at lowering casualty rates during ASIs.

This thesis examined casualty differences under RHF and varying shooter discharge intervals by recreating the incidents that occurred in the library during the 1999 Columbine High School shooting with AnyLogic's agent-based modeling capabilities. Recreating an ASI in a virtual environment naturally removes the physiological and psychological implications for participants. Additionally, the flexibility of agent-based modeling allows validating the model against historical events and then running various what-if scenarios. This thesis examines the effectiveness of RHF by comparing the output data from the actual event to models with RHF implemented.
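The comparison this abstract describes can be caricatured in a few lines: agents follow either an RHF-style policy or a "freeze" baseline, and the shooter's discharge interval sets how many times each agent is exposed. Everything below — the per-discharge hit probabilities, the 60-second window, the two policies — is invented for illustration and is in no way the AnyLogic model.

```python
import random

def simulate(n_agents, policy, discharge_interval_s, seed=0):
    """Count casualties among n_agents under a response policy.
    Per-discharge hit probabilities are illustrative assumptions:
    RHF agents move/shelter, freeze agents stay exposed."""
    rng = random.Random(seed)  # fixed seed => reproducible comparison
    hit_prob = {"rhf": 0.05, "freeze": 0.25}[policy]
    discharges = 60 // discharge_interval_s  # shots in a 60 s window
    casualties = 0
    for _ in range(n_agents):
        # Draw the full stream (no short-circuit) so that runs with the
        # same seed see identical random draws for either policy.
        draws = [rng.random() for _ in range(discharges)]
        if any(d < hit_prob for d in draws):
            casualties += 1
    return casualties
```

With a shared seed, a draw that misses under the freeze threshold also misses under the lower RHF threshold, so the RHF run can never produce more casualties — a toy version of the casualty-difference comparison.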
8

Parametrization of Crop Models Using UAS Captured Data

Bilal Jamal Abughali (11851874) 17 December 2021 (has links)
Calibration of crop models is an expensive and time-intensive procedure, which is essential to accurately predict possible crop yields under changing climate conditions. One solution is the use of unmanned aircraft systems (UAS) carrying Red Green Blue (RGB) composite and multispectral sensors, which have the potential to measure in-field biomass and yield in a cost- and time-effective manner. The objective of this project was to develop a relationship between remotely sensed data and crop indices, such as biomass, to improve the ability to parametrize crop models for local conditions, which in turn could improve the quantification of the effect of hydrological extremes on predicted yield. An experiment consisting of 750 plots (350 varieties) was planted in 2018, and a subset of 18 plots (9 varieties) was planted in 2019. In-situ above-ground biomass along with multispectral and RGB imagery was collected for both experiments throughout the growing season. The imagery was processed through a custom software pipeline to produce spectrally corrected imagery of individual plots. A model was fit between spectral data and sampled biomass, resulting in an R-square of 0.68 and an RMSE of 160 g when the model was used to estimate biomass across multiple flight dates. The VIC-CropSyst model, a coupled hydrological and agricultural system model, was used to simulate crop biomass and yield for multiple years at the experiment location. Soybean growth was parametrized for the location using CropSyst's Crop Calibrator tool. Biomass values generated from UAS imagery, along with the in-situ collected biomass values, were used separately to parametrize soybean simulations in CropSyst, resulting in very similar parameter sets that were distinct from the default parameter values.

The parametrized crop files, along with the default files, were used separately to run the VIC-CropSyst model, and results were evaluated by comparing simulated and observed values of yield and biomass. Both parametrized crop files (using in-situ samples and UAS imagery) produced approximately identical results, with a maximum difference of 0.03 T/Ha for any one year, compared to a base value of 3.6 T/Ha, over the 12-year period for which the simulation was run. The parametrized runs produced yield estimates closer to in-situ measured yield than unparametrized runs for both bulk varieties and the run experiments, with the exception of 2011, which was a flooding year. The parametrized simulations consistently produced simulated yields higher than the measured bulk variety yields, whereas the default parameters consistently produced lower yields. Biomass was only assessed for 2019, and the results indicate that the biomass after parametrization is lower than the default, which is attributed to the radiation use efficiency parameter being lower in the parametrized files (2.25 g/MJ versus the default 2.5 g/MJ). The improved accuracy of yield prediction is evidence that the UAS-based methodology is a suitable substitute for the more labor-intensive in-situ sampling of biomass for soybean studies under similar environmental conditions.
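The core statistical step in this abstract — fitting a model between spectral data and sampled biomass, then estimating biomass for other flights — is ordinary least squares. A minimal sketch with one predictor (a generic vegetation index); the index values and biomass numbers in the test are made up, and the thesis's actual model and features may differ.

```python
def fit_biomass_model(index_vals, biomass_g):
    """Fit biomass ~ a * index + b by ordinary least squares
    on paired (vegetation index, sampled biomass) observations."""
    n = len(index_vals)
    mx = sum(index_vals) / n
    my = sum(biomass_g) / n
    sxx = sum((x - mx) ** 2 for x in index_vals)
    sxy = sum((x - mx) * (y - my) for x, y in zip(index_vals, biomass_g))
    a = sxy / sxx            # slope: grams of biomass per unit index
    b = my - a * mx          # intercept
    return a, b

def estimate_biomass(index_val, a, b):
    """Estimate biomass (g) for a plot from its vegetation index."""
    return a * index_val + b
```

In practice one would fit on the spectrally corrected per-plot indices from the calibration flights and report R-square/RMSE on held-out flight dates, as the abstract describes.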
9

SIMULATION ANALYSIS OF END-AROUND TAXIWAY OPERATIONS

Yilin Feng (9159608) 23 July 2020 (has links)
Runway and taxiway configuration affects airport capacity and safety, and airline taxiing time and fuel consumption. In this study, a discrete-event stochastic simulation model is created to explore the impact of four different runway and taxiway choices at a fictional airport with parallel runways that have End-Around Taxiways (EATs) at each end. Scenario 1 represents the conventional runway and taxiway choices used in parallel runway systems, while Scenarios 2, 3, and 4 mimic three new choices that become possible with the use of the EAT. Three designed experiments are used to explore the influence of the four scenarios in terms of taxi time, fuel consumption, and number of runway crossings during high-traffic periods, as well as the ability to cope with increases in load level.

Some main findings are: 1) using the outboard runway to land and the EAT as the taxi-in path yields the shortest average taxi-out time, while the average taxi-in time is similar to or longer than that of the conventional choice; 2) if arrival aircraft are allowed to land over an active EAT, using the outboard runway to take off and the EAT as the taxi-out path shows advantages in both the average taxi-in time and the average taxi-out time; 3) if the EAT is operated under current FAA regulation, using the outboard runway to take off and the EAT as the taxi-out path could still show advantages in the average taxi-in time, while the average taxi-out time is the longest during high-arrival periods; 4) the average fuel consumption results indicate trends similar to the average taxi time results; 5) using the EATs could either eliminate runway crossings or reduce their number significantly; 6) the taxi times with the use of EATs are more stable against increases in load level in comparison with the conventional choice.

Safety and human factor issues related to allowing arrival aircraft to land over an active EAT are discussed, as well as some future research topics. This study may encourage airport operators and researchers to explore how to make full use of existing EATs. This study, along with future cost-benefit analyses based on the results of this research, would be a valuable reference for airports that consider constructing EATs in the future.
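The basic trade-off the scenarios above explore can be reduced to a deterministic toy: a conventional route is shorter but must hold for each active-runway crossing, while an EAT route is longer but crossing-free. The times below are invented constants for illustration, not values from the study (which uses a discrete-event stochastic model).

```python
def taxi_time(base_min, crossings, wait_per_crossing_min):
    """Total taxi time: unimpeded taxi time plus holding time
    accumulated at each runway crossing."""
    return base_min + crossings * wait_per_crossing_min

def better_route(conv_base_min, eat_base_min, crossings, wait_min):
    """Compare a conventional route (with crossings) against an
    EAT route (longer base time, zero crossings)."""
    conv = taxi_time(conv_base_min, crossings, wait_min)
    eat = taxi_time(eat_base_min, 0, wait_min)
    return ("EAT", eat) if eat < conv else ("conventional", conv)
```

Even this caricature reproduces the qualitative finding: as crossing delays grow during high-traffic periods, the EAT route wins despite its longer path.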
10

COMPUTATIONAL MODELING OF A SCALABLE HUMAN BODY AND DEVELOPMENT OF A HELMET TESTING DIGITAL TWIN

Sean Bucherl (12463827) 26 April 2022 (has links)
Human body models (HBMs) have been present in the automotive industry for simulating automotive-related injury since the turn of the century, and have in recent years found a place in the prediction and assessment of soldier and sports-related injury. One issue is the lack of models that lie outside the 50th percentile. By a simple application of physics, it is evident that acceleration or force will affect people of varying weights differently. To this end, the ability to scale a 50th percentile HBM to targets for weight and stature would allow better characterization of how an impact or acceleration event affects people of differing size, especially when ~90% of males can fall outside the 50th percentile for weight and stature and HBMs from vendors exist in only a few variations outside the 50th percentile [1]. Using Corvid Technologies' 50th percentile model CAVEMAN (capable of being repositioned) as a base, scaled models from the 5th to 95th percentiles of stature and weight were generated based on ANSUR II metrics, using a combination of 1D and 3D scaling transformations. These models met their stature and weight metrics when standing, and their weight metrics when positioned.

After creation of a framework to scale the CAVEMAN HBM, creation of a digital twin of the HIRRT Lab helmet testing model commenced. Given the HIRRT Lab's history of experimental testing of football helmets, a natural next step was to bring helmet performance testing into the computational space. This digital twin was a natural evolution of and addition to the HIRRT Lab's helmet testing, as it enables manipulations of helmets that would be infeasible experimentally. After calibration of the bare head using experimental data, helmeted simulation began. Angle of impact, while found to affect peak translational acceleration, was found to profoundly affect peak rotational acceleration. With this in mind, various angles of impact were simulated to produce curves similar to experimental results. Helmeted simulations were qualitatively dissimilar to experimental data, prompting a modification of the padding material used by the models. Following various modifications of the padding material model, these inconsistencies between simulated helmets and experimentally tested helmets persisted. These inconsistencies highlight a need for better characterization of materials, such as foam, and more thorough validation of simulated helmet models. The results of the helmeted simulations are difficult to quantify, as the evaluation criteria used for the BioCore model did not include rotational acceleration, indicating that further research and simulation are necessary.
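The stature/weight scaling described above can be sketched with a crude uniform-density assumption (which the thesis's 1D/3D transformations certainly refine): stretch the vertical axis to hit the target stature, then scale the two cross-sectional axes equally so that total volume (and hence mass, at fixed density) hits the target weight. The formula and the example anthropometry below are illustrative assumptions.

```python
import math

def scale_factors(cur_stature_m, cur_mass_kg, tgt_stature_m, tgt_mass_kg):
    """Return (sxy, sz): cross-sectional and vertical scale factors.
    Assumes uniform density, so mass scales as sxy^2 * sz."""
    sz = tgt_stature_m / cur_stature_m
    sxy = math.sqrt((tgt_mass_kg / cur_mass_kg) / sz)
    return sxy, sz
```

Applying `sz` along the height axis and `sxy` in the transverse plane to every node of a baseline mesh yields a model at the target stature whose mass matches the target under the uniform-density assumption.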
