31

The Virginia Beef Cattle Simulation Model: A bio-economic simulation program modeling the interactions among reproduction, forage availability, nutrition, growth, and marketing in beef cattle

Schick, James Henry 28 April 1999 (has links)
The Virginia Beef Cattle Simulation Model (VBCSM) is a user-friendly, dynamic, stochastic computer program whose objective is to serve as a decision-aid for Virginia cattlemen dealing with complex management issues such as whether to retain weaned calves through the stocker growth stage. Its five source-code modules are reproduction, forage, nutrition, marketing, and a tool that randomly assigns values to variables from appropriate statistical distributions. The VBCSM contains production statistics for 12 breeds, 21 forage species, and three Virginia agro-ecological zones. It simulates at the animal level using information obtained from program dialog. Help can be activated on each dialog page. It is event-driven on a daily time increment. The reproduction module simulates puberty, conception, abortion, parturition, dystocia, lactation, pregnancy testing, culling, within-herd replacement female selection, open or pregnant replacement female purchases, cow and calf mortality, and weaning. The forage module simulates daily pasture growth dependent upon month, precipitation, erosion, pasture maintenance, grazing system, farm location, weed infestation, and slope. This module interacts with the nutrition module to calculate each animal's forage intake, supplemental feed requirements, and daily gain or loss using National Research Council equations. The marketing routine sells the weanling calves to the stocker herd and sells stocker calves, orphan calves, and cull cows through user-specified markets, including the Virginia Tel-O-Market auction. After simulating for eight years to achieve equilibrium conditions, the VBCSM provides an income statement for the cow-calf operation and a partial budget for net income or loss from the stocker herd for up to three years. VBCSM was rigorously tested using a mathematical model with two calving seasons, three lengths of breeding season, four culling policies, and a year effect. 
Descriptive statistics suggest that the program code works in a consistent manner; however, several potential programming inconsistencies were discovered. Simulation results indicate that fall calving may be more profitable than spring calving for weanling calf production in Virginia, but a spring-calved stocker program may be more profitable than a fall-calved stocker program. The VBCSM may thus help cattlemen enhance their profits through more efficient market planning and utilization of production resources. / Ph. D.
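The abstract above describes a module that randomly assigns values to variables from appropriate statistical distributions. A minimal sketch of that idea is given below; the distribution, mean, and standard deviation are hypothetical placeholders, not the VBCSM's actual breed statistics.

```python
import random

def draw_birth_weight(rng, mean_kg=36.0, sd_kg=4.0):
    """Draw one calf birth weight from a normal distribution, truncated
    at zero. The mean/sd values here are illustrative placeholders."""
    return max(0.0, rng.gauss(mean_kg, sd_kg))

# A seeded generator makes a stochastic simulation run reproducible.
rng = random.Random(42)
weights = [draw_birth_weight(rng) for _ in range(2000)]
mean_weight = sum(weights) / len(weights)
```

Seeding the generator, as above, is what lets a stochastic model of this kind produce repeatable runs for testing while still sampling realistic variation.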
32

A Dynamic Enrollment Simulation Model For Planning And Decision-making In A University

Robledo, Luis 01 January 2013 (has links)
Decision support systems for university management have seen limited incorporation of new cutting-edge techniques. For the last few decades, most decision-makers have relied on traditional forecasting methods to keep programs financially affordable and universities competitive. Strategic planning for universities has always been tied to enrollment revenues and operational expenses. Enrollment models in use today forecast from historical data using variables such as student headcount and student credit hours, but they give no consideration to students' preferences. Retention models associated with enrollment likewise deal only with average retention times, leaving preferences out. Preferences play a major role at institutions where students are not required to declare their intentions (major) immediately; even when they do, they may change it if they find a more attractive major, or they may leave college for external reasons. Enrollment models have traditionally served three main purposes: predicting income from tuition (in-state, out-of-state), planning future courses and curricula, and allocating resources to academic departments. This general perspective does not give faculty and departments the information needed for detailed planning and allocation of resources for the next term or year; new metrics are needed to help faculty and departments reach that level of detail. The dynamics of the rate of growth, the preferences students have for certain majors at a specific point in time, and economic hardship all make a difference when decisions must be made about budget requests, faculty hiring, classroom assignment, parking, transportation, or even new facilities. Existing models do not distinguish among these variables.
This simulation model is a hybrid that combines System Dynamics, discrete-event, and agent-based simulation, representing the general enrollment process at the university level (strategic decisions) and enrollment, retention, and major selection at the college (tactical decisions) and department (operational decisions) levels. This approach allows lower levels to predict more accurately the number of students retained for the next term or year, while allowing upper levels to decide on new students to admit (first time in college and transfers), and it yields recommendations on faculty hiring, class and lab assignment, and resource allocation. The model merges high- and low-level student enrollment models into one application, representing not only current overall enrollment but also predictions at the college and department level, and providing information for classroom assignment and for faculty and student resource allocation.
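The strategic-level System Dynamics view in the model above treats enrollment as a stock with inflows (admits) and outflows (dropouts, graduates). The sketch below illustrates that stock-and-flow idea in its simplest form; the rates and headcounts are hypothetical, not drawn from the dissertation's data.

```python
def enrollment_step(enrolled, admits, dropout_rate, grad_rate):
    """One term of a stock-and-flow enrollment update: students who
    neither drop out nor graduate continue, and new admits join."""
    return enrolled * (1.0 - dropout_rate - grad_rate) + admits

def project(enrolled, admits, dropout_rate, grad_rate, terms):
    """Project the enrollment stock forward a number of terms."""
    history = [enrolled]
    for _ in range(terms):
        enrolled = enrollment_step(enrolled, admits, dropout_rate, grad_rate)
        history.append(enrolled)
    return history

# With constant flows the stock approaches admits / (dropout + grad),
# here 3000 / 0.15 = 20,000 students.
history = project(enrolled=15_000, admits=3_000,
                  dropout_rate=0.05, grad_rate=0.10, terms=50)
```

The hybrid model in the dissertation refines this aggregate picture with discrete-event and agent-based layers that track individual students and their major preferences, which this aggregate sketch cannot represent.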
33

Simulating urban growth for Baltimore-Washington metropolitan area by coupling SLEUTH model and population projection

Zhao, Suwen 18 June 2015 (has links)
This study used two modelling approaches to predict the future urban landscape of the Baltimore-Washington metropolitan area. In the first approach, we implemented the traditional SLEUTH urban simulation model using publicly available and locally developed land cover and transportation data. Historical land cover data from 1996, 2001, 2006, and 2011 were used to calibrate the SLEUTH model and predict urban growth from 2011 to 2070; the model achieved 94.9% overall accuracy for a 2014 validation year. For the second modelling approach, we predicted future county-level population (e.g., for 2050) using historical population data and time-series forecasting. We then used the 2050 population projection, aided by a strong population-imperviousness statistical relationship (R2 of 0.78-0.86), to predict the total impervious surface area for each county. These population-predicted totals were compared to SLEUTH model output at the county-aggregated spatial scale. For most counties, SLEUTH generated a substantially higher number of impervious pixels; its annual urban growth rate of 6.24% was much higher than that of the population-based approach (1.33%), indicating a large discrepancy between the two modelling approaches. The SLEUTH simulation model, although it achieved high accuracy in the 2014 validation, may have over-predicted urban growth for our study area. For the population-predicted impervious surface area, we further developed a lookup-table approach to integrate SLEUTH output and generate a spatially explicit urban map for 2050. This lookup-table approach has high potential for integrating population-predicted and SLEUTH-predicted urban landscapes, especially when future population can be predicted with reasonable accuracy. / Master of Science
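The regression step of the second approach, projecting county impervious surface area (ISA) from a population forecast via a fitted population-imperviousness relationship, can be sketched as follows. The coefficients and county data below are illustrative, not the study's actual values.

```python
import numpy as np

def fit_population_imperviousness(pop, isa):
    """Fit a simple linear model ISA = a * pop + b by least squares."""
    a, b = np.polyfit(pop, isa, 1)
    return a, b

def project_isa(a, b, future_pop):
    """Project impervious surface area from a population forecast."""
    return a * future_pop + b

# Illustrative historical county data: population vs. impervious km^2
pop = np.array([100_000, 150_000, 200_000, 260_000])
isa = np.array([40.0, 58.0, 79.0, 101.0])

a, b = fit_population_imperviousness(pop, isa)
projected = project_isa(a, b, 320_000)  # ISA at a projected 2050 population
```

In the study, a county-level projection of this kind is then converted to a spatially explicit map via the lookup-table approach; this sketch covers only the scalar regression step.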
34

BLAST LOAD SIMULATION USING SHOCK TUBE SYSTEMS

Ismail, Ahmed January 2017 (has links)
With the increased frequency of accidental and deliberate explosions, the response of civil infrastructure systems to blast loading has become a research topic of great interest. However, with the high cost and the complex safety and logistical issues associated with live explosives testing, North American blast resistant construction standards (e.g. ASCE 59-11 & CSA S850-12) recommend the use of shock tubes to simulate blast loads and evaluate relevant structural response. This study aims first at developing a 2D axisymmetric shock tube model, implemented in ANSYS Fluent, a computational fluid dynamics (CFD) software, and then validating the model using the classical Sod's shock tube problem solution, as well as available shock tube experimental test results. Subsequently, the developed model is compared to a more complex 3D model in terms of pressure, velocity and gas density. The analysis results show negligible difference between the two models for axisymmetric shock tube performance simulation; however, the 3D model is necessary to simulate non-axisymmetric shock tubes. The design of a shock tube depends on the intended application. As such, extensive analyses are performed in this study, using the developed 2D axisymmetric model, to evaluate the relationships between the blast wave characteristics and the shock tube design parameters. More specifically, the blast wave characteristics (e.g. peak reflected pressure, positive phase duration and reflected impulse) were compared to the shock tube design parameters (e.g. the driver section pressure and length, the driven section length, and the perforation diameters and locations). The results show that the peak reflected pressure increases as the driver pressure increases, while decreasing the driven length also increases the peak reflected pressure. In addition, the positive phase duration increases as both the driver length and the driven length are increased.
Although shock tubes generally generate long positive phase durations, perforations located along the expansion section showed promising results in this study for generating short positive durations. Finally, the developed 2D axisymmetric model is used to optimize the dimensions of a proposed large-scale conical shock tube system developed for civil infrastructure blast response evaluation applications. The capabilities of this proposed shock tube system are further investigated by correlating its design parameters to a range of explosion threats, identified by different hemispherical TNT charge weight and distance scenarios. / Thesis / Master of Applied Science (MASc)
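The classical Sod shock tube problem used above for validation has a standard initial condition (a diaphragm separating high- and low-pressure gas) and can be reproduced with a few lines of first-order finite-volume code. The sketch below uses a generic Lax-Friedrichs scheme for the 1D Euler equations; it illustrates the validation case only and is not the thesis's ANSYS Fluent model.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats for air

def sod_initial(nx):
    """Sod's initial condition: (rho, p) = (1, 1) left, (0.125, 0.1) right."""
    x = np.linspace(0.0, 1.0, nx)
    rho = np.where(x < 0.5, 1.0, 0.125)
    p = np.where(x < 0.5, 1.0, 0.1)
    u = np.zeros(nx)
    E = p / (GAMMA - 1) + 0.5 * rho * u**2   # total energy density
    return x, np.array([rho, rho * u, E])     # conserved variables

def flux(U):
    """Euler fluxes for conserved variables (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def step(U, dx, dt):
    """First-order Lax-Friedrichs update on interior cells."""
    F = flux(U)
    Un = U.copy()
    Un[:, 1:-1] = (0.5 * (U[:, 2:] + U[:, :-2])
                   - dt / (2 * dx) * (F[:, 2:] - F[:, :-2]))
    return Un

x, U = sod_initial(400)
dx = x[1] - x[0]
t, dt = 0.0, 0.0005          # dt chosen well inside the CFL limit
while t < 0.2:               # evolve to the standard comparison time t = 0.2
    U = step(U, dx, dt)
    t += dt
rho = U[0]                   # density profile: rarefaction, contact, shock
```

A CFD validation like the one in the thesis compares such a numerical density, velocity and pressure profile against the exact Riemann solution of the same initial condition.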
35

Putting theory into practice: Predicting the invasion and stability of Wolbachia using simulation models and empirical studies

Crain, Philip R. 01 January 2013 (has links)
A new strategy to fight mosquito-borne disease is based on infections of the maternally transmitted, intracellular bacterium Wolbachia pipientis. Estimates predict that Wolbachia infects nearly half of all insect species, as well as other arthropods and some nematodes. Wolbachia manipulates the reproduction of its host to promote infection, most commonly causing a form of conditional sterility known as cytoplasmic incompatibility. Generally, Wolbachia infections are benign and do not inflict significant costs on their hosts. However, studies demonstrate that some infections are associated with substantial costs to the host; these same infections can also induce pathogen interference and decrease the vector competency of important disease vectors. Theory predicts that organisms incurring costs relative to conspecifics are less competitive, and their competitive exclusion is expected. In the case of Wolbachia, however, the bacterium can influence reproduction such that phenotypes with lower fitness may still reach fixation in natural populations. In this dissertation, I describe theoretical and empirical experiments that aim to understand the invasion and stability of Wolbachia infections that impose costs on their host. Particular attention is paid to immature insect life stages, which have previously been marginalized. These results are discussed in relation to ongoing vector control strategies that would use Wolbachia to manipulate vector populations. Specifically, I discuss the cost of novel Wolbachia infections in Aedes polynesiensis, which decrease larval survival and overall fitness relative to wild-type mosquitoes. A theoretical framework was then developed to determine the significance of reductions in larval viability for the population replacement disease control strategy.
Further theoretical studies determined that Wolbachia infections, once established, resist re-invasion by uninfected individuals despite relatively high infection costs, so long as the infection produces reproductive manipulations. Additional studies determined that larvae hatched from old eggs experience reduced survival in mosquito strains with novel Wolbachia infections compared to the wild type. To validate the theoretical studies, model predictions were tested empirically to determine the importance of larval viability. Finally, a COPAS PLUS machine was evaluated and its role in understanding early larval development in mosquitoes is discussed. The importance of integrated research in disease control is highlighted.
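The invasion-threshold behaviour described above, where a costly Wolbachia infection is lost below a critical frequency but spreads to fixation above it, can be illustrated with the classic discrete-generation recursion for cytoplasmic incompatibility dynamics (a standard Caspari-Watterson-style model, not the dissertation's own model; the parameter values are hypothetical).

```python
def wolbachia_next(p, s_f, s_h):
    """One generation of Wolbachia infection-frequency dynamics.

    p   : current infection frequency
    s_f : fecundity cost of carrying the infection
    s_h : severity of cytoplasmic incompatibility (egg hatch failure)
    """
    return p * (1 - s_f) / (1 - s_f * p - s_h * p * (1 - p))

def simulate(p0, s_f=0.2, s_h=0.8, gens=200):
    """Iterate the recursion for a number of generations."""
    p = p0
    for _ in range(gens):
        p = wolbachia_next(p, s_f, s_h)
    return p

# Unstable equilibrium (invasion threshold) is p_hat = s_f / s_h = 0.25
threshold = 0.2 / 0.8
low = simulate(0.20)   # start below threshold: infection is lost
high = simulate(0.30)  # start above threshold: approaches fixation
```

This bistability is the theoretical basis for both concerns in the abstract: a costly infection must be released above the threshold to invade, yet once fixed it resists re-invasion by uninfected individuals.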
36

Analysis of Synchronous machine dynamics using a novel equivalent circuit model

Danielsson, Christer January 2009 (has links)
This thesis investigates simulation of synchronous machines using a novel Magnetic Equivalent Circuit (MEC) model. The proposed model offers sufficient detail for design calculations while still keeping the simulation time acceptably short.

Different modelling methods and circuit alternatives are considered. The selected approach combines several previous methods, augmented with some new features, and a detailed description of the new model is given. The flux derivative is chosen as the magnetic flow variable, which enables a description with standard circuit elements. The model is implemented in dq-coordinates to reduce complexity and simulation time, and a new method to reflect winding harmonics is introduced.

Extensive measurements have been made to estimate the traditional dq-model parameters. These, in combination with analytical calculations, are used to determine the parameters of the new MEC model.

The model is implemented in the Dymola simulation program, and the results are evaluated by comparison with measurements and FEM simulations. Three operating cases are investigated: synchronous operation, asynchronous start, and inverter-fed operation. The agreement with measurements and FEM simulations varies, but it is believed that it can be improved by further work on parameter determination.

The overall conclusion is that the MEC method is a useful approach for detailed simulation of synchronous machines. It enables proper modelling of magnetic saturation and promises sufficiently detailed results for accurate loss calculations. However, experience shows that circuit complexity should be kept reasonably low; otherwise the practical problems of model structure, parameter determination and the simulation itself become difficult to master.
37

Stand dynamics of mixed-Nothofagus forest

Hurst, Jennifer Megan January 2014 (has links)
Sustainable management of mixed-species forests for timber is underpinned by research on forest stand dynamics and quantification of tree recruitment, growth and mortality rates. Different performance among species across light gradients theoretically prevents more shade-tolerant species from excluding shade-intolerant species, driving succession and allowing species coexistence. This research investigates stand dynamics and performance trade-offs for co-occurring tree species: Nothofagus fusca (red beech) and Nothofagus menziesii (silver beech), which together dominate extensive areas of New Zealand’s indigenous forest. Using permanent plot data, measurements of permanently tagged individuals are used to quantify recruitment, growth and mortality rates for each species, across size classes and life-history stages (i.e. seedlings, trees). First, seedling growth and mortality are examined in relation to microhabitats (e.g. light, substrate type) and contrasted with patterns of seedling abundance. Second, spatially explicit permanent plot data are used to examine tree growth in relation to competition, local disturbance and tree size over a 23-year period. Third, the influence of competition and disturbance on tree mortality and spatial patterns of tree mortality is examined. Fourth, a simulation model for tree population dynamics is parameterised for mixed-Nothofagus forest and used to evaluate long-term consequences of disturbances (e.g. alternative harvesting regimes) on structure and composition. Small-scale disturbance favoured each species at different life stages and for different measures of performance (i.e. recruitment, growth or mortality). N. fusca seedlings and trees grew fast in high-light microhabitats, such as those created by small-scale disturbances, but adult N. fusca mortality was elevated near sites of recent disturbance. By contrast, N. menziesii trees grew faster near sites of recent disturbance, which may help this species persist.
Consequently, simulation results showed that small-scale disturbance frequency was a major determinant of forest composition and structure, determining whether N. fusca or N. menziesii is dominant. The simulation model could be developed further and used to inform the sustainable management of mixed-Nothofagus forests.
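The kind of long-term projection such a tree population model performs can be sketched with a toy stage-structured matrix model, a standard way to combine recruitment, growth (stage transitions) and mortality into one projection. The stages and transition rates below are hypothetical illustrations, not the fitted Nothofagus parameters from the thesis.

```python
import numpy as np

# Toy stage-structured projection matrix: seedling, sapling, adult.
# Row i, column j = per-step contribution of stage j to stage i.
A = np.array([
    [0.55, 0.00, 12.00],  # seedlings: survival + recruitment from adults
    [0.05, 0.80,  0.00],  # saplings: graduation from seedlings, survival
    [0.00, 0.02,  0.97],  # adults: graduation from saplings, survival
])

n = np.array([500.0, 50.0, 20.0])  # initial stems per hectare by stage
for _ in range(100):               # project 100 time steps forward
    n = A @ n

# Asymptotic population growth rate: dominant eigenvalue of A
lam = np.abs(np.linalg.eigvals(A)).max()
```

Altering survival or recruitment entries (for example, to mimic a harvesting regime or a disturbance frequency) and re-computing the dominant eigenvalue is a compact way to explore the compositional consequences the simulation results above describe.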
38

A theoretical framework for hybrid simulation in modelling complex patient pathways

Zulkepli, Jafri January 2012 (has links)
Providing care services across several departments and caregivers creates complexity in patient pathways, as it involves different departments, policies, professionals, regulations and more. One example of a complex patient pathway (CPP) is that found in integrated care, which most literature relates to the integration of health and social care. The world population and the demand for care services have increased; action is therefore needed to improve the services given to patients and maintain their quality of life. Because complexity arises from the differing needs of stakeholders, many problems emerge, especially where CPPs are involved. To reduce these problems, many researchers have tried decision tools such as Discrete Event Simulation (DES), System Dynamics (SD), Markov models and tree diagrams, as well as direct experimentation, one of the Lean Thinking techniques, in efforts to simplify the system's complexity and provide decision support. However, CPP models to date have been developed using single tools, which leaves them with limitations and unable to cover the entire needs and features of a CPP system: for example, a lack of individual-level analysis, of feedback loops, and of experimentation prior to real implementation. As a result, decision making has been ineffective and inefficient. The researcher argues that by combining the DES and SD techniques, termed hybrid simulation, the CPP model can be enhanced, providing better decision support and reducing the problems in CPPs to a minimum. As no standard framework exists, this research proposes a framework for hybrid simulation modelling of the CPP system.
The researcher is concerned primarily with the framework rather than any particular CPP model, as no standard model can represent every type of CPP: pathways differ in their regulations, policies, governance and more. The framework is developed from the literature, drawing on frameworks and models that have combined the DES and SD techniques simultaneously and have been applied to large systems or healthcare sectors, since a CPP system is itself a large healthcare system. The proposed framework is divided into three phases, the Conceptual, Modelling, and Models Communication phases, and each phase is decomposed into several steps. To validate the suitability of the proposed framework as guidance for developing CPP models using hybrid simulation, an inductive research methodology is used with case studies as the research strategy. Two approaches test the framework's suitability: practical and theoretical. The practical approach develops a CPP model (within health and social care settings) using SD and DES simulation software, based on several case studies of health and social care systems that had used single modelling techniques. The theoretical approach applies several case studies from different care settings without developing a model; four case studies with different areas and care settings were selected and applied to the framework. Based on the suitability tests, the framework is modified accordingly. As this framework provides guidance on how to develop CPP models using hybrid simulation, it is argued that it will serve as a benchmark for researchers and academics, as well as for decision and policy makers, developing CPP models with hybrid simulation.
39

Simulation modelling of distributed-shared memory multiprocessors

Marurngsith, Worawan January 2006 (has links)
Distributed shared memory (DSM) systems have been recognised as a compelling platform for parallel computing due to their programming advantages and scalability. DSM systems allow applications to access data in a logically shared address space by abstracting away the distinction of physical memory location. As the location of data is transparent, the sources of overhead caused by accessing distant memories are difficult to analyse; this memory locality problem has been identified as crucial to DSM performance. Many researchers have investigated the problem using simulation as a tool for conducting experiments, resulting in the progressive evolution of DSM systems. Nevertheless, both the diversity of architectural configurations and the rapid advance of DSM implementations impose constraints on simulation model design in two respects: limitations of the simulation framework on model extensibility, and the lack of verification applicability during a simulation run, which delays the verification process. This thesis studies simulation modelling techniques for memory locality analysis of various DSM systems implemented on top of a cluster of symmetric multiprocessors. It presents a simulation technique to promote model extensibility and proposes a technique for verification applicability, called Specification-based Parameter Model Interaction (SPMI). The proposed techniques have been implemented in a new interpretation-driven simulation, DSiMCLUSTER, built on top of a discrete event simulation (DES) engine known as HASE. Experiments have been conducted to determine which factors are most influential on the degree of locality and to explore how to maximise the stability of performance. DSiMCLUSTER has been validated against a SunFire 15K server, achieving cache-miss results within ±6% on average, with the worst case less than 15% difference.
These results confirm that the techniques used in developing DSiMCLUSTER contribute to achieving both (a) a highly extensible simulation framework that can keep up with ongoing innovation in DSM architecture, and (b) verification applicability, resulting in an efficient framework for memory analysis experiments on DSM architectures.
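The discrete event simulation principle on which an engine like HASE rests can be sketched in a few lines: a simulation clock plus a time-ordered event heap. This is a generic illustration of the DES core, not the HASE API, and the memory-access latencies are hypothetical.

```python
import heapq

class DESEngine:
    """Minimal discrete event simulation core: pop the earliest event,
    advance the clock to its timestamp, run its action."""

    def __init__(self):
        self.clock = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker for events scheduled at equal times

    def schedule(self, delay, action):
        """Schedule `action` to fire `delay` time units from now."""
        heapq.heappush(self._queue, (self.clock + delay, self._seq, action))
        self._seq += 1

    def run(self, until):
        """Process events in timestamp order up to time `until`."""
        while self._queue and self._queue[0][0] <= until:
            self.clock, _, action = heapq.heappop(self._queue)
            action()

# Example: two memory accesses with illustrative local vs. remote latency,
# the kind of locality difference a DSM simulation measures.
engine = DESEngine()
log = []
engine.schedule(10, lambda: log.append(("local hit", engine.clock)))
engine.schedule(150, lambda: log.append(("remote miss", engine.clock)))
engine.run(until=1000)
```

Because the clock jumps directly from event to event rather than ticking uniformly, such an engine can simulate long idle stretches of a memory system at negligible cost, which is what makes DES attractive for architecture studies like this one.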
40

Sensitivity analysis for an assignment incentive pay in the United States Navy enlisted personnel assignment process in a simulation environment

Logemann, Karsten 03 1900 (has links)
Approved for public release, distribution is unlimited / The enlisted personnel assignment process is a major part of the United States Navy's personnel distribution system. It ensures that warfighters and supporting activities receive the right sailor with the right training in the right billet at the right time (R4), and it is a critical element in meeting the challenges of Seapower 21 and Global CONOPS. To attain these goals, the process needs to be customer-centered and should optimize both the Navy's needs and the sailors' interests. Recent studies and a detailing pilot in 2002 used a web-based marketplace with two-sided matching mechanisms to accomplish this vision. This research examines the introduction of an Assignment Incentive Pay (AIP) as part of the U.S. Navy's enlisted personnel assignment process in a simulation environment. It uses a previously developed simulation tool, including the Deferred Acceptance (DA) and Linear Programming (LP) matching algorithms, to simulate the assignment process. The results of the sensitivity analysis suggest that the Navy should emphasize sailor quality rather than saving AIP funds in order to maximize utility and the number of possible matches. Adopting such an introduction policy also reduced the percentage of unstable matches under the LP matching algorithm. / Commander, German Navy
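The Deferred Acceptance mechanism named above follows the classic Gale-Shapley procedure for two-sided matching. A minimal sailor-proposing sketch is given below; the sailor and billet names and preference lists are illustrative, not the study's data, and real detailing adds incentive pay and quality weights this sketch omits.

```python
def deferred_acceptance(sailor_prefs, billet_prefs):
    """Sailor-proposing deferred acceptance (Gale-Shapley), one sailor
    per billet. Assumes complete preference lists on both sides."""
    # Rank lookup: lower number = more preferred by the billet.
    rank = {b: {s: i for i, s in enumerate(p)} for b, p in billet_prefs.items()}
    next_choice = {s: 0 for s in sailor_prefs}  # next billet each sailor tries
    held = {}                                   # billet -> tentatively held sailor
    free = list(sailor_prefs)
    while free:
        s = free.pop()
        b = sailor_prefs[s][next_choice[s]]
        next_choice[s] += 1
        current = held.get(b)
        if current is None:
            held[b] = s            # billet was empty: hold this sailor
        elif rank[b][s] < rank[b][current]:
            held[b] = s            # billet prefers s: bump the holder
            free.append(current)
        else:
            free.append(s)         # rejected: s proposes to the next billet
    return {s: b for b, s in held.items()}  # sailor -> billet

# Illustrative instance: both sailors want billet X first; X prefers B,
# so A ends up deferred to Y.
matching = deferred_acceptance(
    {"A": ["X", "Y"], "B": ["X", "Y"]},
    {"X": ["B", "A"], "Y": ["A", "B"]},
)
```

A key property of DA, relevant to the stability comparison in the abstract, is that its outcome is stable: no sailor-billet pair would both prefer each other over their assigned match, whereas an LP-based assignment can leave such unstable pairs.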
