511 |
Activity-based product costing in a hardwood sawmill through the use of discrete-event simulation. Rappold, Patrick M., 31 July 2006 (has links)
The purpose of this research was to quantify the impact of the log variables length, grade, and scaling diameter on the cost of producing hardwood lumber, using the activity-based costing technique. The usual technique for calculating hardwood lumber product costs is based upon traditional cost accounting, where manufacturing costs are allocated to the products based upon the volume of each product that is produced. With the traditional cost accounting procedure, the variation in the resources used to process the logs is not taken into consideration. As a result, when the cost to manufacture the products is subtracted from the market value of the products, the resulting profit levels may not be truly representative of the actual resources consumed in manufacturing each product.
Using discrete-event simulation, two hardwood sawmills were modeled and a series of experiments was conducted that would not have been feasible on the mill floors. Results from the simulation experiments illustrated that the activity-based and traditional cost accounting techniques allocated different amounts of manufacturing costs to the products. The largest difference between the two cost accounting techniques was the amount of raw material costs allocated to the products. For one of the sawmills modeled, log grade was identified as having the greatest influence on product costs and total manufacturing costs. Results from the model of the second sawmill, however, demonstrated that log diameter had a greater impact. The common finding from the two simulation models was that the differences in the volume of lumber produced among the logs studied were a critical component in determining which log parameter had the most effect on changing the dynamics of the sawmill system.
To give hardwood managers a more precise method of allocating raw material costs to lumber products, a methodology was developed that uses the principles of activity-based costing to allocate raw material costs. The proposed methodology, termed the lumber yield method, uses lumber yield values from logs with similar characteristics to allocate raw material costs to the lumber products. Analysis of the output from the simulation models illustrated that with the lumber yield method, the amount of raw material costs allocated to the products was not significantly different from the amount allocated by the activity-based costing method. The calculated raw material costs of the products were, however, significantly different between the lumber yield method and the traditional volume costing method. / Ph. D.
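The contrast between the two allocation rules can be sketched in a few lines of Python. This is an illustrative sketch only: the yield weighting below is an assumed form, not the dissertation's exact formula, and the product grades and yield values are hypothetical.

```python
def volume_costing(log_cost, volumes):
    # Traditional costing: allocate the log's cost in proportion to each
    # product's share of the total lumber volume produced.
    total = sum(volumes.values())
    return {p: log_cost * v / total for p, v in volumes.items()}

def yield_costing(log_cost, volumes, grade_yield):
    # Hypothetical yield-weighted allocation: products cut from low-yield
    # material absorb proportionally more raw material cost.  The weights
    # `grade_yield` (lumber volume recovered per unit of log volume, for
    # logs with similar characteristics) are assumed inputs.
    weights = {p: v / grade_yield[p] for p, v in volumes.items()}
    total = sum(weights.values())
    return {p: log_cost * w / total for p, w in weights.items()}
```

Both rules are conservative — the allocations sum to the log's cost — and differ only in how that cost is split across products.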
|
512 |
A Risk-Based Pillar Design Approach for Improving Safety in Underground Stone Mines. Monsalve Valencia, Juan Jose, 07 July 2022 (has links)
The collapse of a mine pillar is a catastrophic event with great consequences for a mining operation. These events are not uncommon and have been reported to produce air blasts able to knock down, seriously injure, or kill miners; cause cascading pillar failures involving the collapse of neighboring pillars; produce surface subsidence; and sterilize valuable reserves. Despite the low probability of occurrence of a pillar collapse in comparison to other ground control instability issues, these consequences make such events high risk. Therefore, the design of these structures should be approached from a risk perspective rather than the traditional deterministic factor-of-safety approach. Discontinuities are one of the main failure drivers in underground stone pillars; despite this, traditional pillar strength equations do not consider their effect. Recently, the NIOSH pillar strength equation introduced a Large Discontinuity Factor that acknowledges the effect of discontinuities on pillar strength. However, this parameter only considers "averaged" parameters in a deterministic way, failing to account for the spatial variability of fracture networks. This work presents a risk-based pillar design framework that characterizes the effect of discontinuities on pillar strength and accounts for the possible range of stresses acting on pillars. The proposed method was evaluated in an underground dipping stone mine. Discontinuities were characterized by integrating laser scanning and virtual discontinuity mapping. Information obtained from the discontinuity mapping process was used to generate discrete fracture networks (DFNs) for each discontinuity set. The discrete element modeling software 3DEC was used along with the DFNs to simulate fractured rock pillars.
Different fractured pillar strength modeling approaches were evaluated, and the most adequate in terms of pillar strength values, representation of failure mechanisms, and processing times was selected. The selected model was run stochastically, and the results were used to characterize pillar strength variability due to the presence of discontinuities. Pillar stress distributions were estimated using a stochastic finite volume continuum numerical model that accounted for the dipping nature of the deposit and the case study mine design. A baseline pillar probability of failure was defined by contrasting the resulting pillar strength and stress distributions using the reliability method. Results from this design framework provide additional decision-making tools to prevent pillar failure from the design stages by reducing uncertainty. The proposed method enables the integration of pillar design into the risk analysis framework of the mining operation, ultimately improving safety by preventing future pillar collapses. / Doctor of Philosophy / Underground mining operations involve the removal of rock material from the ground. Engineers are required to design structural elements to ensure the stability of the openings as the material is extracted. These structural elements, known as pillars, are usually carefully designed regular blocks of rock left unmined. The pressures that the mined rock was carrying are shed to these pillars, whose sizes and dimensions must provide enough strength to ensure the overall stability of the mine and avoid a collapse. Mine pillar failures have occurred, with serious consequences such as injured and killed mine workers, ground surface sinking that affects neighboring communities, and halted mine operations. Due to the severity of these consequences, pillar collapses are classified as high-risk events.
Therefore, pillar design should be addressed from a perspective that estimates the likelihood of pillar failure given all possible hazards. The rock material that composes mine pillars presents fractures and weakness planes that influence pillar strength. Even though it has been widely demonstrated that these features have a direct impact on pillar strength, most commonly used pillar design methods fail to consider this effect, producing uncertainty about the possible range of values for the actual strength of the pillars. This work introduces a pillar design framework that characterizes the effect of discontinuities on pillar strength and accounts for the possible range of stresses acting on pillars. The proposed method was evaluated in an underground inclined stone mine. Laser scanning was used to map and characterize rock fractures. Fracturing information was used to generate virtual three-dimensional fracture models referred to as discrete fracture networks (DFNs). A computational mechanical model of the mine pillar was built using the software 3DEC to evaluate the compressive strength of the fractured pillar. Multiple fracturing scenarios were tested, and distributions of possible pillar strengths were estimated from these tests. An additional computational model was run to estimate the distribution of stresses in the pillar, considering the mine design and geological conditions. Results from both analyses allowed a baseline pillar probability of failure to be estimated. This design framework provides additional decision-making tools to prevent pillar failure from the design stages by reducing uncertainty. The proposed method enables the integration of pillar design into the risk analysis framework of the mining operation, ultimately improving safety by preventing future pillar collapses.
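The reliability step — contrasting a pillar strength distribution with a stress distribution — can be sketched with a simple Monte Carlo estimate. The distributions and parameter values below are assumptions for illustration only, not the mine's data.

```python
import random

def probability_of_failure(strength, stress, n=200_000, seed=7):
    # Reliability method, Monte Carlo form: the probability of failure is
    # P(strength < stress), estimated by sampling both distributions.
    rng = random.Random(seed)
    return sum(strength(rng) < stress(rng) for _ in range(n)) / n

# Illustrative normal distributions; the means and standard deviations
# are assumed values chosen only to demonstrate the calculation.
pf = probability_of_failure(lambda r: r.gauss(30.0, 5.0),   # strength, MPa
                            lambda r: r.gauss(15.0, 3.0))   # stress, MPa
```

In practice the strength samples would come from the stochastic 3DEC runs and the stress samples from the finite volume model, rather than from closed-form distributions.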
|
513 |
Discrete Event Simulation of Mobility and Spatio-Temporal Spectrum Demand. Chandan, Shridhar, 05 February 2014 (has links)
Realistic mobility and cellular traffic modeling is key to various wireless networking applications and has a significant impact on network performance. Planning and design, network resource allocation, and performance evaluation in cellular networks all require realistic traffic modeling. We propose a discrete event simulation framework, Diamond (Discrete Event Simulation of Mobility and Spatio-Temporal Spectrum Demand), to model and analyze realistic activity-based mobility and spectrum demand patterns. The framework can be used for spatio-temporal estimation of load, deciding the location of a new base station, contingency planning, and estimating the resilience of the existing infrastructure. The novelty of this framework lies in its ability to capture a variety of complex, realistic, and dynamically changing events effectively. Our initial results show that the framework can be instrumental in contingency planning and dynamic spectrum allocation. / Master of Science
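At the core of any discrete event simulation framework is an event loop ordered by timestamp; a minimal sketch follows (the names are illustrative, not Diamond's actual API).

```python
import heapq
import itertools

class Simulator:
    """Minimal discrete-event kernel: handlers fire in timestamp order
    and may schedule further events."""

    def __init__(self):
        self.now = 0.0
        self._events = []
        self._tie = itertools.count()  # breaks ties at equal timestamps

    def schedule(self, delay, handler):
        heapq.heappush(self._events, (self.now + delay, next(self._tie), handler))

    def run(self, until):
        # Pop and dispatch events in time order up to the horizon.
        while self._events and self._events[0][0] <= until:
            self.now, _, handler = heapq.heappop(self._events)
            handler(self)
```

A handler that reschedules itself models a recurring event, such as periodic session arrivals at a base station.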
|
514 |
Use of the Discrete Vortex Method to Calculate Wind Loads over a Surface-Mounted Prism and a Bridge Cross-Section with Flaps. Maines, Nathan Louis, 15 June 2005 (has links)
This thesis presents the Discrete Vortex Method (DVM) as a tool to determine the flow field and associated wind loads over structures. Two structures are considered: the first is a surface-mounted prism, used to simulate wind loads over low-rise structures; the second is a bridge section with attached flaps that can be oriented to vary the moment coefficient. Advantages and disadvantages of using DVM for these applications are discussed. For the surface-mounted prism, the results show that the developed code correctly predicts the flow separation around the corners. As for the surface pressures, it is concluded that parallel processing, which could easily be implemented for DVM, should be used to correctly predict surface pressures and their variations; this is due to the required slow time advancement of the computations. The results on attaching flaps to bridge sections yield the flap orientations required to minimize moments under different angles of attack. / Master of Science
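The kinematic core of a DVM code is the velocity induced at a field point by the tracked point vortices; a minimal 2-D sketch (counterclockwise circulation taken positive):

```python
import math

def induced_velocity(x, y, vortices):
    # Velocity at (x, y) induced by 2-D point vortices given as
    # (xv, yv, gamma) tuples; a vortex with positive gamma induces a
    # counterclockwise swirl of magnitude gamma / (2*pi*r).
    u = v = 0.0
    for xv, yv, gamma in vortices:
        dx, dy = x - xv, y - yv
        r2 = dx * dx + dy * dy
        u += -gamma * dy / (2.0 * math.pi * r2)
        v += gamma * dx / (2.0 * math.pi * r2)
    return u, v
```

Practical DVM codes also add a finite vortex-core (desingularization) model so the induced velocity stays bounded as the separation distance approaches zero; that refinement is omitted here.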
|
515 |
Prolate Spheroidal Sequence Based Transceivers for Time-Frequency Dispersive Channels. Said, Karim A., 12 July 2017 (has links)
Most existing transceivers are Fourier-centric, with complex sinusoids playing a central role in the internals of the core building blocks. From the channel perspective, complex sinusoids constitute the fundamental effects in the wireless baseband equivalent channel model, exemplified by the time-invariant and time-varying transfer functions in static and time-varying channel conditions respectively. In addition, complex sinusoids are used as signaling waveforms for data transmission through the channel.
The dominant mode of transmission in modern communications is in the form of finite-duration blocks having approximately finite bandwidth. As a result, the time-frequency space is projected onto a time-frequency subspace with essentially limited support, where complex sinusoids suffer from leakage effects due to the finite time extent of a block. In addition, Kronecker delta signals (duals of complex sinusoids) suffer from the same vulnerability due to the finite bandwidth. Gabor signaling bases using non-rectangular pulse shapes can attain good confinement in the time-frequency space, but at the expense of completeness, which reduces the utilization efficiency of the time-frequency signaling resources.
Over a signaling block period, a doubly dispersive (DD) channel is projected onto an essentially limited time-frequency subspace. In this subspace, the Discrete Prolate Spheroidal (DPS) basis matched to the channel parameters is known to be optimally compact in representing the channel using a basis expansion decomposition, unlike the Discrete Fourier Transform (DFT) basis, which lacks compactness due to the leakage effect.
Leakage in the expansion coefficients of a particular channel under the DFT basis corresponds directly to the Inter-Symbol Interference (ISI) between the DFT signaling components when transmitted through the same channel. For the DPS basis, the correspondence is not as obvious. Nevertheless, the DPS basis, when used for signaling, yields ISI compactness in the form of an exponential decay of distant ISI components.
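The compactness of the DPS basis can be inspected directly with SciPy's `dpss` routine, which returns the sequences together with their in-band energy concentration ratios. The parameter values below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.signal.windows import dpss

# M-point DPS sequences with time-half-bandwidth product NW.  The first
# roughly 2*NW sequences concentrate nearly all of their energy inside
# the band |f| <= NW/M, which is the compactness property exploited above.
M, NW, K = 128, 4.0, 6
seqs, ratios = dpss(M, NW, Kmax=K, norm=2, return_ratios=True)
```

With `norm=2` each sequence has unit energy, and the sequences are mutually orthogonal, which is what makes them usable both as a channel expansion basis and as signaling waveforms.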
The efficacy of DPS signaling in DD channels, in addition to its efficiency in modeling DD channels, motivates the investigation of a new transceiver baseband architecture in which the DFT is supplanted by DPS. / Ph. D. / Radio communication technology is undeniably a vital organ of modern societies, as witnessed by its compelling socio-economic impact. Social media terms such as Facebook and Twitter have spurred a trans-geographical neologism in the vernacular of nations across the globe. This is all thanks to the seamless ubiquity afforded by untethered wireless communication technology.
High data rate wireless communication for nomadic modes of operation (movement across locations with intermittent dwelling) has been an uncontested success. However, the quality of communications while on the move at ambitiously high speeds, up to 500 km/h, is a completely different state of affairs.
Orthogonal Frequency Division Multiplexing (OFDM) is the workhorse technology driving all modern communication systems including Bluetooth, WiFi, 4G Long Term Evolution (LTE), High Definition TV broadcast (HDTV), and more.
As the adage goes, “no one size fits all”: OFDM so far has been the size that fits nomadic and relatively slow mobility modes of operation, which correspond to the majority of behavior patterns of communicating entities. However, scenarios that rely on high mobility are gradually moving out of the fringes and into the center scene, examples being wide-band Vehicle-to-Vehicle (V2V) and Vehicle-to-Anything (V2X) communication.
Because of OFDM’s inadequacy in such high mobility conditions, both academic and industrial bodies have embarked on research efforts to investigate signaling schemes resilient to the hostile channel effects that arise in high mobility conditions. The thesis of this work is that Discrete Prolate Spheroidal (DPS) sequences, our proposed candidate, are the most suitable of the signaling schemes presented by the research community so far. We provide both theoretical arguments to demonstrate the essential merits of DPS and case-specific simulations to demonstrate its efficacy.
|
516 |
High performance, scalable, and expressive modeling environment to study mobile malware in large dynamic networks. Channakeshava, Karthik, 18 October 2011 (has links)
Advances in computing and communication technologies are blurring the distinction between today's PCs and mobile phones. Skyrocketing smartphone sales, a lack of awareness about securing these devices, and the access they provide to personal and proprietary information have resulted in the recent surge of mobile malware. In addition to using traditional social-engineering techniques such as email and file-sharing, malware unique to Bluetooth, Short Messaging Service (SMS), and Multimedia Messaging Service (MMS) messages is being used. Large-scale simulation of malware on wireless networks, under realistic device deployments, has become important for obtaining deep insights into malware dynamics and devising ways to control outbreaks.
In this dissertation, we present EpiNet: an individual-based, scalable, high-performance modeling environment for simulating the spread of mobile malware over large, dynamic networks. EpiNet can be used to undertake comprehensive studies during both the planning and response phases of a malware epidemic in current and future generation wireless networks. Scalability is an important design consideration: the current EpiNet implementation can scale to networks of 3-5 million devices, and case studies show that large factorial designs on million-device networks can be executed within a day on 100-node clusters. Beyond compute time, EpiNet has been designed to let analysts easily represent a range of interventions and evaluate their efficacy.
The results indicate that Bluetooth malware with a very low initial infection size will not result in a major wireless epidemic. The dynamics are dependent on the network structure, and activity-based mobility models or their variations can yield realistic spread dynamics. Early detection of the malware is extremely important in controlling the spread. Non-adaptive response strategies using static graph measures such as degree and betweenness are not effective. Device-based detection mechanisms provide a much better means of controlling the spread, but only when detection occurs early. Automatic signature generation can help in detecting newer strains of the malware, and signature distribution through a central server results in better control of the spread. Centralized dissemination of patches must reach a large proportion of devices to be effective in slowing the spread. Non-adaptive dynamic graph measures such as vulnerability are found to be more effective.
Our studies of SMS and hybrid malware show that SMS-only malware spreads slightly faster than Bluetooth-only malware but does not spread to all devices. Hybrid malware spreads orders of magnitude faster than either SMS-only or Bluetooth-only malware and can cause significant damage. Bluetooth-only malware spreads faster than SMS-only malware where the density of devices in the proximity of an infected device is higher. Because hybrid malware can be much more damaging than Bluetooth-only or SMS-only malware, mechanisms are needed to prevent such an outbreak. EpiNet provides a means to propose, implement, and evaluate response mechanisms in realistic and safe settings. / Ph. D.
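The qualitative spread comparisons can be reproduced with a toy discrete-time SI (susceptible-infected) model over a contact graph. The graphs and transmission probabilities below are illustrative stand-ins, not EpiNet's detailed synthetic populations.

```python
import random

def si_spread(graph, seeds, p, steps, rng=None):
    # Discrete-time SI epidemic: at each step every infected node infects
    # each susceptible neighbor independently with probability p.
    rng = rng or random.Random(0)
    infected = set(seeds)
    counts = [len(infected)]
    for _ in range(steps):
        newly = {v for u in infected for v in graph[u]
                 if v not in infected and rng.random() < p}
        infected |= newly
        counts.append(len(infected))
    return counts
```

On a denser graph (proximity edges plus contact-list edges, loosely analogous to hybrid malware) the infected count grows much faster than on a sparse graph alone, mirroring the qualitative result above.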
|
517 |
Conditional, Structural and Unobserved Heterogeneity: three essays on preference heterogeneity in the design of financial incentives to increase weight loss program reach. Yuan, Yuan Clara, 27 August 2015 (has links)
This dissertation consists of three essays on forms of preference heterogeneity in discrete choice models.
The first essay uses a model of heterogeneity conditional on observed individual-specific characteristics to tailor financial incentives to enhance weight loss program participation among target demographics. Financial incentives in weight loss programs have received attention mostly with respect to effectiveness rather than participation and representativeness. This essay examines the impact of financial incentives on participation with respect to populations vulnerable to obesity and understudied in the weight loss literature. We found significant heterogeneity across target sub-populations and suggest a strategy of offering multiple incentive designs to counter the dispersive effects of preference heterogeneity.
The second essay investigates the ability of a novel elicitation format to reveal decision strategy heterogeneity. Attribute non-attendance, the behaviour of ignoring some attributes when performing a choice task, violates fundamental assumptions of the random utility model. However, self-reported attendance behaviour on dichotomous attendance scales has been shown to be unreliable. In this essay, we assess the ability of a polytomous attendance scale to ameliorate self-report unreliability. We find that the lowest point on the attendance scale corresponds best to non-attendance, attendance scales need be no longer than two or three points, and that the polytomous attendance scale had limited success in producing theoretically consistent results.
The third essay explores available approaches to model different features of unobserved heterogeneity. Unobserved heterogeneity is popularly modelled using the mixed logit model, so called because it is a mixture of standard conditional logit models. Although the mixed logit model can, in theory, approximate any random utility model with an appropriate mixing distribution, there is little guidance on how to select such a distribution. This essay contributes to suggestions on distribution selection by describing the heterogeneity features which can be captured by established parametric mixing distributions and more recently introduced nonparametric mixing distributions, both of a discrete and continuous nature. We provide empirical illustrations of each feature in turn using simple mixing distributions which focus on the feature at hand. / Ph. D.
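The building block behind all three essays is the conditional logit model; a minimal sketch of its choice probabilities follows, with attribute non-attendance modeled, as a simplification, by zeroing out ignored attributes. The attribute values and coefficients in the usage example are hypothetical.

```python
import math

def logit_probs(utilities):
    # Conditional logit: P(j) = exp(V_j) / sum_k exp(V_k), computed with
    # a max-shift for numerical stability.
    m = max(utilities)
    exps = [math.exp(v - m) for v in utilities]
    s = sum(exps)
    return [e / s for e in exps]

def linear_utility(attrs, betas, attended=None):
    # Linear-in-parameters utility; non-attended attributes contribute 0.
    attended = attended or [True] * len(attrs)
    return sum(b * x for b, x, a in zip(betas, attrs, attended) if a)
```

A mixed logit is then just this conditional logit averaged over draws of `betas` from a mixing distribution, which is where the parametric versus nonparametric choices discussed in the third essay enter.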
|
518 |
A study of the lower moments of order statistics of discrete uniform distributions. Bombara, Elwood L., 08 September 2012 (has links)
Throughout this thesis, we will discuss samples taken with replacement from the discrete uniform population f(x) = 1/N, where x = 1, 2, 3, ..., N. All samples will be of size n except in the case of the median, where the sample size will be 2n + 1, an odd number. / Master of Science
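For draws with replacement from the discrete uniform on {1, ..., N}, the distribution of the r-th order statistic has a binomial-sum CDF, from which the lower moments follow by enumeration. A sketch of the first moment (the implementation is a straightforward illustration, not the thesis's derivation):

```python
from math import comb

def order_stat_cdf(x, r, n, N):
    # P(X_(r) <= x): at least r of the n draws fall at or below x,
    # each draw independently doing so with probability x / N.
    p = x / N
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r, n + 1))

def order_stat_mean(r, n, N):
    # For a {1, ..., N}-valued variable, E[X] = sum_{x=0}^{N-1} P(X > x).
    return sum(1.0 - order_stat_cdf(x, r, n, N) for x in range(N))
```

Higher moments follow the same pattern, e.g. summing (2x + 1) * P(X > x) for the second moment.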
|
519 |
Model Reduction of Nonlinear Fire Dynamics Models. Lattimer, Alan Martin, 28 April 2016 (has links)
Due to the complexity, multi-scale, and multi-physics nature of mathematical models for fires, current numerical models require too much computational effort to be useful in design and real-time decision making, especially when dealing with fires over large domains. To reduce the computational time while retaining the complexity of the domain and physics, our research has focused on several reduced-order modeling techniques. Our contributions are improving wildland fire reduced-order models (ROMs), creating new ROM techniques for nonlinear systems, and preserving optimality when discretizing a continuous-time ROM. Currently, proper orthogonal decomposition (POD) is being used to reduce wildland fire-spread models with limited success. We use a technique known as the discrete empirical interpolation method (DEIM) to address the slowness due to the nonlinearity. We create new methods to reduce nonlinear models, such as the Burgers' equation, that perform better than POD over a wider range of input conditions. Further, these ROMs can often be constructed without needing to capture full-order solutions a priori, which significantly reduces the offline costs associated with creating the ROM. Finally, we investigate methods of time discretization that preserve the optimality conditions, in a certain norm, associated with the input-to-output mapping of a dynamical system. In particular, we show that the Crank-Nicolson method preserves the optimality conditions, but other single-step methods do not. We further clarify the need for these discrete-time ROMs to match at infinity in order to ensure local optimality. / Ph. D.
|
520 |
Discrete Element Method (DEM) Contact Models Applied to Pavement Simulation. Peng, Bo, 20 August 2014 (has links)
Pavement is usually composed of aggregate, asphalt binder, and air voids; rigid pavement is built with hydraulic cement concrete, and reinforced pavement contains steel. With this wide range of materials, different mechanical behaviors need to be defined in a pavement simulation, but so far no research has provided a comprehensive introduction to and comparison of the various contact models. This thesis gives a detailed exploration of the contact models that can potentially be used in DEM pavement simulation; the analysis includes theory, simulation results, and computational time costs, which reveal the fundamental mechanical behaviors of the models and can serve as a reference for researchers choosing a proper contact model. A new contact model, the power-law viscoelastic contact model, is implemented in the software PFC 3D and numerically verified. Unlike existing linear viscoelastic contact models, the approach presented in this thesis provides a detailed treatment of the contact model for thin-film power-law creeping materials based on C.Y Chueng's work. This model is aimed at simulating the thin asphalt film between two aggregates, which is a common structure in asphalt mixtures. Experiments with specimens containing a thin asphalt film between two aggregates are employed to validate the new contact model. / Master of Science
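As a baseline against which such viscoelastic contact models are compared, the simplest DEM normal-contact law is a spring-dashpot (Kelvin-Voigt) pair. The sketch below is the generic textbook model, not the thesis's power-law formulation, and the stiffness and damping values in the test are arbitrary.

```python
def contact_force(overlap, overlap_rate, k, c):
    # Kelvin-Voigt normal contact: elastic spring (k) in parallel with a
    # viscous dashpot (c), acting only while the particles overlap, and
    # clipped at zero so the contact never pulls the particles together.
    if overlap <= 0.0:
        return 0.0
    return max(0.0, k * overlap + c * overlap_rate)
```

A power-law viscoelastic model replaces the constant dashpot with a rate-dependent creep term, which is the behavior the thesis implements for the thin asphalt film.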
|