101. Analyzing Metacommunity Models with Statistical Variance Partitioning: A Review and Meta-Analysis. Lamb, Kevin Vieira, 03 August 2020.
The relative importance of deterministic processes versus chance is one of the most important questions in science. We analyze the success of variance partitioning (VP) methods used to explain variation in β-diversity by partitioning it into environmental, spatial, and spatially structured environmental components. We test three hypotheses: 1) that the number of environmental descriptors in a study is positively correlated with the percentage of β-diversity explained by the environment, and that the environment explains more variation in β-diversity than spatial or shared factors in VP analyses; 2) that increasing the complexity of environmental descriptors helps account for more of the total variation in β-diversity; and 3) that studies based on functional groups account for more of the total variation in β-diversity than studies based on taxonomic data. Results show that unexplained β-diversity averages 65.6%. We found no evidence that the number of environmental descriptors, the complexity of environmental descriptors, or the use of functional diversity allowed researchers to account for more variation in β-diversity. We review the characteristics of studies that accounted for a large percentage of variation in β-diversity, as well as explanations for studies that accounted for little.
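As a hedged illustration of what a VP analysis computes (not the meta-analysis's own code), the sketch below partitions the variance of a response into pure environmental [a], shared [b], and pure spatial [c] fractions using adjusted R² from partial regressions; the data and variable names are invented, assuming numpy and scikit-learn.

```python
# Sketch of the partial-regression logic behind variance partitioning (VP).
# Hypothetical data: rows are sites; `env` holds environmental descriptors,
# `space` holds spatial predictors (e.g. eigenvector maps), `y` is a
# community dissimilarity axis.
import numpy as np
from sklearn.linear_model import LinearRegression

def adj_r2(X, y):
    """Adjusted R^2 of an OLS fit, penalizing the number of predictors."""
    n, p = X.shape
    r2 = LinearRegression().fit(X, y).score(X, y)
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

rng = np.random.default_rng(0)
env = rng.normal(size=(100, 4))      # environmental descriptors
space = rng.normal(size=(100, 3))    # spatial predictors
y = env @ [0.5, 0.3, 0, 0] + 0.2 * space[:, 0] + rng.normal(scale=1.0, size=100)

r2_env = adj_r2(env, y)                        # [a + b]
r2_spa = adj_r2(space, y)                      # [b + c]
r2_all = adj_r2(np.hstack([env, space]), y)    # [a + b + c]

a = r2_all - r2_spa           # pure environmental fraction
c = r2_all - r2_env           # pure spatial fraction
b = r2_env + r2_spa - r2_all  # spatially structured environment (overlap)
unexplained = 1 - r2_all      # the ~65.6% residual reported in the meta-analysis
print(f"env={a:.2f} shared={b:.2f} space={c:.2f} residual={unexplained:.2f}")
```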
102. Systém řízení zásob / Inventory Management System. Reif, Luděk, January 2015.
This thesis deals with the characteristics and implementation of inventory management systems. The theoretical part covers particular model families, both deterministic and stochastic. Several worked examples are used for the implementation and are then algorithmized. The deterministic models reduce to simple closed-form equations, while the stochastic models introduce an element of randomness simulated with a pseudo-random number generator. The result is a complete computer program in which all of the models discussed here can be evaluated with user-supplied input values; the program produces both numerical values and graphical output.
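As a hedged sketch of the two model families the thesis describes (not the thesis's program), the following contrasts a deterministic economic-order-quantity calculation with a stochastic policy driven by a pseudo-random number generator; all input values are illustrative.

```python
import math
import random

def eoq(annual_demand, order_cost, holding_cost):
    """Classic deterministic EOQ: Q* = sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def simulate_stochastic(days, mean_demand, reorder_point, order_qty, seed=42):
    """(s, Q) reorder policy with random daily demand; counts stockout days."""
    rng = random.Random(seed)                    # pseudo-random number generator
    stock, stockouts = order_qty, 0
    for _ in range(days):
        stock -= rng.randint(0, 2 * mean_demand)  # uniform demand, mean = mean_demand
        if stock < 0:
            stockouts += 1
            stock = 0
        if stock <= reorder_point:
            stock += order_qty                   # instant replenishment, for simplicity
    return stockouts

print(f"EOQ: {eoq(annual_demand=1200, order_cost=50, holding_cost=2):.0f} units")
print(f"stockout days: {simulate_stochastic(365, mean_demand=4, reorder_point=20, order_qty=245)}")
```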
103. Risk management of groundwater drawdown in settlement sensitive areas. Tisell, Victoria, January 2013.
A new method for estimating the risks in settlement calculations is presented. By quantifying the uncertainties of settlement calculations, it is possible to perform risk analysis and to compare the costs of risk-reducing measures with the benefits they would yield. The settlement estimates are made by combining uncertainties in soil data, groundwater drawdown, and the settlement calculations themselves. This master's thesis describes how the groundwater drawdown is estimated using a numerical model. The model reflects the groundwater decrease around a drainage well with respect to estimated groundwater recharge, which depends on geology and precipitation. Four model parameters are connected to soil properties and precipitation: hydraulic conductivity of clay, hydraulic conductivity of till, hydraulic conductivity of sand, and mean annual net precipitation. Drawdown is estimated with both a deterministic and a probabilistic model, where the probabilistic model uses stochastic parameter values in a Monte Carlo simulation. The risks concerning settlements emerge when the groundwater model is integrated with a soil model and a settlement model; once integrated, the new model estimates risks related to all three. Results of the groundwater drawdown and ground settlement estimates are presented spatially in a sensitivity and risk analysis. By finding and comparing the parameters that most influence settlement, project decision makers will have an easier task deciding where further measures should be focused.
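To illustrate the deterministic/probabilistic split described above, here is a minimal sketch in which a Thiem-type well equation stands in for the thesis's numerical model; the parameter values and lognormal spread are assumptions made for the example only.

```python
import numpy as np

def drawdown(K_sand, Q=0.002, b=10.0, r=50.0, R=500.0):
    """Steady-state Thiem drawdown [m] at radius r from a pumping well."""
    T = K_sand * b                        # transmissivity of the sand aquifer
    return Q / (2 * np.pi * T) * np.log(R / r)

# Deterministic mode: single best-estimate hydraulic conductivity.
print(f"deterministic: {drawdown(K_sand=1e-4):.2f} m")

# Probabilistic mode: Monte Carlo over the uncertain parameter, mirroring
# the stochastic treatment of K_clay, K_till, K_sand and net precipitation.
rng = np.random.default_rng(1)
K_samples = rng.lognormal(mean=np.log(1e-4), sigma=0.5, size=10_000)
s = drawdown(K_samples)
print(f"median: {np.median(s):.2f} m, 95th percentile: {np.percentile(s, 95):.2f} m")
```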
104. Deterministic Quadrature Formulae for the Black–Scholes Model. Saadat, Sajedeh; Kudljakov, Timo, January 2021.
Many numerical methods exist for solving systems of stochastic differential equations. We choose the method of deterministic quadrature formulae proposed by Müller-Gronbach and Yaroslavtseva in 2016. The idea is to apply a simplified version of cubature on Wiener space. We explain the method and check how well it works in the simplest case, the classical Black–Scholes model.
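The one-dimensional sketch below conveys the deterministic-quadrature idea in the Black–Scholes setting by replacing Monte Carlo sampling with fixed Gauss–Hermite nodes; it is not the cubature-on-Wiener-space construction of the cited paper, and it assumes numpy and scipy are available.

```python
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

# Gauss-Hermite rule: E[f(Z)] ~ sum_i w_i f(x_i * sqrt(2)) / sqrt(pi), Z ~ N(0,1).
x, w = np.polynomial.hermite.hermgauss(32)
z = x * np.sqrt(2.0)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
price_quad = np.exp(-r * T) * np.sum(w * np.maximum(ST - K, 0.0)) / np.sqrt(np.pi)

# Closed-form Black-Scholes value for comparison.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
price_exact = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(f"quadrature: {price_quad:.4f}  closed form: {price_exact:.4f}")
```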
105. Difuze částic z tokamaku vlivem stochastizace magnetických siločar / Diffusion of Particles from Tokamak by Stochastization of Magnetic Field Lines. Cahyna, Pavel, January 2010.
The thesis summarizes the current state of research on thermonuclear fusion with magnetic confinement and describes the possible role of stochastization of magnetic field lines and magnetic perturbations in solving some of the problems encountered on the road to the exploitation of fusion. It presents a theoretical introduction to deterministic chaos and explains the connection of this theory to magnetic perturbations in tokamaks. The results are presented mainly in the form of publications in journals and conference proceedings. Among them are: a comparison of the chaotic diffusion of particles and of field lines, where significant differences were found; the application of chaotic particle diffusion to the problem of runaway electrons originating in disruptions, where our simulations contributed to explaining experimental results from the JET tokamak; the calculation of perturbation spectra for the COMPASS tokamak, done in preparation for upcoming experiments; and the modelling of the screening of perturbations by the plasma, where observations of divertor footprints emerge as a promising method for detecting the screening.
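The Chirikov standard map is a textbook model of field-line stochastization under a resonant perturbation and illustrates the chaotic diffusion discussed above; this sketch (not the thesis's code) compares the empirical growth of the action variance against the quasilinear estimate for a strongly chaotic kick strength.

```python
import numpy as np

def standard_map(theta, p, k, n_steps):
    """Iterate the standard map; returns the final action value."""
    for _ in range(n_steps):
        p = p + k * np.sin(theta)           # kick from the resonant perturbation
        theta = (theta + p) % (2 * np.pi)   # free streaming along the field line
    return p

rng = np.random.default_rng(7)
k, n_steps, n_lines = 5.0, 1000, 200        # k well above the chaos threshold (~0.97)
p_final = np.array([
    standard_map(theta0, 0.0, k, n_steps)
    for theta0 in rng.uniform(0, 2 * np.pi, n_lines)
])
# Quasilinear estimate for strong chaos: <p^2> grows like (k^2 / 2) * t.
print(f"empirical <p^2>/t = {np.mean(p_final**2) / n_steps:.2f}, quasilinear k^2/2 = {k**2 / 2:.2f}")
```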
106. DYNAMIC TASK OFFLOADING FOR LATENCY MINIMIZATION IN IOT EDGE-CLOUD ENVIRONMENTS. Haimin Ku (12457464), 26 April 2022.
With the exponential growth and diversity of Internet of Things (IoT) devices, computation-intensive and delay-sensitive applications, such as object detection, smart homes, and smart grids, are emerging constantly. The paradigm of cloud computing can offload computation-heavy tasks from IoT devices to a cloud server, breaking through the limitations of IoT devices with more powerful resources. However, the cloud computing architecture incurs high latency, which is unsuitable for IoT devices with limited computing and storage capabilities. Edge computing improves this situation by deploying an edge device near the IoT devices that provides computing resources with low latency compared to cloud computing. Nevertheless, the edge server may not be able to complete all the offloaded tasks in time when requests flood in; in such cases, it can offload some of the requested tasks to a cloud server to further speed up processing with more powerful cloud resources. In this paper, we aim to minimize the average completion time of tasks in an IoT edge-cloud environment by optimizing the task offloading ratio from edge to cloud, based on Deep Deterministic Policy Gradient (DDPG), a type of Reinforcement Learning (RL) approach. We propose a dynamic task offloading decision mechanism, deployed on the edge, that determines how much of a task's computation should be processed in the cloud server, considering multiple factors. Simulation results demonstrate that our mechanism improves the overall completion time of tasks compared to naïve approaches.
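As a hedged sketch of the quantity such an agent optimizes (not the paper's implementation), the following models a task's completion time as a function of the continuous offload ratio a DDPG actor would output; all rates and latencies are invented for illustration.

```python
import numpy as np

def completion_time(ratio, task_cycles=5e9, edge_hz=2e9, cloud_hz=10e9,
                    uplink_bps=100e6, task_bits=8e6, rtt_s=0.02):
    """Edge and cloud parts run in parallel; total = max of the two paths."""
    t_edge = (1 - ratio) * task_cycles / edge_hz
    t_cloud = rtt_s + ratio * task_bits / uplink_bps + ratio * task_cycles / cloud_hz
    return max(t_edge, t_cloud)

# A DDPG actor would output `ratio` in [0, 1]; here we simply scan it to show
# the trade-off the policy must learn (transfer cost vs. faster cloud CPU).
ratios = np.linspace(0, 1, 101)
times = [completion_time(r) for r in ratios]
best = ratios[int(np.argmin(times))]
print(f"best offload ratio ~ {best:.2f}, completion time {min(times) * 1000:.0f} ms")
```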
107. Conceptualization and Development of a Dam Break Life-Loss Estimation Model. Aboelata, Maged A., 01 May 2005.
Catastrophic events such as dam failures or severe floods are considered to be of low probability, although their consequences can be extremely high and might include loss of life. Earlier studies have linked circumstances surrounding historical dam failure events to actual loss of life and produced formulations using statistical analysis of these events. Shortcomings of these methods include the inability to adjust life-loss estimates based on the type of dam failure, global averaging of the population at risk, and ignoring the dynamics of the evacuation process.
The main objective of this research is to develop a practical and improved life-loss estimation approach for use in dam safety risk assessment and emergency planning. The methodology is specifically formulated to overcome the limitations of previous, purely empirical approaches. It takes into account the spatial and temporal distribution of flood water depth and velocity, the fate of buildings, the simulation of warning diffusion, and the tracking of people's movement from their original locations toward safe shelters.
The resulting model, called LIFESim, is designed to serve multiple functions. First, it can be used in a Deterministic Mode with best-estimate inputs to obtain point estimates, or to test different evacuation policies, different times of day, and different dam breach flooding scenarios. Second, the Uncertainty Mode represents input and parameter uncertainties to provide estimates of life loss, and of other variables relating to warning and evacuation effectiveness, as probability distributions. These distributions of life loss can be combined with estimates of the uncertainties in other risk assessment inputs to obtain estimates of the uncertainties in risk assessment results, including evaluations against tolerable risk guidelines.
Two communities were used to demonstrate the model's performance. Deterministic Mode results display the various possible model outputs. Sensitivity analysis for the Deterministic Mode shows that the warning issuance time is the dominant factor in the estimated life loss; however, other factors, such as the time of day, the effectiveness of the warning system, and shelter location, also play an important role. Uncertainty Mode results demonstrate the effect of uncertainties in model parameters and inputs on the model results.
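A toy sketch of the two modes follows; the life-loss relation and all distributions below are invented for illustration and are not LIFESim's.

```python
import numpy as np

def life_loss(pop_at_risk, warning_time_min, fatality_rate):
    """Fewer people remain exposed the longer the warning lead time."""
    evacuated = min(1.0, warning_time_min / 90.0)   # toy evacuation curve
    return pop_at_risk * (1 - evacuated) * fatality_rate

# Deterministic Mode: single best-estimate inputs -> point estimate.
print(f"point estimate: {life_loss(10_000, 45, 0.05):.0f} lives")

# Uncertainty Mode: sample the uncertain inputs -> distribution of loss.
rng = np.random.default_rng(3)
warning = rng.triangular(15, 45, 75, size=50_000)   # warning issuance time [min]
rate = rng.uniform(0.01, 0.10, size=50_000)         # flood-zone fatality rate
losses = np.array([life_loss(10_000, w, f) for w, f in zip(warning, rate)])
print(f"mean {losses.mean():.0f}, 5th-95th pct: {np.percentile(losses, [5, 95]).round()}")
```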
108. Investigating and Improving Bridge Management System Methodologies Under Uncertainty. Chang, Minwoo, 01 December 2016.
This dissertation presents a novel procedure for selecting explanatory variables for deterioration model development, free of human bias, using National Bridge Inventory (NBI) data. From NBI information, including geometric data and climate information, candidate explanatory variables are converted into normalized numeric values and analyzed prior to the development of deterministic or stochastic deterioration models. The prevailing approach to explanatory variable selection is to use expert opinion solicited from experienced engineers, which may introduce human bias into the deterioration modeling process. To compensate for this potential bias, a framework combining Least Absolute Shrinkage and Selection Operator (LASSO) penalized regression with covariance analysis is proposed. Additionally, cross-validation analysis and the solution path are used as the standard for selecting the minimum number of explanatory variables.
The proposed method is demonstrated by creating deterministic deterioration models for the decks, superstructures, and substructures of Wyoming bridges and comparing them to models built on expert-selected explanatory variables. The comparison shows a significant decrease in error, based on the L2 relative error norm, for the presented framework.
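A minimal sketch of the selection framework, assuming scikit-learn and hypothetical NBI-derived feature names:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
features = ["age", "adt", "span_len", "freeze_index", "precip", "skew"]
X = rng.normal(size=(500, len(features)))          # normalized NBI descriptors
y = 9 - 0.08 * X[:, 0] - 0.03 * X[:, 3] + rng.normal(scale=0.3, size=500)

# LassoCV walks the regularization (solution) path and cross-validates it,
# selecting variables without any expert input.
model = LassoCV(cv=5).fit(X, y)
selected = [f for f, c in zip(features, model.coef_) if abs(c) > 1e-6]
print(f"alpha = {model.alpha_:.4f}, selected explanatory variables: {selected}")
```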
The final chapter presents a new method for developing stochastic deterioration models using logistic regression. The relative importance among explanatory variables is used to develop a classification tree for Wyoming bridges. The bridges in a subset share several explanatory variables, so the deterioration models can be more representative and accurate than those based on a single explanatory variable. Logistic regression introduces the stochastic contribution into the deterioration models. To avoid missing-data problems, a binary outcome for the condition rating, either remaining the same or decreasing, is used in the logistic regression. The probability of a change in a bridge's condition rating is obtained, and the averages for identical condition ratings are used to create a transition probability matrix for each age group.
The deterioration model based on a Markov chain is developed for Wyoming bridges and compared with the previous model based on a percentage-prediction and optimization approach. The prediction error is analyzed, demonstrating the strong performance of the proposed method and its suitability for relatively small data samples.
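A hedged sketch of the stochastic model's core step, with invented inspection data and age groups: a logistic regression on "rating dropped vs. stayed" yields, per age group, the probability that fills the Markov transition matrix.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
age = rng.uniform(0, 60, size=2000)
# Binary outcome per inspection cycle: 1 = condition rating decreased (toy data).
dropped = (rng.random(2000) < 0.02 + 0.004 * age).astype(int)

clf = LogisticRegression().fit(age.reshape(-1, 1), dropped)

def transition_matrix(p_drop, n_states=5):
    """Row i: stay at rating i with prob 1 - p, drop one rating with prob p."""
    P = np.eye(n_states) * (1 - p_drop)
    P += np.diag(np.full(n_states - 1, p_drop), k=1)
    P[-1, -1] = 1.0                       # lowest rating is absorbing
    return P

for lo, hi in [(0, 20), (20, 40), (40, 60)]:          # age groups
    mask = (age >= lo) & (age < hi)
    p = clf.predict_proba(age[mask].reshape(-1, 1))[:, 1].mean()
    print(f"ages {lo}-{hi}: mean P(drop) = {p:.3f}")  # feeds transition_matrix(p)
```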
109. Drivers of plant diversity and distribution in a northern hardwood forest - interacting effects of biotic and abiotic factors. Ricart, Raleigh Dean, 03 July 2019.
No description available.
110. PERFORMANCE AND ENDURANCE CONTROL IN EMERGING STORAGE TECHNOLOGIES. Roy, Tanaya, 0000-0003-4545-9299, January 2021.
The current diverse and wide range of computing is moving toward the cloud and demands high performance, with low latency and high throughput. Facebook reported that 3.3 billion people monthly and 2.6 billion people daily use their data centers over the network. Many emerging user-facing applications require strict control over the tail of the storage latency to provide a quality user experience. The low-latency requirement is driving the ongoing replacement of hard drives (HDDs) by solid-state drives (SSDs) in the enterprise, enabling much higher performance and lower end-to-end storage latencies. It becomes more challenging to ensure low latency while maintaining the device's endurance ratings. We address this challenge in the following ways: 1. Enhance the overall storage system's performance and maintain SSD endurance using emerging non-volatile memory (ENVM) technology. 2. Implement deterministic latency in the storage path for latency-sensitive applications. 3. Provide low-latency and differentiated services when write-intensive workloads are present in a shared environment. We have proposed performance- and endurance-centric mechanisms to evaluate the tradeoffs between performance and endurance. In the first approach, our goal is to achieve low storage latency and a long SSD lifetime simultaneously, even for a write-heavy workload; incorporating a significantly smaller amount of ENVM alongside the SSD as a cache helps to achieve this goal.

SSDs using the NVMe (Non-Volatile Memory Express) interface can achieve low latency, as the interface provides several advanced features. The second approach has explored such features to control the storage tail latency in a distributed environment. The "predictable latency mode (PLM)" feature helps to achieve deterministic storage latency. SSDs need to perform many background management operations to deal with the underlying flash technology's traits, the most time-consuming being garbage collection and wear leveling. The latency requirement of latency-sensitive applications is violated when I/O requests fall behind such management activities. PLM leverages SSD controllers to perform the background operations during a window called the "non-deterministic window (NDWin)", whereas during the "deterministic window (DTWin)" applications experience no such operations. We have extended this feature to the distributed environment and shown how it helps achieve low storage latency when the proposed "PLM coordinator (PLMC)" is incorporated. In a shared environment, write-intensive workloads produce latency peaks for read I/O; moreover, differentiated services are required when multiple QoS classes are present in the workload mixture. We have extended the PLM concept to hybrid storage to realize deterministic latency for tightly tail-controlled applications and to assure differentiated services among applications of multiple QoS classes. Since nearly all storage access in a data center is over the network, an end-to-end path consists of three components: the host component, the network component, and the storage component. For latency-sensitive applications, the overall tail latency needs to consider all these components. In a NAS (Network Attached Storage) architecture, it is worth studying QoS-class-aware services at the different components to provide overall low request-response latency, which helps future research address the gaps that have not yet been considered. / Computer and Information Science
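As a hedged sketch of the PLM coordination idea (the window lengths and routing API below are assumptions, not the NVMe specification or the dissertation's PLMC): staggering the replicas' non-deterministic windows guarantees a deterministic-window replica is always available for latency-critical reads.

```python
DTWIN, NDWIN = 80, 20          # time units per window phase (assumed values)
CYCLE = DTWIN + NDWIN

def in_dtwin(replica_id, t, n_replicas=3):
    """Each replica's cycle is phase-shifted so NDWins never all overlap."""
    phase = (t + replica_id * CYCLE // n_replicas) % CYCLE
    return phase < DTWIN

def route_read(t, n_replicas=3):
    """PLM coordinator: send the read to a replica inside its DTWin."""
    for rid in range(n_replicas):
        if in_dtwin(rid, t, n_replicas):
            return rid
    return None  # unreachable while NDWIN < CYCLE / n_replicas

# With NDWIN=20 < CYCLE/3 ~ 33, some replica is always deterministic.
assert all(route_read(t) is not None for t in range(1000))
print("a DTWin replica is available at every instant")
```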