1 |
Enhanced IEEE 802.11p-Based MAC Protocols for Vehicular Ad hoc Networks. Nasrallah, Yamen. January 2017.
The Intelligent Transportation System (ITS) is a cooperative system that relies on reliable and robust communication schemes among vehicles and between vehicles and their surroundings. The main objective of the ITS is to ensure the safety of vehicle drivers and pedestrians. It provides an efficient and reliable transportation system that enhances traffic management, reduces congestion time, enables smooth traffic re-routing, and avoids economic losses.
An essential part of the ITS is the Vehicular Ad hoc Network (VANET). VANETs enable the setup of Vehicle-to-Vehicle (V2V) as well as Vehicle-to-Infrastructure (V2I) communication platforms: the two key components of the ITS. The de facto standard used in wireless V2V and V2I communication applications is Dedicated Short Range Communication (DSRC). The protocol that defines the specifications for the Medium Access Control (MAC) layer and the physical layer in the DSRC is the IEEE 802.11p protocol. The IEEE 802.11p protocol and its Enhanced Distributed Channel Access (EDCA) mechanism are the main focus of this thesis. Our main objective is to develop new IEEE 802.11p-based protocols for V2V and V2I communication systems to improve the performance of safety-related applications. These applications are of paramount importance in the ITS because their goal is to decrease the rate of vehicle collisions, and hence reduce the enormous costs associated with them. In fact, a large percentage of vehicle collisions can be easily avoided through the exchange of relevant information between vehicles and the Road Side Units (RSUs) installed along roads.
In this thesis, we propose various enhancements to the IEEE 802.11p protocol to improve its performance by lowering the average end-to-end delay and increasing the average network throughput. We introduce multiple adaptive algorithms to improve QoS support across all the Access Categories (ACs) in IEEE 802.11p. We propose two adaptive backoff algorithms and two algorithms that adaptively change the values of the Arbitration Inter-Frame Space (AIFS). We then extend our model to large-scale vehicular networks; in this context, a multi-layer cluster-based architecture is adopted, and two new distributed time synchronization mechanisms are developed.
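The abstract does not spell out the backoff algorithms themselves; purely as an illustration of the adaptive-backoff idea, the sketch below scales an access category's contention window with a locally observed collision rate instead of blindly doubling it. All class names, constants, and update rules are assumptions made for illustration, not the thesis's actual schemes.

```python
# Hypothetical sketch of an adaptive backoff adjustment for an EDCA access
# category: instead of blindly doubling the contention window (CW) on every
# collision, scale it with a locally observed collision rate. All constants
# and update rules are illustrative assumptions, not the thesis's algorithms.

class AdaptiveBackoff:
    def __init__(self, cw_min=15, cw_max=1023):
        self.cw_min, self.cw_max = cw_min, cw_max
        self.cw = cw_min
        self.sent = 0          # transmission attempts observed
        self.collided = 0      # attempts that ended in collision

    def collision_rate(self):
        return self.collided / self.sent if self.sent else 0.0

    def on_success(self):
        self.sent += 1
        # Shrink CW gently when the channel appears lightly loaded.
        factor = 1 - 0.5 * (1 - self.collision_rate())
        self.cw = max(self.cw_min, int(self.cw * factor))

    def on_collision(self):
        self.sent += 1
        self.collided += 1
        # Grow CW in proportion to observed congestion: the growth factor
        # lies in [1, 2], versus the fixed 2 of binary exponential backoff.
        growth = 1 + self.collision_rate()
        self.cw = min(self.cw_max, int(self.cw * growth))

bo = AdaptiveBackoff()
bo.on_collision()
bo.on_success()
print(bo.cw, round(bo.collision_rate(), 2))   # e.g. 22 0.5
```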
|
2 |
Modeling, Analysis and Design of Wireless Sensor Network Protocols. Park, Pangun. January 2011.
Wireless sensor networks (WSNs) have a tremendous potential to improve the efficiency of many systems, for instance, in building automation and process control. Unfortunately, the current technology does not offer guaranteed energy efficiency and reliability for closed-loop stability. The main contribution of this thesis is to provide a modeling, analysis, and design framework for WSN protocols used in control applications. The protocols are designed to minimize the energy consumption of the network while meeting reliability and delay requirements from the application layer. The design relies on the analytical modeling of the protocol behavior.

First, modeling of the slotted random access scheme of the IEEE 802.15.4 medium access control (MAC) is investigated. For this protocol, which is commonly employed in WSN applications, a Markov chain model is used to derive analytical expressions for reliability, delay, and energy consumption. Using this model, an adaptive IEEE 802.15.4 MAC protocol is proposed. The protocol design is based on a constrained optimization problem in which the objective function is the energy consumption of the network, subject to constraints on reliability and packet delay. The protocol is implemented and experimentally evaluated on a test-bed. Experimental results show that the proposed algorithm satisfies reliability and delay requirements while ensuring a longer lifetime of the network under both stationary and transient network conditions.

Second, modeling and analysis of a hybrid IEEE 802.15.4 MAC, combining the advantages of random access with contention and of time division multiple access (TDMA) without contention, are presented. A Markov chain is used to model the stochastic behavior of random access and the deterministic behavior of TDMA. The model is validated by both theoretical analysis and Monte Carlo simulations. Using this new model, the network performance in terms of reliability, average packet delay, average queueing delay, and throughput is evaluated. It is shown that the probability density function of the number of received packets per superframe follows a Poisson distribution. Furthermore, it is determined under which conditions the time slot allocation mechanism of the IEEE 802.15.4 MAC is stable.

Third, a new protocol for control applications, denoted Breath, is proposed, in which sensor nodes transmit information via multi-hop routing to a sink node. The protocol is based on the modeling of randomized routing, MAC, and duty-cycling. Analytical and experimental results show that Breath meets reliability and delay requirements while exhibiting a nearly uniform distribution of the work load. The Breath protocol has been implemented and experimentally evaluated on a test-bed.

Finally, it is shown how the proposed WSN protocols can be used in control applications. A co-design between communication and control application layers is studied by considering a constrained optimization problem for which the objective function is the energy consumption of the network and the constraints are the reliability and delay derived from the control cost. It is shown that the optimal traffic load is similar whether the communication throughput or the control cost is optimized.
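As a generic companion to the Markov-chain-based analyses summarized above (not the thesis's actual chain, which is far larger), the following sketch shows how the stationary distribution of a small discrete-time Markov chain is computed, and how per-state power figures then yield an average-power estimate. All states, transition probabilities, and power values are illustrative assumptions.

```python
import numpy as np

# Toy three-state chain (idle, backoff, transmit) standing in for the much
# richer IEEE 802.15.4 CSMA/CA chain. Transition probabilities are invented.
P = np.array([
    [0.6, 0.4, 0.0],   # idle     -> idle / backoff / transmit
    [0.2, 0.5, 0.3],   # backoff  -> ...
    [0.7, 0.3, 0.0],   # transmit -> ...
])

# The stationary distribution pi solves pi P = pi with pi summing to 1:
# take the left eigenvector of P associated with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print(dict(zip(["idle", "backoff", "transmit"], pi.round(4))))

# With per-state power figures (illustrative, in mW), the long-run average
# power consumption follows directly from the stationary distribution.
power = np.array([0.05, 0.5, 30.0])
print("avg power (mW):", float(pi @ power))
```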
|
3 |
Modeling and Analysis of Two-Part Type Manufacturing Systems. Jang, Young Jae; Gershwin, Stanley B. 01 1900.
This paper presents a model and analysis of a synchronous tandem flow line that produces different part types on unreliable machines. The machines operate according to a static priority rule, operating on the highest-priority part whenever possible, and operating on lower-priority parts only when unable to produce those with higher priorities. We develop a new decomposition method to analyze the behavior of the manufacturing system by decomposing the long production line into small, analytically tractable components. As a first step in modeling a production line with more than one part type, we restrict ourselves to the case of two part types. Detailed modeling and derivations are presented for a small two-part-type production line that consists of two processing machines and two demand machines; a generalized longer flow line is then analyzed. Furthermore, estimates for performance measures, such as average buffer levels and production rates, are presented and compared to extensive discrete-event simulations. The quantitative behavior of the two-part-type processing line under different demand scenarios is also provided. / Singapore-MIT Alliance (SMA)
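The decomposition method itself is too involved to reproduce here, but the kind of discrete-event baseline that such analytical estimates are checked against can be sketched compactly: a synchronous two-machine, one-buffer line with geometric failures and repairs, from which throughput and average buffer level are estimated. All parameters are illustrative, and the sketch handles a single part type rather than the paper's two.

```python
import random

def simulate_line(p_fail=0.05, p_repair=0.3, buf_cap=10, steps=200_000, seed=1):
    """Synchronous two-machine, one-buffer line with unreliable machines.
    Returns (throughput, average buffer level). Illustrative toy model,
    not the paper's two-part-type decomposition."""
    rng = random.Random(seed)
    up = [True, True]              # machine up/down states
    buf, produced, buf_sum = 0, 0, 0
    for _ in range(steps):
        for m in range(2):         # geometric failure/repair per machine
            if up[m] and rng.random() < p_fail:
                up[m] = False
            elif not up[m] and rng.random() < p_repair:
                up[m] = True
        # M1 fills the buffer unless full (blocking); M2 drains it unless
        # empty (starvation) and counts finished parts.
        if up[0] and buf < buf_cap:
            buf += 1
        if up[1] and buf > 0:
            buf -= 1
            produced += 1
        buf_sum += buf
    return produced / steps, buf_sum / steps

# Prints (throughput, mean buffer level); throughput lands roughly around
# 0.8 with these illustrative failure/repair rates.
print(simulate_line())
```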
|
4 |
Improved cement quality and grinding efficiency by means of closed mill circuit modeling. Mejeoumov, Gleb Gennadievich. 15 May 2009.
Grinding of clinker is the last and most energy-consuming stage of the cement manufacturing process, drawing on average 40% of the total energy required to produce one ton of cement. During this stage, the clinker particles are substantially reduced in size to generate a certain level of fineness, as fineness has a direct influence on such performance characteristics of the final product as rate of hydration, water demand, strength development, and others. The grinding objectives tying together the energy and fineness requirements were formulated based on a review of the state of the art of clinker grinding and numerical simulation employing the Markov chain theory.

The literature survey revealed that not only the specific surface of the final product, but also the shape of its particle size distribution (PSD), is responsible for the cement performance characteristics. While it is feasible to engineer the desired PSD in the laboratory, process-specific recommendations on how to generate the desired PSD in the industrial mill are not available.

Based on a population balance principle and a stochastic representation of particle movement within the grinding system, the Markov chain model for the circuit consisting of a tube ball mill and a high efficiency separator was introduced through the matrices of grinding and classification. The grinding matrix was calculated using the selection and breakage functions, whereas the classification matrix was defined from the Tromp curve of the separator. The results of field experiments carried out at a pilot cement plant were used to identify the model's parameters. The retrospective process data pertaining to the operation of the pilot grinding circuit were employed to validate the model and define the process constraints.
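In matrix form, one pass of such a closed circuit amounts to grinding the combined fresh and recycled feed and then splitting the mill output between recycle and product according to the classification matrix. The sketch below iterates this loop to a steady state; the grinding matrix, Tromp-like recycle probabilities, and size classes are made-up stand-ins for the calibrated values described above.

```python
import numpy as np

# Toy closed-circuit iteration: the feed vector holds mass in three size
# classes (coarse, medium, fine). G is a column-stochastic grinding matrix
# in the spirit of selection/breakage functions; the classifier returns a
# Tromp-like fraction of each class to the mill. All numbers are invented.
G = np.array([
    [0.5, 0.0, 0.0],   # half of coarse survives a pass...
    [0.3, 0.7, 0.0],   # ...the rest breaks into medium and fine
    [0.2, 0.3, 1.0],
])
reject = np.array([0.9, 0.4, 0.05])  # recycle probability per size class

fresh = np.array([1.0, 0.0, 0.0])    # fresh feed, all coarse
recycle = np.zeros(3)
for _ in range(50):                  # iterate the circuit to steady state
    mill_out = G @ (fresh + recycle)
    recycle = reject * mill_out      # classifier rejects return to the mill
    product = (1 - reject) * mill_out
print("steady-state product PSD:", product.round(3))
```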
Through numerical simulation, the relationships between the controlled parameters (fresh feed rate; separator cut size) and the observed parameters (fineness characteristics of the cement; production rate; specific energy consumption) of the circuit were defined. The analysis of the simulation results allowed the formulation of process control procedures with the objectives of decreasing the specific energy consumption of the mill, maintaining the targeted specific surface area of the final product, and governing the shape of its PSD.
|
5 |
ARAVQ for discretization of radar data: An experimental study on real world sensor data. Larsson, Daniel. January 2015.
The aim of this work was to investigate whether interesting patterns could be found in time-series radar data discretized into symbolic representations by the ARAVQ algorithm, and whether the ARAVQ thus might be suitable for use in the radar domain. An experimental study was performed in which the ARAVQ was used to create symbolic representations of data sets of radar data. Two experiments were carried out that used a Markov model to calculate probabilities used for discovering potentially interesting patterns. Some of the most interesting patterns were then investigated further. Results showed that the ARAVQ was able to create accurate representations for several time series and that it was possible to discover patterns that were interesting and represented higher-level concepts. However, the results also showed that the ARAVQ was not able to create accurate representations for some of the time series.
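The report's exact pattern-scoring procedure is not given above; a minimal sketch of the general idea, assuming a first-order Markov model over the ARAVQ symbol stream, is to estimate transition probabilities and flag symbol pairs with unexpectedly low probability. The symbol stream and threshold below are invented for illustration.

```python
from collections import Counter

def transition_probs(symbols):
    """First-order Markov transition probabilities from a symbol sequence."""
    pair_counts = Counter(zip(symbols, symbols[1:]))
    state_counts = Counter(symbols[:-1])
    return {(a, b): c / state_counts[a] for (a, b), c in pair_counts.items()}

# Invented ARAVQ-style symbol stream; 'X' stands in for an anomalous symbol.
stream = list("AAABBBCCCAAABBBCCCAAABBXCC")
probs = transition_probs(stream)

# Flag rare transitions as potentially interesting patterns.
rare = {t: round(p, 3) for t, p in probs.items() if p < 0.15}
print("rare transitions:", rare)   # e.g. {('B', 'X'): 0.125}
```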
|
6 |
A review of two financial market models: the Black–Scholes–Merton and the Continuous-time Markov chain models. Ayana, Haimanot; Al-Swej, Sarah. January 2021.
The objective of this thesis is to review two popular mathematical models of the financial derivatives market: the classical Black–Scholes–Merton (BSM) model and the Continuous-time Markov chain (CTMC) model. We study the CTMC model, which is illustrated by the mathematician Ragnar Norberg. The thesis demonstrates how the fundamental results of financial engineering work in both models. To review the two models, we consider the construction of the main financial market components and the approach used for pricing contingent claims. In addition, the steps used in solving the first-order partial differential equations in both models are explained. The main similarity between the models is that the financial market components are the same, the contingent claims are similar, and the driving processes of both models have the Markov property. One of the differences observed is that the driving process is Brownian motion in the BSM model and a Markov chain in the CTMC model. We believe that this thesis can motivate other students and researchers to undertake a deeper and more advanced comparative study of the two models.
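As a small concrete companion to the BSM side of the comparison, the closed-form price of a European call option under Black–Scholes–Merton can be computed directly from the standard formula; the market parameters below are illustrative.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(S0, K, r, sigma, T):
    """European call price under the Black-Scholes-Merton model:
    C = S0 N(d1) - K e^{-rT} N(d2)."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(bs_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0))  # ~10.45
```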
|
7 |
Towards Data-Driven I/O Load Balancing in Extreme-Scale Storage Systems. Banavathi Srinivasa, Sangeetha. 15 June 2017.
Storage systems used for supercomputers and high performance computing (HPC) centers exhibit load imbalance and resource contention. This is mainly due to two factors: the bursty nature of the I/O of scientific applications, and the complex, distributed I/O path without centralized arbitration and control. For example, the extant Lustre parallel storage system, which forms the backend storage for many HPC centers, comprises numerous components, all connected in custom network topologies, and serves the varying demands of a large number of users and applications. Consequently, some storage servers can be more loaded than others, creating bottlenecks and reducing overall application I/O performance. Existing solutions focus on per-application load balancing and thus are not effective, as they lack a global view of the system.
In this thesis, we adopt a data-driven quantitative approach to load balancing the I/O servers at extreme scale. To this end, we design a global mapper on the Lustre Metadata Server (MDS), which gathers runtime statistics collected from key storage components on the I/O path and applies Markov chain modeling and a dynamic maximum flow algorithm to decide where data should be placed in a load-balanced fashion. Evaluation using a realistic system simulator shows that our approach yields better load balancing, which in turn can help yield higher end-to-end performance. / Master of Science / Critical jobs such as meteorological prediction are run at exascale supercomputing facilities like the Oak Ridge Leadership Computing Facility (OLCF). It is necessary for these centers to provide an optimally running infrastructure to support these critical workloads. The amount of data being produced and processed is increasing rapidly, requiring these High Performance Computing (HPC) centers to design systems that support the increasing volume of data.
Lustre is a parallel filesystem deployed in HPC centers. Being a hierarchical filesystem, Lustre comprises a distributed layer of Object Storage Servers (OSSs) that are responsible for I/O on the Object Storage Targets (OSTs). Lustre employs a traditional capacity-based Round-Robin approach for file placement on the OSTs, which results in the usage of only a small fraction of the OSTs. The traditional Round-Robin approach also increases the load on the same set of OSSs, which decreases performance. Thus, it is imperative to have a better load-balanced file placement algorithm that can evenly distribute the load across all OSSs and OSTs in order to meet the future demands of data storage and processing.
We approach the problem of load imbalance by splitting the whole system into two views: the filesystem and the applications. We first collect the current usage statistics of the filesystem by means of a distributed monitoring tool. We then predict the applications' I/O request patterns by employing a Markov chain model. Finally, we make use of both of these components to design a load balancing algorithm that evens out the load on both the OSSs and the OSTs.
We evaluate our algorithm on a custom-built simulator that simulates the behavior of the actual filesystem.
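The thesis's actual mapper combines monitored statistics, Markov chain prediction, and a maximum-flow formulation; as a deliberately simplified stand-in, the sketch below predicts an application's next load class from empirical transition counts and places new files on the currently least-loaded OST. All names, load classes, and the greedy placement rule are illustrative assumptions.

```python
import heapq
from collections import Counter

def predict_next(history):
    """Most likely next load class from an empirical first-order Markov
    model of an application's past I/O load classes."""
    counts = Counter(zip(history, history[1:]))
    last = history[-1]
    candidates = {b: c for (a, b), c in counts.items() if a == last}
    return max(candidates, key=candidates.get) if candidates else last

def place_files(n_files, ost_loads):
    """Greedy least-loaded placement (a crude stand-in for the max-flow
    step): repeatedly pick the OST with the smallest current load."""
    heap = [(load, ost) for ost, load in ost_loads.items()]
    heapq.heapify(heap)
    placement = []
    for _ in range(n_files):
        load, ost = heapq.heappop(heap)
        placement.append(ost)
        heapq.heappush(heap, (load + 1, ost))   # unit cost per file
    return placement

history = ["low", "low", "high", "low", "high", "high"]
print(predict_next(history))                          # predicted next class
print(place_files(4, {"ost0": 3, "ost1": 0, "ost2": 1}))
```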
|
8 |
Analysis of Hybrid CSMA/CA-TDMA Channel Access Schemes with Application to Wireless Sensor Networks. Shrestha, Bharat. 27 November 2013.
A wireless sensor network consists of a number of sensor devices and coordinator(s) or sink(s). A coordinator collects the sensed data from the sensor devices for further processing. In such networks, sensor devices are generally powered by batteries. Since wireless transmission of packets consumes a significant amount of energy, it is important for a network to adopt a medium access control (MAC) technology that is energy efficient and satisfies the communication performance requirements. Carrier sense multiple access with collision avoidance (CSMA/CA), a popular access technique because of its simplicity, flexibility, and robustness, suffers from poor throughput and energy inefficiency in wireless sensor networks. On the other hand, time division multiple access (TDMA) is a collision-free and delay-bounded access technique but suffers from a scalability problem. For this reason, this thesis focuses on the design and analysis of hybrid channel access schemes that combine the strengths of both CSMA/CA and TDMA.
In a hybrid CSMA/CA-TDMA scheme, the use of the CSMA/CA period and the TDMA period can be optimized to enhance communication performance in the network. If such a hybrid channel access scheme is not designed properly, high congestion during the CSMA/CA period and wasted bandwidth during the TDMA period result in poor communication performance in terms of throughput and energy efficiency. To address this issue, distributed and centralized channel access schemes are proposed to regulate the activities (such as transmitting, receiving, idling, and going into low-power mode) of the sensor devices. This regulation during the CSMA/CA period, together with the allocation of TDMA slots, reduces traffic congestion and thus improves network performance. This thesis also proposes and analyzes time slot allocation methods in hybrid CSMA/CA-TDMA schemes to improve network performance. Finally, such hybrid CSMA/CA-TDMA schemes are used in a cellular layout model for the multi-hop wireless sensor network to mitigate the hidden-terminal collision problem.
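One simple way to picture such a hybrid superframe is sketched below: devices reporting a backlog above a threshold are granted dedicated TDMA slots, and the remainder contend during the CSMA/CA period. This is an illustrative policy only, not one of the thesis's proposed schemes.

```python
def build_superframe(backlogs, n_tdma_slots, threshold=2):
    """Assign the available TDMA slots to the most backlogged devices;
    everyone else contends in the CSMA/CA period. Illustrative policy only."""
    heavy = sorted((d for d, q in backlogs.items() if q >= threshold),
                   key=lambda d: -backlogs[d])
    tdma = heavy[:n_tdma_slots]
    csma = [d for d in backlogs if d not in tdma]
    return {"tdma_slots": tdma, "csma_contenders": csma}

backlogs = {"s1": 5, "s2": 0, "s3": 3, "s4": 1, "s5": 7}
print(build_superframe(backlogs, n_tdma_slots=2))
# {'tdma_slots': ['s5', 's1'], 'csma_contenders': ['s2', 's3', 's4']}
```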
|
9 |
Quantitative tool for in vivo analysis of DNA-binding proteins using High Resolution Sequencing Data. Filatenkova, Milana S. January 2016.
DNA-binding proteins (DBPs) such as repair proteins, DNA polymerases, recombinases, and transcription factors manifest diverse stochastic behaviours dependent on physiological conditions inside the cell. Now that multiple independent in vitro studies have extensively characterised different aspects of the biochemistry of DBPs, computational and mathematical tools that can integrate this information into a coherent framework are in huge demand, especially when attempting a transition to in vivo characterisation of these systems. ChIP-Seq is the method commonly used to study DBPs in vivo. This method generates high resolution sequencing data: a population-scale readout of the activity of DBPs on the DNA. The mathematical tools available for the analysis of this type of data are at the moment very restrictive in their ability to extract mechanistic and quantitative details on the activity of DBPs. The main trouble that researchers experience when analysing such population-scale sequencing data is effectively disentangling the complexity in these data, since the observed output often combines diverse outcomes of multiple unsynchronised processes reflecting biomolecular variability. Although it is a static snapshot, ChIP-Seq can be effectively utilised as a readout for the dynamics of DBPs in vivo. This thesis features a new approach to ChIP-Seq analysis: accessing the concealed details of the dynamic behaviour of DBPs on DNA using probabilistic modelling, statistical inference and numerical optimisation. To achieve this, I propose to integrate previously acquired assumptions about the behaviour of DBPs into a Markov chain model that takes into account their intrinsic stochasticity. By incorporating this model into a statistical model of data acquisition, the experimentally observed output can be simulated and then compared to in vivo data to reverse-engineer the stochastic activity of DBPs on the DNA. Conventional tools normally employ simple empirical models whose parameters have no link to the mechanistic reality of the process under scrutiny. This thesis marks the transition from qualitative analysis to mechanistic modelling in an attempt to make the most of high resolution sequencing data. It is also worth noting that, from a computer science point of view, DBPs are of great interest since they are able to perform stochastic computation on DNA by responding in a probabilistic manner to the patterns encoded in the DNA. The theoretical framework proposed here allows the complex responses of these molecular machines to sequence features to be quantitatively characterised.
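The thesis's inference machinery is far richer than anything that fits here, but the core simulate-and-compare loop can be caricatured: simulate a two-state (unbound/bound) Markov chain along DNA positions, map bound states to expected read coverage, and score parameters by Poisson likelihood against observed counts. The states, rates, and sequencing depths below are invented for illustration.

```python
import math, random

def simulate_occupancy(n, p_bind, p_release, seed=0):
    """Two-state Markov chain along DNA positions (0 = unbound, 1 = bound)."""
    rng = random.Random(seed)
    state, occ = 0, []
    for _ in range(n):
        if state == 0 and rng.random() < p_bind:
            state = 1
        elif state == 1 and rng.random() < p_release:
            state = 0
        occ.append(state)
    return occ

def poisson_loglik(observed, occ, depth_bound=10.0, depth_bg=1.0):
    """Log-likelihood of observed read counts given a simulated occupancy
    track, with different expected depths for bound vs. background sites."""
    ll = 0.0
    for k, s in zip(observed, occ):
        lam = depth_bound if s else depth_bg
        ll += k * math.log(lam) - lam - math.lgamma(k + 1)
    return ll

occ = simulate_occupancy(1000, p_bind=0.01, p_release=0.05)
observed = [10 if s else 1 for s in occ]   # fake "data" just for the demo
print(poisson_loglik(observed, occ))
```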
|
10 |
Probabilistic causes in Markov chains. Ziemek, Robin; Piribauer, Jakob; Funke, Florian; Jantsch, Simon; Baier, Christel. 22 April 2024.
By combining two of the central paradigms of causality, namely counterfactual reasoning and probability-raising, we introduce a probabilistic notion of cause in Markov chains. Such a cause consists of finite executions of the probabilistic system after which the probability of an ω-regular effect exceeds a given threshold. The cause, as a set of executions, then has to cover all behaviors exhibiting the effect. With these properties, such causes can be used for monitoring purposes, where the aim is to detect faulty behavior before it actually occurs. In order to choose which cause should be computed, we introduce multiple types of costs to capture the consumption of resources by the system or monitor from different perspectives, and we study the complexity of computing cost-minimal causes.
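For a self-contained caricature of the probability-raising condition (ignoring the paper's full ω-regular machinery), the probability of eventually reaching a designated effect state from the state in which a finite execution ends can be computed by value iteration over the standard reachability equations; the execution is a cause candidate when that probability exceeds the threshold. The chain and threshold below are illustrative.

```python
import numpy as np

# Toy Markov chain; state 3 is the "effect" state. Values are illustrative.
P = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.0, 0.4, 0.4, 0.2],
    [0.0, 0.0, 1.0, 0.0],   # state 2: safe sink, never reaches the effect
    [0.0, 0.0, 0.0, 1.0],   # state 3: the effect, absorbing
])
effect = 3

# Reachability probabilities via value iteration:
# x[s] = 1 for the effect state, else x[s] = sum_t P[s, t] * x[t].
x = np.zeros(len(P))
x[effect] = 1.0
for _ in range(1000):
    x = P @ x
    x[effect] = 1.0
print("Pr(eventually effect) per state:", x.round(4))

# A finite execution ending in state s is a cause candidate (in the
# probability-raising sense) when x[s] exceeds the chosen threshold.
threshold = 0.3
print("cause-candidate end states:",
      [s for s in range(len(P)) if x[s] > threshold])   # [0, 1, 3] here
```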
|