1 |
Partitioning of Urban Transportation Networks Utilizing Real-World Traffic Parameters for Distributed Simulation in SUMO. Ahmed, Md Salman; Hoque, Mohammad A. 27 January 2017
This paper describes a partitioning algorithm for real-world transportation networks that incorporates previously unaccounted-for parameters such as signalized traffic intersections, road segment length, traffic density, number of lanes, and the inter-partition communication overhead caused by vehicles migrating from one partition to another. We also describe our hypothetical framework for distributed simulation of the partitioned road network in SUMO, in which a master controller, currently under development using the TraCI APIs and the MPI library, coordinates the parallel simulation and the synchronization of the sub-networks generated by our proposed algorithm.
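The hand-off the abstract alludes to, detecting vehicles whose current road edge lies in a different partition than their previous edge, can be sketched as follows. This is an illustrative sketch only: the edge-to-partition mapping and the per-step vehicle records are hypothetical stand-ins for data a real master controller would query through TraCI and exchange over MPI.

```python
# Sketch: detecting inter-partition vehicle migrations so a master
# controller can hand vehicles off between SUMO sub-simulations.
# EDGE_PARTITION and the per-step vehicle records are hypothetical
# stand-ins for data a real controller would query via TraCI.

EDGE_PARTITION = {  # static output of the partitioning algorithm
    "edge_a": 0, "edge_b": 0,
    "edge_c": 1, "edge_d": 1,
}

def find_migrations(prev_edges, curr_edges):
    """Return (vehicle, from_partition, to_partition) for every vehicle
    whose current edge lies in a different partition than its last edge."""
    migrations = []
    for veh, curr_edge in curr_edges.items():
        prev_edge = prev_edges.get(veh)
        if prev_edge is None:
            continue  # vehicle just entered the network this step
        src, dst = EDGE_PARTITION[prev_edge], EDGE_PARTITION[curr_edge]
        if src != dst:
            migrations.append((veh, src, dst))
    return migrations

# One simulation step: veh2 crossed from partition 0 to partition 1.
prev_step = {"veh1": "edge_a", "veh2": "edge_b"}
curr_step = {"veh1": "edge_a", "veh2": "edge_c"}
print(find_migrations(prev_step, curr_step))  # [('veh2', 0, 1)]
```

A real controller would run this check after each `simulationStep()` and forward the migrating vehicles' state to the neighboring sub-simulation; the list length is one measure of the inter-partition communication overhead the algorithm tries to minimize.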
|
2 |
Structural and dynamic models for complex road networks. Jiawei Xue (8672484). 04 May 2020
The interplay between network topology and traffic dynamics in road networks affects a variety of performance measures. Extensive existing research focuses on link-level fundamental diagrams and on traffic assignment under route-choice assumptions. However, the underlying coupling of structure and dynamics leaves network-level traffic not fully investigated. In this thesis, we build structural and dynamic models to address three challenges: 1) describing road network topology and understanding the differences between cities; 2) quantifying network congestion using both road network topology and traffic flow information; 3) allocating transportation management resources to optimize road network connectivity.

The first part of the thesis focuses on structural models for complex road networks. Online road map platforms, such as OpenStreetMap, provide reliable road network data for the world. To solve the duplicate-node problem, an O(n) time complexity node-merging algorithm is designed to pre-process a raw road network with n nodes. We then define unweighted and weighted node degree distributions for road networks. Numerical experiments show the heterogeneity in node degree distribution for the Beijing and Shanghai road networks. Additionally, we find that a power-law distribution fits the weighted road network under certain parameter settings, extending the current understanding that the degree distribution of the primal road network is not power law.

In the second part, we develop a road network congestion analysis and management framework. Unlike previous methods, our framework incorporates both network structure and dynamics. Moreover, it relies only on link speed data, which is more accessible than the link density data used previously. Specifically, we start from existing traffic percolation theory and the critical relative speed to describe the network-level traffic congestion state. Based on traffic component curves, we construct a measure A_ij for two road segments i and j to quantify the necessity of placing the two segments in the same traffic zone. Finally, we apply the Louvain algorithm to the resulting road segment networks to generate candidate road network partitions. These candidate partitions will help transportation engineers control regional traffic.

The last part formulates and solves a road network management resource allocation optimization. The objective is to maximize the critical relative speed, which is defined from traffic component curves and is closely related to driving comfort. A budget upper bound serves as one of the constraints. To solve the simulation-based nonlinear optimization problem, we propose a simple allocation method and a meta-heuristic method based on the genetic algorithm. Three applications demonstrate that the meta-heuristic method finds better solutions than simple allocation. The results will inform the optimal allocation of resources to each road segment in metropolitan cities to enhance the connectivity of road networks.
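The O(n) node-merging pre-processing step mentioned in the first part can be sketched with a spatial grid hash, so that each node only inspects a constant number of neighboring cells. The merge radius and the coordinates below are illustrative assumptions, not parameters from the thesis.

```python
# Sketch of an O(n) duplicate-node merging pass, the kind of
# pre-processing the thesis applies to raw OpenStreetMap data.
# Grid cells have side MERGE_RADIUS, so any near-duplicate of a node
# must sit in the 3x3 neighborhood of that node's cell.
from collections import defaultdict

MERGE_RADIUS = 1e-4  # degrees; nodes closer than this are considered duplicates

def _find_rep(grid, canonical, cx, cy, x, y):
    # Scan only the 3x3 neighborhood of cells for a near-duplicate.
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for oid, ox, oy in grid[(cx + dx, cy + dy)]:
                if (x - ox) ** 2 + (y - oy) ** 2 <= MERGE_RADIUS ** 2:
                    return canonical[oid]
    return None

def merge_duplicate_nodes(nodes):
    """Map each node id to a canonical representative node id."""
    grid = defaultdict(list)   # grid cell -> [(node_id, x, y)]
    canonical = {}             # node_id -> representative node_id
    for nid, (x, y) in nodes.items():
        cx, cy = int(x / MERGE_RADIUS), int(y / MERGE_RADIUS)
        rep = _find_rep(grid, canonical, cx, cy, x, y)
        canonical[nid] = rep if rep is not None else nid
        grid[(cx, cy)].append((nid, x, y))
    return canonical

nodes = {"a": (0.0, 0.0), "b": (0.00005, 0.0), "c": (1.0, 1.0)}
print(merge_duplicate_nodes(nodes))  # "b" collapses onto "a"; "c" stays distinct
```

Each node is inserted once and probes a bounded number of cells, giving the expected O(n) behavior the abstract claims; the merged mapping can then feed the degree-distribution computation.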
|
3 |
A Data Analytics Framework for Regional Voltage Control. Yang, Duotong. 16 August 2017
Modern power grids are among the largest and most complex engineered systems. Due to economic competition and deregulation, power systems are operated closer to their security limits. When the system is operating under heavy loading conditions, an unstable voltage condition may cause a cascading outage. Voltage fluctuations are presently being further aggravated by the increasing integration of utility-scale renewable energy sources. In this regard, a fast-responding and reliable voltage control approach is indispensable.
The continuing success of synchrophasor technology has ushered in new subdomains of power system applications for real-time situational awareness, online decision support, and offline system diagnostics. The primary objective of this dissertation is to develop a data-analytics-based framework for regional voltage control utilizing high-speed data streams delivered from synchronized phasor measurement units. The dissertation focuses on three studies. The first centers on the development of decision-tree-based voltage security assessment and control. The second proposes an adaptive decision-tree scheme that uses online ensemble learning to update the decision model in real time. The third introduces a system network partitioning approach whose aim is to reduce the size of the training sample database and the number of control candidates for each regional voltage controller. The methodologies proposed in this dissertation are evaluated on an open-source software framework. / Ph. D. / Modern power grids are among the largest and most complex engineered systems. When the system is heavily loaded, a small contingency may cause a large system blackout. In this regard, a fast-responding and reliable control approach is indispensable. Voltage is one of the most important metrics indicating the system's condition. This dissertation develops a cost-effective control method to secure the power system based on real-time voltage measurements. The proposed method is built on an open-source framework.
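The online-ensemble idea in the second study, updating a decision model as new measurements stream in, can be illustrated with a generic weighted-majority sketch. This is not the dissertation's algorithm; the voltage thresholds, labels, and samples below are invented for illustration.

```python
# Sketch of online ensemble learning in the spirit described above:
# several simple voltage-security rules vote, and the weight of any
# rule that misclassifies a new measurement is decayed in real time.
# Thresholds and measurements are illustrative, not from the dissertation.

BETA = 0.5  # weight decay applied to experts that err

def make_threshold_rule(v_low):
    # Classify a bus voltage (per unit) as secure (1) or insecure (0).
    return lambda v: 1 if v >= v_low else 0

experts = [make_threshold_rule(t) for t in (0.90, 0.95, 0.97)]
weights = [1.0] * len(experts)

def predict(v):
    secure = sum(w for w, e in zip(weights, experts) if e(v) == 1)
    insecure = sum(w for w, e in zip(weights, experts) if e(v) == 0)
    return 1 if secure >= insecure else 0

def update(v, label):
    # Weighted-majority update: penalize experts that were wrong.
    for i, e in enumerate(experts):
        if e(v) != label:
            weights[i] *= BETA

# Stream of (voltage, true security label) pairs, e.g. from PMU data.
for v, label in [(0.96, 1), (0.93, 0), (0.96, 1)]:
    update(v, label)

print(predict(0.96))  # ensemble now classifies 0.96 p.u. as secure: 1
```

The appeal for PMU-driven control is that each update touches only the expert weights, so the model adapts at streaming rates without retraining from the full sample database.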
|
4 |
Potential-Based Routing in Wireless Sensor Networks. Praveen Kumar, M. 03 1900
Recent advances in VLSI technology and wireless communication have enabled the development of tiny, low-cost sensor nodes that communicate over short distances. These sensor nodes, which combine sensing, data processing, and wireless communication capabilities, suggest the idea of sensor networks based on the collaborative effort of a large number of nodes. Sensor networks hold promise for numerous applications such as intrusion detection, weather monitoring, security and tactical surveillance, distributed computing, and disaster management. Several new protocols and algorithms have been proposed in the recent past to realize these applications. In this thesis, we consider the problem of routing in Wireless Sensor Networks (WSNs).
Routing is a challenging problem in WSNs due to the inherent characteristics that distinguish these networks from others. Several routing algorithms have been proposed for WSNs, each considering a specific network performance objective such as long network lifetime (Chang and Tassiulas, 2004), end-to-end delay guarantees (T. He et al., 2003), and data fusion (Razvan Cristescu et al., 2005). In this thesis, we utilize the potential-based routing paradigm to develop routing algorithms for different performance objectives of interest in WSNs. The basic idea behind the proposed approach is to assign a scalar, called the potential, to every sensor node in the network. Data is then forwarded to the neighbor with the highest potential. Potentials cause the data to flow along certain paths. By defining potential fields appropriately, one can cause data to flow along preferred paths so that a given performance objective is achieved. We demonstrate the usefulness of this approach by considering three performance objectives and defining potentials appropriately in each case.
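The forwarding rule described above can be sketched in a few lines: assign each node a scalar potential and greedily hand the packet to the highest-potential neighbor. The topology and the potential field below (here simply higher potential closer to the sink) are illustrative assumptions, not a field defined in the thesis.

```python
# Sketch of potential-based forwarding: every node has a scalar
# potential, and a packet always moves to the neighbor with the
# highest potential. The graph and potentials are invented examples.

GRAPH = {  # node -> neighbors
    "s": ["a", "b"],
    "a": ["s", "b", "sink"],
    "b": ["s", "a", "sink"],
    "sink": ["a", "b"],
}

# Example potential field: potential increases toward the sink.
POTENTIAL = {"s": -2.0, "a": -1.0, "b": -1.5, "sink": 0.0}

def route(src, dst):
    """Greedily follow increasing potential from src to dst."""
    path, node = [src], src
    while node != dst:
        nxt = max(GRAPH[node], key=POTENTIAL.get)
        if POTENTIAL[nxt] <= POTENTIAL[node]:
            raise RuntimeError("stuck at a local maximum of the potential")
        path.append(nxt)
        node = nxt
    return path

print(route("s", "sink"))  # ['s', 'a', 'sink']
```

Changing only the `POTENTIAL` table redirects traffic, which is the point of the paradigm: each performance objective (lifetime, delivery ratio, fusion) is encoded as a different potential field while the forwarding rule stays fixed.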
The performance objectives we consider are (i) maximizing the time to network partition, (ii) maximizing the packet delivery ratio, and (iii) data fusion. In an operational sensor network, sensor nodes' energy levels gradually deplete, leading eventually to network partition. A natural objective is to route packets in such a way that the time to network partition is maximized. We developed a potential function for this objective, analyzing simple network cases and using the insight gained to develop a potential function applicable to any network. Simulation results showed that considerable improvements in time to network partition can be obtained compared to popular approaches such as maximum-lifetime routing and shortest-hop-count routing. Next, we designed a potential function that leads to routes with high packet delivery ratios. We proposed a "channel-state-aware" potential definition for a simple two-relay network and performed a Markov-chain-based analysis to obtain the packet delivery ratio. Considerable improvement was observed compared to a channel-state-oblivious policy. This motivated us to define a channel-state-dependent potential function for a general network. Simulation results showed that for a relatively slowly changing wireless network, our approach can provide up to 20% better performance than the commonly used shortest-hop-count routing.
Finally, we considered the problem of correlated data gathering in sensor networks. The routing approach followed in the literature is to construct a spanning tree rooted at the sink. Every node in the tree aggregates its own data with the data from its children in order to reduce the number of transmitted bits; consequently, the total energy cost of the data collection task is a function of the underlying tree structure. Noting that potential-based routing schemes also result in a tree structure, we present a potential definition that results in the minimum-energy-cost tree under some special conditions. Specifically, we consider a scenario in which sensor nodes' measurements are quantized to K values. The task at the sink is to construct a histogram of the measurements of all sensor nodes. Sensor nodes do not send their measurements directly to the sink. Instead, each node constructs a temporary histogram using the data from its children and forwards it to its parent node in the tree. We present a potential definition that results in the minimum-energy-cost tree under some conditions on sensor nodes' measurements, accounting for both the transmission energy cost and the energy cost associated with the aggregation process.
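The in-network histogram aggregation described above can be sketched as a recursive merge over the collection tree: each node adds its own quantized measurement to the bin counts received from its children and forwards a single K-bin histogram to its parent. The tree, the value of K, and the measurements below are illustrative assumptions.

```python
# Sketch of the histogram-aggregation scheme described above: every
# node merges the K-bin histograms from its children with its own
# quantized measurement, so each link carries one histogram instead
# of all raw readings. Tree and measurements are invented examples.

K = 4  # measurements are quantized to K values: 0 .. K-1

TREE = {"sink": ["n1", "n2"], "n1": ["n3"], "n2": [], "n3": []}
MEASUREMENT = {"n1": 0, "n2": 1, "n3": 1}  # the sink itself does not measure

def collect(node):
    """Return the histogram of all measurements in node's subtree."""
    hist = [0] * K
    if node in MEASUREMENT:
        hist[MEASUREMENT[node]] += 1
    for child in TREE[node]:
        child_hist = collect(child)  # the histogram the child transmits
        hist = [a + b for a, b in zip(hist, child_hist)]  # aggregation step
    return hist

print(collect("sink"))  # bin counts for values 0..3 over n1, n2, n3
```

Because every node transmits exactly one K-bin histogram regardless of its subtree size, the energy cost depends on the tree's shape, which is why the choice of potential function (and hence of tree) matters.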
|
5 |
Programming Model and Protocols for Reconfigurable Distributed SystemsArad, Cosmin January 2013 (has links)
Distributed systems are everywhere. From large datacenters to mobile devices, an ever richer assortment of applications and services relies on distributed systems, infrastructure, and protocols. Despite their ubiquity, testing and debugging distributed systems remains notoriously hard. Moreover, aside from inherent design challenges posed by partial failure, concurrency, or asynchrony, there remain significant challenges in the implementation of distributed systems. These programming challenges stem from the increasing complexity of the concurrent activities and reactive behaviors in a distributed system on the one hand, and the need to effectively leverage the parallelism offered by modern multi-core hardware, on the other hand. This thesis contributes Kompics, a programming model designed to alleviate some of these challenges. Kompics is a component model and programming framework for building distributed systems by composing message-passing concurrent components. Systems built with Kompics leverage multi-core machines out of the box, and they can be dynamically reconfigured to support hot software upgrades. A simulation framework enables deterministic execution replay for debugging, testing, and reproducible behavior evaluation for large-scale Kompics distributed systems. The same system code is used for both simulation and production deployment, greatly simplifying the system development, testing, and debugging cycle. We highlight the architectural patterns and abstractions facilitated by Kompics through a case study of a non-trivial distributed key-value storage system. CATS is a scalable, fault-tolerant, elastic, and self-managing key-value store which trades off service availability for guarantees of atomic data consistency and tolerance to network partitions. 
We present the composition architecture for the numerous protocols employed by the CATS system, as well as our methodology for testing the correctness of key CATS algorithms using the Kompics simulation framework. Results from a comprehensive performance evaluation attest that CATS achieves its claimed properties and delivers a level of performance competitive with similar systems which provide only weaker consistency guarantees. More importantly, this testifies that Kompics admits efficient system implementations. Its use as a teaching framework as well as its use for rapid prototyping, development, and evaluation of a myriad of scalable distributed systems, both within and outside our research group, confirm the practicality of Kompics. / Kompics / CATS / REST
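The message-passing component style that Kompics embodies can be illustrated with a toy event loop; the real framework is JVM-based, and every name below is invented. The single-threaded queue also hints at why the same component code can run deterministically in simulation: replaying the same event schedule reproduces the same behavior.

```python
# Toy sketch of a message-passing component model in the spirit of
# Kompics (not its actual API). Components interact only through
# events placed on a shared queue; a deterministic single-threaded
# loop delivers them, which makes runs exactly reproducible.
from collections import deque

class Component:
    def __init__(self, system):
        self.system = system
    def trigger(self, target, event):
        self.system.queue.append((target, event))  # asynchronous send
    def handle(self, event):
        raise NotImplementedError

class System:
    def __init__(self):
        self.queue = deque()
    def run(self):
        # Deterministic event loop: same schedule on every run, which
        # is what makes simulation-based testing of composed components
        # reproducible (a production runtime would dispatch in parallel).
        while self.queue:
            target, event = self.queue.popleft()
            target.handle(event)

class Ping(Component):
    def handle(self, event):
        kind, _sender = event
        if kind == "pong":
            self.log = "got pong"

class Pong(Component):
    def handle(self, event):
        kind, sender = event
        if kind == "ping":
            self.trigger(sender, ("pong", self))

system = System()
ping, pong = Ping(system), Pong(system)
ping.trigger(pong, ("ping", ping))
system.run()
print(ping.log)  # got pong
```

In this style, swapping the sequential `System` for a multi-threaded scheduler changes only the runtime, not the component code, mirroring the thesis's claim that the same system code serves both simulation and production deployment.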
|