1

Adaptive energy management mechanisms for cluster based routing wireless sensor networks

Eshaftri, Mohamed January 2017 (has links)
Wireless Sensor Network (WSN) technology has been one of the major enablers of the Internet of Things (IoT) due to its potential role in digitising smart physical environments. WSNs are typically composed of a vast number of low-power, low-cost and multifunctional sensor nodes deployed within an area, which automatically cooperate to complete the application task. This emerging technology has already contributed to the advancement of a broad range of applications. Nevertheless, the development of WSNs remains challenging due to significant concerns that need to be resolved to take full advantage of this remarkable technology. One of the main challenges in WSNs is reducing the energy consumption of a single node in order to extend the network lifetime and improve the quality of service. For that reason, newly designed energy-efficient communication protocols are required to tackle this issue. Clustering protocols are considered among the most effective solutions for improving network scalability and energy consumption in WSNs. While different clustering protocols have been proposed to tackle the aforementioned issue, those solutions are either not scalable or do not provide mechanisms to avoid heavily loaded areas. This thesis presents new adaptive energy management mechanisms through which the limited, critical energy source can be wisely managed so that the WSN application can achieve its intended design goals. Three protocols are introduced to manage energy use. The first protocol presents an intra-cluster cluster-head (CH) rotation approach that reduces the need to execute a periodic clustering process. The second protocol addresses load balancing in terms of the intra- and inter-cluster communication patterns of clusters of unequal sizes.
This proposed approach involves computing a threshold value that, when reached, triggers an overall network re-clustering, with the condition that the network is reconfigured into clusters of unequal size. The third protocol proposes new performance factors for CH selection. Based on these factors, the aggregated weight of each node is calculated and the most suitable CH is selected. A comparison with existing communication protocols reveals that the proposed approaches effectively balance the energy consumption among all sensor nodes and significantly increase the network lifetime.
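The weighted CH selection described above can be sketched as follows. This is a minimal illustration only: the specific factors (residual energy, node degree, distance to the sink) and the weights are hypothetical stand-ins, not the thesis's actual performance factors.

```python
# Illustrative sketch of weighted cluster-head (CH) selection.
# Factors and weights are hypothetical examples, not the thesis's own.

def aggregated_weight(node, w_energy=0.5, w_degree=0.3, w_distance=0.2):
    """Combine normalised factors into a single score (higher is better).
    Distance to the sink counts against a node, so it is subtracted."""
    return (w_energy * node["residual_energy"]
            + w_degree * node["degree"]
            - w_distance * node["distance_to_sink"])

def select_cluster_head(nodes):
    """Pick the node with the highest aggregated weight as CH."""
    return max(nodes, key=aggregated_weight)

cluster = [
    {"id": 1, "residual_energy": 0.9, "degree": 0.4, "distance_to_sink": 0.7},
    {"id": 2, "residual_energy": 0.6, "degree": 0.8, "distance_to_sink": 0.2},
    {"id": 3, "residual_energy": 0.3, "degree": 0.9, "distance_to_sink": 0.1},
]
ch = select_cluster_head(cluster)  # node 2 scores highest here
```

Rotating the CH role then amounts to re-running the selection within the cluster when the current CH's score drops, rather than re-clustering the whole network.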
2

Uncertainty analysis and application on smart homes and smart grids : big data approaches

Shi, Heng January 2018 (has links)
Existing methods for uncertainty quantification (UQ) and mitigation in electrical power systems are very basic: the Monte Carlo (MC) method and its meta-methods are deployed in most applications due to their simplicity and ease of generalisation. They are adequate for a traditional power system in which the load is predictable and generation is controllable. However, the large-scale penetration of low-carbon technologies, such as solar panels, electric vehicles and energy storage, has necessitated more comprehensive approaches to uncertainty, as these technologies introduce new sources of uncertainty with larger volume and more diverse characteristics, making the understanding of the sources and consequences of uncertainty a highly complex issue. Traditional methods assume that a given system has a unique uncertainty characteristic and hence treat the uncertainty of the system as a single component in applications. However, this view is no longer applicable in the new context, as it neglects important underlying information associated with individual uncertainty components.
Therefore, this thesis aims at: i) systematically developing UQ methodologies to identify, discriminate and quantify different uncertainty components (forward UQ) and, critically, to model and trace the associated sources independently (inverse UQ), delivering new uncertainty information such as how uncertainty components are generated from their sources, how they correlate with each other and how they propagate through system aggregation; ii) applying this new uncertainty information to further improve a range of fundamental power system applications, from Load Forecasting (LF) to Energy Management Systems (EMS). In the EMS application, the proposed forward UQ methods enable the development of a decentralised system that taps into the new uncertainty information concerning the correlations between load patterns across individual households, the characteristics of uncertainty components and their propagation through aggregation. The decentralised EMS achieved peak and uncertainty reductions of 18% and 45% respectively at the grid level. In the LF application, this thesis developed inverse UQ through a deep learning model that directly builds the connection between uncertainty components and their corresponding sources. For load forecasting on expectation (point LF) and on probability (probabilistic LF), it achieved 20% and 12% performance improvements respectively over the state of the art, such as Support Vector Regression (SVR), Autoregressive Integrated Moving Average (ARIMA) and Multiple Linear Quantile Regression (MLQR).
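The forward-UQ idea of propagating uncertainty through aggregation can be illustrated with a toy Monte Carlo experiment: sampling many hypothetical household loads and comparing the relative spread of a single household against the aggregate. The Gaussian load model and all parameters below are illustrative assumptions, not the thesis's actual data or method.

```python
# Toy Monte Carlo sketch of forward uncertainty propagation through
# aggregation: the relative spread (coefficient of variation) of the
# aggregate load is smaller than that of one household. Illustrative only.
import random
import statistics

random.seed(0)
N_HOUSEHOLDS = 100   # hypothetical feeder size
N_SAMPLES = 2000     # Monte Carlo draws

def household_load():
    # hypothetical household load: mean 1.0 kW, std 0.3 kW
    return random.gauss(1.0, 0.3)

single = [household_load() for _ in range(N_SAMPLES)]
aggregate = [sum(household_load() for _ in range(N_HOUSEHOLDS))
             for _ in range(N_SAMPLES)]

cv_single = statistics.stdev(single) / statistics.mean(single)
cv_agg = statistics.stdev(aggregate) / statistics.mean(aggregate)
# for independent loads, cv_agg shrinks roughly as 1/sqrt(N_HOUSEHOLDS)
```

This is exactly the kind of aggregation effect that a component-wise view of uncertainty can exploit, since correlated components do not cancel the way independent ones do.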
3

Latency Tradeoffs in Distributed Storage Access

Ray, Madhurima January 2019 (has links)
The performance of storage systems is central to handling the huge amounts of data being generated from a variety of sources, including scientific experiments, social media, crowdsourcing and an increasing variety of cyber-physical systems. Emerging high-speed storage technologies enable the efficient ingestion of, and access to, such large volumes of data. However, the high data-volume requirements of new applications, which largely generate unstructured and semi-structured streams of data, combined with these emerging high-speed storage technologies, pose a number of new challenges, including handling such data at low latency and ensuring that the network providing access to the data does not become the bottleneck. The traditional relational model is not well suited to efficiently storing and retrieving unstructured and semi-structured data. An alternative mechanism, popularly known as the Key-Value Store (KVS), has been investigated over the last decade to handle such data. A KVS needs only a 'key' to uniquely identify a data record, which may be of variable length and may or may not have further structure in the form of predefined fields. Most existing KVSs were designed for hard-disk-based storage (before SSDs gained popularity), where avoiding random accesses is crucial for good performance. Unfortunately, as modern solid-state drives become the norm for data center storage, these HDD-oriented KV structures result in high read, write and space amplification, which is detrimental to both the SSD's performance and its endurance. Note also that regardless of how storage systems are deployed, access to large amounts of storage by many nodes must necessarily go over the network. At the same time, emerging storage technologies such as Flash, 3D XPoint and phase-change memory (PCM),
coupled with highly efficient access protocols such as NVMe, are capable of ingesting and reading data at rates that challenge even leading-edge networking technologies such as 100 Gb/s Ethernet. Moreover, some of the higher-end storage technologies (e.g., Intel Optane storage based on 3D XPoint, PCM, etc.) coupled with lean protocols like NVMe can provide storage access latencies in the 10-20 µs range, which means that the additional latency due to network congestion can become significant. The purpose of this thesis is to address some of the aforementioned issues. We propose a new hash-based, SSD-friendly key-value store architecture called FlashKey, which is especially designed for SSDs to provide low access latencies, low read and write amplification, and the ability to easily trade off latency for sequential access, for example in range queries. Through a detailed experimental evaluation of FlashKey against the two most popular KVSs, namely RocksDB and LevelDB, we demonstrate that even as an initial implementation it achieves substantially better write amplification, average latency and tail latency at similar or better space amplification. Next, we deal with network congestion by dynamically replicating data items that are heavily used. The trade-off here is between latency and the replication or migration overhead. It is important to reverse the replication or migration as the congestion fades away, since our observations show that placing data and the applications that access it together, in a consolidated fashion, significantly reduces propagation delay and increases network energy-saving opportunities; this matters because data center networks are nowadays equipped with high-speed but power-hungry network infrastructure. Finally, we design a trade-off between network consolidation and congestion, trading latency for power savings.
During quiet hours, we consolidate traffic onto fewer links and put the unused links into different sleep modes to save power. As traffic increases, however, we reactively start to spread traffic out again to avoid congestion from the upcoming traffic surge. There are numerous studies in the area of network energy management that use similar approaches; however, most of them perform energy management at a coarser time granularity (e.g., 24 hours or beyond). In contrast, our mechanism tries to exploit all the small-to-medium time gaps in traffic and invoke network energy management without causing a significant increase in latency. / Computer and Information Science
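The contrast the abstract draws between a hash-based KVS and scan-friendly designs can be made concrete with a toy in-memory sketch: point lookups cost one hash probe, while range queries must sort the keys. This is purely illustrative; FlashKey's actual on-SSD layout is far more involved, and the class and method names here are invented for the example.

```python
# Minimal in-memory sketch of a hash-based key-value store.
# Illustrative only: not FlashKey's design, just the access-pattern
# trade-off a hash-based layout makes.

class TinyKVS:
    def __init__(self):
        self._buckets = {}  # hash table: key -> variable-length value

    def put(self, key, value):
        """Insert or overwrite a record identified only by its key."""
        self._buckets[key] = value

    def get(self, key, default=None):
        """Point lookup: a single hash probe, no sequential scan."""
        return self._buckets.get(key, default)

    def range_query(self, lo, hi):
        """Range access needs a sort over all keys -- the costly case a
        hash-based design must explicitly trade off against."""
        return [(k, self._buckets[k])
                for k in sorted(self._buckets) if lo <= k <= hi]

kv = TinyKVS()
kv.put("sensor:01", b"23.5C")
kv.put("sensor:02", b"24.1C")
kv.put("sensor:10", b"19.8C")
```

An LSM-tree store like RocksDB or LevelDB makes the opposite choice, keeping data sorted so range queries are cheap at the cost of write amplification from compaction.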