101 |
CROSS-LAYER ADAPTATION OF SERVICE-BASED SYSTEMS. Zengin, Asli. January 2012.
One of the key features of service-based systems (SBS) is the capability to adapt in order to react to various changes in the business requirements and the application context. Given the complex layered structure, and the heterogeneous and dynamic execution context of such systems, adaptation is not at all a trivial task.
The importance of the problem of adaptation has been widely recognized in the community of software services and systems. There exist several adaptation approaches which aim at identifying and solving problems that occur in
one of the SBS layers. A fundamental problem with most of these works is their fragmentation and isolation. While these solutions are quite effective when the specific problem they try to solve is considered, they may be incompatible or even harmful when the whole system is taken into account. Enacting an adaptation in the system might result in triggering new problems.
When building adaptive SBSs, precautions must be taken to consider the impact of adaptations on the entire system. This can be achieved by properly coordinating the adaptation actions provided by the different analysis and decision mechanisms through holistic, multi-layer adaptation strategies. In this dissertation, we address this problem. We present a novel framework for Cross-Layer Adaptation Management (CLAM) that enables a comprehensive impact analysis by coordinating the adaptation and analysis tools available in the SBS.
We define a new system modeling methodology for adaptation coordination. The SBS model and the accompanying adaptation model that we propose in this thesis overcome the limitations of existing cross-layer adaptation approaches through: (i) generality, to accommodate diverse SBS domains with different system elements and layers; (ii) flexibility, to allow new system artifacts and adaptation tools; and (iii) the capability to deal with the complexity of the SBS, given the possibly huge number of problems and adaptations that might take place in the system.
Based on this model we present a tree-based coordination algorithm. On the one hand it exploits the local adaptation and analysis facilities provided by the system, and on the other hand it harmonizes the different layers and system elements by properly coordinating the local solutions. The outcome of the algorithm is a set of alternative cross-layer adaptation strategies which are consistent with the overall system.
Moreover, we propose novel selection criteria to rank the alternative strategies and select the best one. Unlike traditional approaches, we consider as selection criteria not only the overall quality of the SBS but also the total effort required to enact an adaptation strategy. Based on these criteria we present two possible ranking methods: one relying on simple additive weighting, a multiple-criteria decision-making technique, the other on fuzzy logic.
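As an illustration of the first ranking method, the following sketch (with hypothetical strategies, criteria values, and weights; not the CLAM implementation) applies simple additive weighting to score alternative adaptation strategies by the expected SBS quality (a benefit criterion) and the enactment effort (a cost criterion):

```python
# Simple additive weighting (SAW) over hypothetical adaptation strategies.
# "quality" is a benefit criterion (higher is better), "effort" a cost criterion (lower is better).
strategies = {
    "re-bind service":     {"quality": 0.70, "effort": 2.0},
    "re-plan composition": {"quality": 0.85, "effort": 5.0},
    "re-negotiate SLA":    {"quality": 0.60, "effort": 1.0},
}
weights = {"quality": 0.6, "effort": 0.4}  # assumed relative importance

def saw_rank(strategies, weights):
    max_q = max(s["quality"] for s in strategies.values())
    min_e = min(s["effort"] for s in strategies.values())
    scores = {}
    for name, s in strategies.items():
        # Benefit criteria are normalized by the maximum, cost criteria by the minimum.
        scores[name] = (weights["quality"] * s["quality"] / max_q
                        + weights["effort"] * min_e / s["effort"])
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for name, score in saw_rank(strategies, weights):
    print(f"{score:.3f}  {name}")
```

The fuzzy-logic alternative would replace the weighted sum with membership functions and inference rules over the same two criteria.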
The framework is implemented and integrated in a toolkit that allows for constructing and selecting the cross-layer adaptation strategies, and is evaluated on a set of case studies.
|
102 |
Computational problems in algebra: units in group rings and subalgebras of real simple Lie algebras. Faccin, Paolo. January 2014.
In the first part of the thesis I produce and implement an algorithm for obtaining generators of the unit group of the integral group ring ZG of a finite abelian group G. We use our MAGMA implementation of this algorithm to compute the unit group of ZG for G of order up to 110. In the second part of the thesis I show how to construct multiplication tables of semisimple real Lie algebras. Next I give an algorithm, based on the work of Sugiura, to find all Cartan subalgebras of such a Lie algebra. Finally I show algorithms for finding semisimple subalgebras of a given semisimple real Lie algebra.
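As a small illustration of the objects involved (a hedged sketch, not the MAGMA implementation produced in the thesis), the following code does arithmetic in the integral group ring ZG of a cyclic group G = Z/nZ and checks whether one element is an inverse of another, which is the basic test underlying unit computations:

```python
# Arithmetic in the integral group ring Z[Z/nZ]: an element is a list of n integer
# coefficients, coefficient i attached to the group element g^i (illustrative only).

def gr_mult(a, b, n):
    """Multiply two elements of Z[Z/nZ]: a convolution with exponents taken mod n."""
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % n] += ai * bj
    return c

def is_inverse(u, v, n):
    """Return True if u*v equals the identity, i.e. v witnesses that u is a unit."""
    one = [1] + [0] * (n - 1)
    return gr_mult(u, v, n) == one

n = 5
u = [1, -1, 1, 0, 0]          # the element 1 - g + g^2, just to exercise the arithmetic
print(gr_mult(u, u, n))       # -> [1, -2, 3, -2, 1]
print(is_inverse(u, u, n))    # -> False
```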
|
103 |
Machine Learning for Tract Segmentation in dMRI data. Thien Bao, Nguyen. January 2016.
Diffusion MRI (dMRI) data makes it possible to reconstruct the 3D pathways of axons within the white matter of the brain as a set of streamlines, called tractography. A streamline is a vectorial representation of thousands of neuronal axons expressing structural connectivity. An important task is to group streamlines with the same function into one tract: tract segmentation. This work is extremely helpful for neurosurgery or for diagnosing brain disease. However, the segmentation process is difficult and time consuming due to the large number of streamlines (about 3 × 10^5 in a normal brain) and the variability of the brain anatomy among different subjects. In our project, the goal is to design an effective method for the tract segmentation task based on machine learning techniques and to develop an interactive tool that assists medical practitioners in performing this task more precisely, more easily, and faster. First, we propose a design of the interactive segmentation process by presenting the user with a clustered version of the tractography, in which the user selects some of the clusters to identify a superset of the streamlines of interest. This superset is then re-clustered at a finer scale and again the user is requested to select the relevant clusters. The process of re-clustering and manual selection is iterated until the remaining streamlines faithfully represent the desired anatomical structure of interest. To solve the computational issue of clustering a large number of streamlines under the strict time constraints required by interactive use, we present a solution which consists in embedding the streamlines into a Euclidean space (called the dissimilarity representation), and then adopting a state-of-the-art scalable implementation of the k-means algorithm. The dissimilarity representation is defined by selecting a set of streamlines called prototypes and then mapping any new streamline to the vector of its distances from the prototypes. Second, an algorithm is proposed to find the correspondence/mapping between streamlines in the tractographies of two different samples, without requiring any transformation as traditional tractography registration usually does. In other words, we try to find a mapping between the tractographies. Mapping is very useful for studying tractography data across subjects. Last but not least, by exploring the mapping in the context of the dissimilarity representation, we also propose an algorithmic solution to build a common vectorial representation of streamlines across subjects. The core of the proposed solution combines two state-of-the-art elements: first, using the recently proposed tractography mapping approach to align the prototypes across subjects; then, applying the dissimilarity representation to build the common vectorial representation for streamlines. Preliminary results of applying our methods in clinical use cases show evidence that our proposed algorithms are greatly beneficial (in terms of time efficiency, ease of use, etc.) for the study of white matter tractography in clinical applications.
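A minimal sketch of the dissimilarity representation and the scalable clustering step is given below; the distance function, prototype count, and the use of scikit-learn's MiniBatchKMeans are illustrative assumptions rather than the exact pipeline of the thesis:

```python
# Each streamline (an Nx3 polyline) is mapped to the vector of its distances from a
# small set of prototype streamlines; a scalable k-means then clusters the embedding.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def streamline_distance(s1, s2):
    """Mean pointwise distance between two streamlines resampled to the same number of
    points, taking the best of the two orientations (a simplified stand-in for MDF)."""
    direct = np.mean(np.linalg.norm(s1 - s2, axis=1))
    flipped = np.mean(np.linalg.norm(s1 - s2[::-1], axis=1))
    return min(direct, flipped)

def dissimilarity_embedding(streamlines, n_prototypes=40, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(streamlines), size=n_prototypes, replace=False)
    prototypes = [streamlines[i] for i in idx]
    return np.array([[streamline_distance(s, p) for p in prototypes] for s in streamlines])

if __name__ == "__main__":
    # Fake tractography: 1000 streamlines, each resampled to 20 points in 3D.
    streamlines = [np.cumsum(np.random.randn(20, 3), axis=0) for _ in range(1000)]
    X = dissimilarity_embedding(streamlines)
    labels = MiniBatchKMeans(n_clusters=50, n_init=3).fit_predict(X)
    print(X.shape, labels[:10])
```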
|
104 |
Long-term morphological response of tide-dominated estuaries. Todeschini, Ilaria. January 2006.
Most estuaries of the world are influenced by tides. The tidal action is a fundamental mechanism for mixing river and estuarine waters, resuspending and transporting sediments and creating bedforms. The planform of tide-dominated estuaries is characterized by a funnel-shaped geometry with a high width-to-depth ratio. The width of the estuarine section tends to decrease rapidly upstream, following an exponential law, and the bottom slopes are generally negligible.
In this thesis the long-term morphological evolution of tide-dominated estuaries is investigated through a relatively simple one-dimensional numerical model. When a reflective barrier condition is assigned at the landward boundary, the morphological evolution of an initially horizontal tidal channel is characterized by the formation of a sediment wave that migrates slowly landward until it leads to the emergence of a beach. The bottom profile reaches a dynamical equilibrium configuration which has been defined as the condition in which the tidally averaged sediment flux vanishes or, alternatively, the bottom elevation attains a constant value. For relatively short and weakly convergent estuaries, the beach is formed at the landward end of the channel, due to the reflective barrier chosen as boundary condition, and the equilibrium length coincides with the distance of such boundary from the mouth. When the above distance exceeds a threshold value, which decreases for increasing values of the degree of convergence of the channel, the beach forms within an internal section of the estuary and the final equilibrium length is much shorter and mainly governed by the convergence length. The final equilibrium length of the channel is found to depend mainly on two parameters, namely the physical length of the channel and the degree of convergence. Moreover, if a condition of vanishing sediment flux from the outer sea during the flood phase is imposed, a larger scour in the seaward part of the estuary is observed, though the overall longitudinal bottom profile does not differ much from that corresponding to a sediment influx equal to the equilibrium transport capacity at the mouth.
This fixed-banks model is not able to explain the typical funnel shape; furthermore, the bottom slopes obtained with this model are quite large compared with the mild slopes of real tide-dominated estuaries. For these reasons, the problem has been analysed to understand why tidal channels are convergent and to define the conditions under which the exponential law for width variation, which is so often observed in nature, is reproduced. The long-term evolution of the channel cross-section is investigated by allowing the width to vary with time. The one-dimensional model is extended with a simple way to take bank erosion into account. A strongly simplified approach is adopted, whereby only the effects related to flow and sediment transport processes within the tidal channel are retained, and further ingredients, like the control exerted by tidal flats or the direct effect of sea currents in the outer part of the estuary, are discarded. The lateral erosion is computed as a function of the bed shear stress, provided it exceeds a threshold value within the cross-section. The effect of different initial and boundary conditions on the widening process of the channel is tested, along with the role of the incoming river discharge.
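The following sketch illustrates the kind of one-dimensional morphodynamic update underlying such models. It is a heavily simplified toy with an assumed exponential width law, a power-law transport closure, and a placeholder tidal velocity field, not the model used in the thesis; the bed is advanced through an Exner-type equation, and equilibrium is reached when the tidally averaged sediment flux, and hence the bed change, vanishes.

```python
import numpy as np

def width(x, B0=1000.0, Lb=20e3):
    """Exponentially converging (funnel-shaped) width, B = B0 * exp(-x / Lb)."""
    return B0 * np.exp(-x / Lb)

def sediment_flux(u, alpha=1e-5, n=3):
    """Power-law sediment transport per unit width, q ~ u^n (assumed closure)."""
    return alpha * np.sign(u) * np.abs(u) ** n

def exner_step(eta, q, B, dx, dt, porosity=0.4):
    """(1 - p) B d(eta)/dt = -d(B q)/dx  (width-integrated Exner equation)."""
    dQdx = np.gradient(B * q, dx)
    return eta - dt * dQdx / ((1.0 - porosity) * B)

x = np.linspace(0.0, 50e3, 501)
dx = x[1] - x[0]
B = width(x)
eta = np.zeros_like(x)                              # initially horizontal bed
for step in range(1000):
    t = step * 600.0                                # 10-minute morphological step
    # Placeholder tidal velocity; a real model would solve the shallow-water equations.
    u = 1.0 * np.cos(2.0 * np.pi * t / 44700.0) * np.exp(-x / 30e3)
    eta = exner_step(eta, sediment_flux(u), B, dx, dt=600.0)

print("max bed change [m]:", float(np.max(np.abs(eta))))
```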
Another problem, somehow analogous to that of the cross-section evolution, is tackled: a portion of a tidal flat dissected by a series of parallel channels is considered and the response of the system induced by a modification of the depth of the channels is studied. In particular, the aim is to assess whether increasing the depth of one channel, starting from an equilibrium configuration, causes deposition in the others, inducing their closure. Evaluating the morphological effect of a depth variation in one channel upon the other channels is quite a relevant task, because the stability of salt marshes and lagoons is intrinsically related to the stability of the hydrodynamic functionality of the channels. A result of this analysis is the determination of a characteristic distance below which the channels influence each other. It is found that this distance scales with the square root of the longitudinal length of the flat; thus, a scale-dependent spacing is expected in tidal networks.
|
105 |
Automatic Techniques for the Synthesis and Assisted Deployment of Security Policies in Workflow-based Applications. dos Santos, Daniel Ricardo. January 2017.
Workflows specify a collection of tasks that must be executed under the responsibility or supervision of human users. Workflow management systems and workflow-driven applications need to enforce security policies in the form of access control, specifying which users can execute which tasks, and authorization constraints, such as Separation/Binding of Duty, further restricting the execution of tasks at run-time. Enforcing these policies is crucial to avoid frauds and malicious use, but it may lead to situations where a workflow instance cannot be completed without the violation of the policy. The Workflow Satisfiability Problem (WSP) asks whether there exists an assignment of users to tasks in a workflow such that every task is executed and the policy is not violated. The run-time version of this problem amounts to answering user requests to execute tasks positively if the policy is respected and the workflow instance is guaranteed to terminate. The WSP is inherently hard, but solutions to this problem have a practical application in reconciling business compliance (stating that workflow instances should follow the specified policies) and business continuity (stating that workflow instances should be deadlock-free). Related problems, such as finding execution scenarios that not only satisfy a workflow but also satisfy other properties (e.g., that a workflow instance is still satisfiable even in the absence of users), can be solved at deployment-time to help users design policies and reuse available workflow models. The main contributions of this thesis are three: 1. We present a technique to synthesize monitors capable of solving the run-time version of the WSP, i.e., capable of answering user requests to execute tasks in such a way that the policy is not violated and the workflow instance is guaranteed to terminate. The technique is extended to modular workflow specifications, using components and gluing assertions. This allows us to compose synthesized monitors, reuse workflow models, and synthesize monitors for large models. 2. We introduce and present techniques to solve a new class of problems called Scenario Finding Problems, i.e., finding execution scenarios that satisfy properties of interest to users. Solutions to these problems can assist customers during the deployment of reusable workflow models with custom authorization policies. 3. We implement the proposed techniques in two tools. Cerberus integrates monitor synthesis, scenario finding, and run-time enforcement into workflow management systems. Aegis recovers workflow models from web applications using process mining, synthesizes monitors, and invokes them at run-time by using a reverse proxy. An extensive experimental evaluation shows the practical applicability of the proposed approaches on realistic and synthetic (for scalability) problem instances.
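The following sketch illustrates the (offline) Workflow Satisfiability Problem itself, with a hypothetical policy and Separation/Binding-of-Duty constraints; it is a brute-force illustration, not the monitor-synthesis technique developed in the thesis:

```python
# Brute-force WSP: find a user-to-task assignment that respects the authorization
# policy and the SoD/BoD constraints (all example data below is hypothetical).
from itertools import product

tasks = ["request", "approve", "sign", "archive"]
users = ["alice", "bob", "carol"]

authorized = {
    "request": {"alice", "bob"},
    "approve": {"bob", "carol"},
    "sign":    {"alice", "carol"},
    "archive": {"alice", "bob", "carol"},
}
# ("SoD", t1, t2): different users must execute t1 and t2; ("BoD", t1, t2): same user.
constraints = [("SoD", "request", "approve"), ("BoD", "sign", "archive")]

def satisfies(assignment):
    if any(assignment[t] not in authorized[t] for t in tasks):
        return False
    for kind, t1, t2 in constraints:
        same = assignment[t1] == assignment[t2]
        if (kind == "SoD" and same) or (kind == "BoD" and not same):
            return False
    return True

def solve_wsp():
    for combo in product(users, repeat=len(tasks)):
        assignment = dict(zip(tasks, combo))
        if satisfies(assignment):
            return assignment
    return None   # unsatisfiable: compliance and continuity cannot be reconciled

print(solve_wsp())
```

A run-time monitor additionally has to answer requests one task at a time while guaranteeing that some completion of the instance remains possible, which is what the synthesized monitors described above provide.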
|
106 |
Privacy-Aware Risk-Based Access Control Systems. Metoui, Nadia. January 2018.
Modern organizations collect massive amounts of data, both internally (from their employees and processes) and externally (from customers, suppliers, partners). The increasing availability of these large datasets has been made possible by growing storage and processing capabilities. Therefore, from a technical perspective, organizations are now in a position to exploit these diverse datasets to create new data-driven businesses or optimize existing processes (real-time customization, predictive analytics, etc.). However, this kind of data often contains very sensitive information that, if leaked or misused, can lead to privacy violations. Privacy is becoming increasingly relevant for organizations and businesses, due to strong regulatory frameworks (e.g., the EU General Data Protection Regulation GDPR, the Health Insurance Portability and Accountability Act HIPAA) and the increasing awareness of citizens about personal data issues. Privacy breaches and failure to meet privacy requirements can have a tremendous impact on companies (e.g., reputation loss, noncompliance fines, legal actions). Privacy violation threats are not exclusively caused by external actors gaining access due to security gaps. Privacy breaches can also originate from internal actors, sometimes even trusted and authorized ones. As a consequence, most organizations prefer to strongly limit (even internally) the sharing and dissemination of data, thereby making most of the information unavailable to decision-makers and preventing the organization from fully exploiting the power of these new data sources. In order to unlock this potential while controlling the privacy risk, it is necessary to develop novel data sharing and access control mechanisms able to support risk-based decision making and weigh the advantages of information against privacy considerations. To achieve this, access control decisions must be based on a (dynamically assessed) estimation of expected costs and benefits compared to the risk, and not (as in traditional access control systems) on a predefined policy that statically defines which accesses are allowed and which are denied. In risk-based access control, the risk corresponding to each access request is estimated; if the risk is lower than a given threshold (possibly related to the trustworthiness of the requester), access is granted, otherwise it is denied. The aim is to be more permissive than in traditional access control systems by allowing for a better exploitation of data. Although existing risk-based access control models provide an important step towards better management and exploitation of data, they have a number of drawbacks which limit their effectiveness. In particular, most of the existing risk-based systems only support binary access decisions: the outcome is “allowed” or “denied”, whereas in real life we often have exceptions based on additional conditions (e.g., “I cannot provide this information, unless you sign the following non-disclosure agreement.” or “I cannot disclose this data, because they contain personal identifiable information, but I can disclose an anonymized version of the data.”). In other words, the system should be able to propose risk mitigation measures to reduce the risk (e.g., disclose a partial or anonymized version of the requested data) instead of denying risky access requests.
Alternatively, it should be able to propose appropriate trust enhancement measures (e.g., stronger authentication), and once they are accepted/fulfilled by the requester, more information can be shared. The aim of this thesis is to propose and validate a novel privacy-enhancing access control approach offering adaptive and fine-grained access control for sensitive datasets. This approach enhances access to data, but it also mitigates privacy threats originated by authorized internal actors. In more detail: 1. We demonstrate the relevance and evaluate the impact of authorized actors’ threats. To this aim, we developed a privacy threats identification methodology, EPIC (Evaluating Privacy violation rIsk in Cyber security systems), and apply EPIC in a cybersecurity use case where very sensitive information is used. 2. We present the privacy-aware risk-based access control framework that supports access control in dynamic contexts through trust enhancement mechanisms and privacy risk mitigation strategies. This allows us to strike a balance between the privacy risk and the trustworthiness of the data request. If the privacy risk is too large compared to the trust level, then the framework can identify adaptive strategies that decrease the privacy risk (e.g., by removing/obfuscating part of the data through anonymization) and/or increase the trust level (e.g., by asking the requester for additional obligations). 3. We show how the privacy-aware risk-based approach can be integrated with existing access control models such as RBAC and ABAC, and that it can be realized using a declarative policy language, with a number of advantages including usability, flexibility, and scalability. 4. We evaluate our approach using several industrially relevant use cases, elaborated to meet the requirements of the industrial partner (SAP) of this industrial doctorate.
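The decision logic described above can be sketched as follows; all numerical values, the risk formula, and the mitigation/obligation effects are invented for illustration and do not reflect the framework's actual risk model:

```python
# Risk-based access decision that can propose a mitigation (anonymized view) or a
# trust enhancement (e.g. an NDA) instead of a flat deny. All numbers are toy values.
from dataclasses import dataclass

@dataclass
class Request:
    requester_trust: float   # 0..1, dynamically assessed trustworthiness
    data_sensitivity: float  # 0..1, e.g. presence of personal identifiable information
    purpose_weight: float    # 0..1, how much the stated purpose justifies the access

def decide(req: Request) -> str:
    risk = req.data_sensitivity * (1.0 - req.purpose_weight)   # toy risk estimate
    if risk <= req.requester_trust:
        return "ALLOW"
    # Mitigation: anonymization lowers sensitivity, hence the residual risk.
    anonymized_risk = 0.3 * risk
    if anonymized_risk <= req.requester_trust:
        return "ALLOW anonymized view of the data"
    # Trust enhancement: an accepted obligation (e.g. NDA) raises the acceptable threshold.
    if risk <= req.requester_trust + 0.2:
        return "ALLOW after requester accepts an additional obligation (e.g. NDA)"
    return "DENY"

print(decide(Request(requester_trust=0.4, data_sensitivity=0.9, purpose_weight=0.5)))
```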
|
107 |
Distributed Computing for Large-scale Graphs. Guerrieri, Alessio. January 2015.
The last decade has seen increased attention on large-scale data analysis, driven mainly by the availability of new sources of data and the development of programming models that allow their analysis. Since many of these sources can be modeled as graphs, many large-scale graph processing frameworks have been developed, from vertex-centric models such as Pregel to more complex programming models that allow asynchronous computation, can tackle dynamism in the data, and permit the usage of different amounts of resources. This thesis presents theoretical and practical results in the area of distributed large-scale graph analysis by giving an overview of the entire pipeline. Data must first be pre-processed to obtain a graph, which is then partitioned into subgraphs of similar size. To analyze this graph the user must choose a system and a programming model that matches her available resources, the type of data and the class of algorithm to execute. Aside from an overview of all these different steps, this research presents three novel approaches to those steps. The first main contribution is DFEP, a novel distributed partitioning algorithm that divides the edge set into similarly sized partitions. DFEP can obtain partitions of good quality in only a few iterations. The output of DFEP can then be used by ETSCH, a graph processing framework that uses partitions of edges as the focus of its programming model. ETSCH’s programming model is shown to be flexible and can easily reuse sequential classical graph algorithms as part of its workflow. Implementations of ETSCH in Hadoop, Spark, and Akka allow for a comparison of those systems and a discussion of their advantages and disadvantages. The implementation of ETSCH in Akka is by far the fastest and is able to process billion-edge graphs faster than competitors such as GPS, Blogel, and Giraph++, while using only a few computing nodes. A final contribution is an application study of graph-centric approaches to word sense induction and disambiguation: from a large set of documents a word graph is constructed and then processed by a graph clustering algorithm, to find documents that refer to the same entities. A novel graph clustering algorithm, named TOVEL, uses a diffusion-based approach inspired by the cycle of water.
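To illustrate what edge partitioning means in this setting, the sketch below uses a generic greedy heuristic (not DFEP itself): each edge is placed in a partition, preferring partitions that already contain one of its endpoints while keeping partition sizes roughly balanced.

```python
# Generic greedy edge partitioning (illustrative heuristic, not the DFEP algorithm).
from collections import defaultdict

def greedy_edge_partition(edges, k):
    cap = len(edges) / k + 1                     # soft balance cap per partition
    sizes = [0] * k
    vertex_parts = defaultdict(set)              # partitions each vertex already touches
    assignment = {}
    for (u, v) in edges:
        candidates = (vertex_parts[u] | vertex_parts[v]) or set(range(k))
        p = min(candidates, key=lambda i: sizes[i])
        if sizes[p] >= cap:                      # locality would unbalance too much
            p = min(range(k), key=lambda i: sizes[i])
        assignment[(u, v)] = p
        sizes[p] += 1
        vertex_parts[u].add(p)
        vertex_parts[v].add(p)
    return assignment, sizes

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3), (4, 5), (5, 6), (6, 4)]
assignment, sizes = greedy_edge_partition(edges, k=2)
print(sizes, assignment)
```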
|
108 |
Formal Proofs of Security for Privacy-Preserving Blockchains and other Cryptographic Protocols. Longo, Riccardo. January 2018.
Cryptography is used to protect data and communications. The basic tools are cryptographic primitives, whose security and efficiency are widely studied. But in real-life applications these primitives are not used individually; they are combined inside complex protocols. The aim of this thesis is to analyse various cryptographic protocols and assess their security in a formal way. In chapter 1 the concept of formal proofs of security is introduced, the main categorisation of attack scenarios and types of adversary is presented, and the protocols analysed in the thesis are briefly introduced with some motivation. Chapter 2 presents the security assumptions used in the proofs of the following chapters, distinguishing between the hardness of algebraic problems and the strength of cryptographic primitives. Once these foundations are given, the first protocols are analysed in chapter 3, where two Attribute-Based Encryption schemes are proven secure. First, context and motivation are introduced, presenting cloud-encryption settings alongside the tools used to build ABE schemes. Then the first scheme, which introduces multiple authorities in order to improve privacy, is explained in detail and proven secure. Finally the second scheme is presented as a variation of the first one, with the aim of improving efficiency by performing a round of collaboration between the authorities. The next protocol analysed is a tokenization algorithm for the protection of credit cards. In chapter 4 the advantages of tokenization and the regulations required by the banking industry are presented, and a practical algorithm is proposed and proven secure and compliant with the standard. In chapter 5 the focus is on the BIX Protocol, which builds a chain of certificates in order to decentralize the role of certificate authorities. First the protocol and the structure of the certificates are introduced, then two attack scenarios are presented and the protocol is proven secure in these settings. Finally a viable attack vector is analysed, and a mitigation approach is discussed. Chapter 6 presents an original approach to building a public ledger with end-to-end encryption and a one-time-access property, which make it suitable for storing sensitive data. Its security is studied in a variety of attack scenarios, giving proofs based on standard algebraic assumptions. The last protocol, presented in chapter 7, uses a proof-of-stake system to maintain the consistency of subchains built on top of the Bitcoin blockchain, using only standard Bitcoin transactions. Particular emphasis is given to the analysis of the refund policies employed, proving that the naive approach is always ineffective, whereas the chosen policy discourages attackers whose stake falls below a threshold that may be adjusted by varying the protocol parameters.
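As a purely illustrative aside on the tokenization idea discussed for chapter 4 (this is not the algorithm proposed and proven secure in the thesis), a token vault replaces a card number with a random surrogate and keeps the mapping in a protected store:

```python
# Toy tokenization vault: the real PAN never leaves the vault, downstream systems
# only see the surrogate token. Illustrative only; collision handling, format
# preservation, and the security proof are exactly what the real scheme must address.
import secrets

class TokenVault:
    def __init__(self):
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        # A random surrogate of the same length as the card number.
        token = "".join(secrets.choice("0123456789") for _ in range(len(pan)))
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_pan[token]   # callable only inside the secure vault

vault = TokenVault()
t = vault.tokenize("4111111111111111")
print(t, vault.detokenize(t) == "4111111111111111")
```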
|
109 |
An Organizational Approach to the Polysemy Problem in WordNet. Freihat, Abed Alhakim. January 2014.
Polysemy in WordNet corresponds to various kinds of linguistic phenomena that can be grouped into five classes. One of them is homonymy, which refers to the cases where the meanings of a term are unrelated, and three of the classes refer to the polysemy cases where the meanings of a term are related. These three classes are specialization polysemy, metonymy, and metaphoric polysemy. Another polysemy class is compound noun polysemy. In this thesis, we focus on compound noun polysemy and specialization polysemy. Compound noun polysemy corresponds to the cases where we use the modified noun to refer to a compound noun. Specialization polysemy is a type of related polysemy referring to the polysemy cases when a term is used to refer to either a more general meaning or a more specific meaning. Compound noun polysemy and specialization polysemy in WordNet are considered the main reasons behind the highly polysemous nature of WordNet, which makes WordNet redundant and too fine grained for natural language processing. Another problem in WordNet is its polysemy representation. WordNet represents polysemous terms by capturing their different meanings at the lexical level, but without giving emphasis to the polysemy classes these terms belong to. The highly polysemous nature of WordNet and its polysemy representation affect its usability as a suitable knowledge representation resource for natural language processing applications. In fact, the polysemy problem in WordNet is a challenging problem for natural language processing applications, especially in the field of information retrieval and semantic search. To solve this problem, many approaches have been suggested. Although all the state-of-the-art approaches are good at solving the polysemy problem partially, they do not give a general solution for it. In this thesis, we propose a novel approach to solve the compound noun and specialization polysemy problem in WordNet in the case of nouns. Solving the compound noun polysemy and the specialization polysemy problem is an important step that enhances the usability of WordNet as a knowledge representation resource. The proposed approach is not an alternative to the existing approaches. It is a complementary solution to the state-of-the-art approaches, especially the systematic polysemy approaches.
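A small example of the fine-grained polysemy at issue, assuming NLTK and its WordNet corpus are installed: a single noun lemma such as "bank" maps to many synsets, some related (specialization polysemy) and some unrelated (homonymy), which downstream applications must disambiguate.

```python
# List the noun senses WordNet records for a polysemous lemma.
from nltk.corpus import wordnet as wn

for synset in wn.synsets("bank", pos=wn.NOUN):
    print(synset.name(), "-", synset.definition())
```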
|
110 |
Wake-up Radio based Approach to Low-Power and Low-Latency Communication in the Internet of Things. Piyare, Rajeev. January 2019.
For the Internet of Things to flourish, a long-lasting energy supply for remotely deployed large-scale sensor networks is of paramount importance. An uninterrupted power supply is required by these nodes to carry out tasks such as sensing, data processing, and data communication. Of these, radio communication remains the primary battery-consuming activity in wireless systems. Advances in MAC protocols have enabled significant lifetime improvements by putting the main transceiver in sleep mode for extended periods. However, the sensor nodes still waste energy due to two main issues. First, the nodes periodically wake up to sample the channel even when there is no data for them to receive, leading to idle listening costs. Second, the sending node must repeatedly transmit packets until the receiver wakes up and acknowledges receipt, leading to energy wastage due to over-transmission. In systems with low data rates, idle listening and over-transmission can begin to dominate energy costs. In this thesis, we take a novel hardware approach to eliminate this energy overhead in WSNs by adding a second, extremely low-power wake-up radio component. This approach leverages an always-on wake-up receiver to delegate the task of listening to the channel for a trigger, waking up a higher-power transceiver only when required. With this on-demand approach, energy-constrained devices are able to drastically reduce power consumption without sacrificing the application requirements in terms of reliability and network latency. As a first major contribution, we survey a large body of work to identify the benefits and limitations of current wake-up radio hardware technology. We also present a new taxonomy for categorizing wake-up radios and the respective protocols, further highlighting the main issues and challenges that must be addressed while designing systems based on wake-up radios. Our survey forms a guideline for assisting application and system designers in making appropriate choices while utilizing this new technology. Secondly, this thesis proposes a first-ever benchmarking framework to enable accurate and repeatable profiling of wake-up radios. Specifically, we outline a set of specifications to follow when benchmarking wake-up radio-based systems, leading to more consistent and therefore comparable evaluations, whether in simulation or on a testbed, for current and future systems. To quantitatively assess whether wake-up technology can provide energy savings superior to duty-cycled MACs, reliable tools are required to accurately model the wake-up radio hardware and its performance in combination with the upper layers of the stack. As our third contribution, we provide an open-source simulator, WaCo, for the development and evaluation of wake-up radio protocols across all layers of the software stack. Using our tool together with a newly proposed wake-up radio MAC layer, we provide an exhaustive evaluation of the wake-up radio system for periodic data collection applications. Our evaluations highlight that wake-up technology is indeed effective in extending the network lifetime by shrinking the overall energy consumption. To close the gap between simulation and real-world experiments, we adopt cutting-edge wake-up radio hardware and build Wake-up Lab, a modular dual-radio prototype. Using our Wake-up Lab, we thoroughly evaluate the performance of the wake-up radio solution in a realistic office environment.
Our in-depth system-wide evaluation reveals that wake-up radio-based systems can achieve significant improvements over traditional duty-cycling MACs by eliminating periodic receive checks and reducing unnecessary main-radio transmissions, while maintaining end-to-end latency on the order of tens of milliseconds in a multi-hop network. As a step toward sustainable wireless sensing, this thesis presents a proof-of-concept system in which an extremely low-power switch coupled with a wake-up receiver is continuously powered by a plant microbial fuel cell (PMFC), together with a new receiver-initiated MAC-level communication protocol for on-demand data collection. An MFC converts chemical energy into electricity by exploiting the metabolism of bacteria found in the sediment, thus offering a promising power source for autonomous sensing systems. However, sources such as PMFCs are severely limited in the quantity of energy they can generate and are unable to directly power the sensor nodes. Therefore, we consider radical hardware solutions in combination with the communication stack to reduce this power gap. Thanks to the hardware-software co-design proposed above, we were able to reduce the overall power consumption to a point where an extremely low-power PMFC source can sustain the sensor node’s operation with a data sampling period of over 30 seconds. Finally, we propose to enhance LoRa-based low-power wide area networks by fusing wake-up receivers and long-range wireless technologies. The current LoRaWAN architecture is mainly designed and optimized for uplinks, where the remote end devices disseminate data to the gateway using pure ALOHA techniques. As such, this limits the ability of the gateway to control, reconfigure, or query specific end devices, which is crucial for many Internet of Things applications. To shift the communication modality from push-based to pull-based, we propose a new network architecture that leverages a wake-up receiver and a receiver-initiated on-demand TDMA MAC. The former allows the gateway to trigger a remote device when there is data to be collected and otherwise keep the device in sleep mode, while the latter allows retrieving data efficiently from the nodes without congesting the network. Our testbed experiments reveal that the proposed system significantly improves energy efficiency, offering network reliability of 100% with end devices dissipating only a few microwatts of power during periods of inactivity. By moving away from the realm of pure ALOHA communication to wake-up receivers, we were able to exploit the low-power modes of the sensor node more effectively. Through these contributions, this thesis pushes forward the applicability of ultra-low-power wake-up radios by quantitatively measuring the trade-offs among energy efficiency, reliability, and latency. Further, by demonstrating superior performance via proofs of concept, this thesis provides a stepping stone towards the goal of achieving energy-neutral, yet responsive, communication systems using wake-up radio technology.
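A back-of-the-envelope sketch of the energy argument made throughout the thesis is given below; all current and timing figures are assumed illustrative numbers, not measurements reported in the evaluations:

```python
# Compare the average current draw of a duty-cycled main radio, which periodically
# samples the channel (idle listening), with an always-on wake-up receiver that
# triggers the main radio only when data must actually be exchanged. Toy numbers.

def duty_cycled_avg_current(wakeup_period_s, sample_s=0.005, rx_ma=20.0, sleep_ua=5.0):
    """Main radio wakes every `wakeup_period_s` seconds to sample the channel."""
    duty = sample_s / wakeup_period_s
    return duty * rx_ma + (1 - duty) * sleep_ua / 1000.0                    # mA

def wake_up_radio_avg_current(events_per_hour, event_s=0.05, rx_ma=20.0,
                              wur_ua=2.0, sleep_ua=5.0):
    """Main radio sleeps; a microwatt-class wake-up receiver listens continuously and
    the main radio is switched on only for `event_s` seconds per collection event."""
    active = events_per_hour * event_s / 3600.0
    return active * rx_ma + (1 - active) * (wur_ua + sleep_ua) / 1000.0     # mA

print("duty-cycled (1 s period):  ", round(duty_cycled_avg_current(1.0), 4), "mA")
print("wake-up radio (6 events/h):", round(wake_up_radio_avg_current(6), 4), "mA")
```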
|