111

Wake-up Radio based Approach to Low-Power and Low-Latency Communication in the Internet of Things

Piyare, Rajeev January 2019 (has links)
For the Internet of Things to flourish, a long-lasting energy supply for remotely deployed large-scale sensor networks is of paramount importance. These nodes require an uninterrupted power supply to carry out tasks such as sensing, data processing, and data communication. Of these, radio communication remains the primary battery-consuming activity in wireless systems. Advances in MAC protocols have enabled significant lifetime improvements by putting the main transceiver in sleep mode for extended periods. However, sensor nodes still waste energy due to two main issues. First, the nodes periodically wake up to sample the channel even when there is no data for them to receive, leading to idle-listening costs. Second, the sending node must repeatedly transmit packets until the receiver wakes up and acknowledges receipt, wasting energy through over-transmission. In low-data-rate systems, idle listening and over-transmission can begin to dominate energy costs. In this thesis, we take a novel hardware approach to eliminating this energy overhead in WSNs by adding a second, extremely low-power wake-up radio component. This approach leverages an always-on wake-up receiver that is delegated the task of listening to the channel for a trigger and then waking up the higher-power transceiver when required. With this on-demand approach, energy-constrained devices can drastically reduce power consumption without sacrificing application requirements in terms of reliability and network latency. As a first major contribution, we survey a large body of work to identify the benefits and limitations of current wake-up radio hardware technology. We also present a new taxonomy for categorizing wake-up radios and their respective protocols, further highlighting the main issues and challenges that must be addressed when designing systems based on wake-up radios. 
Our survey serves as a guideline to assist application and system designers in making appropriate choices when adopting this new technology. Secondly, this thesis proposes a first-ever benchmarking framework to enable accurate and repeatable profiling of wake-up radios. Specifically, we outline a set of specifications to follow when benchmarking wake-up radio-based systems, leading to more consistent and therefore comparable evaluations, whether in simulation or on a testbed, for current and future systems. To quantitatively assess whether wake-up technology can provide energy savings superior to duty-cycled MACs, reliable tools are required to accurately model the wake-up radio hardware and its performance in combination with the upper layers of the stack. As our third contribution, we provide WaCo, an open-source simulator for the development and evaluation of wake-up radio protocols across all layers of the software stack. Using our tool together with a newly proposed wake-up radio MAC layer, we provide an exhaustive evaluation of the wake-up radio system for periodic data collection applications. Our evaluations highlight that wake-up technology is indeed effective in extending network lifetime by shrinking overall energy consumption. To close the gap between simulation and real-world experiments, we adopt cutting-edge wake-up radio hardware and build Wake-up Lab, a modular dual-radio prototype. Using Wake-up Lab, we thoroughly evaluate the performance of the wake-up radio solution in a realistic office environment. Our in-depth, system-wide evaluation reveals that wake-up radio-based systems can achieve significant improvements over traditional duty-cycling MACs by eliminating periodic receive checks and reducing unnecessary main-radio transmissions, while maintaining end-to-end latency on the order of tens of milliseconds in a multi-hop network. 
As a step toward sustainable wireless sensing, this thesis presents a proof-of-concept system in which an extremely low-power switch coupled with a wake-up receiver is continuously powered by a plant microbial fuel cell (PMFC), together with a new receiver-initiated MAC-level communication protocol for on-demand data collection. A PMFC converts chemical energy into electricity by exploiting the metabolism of bacteria found in the sediment, thus offering a promising power source for autonomous sensing systems. However, sources such as PMFCs are severely limited in the quantity of energy they can generate and are unable to directly power the sensor nodes. Therefore, we consider radical hardware solutions in combination with the communication stack to reduce this power gap. Thanks to the hardware-software co-design proposed above, we were able to reduce overall power consumption to a point where an extremely low-power PMFC source can sustain the sensor node's operation at a data sampling interval of over 30 seconds. Finally, we propose to enhance LoRa-based low-power wide area networks by fusing wake-up receivers and long-range wireless technologies. The current LoRaWAN architecture is mainly designed and optimized for uplinks, where remote end devices disseminate data to the gateway using pure ALOHA techniques. This limits the ability of the gateway to control, reconfigure, or query specific end devices, which is crucial for many Internet of Things applications. To shift the communication modality from push-based to pull-based, we propose a new network architecture that leverages a wake-up receiver and a receiver-initiated on-demand TDMA MAC. The former allows the gateway to trigger a remote device when there is data to be collected and otherwise keep the device in sleep mode, while the latter allows retrieving data efficiently from the nodes without congesting the network. 
Our testbed experiments reveal that the proposed system significantly improves energy efficiency, offering network reliability of 100% with end devices dissipating only a few microwatts of power during periods of inactivity. By moving away from the realm of pure ALOHA communication to wake-up receivers, we were able to exploit the low-power modes of the sensor node more effectively. Through these contributions, this thesis pushes forward the applicability of ultra-low-power wake-up radios by quantitatively measuring the trade-offs among energy efficiency, reliability, and latency. Further, by demonstrating superior performance via proofs of concept, this thesis provides a stepping stone towards the goal of achieving energy-neutral, yet responsive, communication systems using wake-up radio technology.
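The idle-listening versus wake-up-receiver trade-off described in this abstract can be illustrated with a back-of-the-envelope energy model. All current draws, timings, and event counts below are invented for illustration and are not taken from the thesis:

```python
# Illustrative daily energy budget: duty-cycled MAC vs. wake-up radio.
# All power and timing figures are assumed, not measured values.

def duty_cycle_energy(hours, wakeups_per_s, listen_ms, rx_mw, sleep_uw):
    """Energy (J) spent with periodic channel sampling (idle listening)."""
    seconds = hours * 3600
    listen_s = seconds * wakeups_per_s * (listen_ms / 1000)
    sleep_s = seconds - listen_s
    return listen_s * rx_mw / 1000 + sleep_s * sleep_uw / 1e6

def wake_up_radio_energy(hours, wur_uw, events, rx_ms, rx_mw):
    """Energy (J) with an always-on wake-up receiver plus on-demand main radio."""
    seconds = hours * 3600
    # Wake-up receiver draws microwatts continuously; the main radio
    # turns on only for the actual data exchanges ("events").
    return seconds * wur_uw / 1e6 + events * (rx_ms / 1000) * rx_mw / 1000

dc = duty_cycle_energy(hours=24, wakeups_per_s=8, listen_ms=3, rx_mw=60, sleep_uw=15)
wur = wake_up_radio_energy(hours=24, wur_uw=8, events=96, rx_ms=50, rx_mw=60)
```

Under these assumed numbers, the always-on wake-up receiver consumes orders of magnitude less energy per day than periodic channel sampling, which is the intuition behind the on-demand approach.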
112

Modeling and Reasoning about Contextual Requirements: Goal-based Framework

Ali, Raian January 2010 (has links)
Most requirements engineering (RE) research ignores, or presumes a uniform nature of, the context in which the system operates. This assumption is no longer valid in emerging computing paradigms, such as ambient, pervasive and ubiquitous computing, where it is essential to monitor and adapt to an inherently varying context. There is a strong relationship between requirements and context. Context may determine the set of requirements relevant to a system, the alternatives the system can adopt to reach these requirements, and the quality of each alternative. An RE framework that explicitly captures and analyzes this relationship is still missing. Before influencing the behavior of software, context influences the behavior of users: it influences users' goals and their choices for reaching these goals. Capturing this latter influence is essential for software developed to meet users' requirements in different contexts. In this thesis, we propose a goal-oriented RE modeling and reasoning framework for systems operating in, and reflecting, varying contexts. To this end, we develop a conceptual modeling language, the contextual goal model, that captures the relationship between context and requirements at the goal level and provides constructs to analyze context. We develop a set of reasoning mechanisms to analyze contextual goal models, addressing various problems: the consistency of context, the derivation of requirements in different contexts, the detection of conflicts between requirements arising as a consequence of the context changes they lead to, and the derivation of a set of requirements that leads to a system developed at minimum cost and operable in all of the analyzed contexts. We develop a formal framework, a CASE tool, and a methodological process to assist analysts in using our modeling and reasoning RE framework. 
We evaluate our proposed RE framework by applying it to two systems: a smart home for patients with dementia and a museum-guide mobile information system. Our contribution to RE research is a framework specialized for emerging computing paradigms that weave together software and context. It allows us to overcome the limitation of existing RE frameworks that ignore, or presume a uniform nature of, the context in which the system operates.
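To make the idea of context-dependent requirement derivation concrete, here is a minimal sketch. The goal, its alternatives, and the context labels are hypothetical examples, and the actual framework uses far richer goal models and reasoning than this simple set-inclusion check:

```python
# Hypothetical sketch of a contextual goal model: OR-alternatives for a
# goal are guarded by context conditions, and only alternatives whose
# condition holds in the active context are derivable requirements.

GOAL_MODEL = {
    "remind_patient": [
        {"means": "voice_prompt", "context": {"patient_at_home"}},
        {"means": "sms_to_caregiver", "context": {"patient_outside"}},
        {"means": "screen_notification", "context": set()},  # context-free
    ],
}

def derive_alternatives(goal, active_context):
    """Return the alternatives whose context condition is satisfied."""
    return [alt["means"] for alt in GOAL_MODEL[goal]
            if alt["context"] <= active_context]

# With the patient at home, the SMS alternative is not derivable.
alts = derive_alternatives("remind_patient", {"patient_at_home"})
```

The same model then supports the analyses mentioned in the abstract, e.g. detecting contexts in which no alternative is derivable, or choosing the cheapest set of alternatives covering all analyzed contexts.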
113

Privacy elicitation and utilization in distributed data exchange systems

Chiasera, Annamaria January 2012 (has links)
Recently we have been witnessing the advent of many data integration projects that allow the cooperation of systems in the most disparate fields (healthcare, finance, education, public security). This trend responds to the increasing need for data to monitor, compare, correlate and analyse the distributed business processes managed by different institutions and companies for different purposes. As the availability of data in electronic form increases, so does the risk of improper use of sensitive information. In this thesis we focus on the problem of realising an infrastructure for the data and application integration of systems in the healthcare domain. Our solution is compliant with privacy regulations, reconciling the visibility requirements of the institutional data consumers with the data subjects' needs for control and protection. It is an event-based solution that captures the processes going on between the systems to be integrated in a way that is flexible, decoupled and adherent to reality. Our solution enables the sharing of very fine-grained pieces of information with a wide range of consumers while still allowing the producers to control who can see what and for what purposes. The architecture minimizes the transit of sensitive information and controls the distribution of events and their content at a very fine-grained level. In this thesis we also take into account the impact of the proposed solution on existing systems, ensuring that the effort of companies and institutions in adopting the infrastructure is minimized. As legal privacy regulations are most of the time quite distant from unambiguous IT requirements, we investigate the problem of privacy constraints elicitation. Typically, privacy constraints are defined manually, with a tedious procedure, by IT experts based on the desiderata of the users. 
This approach does not always yield the best results, as designers lack the domain knowledge required to produce complete, meaningful and not over-constraining privacy requirements. We believe the user holds the knowledge of the domain and of the data that is necessary to define privacy constraints at the right level of granularity. In particular, we provide a novel approach to privacy constraints elicitation based on interaction with the user. Our approach derives, from high-level indications given by the user, a concise definition of the privacy constraints directly applicable to the underlying database. Such constraints can then be used to further restrict the data values that can appear in a report.
114

Towards structured representation of academic search results

Mirylenka, Daniil January 2015 (has links)
Searching for scientific publications is a tedious task, especially when exploring an unfamiliar domain. Typical scholarly search engines produce lengthy unstructured result lists, which are difficult to comprehend, interpret and browse. An informative visual summary could convey useful information about the returned results as a whole, without the need to sift through individual publications. The first contribution of this thesis is a novel method of representing academic search results with concise and informative topic maps. The method consists of two steps: i) extracting interrelated topics from the publication titles and abstracts, and ii) summarizing the resulting topic graph. In the first step we map the returned publications to articles and categories of Wikipedia, constructing a graph of relevant topics with hierarchical relations. In the second step we sequentially build a summary of the topic graph that represents the search results in the most informative way, relying on sequential prediction to automatically learn to build informative summaries from examples. The resulting topic maps share most of the benefits, and avoid most of the drawbacks, of current methods for grouping documents, such as clustering, topic models, and predefined taxonomies. Specifically, the topic maps are dynamic, fine-grained, of flexible granularity, with up-to-date topics connected by informative relations and carrying meaningful, concise labels. The second contribution of this thesis is a method for bootstrapping domain-specific ontologies from the categories of Wikipedia. The method performs three steps: i) selecting the set of categories relevant to the domain, ii) classifying the categories into classes and individuals, and iii) classifying the sub-category relations into "subclass-of", "instance-of", "part-of" and "related-to". In each step we rely on binary classification, which makes the method flexible and easily extensible with new features. 
For the purpose of academic search, the proposed method advances the creation of semantically rich topic maps. More generally, it semi-automates the construction of large-scale domain ontologies, benefiting multiple potential applications. Providing ground-truth data for structured prediction of large objects, such as topic map summaries or domain ontologies, is tedious. The last contribution of this thesis is an initial investigation into reducing the labeling effort in structured prediction tasks. First, we present a labeling interface that suggests topics to be added to the ground-truth topic map summary. We modify a state-of-the-art sequential prediction method to iteratively learn from the summaries one topic at a time, while retaining its convergence guarantees. Second, we present an interactive learning method for selecting the categories of Wikipedia relevant to a given domain. The method reduces the number of required labels by actively selecting the queries posed to the annotator and learning one label at a time.
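The summary-building loop over the topic graph can be sketched as a greedy maximum-coverage heuristic. Note that the thesis learns the summarization policy via sequential prediction rather than using a fixed heuristic, and the topic names and coverage sets below are invented:

```python
# Greedy sketch of topic-map summarization: repeatedly add the topic
# that covers the most not-yet-covered publications in the result set.

def greedy_summary(topic_coverage, k):
    """Pick up to k topics maximizing marginal coverage of publications."""
    covered, summary = set(), []
    for _ in range(k):
        best = max(topic_coverage, key=lambda t: len(topic_coverage[t] - covered))
        if not topic_coverage[best] - covered:
            break  # no topic adds new publications
        summary.append(best)
        covered |= topic_coverage[best]
    return summary

# Invented topics, each mapped to the publication ids it covers.
topics = {
    "Machine learning": {1, 2, 3, 4},
    "Neural networks": {2, 3},
    "Information retrieval": {5, 6},
    "Ontologies": {6, 7},
}
summary = greedy_summary(topics, 3)
```

A learned policy replaces the fixed `max` selection with a trained scoring function, which is what the sequential-prediction formulation in the thesis provides.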
115

Computational Approaches to Linguistic Creativity for Real World Applications

Ozbal, Gozde January 2013 (has links)
Recent years have witnessed a growing interest in computational linguistic creativity, a research field at the boundary between many disciplines, including natural language processing, linguistics, psychology, cognitive science and the humanities. Even though the state of the art in this field has been striding forward in the last decade, real-world applications of computational linguistic creativity are still uncommon. For comparison, computer-enhanced productivity software is significantly augmenting the skills of both casual users and professionals in other areas, such as image and signal processing. In this thesis, we advocate three main points that computational linguistic creativity should address to achieve a higher state of maturity and demonstrate its full potential: 1) a focus on real-world applications, in which state-of-the-art technology can be leveraged to offer solutions with practical utility for end users; 2) the adoption of an interactive paradigm in which technology collaborates with users to enhance their creativity instead of attempting to replace it; 3) the investigation of the explorative dimension of creativity, as a means to achieve the two previous points by offering users richer ways of interaction and more powerful tools that can solve a larger class of problems. We present three applications that we designed and developed to address these points: 1) a system for the interactive construction of creative names, designed as a support tool for copywriters; 2) a platform for the generation of memory tips for second language learning; 3) an explorative and general-purpose framework for creative sentence generation with the potential to be deployed in a wide range of settings, including advertisement, education and entertainment. All these platforms leverage state-of-the-art technology to deliver creative results with the potential to be useful for end users. 
We demonstrate this point through three different evaluations, showing that 1) the generated neologisms are appealing and successful, 2) the sentences we generate have many of the qualities of successful advertising slogans, and 3) they are effective mnemonic devices when used as memory aids for second language learning.
116

Gluing silting objects along recollements of well generated triangulated categories

Fabiano, Bonometti January 2019 (has links)
We provide an explicit procedure to glue (not necessarily compact) silting objects along recollements of triangulated categories with coproducts having a ‘nice’ set of generators, namely, well generated triangulated categories. This procedure is compatible with gluing co-t-structures and it generalizes a result by Liu, Vitória and Yang. We provide conditions for our procedure to restrict to tilting objects and to silting and tilting modules. As applications, we retrieve the classification of silting modules over the Kronecker algebra and the classification of non-compact tilting sheaves over a weighted noncommutative regular projective curve of genus 0.
117

Online Adaptive Neural Machine Translation: from single- to multi-domain scenarios

Farajian, Mohammad Amin January 2018 (has links)
In this thesis we investigate methods for deploying machine translation (MT) in real-world application scenarios related to the use of MT in computer-assisted translation (CAT), where human translators post-edit MT outputs. In particular, we investigate (in chronological order) MT adaptation under two working conditions: single-domain and multi-domain. In the former, we assume that the MT system receives requests from a single user working on a single domain, while in the latter we assume that it receives requests i) from multiple users working on different domains, ii) with no predefined order, and iii) without domain information. In the single-domain case, we first focus on word alignment, a core component of online adaptive phrase-based MT (PBMT) that is crucial for extracting features from a post-edited segment. In particular, we concentrate on improving word alignment in the presence of out-of-vocabulary words observed in the source sentences or introduced by the post-editor. In the multi-domain scenario we turn our focus to the neural MT (NMT) paradigm. In particular, we introduce a scalable solution that adapts a generic NMT model on the fly to each incoming translation request. It relies on a procedure that locally fine-tunes the model to each input sentence using samples retrieved from a pool of parallel data. Our instance-based adaptation uses a more general formulation of the log-likelihood approach to control the contribution of relevant and irrelevant words during model update. Finally, we test our approach in a simulated continuous learning setting, where the system receives user feedback in the form of post-edits.
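The retrieval step of instance-based adaptation can be sketched as follows. The word-overlap similarity and the example sentence pairs are simplifications chosen for illustration; the thesis' actual retrieval and similarity measures may differ, and in the real system the retrieved samples would then drive a local fine-tuning of the NMT model:

```python
# Sketch of per-sentence sample retrieval for on-the-fly adaptation:
# fetch the parallel pairs most similar to the incoming source sentence.

def similarity(a, b):
    """Jaccard overlap between the word sets of two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def retrieve(pool, query, k=2):
    """Top-k (source, target) pairs whose source is most similar to the query."""
    return sorted(pool, key=lambda p: similarity(p[0], query), reverse=True)[:k]

# Invented English-Italian parallel pool.
pool = [
    ("the valve controls pressure", "la valvola controlla la pressione"),
    ("press the red button", "premere il pulsante rosso"),
    ("the pump controls flow", "la pompa controlla il flusso"),
]
samples = retrieve(pool, "the valve controls flow", k=2)
# A real system would now fine-tune the NMT model on `samples`
# before translating the query sentence.
```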
118

Security of Publish/Subscribe Systems

Ion, Mihaela January 2013 (has links)
The increasing demand for content-centric applications has motivated researchers to rethink and redesign the way information is stored and delivered on the Internet. Increasingly, network traffic consists of content dissemination to multiple recipients. However, the host-centric architecture of the Internet was designed for point-to-point communication between two fixed endpoints. As a result, there is a mismatch between the current Internet architecture and current data- or content-centric applications, where users demand data regardless of the source of the information, which in many cases is unknown to them. Content-based networking has been proposed to address such demands, with the advantages of increased efficiency, reduced network load, low latency, and energy efficiency. The publish/subscribe (pub/sub) communication paradigm is the most complex and mature example of such a network. Another example is Information Centric Networking (ICN), a global-scale version of pub/sub that aims at evolving the Internet from host-based packet delivery to retrieving information directly by name. Both approaches completely decouple senders (or publishers) from receivers (or subscribers), making them very suitable for content-distribution or event-driven applications such as instant news delivery, stock quote dissemination, and pervasive computing. To enable this capability, at the core of pub/sub systems are distributed routers, or brokers, that forward information based on its content. The basic operation brokers need to perform is to match incoming messages, or publications, against registered interests, or subscriptions. Though a lot of research has focused on increasing networking efficiency, security has been only marginally addressed. We believe there are several reasons for this. 
First of all, security solutions designed for point-to-point communication, such as symmetric-key encryption, do not scale up to pub/sub systems or ICN applications, mainly because publishers and subscribers are decoupled and it is infeasible for them to establish or maintain contact, and therefore to exchange keying material. In this thesis we analyse several such emerging applications, like Smart Energy Systems, Smart Cities and eHealth applications, that require greater decoupling of publishers and subscribers, and possibly full decoupling. Second, in large applications that run over public networks and span several administrative domains, brokers cannot be trusted with the content of exchanged messages. Therefore, what pub/sub systems need are solutions that allow brokers to match publications against subscriptions without learning anything about their content. This task is made even more difficult when subscriptions are complex, representing conjunctions and disjunctions of both numeric and non-numeric inequalities. The solutions we surveyed were unable to provide publication and subscription confidentiality while at the same time supporting complex subscription filters and keeping key management scalable. Another challenge for publish/subscribe systems is enforcing fine-grained access control policies on the content of publications. Access control policies are usually enforced by a trusted third party or by the owner holding the data. However, such solutions are not possible in pub/sub systems. When brokers are not trusted, even the policies themselves should remain private, as they can reveal sensitive information about the data. 
In this thesis we address these challenges and design a novel security solution for pub/sub systems with untrusted brokers such that: (i) it provides confidentiality of publications and subscriptions, (ii) it does not require publishers and subscribers to share keys, (iii) it allows subscribers to express complex subscription filters in the form of general Boolean expressions of predicates, and (iv) it allows enforcing fine-grained access control policies on the data. We provide a security analysis of the scheme. We further consider active attackers that corrupt messages or try to disrupt the network by replaying old legitimate messages, as well as publishers and subscribers that could themselves misbehave, and provide solutions for data integrity, authentication and non-repudiation. Furthermore, to secure data caching and replication in the network (a key requirement for ICN systems and, recently, also for pub/sub systems that extend brokers with database functionality), we show how our solution can be transformed into an encrypted search solution able to index publications at the broker side and allow subscribers to issue encrypted queries. This is the first full-fledged multi-user encrypted search scheme that allows complex queries. We analyse the inference exposure of our index under different threat models. To allow our encrypted routing solution to scale up to large applications, or to performance-constrained applications that require real-time delivery of messages, we also discuss subscription indexing and the inference exposure of that index. Finally, we implement our solution as a set of middleware-agnostic libraries and deploy them on two popular content-based networking implementations: a pub/sub system called PADRES and an ICN called CCNx. Performance analysis shows that our solution is scalable.
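The broker's core operation, matching a publication against a Boolean combination of predicates, can be sketched in plaintext form as follows. The thesis performs this match over encrypted data; the encryption layer is omitted entirely here, and the stock-quote attributes are invented:

```python
# Plaintext sketch of broker-side matching: a subscription is either a
# predicate (a callable on the publication) or an ("and"/"or", ...) node
# combining sub-expressions.

def matches(publication, subscription):
    """Evaluate a Boolean subscription tree against a publication dict."""
    if callable(subscription):
        return subscription(publication)
    op, *subs = subscription
    combine = all if op == "and" else any
    return combine(matches(publication, s) for s in subs)

# Conjunction of an equality and a disjunction of numeric inequalities.
sub = ("and",
       lambda p: p["symbol"] == "ACME",
       ("or", lambda p: p["price"] > 100, lambda p: p["volume"] > 1e6))

assert matches({"symbol": "ACME", "price": 120, "volume": 500}, sub)
```

The difficulty the thesis tackles is performing exactly this evaluation when both the publication attributes and the predicates are encrypted, so the broker learns nothing beyond the match outcome.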
119

Study of Visual Clutter in Geographic Node-Link Diagrams

Debiasi, Alberto January 2016 (has links)
An increasing amount of geographic data is now freely available on the Internet, and this amount is expected to grow as monitoring systems and sensors become a ubiquitous part of our environment. A substantial subset of these data is structured as networks (or graphs). Notable examples are the import/export of goods, networks of climate stations, flight connections, migration flows and Internet traffic. To fully comprehend such networks, one needs to know both their geographic and their relational information. For this purpose, the most common visual representation is the node-link diagram, where vertices are depicted as points and the edges connecting them are drawn as lines. This thesis focuses on an instance of node-link diagrams where nodes are superimposed on a map and fixed according to their geographic position. The major issue of this visualization is the visual clutter of nodes and edges, defined in the literature as "the state in which excess items, or their representation or organization, lead to a degradation of performance at some task". In particular, nodes and links may cause occlusion and ambiguity in the graph representation. Such problems characterize a cluttered diagram because they reduce the potential usefulness of the visualization. The goal of this thesis is to advance the state of the art in graph visualization with respect to visual clutter. From the theoretical perspective, our goal is to acquire a deep understanding of existing approaches for visualizing geographic networks and for reducing visual clutter. Initially, we provide a classification of geographic node-link diagrams and a survey of techniques for reducing visual clutter in such visualizations. Afterwards, we present a schematization of these techniques that helps the reader decide, given a task and a geographic node-link diagram, which candidate solutions help reduce the visual clutter. 
From the practical perspective, our goal is the development of visualization and interaction techniques that overcome various issues of state-of-the-art approaches. On the one hand, we present an interactive lens that addresses the issue of organizing links over a map. On the other hand, we describe a deformation-based technique that reveals nodes and links otherwise hidden behind the globe's surface. Finally, we introduce a method that automatically generates flow map layouts starting from multivariate geographical datasets.
120

Multiple Tasks are Better than One: Multi-task Learning and Feature Selection for Head Pose Estimation, Action Recognition and Event Detection

Yan, Yan January 2014 (has links)
Computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images and videos and, in general, high-dimensional data from the real world in order to produce numerical or symbolic information. The classical problem in computer vision is determining whether or not the image or video data contains some specific object, feature, or activity. This task can normally be solved robustly and without effort by a human, but is still not satisfactorily solved in computer vision for the general case: arbitrary objects in arbitrary situations. Existing methods can at best solve it only for specific objects, such as simple geometric objects (e.g., polyhedra), human faces, printed or hand-written characters, or vehicles, and in specific situations, typically described in terms of well-defined illumination, background, and pose of the object relative to the camera. Machine learning (ML) and computer vision (CV) have grown increasingly intertwined over the past decade, and nowadays machine learning is considered a powerful tool for solving many computer vision problems. Multi-task learning, an important branch of machine learning, has developed rapidly in this period. Multi-task learning methods aim to simultaneously learn classification or regression models for a set of related tasks. This typically leads to better models than those of a learner that does not account for task relationships. The goal of multi-task learning is to improve the performance of learning algorithms by learning classifiers for multiple tasks jointly. This works particularly well if the tasks have some commonality and are generally slightly under-sampled.
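A toy example of the joint-learning idea: two related linear tasks trained with a penalty that pulls their parameters toward each other (a mean-regularized form of multi-task learning, used here only as an illustration; the thesis develops different formulations). The data, coupling strength, and learning settings are invented:

```python
# Two 1-D linear regression tasks sharing the true slope 2.0; task B is
# severely under-sampled. Joint training couples the task weights.

def fit_joint(tasks, coupling, lr=0.05, steps=2000):
    """Gradient descent on sum of squared errors + coupling * distance to mean weight."""
    w = [0.0 for _ in tasks]
    for _ in range(steps):
        grads = []
        for i, data in enumerate(tasks):
            g = sum(2 * (w[i] * x - y) * x for x, y in data)
            g += 2 * coupling * (w[i] - sum(w) / len(w))  # pull toward the mean
            grads.append(g)
        w = [wi - lr * gi for wi, gi in zip(w, grads)]
    return w

task_a = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # well sampled, slope ~2.0
task_b = [(1.0, 3.0)]                           # one noisy sample, alone gives 3.0
w_solo = fit_joint([task_a, task_b], coupling=0.0)
w_mtl = fit_joint([task_a, task_b], coupling=5.0)
```

With the coupling enabled, the under-sampled task's weight is pulled from its noisy solo estimate toward the value supported by the better-sampled related task, which is the intuition behind "multiple tasks are better than one".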
