351

PRACTICAL CONFIDENTIALITY-PRESERVING DATA ANALYTICS IN UNTRUSTED CLOUDS

Savvas Savvides (9113975) 27 July 2020 (has links)
Cloud computing offers a cost-efficient data analytics platform. This is enabled by constant innovations in tools and technologies for analyzing large volumes of data through distributed batch processing systems and real-time data through distributed stream processing systems. However, due to the sensitive nature of data, many organizations are reluctant to analyze their data in public clouds. To address this stalemate, both software-based and hardware-based solutions have been proposed, yet all have substantial limitations in terms of efficiency, expressiveness, and security. In this thesis, we present solutions that enable practical and expressive confidentiality-preserving batch and stream-based analytics. We achieve this by performing computations over encrypted data using Partially Homomorphic Encryption (PHE) and Property-Preserving Encryption (PPE) in novel ways, and by utilizing remote or Trusted Execution Environment (TEE) based trusted services where needed.

We introduce a set of extensions and optimizations to PHE and PPE schemes and propose the novel abstraction of Secure Data Types (SDTs), which enables the application of PHE and PPE schemes in ways that improve performance and security. These abstractions are leveraged to enable a set of compilation techniques making data analytics over encrypted data more practical. When PHE alone is not expressive enough to perform analytics over encrypted data, we use a novel planner engine to decide the most efficient way of utilizing client-side completion, remote re-encryption, or trusted hardware re-encryption based on Intel Software Guard eXtensions (SGX) to overcome the limitations of PHE. We also introduce two novel symmetric PHE schemes that allow arithmetic operations over encrypted data. Being symmetric, our schemes are more efficient than the state-of-the-art asymmetric PHE schemes without compromising the level of security or the range of homomorphic operations they support. We apply the aforementioned techniques in the context of batch data analytics and demonstrate the improvements over previous systems. Finally, we present techniques designed to enable the use of PHE and PPE in resource-constrained Internet of Things (IoT) devices and demonstrate the practicality of stream processing over encrypted data.
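The core enabler of PHE-based analytics, computing on ciphertexts without decrypting them, can be illustrated in a few lines. The sketch below uses the textbook Paillier cryptosystem, a standard asymmetric additive PHE scheme; it is not one of the symmetric schemes introduced in the thesis, and the toy primes are for illustration only.

```python
# A minimal sketch of additive partially homomorphic encryption using the
# textbook Paillier cryptosystem (asymmetric; the thesis's own symmetric PHE
# schemes are different). It demonstrates the key property the abstract
# relies on: multiplying ciphertexts adds the underlying plaintexts.
import math
import secrets

# Toy primes for illustration only; real deployments need >=1024-bit primes.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                            # standard choice of generator
lam = math.lcm(p - 1, q - 1)         # Carmichael's lambda(n)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m):
    # Fresh randomness per ciphertext; assumed coprime to n (overwhelmingly
    # likely here, so the explicit gcd check is omitted for brevity).
    r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

def add_encrypted(c1, c2):
    # Homomorphic addition: Enc(a) * Enc(b) mod n^2 decrypts to a + b.
    return (c1 * c2) % n2

a, b = 37, 105
total = decrypt(add_encrypted(encrypt(a), encrypt(b)))
assert total == a + b  # the sum was computed without ever decrypting a or b
```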
352

Intercultural development in global service-learning

Jones, Stephen W. 01 January 2011 (has links)
This research project examined the effects of participation in a six-month global service-learning program on the intercultural development of a group of students. The students under consideration herein participated in the 2009 program year of the Grace University EDGE Program, which took place in Mali, West Africa. The present research builds on and contributes to three primary areas of research: intercultural development, service-learning, and study abroad. As the literature in these areas revealed the lack of a consistent way to assess global service-learning, I employed a three-part method of assessment. First, the Intercultural Development Inventory (IDI) formally measured growth in intercultural competence. Second, guided course-writing generated by the students was used to facilitate follow-up interviews with most participants, especially considering the intersections between IDI results and students' self-perceptions as reported in their papers. Third, the interviews were coded and explored for information related to the process of intercultural development. Overall, the participants demonstrated positive intercultural competence gains while undergoing a complex process involving the impetus for and experience of development, ultimately resulting in changed patterns of thought.
353

Distributed Network Processing and Optimization under Communication Constraint

Chang Shen Lee (11184969) 26 July 2021 (has links)
In recent years, the amount of data in information processing systems has increased significantly, a phenomenon also referred to as big data. The design of systems handling big data calls for a scalable approach, which brings distributed systems into the picture. In contrast to centralized systems, data are spread across a network of agents, and agents cooperatively complete tasks through local communications and local computations. However, the design and analysis of distributed systems, in which no central coordinator with complete information is present, are challenging tasks. In order to support communication among agents and enable multi-agent coordination, practical communication constraints should be taken into consideration in the design and analysis of such systems. The focus of this dissertation is the design and analysis of distributed network processing using finite-rate communications among agents. In particular, we address the following open questions: 1) can one design algorithms balancing a graph weight matrix using finite-rate and simplex communications among agents? 2) can one design algorithms computing the average of agents' states using finite-rate and simplex communications? and 3) going beyond ad-hoc algorithmic designs, can one design a black-box mechanism transforming a general class of algorithms with unquantized communication into their finite-bit quantized counterparts?

This dissertation addresses the above questions. First, we propose novel distributed algorithms solving the weight-balancing and average consensus problems using only finite-rate simplex communications among agents, compliant with the directed nature of the network topology. A novel convergence analysis is put forth, based on a new metric inspired by positional system representations. In the second half of this dissertation, distributed optimization subject to quantized communications is studied. Specifically, we consider a general class of linearly convergent distributed algorithms cast as fixed-point iterates and propose a novel black-box quantization mechanism. In the proposed mechanism, a novel quantizer preserving linear convergence is proposed, which is proved to be more communication-efficient than state-of-the-art quantization mechanisms. Extensive numerical results validate our theoretical findings.
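To make the role of finite-rate communication concrete, the sketch below runs average consensus on a four-agent ring where every exchanged state passes through a uniform quantizer. This is a generic illustration of the problem setup, not the dissertation's algorithms: the mixing matrix, initial values, and quantization step are assumed, and the naive static quantizer shown here only drives the agents into a neighborhood of the average, precisely the limitation that motivates quantizers which preserve exact linear convergence.

```python
# A minimal sketch of average consensus under quantized communication.
# With a naive static uniform quantizer, iterates reach only a neighborhood
# of the average whose size is set by the quantization step.
import numpy as np

rng = np.random.default_rng(0)

# Doubly stochastic mixing matrix W for a 4-agent ring (Metropolis weights).
W = np.array([
    [0.5,  0.25, 0.0,  0.25],
    [0.25, 0.5,  0.25, 0.0 ],
    [0.0,  0.25, 0.5,  0.25],
    [0.25, 0.0,  0.25, 0.5 ],
])

def quantize(x, step=0.05):
    # Finite-rate link model: round each transmitted state to a uniform grid.
    return step * np.round(x / step)

x = rng.uniform(0.0, 10.0, size=4)  # initial local values
target = x.mean()

for _ in range(200):
    x = W @ quantize(x)             # agents mix quantized neighbor states

print(f"average = {target:.4f}, final states = {np.round(x, 3)}")
# The residual error |x_i - average| stalls at the order of the step size,
# motivating quantization mechanisms that preserve exact convergence.
```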
354

Relativistic Causal Ordering: A Memory Model for Scalable Concurrent Data Structures

Triplett, Josh 01 January 2012 (has links)
High-performance programs and systems require concurrency to take full advantage of available hardware. However, the available concurrent programming models force a difficult choice between simple models such as mutual exclusion that produce little to no concurrency, and complex models such as Read-Copy Update that can scale to all available resources. Simple concurrent programming models enforce atomicity and causality, and this enforcement limits concurrency. Scalable concurrent programming models expose the weakly ordered hardware memory model, requiring careful and explicit enforcement of causality to preserve correctness, as demonstrated in this dissertation through the manual construction of a scalable hash-table item-move algorithm. Recent research on "relativistic programming" aims to standardize the programming model of Read-Copy Update, but thus far these efforts have lacked a generalized memory ordering model, requiring data-structure-specific reasoning to preserve causality. I propose a new memory ordering model, "relativistic causal ordering", which combines the scalability of relativistic programming and Read-Copy Update with the simplicity of reader atomicity and automatic enforcement of causality. Programs written for the relativistic model translate to scalable concurrent programs for weakly ordered hardware via a mechanical process of inserting barrier operations according to well-defined rules. To demonstrate the relativistic causal ordering model, I walk through the straightforward construction of a novel concurrent hash-table resize algorithm, including the translation of this algorithm from the relativistic model to a hardware memory model, and show through benchmarks that the resulting algorithm scales far better than those based on mutual exclusion.
355

Concentric Layout, A New Scientific Data Layout For Matrix Data Set In Hadoop File System

Cheng, Lu 01 January 2010 (has links)
The volume of data generated by scientific simulations, sensors, monitors, and optical telescopes has grown at a dramatic speed. To analyze this raw data in a time- and space-efficient manner, a preprocessing step is needed to achieve better performance in the analysis phase. Current research shows an increasing trend of adopting the MapReduce framework for large-scale data processing. However, the data access patterns typically applied to scientific data sets are not directly supported by the current MapReduce framework. The gap between the requirements of analytics applications and the properties of the MapReduce framework motivates us to support these access patterns within MapReduce. In our work, we studied the data access patterns in matrix files and propose a new concentric data layout to facilitate matrix data access and analysis in the MapReduce framework. The concentric data layout maintains the dimensional property at the chunk level: contrary to the continuous data layout adopted by default in the current Hadoop framework, it stores the data of the same sub-matrix in one chunk, which matches well with common matrix operations. The concentric data layout preprocesses the data beforehand and thereby optimizes subsequent runs of the MapReduce application. The experiments indicate that the concentric data layout improves overall performance, reducing execution time by 38% when the file size is 16 GB; it also alleviates the data-overhead phenomenon, increasing the effective data retrieval rate by 32% on average.
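The intuition behind the concentric layout can be shown by contrasting how a row-major serialization and a block-oriented serialization carve a matrix into chunks. The sketch below is a simplified in-memory illustration under assumed chunk and block sizes; it is not the thesis's HDFS implementation.

```python
# A simplified sketch contrasting the default row-major ("continuous") layout
# with a block-oriented layout in the spirit of the concentric layout: each
# chunk holds one whole sub-matrix, so block-structured reads touch a single
# chunk instead of stripes spread across the file. Sizes are illustrative.
import numpy as np

def row_major_chunks(matrix, rows_per_chunk):
    # Default-style layout: consecutive rows per chunk, ignoring column locality.
    return [matrix[i:i + rows_per_chunk].ravel()
            for i in range(0, matrix.shape[0], rows_per_chunk)]

def block_layout_chunks(matrix, block):
    # Block layout: each chunk serializes one block x block sub-matrix whole.
    n_rows, n_cols = matrix.shape
    return [matrix[r:r + block, c:c + block].ravel()
            for r in range(0, n_rows, block)
            for c in range(0, n_cols, block)]

m = np.arange(16).reshape(4, 4)
print(row_major_chunks(m, 2))     # chunk 0 interleaves top-left and top-right blocks
print(block_layout_chunks(m, 2))  # chunk 0 is exactly the top-left 2x2 sub-matrix
# Fetching one sub-matrix needs a single chunk under the block layout, but
# drags in bytes from neighboring sub-matrices under the row-major layout.
```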
356

AI-WSN: Adaptive and Intelligent Wireless Sensor Networks

Li, Jiakai 24 September 2012 (has links)
No description available.
357

Collaborative design in electromagnetics

Almaghrawi, Ahmed Almaamoun January 2007 (has links)
No description available.
358

Engineering analysis of object-oriented software development tools for distributed real-time systems

Al Mazid, Abul Hasnat Mamun 01 July 2000 (has links)
No description available.
359

The middleware/gateway approaches to interoperability: a case study

Huthmann, Andre 01 April 2001 (has links)
No description available.
360

Benchmarking distributed real-time applications

Su, Shenchao 01 July 2000 (has links)
No description available.
