181

Improving Resource Management in Virtualized Data Centers using Application Performance Models

Kundu, Sajib 01 April 2013 (has links)
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience, while administrators will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.
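As a rough illustration of the kind of performance model the thesis describes, the following sketch trains a Support Vector Machine regressor that maps a VM's resource allocation to observed application performance. The feature set, data, and hyperparameters are assumptions for illustration, not the thesis's actual setup.

```python
# Hypothetical sketch: modeling application performance as a function of
# VM resource allocations, in the spirit of the SVM-based models the
# thesis describes. Feature names and data are illustrative only.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row: (CPU cap %, memory MB, I/O bandwidth MB/s) for a VM.
allocations = np.array([
    [25, 1024, 50], [50, 2048, 100], [75, 3072, 150],
    [100, 4096, 200], [50, 1024, 200], [75, 4096, 50],
])
# Observed application performance (e.g., requests/sec) per allocation.
throughput = np.array([120.0, 260.0, 390.0, 500.0, 180.0, 350.0])

# SVR with an RBF kernel; feature scaling matters for SVMs.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0))
model.fit(allocations, throughput)

# Predict performance for a candidate VM size before renting it.
candidate = np.array([[60, 2048, 120]])
print(model.predict(candidate))
```

A model of this shape supports both use cases named above: a client can search candidate allocations for the cheapest one meeting an SLA, and a provider can price by predicted performance rather than configured size.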
182

An Empirical Investigation of the Willingness of US Intelligence Community Analysts to Contribute Knowledge to a Knowledge Management System (KMS) in a Highly Classified and Sensitive Environment

Hambly, Robert 01 January 2016 (has links)
Since September 11, 2001, the United States Government (USG) has possessed unparalleled capability in terms of dedicated intelligence and information collection assets supporting the analysts of the Intelligence Community (IC). The USG IC has sponsored, developed, and borne witness to extraordinary advances in technology, techniques, and procedures focused on knowledge harvesting, knowledge sharing, and collaboration. Knowledge, within successful (effective and productive) organizations, exists as a commodity: one that can be created, captured, imparted, shared, and leveraged. The research problem that this study addressed is the challenge of maintaining strong organizational effectiveness and productivity through the use of an information technology-based knowledge management system (KMS). The main goal of this study was to empirically assess a model testing the impact of the factors of rewards, power, centrality, trust, collaborative environment, resistance to share, ease of using the KMS, organizational structure, and top management support on inducement, willingness to share, and opportunity to contribute knowledge to a KMS, and in turn on knowledge sharing in the highly classified and sensitive environment of the USG IC. This study capitalized on prior literature to measure each of the 15 model constructs. It was conducted with a select group of USG Departments and Agencies whose primary interest is intelligence operations, and solicited responses from more than 1,000 current and former intelligence analysts of the USG IC using an unclassified, anonymous survey instrument. A total of 525 (52.5%) valid responses were analyzed using the partial least squares (PLS) structural equation modeling (SEM) statistical technique to perform model testing. Pre-analysis data screening was conducted to ensure the accuracy of the data collected and to correct irregularities or errors within the gathered data. The 14 propositions outlined in this research study were tested using the PLS-SEM analysis along with reliability and validity checks. The results provide insights into the key factors that shed light on the willingness of US intelligence community analysts to contribute knowledge to a KMS in a highly classified and sensitive environment. Specifically, a knowledge worker’s willingness to contribute his/her knowledge to a KMS, along with the opportunity to contribute, proved significant, while inducement was not a significant factor for knowledge sharing using a KMS in highly classified environments.
183

A Performance Analysis of Distributed Algorithms in JavaSpaces, CORBA Services and Web Services

Sunku, Suresh 01 January 2003 (has links)
Implementation of distributed parallel algorithms on networked computers had always been very difficult until the introduction of service-oriented architectures (SOAs) such as the JavaSpaces service, CORBA services, and Web Services. Algorithms following the Master/Worker pattern are implemented with relative ease using these SOAs. This project analyzes the performance of such algorithms on three contemporary SOAs, namely the JavaSpaces service, CORBA services, and Web Services. These architectures make implementations of distributed algorithms reasonably fault-tolerant and highly, dynamically scalable. Also, systems built on these architectures are generally loosely coupled and operate asynchronously. In this project we measure and analyze the latency, speed-up, and efficiency metrics of an insertion sort of O(n^2) complexity on all three SOAs. We then draw conclusions about the overall performance and scalability of all three architectures.
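The Master/Worker structure being benchmarked can be sketched in a few lines. The following Python version stands in for the project's JavaSpaces/CORBA/Web Services implementations: the master partitions the input, workers each run the O(n^2) insertion sort workload, and the master merges the sorted results. Chunk count and data are illustrative.

```python
# Minimal master/worker sketch of the pattern the project measures,
# using Python's multiprocessing in place of JavaSpaces/CORBA/Web
# Services; chunk sizes and data are illustrative.
from multiprocessing import Pool
import heapq

def insertion_sort(chunk):
    """O(n^2) insertion sort: the workload run by each worker."""
    a = list(chunk)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

if __name__ == "__main__":
    data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0] * 1000
    chunks = [data[i::4] for i in range(4)]   # master partitions the work
    with Pool(processes=4) as pool:
        sorted_chunks = pool.map(insertion_sort, chunks)  # workers sort
    result = list(heapq.merge(*sorted_chunks))            # master merges
    assert result == sorted(data)
```

Speed-up and efficiency in such a setup come from the chunked quadratic work: sorting four quarter-size chunks costs roughly a quarter of the sequential comparisons, minus communication overhead, which is what the three SOAs differ on.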
184

Scheduling Medical Application Workloads on Virtualized Computing Systems

Delgado, Javier 30 March 2012 (has links)
This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of “cloud computing” services. In the context of provisioning resources for medical applications, such environments allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that abstract away infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, freeing them from the burden of having to deploy and manage these resources themselves. In this work, we focus on issues related to scheduling scientific workloads on virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; a performance prediction methodology applicable to the target environment; and a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees. In the process of addressing these issues for our target user base (i.e., medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, which is tested with medical image processing workloads, is compared to two baseline scheduling solutions and outperforms them in terms of both the number of jobs processed and resource utilization by 20-30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
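For context, an average prediction error like the 15% figure above is typically computed as a mean absolute percentage error between predicted and measured runtimes; a minimal sketch follows, with runtimes invented purely for illustration.

```python
# Sketch of the average-error metric implied by the abstract: mean
# absolute percentage error between predicted and actual runtimes.
# The numbers below are made up for illustration.
def mape(predicted, actual):
    return 100.0 * sum(abs(p - a) / a for p, a in zip(predicted, actual)) / len(actual)

predicted_runtimes = [110.0, 95.0, 240.0]   # seconds, from the model
actual_runtimes    = [100.0, 105.0, 220.0]  # seconds, measured
print(f"average error: {mape(predicted_runtimes, actual_runtimes):.1f}%")
```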
185

Trajectory Privacy Preservation in Mobile Wireless Sensor Networks

Jin, Xinyu 23 October 2013 (has links)
In recent years, there has been an enormous growth of location-aware devices, such as GPS-embedded cell phones, mobile sensors, and radio-frequency identification tags. The age of combining sensing, processing, and communication in one device gives rise to a vast number of applications, leading to endless possibilities and a realization of mobile Wireless Sensor Network (mWSN) applications. As computing, sensing, and communication become more ubiquitous, trajectory privacy becomes a critical piece of information and an important factor for commercial success. While on the move, sensor nodes continuously transmit data streams of sensed values and spatiotemporal information, known as “trajectory information”. If adversaries can intercept this information, they can monitor the trajectory path and capture the location of the source node. This research stems from the recognition that the wide applicability of mWSNs will remain elusive unless a trajectory privacy preservation mechanism is developed. The outcome seeks to lay a firm foundation in the field of trajectory privacy preservation in mWSNs against both external and internal trajectory privacy attacks. First, to prevent external attacks, we investigated a context-based, trajectory privacy-aware routing protocol to prevent eavesdropping attacks. Traditional shortest-path-oriented routing algorithms give adversaries the possibility of locating the target node within a certain area. We designed a novel privacy-aware routing phase and utilized the trajectory dissimilarity between mobile nodes to mislead adversaries about the location where a message started its journey. Second, to detect internal attacks, we developed a software-based attestation solution to detect compromised nodes. We created a dynamic attestation node chain among neighboring nodes to examine the memory checksum of suspicious nodes, and improved the computation time for memory traversal compared to previous work. Finally, we revisited the trust issue in trajectory privacy preservation mechanism designs. We used Bayesian game theory to model and analyze the behaviors of cooperative, selfish, and malicious nodes in trajectory privacy preservation activities.
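One plausible form of the trajectory dissimilarity used in the routing phase is a mean pointwise distance between time-aligned trajectories; the sketch below is a hypothetical stand-in, and the actual metric in the dissertation may differ.

```python
# Hypothetical sketch of a trajectory dissimilarity measure of the kind
# a privacy-aware routing phase could use: mean pointwise Euclidean
# distance between two time-aligned trajectories.
import math

def dissimilarity(traj_a, traj_b):
    """traj_a, traj_b: equal-length lists of (x, y) position samples."""
    return sum(math.dist(p, q) for p, q in zip(traj_a, traj_b)) / len(traj_a)

node_a = [(0, 0), (1, 1), (2, 2)]
node_b = [(0, 1), (1, 3), (2, 5)]
# A larger value means node_b moves very differently from node_a, making
# it a better candidate for misleading an eavesdropper about the source.
print(dissimilarity(node_a, node_b))
```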
186

Design and Development of a Comprehensive and Interactive Diabetic Parameter Monitoring System - BeticTrack

Chowdhury, Nusrat 01 December 2019 (has links)
A novel, interactive Android app has been developed that monitors the health of type 2 diabetic patients in real time, providing patients and their physicians with feedback on all relevant parameters of diabetes. The app includes modules for recording carbohydrate intake and blood glucose; for reminding patients to take medications on schedule; and for tracking physical activity, using movement data received via Bluetooth from a pair of wearable insole devices. Two machine learning models were developed to detect seven physical activities: sitting, standing, walking, running, stair ascent, stair descent, and use of an elliptical trainer. The SVM and decision tree models produced an average accuracy of 85% across these seven activities. The decision tree model is implemented in an app that classifies human activity in real time.
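A minimal sketch of the decision-tree activity classifier described above follows; the per-window features and the training rows are assumptions for illustration, not the thesis's actual feature set.

```python
# Illustrative sketch of a decision-tree activity classifier over
# insole movement data; features and samples are invented.
from sklearn.tree import DecisionTreeClassifier

ACTIVITIES = ["sitting", "standing", "walking", "running",
              "stair_ascent", "stair_descent", "elliptical"]

# Each row: [mean acceleration, acceleration variance, step frequency Hz]
# computed over one sensor window.
X_train = [[1.0, 0.01, 0.0], [1.0, 0.02, 0.0], [1.1, 0.30, 1.8],
           [1.4, 0.90, 2.8], [1.2, 0.50, 1.5], [1.2, 0.40, 1.6],
           [1.1, 0.35, 1.2]]
y_train = ACTIVITIES  # one example per class, purely for illustration

clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
print(clf.predict([[1.15, 0.32, 1.7]]))  # classify a new sensor window
```

A decision tree is a natural fit for the on-device, real-time setting: once trained, classification is a handful of threshold comparisons, cheap enough to run continuously on a phone.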
187

PROCESSOR TEMPERATURE AND RELIABILITY ESTIMATION USING ACTIVITY COUNTERS

Chhablani, Mayank 23 March 2016 (has links)
With the advent of technology scaling, lifetime reliability is an emerging threat in high-performance and deadline-critical systems. High on-chip thermal gradients accelerate localized thermal elevations (hotspots), which increase the aging rate of semiconductor devices. As a result, reliable operation of processors has become a challenging task, and cost-effective schemes for estimating temperature and reliability are crucial. In this work we present a reliability estimation scheme that is based on a lightweight temperature estimation technique that monitors hardware events. Unlike previously proposed hardware counter-based approaches, our approach involves a linear-temporal-feedback estimator that takes into account the effects of thermal inertia. The proposed approach shows an average absolute error of We then present a counter-based technique to estimate the thermal accelerated aging factor (TAAF), which is an indicator of lifetime reliability. Results demonstrate that the estimation error is within [−3, +5].
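The general shape of a linear temporal-feedback estimator is: next temperature from the current estimate (capturing thermal inertia) plus a weighted sum of activity counters. The sketch below illustrates that form; all coefficients and counter values are invented, not taken from the thesis.

```python
# Sketch of a linear temporal-feedback temperature estimator of the
# general form the abstract describes. All coefficients are invented.
def estimate_temperature(t_current, counters, weights, alpha=0.9, bias=2.0):
    """t_current: current temperature estimate (deg C).
    counters: per-interval hardware event counts (e.g., issued uops,
    cache accesses). weights: per-counter heating coefficients.
    alpha models thermal inertia: how much heat carries over."""
    activity_term = sum(w * c for w, c in zip(weights, counters))
    return alpha * t_current + activity_term + bias

t = 45.0
for counters in [(1.2e6, 3.0e5), (2.5e6, 6.0e5), (0.8e6, 1.0e5)]:
    t = estimate_temperature(t, counters, weights=(4e-6, 1e-6))
    print(f"estimated temperature: {t:.1f} C")
```

The feedback term is what distinguishes this from purely counter-driven estimators: a hot interval raises the baseline for the next interval rather than being forgotten.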
188

Gamification as a Service: Conceptualization of a Generic Enterprise Gamification Platform

Herzig, Philipp 02 July 2014 (has links)
Gamification is a novel method for improving engagement, motivation, or participation in non-game contexts using game mechanics. To a large extent, gamification is a psychological and design-oriented discipline, i.e., much effort has to be spent already in the design phase of a gamification project. Subsequently, the design is implemented in information systems such as portals or enterprise resource planning applications, which act as mediators to transport a gameful design to its users. However, the effort for the subsequent development and integration process is often underestimated. In fact, most conceptual gamification designs are never implemented due to the high development costs that arise from building the gamification solution from scratch, imprecise design or technical requirements, and communication conflicts between different stakeholders in the project. This thesis addresses these problems by systematically defining the phases and stakeholders of the overall gamification process. Furthermore, the thesis rigorously defines the conceptual requirements of gamification based on a broad literature review. The identified conceptual requirements are mapped to a domain-specific language, called the Gamification Modeling Language. Moreover, this thesis analyzes 29 existing gamification solutions that aim to decrease the implementation effort of gamification. Using the different language elements, it is shown that none of the existing solutions satisfies all requirements. Therefore, a generic and reusable platform as a runtime environment for gamification is proposed, which fulfills all presented functional and non-functional requirements. As another benefit, it is shown how the Gamification Modeling Language can be automatically compiled into code for the gamification runtime environment, further reducing development effort. Based on the developed artifacts and five real gamified applications from industry, it is shown that the effort for implementing gamification can be reduced significantly, from several months or weeks to a few days. Since the technology is designed as a reusable service, future projects benefit continuously with regard to time and effort.
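To make the compile-to-runtime idea concrete, here is a toy sketch of the kind of declarative rule a gamification platform might evaluate at runtime; the rule format is invented and far simpler than the Gamification Modeling Language described above.

```python
# Toy sketch: declarative gamification rules evaluated by a generic
# runtime, standing in for rules compiled from a design language.
# The rule schema here is invented for illustration.
RULES = [
    {"event": "document_shared", "points": 10},
    {"event": "comment_posted", "points": 2},
    {"event": "badge:collaborator", "when_points": 50},
]

def process_event(user_state, event):
    # Award points for matching event rules.
    for rule in RULES:
        if rule.get("event") == event and "points" in rule:
            user_state["points"] += rule["points"]
    # Grant any badges whose point threshold is now reached.
    for rule in RULES:
        threshold = rule.get("when_points")
        if threshold is not None and user_state["points"] >= threshold:
            user_state["badges"].add(rule["event"])
    return user_state

state = {"points": 45, "badges": set()}
print(process_event(state, "document_shared"))  # crosses the 50-point badge
```

The development-effort argument rests on exactly this separation: the host application only emits events, while points, badges, and thresholds live in data that a reusable service interprets.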
189

On the Effect of Heterogeneity on the Dynamics and Performance of Dynamical Networks

Goudarzi, Alireza 01 January 2012 (has links)
The high cost of processor fabrication plants and approaching physical limits have started a new wave of research in alternative computing paradigms. As an alternative to top-down manufactured silicon-based computers, research in computing directly with natural and physical systems has recently gained a great deal of interest. A branch of this research promotes the idea that any physical system with sufficiently complex dynamics is able to perform computation. The power of networks in representing complex interactions between many parts makes them a suitable choice for modeling physical systems. Many studies have used networks with a homogeneous structure to describe computational circuits. However, physical systems are inherently heterogeneous. We aim to study the effect of heterogeneity on the dynamics of physical systems as it pertains to information processing. Two particularly well-studied network models that represent information processing in a wide range of physical systems are Random Boolean Networks (RBN), which are used to model gene interactions, and Liquid State Machines (LSM), which are used to model brain-like networks. In this thesis, we study the effects of function heterogeneity, in-degree heterogeneity, and interconnect irregularity on the dynamics and performance of RBN and LSM. First, we introduce model parameters to characterize the heterogeneity of components in RBN and LSM networks. We then quantify the effects of heterogeneity on the network dynamics. For the three heterogeneity aspects that we studied, we found that the effects of heterogeneity on RBN and LSM are very different. In LSM, in-degree heterogeneity decreases the chaoticity of the network, whereas it increases chaoticity in RBN. For interconnect irregularity, heterogeneity decreases the chaoticity in LSM, while its effect on RBN dynamics depends on the connectivity: for K < 2, heterogeneity in the interconnect increases the chaoticity of the dynamics, and for K > 2 it decreases the chaoticity. We find that function heterogeneity has virtually no effect on LSM dynamics. In RBN, however, function heterogeneity actually makes the dynamics predictable as a function of connectivity and heterogeneity in the network structure. We hypothesize that node heterogeneity in RBN may help signal processing because of the variety of signal decompositions performed by different nodes.
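A minimal RBN of the kind analyzed here can be simulated in a few lines. This sketch uses the classical homogeneous model (every node reads K random inputs through a random Boolean function), with K = 2 being the critical connectivity around which the chaoticity results above are framed; network size is illustrative.

```python
# Minimal homogeneous Random Boolean Network: N nodes, each with K
# random inputs and a random truth table over its 2^K input states.
import random

N, K = 16, 2
inputs = [[random.randrange(N) for _ in range(K)] for _ in range(N)]
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    nxt = []
    for node in range(N):
        idx = 0
        for inp in inputs[node]:           # pack the K input bits into an index
            idx = (idx << 1) | state[inp]
        nxt.append(tables[node][idx])      # look up the node's Boolean function
    return nxt

state = [random.randint(0, 1) for _ in range(N)]
for _ in range(5):
    state = step(state)
    print("".join(map(str, state)))
```

Heterogeneity enters by breaking any of the three uniformities above: drawing each node's in-degree from a distribution instead of fixing K, biasing the wiring instead of choosing inputs uniformly, or biasing the truth tables instead of sampling them uniformly.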
190

A Hardware Framework for Yield and Reliability Enhancement in Chip Multiprocessors

Pan, Abhisek 01 January 2009 (has links) (PDF)
Device reliability and manufacturability have emerged as dominant concerns in end-of-road CMOS devices. Today an increasing number of hardware failures are attributed to device reliability problems that cause partial system failure or shutdown. Maintaining an acceptable manufacturing yield is also seen as a challenge because of smaller feature sizes, process variation, and reduced headroom for burn-in tests. In this project we investigate a hardware-based scheme for improving the yield and reliability of a homogeneous chip multiprocessor (CMP). The proposed solution involves a hardware framework that enables us to utilize the redundancies inherent in a multi-core system to keep the system operational in the face of partial failures due to hard faults (faults due to manufacturing defects, or permanent faults developed during the system's lifetime). A micro-architectural modification allows a faulty core in a multiprocessor system to use another core as a coprocessor to service any instruction that the former cannot execute correctly by itself. This service improves yield and reliability, but at the cost of some loss of performance. To quantify this loss, we used a cycle-accurate architectural simulator to simulate the performance of dual-core and quad-core systems with one or more cores sustaining partial failure. Simulation studies indicate that when a large and sparingly used unit such as a floating-point unit fails in a core, even for a floating-point-intensive benchmark, we can continue to run the faulty core with as little as 10% performance impact and minimal area overhead. Incorporating this recovery mechanism entails some modifications to the microprocessor micro-architecture, which are described here through a simplified model of a superscalar processor.
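A back-of-the-envelope model shows why offloading a rarely used unit can stay cheap: if a fraction f of instructions must be shipped to a helper core with an extra per-instruction penalty, average cycles per instruction scale accordingly. The numbers below are illustrative assumptions, not figures from the thesis.

```python
# Simple first-order slowdown model for servicing instructions on a
# helper core; offload fraction and penalty are invented for illustration.
def slowdown(base_cpi, offload_fraction, penalty_cycles):
    faulty_cpi = base_cpi + offload_fraction * penalty_cycles
    return faulty_cpi / base_cpi

# Sparingly used FPU: ~0.5% of instructions offloaded at a 20-cycle cost.
print(f"{(slowdown(1.0, 0.005, 20) - 1) * 100:.1f}% performance impact")
```

Because the penalty multiplies the offload fraction, the scheme's cost is dominated by how rarely the failed unit is actually exercised, which is why even a floating-point-intensive workload can tolerate losing a core's FPU.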
