  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

SECURED ROUTING PROTOCOL FOR AD HOC NETWORKS

Venkatraman, Lakshmi 11 October 2001 (has links)
No description available.
52

Generalization error rates for margin-based classifiers

Park, Changyi 24 August 2005 (has links)
No description available.
53

The Predictive Accuracy of Conscientiousness when Responses are Dissimulated: Does Self-Consistency Matter?

Chang, Wan-Yin 10 June 2004 (has links)
The present study used a laboratory setting to explore the criterion-related validity of non-cognitive measures as related to personnel selection. The focal study investigated psychological processes resulting from situational causes of motivation to distort item responses. In particular, I investigated whether differences in the motivation to distort item responses interacted with self-consistency in the prediction of performance on a clerical task. Findings suggested that despite range restriction and the existence of faking behavior, a positive correlation between conscientiousness and performance exists. Varying the selection ratio (SR) and monetary incentives successfully produced faking behaviors, and faking was found in the selection setting. Results partially supported the proposed hypothesis that faking behaviors have both positive and negative effects. Implications of the present study are discussed further. / Master of Science
54

Retina: Cross-Layered Key-Value Store using Computational Storage

Bikonda, Naga Sanjana 10 March 2022 (has links)
Modern SSDs are getting faster and smarter with near-data computing capabilities. Due to their design choices, traditional key-value stores do not fully leverage these new storage devices. These key-value stores become CPU-bound even before fully utilizing the IO bandwidth. LSM- or B+ tree-based key-value stores involve complex garbage collection, sorted key storage, and complicated synchronization mechanisms. In this work, we propose a cross-layered key-value store named Retina that decouples the design to delegate control path manipulations to the host CPU and data path manipulations to the computational SSD to maximize performance and reduce compute bottlenecks. We employ many design choices not explored in other persistent key-value stores to achieve this goal. In addition to the cross-layered design paradigm, Retina introduces a new caching mechanism called Mirror cache, support for variable-sized key-value pairs, and a novel version-based crash consistency model. By enabling all the design features, we equip Retina to reduce compute hotspots on the host CPU, take advantage of the on-storage accelerators to leverage the data locality on the computational storage, improve overall bandwidth, and reduce network latencies. Thus, when evaluated using YCSB, we observe CPU utilization reduced by 4x and a throughput improvement of 20.5% against the state of the art for read-intensive workloads. / Master of Science / Modern secondary storage systems are providing an exponential increase in memory access speeds. In addition, new-generation storage systems attach compute resources near data to offload computation to storage. Traditional datastore systems are lacking in performance when used with the new generation of SSDs (Solid State Drives). The key reason is that the SSDs are underutilized due to CPU bottlenecks. Due to design choices, conventional datastores incur expensive CPU tasks that cause the CPU to bottleneck even before the storage speeds are fully utilized. Thus, when attached to a modern SSD, conventional datastores will underutilize the storage resources. In this work, we propose a cross-layered key-value store named Retina that decouples the design to delegate control path manipulations to the host CPU and data path manipulations to the computational SSD to maximize performance and reduce compute bottlenecks. In addition to the cross-layered design paradigm, Retina introduces a new caching mechanism called Mirror cache and a novel version-based crash consistency model. By enabling all the design features, we equip Retina to reduce compute hotspots on the host CPU, take advantage of the on-storage accelerators to leverage the data locality on the computational storage, and improve overall access speed. To evaluate Retina, we use throughput and CPU utilization as the comparison metrics. We test our implementation with the Yahoo Cloud Serving Benchmark, a popular datastore benchmark. We evaluate against RocksDB (the most widely adopted datastore) to enable a fair performance comparison. In conclusion, we show that the Retina key-value store improves throughput by offloading logic to computational storage to reduce the CPU bottlenecks.
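
As a rough illustration of the cross-layered split described in this abstract, the sketch below separates a host-side control path (index and version counter) from a device-side data path. All class and method names here are hypothetical stand-ins, not Retina's actual interface; the real system offloads the data path to a computational SSD rather than to a Python object.

```python
# Illustrative sketch only: a toy host/device split in the spirit of the
# cross-layered design described above. Names are hypothetical.

class DeviceDataPath:
    """Stands in for the computational SSD handling the data path."""
    def __init__(self):
        self._blocks = {}          # block_id -> bytes

    def write_block(self, block_id, value):
        self._blocks[block_id] = value

    def read_block(self, block_id):
        return self._blocks[block_id]


class HostControlPath:
    """Host-side control path: index plus version-based ordering."""
    def __init__(self, device):
        self._device = device
        self._index = {}           # key -> (block_id, version)
        self._version = 0          # monotonically increasing commit version

    def put(self, key, value):
        self._version += 1                            # new version for ordering
        block_id = (key, self._version)
        self._device.write_block(block_id, value)     # data path on the device
        self._index[key] = (block_id, self._version)  # control path on the host

    def get(self, key):
        block_id, _version = self._index[key]
        return self._device.read_block(block_id)


store = HostControlPath(DeviceDataPath())
store.put("user:42", b"alice")
assert store.get("user:42") == b"alice"
```

The point of the split is that the host only touches small metadata per operation, while bulk data movement and any offloaded logic stay on the storage device.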
55

Architecture Support and Scalability Analysis of Memory Consistency Models in Network-on-Chip based Systems

Naeem, Abdul January 2013 (has links)
Shared memory systems should support parallelization at the computation (multi-core), communication (Network-on-Chip, NoC) and memory architecture levels to exploit the potential performance benefits. These parallel systems supporting the shared memory abstraction, both in the general-purpose and application-specific domains, confront the critical issue of memory consistency. The memory consistency issue arises from unconstrained memory operations, which lead to unexpected behavior of shared memory systems. Memory consistency models enforce ordering constraints on the memory operations to guarantee the expected behavior of the shared memory systems. The intuitive Sequential Consistency (SC) model enforces strict ordering constraints on the memory operations and does not take advantage of system optimizations in either hardware or software. Alternatively, relaxed memory consistency models relax the ordering constraints on the memory operations and exploit these optimizations to enhance system performance at a reasonable cost. The purpose of this thesis is twofold. First, novel architecture support is provided for different memory consistency models, namely SC, Total Store Ordering (TSO), Partial Store Ordering (PSO), Weak Consistency (WC), Release Consistency (RC) and Protected Release Consistency (PRC), in NoC-based multi-core (McNoC) systems. The PRC model is proposed as an extension of the RC model which provides additional reordering and relaxation of the memory operations. Second, a scalability analysis of these memory consistency models is performed in the McNoC systems. The architecture support for these memory consistency models is provided in the McNoC platforms. Each configurable McNoC platform uses a packet-switched 2-D mesh NoC with a deflection routing policy, distributed shared memory (DSM), distributed locks and a customized processor interface. The memory consistency models/protocols are implemented in the customized processor interfaces, which are developed to integrate the processors with the rest of the system. The realization schemes for the memory consistency models are based on novel approaches using a transaction counter and an address stack. The transaction counter is used in each node of the network to keep track of the outstanding memory operations issued by a processor in the system. The address stack is used in each node of the network to keep track of the addresses of the outstanding memory operations issued by a processor in the system. These hardware structures are used in the processor interface to enforce the required global orders under these different memory consistency models. The realization scheme of the PRC model additionally uses an acquire counter for further classification of the data operations into unprotected and protected operations. The scalability analysis of these memory consistency models is performed on the basis of different workloads which are developed and mapped onto various network sizes. The scalability study is conducted in McNoC systems with 1 to 64 cores running various applications with different problem sizes and traffic patterns. Performance metrics such as execution time, performance, speedup, overhead and efficiency are evaluated as a function of the network size.
The experiments are conducted with both synthetic and application workloads. The experimental results under different application workloads show that the average execution time under the relaxed memory consistency models decreases relative to the SC model. The specific numbers are highly sensitive to the application and depend on how well it matches the architecture. This study shows that the performance improvement of the relaxed memory consistency models over the SC model depends on the computation-to-communication ratio, traffic patterns, data-to-synchronization ratio and the problem size. The performance improvement of the PRC and RC models over the SC model tends to exceed 50% in the experiments when the system is scaled up further. / QC 20130204
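
A minimal software sketch of the transaction-counter mechanism described in this abstract, assuming a simple issue/complete/fence interface; the thesis realizes this in hardware at the processor interface, and the names here are illustrative only.

```python
# Illustrative sketch of a per-node transaction counter enforcing a fence,
# in the spirit of the realization scheme described above. Software model
# only; the actual mechanism is a hardware structure in the processor interface.
import threading

class TransactionCounter:
    def __init__(self):
        self._outstanding = 0
        self._cv = threading.Condition()

    def issue(self):
        """Called when the processor issues a memory operation."""
        with self._cv:
            self._outstanding += 1

    def complete(self):
        """Called when the network delivers the operation's completion."""
        with self._cv:
            self._outstanding -= 1
            self._cv.notify_all()

    def fence(self):
        """Block until all outstanding operations have completed, i.e. the
        global-order point required at synchronization under e.g. WC/RC."""
        with self._cv:
            self._cv.wait_for(lambda: self._outstanding == 0)

tc = TransactionCounter()
tc.issue()
threading.Timer(0.01, tc.complete).start()  # completion arrives later
tc.fence()                                   # returns once the counter drains
```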
56

The Impact of Leader's Integrity Character on the Effectiveness of Organization Change --- A Case of Formosa Plastics Corporation, Taiwan Semiconductor Manufacturing Company, and Macronix International

Lai, Shih-Chung 02 February 2007 (has links)
The Impact of Leader's Integrity Character on the Effectiveness of Organization Change --- A Case of Formosa Plastics Corporation, Taiwan Semiconductor Manufacturing Company, and Macronix International. Student: Lai, Shih-Chung. Advisor: Dr. Huang, Jason H. National Sun Yat-Sen University, Department of Business Management. Abstract: Integrity is one of the key factors in organizational change and its lasting success. It is also a highly emphasized issue in corporate governance. Across the world, business misconducts that have caused severe consequences to the economy and society are increasing each day, and the level of trust in businesses worsens after each event. The objective of this study is to investigate the impact of the integrity orientation of leaders on organizational change and on the innovation which leads to a new stage of growth. The study is conducted through literature review and case analyses. I probe into the factors that impact the effectiveness of organization change, focusing on the leader's character of integrity. The findings are as follows. 1. Integrity character bears a high degree of influence on the organization change process. The more intense the changes, the more they should be initiated from the top down, and the more important the leadership becomes. 2. The strength of crisis consciousness drives organization change and further raises the demand on the leader's integrity character, whether the enterprise is stable or undergoing change. 3. The best prediction model for the integrity character of leaders is long-term coworker observation. The same model would apply for selecting and fostering successors. 4. Observing the consistency between a leader's oral expression and actual behavior is sufficiently indicative of the suitability of the leader. 5. There are three critical dimensions of the character of integrity, which this study refers to as the three components of integrity, that is, "righteousness and the belief of integrity", "consistency in thought and words", and "consistency between words and behaviors". Keywords: Integrity Character, Organizational Change, Righteousness, Integrity, Consistency between Thought and Words, Consistency between Words and Behaviors.
57

Highly available storage with minimal trust

Mahajan, Prince 05 July 2012 (has links)
Storage services form the core of modern Internet-based services spanning commercial, entertainment, and social-networking sectors. High availability is crucial for these services as even an hour of unavailability can cost them millions of dollars in lost revenue. Unfortunately, it is difficult to build highly available storage services that provide useful correctness properties. Both benign faults (system crashes, power outages, etc.) and Byzantine faults (memory or disk corruption, software or configuration errors, etc.) plague the availability of these services. Furthermore, the goal of high availability conflicts with our desire to provide good performance and strong correctness guarantees. For example, the Consistency, Availability, and Partition-resilience (CAP) theorem states that a storage service that must be available despite network partitions cannot enforce strong consistency. Similarly, the tradeoff between latency and durability dictates that a low-latency service cannot ensure durability in the presence of data-center-wide failures. This dissertation explores the theoretical and practical limits of storage services that can be safe and live despite the presence of benign and Byzantine faults. On the practical front, we use cloud storage as a deployment model to build Depot, a highly available storage service that addresses the above challenges. Depot minimizes the trust clients have to put in the third-party storage provider. As a result, Depot clients can continue functioning despite benign or Byzantine faults of the cloud servers. Yet, Depot provides stronger availability, durability, and consistency properties than those provided by many of the existing cloud deployments, without incurring prohibitive performance cost. For example, in contrast to Amazon S3’s eventual consistency, Depot provides a variation of causal consistency on each volume, while tolerating Byzantine faults. On the theoretical front, we explore consistency-availability tradeoffs. Tradeoffs between consistency and availability have proved useful for designers in deciding how much to strengthen consistency if high availability is desired or how much to compromise availability if strong consistency is essential. We explore the limits of such tradeoffs by attempting to answer the question: what are the semantics that can be implemented without compromising availability? In this work, we investigate this question for both fail-stop and Byzantine failure models. An immediate benefit of answering this question is that we can compare and contrast the consistency provided by Depot with that achievable by an optimal implementation. More crucially, this result complements the CAP theorem. While the CAP theorem defines a set of properties that cannot be achieved, this work identifies the limits of properties that can be achieved. / text
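
As a generic illustration of the causal consistency Depot aims for (not Depot's actual protocol, which additionally tolerates Byzantine faults), the sketch below uses vector clocks to decide when an update is causally safe to apply at a replica; the function names are hypothetical.

```python
# Generic vector-clock sketch of causal ordering, for illustration only.

def happens_before(vc_a, vc_b):
    """True if the update stamped vc_a causally precedes the one stamped vc_b."""
    nodes = set(vc_a) | set(vc_b)
    leq = all(vc_a.get(n, 0) <= vc_b.get(n, 0) for n in nodes)
    lt = any(vc_a.get(n, 0) < vc_b.get(n, 0) for n in nodes)
    return leq and lt

def deliverable(update_vc, sender, local_vc):
    """A replica may apply an update once it has seen the sender's previous
    update and every other update the new one causally depends on."""
    for node, count in update_vc.items():
        expected = local_vc.get(node, 0) + (1 if node == sender else 0)
        if count > expected:
            return False
    return True

# Node "a" writes first; node "b" writes after observing a's update.
a1 = {"a": 1}
b1 = {"a": 1, "b": 1}
assert happens_before(a1, b1)
assert not deliverable(b1, "b", {})        # must apply a1 first
assert deliverable(b1, "b", {"a": 1})      # now causally safe to apply
```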
58

Causal weak-consistency replication

Hupfeld, Felix 03 June 2009 (has links)
Replication can help improve fault tolerance and data safety in a distributed system. In systems that communicate over wide-area networks or include mobile devices, the replication system must be able to cope with large communication latencies. Such systems therefore usually employ only asynchronous replication algorithms with weakly consistent update semantics, since these decouple the local acceptance of data changes from their coordination with other replicas and can thus offer fast response times. This dissertation presents an approach for developing weakly consistent replication systems with extended causal consistency guarantees and demonstrates that efficient replication systems can be built on its basis. To this end, it introduces mechanisms, algorithms and protocols that record and disseminate changes to replicated data while preserving causal relationships. At the core is a change log that acts both as the fundamental data structure of the distributed algorithms and as the mechanism that ensures the consistency of the local data after system crashes. The causal guarantees are extended with two algorithms that handle concurrent changes consistently. Both algorithms are based on the observation that the divergence of replicas caused by uncoordinated concurrent changes need not be seen as an inconsistency, but can instead be modeled as the creation of different versions of the data. Distributed Consistent Branching (DCB) creates these alternative versions of the data consistently on all replicas; Distributed Consistent Cutting (DCC) consistently selects one of the versions. The presented algorithms and protocols were validated in a database implementation. Several experiments demonstrate their applicability and help assess their behavior under varying conditions. / Data replication techniques introduce redundancy into a distributed system architecture that can help solve several of its persistent problems. In wide-area or mobile systems, a replication system must be able to deal with the presence of unreliable, high-latency links. Only asynchronous replication algorithms with weak-consistency guarantees can be deployed in these environments, as these algorithms decouple the local acceptance of changes to the replicated data from coordination with remote replicas. This dissertation proposes a framework for building weak-consistency replication systems that provides the application developer with causal consistency guarantees and mechanisms for handling concurrency. By presenting an integrated set of mechanisms, algorithms and protocols for capturing and disseminating changes to the replicated data, we show that causal consistency and concurrency handling can be implemented in an efficient and versatile manner. The framework is founded on a log of changes, which both acts as the core data structure for its distributed algorithms and protocols and serves as the database log that ensures the consistency of the local data replica. The causal consistency guarantees are complemented with two distributed algorithms that handle concurrent operations. Both algorithms are based on the observation that uncoordinated concurrent operations introduce a divergence of state in a replication system that can be modeled as the creation of version branches.
Distributed Consistent Branching (DCB) recreates these branches on all participating processes in a consistent manner. Distributed Consistent Cutting (DCC) selects one of the possible branches in a consistent and application-controllable manner and enforces a total causal order for all its operations. The contributed algorithms and protocols were validated in a database system implementation, and several experiments assess the behavior of these algorithms and protocols under varying conditions.
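
The version-branching view of concurrency described above can be pictured with a small sketch: each log entry names its causal parent, and two entries sharing a parent are concurrent, opening two version branches that Distributed Consistent Branching would recreate identically at every replica. The data structure and names below are illustrative assumptions, not the dissertation's actual protocol.

```python
# Illustrative sketch: a causal change log in which uncoordinated concurrent
# updates create version branches, in the spirit of the DCB idea above.
from collections import defaultdict

class ChangeLog:
    def __init__(self):
        self.entries = {}                      # entry_id -> (parent_id, op)
        self.children = defaultdict(list)      # parent_id -> [entry_id]

    def append(self, entry_id, parent_id, op):
        self.entries[entry_id] = (parent_id, op)
        self.children[parent_id].append(entry_id)

    def branches(self):
        """Branch points: entries with more than one child, i.e. places where
        concurrent updates diverged into separate versions of the data."""
        return {p: kids for p, kids in self.children.items() if len(kids) > 1}

log = ChangeLog()
log.append("a1", None, "set x=1")      # initial update
log.append("b1", "a1", "set x=2")      # replica B, had seen a1
log.append("c1", "a1", "set x=3")      # replica C, had also seen only a1 -> concurrent
print(log.branches())                  # {'a1': ['b1', 'c1']}: two version branches
```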
59

Stimulus generalization and matching in concurrent variable interval schedules

Larsson, Eric V January 2011 (has links)
Photocopy of typescript. / Digitized by Kansas Correctional Industries
60

A comparison of omission training with constant or changing reinforcers vs. extinction: response reduction and recovery

Vatterott, Madeleine Kay. January 1984 (has links)
Call number: LD2668 .T4 1984 V37 / Master of Science
