1

Real-Time Telemetry Data Interface to Graphics Workstation

Sidorovich, Amy 11 1900
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / The demand for additional computing power and more sophisticated graphics displays to strengthen real-time flight testing prompted the Real-time Systems Team to turn to graphics workstations. In order to drive graphics displays with real-time data, the questions became "What interface should we use?" and "How do we integrate workstations into our existing telemetry processing system?" This paper discusses the interface to, and integration of, graphics workstations with the Real-time Telemetry Processing System III (RTPS III).
2

THE RELIABILITY OF SURFACE ASSEMBLAGES IN ARCHAEOLOGICAL INTERPRETATION

Gumbs, Vernice Pamela January 2000
No description available.
3

Optimizing recovery protocols for replicated database systems

García Muñoz, Luis Hector 02 September 2013
En la actualidad, el uso de tecnologías de información y sistemas de cómputo tiene una gran influencia en la vida diaria. Dentro de los sistemas informáticos actualmente en uso, son de gran relevancia los sistemas distribuidos por la capacidad que pueden tener para escalar, proporcionar soporte para la tolerancia a fallos y mejorar el desempeño de aplicaciones y proporcionar alta disponibilidad. Los sistemas replicados son un caso especial de los sistemas distribuidos. Esta tesis está centrada en el área de las bases de datos replicadas debido al uso extendido que en el presente se hace de ellas, requiriendo características como: bajos tiempos de respuesta, alto rendimiento en los procesos, balanceo de carga entre las réplicas, consistencia e integridad de datos y tolerancia a fallos. En este contexto, el desarrollo de aplicaciones utilizando bases de datos replicadas presenta dificultades que pueden verse atenuadas mediante el uso de servicios de soporte a más bajo nivel tales como servicios de comunicación y pertenencia. El uso de los servicios proporcionados por los sistemas de comunicación de grupos permite ocultar los detalles de las comunicaciones y facilita el diseño de protocolos de replicación y recuperación. En esta tesis, se presenta un estudio de las alternativas y estrategias empleadas en los protocolos de replicación y recuperación en las bases de datos replicadas. También se revisan diferentes conceptos sobre los sistemas de comunicación de grupos y sincronía virtual. Se caracterizan y clasifican diferentes tipos de protocolos de replicación con respecto a la interacción o soporte que pudieran dar a la recuperación; sin embargo, el enfoque se dirige a los protocolos basados en sistemas de comunicación de grupos. Debido a que los sistemas comerciales actuales permiten a los programadores y administradores de sistemas de bases de datos renunciar en alguna medida a la consistencia con la finalidad de aumentar el rendimiento, es importante determinar el nivel de consistencia necesario. En el caso de las bases de datos replicadas, la consistencia está muy relacionada con el nivel de aislamiento establecido entre las transacciones. Una de las propuestas centrales de esta tesis es un protocolo de recuperación para un protocolo de replicación basado en certificación. Los protocolos de replicación de base de datos basados en certificación proveen buenas bases para el desarrollo de sus respectivos protocolos de recuperación cuando se utiliza el nivel de aislamiento snapshot. Para tal nivel de aislamiento no se requiere que los readsets sean transferidos entre las réplicas ni revisados en la fase de certificación y, ya que estos protocolos mantienen un histórico de la lista de writesets que es utilizada para certificar las transacciones, este histórico provee la información necesaria para transferir el estado perdido por la réplica en recuperación. Se hace un estudio del rendimiento del protocolo de recuperación básico y de la versión optimizada en la que se compacta la información a transferir. Se presentan los resultados obtenidos en las pruebas de la implementación del protocolo de recuperación en el middleware de soporte. La segunda propuesta está basada en aplicar el principio de compactación de la información de recuperación en un protocolo de recuperación para los protocolos de replicación basados en votación débil.
El objetivo es minimizar el tiempo necesario para transferir y aplicar la información perdida por la réplica en recuperación, obteniendo con esto un protocolo de recuperación más eficiente. Se ha verificado el buen desempeño de este algoritmo a través de una simulación. Para efectuar la simulación se ha hecho uso del entorno de simulación Omnet++. En los resultados de los experimentos puede apreciarse que este protocolo de recuperación tiene buenos resultados en múltiples escenarios. Finalmente, se presenta la verificación de la corrección de ambos algoritmos de recuperación en el Capítulo 5. / Nowadays, information technology and computing systems have a great influence on our lives. Among current computer systems, distributed systems are one of the most important because of their scalability, fault tolerance, performance improvements and high availability. Replicated systems are a specific case of distributed systems. This Ph.D. thesis is centered on the replicated database field due to its widespread usage, which demands, among other properties, low response times, high throughput, load balancing among replicas, data consistency, data integrity and fault tolerance. In this scope, the development of applications that use replicated databases raises some problems that can be reduced using other fault-tolerant building blocks, such as group communication and membership services. Thus, the usage of the services provided by group communication systems (GCS) hides several communication details, simplifying the design of replication and recovery protocols. This Ph.D. thesis surveys the alternatives and strategies being used in the replication and recovery protocols for database replication systems. It also summarizes different concepts about group communication systems and virtual synchrony. As a result, the thesis provides a classification of database replication protocols according to their support for (and interaction with) recovery protocols, always assuming that both kinds of protocol rely on a GCS. Since current commercial DBMSs allow programmers and database administrators to sacrifice consistency with the aim of improving performance, it is important to select the appropriate level of consistency. Regarding (replicated) databases, consistency is strongly related to the isolation levels being assigned to transactions. One of the main proposals of this thesis is a recovery protocol for a replication protocol based on certification. Certification-based database replication protocols provide a good basis for the development of their recovery strategies when a snapshot isolation level is assumed. At that level, readsets are not needed in the validation step. As a result, they do not need to be transmitted to other replicas. Additionally, these protocols hold a writeset list that is used in the certification/validation step. That list maintains the set of writesets needed by the recovery protocol. This thesis evaluates the performance of a recovery protocol based on the writeset list transfer (basic protocol) and of an optimized version that compacts the information to be transferred. The second proposal applies the compaction principle to a recovery protocol designed for weak-voting replication protocols. Its aim is to minimize the time needed for transferring and applying the writesets lost by the recovering replica, thereby obtaining an efficient recovery. The performance of this recovery algorithm has been checked by implementing a simulator.
To this end, the Omnet++ simulation framework has been used. The simulation results confirm that this recovery protocol provides good results in multiple scenarios. Finally, the correctness of both recovery protocols is also justified and presented in Chapter 5. / García Muñoz, LH. (2013). Optimizing recovery protocols for replicated database systems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31632 / TESIS
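The compaction idea described in this abstract can be pictured with a short, hypothetical sketch (the function name and data layout below are invented for illustration and are not the thesis's actual protocol): the writesets a recovering replica has missed are merged so that each data item is transferred only once, with its most recent value.

```python
# Hypothetical sketch of writeset-list compaction for recovery: instead of shipping
# every missed writeset to the recovering replica, merge them so that each data item
# appears once with its latest value, shrinking the state-transfer payload.

def compact_writesets(missed_writesets):
    """missed_writesets: ordered list of {item: value} dicts, oldest first."""
    compacted = {}
    for ws in missed_writesets:   # later writesets overwrite earlier values
        compacted.update(ws)
    return compacted              # single, smaller payload to transfer

# Three missed writesets collapse into two item transfers.
missed = [{"x": 1, "y": 2}, {"x": 5}, {"y": 9}]
print(compact_writesets(missed))  # {'x': 5, 'y': 9}
```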
4

High Performance Inter-kernel Communication and Networking in a Replicated-kernel Operating System

Ansary, B M Saif 20 January 2016
Modern computer hardware platforms are moving towards high core-count and heterogeneous Instruction Set Architecture (ISA) processors to achieve improved performance, as single-core performance has reached its limit. These trends put the current monolithic SMP operating system (OS) under scrutiny in terms of scalability and portability. Proper pairing of computing workloads with computing resources has become increasingly arduous with traditional software architecture. One of the most promising emerging operating system architectures is the Multi-kernel. Multi-kernels not only address scalability issues, but also inherently support heterogeneity; furthermore, they provide an easy way to map computing workloads to the correct type of processing resource in the presence of heterogeneity. Multi-kernels do so by partitioning resources, running independent kernel instances, and cooperating amongst themselves to present a unified view of the system to the application. Popcorn is one of the most prominent multi-kernels today; it is unique in that it runs multiple Linux instances on different cores or groups of cores and provides a unified view of the system, i.e., a Single System Image (SSI). This thesis presents four contributions. First, it introduces a filesystem for Popcorn, a vital part of providing an SSI. Popcorn supports thread/process migration, which requires migrating file descriptors, a capability not provided by traditional filesystems or by popular distributed file systems; this work therefore proposes a scalable, messaging-based file descriptor migration and consistency protocol for Popcorn. Second, multi-kernel OSs rely heavily on a fast, low-latency messaging layer to be scalable. Messaging is even more important in heterogeneous systems where different types of cores are on different islands with no shared memory. Thus, another contribution proposes a fast, low-latency messaging layer to enable communication among heterogeneous processor islands for Heterogeneous Popcorn. With advances in networking technology, the newest Ethernet technologies can support up to 40 Gbps of bandwidth, but due to scalability issues in monolithic kernels, the number of connections served per second does not scale with this increase in speed. Therefore, the third and fourth contributions address this problem with Snap Bean, a virtual network device, and Angel, an opportunistic load balancer for Popcorn's network system. With the messaging layer, Popcorn achieves over a 30% performance benefit over OpenCL and the Intel offloading technique (LEO). With NetPopcorn, we achieve 7 to 8 times better performance than vanilla Linux and 2 to 5 times better than the state-of-the-art Affinity Accept. / Master of Science
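To make the file-descriptor-migration idea more concrete, here is a rough, purely illustrative sketch; every name and message field is hypothetical, and Popcorn's real protocol lives inside the kernel and runs over its own low-latency messaging layer rather than anything like this.

```python
# Illustrative sketch (assumed names, not Popcorn's API): when a thread migrates to
# another kernel instance, its open-file state is shipped as a message and rebuilt
# on the destination kernel so the thread can keep using its descriptors.

from dataclasses import dataclass, field

@dataclass
class FdEntry:
    path: str
    offset: int
    flags: str

@dataclass
class FdMigrationMsg:
    pid: int
    origin_kernel: int
    fd_table: dict = field(default_factory=dict)   # fd number -> FdEntry

class KernelInstance:
    def __init__(self, kid):
        self.kid = kid
        self.migrated_fd_tables = {}               # pid -> reconstructed fd table

    def send(self, dest, msg):                     # stand-in for the messaging layer
        dest.receive(msg)

    def receive(self, msg):
        # Recreate the descriptors locally for the migrated thread.
        self.migrated_fd_tables[msg.pid] = dict(msg.fd_table)

k0, k1 = KernelInstance(0), KernelInstance(1)
msg = FdMigrationMsg(pid=42, origin_kernel=0,
                     fd_table={3: FdEntry("/tmp/log", offset=128, flags="O_RDWR")})
k0.send(k1, msg)
print(k1.migrated_fd_tables[42][3].path)           # /tmp/log
```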
5

A METHOD FOR SELECTIVE UPDATE PROPAGATION IN REPLICATED DATABASES

Jolson, Michael Jacob January 2008
No description available.
6

A Study of Replicated and Distributed Web Content

John, Nitin Abraham 10 August 2002
" With the increase in traffic on the web, popular web sites get a large number of requests. Servers at these sites are sometimes unable to handle the large number of requests and clients to such sites experience long delays. One approach to overcome this problem is the distribution or replication of content over multiple servers. This approach allows for client requests to be distributed to multiple servers. Several techniques have been suggested to direct client requests to multiple servers. We discuss these techniques. With this work we hope to study the extent and method of content replication and distribution at web sites. To understand the distribution and replication of content we ran client programs to retrieve headers and bodies of web pages and observed the changes in them over multiple requests. We also hope to understand possible problems that could face clients to such sites due to caching and standardization of newer protocols like HTTP/1.1. The main contribution of this work is to understand the actual implementation of replicated and distributed content on multiple servers and its implication for clients. Our investigations showed issues with replicated and distributed content and its effects on caching due to incorrect identifers being send by different servers serving the same content. We were able to identify web sites doing application layer switching mechanisms like DNS and HTTP redirection. Lower layers of switching needed investigation of the HTTP responses from servers, which were hampered by insuffcient tags send by servers. We find web sites employ a large amount of distribution of embedded content and its ramifcations on HTTP/1.1 need further investigation. "
7

UpRight fault tolerance

Clement, Allen Grogan 13 November 2012
Experiences with computer systems indicate an inconvenient truth: computers fail, and they fail in interesting ways. Although using redundancy to protect against fail-stop failures is common practice, non-fail-stop computer and network failures occur for a variety of reasons, including power outages, disk or memory corruption, NIC malfunction, user error, operating system and application bugs or misconfiguration, and many others. The impact of these failures can be dramatic, ranging from service unavailability to stranding airplane passengers on the runway to companies closing. While high-stakes embedded systems have embraced Byzantine fault tolerant techniques, general purpose computing continues to rely on techniques that are fundamentally crash tolerant. In a general purpose environment, the current best-practice response to non-fail-stop failures can charitably be described as pragmatic: identify a root cause and add checksums to prevent that error from happening again in the future. Pragmatic responses have proven effective for patching holes and protecting against faults once they have occurred; unfortunately, the initial damage has already been done, and it is difficult to say whether the patches made to address previous faults will protect against future failures. We posit that an end-to-end solution based on Byzantine fault tolerant (BFT) state machine replication is an efficient and deployable alternative to the current ad hoc approaches favored in general purpose computing. The replicated state machine approach ensures that multiple copies of the same deterministic application execute requests in the same order and provides end-to-end assurance that independent transient failures will not lead to unavailability or incorrect responses. An efficient and effective end-to-end solution covers faults that have already been observed as well as failures that have not yet occurred, and it provides structural confidence that developers won't have to track down yet another failure caused by some unpredicted memory, disk, or network behavior. While the promise of end-to-end failure protection is intriguing, significant technical and practical challenges currently prevent adoption in general purpose computing environments. On the technical side, it is important that end-to-end solutions maintain the performance characteristics of deployed systems: if end-to-end solutions dramatically increase computing requirements, dramatically reduce throughput, or dramatically increase latency during normal operation, then end-to-end techniques are a non-starter. On the practical side, it is important that end-to-end approaches be both comprehensible and easy to incorporate: if the cost of end-to-end solutions is rewriting an application or trusting intricate and arcane protocols, then end-to-end solutions will not be adopted. In this thesis we show that BFT state machine replication can be used in deployed systems. Reaching this goal requires us to address both the technical and practical challenges previously mentioned. We revisit disparate research results from the last decade and tweak, refine, and revise the core ideas so that they fit together into a coherent whole. Addressing the practical concerns requires us to simplify the process of incorporating BFT techniques into legacy applications. / text
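A minimal sketch of the replicated state machine idea is shown below. It assumes the hard part (Byzantine agreement on the request order) is already solved and only illustrates why deterministic execution plus f+1 matching replies masks faulty replicas; it is not UpRight's implementation.

```python
# Minimal replicated-state-machine sketch: every replica applies the same requests in
# the same order, and a client accepts a result once f+1 replicas report it, so up to
# f faulty replicas cannot mislead the client. Agreement on the order is assumed.

from collections import Counter

class Replica:
    def __init__(self):
        self.state = {}

    def execute(self, request):
        op, key, *val = request
        if op == "put":
            self.state[key] = val[0]
            return "ok"
        return self.state.get(key)

def client_accepts(replies, f):
    """Return the reply vouched for by at least f+1 replicas, else None."""
    value, count = Counter(replies).most_common(1)[0]
    return value if count >= f + 1 else None

f = 1
replicas = [Replica() for _ in range(3 * f + 1)]    # 3f+1 replicas tolerate f faults
ordered_log = [("put", "x", 7), ("get", "x")]
for request in ordered_log:
    replies = [r.execute(request) for r in replicas]
print(client_accepts(replies, f))                    # 7
```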
8

Controlling IER, EER, and FDR In Replicated Regular Two-Level Factorial Designs

Akinlawon, Oludotun J Unknown Date
No description available.
9

The application and interpretation of the two-parameter item response model in the context of replicated preference testing

Button, Zach January 1900
Master of Science / Statistics / Suzanne Dubnicka / Preference testing is a popular method of determining consumer preferences for a variety of products in areas such as sensory analysis, animal welfare, and pharmacology. However, many prominent models for this type of data do not allow different probabilities of preferring one product over the other for each individual consumer, called overdispersion, which intuitively exists in real-world situations. We investigate the Two-Parameter variation of the Item Response Model (IRM) in the context of replicated preference testing. Because the IRM is most commonly applied to multiple-choice testing, our primary focus is the interpretation of the model parameters with respect to preference testing and the evaluation of the model’s usefulness in this context. We fit a Bayesian version of the Two-Parameter Probit IRM (2PP) to two real-world datasets, Raisin Bran and Cola, as well as five hypothetical datasets constructed with specific parameter properties in mind. The values of the parameters are sampled via the Gibbs Sampler and examined using various plots of the posterior distributions. Next, several different models and prior distribution specifications are compared over the Raisin Bran and Cola datasets using the Deviance Information Criterion (DIC). The Two-Parameter IRM is a useful tool in the context of replicated preference testing, due to its ability to accommodate overdispersion, its intuitive interpretation, and its flexibility in terms of parameterization, link function, and prior specification. However, we find that this model brings computational difficulties in certain situations, some of which require creative solutions. Although the IRM can be interpreted for replicated preference testing scenarios, this data typically contains few replications, while the model was designed for exams with many items. We conclude that the IRM may provide little evidence for marketing decisions, and it is better-suited for exploring the nature of consumer preferences early in product development.
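For reference, a standard way of writing the two-parameter probit (2PP) model discussed above is given below; the preference-testing reading of the symbols follows the abstract, but the exact notation is assumed here rather than taken from the thesis.

```latex
% Two-parameter probit (2PP) item response model, standard form; the
% preference-testing interpretation of the symbols follows the abstract above.
\[
  P(Y_{ij} = 1 \mid \theta_i) = \Phi\bigl(a_j(\theta_i - b_j)\bigr)
\]
% Y_{ij} = 1 : consumer i prefers the target product on replication (item) j
% \theta_i   : latent preference of consumer i, allowing each consumer a different
%              preference probability and hence accommodating overdispersion
% a_j, b_j   : discrimination and location parameters of replication j
% \Phi       : standard normal CDF (the probit link)
```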
10

Snapple : A distributed, fault-tolerant, in-memory key-value store using Conflict-Free Replicated Data Types / Snapple : En distribuerad feltolerant nyckelvärdesdatabas i RAM-minnet baserad på konfliktfria replikerade datatyper

Stenberg, Johan January 2016
As services grow and receive more traffic, data resilience through replication becomes increasingly important. Modern large-scale Internet services such as Facebook, Google and Twitter serve millions of users concurrently. Replication is a vital component of distributed systems. Eventual consistency and Conflict-Free Replicated Data Types (CRDTs) are suggested as an alternative to strongly consistent systems. This thesis implements and evaluates Snapple, a distributed, fault-tolerant, in-memory key-value database based on CRDTs running on the Java Virtual Machine. Snapple supports two kinds of CRDTs: an optimized implementation of the OR-Set, and version vectors. Performance measurements show that the Snapple system is significantly faster than Riak, a persistent database based on CRDTs, but has 2.5x to 5x lower throughput than Redis, a popular in-memory key-value database written in C. Snapple is a prototype implementation but might be a viable alternative to Redis if the user wants the consistency guarantees that CRDTs provide. / När internet-baserade tjänster växer och får mer trafik blir datareplikering allt viktigare. Moderna storskaliga internet-baserade tjänster såsom Facebook, Google och Twitter hanterar miljoner förfrågningar från användare samtidigt. Datareplikering är en vital komponent av distribuerade system. Eventuell synkronisering och Konfliktfria Replikerade Datatyper (CRDTs) är föreslagna som alternativ till direkt synkronisering. Denna uppsats implementerar och evaluerar Snapple, en distribuerad feltolerant nyckelvärdesdatabas i RAM-minnet baserad på CRDTs och som exekverar på Javas virtuella maskin. Snapple stödjer två sorters CRDTs, den optimerade implementationen av observera-ta-bort setet och versionsvektorer. Prestanda-mätningar visar att Snapple-systemet är mycket snabbare än Riak, en persistent databas baserad på CRDTs. Snapple visar sig ha 2.5x - 5x lägre genomströmning än Redis, en populär nyckelvärdesdatabas i RAM-minnet skriven i C. Snapple är en prototyp men CRDT-stödda system kan vara ett värdigt alternativ till Redis om användaren vill ta del av synkroniseringsgarantierna som CRDTs tillhandahåller.
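A bare-bones, unoptimized observed-remove set (OR-Set) can be sketched as follows; Snapple's optimized OR-Set with version vectors stores the same information far more compactly, so this is only an illustration of the semantics (a concurrent add wins over a remove that did not observe it).

```python
# Unoptimized OR-Set sketch: adds carry unique tags, removes delete only the tags
# they have observed, and merge is a plain union, which makes merging commutative,
# associative, and idempotent, i.e. a state-based CRDT.

import uuid

class ORSet:
    def __init__(self):
        self.adds = set()      # (element, unique_tag) pairs observed as added
        self.removes = set()   # tags whose additions were observed, then removed

    def add(self, element):
        self.adds.add((element, uuid.uuid4().hex))

    def remove(self, element):
        # "Observed-remove": only tags this replica has seen are removed.
        self.removes |= {tag for (e, tag) in self.adds if e == element}

    def contains(self, element):
        return any(e == element and tag not in self.removes
                   for (e, tag) in self.adds)

    def merge(self, other):
        self.adds |= other.adds
        self.removes |= other.removes

# A concurrent add wins over a remove that did not observe it.
a, b = ORSet(), ORSet()
a.add("x"); b.merge(a)      # both replicas know about the first add
a.remove("x"); b.add("x")   # concurrent remove on a, fresh add on b
a.merge(b); b.merge(a)
print(a.contains("x"), b.contains("x"))   # True True
```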
