  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Robust Consistency Checking for Modern Filesystems

Sun, Kuei 19 March 2013 (has links)
A runtime file system checker protects file-system metadata integrity. It checks the consistency of file system update operations before they are committed to disk, thus preventing corrupted updates from reaching the disk. In this thesis, we describe our experiences with building Brunch, a runtime checker for an emerging Linux file system called Btrfs. Btrfs supports many modern file-system features that pose challenges in designing a robust checker. We find that the runtime consistency checks need to be expressed clearly so that they can be reasoned about and implemented reliably, and thus we propose writing the checks declaratively. This approach reduces the complexity of the checks, ensures their independence, and helps identify the correct abstractions in the checker. It also shows how the checker can be designed to handle arbitrary file system corruption. Our results show that runtime consistency checking is still viable for complex, modern file systems.
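The declarative style the thesis argues for can be sketched as a table of independent, side-effect-free predicates evaluated against each metadata item before it is committed. The `ExtentItem` fields and the specific rules below are hypothetical simplifications for illustration, not Brunch's actual Btrfs checks:

```python
from dataclasses import dataclass

@dataclass
class ExtentItem:
    """Simplified stand-in for one piece of file-system metadata."""
    start: int    # logical start of the extent
    length: int   # extent length in bytes
    refs: int     # reference count recorded for the extent

# Each check is declarative: a name plus a side-effect-free predicate.
# Because the checks are independent, each can be reasoned about in isolation.
CHECKS = [
    ("extent length is positive", lambda e: e.length > 0),
    ("extent does not wrap the address space", lambda e: e.start + e.length <= 2**64),
    ("reference count is at least one", lambda e: e.refs >= 1),
]

def check_extent(extent):
    """Return the names of all violated checks (empty list = consistent)."""
    return [name for name, pred in CHECKS if not pred(extent)]

violations = check_extent(ExtentItem(start=4096, length=0, refs=1))
```

A runtime checker in this style would run such predicates over each pending update and refuse to commit a transaction that produces a non-empty violation list.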
102

Asymptotic behavior of Bayesian nonparametric procedures /

Xing, Yang, January 2009 (has links) (PDF)
Doctoral dissertation (summary), Umeå: Sveriges lantbruksuniv., 2009. / Accompanied by 6 papers.
103

A critical project : a thesis submitted in partial fulfilment of the requirements for the degree of Master of Arts in Philosophy, Department of Philosophy and Religious Studies, University of Canterbury /

Rowe, T. S. January 2008 (has links)
Thesis (M.A.)--University of Canterbury, 2008. / Typescript (photocopy). "March 2008." Includes bibliographical references (leaves 90-95). Also available via the World Wide Web.
104

Counterfactual thinking and cognitive consistency

Uldall, Brian Robert, January 2005 (has links)
Thesis (Ph. D.)--Ohio State University, 2005. / Title from first page of PDF file. Includes bibliographical references (p. 97-107).
105

Redefining early child neglect: subthreshold pathways to non-optimal development /

Akai, Carol Elizabeth. January 2007 (has links)
Thesis (Ph. D.)--University of Notre Dame, 2007. / Thesis directed by John G. Borkowski for the Department of Psychology. "August 2007." Includes bibliographical references (leaves 79-86).
106

Cohérence et relations de travail / Consistency in labor law

Lam, Hélène 15 May 2013 (has links)
The originality of consistency in labor law lies in the variation of its intensity with the degree of freedom of the consent expressed or of the behavior adopted, which determines whether the other party's expectation that it be respected is legitimate and reasonable. While the employer is held to genuine consistency, the employee, owing to his subordinate position, is granted a right to contradiction. Although it is understandable that subordination may attenuate the binding effect of the employee's behavior, it is not desirable for contractual stability that the employee be able to release himself from certain of his obligations through his conduct.
The duty of consistency today suffers from a merely implicit existence, wrongly grounded in good faith or abuse of right, which makes the sanctions for contradictions, both procedural and substantive, unpredictable. Enshrining a general principle of consistency in labor law would regulate the behavior of both the employee, who is too often permitted to contradict himself, and the employer, who is still allowed certain contradictions, thereby strengthening the mutual trust necessary for the durability of the employment relationship.
107

Towards a holistic framework for software artefact consistency management

Pete, Ildiko January 2017 (has links)
A software system is represented by different software artefacts ranging from requirements specifications to source code. As the system evolves, artefacts are often modified at different rates and times resulting in inconsistencies, which in turn can hinder effective communication between stakeholders, and the understanding and maintenance of systems. The problem of the differential evolution of heterogeneous software artefacts has not been sufficiently addressed to date as current solutions focus on specific sets of artefacts and aspects of consistency management and are not fully automated. This thesis presents the concept of holistic artefact consistency management and a proof-of-concept framework, ACM, which aim to support the consistent evolution of heterogeneous software artefacts while minimising the impact on user choices and practices and maximising automation. The ACM framework incorporates traceability, change impact analysis, change detection, consistency checking and change propagation mechanisms and is designed to be extensible. The thesis describes the design, implementation and evaluation of the framework, and an approach to automate trace link creation using machine learning techniques. The framework evaluation uses six open source systems and suggests that managing the consistency of heterogeneous artefacts may be feasible in practical scenarios.
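The staleness detection at the heart of such a framework can be sketched with trace links and modification timestamps. The artefact names, the link structure, and the "downstream older than upstream" rule below are illustrative assumptions, not the ACM framework's actual design:

```python
# Artefact -> last-modified time (epoch seconds); values are made up.
artefacts = {
    "requirements.md": 1000,
    "design.md":       1500,
    "parser.py":       900,
}

# (upstream, downstream) trace links: the downstream artefact is
# expected to reflect changes made to the upstream one.
trace_links = [
    ("requirements.md", "design.md"),
    ("design.md",       "parser.py"),
]

def find_suspect_links(artefacts, links):
    """Flag links whose downstream artefact is older than its upstream
    one, i.e. an upstream change that may not have been propagated."""
    return [(up, down) for up, down in links
            if artefacts[down] < artefacts[up]]

suspects = find_suspect_links(artefacts, trace_links)
```

A real system would combine this with change impact analysis to decide which flagged links actually require propagation rather than treating every timestamp skew as an inconsistency.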
108

Memory consistency directed cache coherence protocols for scalable multiprocessors

Elver, Marco Iskender January 2016 (has links)
The memory consistency model, which formally specifies the behavior of the memory system, is used by programmers to reason about parallel programs. From a hardware design perspective, weaker consistency models permit various optimizations in a multiprocessor system: this thesis focuses on designing and optimizing the cache coherence protocol for a given target memory consistency model. Traditional directory coherence protocols are designed to be compatible with the strictest memory consistency model, sequential consistency (SC). When they are used for chip multiprocessors (CMPs) that provide more relaxed memory consistency models, such protocols turn out to be unnecessarily strict. Usually, this comes at the cost of scalability, in terms of per-core storage due to sharer tracking, which poses a problem with the increasing number of cores in today's CMPs, most of which are no longer sequentially consistent. The recent convergence towards programming-language-based relaxed memory consistency models has sparked renewed interest in lazy cache coherence protocols. These protocols exploit synchronization information by enforcing coherence only at synchronization boundaries via self-invalidation. As a result, such protocols do not require sharer tracking, which benefits scalability. On the downside, such protocols are only readily applicable to a restricted set of consistency models, such as Release Consistency (RC), which expose synchronization information explicitly. In particular, existing architectures with stricter consistency models (such as x86) cannot readily make use of lazy coherence protocols without either adapting the protocol to satisfy the stricter consistency model, or changing the architecture's consistency model to (a variant of) RC, typically at the expense of backward compatibility. The first part of this thesis explores both these options, with a focus on a practical approach satisfying backward compatibility. 
Because of the wide adoption of Total Store Order (TSO) and its variants in x86 and SPARC processors, and the existing parallel programs written for these architectures, we first propose TSO-CC, a lazy cache coherence protocol for the TSO memory consistency model. TSO-CC does not track sharers and instead relies on self-invalidation and detection of potential acquires (in the absence of explicit synchronization), using per-cache-line timestamps to efficiently and lazily satisfy the TSO memory consistency model. Our results show that TSO-CC achieves, on average, performance comparable to a MESI directory protocol, while TSO-CC's storage overhead per cache line scales logarithmically with increasing core count. Next, we propose an approach for the x86-64 architecture, which is a compromise between retaining the original consistency model and using a more storage-efficient lazy coherence protocol. First, we propose a mechanism to convey synchronization information via a simple ISA extension, while retaining backward compatibility with legacy code and older microarchitectures. Second, we propose RC3 (based on TSO-CC), a scalable cache coherence protocol for RCtso, the resulting memory consistency model. RC3 does not track sharers and relies on self-invalidation on acquires. To satisfy RCtso efficiently, the protocol reduces self-invalidations transitively using per-L1 timestamps only. RC3 outperforms a conventional lazy RC protocol by 12%, achieving performance comparable to a MESI directory protocol for RC-optimized programs. RC3's storage overhead per cache line scales logarithmically with increasing core count and reduces on-chip coherence storage overheads by 45% compared to TSO-CC. Finally, it is imperative that hardware adheres to the promised memory consistency model. Indeed, consistency-directed coherence protocols cannot be verified against conventional coherence definitions (e.g. SWMR), and few existing verification methodologies apply. 
Furthermore, as the full consistency model is used as a specification, the protocols' interaction with other components (e.g. the pipeline) of a system must not be neglected in the verification process. Therefore, verifying a system with such protocols in the context of interacting components is even more important than before. One common way to do this is by executing tests, where specific threads of instruction sequences are generated and their executions are checked for adherence to the consistency model. It would be extremely beneficial to execute such tests under simulation, i.e. while the functional design implementation of the hardware is being prototyped. Most prior verification methodologies, however, target post-silicon environments and would be too slow when used for simulation-based memory consistency verification. We propose McVerSi, a test generation framework for fast memory consistency verification of a full-system design implementation under simulation. Our primary contribution is a Genetic Programming (GP) based approach to memory consistency test generation, which relies on a novel crossover function that prioritizes memory operations contributing to non-determinism, thereby increasing the probability of uncovering memory consistency bugs. To guide tests towards exercising as much logic as possible, the simulator's reported coverage is used as the fitness function. Furthermore, we increase test throughput by making the test workload simulation-aware. We evaluate our proposed framework using the gem5 cycle-accurate simulator in full-system mode with Ruby (with configurations that use gem5's MESI protocol, and our proposed TSO-CC together with an out-of-order pipeline). We discover two new bugs in the MESI protocol due to the faulty interaction of the pipeline and the cache coherence protocol, highlighting that even conventional protocols should be verified rigorously in the context of a full system. 
Crucially, these bugs would not have been discovered through individual verification of the pipeline or the coherence protocol. We study 11 bugs in total. Our GP-based test generation approach finds all bugs consistently, therefore providing much higher guarantees compared to alternative approaches (pseudo-random test generation and litmus tests).
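The self-invalidation-with-timestamps mechanism at the core of TSO-CC can be illustrated with a deliberately tiny model: there is no sharer tracking, and on a (potential) acquire the reader itself discards any cached line older than the acquired location's last write. This is a toy built for illustration, far simpler than the actual protocol:

```python
class SharedMemory:
    """Backing store with a global logical clock; each write is stamped."""
    def __init__(self):
        self.data, self.ts, self.clock = {}, {}, 0
    def write(self, addr, value):
        self.clock += 1
        self.data[addr], self.ts[addr] = value, self.clock

class L1Cache:
    def __init__(self, mem):
        self.mem, self.lines = mem, {}   # addr -> (value, fill_timestamp)
    def read(self, addr):
        if addr not in self.lines:       # miss: fill from shared memory
            self.lines[addr] = (self.mem.data[addr], self.mem.ts[addr])
        return self.lines[addr][0]
    def acquire(self, addr):
        """Self-invalidate every line filled before the acquired location
        was last written (conservative, but needs no sharer tracking)."""
        fresh_ts = self.mem.ts[addr]
        self.lines = {a: (v, t) for a, (v, t) in self.lines.items()
                      if t >= fresh_ts}
        return self.read(addr)

mem = SharedMemory()
mem.write("data", 1); mem.write("flag", 0)
reader = L1Cache(mem)
reader.read("data"); reader.read("flag")    # both lines now cached
mem.write("data", 2); mem.write("flag", 1)  # producer publishes new data
stale = reader.read("data")                 # cached copy: no one invalidated it
fresh_flag = reader.acquire("flag")         # acquire drops the stale lines
fresh = reader.read("data")                 # refetched from memory
```

Without any sharer directory, the plain read returns the stale value (1), while the acquire forces self-invalidation so the subsequent read observes the published value (2).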
109

The potential use of bar force sensor measurements for control in low consistency refining

Harirforoush, Reza 30 January 2018 (has links)
A crucial parameter in the production of mechanical pulp through refining is energy consumption. Although low consistency (LC) refining has been shown to be more energy efficient than conventional high consistency refining, the degradation of mechanical properties of the end-product paper due to fiber cutting has limited the widespread adoption of LC refining. In conventional control strategies, the onset of fiber cutting is determined by post-refining measurement of pulp properties which does not enable rapid in-process adjustment of refiner operation in response to the onset of fiber cutting. In this dissertation, we exploit a piezoelectric force sensor to detect the onset of fiber cutting in real time. Detection of the onset of fiber cutting is potentially beneficial in low consistency refining as part of a control system to reduce fiber cutting and increase energy efficiency. The sensor has a probe which replaces a short length of a refiner bar, enabling measurement of normal and shear forces applied to pulp fibers by the refiner bars. The custom-designed sensors are installed in an AIKAWA pilot-scale 16-in. single-disc refiner at the Pulp and Paper Centre at the University of British Columbia. Trials were run using different pulp furnishes and refiner plate patterns at differing rotational speeds and a wide range of plate gaps. Pulp samples were collected at regular intervals and the pulp and paper properties were measured. We observe distinct transitions in the parameters that characterize the distributions of peak normal and shear forces which consistently correspond to the onset of fiber cutting. In addition, the analysis of the power spectrum of the sensor data shows that the magnitude of the dominant frequency can be used as an indicator of fiber cutting. The power of the time domain signal of the normal force is shown to be the most reliable and consistent indication of the onset of fiber cutting. 
This parameter consistently identifies the onset of fiber cutting, as determined by fiber length data, for all tested pulp furnishes and plate patterns. In addition, we investigate the effect of pulp furnish and plate pattern on bar forces in LC refining. For tested pulp furnishes and at all plate gaps, the plate with higher bar edge length (which has smaller bar width and groove width) results in lower mean peak normal and shear forces but higher mean coefficient of friction. Moreover, at the onset of fiber cutting, the mean peak normal force of softwood pulp is higher than that for hardwood pulp. Our results also show that the mean coefficient of friction at the onset of fiber cutting is a function of plate gap, pulp furnish, and plate pattern. / Graduate / 2019-01-09
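The two signal features discussed above, the power of the time-domain force signal and the magnitude of its dominant spectral component, can be sketched with a synthetic sensor trace. The sampling rate, the bar-passing frequency, and the noise level below are made-up illustration values, not those of the pilot-scale refiner:

```python
import numpy as np

fs = 10_000                        # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
bar_passing_hz = 600               # hypothetical bar-passing frequency

# Synthetic normal-force signal: periodic bar impacts plus sensor noise.
rng = np.random.default_rng(0)
force = (2.0 * np.sin(2 * np.pi * bar_passing_hz * t)
         + 0.3 * rng.standard_normal(t.size))

# Feature 1: power of the time-domain signal.
signal_power = float(np.mean(force ** 2))

# Feature 2: dominant frequency of the spectrum (skipping the DC bin).
spectrum = np.abs(np.fft.rfft(force)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
dominant = float(freqs[np.argmax(spectrum[1:]) + 1])
```

A control scheme of the kind motivated above would track how these features change as the plate gap closes and flag the distinct transition associated with the onset of fiber cutting.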
110

Segmentation in a Distributed Real-Time Main-Memory Database

Mathiason, Gunnar January 2002 (has links)
To achieve better scalability, a fully replicated, distributed, main-memory database is divided into subparts, called segments. Segments may have individual degrees of redundancy and other properties that can be used for replication control. Segmentation is examined as an opportunity to decrease replication effort, lower memory requirements, and shorten node recovery times. Typical usage scenarios are distributed databases with many nodes where only a small number of the nodes share information. We present a framework for virtual full replication that implements segments with scheduled replication of updates between sharing nodes. Selective replication control needs information about the application semantics, which is specified using segment properties, including consistency classes and other properties. We define a syntax for specifying the application semantics and segment properties for the segmented database. In particular, properties of segments that are subject to hard real-time constraints must be specified. We also analyze the potential improvements for such an architecture.
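The idea of segments with per-segment replication properties can be sketched as follows; the property names, segment names, and selection rule are illustrative assumptions, not the thesis's actual specification syntax:

```python
# Each segment declares which nodes share it, its consistency class,
# and its degree of redundancy (all values here are made up).
segments = {
    "sensor_readings": {"nodes": {"n1", "n2"},
                        "consistency": "eventual", "redundancy": 2},
    "flight_plan":     {"nodes": {"n1", "n2", "n3"},
                        "consistency": "immediate", "redundancy": 3},
}

def replication_targets(segment, origin):
    """An update is propagated only to the nodes sharing the segment,
    not to every node in the (virtually fully replicated) database."""
    return segments[segment]["nodes"] - {origin}

targets = replication_targets("sensor_readings", origin="n1")
```

Because `sensor_readings` is shared by only two nodes, an update made at `n1` is replicated to `n2` alone, which is how segmentation reduces replication effort compared to full replication across all nodes.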
