151

Time-triggered Runtime Verification of Real-time Embedded Systems

Navabpour, Samaneh January 2014 (has links)
In safety-critical real-time embedded systems, correctness is of primary concern, as even small transient errors may lead to catastrophic consequences. Due to the limitations of well-established methods such as verification and testing, recently runtime verification has emerged as a complementary approach, where a monitor inspects the system to evaluate the specifications at run time. The goal of runtime verification is to monitor the behavior of a system to check its conformance to a set of desirable logical properties. The literature of runtime verification mostly focuses on event-triggered solutions, where a monitor is invoked when a significant event occurs (e.g., change in the value of some variable used by the properties). At invocation, the monitor evaluates the set of properties of the system that are affected by the occurrence of the event. This type of monitor invocation has two main runtime characteristics: (1) jittery runtime overhead, and (2) unpredictable monitor invocations. These characteristics result in transient overload situations and over-provisioning of resources in real-time embedded systems and hence, may result in catastrophic outcomes in safety-critical systems. To circumvent the aforementioned defects in runtime verification, this dissertation introduces a novel time-triggered monitoring approach, where the monitor takes samples from the system with a constant frequency, in order to analyze the system's health. We describe the formal semantics of time-triggered monitoring and discuss how to optimize the sampling period using minimum auxiliary memory and path prediction techniques. Experiments on real-time embedded systems show that our approach introduces bounded overhead, predictable monitoring, less over-provisioning, and effectively reduces the involvement of the monitor at run time by using negligible auxiliary memory. 
We further advance our time-triggered monitor to component-based multi-core embedded systems by establishing an optimization technique that provides the invocation frequency of the monitors and the mapping of components to cores to minimize monitoring overhead. Lastly, we present RiTHM, a fully automated and open source tool which provides time-triggered runtime verification specifically for real-time embedded systems developed in C.
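The core idea of the abstract above can be sketched in a few lines: instead of being invoked by events, the monitor polls the system at a constant period and checks each sample against a property. This is a minimal, hypothetical illustration (all names are my own); the thesis's monitor additionally optimizes the sampling period and uses auxiliary memory so that variable changes between samples are not missed.

```python
import time

def time_triggered_monitor(sample_state, prop, period_s, n_samples):
    """Poll the system at a constant period and record property violations.

    `sample_state` reads the watched variables; `prop` is the predicate
    checked against each sample. Because the monitor, not the monitored
    program, decides when verification work happens, the per-period
    overhead is bounded and predictable.
    """
    violations = []
    for i in range(n_samples):
        snap = sample_state()
        if not prop(snap):
            violations.append((i, snap))
        time.sleep(period_s)  # stands in for a hardware timer on an RTOS
    return violations
```

For example, sampling a sensor reading with the invariant "value below 10" flags exactly the samples where the invariant fails, with one bounded check per period.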
152

Community agencies as participants in an alternative high school internship program

Allen, Joyce Kay January 1982 (has links)
The purposes of the study were: first, to identify what personnel in community agencies providing internships judge they contribute to a student intern's learning about (a) the role of the agency in the culture; (b) his competency to perform specific services; and (c) his understanding of his cultural heritage; and second, to identify how community agencies are influenced as they provide experiences for student interns. Data were collected by interviewing and analyzed by a descriptive-survey design.
Findings
Community agency personnel judged their agencies contributed to students' understandings of the roles of the community agencies in the culture by providing:
- orientation sessions, on-the-job training, and opportunities to observe a variety of adult, professional, and organizational contacts
- direct involvement in the agencies' functions
Community agency personnel judged the agencies contributed to students' competencies to perform specific services by students:
- assuming some of the agencies' responsibilities
- acquiring specific personal qualities and knowledge
- working in students' interest areas
Community agency personnel judged the agencies contributed to students' understandings of their cultural heritages by providing opportunities for students to associate and communicate with professionals. 
Community agency personnel judged the agencies did not accommodate and/or build upon cultural/ethnic differences of students; neither did they plan for students to learn more about themselves while in agencies. Community agency personnel judged the agencies were influenced as they provided experiences for student interns by receiving services, improving public services, and improving employees' morale.
Conclusions
Community agency personnel judge they contribute importantly to students' learning while the students fulfill internship responsibilities in agencies. Community agency personnel judge the cooperating agencies are influenced positively, but to a limited extent, as they provide experiences for student interns.
153

The role of protests as platforms for action on sustainability in the Kullu Valley, India

Lozecznik, Vanessa 28 October 2010 (has links)
The Himalayan region of India has a surprisingly fragile ecosystem, due in part to its geomorphic characteristics. In recent years the Himalayan ecosystem has been disturbed in various ways by both human and natural processes. Large developments threaten ecosystems in the area, modifying local land use and subsistence patterns. This has important implications for the sustainable livelihoods of the local communities. People in these areas are very concerned about their lack of inclusion in development decision-making processes and the negative effects of development on their livelihoods. Protest actions are spreading throughout Himachal Pradesh, not only to stop developments but also to re-shape how developments are taking place. The village of Jagatsukh was selected for in-depth study: it is where people started to organize around the Allain Duhangan Hydro Project and where the protest actions in relation to the project actually started. The overall purpose of this research was to understand the role of protests as a vehicle for public participation in decisions about resources and the environment, and to consider whether such movements are learning platforms for action on sustainability.
154

Standard English, the National Curriculum, and linguistic disadvantage : a sociolinguistic account of the careful speech of Tyneside adolescents

Crinson, James Richard January 1997 (has links)
This study investigates adolescents' use of standard English in situations requiring careful speech. An account is given of the historical, political, linguistic and educational development of the concept of standard English, with particular emphasis on spoken standard English. Popular conceptions of 'correct speech' are also considered, and all of these are related to requirements in the National Curriculum for England and Wales for the teaching of spoken standard English. This is related to a specific case, namely that of Tyneside English. This variety is described, and an account is given of the area and its main social and economic characteristics. Twenty-four adolescents are chosen from two schools which contrast sharply in terms of socioeconomic profile. The individuals are also selected to provide a spread of levels of attainment, and both sexes are equally represented. Phonological, grammatical, lexical and discourse variables are quantified using Labovian quantification techniques and approaches which involve counting non-standard variants over a period of time. Principal linguistic variables are: glottalised variants of (p), (t) and (k); non-standard verb and pronoun forms; non-standard lexical items; and certain kinds of discourse markers. This process provides evidence of the extent to which young people use or do not use spoken standard English. It is shown that in more careful speech young people from both more and less privileged backgrounds use non-standard variants at only low frequencies, but that within this relatively small number differences do exist: certain items are used mainly by less privileged boys, others mainly by girls, others by more privileged individuals in general. Use of non-standard speech is shown to differ for different groups at different linguistic levels. Important differences in gender and in social class emerge, but attainment also appears to have a significant bearing on children's use of spoken standard English. 
The study concludes by discussing pedagogical approaches which might increase awareness of issues associated with standard English.
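The Labovian counting described above amounts to computing, per speaker and per linguistic variable, the percentage of tokens realised with a non-standard variant. A minimal sketch, with a hypothetical data layout (the thesis's actual coding scheme is not reproduced here):

```python
from collections import defaultdict

def variant_rates(tokens):
    """Percentage of non-standard variants per linguistic variable.

    `tokens` is a list of (variable, is_standard) observations for one
    speaker, e.g. ('(t)', False) for a glottalised /t/.
    """
    totals = defaultdict(int)
    nonstd = defaultdict(int)
    for var, is_standard in tokens:
        totals[var] += 1
        if not is_standard:
            nonstd[var] += 1
    return {v: 100.0 * nonstd[v] / totals[v] for v in totals}
```

Rates computed this way can then be compared across groups (school, sex, attainment band) at each linguistic level.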
155

The play versus formal debate : a study of early years provision in Northern Ireland and Denmark

Walsh, Glenda January 2000 (has links)
No description available.
156

Formal Verification of Instruction Dependencies in Microprocessors

Shehata, Hazem January 2011 (has links)
In microprocessors, achieving an efficient utilization of the execution units is a key factor in improving performance. However, maintaining an uninterrupted flow of instructions is a challenge due to the data and control dependencies between instructions of a program. Modern microprocessors employ aggressive optimizations trying to keep their execution units busy without violating inter-instruction dependencies. Such complex optimizations may cause subtle implementation flaws that can be hard to detect using conventional simulation-based verification techniques. Formal verification is known for its ability to discover design flaws that may go undetected using conventional verification techniques. However, with formal verification come two major challenges. First, the correctness of the implementation needs to be defined formally. Second, formal verification is often hard to apply at the scale of realistic implementations. In this thesis, we present a formal verification strategy to guarantee that a microprocessor implementation preserves both data and control dependencies among instructions. Throughout our strategy, we address the two major challenges associated with formal verification: correctness and scalability. We address the correctness challenge by specifying our correctness in the context of generic pipelines. Unlike conventional pipeline hazard rules, we make no distinction between the data and control aspects. Instead, we describe the relationship between a producer instruction and a consumer instruction in a way such that both instructions can speculatively read their source operands, speculatively write their results, and go out of their program order during execution. In addition to supporting branch and value prediction, our correctness criteria allow the implementation to discard (squash) or replay instructions while being executed. We address the scalability challenge in three ways: abstraction, decomposition, and induction. 
First, we state our inter-instruction dependency correctness criteria in terms of read and write operations without making reference to data values. Consequently, our correctness criteria can be verified for implementations with abstract datapaths. Second, we decompose our correctness criteria into a set of smaller obligations that are easier to verify. All these obligations can be expressed as properties within the Syntactically-Safe fragment of Linear Temporal Logic (SSLTL). Third, we introduce a technique to verify SSLTL properties by induction, and prove its soundness and completeness. To demonstrate our overall strategy, we verified a term-level model of an out-of-order speculative processor. The processor model implements register renaming using a P6-style reorder buffer and branch prediction with a hybrid (discard-replay) recovery mechanism. The verification obligations (expressed in SSLTL) are checked using a tool implementing our inductive technique. Our tool, named Tahrir, is built on top of a generic interface to SMT solvers and can be generally used for verifying SSLTL properties about infinite-state systems.
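The inductive verification idea can be illustrated in miniature. The sketch below checks a plain safety invariant by 1-induction, with exhaustive enumeration over a small finite state space standing in for the SMT queries a tool like the one described would discharge; it is an illustration of the proof principle, not of the thesis's SSLTL technique itself, and all names are my own.

```python
def prove_invariant_by_induction(states, init, step, prop):
    """Discharge a safety property by 1-induction.

    Base case: prop holds in every initial state.
    Inductive step: from any state satisfying prop, every successor
    satisfies prop. If both hold, prop holds on all reachable states.
    """
    base = all(prop(s) for s in states if init(s))
    inductive = all(prop(t) for s in states if prop(s) for t in step(s))
    return base and inductive
```

For a mod-4 counter starting at 0, the invariant "state < 4" is inductive, while "state < 3" fails the inductive step (state 2 has successor 3), mirroring how an induction-based checker reports which obligation breaks.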
157

Information flow security - models, verification and schedulers

Zhang, Chenyi, Computer Science & Engineering, Faculty of Engineering, UNSW January 2009 (has links)
Information flow security concerns how to protect sensitive data in computer systems by avoiding undesirable flow of information between the users of the systems. This thesis studies information flow security properties in state-based systems, dealing in particular with modelling and verification methods for asynchronous systems and synchronous systems with schedulers. The aim of this study is to provide a foundational guide to ensure confidentiality in system design and verification. The thesis begins with a study of definitions of security properties in asynchronous models. Two classes of security notions are of particular interest. Trace-based properties disallow deductions of high security level secrets from low level observation traces. Bisimulation-based properties express security as a low-level observational equivalence relation on states. In the literature, several distinct schools have developed frameworks for information flow security properties based on different semantic domains. One of the major contributions of the thesis is a systematic study that compares security notions, using semantic mappings between two state-based models and a particular process algebraic model. An advantage of state-based models is the availability of well-developed verification methods and tools for functional properties in finite state systems. The thesis investigates the application of these methods to the algorithmic verification of the information flow security properties in the asynchronous settings. The complexity bounds for verifying these security properties are given as polynomial time for the bisimulation-based properties and polynomial space complete for the trace-based properties. Two heuristics are presented to benefit the verifications of the properties in practice. Timing channels are one of the major concerns in the computer security community, but are not captured in asynchronous models. 
In the final part of the thesis, a new system model is defined that deals with timing and scheduling. A group of novel security notions, including both trace-based and bisimulation-based properties, are proposed in this new model. It is further investigated whether these security properties are preserved by refinement of schedulers and scheduler implementations. A case study of a multi-level secure file server is described, which applies a number of access control rules to enforce a particular bisimulation-based property in the synchronous setting.
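A trace-based property of the kind discussed above can be made concrete with a bounded noninterference check in the Goguen–Meseguer style: low-level observations must be unchanged when high-level actions are purged from a trace. This is an illustrative rendering under my own naming, not the thesis's exact state-based definitions.

```python
from itertools import product

def noninterference_upto(delta, actions, high, obs, s0, k):
    """Bounded trace-based noninterference check.

    For every action sequence of length <= k, the low observer's view of
    the final state must equal the view after purging all high-security
    actions. `delta(state, action) -> state` is a total transition
    function. Returns a witness trace of insecure flow, or None.
    """
    def run(seq):
        s = s0
        for a in seq:
            s = delta(s, a)
        return obs(s)

    for n in range(k + 1):
        for seq in product(actions, repeat=n):
            purged = tuple(a for a in seq if a not in high)
            if run(seq) != run(purged):
                return seq          # witness: low view depends on high actions
    return None                     # secure up to the bound
```

A system whose state only counts low actions passes; a system where a high action flips a low-visible bit yields the one-step witness trace.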
158

Provably correct on-chip communication: a formal approach to automatic synthesis of SoC protocol converters

Avnit, Karin, Computer Science & Engineering, Faculty of Engineering, UNSW January 2010 (has links)
The field of chip design is characterized by contradictory pressures to reduce time-to-market and maintain a high level of reliability. As a result, module reuse has become common practice in chip design. To save time on both design and verification, Systems-on-Chips (SoCs) are composed using pre-designed and pre-verified modules. The integrated modules are often designed by different groups and for different purposes, and are later integrated into a single chip. In the absence of a single interface standard for such modules, "plug-n-play" style integration is not likely, as the subject modules are often designed to comply with different interface protocols. For such modules to communicate correctly there is a need for some glue logic, also called a protocol converter, that mediates between them. Though much research has been dedicated to the protocol converter synthesis problem in SoC communication, converter synthesis is still performed manually, consuming development and verification time and risking human error. Current approaches to automatic synthesis of protocol converters mostly lack formal foundations and either employ abstractions far removed from the Hardware Description Language (HDL) implementation level or grossly simplify the structure of the protocols considered. This thesis develops and presents techniques for automatic synthesis of provably correct on-chip protocol converters. Basing the solution on a formal approach, a novel state-machine based formalism is presented for modelling bus-based protocols and formalizing the notions of protocol compatibility and correct protocol conversion. Algorithms for automatic compatibility checking and provably-correct converter synthesis are derived from the formalism, including a systematic exploration of the design space of the protocol converter, the first in the field, which enables generation of various alternative deterministic converters. 
The work presented is unique in its combination of a completely formal approach and the use of a low abstraction level that enables precise modelling of protocol characteristics and automatic translation of the constructed converter to HDL.
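The compatibility-checking side of the formalism can be illustrated with a toy model: each protocol is a state machine whose transitions either emit ('!') or accept ('?') a message, and two protocols are incompatible if, in some reachable joint state, one can emit a message the other cannot accept. This is a simplified stand-in (no data buses, no clock-level timing) with hypothetical names, not the thesis's formal compatibility relation.

```python
from collections import deque

def compatible(p, q, start):
    """Explore the joint state space of two protocol state machines.

    Protocols map state -> [(message, '!' or '?', next_state)].
    Returns the first (p_state, q_state, message) where an emitted
    message has no matching acceptor, or None if no mismatch is reachable.
    """
    seen = {start}
    frontier = deque([start])
    while frontier:
        sp, sq = frontier.popleft()
        for sender, receiver, here in ((p, q, 'p'), (q, p, 'q')):
            s_snd, s_rcv = (sp, sq) if here == 'p' else (sq, sp)
            for msg, d, nxt in sender.get(s_snd, []):
                if d != '!':
                    continue
                matches = [n2 for m2, d2, n2 in receiver.get(s_rcv, [])
                           if d2 == '?' and m2 == msg]
                if not matches:
                    return (sp, sq, msg)     # incompatibility witness
                for n2 in matches:
                    joint = (nxt, n2) if here == 'p' else (n2, nxt)
                    if joint not in seen:
                        seen.add(joint)
                        frontier.append(joint)
    return None
```

A req/ack handshake against a matching peer explores the joint space and reports no mismatch; against a peer expecting a differently named message, the check returns the exact joint state and offending message, which is the information a converter synthesizer would then mediate.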
160

Applying Formal Methods to Software Testing

Stocks, Philip Alan Unknown Date (has links)
This thesis examines applying formal methods to software testing. Software testing is a critical phase of the software life-cycle which can be very effective if performed rigorously. Formal specifications offer the bases for rigorous testing practices. Not surprisingly, the most immediate use of formal specifications in software testing is as sources of black-box test suites. However, formal specifications have more uses in software testing than merely being sources for test data. We examine these uses, and show how to get more assistance and benefit from formal methods in software testing. At the core of this work is a flexible framework in which to conduct specification-based testing. The framework is founded on formal definitions of tests and test suites, which directly addresses important issues in managing software testing. This provides a uniform platform for other applications of formal methods to testing such as analysis and reification of tests, and also for applications beyond testing such as maintenance and specification validation. The framework has to be flexible so that any testing strategies can be used. We examine the need to adapt certain strategies to work with the framework and formal specification. Our experiments showed some deficiencies that arise when using derivation strategies on abstract specifications. These deficiencies led us to develop two new specification-based testing strategies based on extensions to existing strategies. We demonstrate the framework, strategies, and other applications of formal methods to software testing using three case studies. In each of these, the framework was easy to use. It provided an elegant and powerful means for defining and structuring tests, and a suitable staging ground for other applications of formal methods to software testing. 
This thesis demonstrates how formal specification techniques can systematise the application of testing strategies, and also how the concepts of software testing can be combined with formal specifications to extend the role of the formal specification in software development.
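The essence of specification-based testing can be sketched as follows: treat the specification as a pair of predicates and tests as formal objects checked against it. This is a toy rendering under my own names; the thesis's framework works with richer formal specifications than bare Python predicates.

```python
def check_against_spec(impl, pre, post, inputs):
    """Run specification-based tests against an implementation.

    For each input satisfying the precondition `pre`, the output of
    `impl` must satisfy the postcondition `post(input, output)`.
    Returns the list of (input, output) counterexamples.
    """
    failures = []
    for x in inputs:
        if not pre(x):
            continue                 # outside the specification's domain
        y = impl(x)
        if not post(x, y):
            failures.append((x, y))
    return failures
```

For instance, with the integer square root specification (pre: x >= 0; post: y*y <= x < (y+1)**2), a test suite built from the spec's natural partitions (zero, perfect squares, values between squares) either passes cleanly or yields concrete counterexamples.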
