291

System Level Techniques for Verification and Synchronization after Local Design Refinements

Raudvere, Tarvo January 2007 (has links)
Today's advanced digital devices are enormously complex and incorporate many functions. In order to capture the system functionality and to analyze the needs of a final implementation more efficiently, the entry point of the system development process is pushed to a higher level of abstraction. System level design methodologies describe the initial system model without considering lower level implementation details, and the objective of the design development process is to introduce those lower level details through design refinement. In practice this kind of refinement process may entail non-semantic-preserving changes in the system description and introduce new behaviors in the system functionality. Despite these new behaviors, the model produced by the refinement may still satisfy the design constraints and realize the expected system. Due to the size of the involved models and the huge abstraction gap, direct verification of a detailed implementation model against the abstract system model is practically impossible. However, the verification task can be considerably simplified if each refinement step and its local implications are verified separately. One main idea of the Formal System Design (ForSyDe) methodology is to break the design process into smaller refinement steps that can be individually understood, analyzed and verified. The topic of this thesis is the verification of refinement steps in ForSyDe and similar methodologies. It proposes verification attributes attached to each non-semantic-preserving transformation. The attributes include critical properties that have to be preserved by the transformation. Verification properties are defined as temporal logic expressions, and the actual verification is done with the SMV model checker; mapping rules from ForSyDe models to the SMV language are provided. In addition to properties, the verification attributes include abstraction techniques that reduce the size of the models and make verification tractable. For computation refinements, the author defines the polynomial abstraction technique, which addresses the verification of DSP applications at a high abstraction level. Due to the size of the models, the predefined properties target only the local correctness of refined design blocks; their global influence has to be examined separately. In order to compensate for the influence of temporal refinements, the thesis provides two novel synchronization techniques. The proposed verification and synchronization techniques have been applied to relevant applications in the computation area and to communication protocols.
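The checks described above are temporal-logic properties discharged by the SMV model checker. The minimal Python sketch below is only meant to convey the flavour of verifying a safety property of a small refined block by exhaustive state exploration; the toy counter model and its invariant are assumptions made for illustration, not taken from the thesis.

```python
from collections import deque

def check_safety(initial_states, transitions, invariant):
    """Explicit-state check: explore every reachable state and report the
    first one that violates the safety invariant, if any."""
    seen = set(initial_states)
    queue = deque(initial_states)
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return False, state              # counterexample state
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True, None

# Toy "refined block": a mod-4 counter with an enable input (hypothetical).
def transitions(state):
    count, enabled = state
    nxt_count = (count + 1) % 4 if enabled else count
    return {(nxt_count, e) for e in (True, False)}

holds, witness = check_safety({(0, True)}, transitions, lambda s: 0 <= s[0] < 4)
print("property holds" if holds else f"violated in state {witness}")
```

In the thesis the corresponding properties are written in temporal logic and checked by SMV over the refined ForSyDe blocks rather than by a hand-written search.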
292

Examining Methods and Practices of Source Data Verification in Canadian Critical Care Randomized Controlled Trials

Ward, Roxanne E. 21 March 2013 (has links)
Statement of the Problem: Source data verification (SDV) is the process of comparing data collected at the source to data recorded on a Case Report Form, either paper or electronic (1), to ensure that the data are complete, accurate and verifiable. Good Clinical Practice (GCP) guidelines are vague and lack evidence as to the degree of SDV required and whether or not SDV affects study outcomes. Methods of Investigation: We performed systematic reviews to establish the published evidence base for methods of SDV and to examine the effect of SDV on study outcomes. We then conducted a national survey of Canadian Critical Care investigators and research coordinators regarding their attitudes and beliefs about SDV. This was followed by an audit of the completed and in-progress Randomized Controlled Trials (RCTs) of the Canadian Critical Care Trials Group (CCCTG). Results: Systematic Review of Methods of SDV: The most commonly reported or recommended frequency of source data verification (10/14; 71%) was either risk-based or early (i.e., after the first patient is enrolled). The amount of SDV recommended or reported varied from 5% to 100%. Systematic Review of Impact of SDV on Study Outcomes: There was no difference in study outcomes for one trial, and the effect could not be assessed in the other. National Survey of Critical Care Investigators and Research Coordinators: The survey found that 95.8% (115/120) of respondents believed that SDV was an important part of quality assurance; 73.3% (88/120) felt that academic studies should do more SDV; and 62.5% (75/120) felt that there is insufficient funding available for SDV. Audit of Source Data Verification Practices in CCCTG RCTs: In the national audit of in-progress and completed CCCTG RCTs, 9/15 (60%) included a plan for SDV and 8/15 (53%) actually conducted SDV. Of the 9 completed published trials, 44% (4/9) conducted SDV. Conclusion: There is little published evidence on the methods of SDV or its effect on study outcomes. Based on the results of the systematic review, survey, and audit, more research is needed to build the evidence base for the methods and effect of SDV on study outcomes.
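At its core, SDV is a record-by-record comparison of CRF entries against the source document. The sketch below illustrates such a comparison; the field names and records are hypothetical and the snippet is illustrative only, not a tool used in the study.

```python
def verify_against_source(crf_record, source_record, fields):
    """Compare selected Case Report Form fields against the source document and
    return the discrepancies as (field, CRF value, source value) tuples."""
    return [(f, crf_record.get(f), source_record.get(f))
            for f in fields
            if crf_record.get(f) != source_record.get(f)]

# Hypothetical records; the field names are illustrative only.
crf = {"age": 61, "apache_ii": 24, "icu_admit_date": "2012-03-04"}
source = {"age": 61, "apache_ii": 22, "icu_admit_date": "2012-03-04"}

fields = ["age", "apache_ii", "icu_admit_date"]
discrepancies = verify_against_source(crf, source, fields)
print(discrepancies)                                                # [('apache_ii', 24, 22)]
print(f"discrepancy rate: {len(discrepancies) / len(fields):.0%}")  # 33%
```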
293

Auditing Fair Value Measurements and Disclosures: A Case of the Big 4 Audit Firms

Ahmed, Kemal January 2013 (has links)
Problem: In today's business environment, rising demands in financial reporting and frequent changes in accounting frameworks lead to an increased focus on reliability in Fair Value Measurement (FVM) and disclosures. The frequent changes in accounting frameworks make measuring accounting estimates accurately an exceedingly difficult task for managers, and an equally difficult task falls to the auditors: how should auditors endorse and ensure the reliability and relevance of financial statements, and how can they evaluate the accuracy of the measurement of fair values as presented in the financial statements? (IFAC, 2011, ISA 540). Purpose: The purpose of this thesis is to explore the methods and approaches used by auditors while auditing fair values from a practical perspective. Method: A multiple case study with purely qualitative methods and an inductive approach has been adopted; data were collected through semi-structured interviews. Result: The results show that by understanding the challenges and following the phases of auditing, auditors can maintain the quality of financial reporting. Four key audit phases are relevant to auditing FVM: understanding the client's business environment, engagement, internal control, and planning. Furthermore, the results revealed key challenges of auditing FVM and disclosures: insufficient information in the market (reliability), competence, auditors' lack of exposure to fair value audits, and the manager's leadership role and style. Moreover, as previous studies on FV have primarily relied on syntheses of academic literature, the thesis contributes knowledge to academia by using an empirical approach.
294

Application of a Bayesian network to integrated circuit tester diagnosis

Mittelstadt, Daniel Richard 06 December 1993 (has links)
This thesis describes research to implement a Bayesian belief network based expert system to solve a real-world diagnostic problem: troubleshooting integrated circuit (IC) testing machines. Several models of the IC tester diagnostic problem were developed as belief networks, and one of these models was implemented using Symbolic Probabilistic Inference (SPI). The difficulties and advantages encountered in the process are described in this thesis. It was observed that modelling with interdependencies in belief networks simplified the knowledge engineering task for the IC tester diagnosis problem by avoiding procedural knowledge and capturing only the interdependencies among diagnostic components. Several general model frameworks evolved through knowledge engineering to capture diagnostic expertise, which facilitated expanding and modifying the networks. However, implementation was restricted to a small portion of the model (contact resistance failures), because the probability distributions could not be evaluated fast enough to expand the code to a complete model with real-time diagnosis. Further research is recommended to create new methods, or refine existing methods, to speed up evaluation of the models created in this research. If this can be done, more complete diagnosis can be achieved. / Graduation date: 1994
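As a rough illustration of how a belief network supports this kind of diagnosis (not the thesis's SPI implementation or its contact-resistance model), the sketch below computes the posterior probability of a fault from an observed test failure in a two-node network by direct enumeration; all probabilities are invented.

```python
# Minimal two-node belief network: Fault -> TestFails.
# Posterior P(Fault | TestFails=True) by direct enumeration (Bayes' rule).
# All numbers are illustrative, not taken from the thesis.

p_fault = 0.05                              # prior P(Fault=True)
p_fail_given = {True: 0.90, False: 0.10}    # P(TestFails=True | Fault)

def posterior_fault_given_failure():
    joint_fault = p_fault * p_fail_given[True]            # P(Fault, TestFails)
    joint_no_fault = (1 - p_fault) * p_fail_given[False]  # P(~Fault, TestFails)
    return joint_fault / (joint_fault + joint_no_fault)

print(f"P(Fault | TestFails) = {posterior_fault_given_failure():.3f}")  # 0.321
```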
295

Modelling the impact of total stress changes on groundwater flow

Dissanayake, Nalinda 29 April 2008
The research study involved using the modified FEMWATER code to investigate the impact of total stress changes on groundwater flow in the vicinity of a salt tailings pile. Total stress and pore-pressure data observed at the Lanigan and Rocanville potash-mine sites were used to assist the development of a generic FEMWATER model. The original 3-D mesh considered for the model study covers a region of 7.6 km x 7.6 km x 60 m. The simulated pile itself covers a surface area of 1.6 km x 1.6 km within the region. Symmetry of the idealized system allowed half of the system to be modelled, reducing the size of the mesh. The model was layered to facilitate different materials representing different hydrostratigraphic scenarios. The GMS release of the FEMWATER code (version 2.1) was modified to simulate the pore-pressure response to total stress changes caused by tailings-pile loading at the ground surface. The modified code was verified before being applied to the present study.

Long-term pore pressure generation and dissipation due to pile construction was investigated for eleven hydrostratigraphic scenarios consisting of plastic clays, stiff till and dense sand layers commonly found in Saskatchewan potash mining regions. The model was run for two distinct pile loading patterns. Model results indicated that the loading pattern has a significant influence on pore pressure generation beneath the pile. The model was initially run for a 30-year pile construction period and later simulated for 15, 25 and 35 year construction periods to investigate the impact of loading rate. These results showed that, as expected, the peak pore water pressure head is proportional to the pile construction rate. A sensitivity analysis, carried out by varying the hydraulic conductivity of the stiff till, revealed that the lower the hydraulic conductivity, the greater the pore pressure generation beneath the pile.

Overall, the research study helped to understand and predict the influence of pile construction and hydrostratigraphy on pore-pressure changes beneath salt tailings piles. Low K/Ss or cv materials (compressible tills) exhibit a slow dissipation rate and high excess pressures. Compared to dense sand, which has a very high K/Ss, till has a very low K/Ss, which results in high excess pore pressure generation. Sand layers act as drains, rapidly dissipating pore pressures. Thicker low-K/Ss units result in slower dissipation and higher pressures: as the thickness of the low-K/Ss layer increases, the peak pressures increase because the drainage path lengthens. Thin plastic clay layers give rise to the highest pressures.

The model study showed that hydrostratigraphic scenarios similar to those found at Saskatchewan potash mine sites can generate the high pore pressures observed in the vicinity of salt tailings piles as a result of pile loading. Peak pressures are very sensitive to pile construction rates, loading patterns and the hydrostratigraphy of the region. Peak pressures can reach levels that would be of concern for pile stability in the presence of adverse geological conditions.
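The role of K/Ss in these results follows from the standard one-dimensional groundwater diffusion relation, in which the hydraulic diffusivity K/Ss controls how quickly excess pressure head dissipates over a drainage path of length L; this is textbook theory stated here only to clarify the K/Ss discussion, not an equation taken from the thesis:

\[
\frac{\partial h}{\partial t} = \frac{K}{S_s}\,\frac{\partial^2 h}{\partial z^2},
\qquad
t_{\text{dissipation}} \sim \frac{L^2 S_s}{K},
\]

so a low K/Ss (compressible till, plastic clay) or a longer drainage path means slower dissipation and higher peak excess pressures, while high-K/Ss sand layers drain quickly, consistent with the scenario results above.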
296

Robust Consistency Checking for Modern Filesystems

Sun, Kuei 19 March 2013 (has links)
A runtime file system checker protects file-system metadata integrity. It checks the consistency of file system update operations before they are committed to disk, thus preventing corrupted updates from reaching the disk. In this thesis, we describe our experiences with building Brunch, a runtime checker for an emerging Linux file system called Btrfs. Btrfs supports many modern file-system features that pose challenges in designing a robust checker. We find that the runtime consistency checks need to be expressed clearly so that they can be reasoned about and implemented reliably, and thus we propose writing the checks declaratively. This approach reduces the complexity of the checks, ensures their independence, and helps identify the correct abstractions in the checker. We also show how the checker can be designed to handle arbitrary file system corruption. Our results show that runtime consistency checking is still viable for complex, modern file systems.
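To give a rough sense of the declarative style advocated here, each consistency check can be written as an independent predicate over the update being committed. The rules and the simplified in-memory view below are hypothetical, not Brunch's actual checks or Btrfs's real on-disk structures.

```python
# Consistency checks written declaratively over a simplified, hypothetical
# in-memory view of a file-system update: each check is an independent
# predicate with no ordering between them (illustrative only).

def blocks_referenced_exist(update):
    """Every block referenced by an inode must appear in the block map."""
    return all(block in update["block_map"]
               for inode in update["inodes"].values()
               for block in inode["blocks"])

def link_counts_match(update):
    """Each inode's link count must equal the number of directory entries
    that point at it."""
    refs = {}
    for _name, ino in update["dir_entries"].items():
        refs[ino] = refs.get(ino, 0) + 1
    return all(inode["nlink"] == refs.get(ino, 0)
               for ino, inode in update["inodes"].items())

CHECKS = [blocks_referenced_exist, link_counts_match]

def consistent(update):
    """An update may be committed only if every declarative check passes."""
    return all(check(update) for check in CHECKS)

update = {
    "block_map": {100, 101},
    "inodes": {1: {"blocks": [100], "nlink": 1}, 2: {"blocks": [101], "nlink": 1}},
    "dir_entries": {"a.txt": 1, "b.txt": 2},
}
print(consistent(update))   # True for this toy update
```

Because each rule stands alone, checks can be reasoned about, added, or removed independently, which is the point of the declarative approach described in the abstract.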
297

Conditions Affecting the Relationship between Power and Identity Verification in Power Imbalanced Dyads

Davis, Jennifer 1983- 14 March 2013 (has links)
In the present study, I look at the relationship between power and identity verification and the conditions under which this relationship can be disrupted. Specifically, I look at the role of information in disrupting power differences within identity processes. I examine these processes through an experiment with task-oriented, power-imbalanced dyads (N=144). Priming participants with a task-leader identity, I test how the introduction of negotiation resources (information discrepant with and external to a high-power actor's self-presentation) affects presentation power (the degree to which an actor can maintain identity meanings in light of partner negotiations). In contrast with existing literature, I did not find a direct relationship between power and identity verification. I did, however, find that those in higher positions of power experience greater identity stability, while those in lower positions of power experience increased identity change. Interestingly, I found that identity change and identity verification varied with identity valence, such that those with dominant task-leader identity meanings experienced greater identity stability but less identity verification than their more submissive counterparts. These relationships, however, were power-dependent, such that differences disappeared among power-high actors and were magnified for power-low actors. Negotiation resources did not have a significant main effect, but showed a significant interaction with identity valence when predicting identity verification among power-low actors.
298

Safety Verification of Material Handling Systems Driven by Programmable Logic Controller: Consideration of Physical Behavior of Plants

OKUMA, Shigeru, SUZUKI, Tatsuya, KONAKA, Eiji 01 April 2004 (has links)
No description available.
299

Reasoning About Multi-stage Programs

Inoue, Jun 24 July 2013 (has links)
Multi-stage programming (MSP) is a style of writing program generators (programs which generate programs) supported by special annotations that direct the construction, combination, and execution of object programs. Various researchers have shown MSP to be effective in writing efficient programs without sacrificing genericity. However, correctness proofs of such programs have so far received limited attention, and approaches and challenges for that task have been largely unexplored. In this thesis, I establish formal equational properties of the multi-stage lambda calculus and related proof techniques, as well as results that delineate the intricacies of multi-stage languages that one must be aware of. In particular, I settle three basic questions that naturally arise when verifying multi-stage functional programs. Firstly, can adding staging to a language compromise the interchangeability of terms that held in the original language? Unfortunately it can, and more care is needed to reason about terms with free variables. Secondly, staging annotations, as the term "annotations" suggests, are often thought to be orthogonal to the behavior of a program, but when is this formally guaranteed to be the case? I give termination conditions that characterize when this guarantee holds. Finally, do multi-stage languages satisfy extensional facts, for example that functions agreeing on all arguments are equivalent? I develop a sound and complete notion of applicative bisimulation, which can establish not only extensionality but, in principle, any other valid program equivalence as well. These results improve our general understanding of staging and enable us to prove the correctness of complicated multi-stage programs.
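For readers unfamiliar with MSP, the sketch below shows the basic idea of a program generator in plain Python: the generator runs at an earlier stage and produces a specialized object program for a fixed exponent. This is only an illustration of staging by string generation; the thesis studies staging in a typed multi-stage lambda calculus with proper annotations, not this idiom.

```python
# A staged power function in plain Python: the generator runs at an earlier
# stage, unrolls the loop, and emits a specialized object program for a fixed
# exponent. Illustrative only; real MSP languages use typed staging
# annotations rather than string generation.

def gen_power(n):
    """Generate source code for a function that computes x**n by repeated
    multiplication, with the loop unrolled at generation time."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    return f"def power_{n}(x):\n    return {body}\n"

code = gen_power(5)
print(code)                       # the generated object program
namespace = {}
exec(code, namespace)             # "run" the object program
print(namespace["power_5"](2))    # 32
```

Questions like the three settled in the thesis ask, in a formal setting, when such generated programs can be reasoned about as if the staging machinery were not there.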
300

Validation of UML conceptual schemas with OCL constraints and operations

Queralt Calafat, Anna 02 March 2009 (has links)
To ensure the quality of an information system, it is essential that the conceptual schema that represents the knowledge about its domain and the functions it has to perform is semantically correct. The correctness of a conceptual schema can be seen from two different perspectives. On the one hand, from the point of view of its definition, determining the correctness of a conceptual schema consists in answering the question "Is the conceptual schema right?". This can be achieved by determining whether the schema fulfills certain properties, such as satisfiability, non-redundancy or operation executability. On the other hand, from the perspective of the requirements that the information system should satisfy, not only must the conceptual schema be right, it must also be the right one. To ensure this, the designer must be provided with some kind of help and guidance during the validation process, so that he is able to understand the exact meaning of the schema and see whether it corresponds to the requirements to be formalized. In this thesis we provide an approach that improves on the results of previous proposals addressing the validation of a UML conceptual schema with its constraints and operations formalized in OCL. Our approach allows the conceptual schema to be validated both from the point of view of its definition and of its correspondence to the requirements. The validation is performed by means of a set of tests that are applied to the schema, including automatically generated tests and ad-hoc tests defined by the designer. All the validation tests are formalized in such a way that they can be treated uniformly, regardless of the specific property they test. Our approach can be applied either to a complete conceptual schema or only to its structural part. When only the structural part is validated, we provide a set of conditions to determine whether any validation test performed on the schema will terminate. For those cases in which these termination conditions are satisfied, we also provide a reasoning procedure that takes advantage of this situation and works more efficiently than in the general case. This approach allows the validation of very expressive schemas and ensures completeness and decidability at the same time. To show the feasibility of our approach, we have implemented the complete validation process for the structural part of a conceptual schema. Additionally, for the validation of a conceptual schema with a behavioral part, the reasoning procedure has been implemented as an extension of an existing method.
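As a loose illustration of what such a validation test amounts to (the class, the OCL-style invariant and the sample population below are invented for illustration and are not taken from the thesis), one can instantiate the schema with a sample population and evaluate a constraint over it; a satisfying instantiation witnesses satisfiability, while a failing ad-hoc test signals a mismatch with the requirements:

```python
# A validation-style test: build a sample instantiation of a (hypothetical)
# conceptual schema and evaluate an OCL-like invariant over it. The class,
# the constraint and the population are invented for illustration.

class Employee:
    def __init__(self, name, salary, department):
        self.name = name
        self.salary = salary
        self.department = department

def inv_salary_below_manager(employees, managers):
    """OCL-style invariant: every employee earns less than the manager of
    their department."""
    return all(e.salary < managers[e.department].salary for e in employees)

managers = {"R&D": Employee("Alice", 9000, "R&D")}
employees = [Employee("Bob", 5000, "R&D"), Employee("Carol", 7000, "R&D")]

# A satisfying instantiation witnesses that the constraint is satisfiable;
# a failing ad-hoc test would flag a schema/requirements mismatch.
print(inv_salary_below_manager(employees, managers))   # True
```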
