  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
321

Automatic generation of instruction sequences for software-based self-test of processors and systems-on-a-chip

Gurumurthy, Sankaranarayanan 29 August 2008 (has links)
Not available / text
322

Generalization, lemma generation, and induction in ACL2

Erickson, John D., Ph. D. 29 August 2008 (has links)
Formal verification is becoming a critical tool for designing software and hardware today. Rising complexity, together with software's pervasiveness in the global economy, has made errors more difficult to find and more costly to fix. Among the formal verification tools available today, theorem provers offer the most complete verification of the most complex systems. However, theorem proving requires expert guidance and is typically too costly to be economical for all but the most mission-critical systems. Three major challenges to using a theorem prover are finding generalizations, choosing the right induction scheme, and generating lemmas. In this dissertation we study all three in the context of the ACL2 theorem prover. / text
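As a toy illustration of the lemma-generation challenge described in this abstract (in Python rather than ACL2 syntax, with our own function names and bounds): proving that reversing a list twice is the identity by induction typically gets stuck until an auxiliary lemma about reversing an appended element is available. Here an exhaustive check over small lists stands in for a proof of both facts.

```python
from itertools import product

def rev(xs):
    """Structurally recursive reverse, mirroring an ACL2-style definition."""
    return rev(xs[1:]) + [xs[0]] if xs else []

def small_lists(max_len=4, alphabet=(0, 1)):
    """All lists over `alphabet` with at most `max_len` elements."""
    yield []
    for n in range(1, max_len + 1):
        for tup in product(alphabet, repeat=n):
            yield list(tup)

# Main theorem: reversing twice is the identity.
theorem_ok = all(rev(rev(xs)) == xs for xs in small_lists())

# Auxiliary lemma a prover must discover for the inductive step:
#   rev(xs + [a]) == [a] + rev(xs)
lemma_ok = all(rev(xs + [a]) == [a] + rev(xs)
               for xs in small_lists() for a in (0, 1))

print(theorem_ok and lemma_ok)  # True
```

In ACL2 itself the analogous lemma must either be supplied by the user or generated automatically, which is precisely the problem the dissertation studies.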
323

Towards Intelligent Tumor Tracking and Setup Verification in Radiation Therapy For Lung Cancer

Xu, Qianyi January 2007 (has links)
Lung cancer is the deadliest cancer in the United States. Radiation therapy uses high-energy ionizing radiation to destroy lung tumor cells by damaging their genetic material, preventing those cells from reproducing. The most challenging aspect of modern radiation therapy for lung cancer is the motion of lung tumors caused by patient breathing during treatment. Most gating-based radiotherapy derives the tumor motion from external surrogates and generates a respiratory signal to trigger the beam. We propose a method that monitors internal diaphragm motion, which can provide a respiratory signal more highly correlated with lung tumor motion than the external surrogates. We also investigate direct tracking of the tumor in fluoroscopic video imagery. We tracked fixed tumor contours in fluoroscopic videos for 5 patients. The predominant tumor displacements are well tracked using optical flow. Some tumors or nearby anatomical features exhibit severe nonrigid deformation, especially in the supradiaphragmatic region. By combining Active Shape Models and the respiratory signal, the deformed contours are tracked within a range defined in the training period. All tracking results were validated by a human expert, and the proposed methods are promising for applications in radiotherapy. Another important aspect of lung patient treatment is patient setup verification, which is needed to reduce inter- and intra-fraction geometric uncertainties and ensure precise dose delivery. Currently, there is no universally accepted method for lung patient setup verification. We propose to register 4DCT and 2D x-ray images taken before treatment to derive the couch shifts necessary for precise radiotherapy. The proposed technique leads to improved patient care.
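The abstract's core claim is that an internal signal correlates better with tumor motion than an external surrogate. A minimal sketch of how such a comparison could be quantified, using entirely hypothetical synthetic breathing traces (the 4 s cycle, sampling rate, and phase lag are illustrative assumptions, not the study's data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical traces sampled at 10 Hz for 30 s, 4 s breathing cycle:
t = [k / 10.0 for k in range(300)]
tumor     = [math.sin(2 * math.pi * tt / 4.0) for tt in t]
diaphragm = [math.sin(2 * math.pi * tt / 4.0) for tt in t]        # in phase with tumor
external  = [math.sin(2 * math.pi * tt / 4.0 - 0.6) for tt in t]  # phase-lagged surrogate

# The internal signal tracks the tumor more closely than the lagged surrogate.
print(pearson(tumor, diaphragm) > pearson(tumor, external))  # True
```

A phase-lagged surrogate loses roughly a factor of cos(lag) in correlation for sinusoidal motion, which is why gating on an internal signal can trigger the beam more accurately.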
324

Separation logic : expressiveness, complexity, temporal extension

Brochenin, Rémi 25 September 2013 (has links) (PDF)
This thesis studies logics that express properties of programs. These logics were originally intended for the formal verification of programs with pointers. Overall, no automated verification method is proved tractable here; rather, we give new insight into separation logic. The complexity and decidability of some essential fragments of this logic for Hoare triples were not known before this work, and its combination with other verification methods had been little studied. First, we isolate the operator of separation logic that makes it undecidable, and we describe the expressive power of the logic by comparing it to second-order logics. Second, we attempt to extend decidable subsets of separation logic with a temporal logic and with the ability to describe data. This allows us to delineate the limits of separation logic; in particular, we establish boundaries on the construction of decidable logics that combine separation logic with a temporal logic or with the ability to describe data.
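For readers unfamiliar with separation logic, a minimal executable sketch of its central connective may help: the separating conjunction P * Q holds of a heap exactly when the heap splits into two disjoint parts satisfying P and Q respectively. The brute-force model below is our own illustration (heaps as Python dicts, "strict" points-to assertions), not the thesis's formalism.

```python
from itertools import combinations

def sep_conj(p, q, heap):
    """h satisfies P * Q iff h splits into disjoint sub-heaps h1, h2
    with h1 satisfying P and h2 satisfying Q."""
    cells = list(heap.items())
    for r in range(len(cells) + 1):
        for part in combinations(cells, r):
            h1 = dict(part)
            h2 = {k: v for k, v in heap.items() if k not in h1}
            if p(h1) and q(h2):
                return True
    return False

def points_to(addr, val):
    """Strict points-to: the heap is exactly the single cell addr -> val."""
    return lambda h: h == {addr: val}

h = {1: 7, 2: 9}
print(sep_conj(points_to(1, 7), points_to(2, 9), h))  # True: a disjoint split exists
print(sep_conj(points_to(1, 7), points_to(1, 7), h))  # False: cell 1 cannot be shared
```

The disjointness requirement is what lets separation logic reason locally about pointer programs; the expressive power studied in the thesis comes from richer connectives layered on top of this one.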
325

Exploiting structure for scalable software verification

Babić, Domagoj 11 1900 (has links)
Software bugs are expensive. Recent estimates by the US National Institute of Standards and Technology put the cost of software bugs to the US economy alone at approximately 60 billion USD annually. As society becomes increasingly software-dependent, bugs also reduce our productivity and threaten our safety and security. Decreasing these direct and indirect costs represents a significant research challenge as well as an opportunity for businesses. Automatic software bug-finding and verification tools have the potential to revolutionize the software engineering industry by improving reliability and decreasing development costs. Since software analysis is in general undecidable, automatic tools have to use various abstractions to make the analysis computationally tractable. Abstraction is a double-edged sword: coarse abstractions generally yield easier verification, but also less precise results. This thesis focuses on exploiting the structure of software to abstract away irrelevant behavior. Programmers tend to organize code into objects and functions, which effectively represent natural abstraction boundaries. Humans use such structural abstractions to simplify their mental models of software and to construct informal explanations of why a piece of code should work. A natural question to ask is: how can automatic bug-finding tools exploit the same natural abstractions? This thesis offers possible answers. More specifically, I present three novel ways to exploit structure at three different steps of the software analysis process. First, I show how symbolic execution can preserve the data-flow dependencies of the original code while constructing compact symbolic representations of programs. Second, I propose structural abstraction, which exploits the structure preserved by symbolic execution. Structural abstraction solves a long-standing open problem --- scalable interprocedural path- and context-sensitive program analysis.
Finally, I present an automatic tuning approach that exploits the fine-grained structural properties of software (namely, data- and control-dependency) for faster property checking. This novel approach resulted in a 500-fold speedup over the best previous techniques. Automatic tuning not only redefined the limits of automatic software analysis tools, but also has already found its way into other domains (like model checking), demonstrating the generality and applicability of this idea.
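One common way to build the "compact symbolic representations" the abstract mentions is hash-consing: structurally identical subexpressions are created once and shared, so symbolic state grows as a DAG rather than a tree. The sketch below is our own minimal illustration of that general technique, not the dissertation's actual implementation.

```python
class ExprDag:
    """Hash-consed expression pool: structurally identical subterms are
    built once and shared, keeping symbolic representations compact."""

    def __init__(self):
        self._table = {}  # (op, child ids) -> node id

    def node(self, op, *children):
        key = (op, children)
        if key not in self._table:
            self._table[key] = len(self._table)
        return self._table[key]

    def size(self):
        return len(self._table)

dag = ExprDag()
x = dag.node("var", "x")
y = dag.node("+", x, x)   # symbolic value of: y = x + x
z = dag.node("*", y, y)   # symbolic value of: z = y * y
assert dag.node("+", x, x) == y   # re-deriving x + x reuses the shared node

# The DAG holds 3 unique nodes, whereas the fully expanded expression tree
# for z = (x + x) * (x + x) would hold 7.
print(dag.size())  # 3
```

Because equal subterms get equal node ids, later analysis passes can cache results per node, which is one ingredient that makes interprocedural analysis scale.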
326

Polynomial Functions over Rings of Residue Classes of Integers

Meredith, M Brandon 06 August 2007 (has links)
In this thesis we discuss how to find equivalent representations of polynomial functions over the ring of integers modulo a power of a prime. Specifically, we look for lower degree representations and representations with fewer variables for which important applications in electrical and computer engineering exist. We present several algorithms for finding these compact formulations.
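The notion of "equivalent representations" here is functional equality: two polynomials may differ as formal expressions yet induce the same function on Z/mZ. A brute-force check makes this concrete; for example, over Z/4Z the degree-4 polynomial x^4 induces the same function as the degree-2 polynomial x^2, so x^4 admits a lower-degree representation. (The checker below is our own illustrative sketch, not an algorithm from the thesis.)

```python
def same_function(p, q, m):
    """Do polynomials p and q (coefficient lists, lowest degree first)
    induce the same function on Z/mZ?"""
    def ev(poly, x):
        return sum(c * pow(x, i, m) for i, c in enumerate(poly)) % m
    return all(ev(p, x) == ev(q, x) for x in range(m))

x4 = [0, 0, 0, 0, 1]   # x^4
x2 = [0, 0, 1]         # x^2
x1 = [0, 1]            # x

print(same_function(x4, x2, 4))  # True: x^4 and x^2 agree everywhere mod 4
print(same_function(x4, x1, 4))  # False: they differ at x = 2 (0 vs 2)
```

Such collapses happen because pow and multiplication lose information modulo prime powers; finding the minimal-degree representative systematically is what the thesis's algorithms address.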
327

Scalably Verifiable Cache Coherence

Zhang, Meng January 2013 (has links)
The correctness of a cache coherence protocol is crucial, since a subtle bug in the protocol may lead to disastrous consequences. However, verifying a cache coherence protocol is never easy, due to the complexity of the protocol. Moreover, as more and more cores are packed into a single chip, cache coherence protocols are under pressure to deliver higher performance, lower power consumption, and less storage overhead. Designers perform various optimizations to meet these goals, which, unfortunately, further exacerbate the verification problem. Currently there is no efficient and universal method for verifying a realistic cache coherence protocol for a many-core system.

We, as architects, believe we can alleviate the verification problem by changing the traditional design paradigm. We suggest taking verifiability as a first-class design constraint, just as we do with traditional metrics such as performance, power consumption, and area overhead. To do this, we need to incorporate verification effort into the early design stage of a cache coherence protocol and make wise design decisions regarding verifiability. Such a protocol will be amenable to verification and easier to verify at a later stage. Specifically, we propose two methods in this thesis for designing scalably verifiable cache coherence protocols.

The first method is Fractal Coherence, targeting verifiable hierarchical protocols. Fractal Coherence leverages the fractal idea to design a cache coherence protocol: the self-similarity of the fractal enables inductive verification of the protocol. Such a verification process is independent of the number of nodes and is thus scalable. We also design example protocols to show that Fractal Coherence protocols can attain performance comparable to a traditional snooping or directory protocol.

As a system scales hierarchically, Fractal Coherence solves the verification problem of the implemented cache coherence protocol. However, Fractal Coherence cannot help if the system scales horizontally. We therefore propose a second method, PVCoherence, targeting verifiable flat protocols. PVCoherence is based on parametric verification, a widely used method for verifying the coherence of a flat protocol with an unbounded number of nodes. PVCoherence captures the fundamental requirements and limitations of parametric verification and proposes a set of guidelines for designing cache coherence protocols that are compatible with parametric verification. As long as designers follow these guidelines, their protocols can be easily verified.

We further show that Fractal Coherence and PVCoherence can also facilitate the verification of memory consistency, another extremely challenging problem. Previous work proves that the verification of memory consistency can be decomposed into three steps, of which the most complex and least scalable is the verification of the cache coherence protocol. If we design the protocol following the methodology of Fractal Coherence or PVCoherence, we can easily verify the cache coherence protocol and thus overcome the biggest obstacle in the verification of memory consistency.

As systems expand and cache coherence protocols grow more complex, the verification problem becomes more prominent. We believe it is time to reconsider the traditional design flow, in which verification is entirely separated from the design stage. We show that by incorporating verifiability in the early design stage and designing protocols to be scalably verifiable in the first place, we can greatly reduce the burden of verification. Meanwhile, our experiments show that we lose no benefits in performance or other metrics while obtaining the correctness guarantee. / Dissertation
328

Dynamic Adaptive Multimesh Refinement for Coupled Physics Equations Applicable to Nuclear Engineering

Dugan, Kevin 16 December 2013 (has links)
The processes studied by nuclear engineers generally involve coupled physics phenomena (thermal-hydraulics, neutronics, material mechanics, etc.), and modeling such multiphysics processes numerically can be computationally intensive. One way to reduce the computational burden is to use spatial meshes that are optimally suited to a specific solution; such meshes are obtained through a process known as Adaptive Mesh Refinement (AMR). AMR can be especially useful for modeling multiphysics phenomena because it allows each solution component to be computed on an independent mesh (multimesh AMR). Using AMR on time-dependent problems requires the spatial mesh to change as the solution evolves in time. Current algorithms in the literature address this by adapting the spatial mesh at every time step, which can be inefficient. This thesis proposes an algorithm that saves computational resources by using a spatially adapted mesh for multiple time steps and re-adapting only when the solution has changed significantly. It explores the mechanisms used to determine when and where to adapt spatially for time-dependent, coupled physics problems. The algorithm is implemented in 2D and 3D using the deal.II finite element library [1, 2] and is tested on a coupled neutronics and heat conduction problem in 2D. The algorithm is shown to perform better than a uniformly refined static mesh and, in some cases, than a mesh that is spatially adapted at every time step.
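The key idea above, re-adapting the mesh only when the solution has drifted significantly, can be sketched as a lazy trigger in a time loop. The snippet below is our own toy version (the relative-L2 criterion, tolerance, and synthetic "solve" are illustrative assumptions, not the thesis's actual indicator or deal.II code):

```python
import math

def needs_adapt(prev, curr, tol=0.1):
    """Relative L2 change of the solution since the mesh was last adapted."""
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(prev, curr)))
    den = math.sqrt(sum(b * b for b in curr)) or 1.0
    return num / den > tol

mesh_solution = [1.0] * 8      # solution snapshot at the last adaptation
solution = list(mesh_solution)
adapt_steps = []
for step in range(1, 21):
    # Stand-in for a PDE solve: sharp transients at steps 5 and 13,
    # negligible drift otherwise.
    bump = 1.0 if step in (5, 13) else 0.001
    solution = [s + bump for s in solution]
    if needs_adapt(mesh_solution, solution):
        adapt_steps.append(step)           # adapt_mesh() would run here
        mesh_solution = list(solution)

print(adapt_steps)  # [5, 13] -- the mesh is re-adapted only near the transients
```

In a real multimesh AMR code each physics component would carry its own snapshot and trigger, so a fast-changing field (e.g. neutron flux) can re-adapt while a slow one (e.g. temperature) keeps its mesh.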
329

A framework for semantically verifying schema mappings for data exchange

Walny, Jagoda K Unknown Date
No description available.
330

An investigation into the meaningfulness of moral language.

January 2007 (has links)
No abstract available. / Thesis (M.A.) - University of KwaZulu-Natal, Pietermaritzburg, 2007.
