131

A SYSTEMATIC METHODOLOGY FOR FATIGUE ANALYSIS OF MACHINE ELEMENTS WITH CHARACTERIZED DYNAMIC LOADS

Rahul Mula (6397871) 10 June 2019 (has links)
Fatigue analysis is essential for the optimization of products subjected to dynamic loads. However, although a number of fatigue analysis theories have been developed, applying an established method to real-world product designs is not a trivial task. Most small and medium-sized enterprises (SMEs) still rely heavily on experiments to evaluate the fatigue lives of their products. Among existing fatigue design methods (i.e., experiments, analytical methods, and simulations), simulation-based methods have the advantages of low cost and a low-risk environment, and they enable a designer to assess the accuracy and performance of a product design without building physical prototypes. Regarding methodologies for fatigue analysis, the identified challenges are (1) the characterization of dynamic loads, (2) the formulation of finite element models that can be aligned with applications or testing scenarios, and (3) the verification and validation of simulations. To make simulation-based fatigue analysis practical for real-world product designs, solutions to these problems must be found. This thesis establishes a systematic methodology for the fatigue analysis of product designs in any material; a carbon steel is used in the present case study to illustrate and verify the proposed methodology. The major tasks of this thesis are: (1) a method for the characterization of dynamic loads, i.e., a numerical method that simulates the kinematic and dynamic behaviour of a product under a given motion and extracts the interacting dynamic forces on the components to be analyzed; (2) a systematic method and procedure to formulate the fatigue analysis problem as a finite element analysis model and solve for the fatigue life of the product; (3) procedures and approaches to verify and validate the fatigue analysis models and procedure used in the present case study; and (4) parametric studies with a set of design variables to show the feasibility and flexibility of using simulation methods to evaluate the influence of multiple design variables on wheel products.
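As a rough illustration of the damage-accumulation step such a simulation-based workflow ends with, the sketch below applies the Palmgren-Miner rule to a load spectrum of the kind a dynamic simulation might produce. The S-N curve parameters and the spectrum are invented placeholders, not values from the thesis.

```python
# Illustrative Palmgren-Miner damage accumulation for a characterized
# dynamic load history. The S-N parameters below are placeholders,
# not values from the thesis.

def cycles_to_failure(stress_amplitude, sigma_f=900.0, b=-0.09):
    """Basquin-type S-N curve: sigma_a = sigma_f * (2N)^b, solved for N."""
    return 0.5 * (stress_amplitude / sigma_f) ** (1.0 / b)

def miner_damage(load_blocks):
    """load_blocks: list of (stress_amplitude_MPa, applied_cycles)."""
    return sum(n / cycles_to_failure(s) for s, n in load_blocks)

# Example: a load spectrum extracted from a dynamic simulation.
spectrum = [(400.0, 1e3), (250.0, 1e5), (150.0, 1e6)]
damage = miner_damage(spectrum)
print(f"accumulated damage = {damage:.3f}")  # failure predicted at damage >= 1
```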
132

Environment Analysis of Higher-Order Languages

Might, Matthew Brendon 29 June 2007 (has links)
Any analysis of higher-order languages must grapple with the tri-facetted nature of lambda. In one construct, the fundamental control, environment and data structures of a language meet and intertwine. With the control facet tamed nearly two decades ago, this work brings the environment facet to heel, defining the environment problem and developing its solution: environment analysis. Environment analysis allows a compiler to reason about the equivalence of environments, i.e., name-to-value mappings, that arise during a program's execution. In this dissertation, two different techniques, abstract counting and abstract frame strings, make this possible. A third technique, abstract garbage collection, makes both of these techniques more precise and, counter to intuition, often faster as well. An array of optimizations and even deeper analyses which depend upon environment analysis provide motivation for this work. In an abstract interpretation, a single abstract entity represents a set of concrete entities. When the entities under scrutiny are bindings (single name-to-value mappings, the atoms of environment), determining when the equality of two abstract bindings implies the equality of their concrete counterparts is the crux of environment analysis. Abstract counting does this by tracking the size of represented sets, looking for singletons, in order to apply the following principle: if {x} = {y}, then x = y. Abstract frame strings enable environmental reasoning by statically tracking the possible stack change between the births of two environments; when this change is effectively empty, the environments are equivalent. Abstract garbage collection improves precision by intermittently removing unreachable environment structure during abstract interpretation.
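A toy rendering of the abstract-counting idea described above, assuming counts drawn from {0, 1, many}; the representation is invented for illustration and is far simpler than the dissertation's machinery.

```python
# Toy abstract counting: track how many concrete bindings each abstract
# binding stands for, saturating at "many". If an abstract binding's
# count is exactly 1, abstract equality implies concrete equality.

MANY = float("inf")

class AbstractCounts:
    def __init__(self):
        self.counts = {}  # abstract binding -> 0, 1, or MANY

    def allocate(self, binding):
        """A new concrete binding is represented by `binding`."""
        c = self.counts.get(binding, 0)
        self.counts[binding] = 1 if c == 0 else MANY

    def collect(self, reachable):
        """Abstract GC: drop bindings unreachable in the abstract heap,
        resetting their counts and sharpening later equality tests."""
        self.counts = {b: c for b, c in self.counts.items() if b in reachable}

    def must_equal(self, b1, b2):
        """Sound only when the abstract binding is a singleton."""
        return b1 == b2 and self.counts.get(b1) == 1

counts = AbstractCounts()
counts.allocate(("x", "call-site-7"))
assert counts.must_equal(("x", "call-site-7"), ("x", "call-site-7"))
counts.allocate(("x", "call-site-7"))  # re-allocation: count saturates to many
assert not counts.must_equal(("x", "call-site-7"), ("x", "call-site-7"))
```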
133

Linear Static Analysis Of Large Structural Models On Pc Clusters

Ozmen, Semih 01 July 2009 (has links) (PDF)
This research focuses on implementing and improving a parallel solution framework for the linear static analysis of large structural models on PC clusters. The framework consists of two separate programs: the first is responsible for preparing data for the parallel solution, which involves partitioning, workload balancing, and equation numbering; the second is a fully parallel finite element program that uses a substructure-based solution approach with direct solvers. The first step of data preparation is partitioning the structure into substructures. After creating the initial substructures, the estimated imbalance of the substructures is adjusted by iteratively transferring nodes from the slower substructures to the faster ones. Once the final substructures are created, the solution phase is initiated. Each processor assembles its substructure's stiffness matrix and condenses it to the interfaces. The interface equations are then solved in parallel with a block-cyclic dense matrix solver. After computing the interface unknowns, each processor calculates the internal displacements and element stresses or forces. Comparative tests were done to demonstrate the performance of the solution framework.
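The per-substructure condensation step described above is, in essence, a Schur-complement reduction onto the interface degrees of freedom. A minimal sketch, assuming the stiffness matrix is already partitioned into internal (i) and boundary (b) index sets; this is the textbook operation, not code from the thesis.

```python
import numpy as np

def condense(K, f, internal, boundary):
    """Condense substructure equations K u = f onto the interface DOFs.
    Returns the Schur complement K_bb - K_bi K_ii^{-1} K_ib and the
    correspondingly reduced load vector."""
    Kii = K[np.ix_(internal, internal)]
    Kib = K[np.ix_(internal, boundary)]
    Kbi = K[np.ix_(boundary, internal)]
    Kbb = K[np.ix_(boundary, boundary)]
    X = np.linalg.solve(Kii, Kib)          # K_ii^{-1} K_ib
    y = np.linalg.solve(Kii, f[internal])  # K_ii^{-1} f_i
    K_cond = Kbb - Kbi @ X
    f_cond = f[boundary] - Kbi @ y
    return K_cond, f_cond

# Once the interface unknowns u_b are solved globally, each processor
# recovers its internal displacements: u_i = K_ii^{-1} (f_i - K_ib u_b).
```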
134

Slope Stability Assessment Along the Bursa-İnegöl-Bozüyük Road at KM: 72+000-72+200

Oztepe, Damla Gaye 01 September 2009 (has links) (PDF)
The purpose of this study is to determine the most suitable remediation technique via geotechnical assessment of the landslide that occurred, in an ancient landslide area, during the construction of the Bursa-İnegöl-Bozüyük Road at KM: 72+000-72+200. For this purpose, the geotechnical parameters of the mobilized soil along the slide surface were determined by back analyses of the landslide at four profiles using the Slope/W software. The landslide was then modeled using coupled analyses (with the Seep/W and Slope/W software) along the most representative profile of the study area, considering the landslide mechanism, the parameters determined from the geotechnical investigations, the size of the landslide, and the location of the slip circle. In addition, since the study area is located in a second-degree earthquake hazard region, pseudo-static stability analyses incorporating the earthquake potential were performed with the Slope/W software. The most suitable slope remediation technique was determined to be a combination of surface and subsurface drainage, a rock buttress at the toe of the slide, and unloading of the landslide material. Static and dynamic analyses of the landslide were also performed using finite element analyses. The static analyses were calibrated against the inclinometer readings in the field. After obtaining good agreement between the inclinometer readings and the finite element results, dynamic analyses were performed using acceleration time histories determined from the seismic characteristics of the study area.
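As a rough illustration of the pseudo-static idea, the sketch below computes the factor of safety of an infinite slope with a horizontal seismic coefficient. The thesis uses limit-equilibrium analyses in Slope/W on the actual slide geometry; the infinite-slope formula (no pore pressure) and all parameter values here are stand-ins.

```python
from math import cos, sin, tan, radians

def pseudo_static_fs(c, phi_deg, gamma, h, beta_deg, kh=0.0):
    """Factor of safety for an infinite slope (no pore pressure) with a
    horizontal seismic coefficient kh. c in kPa, gamma in kN/m^3, h in m,
    beta is the slope angle in degrees."""
    beta, phi = radians(beta_deg), radians(phi_deg)
    # Stresses on the slip plane per unit area; the seismic force adds
    # shear and reduces the normal stress.
    sigma_n = gamma * h * cos(beta) ** 2 - kh * gamma * h * sin(beta) * cos(beta)
    tau = gamma * h * sin(beta) * cos(beta) + kh * gamma * h * cos(beta) ** 2
    return (c + sigma_n * tan(phi)) / tau

# Back-analysis flavour: sweep cohesion, compare static and seismic cases.
for c in (5.0, 10.0, 15.0):
    print(c,
          round(pseudo_static_fs(c, 20.0, 19.0, 8.0, 18.0), 2),
          round(pseudo_static_fs(c, 20.0, 19.0, 8.0, 18.0, kh=0.15), 2))
```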
135

Analysis of cross-system porting and porting errors in software projects

Ray, Baishakhi 11 November 2013 (has links)
Software forking---creating a variant product by copying and modifying an existing project---is often considered an ad hoc, low-cost alternative to principled product line development. To maintain forked projects, developers need to manually port existing features or bug-fixes from one project to another. Such manual porting is not only tedious but also error-prone. When the contexts of the ported code vary, developers often have to adapt the ported code to fit its surroundings. Faulty adaptations or inconsistent updates of the ported code can introduce subtle inconsistencies in the codebase. To build a deeper understanding of cross-system porting and porting-related errors, this dissertation investigates: (1) How can we identify ported code from software version histories? (2) What is the overhead of cross-system porting required to maintain forked projects? (3) What are the extent and characteristics of porting errors that occur in practice? and (4) How can we detect and characterize potential porting errors? As a first step towards assessing the overhead of cross-system porting, we implement REPERTOIRE, a tool to analyze the repeated work of cross-system porting across peer projects. REPERTOIRE detects ported edits between program patches with high accuracy: 94% precision and 84% recall. Using REPERTOIRE, we study the temporal, spatial, and developer dimensions of cross-system porting using 18 years of parallel evolution history of the BSD product family. Our study finds that cross-system porting happens periodically and that the porting rate does not necessarily decrease over time. The upkeep work of porting changes from peer projects is significant, and current porting practice seems to depend heavily on developers doing their porting job on time. Analyzing the version histories of Linux and FreeBSD, we derive five categories of porting errors, including incorrect control- and data-flow, code redundancy, and inconsistent identifier and token renamings. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. SPA detects porting inconsistencies with 65% to 73% precision and 90% recall, and identifies inconsistency types with 58% to 63% precision and 92% recall on average. In a comparison with two existing error detection tools, SPA outperforms them with 14% to 17% better precision.
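A hedged toy version of the core matching idea behind detecting ported edits: score pairs of added lines from two patches by token-set similarity. REPERTOIRE's actual technique operates on patch hunks and is more sophisticated; the tokenizer and threshold below are invented.

```python
import re

def tokens(line):
    """Split a source line into identifier and symbol tokens."""
    return re.findall(r"[A-Za-z_]\w*|\S", line)

def added_lines(patch):
    """Lines added by a unified-diff patch, with the '+' marker stripped."""
    return [l[1:].strip() for l in patch.splitlines()
            if l.startswith("+") and not l.startswith("+++")]

def similarity(a, b):
    """Jaccard similarity over the token sets of two edit lines."""
    ta, tb = set(tokens(a)), set(tokens(b))
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def ported_edits(patch_a, patch_b, threshold=0.8):
    """Pairs of added lines that look ported between the two patches."""
    return [(x, y) for x in added_lines(patch_a)
                   for y in added_lines(patch_b)
                   if similarity(x, y) >= threshold]
```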
136

Verification of sequential and concurrent libraries

Deshmukh, Jyotirmoy Vinay 02 August 2011 (has links)
The goal of this dissertation is to present new and improved techniques for fully automatic verification of sequential and concurrent software libraries. In most cases, automatic software verification is plagued by undecidability, while in many others it suffers from prohibitively high computational complexity. Model checking -- a highly successful technique used for verifying finite state hardware circuits against logical specifications -- has been less widely adapted for software, as software verification tends to involve reasoning about potentially infinite state-spaces. Two of the biggest culprits responsible for making software model checking hard are heap-allocated data structures and concurrency. In the first part of this dissertation, we study the problem of verifying shape properties of sequential data structure libraries. Such libraries are implemented as collections of methods that manipulate the underlying data structure. Examples of such methods include: methods to insert, delete, and update data values of nodes in linked lists, binary trees, and directed acyclic graphs; methods to reverse linked lists; and methods to rotate balanced trees. Well-written methods are accompanied by documentation that specifies the observational behavior of these methods in terms of pre/post-conditions. A pre-condition [phi] for a method M characterizes the state of a data structure before the method acts on it, and the post-condition [psi] characterizes the state of the data structure after the method has terminated. In a certain sense, we can view the method as a function that operates on an input data structure, producing an output data structure. Examples of such pre/post-conditions include shape properties such as acyclicity, sorted-ness, tree-ness, reachability of particular data values, and reachability of pointer values, and data structure-specific properties such as: "no red node has a red child'', and "there is no node with data value 'a' in the data structure''. Moreover, methods are often expected not to violate certain safety properties such as the absence of dangling pointers, absence of null pointer dereferences, and absence of memory leaks. We often assume such specifications as implicit, and say that a method is incorrect if it violates such specifications. We model data structures as directed graphs, and use the two terms interchangeably. Verifying correctness of methods operating on graphs is an instance of the parameterized verification problem: for every input graph that satisfies [phi], we wish to ensure that the corresponding output graph satisfies [psi]. Control structures such as loops and recursion allow an arbitrary method to simulate a Turing Machine. Hence, the parameterized verification problem for arbitrary methods is undecidable. One of the main contributions of this dissertation is in identifying mathematical conditions on a programming language fragment for which parameterized verification is not only decidable, but also efficient from a complexity perspective. The decidable fragment we consider can be broadly sub-divided into two categories: the class of iterative methods, or methods which use loops as a control flow construct to traverse a data structure, and the class of recursive methods, or methods that use recursion to traverse the data structure. 
We show that if an iterative method operating on a directed graph performs a bounded number of destructive updates (bounded by a constant, i.e., O(1)) and is guaranteed to terminate, then the correctness of the method can be checked in time polynomial in the size of the method and its specifications. Further, we provide a well-defined syntactic fragment for recursive methods operating on tree-like data structures, which assures that any method in this fragment can be verified in time polynomial in the size of the method and its specifications. Our approach draws on the theory of tree automata, and we show that parameterized correctness can be reduced to emptiness of finite-state, nondeterministic tree automata that operate on infinite trees. We then leverage efficient algorithms for checking the emptiness of such tree automata to obtain a tractable verification framework. Our prototype tool demonstrates the low theoretical complexity of our technique by efficiently verifying common methods that operate on data structures. In the second part of the dissertation, we tackle another obstacle to tractable software verification: concurrency. In particular, we explore the application of a static analysis technique based on interprocedural dataflow analysis to predict and document deadlocks in concurrent libraries, and to analyze deadlocks in clients that use such libraries. The kinds of deadlocks we focus on result from circular dependencies in the acquisition of shared resources (such as locks). Well-written applications that use several locks implicitly assume a certain partial order in which locks are acquired by threads. A cycle in the lock acquisition order is an indicator of a possible deadlock within the application. Methods in object-oriented concurrent libraries often encapsulate internal synchronization details. As a result of this information hiding, clients calling the library methods may cause thread safety violations by invoking methods in a manner that violates the partial ordering between lock acquisitions that is implicit within the library. Given a concurrent library, we present a technique for inferring interface contracts that specify permissible concurrent method calls and patterns of aliasing among method arguments that guarantee deadlock-free execution for the methods in the library. The contracts also help client developers by documenting required assumptions about the library methods. Alternatively, the contracts can be statically enforced in the client code to detect potential deadlocks in the client. Our technique combines static analysis with a symbolic encoding for tracking lock dependencies, allowing us to synthesize contracts using a satisfiability modulo theories (SMT) solver. Additionally, we investigate extensions of our technique to reason about deadlocks in libraries that employ signalling primitives such as wait-notify for cooperative synchronization. We demonstrate its scalability and efficiency with a prototype tool that analyzed over a million lines of code from some widely-used open-source Java libraries in less than 50 minutes. Furthermore, the contracts inferred by our approach have been able to pinpoint real bugs, i.e., deadlocks that have been reported by users of these libraries.
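A minimal sketch of the lock-acquisition-order reasoning the contracts encode: build a graph with an edge l1 -> l2 whenever some thread acquires l2 while holding l1, and flag cycles as potential deadlocks. The thesis synthesizes contracts with an SMT solver; this toy only shows the cycle-as-deadlock-indicator principle, on invented trace data.

```python
from collections import defaultdict

def lock_order_graph(traces):
    """traces: per-thread lists of (acquired_lock, locks_already_held).
    Edge l1 -> l2 means some thread acquired l2 while holding l1."""
    g = defaultdict(set)
    for trace in traces:
        for lock, held in trace:
            for h in held:
                g[h].add(lock)
    return g

def has_cycle(g):
    """Depth-first search for a back edge in the lock-order graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def visit(u):
        color[u] = GRAY
        for v in g[u]:
            if color[v] == GRAY or (color[v] == WHITE and visit(v)):
                return True
        color[u] = BLACK
        return False
    return any(color[u] == WHITE and visit(u) for u in list(g))

# Thread 1 takes A then B; thread 2 takes B then A: potential deadlock.
t1 = [("A", set()), ("B", {"A"})]
t2 = [("B", set()), ("A", {"B"})]
print(has_cycle(lock_order_graph([t1, t2])))  # True
```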
137

Understanding and Improving Object-Oriented Software Through Static Software Analysis

Irwin, Warwick Allan January 2007 (has links)
Software engineers need to understand the structure of the programs they construct. This task is made difficult by the intangible nature of software, and its complexity, size and changeability. Static analysis tools can help by extracting information from source code and conveying it to software engineers. However, the information provided by typical tools is limited, and some potentially rich veins of information (particularly metrics and visualisations) are under-utilised because developers cannot easily acquire or make use of the data. This thesis documents new tools and techniques for static analysis of software. It addresses the problem of generating parsers directly from standard grammars, thus avoiding the common practice of customising grammars to comply with the limitations of a given parsing algorithm, typically LALR(1). This is achieved by a new parser generator that applies a range of bottom-up parsing algorithms to produce a hybrid parsing automaton. Consequently, we can generate more powerful deterministic parsers (up to and including LR(k)) without incurring the combinatorial explosion that makes canonical LR(k) parsers impractical. The range of practical parsers is further extended to include GLR, which was originally developed for natural language parsing but is shown here to also have advantages for static analysis of programming languages. This emphasis on conformance to standard grammars improves the rigour of static analysis tools and allows clearer definition and communication of derived information, such as metrics. Beneath the syntactic structure of software (exposed by parsing) lies the deeper semantic structure of declarations, scopes, classes, methods, inheritance, invocations, and so on. In this work, we present a new tool that performs semantic analysis on parse trees to produce a comprehensive semantic model suitable for processing by other static analysis tools. An XML pipeline approach is used to expose the syntactic and semantic models of the software and to derive metrics and visualisations. The approach is demonstrated producing several types of metrics and visualisations for real software, and the value of static analysis for informing software engineering decisions is shown.
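As a stand-in for the parse-then-measure flow (the thesis generates LR(k)/GLR parsers from standard grammars and pipes syntactic and semantic models through XML), here is a small sketch that derives one classic metric from source using Python's own ast module; nothing here is from the thesis's toolchain.

```python
import ast

def methods_per_class(source):
    """Parse source and derive a simple static metric for each class:
    the number of methods defined directly in its body."""
    tree = ast.parse(source)
    return {node.name: sum(isinstance(n, ast.FunctionDef) for n in node.body)
            for node in ast.walk(tree) if isinstance(node, ast.ClassDef)}

print(methods_per_class("class A:\n def f(self): pass\n def g(self): pass\n"))
# {'A': 2}
```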
138

Static analysis of numerical properties in the presence of pointers

Fu, Zhoulai 22 July 2013 (has links) (PDF)
The fast and furious pace of change in computing technology has become an article of faith for many. The reliability of computer-based systems depends crucially on the correctness of their computations. Can man, who created the computer, be capable of preventing machine-made misfortune? The theory of static analysis strives to achieve this ambition. The analysis of numerical properties of programs has been an essential research topic for static analysis. These kinds of properties are commonly modeled and handled by the concept of numerical abstract domains. Unfortunately, lifting these domains to heap-manipulating programs is not obvious. On the other hand, points-to analyses have been intensively studied to analyze pointer behaviors, and some scale to very large programs, but without inferring any numerical properties. We propose a framework based on the theory of abstract interpretation that is able to combine existing numerical domains and points-to analyses in a modular way. The static numerical analysis is prototyped using the SOOT framework for pointer analyses and the PPL library for numerical domains. The implementation is able to analyze large Java programs within several minutes. The second part of this thesis consists of a theoretical study of the combination of the points-to analysis with another pointer analysis providing information called must-alias. Two pointer variables must alias at some program control point if they hold equal references whenever the control point is reached. We have developed an algorithm of quadruple complexity that sharpens points-to analysis using must-alias information. The algorithm is proved correct following a semantics-based formalization and the concept of bisimulation, borrowed from game theory, model checking, etc.
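A toy sketch of the combination the first part describes: an interval store updated through a points-to map, with a strong update when the points-to set is a singleton and a joining weak update otherwise. The representation is invented for illustration and is far simpler than the thesis's framework.

```python
# Toy combination of an interval domain with a points-to analysis:
# a pointer write *p = [lo, hi] updates every abstract location p may
# point to. A singleton points-to set permits a strong (overwriting)
# update; otherwise a weak update joins with the old interval.

def join(i1, i2):
    """Least upper bound of two intervals."""
    return (min(i1[0], i2[0]), max(i1[1], i2[1]))

def assign_through_pointer(store, points_to, p, interval):
    targets = points_to[p]
    strong = len(targets) == 1
    for loc in targets:
        if strong:
            store[loc] = interval                              # strong update
        else:
            store[loc] = join(store.get(loc, interval), interval)  # weak update
    return store

store = {"x": (0, 0), "y": (5, 5)}
pts = {"p": {"x", "y"}}  # p may point to x or y
assign_through_pointer(store, pts, "p", (1, 2))
print(store)  # {'x': (0, 2), 'y': (1, 5)} -- weak updates join intervals
```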
139

Analysis of Thick Laminated Composite Beams using Variational Asymptotic Method

Ameen, Maqsood Mohammed January 2016 (has links) (PDF)
An asymptotically-exact methodology is presented for obtaining the cross-sectional stiffness matrix of a pre-twisted, moderately-thick beam having rectangular cross sections and made of transversely isotropic material. The beam is modelled without assumptions from 3-D elasticity. The strain energy of the beam is computed making use of the constitutive law and the kinematical relations derived with the inclusion of geometrical nonlinearities and initial twist. Large displacements and rotations are allowed, but small strain is assumed. The Variational Asymptotic Method (VAM) is used to minimize the energy functional, thereby reducing the cross section to a point on the reference line with appropriate properties, yielding a 1-D constitutive law. In this method as applied herein, the 2-D cross-sectional analysis is performed asymptotically by taking advantage of one material small parameter and two geometric small parameters. The 3-D strain components are derived using kinematics and arranged in orders of the small parameters. Warping functions are obtained by minimisation of the strain energy subject to a set of constraints that renders the 1-D strain measures well-defined. Closed-form expressions are derived for the 3-D non-linear warping and stress fields. The model is capable of predicting interlaminar and transverse shear stresses accurately up to first order.
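In symbols, the end product of the dimensional reduction is a quadratic 1-D strain energy per unit length; the generic classical-model form below (extension strain γ11, twist κ1, bending curvatures κ2, κ3) illustrates the shape of the result, not the thesis's specific stiffness matrix.

```latex
% Generic form of the 1-D constitutive law produced by the dimensional
% reduction: S is the symmetric 4x4 cross-sectional stiffness matrix
% that the cross-sectional analysis computes.
U = \frac{1}{2}
\begin{pmatrix} \gamma_{11} & \kappa_{1} & \kappa_{2} & \kappa_{3} \end{pmatrix}
\begin{pmatrix}
S_{11} & S_{12} & S_{13} & S_{14}\\
S_{12} & S_{22} & S_{23} & S_{24}\\
S_{13} & S_{23} & S_{33} & S_{34}\\
S_{14} & S_{24} & S_{34} & S_{44}
\end{pmatrix}
\begin{pmatrix} \gamma_{11}\\ \kappa_{1}\\ \kappa_{2}\\ \kappa_{3} \end{pmatrix}
```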
140

Revisiting the abstract domain of polyhedra: constraints-only representation and formal proof

Fouilhé, Alexis 15 October 2015 (has links)
The work reported in this thesis revisits in two ways the abstract domain of polyhedra used for static analysis of programs. First, strong guarantees are provided on the soundness of the operations on polyhedra, by using the Coq proof assistant to check the soundness proofs. The means used to ensure correctness don't hinder the performance of the resulting Verimag Polyhedra Library (VPL). It is built on the principle of result verification: computations are performed by an untrusted oracle and their results are verified by a checker whose correctness is proved in Coq. In order to make verification cheap, the oracle computes soundness witnesses along with the results. The other distinguishing feature of VPL is that it relies only on the constraint representation of polyhedra, as opposed to the common practice of using both constraints and generators. Despite this unusual choice, VPL turns out to be a competitive abstract domain of polyhedra, performance-wise. As expected, the join operator of VPL, which computes the convex hull of two polyhedra, is the costliest operator. Since it builds on the projection operator, this thesis also investigates a new approach to performing projections, based on parametric linear programming. A new understanding of projection encoded as a parametric linear problem is presented. The thesis closes on a progress report on the design of a new solving algorithm, tailored to the specifics of the encoding so as to achieve good performance.
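Projection is the operation the join builds on. As a baseline for contrast, here is a sketch of classic Fourier-Motzkin elimination over a constraints-only representation; the thesis replaces this with parametric linear programming, which avoids naive elimination's blow-up. The encoding below is illustrative only.

```python
from fractions import Fraction

def project(constraints, k):
    """Eliminate variable k from a system of constraints a . x <= b,
    given as (coeffs, bound) pairs, by Fourier-Motzkin elimination."""
    pos, neg, keep = [], [], []
    for a, b in constraints:
        (pos if a[k] > 0 else neg if a[k] < 0 else keep).append((a, b))
    for ap, bp in pos:
        for an, bn in neg:
            # Scale the pair so the coefficients of x_k cancel.
            lp, ln = Fraction(1, ap[k]), Fraction(-1, an[k])
            a = [lp * x + ln * y for x, y in zip(ap, an)]
            keep.append((a, lp * bp + ln * bn))
    return keep

# Project {x - y <= 1, -x <= 0, y <= 3} by eliminating x (index 0):
p = project([([1, -1], 1), ([-1, 0], 0), ([0, 1], 3)], 0)
print(p)  # keeps y <= 3 and derives -y <= 1
```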
