121

A Hybrid Software Change Impact Analysis for Large-scale Enterprise Systems

Chen, Wen 11 1900 (has links)
This work is concerned with analysing the potential impact of direct changes to large-scale enterprise systems and, in particular, with how to minimise the testing effort for such changes. A typical enterprise system may consist of hundreds of thousands of classes and millions of methods, so applying conventional testing techniques to such a system is extremely costly and difficult. Retesting everything after a change is very expensive and, in practice, generally unnecessary. Selective testing can be more effective. However, it requires a deep understanding of the target system, and a lack of that understanding can lead to insufficient test coverage. Change impact analysis can be used to estimate the impacts of the changes to be applied, giving developers and testers confidence in selecting the necessary tests and identifying untested entities. Conventional change impact analysis approaches include static analysis, dynamic analysis, or a hybrid of the two. They have proved useful on small and medium-sized programs, providing users with an inside view of the system within an acceptable running time. However, large-scale enterprise systems are orders of magnitude larger, and conventional approaches often run into resource problems such as insufficient memory and/or unacceptable running times (up to weeks). More critically, those approaches can generate large numbers of false negatives and false positives. In this work, a conservative static analysis capable of dealing with inheritance was conducted on an enterprise system and its associated changes to obtain all the potential impacts. An aspect-based dynamic analysis was then used to instrument the system and collect a set of dynamic impacts at run-time. We are careful not to discard impacts unless we can show that they are definitely not affected by the change. Reachability analysis examines the program to determine “whether a given path in a program representation corresponds to a possible execution path”. In other words, we employ reachability analysis to eliminate infeasible paths (i.e., mismatched calls and returns) identified in the control flow of the program. Furthermore, in the alias analysis phase, we aim to identify paths that are feasible but cannot be affected by the direct changes to the system, by searching for pairs of accesses that may be aliased at each program point of interest. Our contributions are as follows: we designed a hybrid approach that combines static and dynamic analysis with reachability analysis and alias/pointer analysis. It can be used to (1) solve the scalability problem on large-scale systems, (2) reduce false positives without introducing false negatives, (3) extract both direct and indirect changes, and (4) identify impacts even before the changes are made. Using our approach, organizations can focus on a much smaller, relevant subset of the overall test suite instead of blindly running the entire suite. It also enables testers to augment the test suite with tests covering previously uncovered impacts. We include an empirical study that illustrates the savings that can be attained. / Thesis / Doctor of Philosophy (PhD)
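The core hybrid idea can be conveyed in a few lines. The sketch below is illustrative only, not the dissertation's implementation: it assumes a pre-built reverse call graph and an aspect-collected execution trace (names such as HybridImpactSketch and recordExecution are invented here), computes the conservative static impact set as the transitive callers of the changed methods, and then intersects it with the dynamically observed methods.

```java
import java.util.*;

/**
 * Minimal sketch of a hybrid change impact analysis. Illustrative only;
 * the thesis' actual analysis also handles inheritance, reachability,
 * and aliasing on systems with millions of methods.
 */
public class HybridImpactSketch {
    // Static call graph stored as reverse edges: method -> its callers.
    private final Map<String, Set<String>> callers = new HashMap<>();
    // Methods observed at run-time, e.g. via aspect-based instrumentation.
    private final Set<String> executed = new HashSet<>();

    public void addCall(String caller, String callee) {
        callers.computeIfAbsent(callee, k -> new HashSet<>()).add(caller);
    }

    public void recordExecution(String method) {
        executed.add(method);
    }

    /** Conservative static impact set: all transitive callers of the changed methods. */
    public Set<String> staticImpact(Set<String> changed) {
        Set<String> impacted = new HashSet<>(changed);
        Deque<String> work = new ArrayDeque<>(changed);
        while (!work.isEmpty()) {
            String m = work.pop();
            for (String c : callers.getOrDefault(m, Set.of())) {
                if (impacted.add(c)) work.push(c);  // visit each caller once
            }
        }
        return impacted;
    }

    /** Hybrid refinement: statically impacted methods that were actually exercised. */
    public Set<String> hybridImpact(Set<String> changed) {
        Set<String> result = staticImpact(changed);
        result.retainAll(executed);  // keep only dynamically confirmed impacts
        return result;
    }
}
```

In this simplification the conservative static set is still retained, mirroring the thesis' caution about never discarding an impact unless it is provably unaffected; the intersection only prioritises which tests to run first.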
122

Detecting access to sensitive data in software extensions through static analysis / Att upptäcka åtkomst till känslig information i mjukvarutillägg genom statisk analys

Hedlin, Johan, Kahlström, Joakim January 2019 (has links)
Static analysis is a technique for automatically auditing code without having to execute it or manually read through it. It is highly effective and can scan large amounts of code or text very quickly. This thesis uses static analysis to find potential threats within a software product's extension modules. These extensions are developed by third parties and should not be allowed to access information belonging to other extensions. However, due to the structure of the software, there is no easy way to restrict this while keeping the software's functionality intact. A static analysis tool could detect such threats by analyzing the code of an extension before it is published online, thereby keeping all current functionality intact. As the software is based on a lesser-known language and faces a specific threat in the form of information disclosure, a new static analysis tool had to be developed. To achieve this, language-specific functionality and features available in C++ were combined to create an extendable tool capable of detecting cross-extension data access.
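To give a flavor of the detection idea, here is a deliberately toy sketch. It assumes a hypothetical convention in which each extension may only read keys under its own namespace and a hypothetical getData("…") access API; the thesis' tool targets a different language and works on real program structure rather than a regex.

```java
import java.util.*;
import java.util.regex.*;

/**
 * Toy sketch of a cross-extension data-access check, assuming each
 * extension may only read keys under its own namespace (e.g. "ext42.*").
 * The getData(...) pattern is an invented stand-in for the real API.
 */
public class CrossExtensionCheck {
    // Hypothetical access pattern: getData("some.namespace.key")
    private static final Pattern ACCESS = Pattern.compile("getData\\(\"([\\w.]+)\"\\)");

    public static List<String> findViolations(String extensionId, String source) {
        List<String> violations = new ArrayList<>();
        Matcher m = ACCESS.matcher(source);
        while (m.find()) {
            String key = m.group(1);
            if (!key.startsWith(extensionId + ".")) {  // read outside own namespace
                violations.add(key);
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        String src = "getData(\"ext42.settings\"); getData(\"ext7.secrets\");";
        System.out.println(findViolations("ext42", src)); // -> [ext7.secrets]
    }
}
```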
123

Improving the Static Resolution of Dynamic Java Features

Sawin, Jason E. 11 September 2009 (has links)
No description available.
124

Analyzing Large-Scale Object-Oriented Software to Find and Remove Runtime Bloat

Xu, Guoqing 27 September 2011 (has links)
No description available.
125

Analysis of R/C frames considering cracking effect and plastic hinge formation

Kara, Ilker F., Ashour, Ashraf, Dundar, C. 10 September 2017 (has links)
Yes / The design of reinforced concrete buildings must satisfy serviceability stiffness criteria, in terms of maximum lateral deflection and inter-story drift, in order to prevent both structural and non-structural damage. Consideration of plastic hinge formation is also important for obtaining an accurate failure mechanism and the ultimate strength of reinforced concrete frames. In the present study, an iterative procedure has been developed for the analysis of reinforced concrete frames with cracked elements and consideration of plastic hinge formation. The ACI and probability-based effective stiffness models are used for the effective moment of inertia of cracked members. The effect of shear deformation is also considered, and the variation of shear stiffness due to cracking is evaluated using reduced shear stiffness models available in the literature. The analytical procedure has been demonstrated through application to three reinforced concrete frame examples from the literature. It has been shown that the iterative analytical procedure can provide accurate and efficient predictions of the deflections and ultimate strength of the frames studied under lateral and vertical loads. The proposed procedure is also efficient in terms of computational time and convergence rate. The developed technique was able to accurately predict the locations and sequential development of plastic hinges in the frames. The results also show that shear deformation can contribute significantly to frame deflections.
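For reference, the ACI effective moment of inertia mentioned above is commonly taken in the classical Branson form; the abstract does not specify the exact model variant used, so the relation below is a sketch of the standard ACI 318 expression for a cracked member with service moment M_a at least equal to the cracking moment M_cr:

```latex
% Branson-type effective moment of inertia (ACI 318 form):
% M_cr = cracking moment, M_a = maximum service-load moment,
% I_g = gross and I_cr = fully cracked moment of inertia.
I_e \;=\; \left(\frac{M_{cr}}{M_a}\right)^{3} I_g
      \;+\; \left[\,1-\left(\frac{M_{cr}}{M_a}\right)^{3}\right] I_{cr}
      \;\le\; I_g
```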
126

Secure Coding Practice in Java: Automatic Detection, Repair, and Vulnerability Demonstration

Zhang, Ying 12 October 2023 (has links)
The Java platform and third-party open-source libraries provide various Application Programming Interfaces (APIs) to facilitate secure coding. However, using these APIs securely is challenging for developers who lack cybersecurity training. Prior studies show that many developers use APIs insecurely, thereby introducing vulnerabilities in their software. Despite the availability of various tools designed to identify insecure API usage, their effectiveness in helping developers with secure coding practices remains unclear. This dissertation focuses on two main objectives: (1) exploring the strengths and weaknesses of existing automated detection tools for API-related vulnerabilities, and (2) creating better tools that detect, repair, and demonstrate these vulnerabilities. Our research started with investigating the effectiveness of current tools in supporting developers' secure coding practices. We systematically explored the strengths and weaknesses of existing automated tools for detecting API-related vulnerabilities. Through comprehensive analysis, we observed that most existing tools merely report misuses without suggesting any customized fixes. Moreover, developers often rejected tool-generated vulnerability reports due to their concerns about the correctness of detection and the exploitability of the reported issues. To address these limitations, the second work proposed SEADER, an example-based approach to detect and repair security-API misuses. Given an exemplar ⟨insecure, secure⟩ code pair, SEADER compares the snippets to infer an API-misuse template and the corresponding fixing edit. Based on the inferred information, given a program, SEADER performs inter-procedural static analysis to search for security-API misuses and to propose customized fixes. The third work leverages ChatGPT-4.0 to automatically generate security test cases. These test cases can demonstrate how vulnerable API usage facilitates supply chain attacks on specific software applications. By running such test cases during software development and maintenance, developers can gain more relevant information about exposed vulnerabilities, and may better create secure-by-design and secure-by-default software. / Doctor of Philosophy / The Java platform and third-party open-source libraries provide various Application Programming Interfaces (APIs) to facilitate secure coding. However, using these APIs securely can be challenging, especially for developers who are not trained in cybersecurity. Prior work shows that many developers use APIs insecurely, consequently introducing vulnerabilities in their software. Despite the availability of various tools designed to identify insecure API usage, it is still unclear how well they help developers with secure coding practices. This dissertation focuses on (1) exploring the strengths and weaknesses of existing automated detection tools for API-related vulnerabilities, and (2) creating better tools that detect, repair, and demonstrate these vulnerabilities. We first systematically evaluated the strengths and weaknesses of the existing automated API-related vulnerability detection tools. We observed that most existing tools merely report misuses without suggesting any customized fixes. Additionally, developers often reject tool-generated vulnerability reports due to their concerns about the correctness of detection and whether the reported vulnerabilities are truly exploitable.
To address the limitations found in our study, the second work proposed a novel example-based approach, SEADER, to detect and repair insecure API usage. The third work leverages ChatGPT-4.0 to automatically generate security test cases and to demonstrate how vulnerable API usage facilitates supply chain attacks on given software applications.
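To make the ⟨insecure, secure⟩ template idea concrete, the sketch below shows a drastically simplified detect-and-suggest step on one well-known misuse (a weak cipher algorithm). SEADER itself infers templates from code structure and runs inter-procedural static analysis; the class and method names here are invented for illustration.

```java
import java.util.regex.*;

/**
 * Toy sketch in the spirit of an example-based fix template, assuming a
 * template inferred from one <insecure, secure> pair:
 *   insecure: Cipher.getInstance("DES")
 *   secure:   Cipher.getInstance("AES/GCM/NoPadding")
 * Real tools match program structure, not raw strings.
 */
public class ApiMisuseTemplate {
    private final Pattern insecure;
    private final String fix;

    public ApiMisuseTemplate(String insecureCall, String secureCall) {
        this.insecure = Pattern.compile(Pattern.quote(insecureCall));
        this.fix = secureCall;
    }

    /** Returns the source with each matched misuse replaced by the suggested fix. */
    public String repair(String source) {
        return insecure.matcher(source).replaceAll(Matcher.quoteReplacement(fix));
    }

    public static void main(String[] args) {
        ApiMisuseTemplate t = new ApiMisuseTemplate(
            "Cipher.getInstance(\"DES\")",
            "Cipher.getInstance(\"AES/GCM/NoPadding\")");
        System.out.println(t.repair("Cipher c = Cipher.getInstance(\"DES\");"));
    }
}
```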
127

Design Validation of RTL Circuits using Binary Particle Swarm Optimization and Symbolic Execution

Puri, Prateek 05 August 2015 (has links)
Over the last two decades, chip design has been conducted at the register transfer level (RTL) using hardware description languages (HDLs) such as VHDL and Verilog. Modeling at the behavioral level not only allows for better representation and understanding of the design, but also allows for encapsulation of the sub-modules, thus increasing productivity. Despite these benefits, validating an RTL design is not necessarily easier. Today, design validation is considered one of the most time- and resource-consuming aspects of hardware design, and the costs associated with late detection of bugs can be enormous. Together with stringent time-to-market factors, the need to guarantee the correct functionality of the design is more critical than ever. The work done in this thesis tackles the problem of RTL design validation and presents new frameworks for functional test generation. We use branch coverage as our metric to evaluate the quality of the generated test stimuli. The initial effort for test generation utilized simulation-based techniques because of their scalability with design size and ease of use. However, simulation-based methods work on input spaces rather than the state space of the design under test (DUT), and they often fail to traverse very narrow search paths in large input spaces. To counter this problem and enhance the test generation framework, in subsequent work in this thesis certain design semantics are statically extracted and recurrence relationships between different variables are mined. Information such as relations among variables and loops can be extremely valuable from a test generation point of view. The simulation-based method is hybridized with a Z3-based symbolic backward execution engine, with feedback among the different stages. The hybridized method performs loop abstraction and is able to traverse narrow design paths without performing costly circuit analysis or explicit loop unrolling. Structurally and functionally unreachable branches are also identified during test generation. Experimental results show that the proposed techniques achieve high branch coverage on several ITC'99 benchmark circuits and their modified variants, with significant speed-up and a reduction in sequence length. / Master of Science
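As a flavor of the simulation-based side, here is a minimal sketch of binary particle swarm optimization driven by a coverage-style fitness function. It is not the thesis' framework: the parameters are generic textbook values, and the fitness hook is a stand-in for simulating the DUT and counting covered branches.

```java
import java.util.*;
import java.util.function.Function;

/**
 * Minimal binary-PSO sketch for coverage-directed test generation.
 * Illustrative only; a real fitness function would apply the bit
 * vector as a stimulus and count the branches it covers.
 */
public class BinaryPso {
    static final int BITS = 32, PARTICLES = 20, ITERATIONS = 100;
    static final double W = 0.7, C1 = 1.5, C2 = 1.5;   // generic PSO constants
    static final Random RNG = new Random(42);

    public static boolean[] optimize(Function<boolean[], Double> fitness) {
        boolean[][] pos = new boolean[PARTICLES][BITS];
        double[][] vel = new double[PARTICLES][BITS];
        boolean[][] pBest = new boolean[PARTICLES][BITS];
        double[] pBestFit = new double[PARTICLES];
        boolean[] gBest = new boolean[BITS];
        double gBestFit = -1;

        for (int i = 0; i < PARTICLES; i++) {           // random initial swarm
            for (int b = 0; b < BITS; b++) pos[i][b] = RNG.nextBoolean();
            pBest[i] = pos[i].clone();
            pBestFit[i] = fitness.apply(pos[i]);
            if (pBestFit[i] > gBestFit) { gBestFit = pBestFit[i]; gBest = pos[i].clone(); }
        }
        for (int it = 0; it < ITERATIONS; it++) {
            for (int i = 0; i < PARTICLES; i++) {
                for (int b = 0; b < BITS; b++) {
                    // Velocity update, then a sigmoid squash giving the
                    // probability that the bit is set to 1.
                    vel[i][b] = W * vel[i][b]
                        + C1 * RNG.nextDouble() * ((pBest[i][b] ? 1 : 0) - (pos[i][b] ? 1 : 0))
                        + C2 * RNG.nextDouble() * ((gBest[b] ? 1 : 0) - (pos[i][b] ? 1 : 0));
                    pos[i][b] = RNG.nextDouble() < 1.0 / (1.0 + Math.exp(-vel[i][b]));
                }
                double f = fitness.apply(pos[i]);
                if (f > pBestFit[i]) { pBestFit[i] = f; pBest[i] = pos[i].clone(); }
                if (f > gBestFit)   { gBestFit = f;   gBest = pos[i].clone(); }
            }
        }
        return gBest;
    }

    public static void main(String[] args) {
        // Toy stand-in fitness: number of 1-bits.
        boolean[] best = optimize(bits -> {
            double n = 0;
            for (boolean b : bits) if (b) n++;
            return n;
        });
        System.out.println(Arrays.toString(best));
    }
}
```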
128

An examination of analysis and optimization procedures within a PBSD framework

Cott, Andrew January 1900 (has links)
Master of Science / Department of Architectural Engineering and Construction Science / Kimberly W. Kramer / The basic tenets of performance-based seismic design (PBSD) are introduced. This includes a description of the underlying philosophy of PBSD, the concept of performance objectives, and a description of hazard levels and performance indicators. After establishing the basis of PBSD, analysis procedures that fit well within the PBSD framework are introduced. These procedures are divided into four basic categories: linear static, linear dynamic, nonlinear static, and nonlinear dynamic. Baseline FEMA requirements are introduced for each category. Each analysis category is then expanded to include a detailed description of, and variations on, the basic procedure. Finally, optimization procedures that mesh well with a PBSD framework are introduced and described. The optimization discussion focuses first on the solution tools needed to effectively execute a PBSD multi-objective optimization procedure, namely genetic algorithms and evolution strategies. Next, multiple options for defining objective functions and constraints are presented to illustrate the versatility of structural optimization. Taken together, this report illustrates the unique aspects of PBSD. As PBSD moves to the forefront of design methodology, the subjects discussed serve to familiarize engineers with the advantages, possibilities, and finer workings of this powerful new design methodology.
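To indicate how objective functions and constraints typically enter such a framework, the formulation below is a generic, illustrative statement (not the report's specific one) in which member sizing minimizes cost subject to drift-based performance objectives at each hazard level:

```latex
% Generic PBSD design-optimization statement (illustrative):
\begin{aligned}
\min_{x}\;& C(x) && \text{(material/construction cost)} \\
\text{s.t.}\;& \theta_{h}(x) \le \theta_{h}^{\,\mathrm{limit}},
              \quad h = 1,\dots,H && \text{(inter-story drift per hazard level)} \\
             & x \in X && \text{(available member sections)}
\end{aligned}
```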
129

Stress and Sliding Stability Analysis of Songlin Rock-Filled Concrete Gravity Dam

Sundström, Max, Ivedal, Max January 2016 (has links)
Construction of the Songlin rock-filled concrete gravity dam, located in Yunnan province, China, began at the end of 2015. In this master thesis, the finite element method (FEM) based software Abaqus has been used to perform a computational analysis of tensile stresses, compressive stresses, and sliding stability under static conditions. One overflow section and one non-overflow section of the dam have been analysed. The results of the analysis have been evaluated by comparison with Chinese standards for dam safety and are intended to help engineers make decisions during the construction of the dam. The computed compressive stress values for both the overflow and non-overflow sections are not evaluated to be within safe levels; further evaluation is required to ensure the safety of the dam. The dam is considered safe from vertical tension in the analysed region; however, an extended evaluation including the whole dam base is recommended. The analysed cross-sections can be considered safe against sliding, but further analysis is required to draw conclusions about the sliding stability of the full dam base.
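Sliding stability along a plane is commonly assessed in gravity-dam practice with a shear-friction safety factor; the generic form below is a sketch only, since the abstract does not detail the governing standard's exact coefficients, load combinations, or required safety levels:

```latex
% Generic shear-friction safety factor against sliding on a plane:
% f' = friction coefficient, c' = cohesion, A = contact area,
% \sum W = resultant force normal to the plane,
% \sum P = resultant driving force along the plane.
K' \;=\; \frac{f'\,\sum W \;+\; c'\,A}{\sum P}
```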
130

[en] EVALUATION OF STATIC ANALYSIS IN DATA TYPE SEMANTIC CONFLICT DETECTION / [pt] AVALIAÇÃO DO USO DE ANÁLISE ESTÁTICA NA DETECÇÃO DE CONFLITOS SEMÂNTICOS EM TIPOS DE DADOS

RAFAEL DE PINHO ANDRE 21 August 2014 (has links)
[en] Within an information system, faults can occur due to differences in how the parties involved understand the meaning of data. This is a well-known problem in software engineering, and defects of this type have been responsible for catastrophic failures, such as the loss of the Mars Climate Orbiter in 1999. The current scenario of data processing and exchange, with high information traffic volume and heterogeneous participants, increases systems' vulnerability to these defects. Moreover, software quality assurance techniques are typically oriented to data structure and physical properties, and fail to efficiently address semantic issues. This work evaluates the use of static analysis to detect semantic conflicts in data types, investigating its efficacy through a qualitative study comparing different software quality assurance approaches. The static analysis tool VERITAS (VERIficador esTÁtico Semântico) and the SemTypes notation were developed exclusively to address the problem of semantic conflicts, adding semantic type control to the types recognized by compilers; both are presented in this work.
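The abstract does not show the SemTypes notation itself, so as a flavor of semantic type control, here is a minimal Java sketch in which the semantic dimension (newtons versus pound-force, the Mars Climate Orbiter mix-up) is encoded in distinct types, making the compiler reject the conflict that a bare double would let through. All names here are invented for illustration.

```java
/**
 * Minimal illustration of semantic typing: the compiler rejects
 * mixing quantities whose semantic types differ, and conversions
 * must be explicit. SemTypes/VERITAS use their own notation and a
 * static analyser; this only conveys the underlying idea.
 */
public class SemanticTypes {
    record Newtons(double value) {
        Newtons plus(Newtons other) { return new Newtons(value + other.value); }
    }
    record PoundsForce(double value) {
        Newtons toNewtons() { return new Newtons(value * 4.44822); } // explicit conversion
    }

    public static void main(String[] args) {
        Newtons thrust = new Newtons(100.0);
        PoundsForce reading = new PoundsForce(25.0);
        // thrust.plus(reading);                          // rejected at compile time
        Newtons total = thrust.plus(reading.toNewtons()); // conversion is explicit
        System.out.println(total.value());
    }
}
```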
