21

Code Profiling : Static Code Analysis

Borchert, Thomas January 2008
Capturing the quality of software and detecting sections that warrant further scrutiny are of high interest for industry as well as for education. Project managers request quality reports in order to evaluate the current status and initiate appropriate improvement actions, and teachers need to detect students who require extra attention and help in certain programming aspects. By means of software measurement, software characteristics can be quantified and the resulting measures analyzed to gain an understanding of the underlying software quality. In this study, the technique of code profiling (the activity of creating a summary of distinctive characteristics of software code) was inspected, formalized and conducted on a sample group of 19 industry and 37 student programs. When software projects are analyzed by means of software measurements, a considerable amount of data is produced. The task is to organize the data and draw meaningful information from the measures produced, quickly and without high expense. The results of this study indicated that code profiling can be a useful technique for quick program comparisons and continuous quality observation, with several application scenarios in both industry and education.
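To make the idea concrete, here is a minimal Python sketch of what such a profile could look like; the specific measures shown (line counts, comment ratio, mean line length) are illustrative assumptions, not the metrics used in the thesis:

```python
# A minimal sketch of a code "profile": a summary of distinctive,
# quantifiable characteristics of a source file. The metrics here are
# illustrative assumptions, not the thesis's measures.
from pathlib import Path

def profile_source(path: str) -> dict:
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    code = [l for l in lines if l.strip() and not l.strip().startswith("#")]
    comments = [l for l in lines if l.strip().startswith("#")]
    return {
        "total_lines": len(lines),
        "code_lines": len(code),
        "comment_ratio": len(comments) / max(len(lines), 1),
        "mean_line_length": sum(map(len, code)) / max(len(code), 1),
    }

# Profiles of two programs can then be compared side by side:
# print(profile_source("student_submission.py"))
```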
22

Static Code Analysis: A Systematic Literature Review and an Industrial Survey

Ilyas, Bilal, Elkhalifa, Islam January 2016
Context: Static code analysis is a software verification technique that refers to the process of examining code without executing it in order to capture defects early, avoiding costly fixes later. The lack of realistic empirical evaluations in software engineering has been identified as a major issue limiting the ability of research to impact industry, and in turn preventing feedback from industry that can improve, guide and orient research. Studies have emphasized rigor and relevance as important criteria to assess the quality and realism of research. Rigor defines how adequately a study has been carried out and reported, while relevance defines the potential impact of the study on industry. Despite the importance of static code analysis techniques and their existence for more than three decades, empirical evaluations in this field are few in number and do not take rigor and relevance into consideration. Objectives: The aim of this study is to contribute toward bridging the gap between static code analysis research and industry by improving the ability of research to impact industry and vice versa. This study has two main objectives. First, developing guidelines for researchers, which explore the existing research work in static code analysis to identify the current status, shortcomings, rigor and industrial relevance of the research and the reported benefits/limitations of different static code analysis techniques, and finally give recommendations to researchers to help make future research more industry-oriented. Second, developing guidelines for practitioners, which investigate the adoption of different static code analysis techniques in industry and identify the benefits/limitations of these techniques as perceived by industrial professionals, then cross-analyze the findings of the SLR and the survey to draw final conclusions, and finally give recommendations to professionals to help them decide which techniques to adopt. Methods: A sequential exploratory strategy, characterized by the collection and analysis of qualitative data (systematic literature review) followed by the collection and analysis of quantitative data (survey), has been used to conduct this research. In order to achieve the first objective, a thorough systematic literature review was conducted using the Kitchenham guidelines. To achieve the second objective, a questionnaire-based online survey was conducted, targeting professionals from the software industry, to collect their responses regarding the usage of different static code analysis techniques as well as their benefits and limitations. The quantitative data obtained was subjected to statistical analysis for further interpretation and to draw results from it. Results: In static code analysis research, inspection and static analysis tools received significantly more attention than the other techniques. The benefits and limitations of static code analysis techniques were extracted, and seven recurrent variables were used to report them. The existing research work in the static code analysis field significantly lacks rigor and relevance, and the reasons behind this have been identified. Some recommendations are developed outlining how to improve static code analysis research and make it more industry-oriented. From the industrial point of view, static analysis tools are widely used, followed by informal reviews, while inspections and walkthroughs are rarely used.
The benefits and limitations of different static code analysis techniques, as perceived by industrial professionals, have been identified along with the influential factors. Conclusions: The SLR concluded that techniques with a formal, well-defined process and process elements have received more attention in research; however, this does not necessarily mean that such a technique is better than the others. Experiments have been used widely as a research method in static code analysis research, but the outcome variables in the majority of the experiments are inconsistent. The use of experiments in an academic context contributed nothing to improving relevance, while the inadequate reporting of validity threats and their mitigation strategies contributed significantly to the poor rigor of the research. The benefits and limitations of different static code analysis techniques identified by the SLR could not complement the survey findings, because the rigor and relevance of most of the studies reporting them were weak. The survey concluded that the adoption of static code analysis techniques in industry is influenced more by the software life-cycle models in practice in organizations, while software product type and company size do not have much influence. The amount of attention a static code analysis technique has received in research does not necessarily influence its adoption in industry, which indicates a wide gap between research and industry. However, company size, product type, and software life-cycle model do influence professionals' perception of the benefits and limitations of different static code analysis techniques.
23

Ověřování asercí kódu pomocí zpětné symbolické exekuce / Code Assertions Verification Using Backward Symbolic Execution

Husák, Robert January 2017
In order to prevent, detect and fix errors in software, various tools for programmers are available, some of which are able to reason about the behaviour of the program. In the case of the C# programming language, the main representatives are Microsoft FxCop, Code Contracts and Pex. Those tools can indeed help to build highly reliable software. However, when a company wants to include them in the software development process, there is a significant overhead involved. Therefore, we created a lightweight assertion verification tool called AskTheCode that helps the user focus on one particular problem at a time. Because of its goal-driven approach, we decided to implement it using backward symbolic execution. Although it can currently handle only basic C# statements and data types, the evaluation against the existing tools shows that it has the potential to eventually provide significant added value to the user once developed further.
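As an illustration of the goal-driven technique named here, the following is a minimal Python sketch of backward symbolic execution over a toy straight-line program, using the Z3 SMT solver (pip install z3-solver); the program, variables, and assertion are assumptions for illustration, and this is not AskTheCode's implementation:

```python
# Backward symbolic execution: start from the assertion and walk the
# statements in reverse, substituting each assignment's right-hand side
# for its target, i.e. computing a weakest precondition. The assertion
# can fail iff the negated precondition is satisfiable at entry.
import z3

x, y = z3.Ints("x y")

# Toy program (in order):  y = x + 1;  x = y * 2;  assert x > 0
program = [(y, x + 1), (x, y * 2)]
assertion = x > 0

wp = assertion
for var, rhs in reversed(program):
    wp = z3.substitute(wp, (var, rhs))  # wp ends up as (x + 1) * 2 > 0

solver = z3.Solver()
solver.add(z3.Not(wp))
if solver.check() == z3.sat:
    print("assertion can fail, e.g. with", solver.model())  # x = -1, say
else:
    print("assertion always holds")
```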
24

The VICS Test: Does Operational Code Analysis Falter for The Populist Right?

January 2020
Operational code analysis (OCA) is a common method of content analysis within the foreign policy analysis (FPA) literature, used to determine the "operational code" of state leaders and, by extension, the foreign policy behaviors of their respective states. It has been tried and tested many times before, on many different world leaders from many different time periods, to predict what the foreign policy behavior of a state or organization might be based on the philosophical and instrumental beliefs of its leader about the political universe. This paper, however, asks whether there might be types of politicians for whom OCA, conducted using the automated Verbs In Context System (VICS), has problems delivering accurate results. More specifically, I have theoretical reasons for thinking that populist leaders, who engage in a populist style of communication, confound VICS' analysis, primarily because the simplistic speaking style of populists obscures an underlying context (and by extension meaning) to that leader's words. Because the computer cannot understand this underlying context and takes the meaning of the words said at face value, it fails to code the speeches of populists accurately and thus makes inaccurate predictions about that leader's foreign policy. To test this theory, I conducted the content analysis on speeches made by three individuals, Donald Trump, Boris Johnson, and Narendra Modi, before and after they became the executives of their respective countries, and compared them to a "norming" group representing the average world leader. The results generally support my hypotheses, but with a few caveats. For the cases of Trump and Johnson, VICS found them to be a lot more cooperative than I would expect, but it was also able to track changes in their operational code, as they transition into the role of chief executive, in the expected direction. The opposite was the case for Modi's operational code. All in all, I provide suggestive evidence that OCA using VICS has trouble providing valid results for populist leaders. / Dissertation/Thesis / Summary Statistics and Calculations of Indices / Masters Thesis Political Science 2020
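To make the face-value coding concern concrete, here is a toy Python sketch of the verb-coding idea behind automated systems like VICS; the verb dictionary and the index formula shown are simplified assumptions, not the actual VICS dictionary or its indices:

```python
# Toy verb coding: each transitive verb in a speech is coded as
# cooperative (+1) or conflictual (-1), and a summary index is the
# balance of codings. The dictionary below is a made-up stand-in; real
# VICS coding uses a large dictionary, distinguishes self- from
# other-attributions, and weights words vs. deeds on an intensity scale.
COOPERATIVE = {"support", "help", "agree", "promise"}
CONFLICTUAL = {"attack", "threaten", "oppose", "punish"}

def balance_index(verbs: list[str]) -> float:
    """Balance of cooperative vs. conflictual codings, in [-1, +1]."""
    coded = [+1 if v in COOPERATIVE else -1 for v in verbs
             if v in COOPERATIVE or v in CONFLICTUAL]
    return sum(coded) / len(coded) if coded else 0.0

# Face-value coding is exactly the weakness discussed above: an ironic
# or context-dependent "we will *help* them" still codes as cooperative.
print(balance_index(["support", "attack", "agree", "help"]))  # 0.5
```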
25

Security smells in open-source infrastructure as code scripts : A replication study

Hortlund, Andreas January 2021
With the rising number of servers used in production, virtualization engineers needed a new tool to help them manage the rising configuration workload. Infrastructure as code (IaC) consists mainly of techniques and tools to define the desired configuration states of servers in machine-readable code files, and it aims at solving the high workload induced by configuring many servers. With new tools, new challenges arise regarding the security of the infrastructure as code scripts that take over the configuration work. This study investigates how open-source developers perform when creating IaC scripts, in terms of how many security smells they insert into their scripts in comparison to previous studies, and how developers can mitigate these risks. Security smells are code patterns that indicate vulnerability and can lead to exploitation. Using data gathered from GitHub with a web scraper tool created for this study, the author analyzed 400 repositories from Ansible and Puppet with a second tool created, tested and validated in a previous study. The Security Linter for Infrastructure as Code applies static code analysis to these repositories, testing them against a ruleset of code weaknesses such as default admin and hard-coded password, among others. The present study used both qualitative and quantitative methods to analyze the data. The results show that developers who actively participated in these repositories, with a creation date of 2019-01-01 at the latest, produced fewer security smells than in Rahman et al. (2019b, 2020c), whose data ranged up to November 2018: Ansible produced 9.2 compared to 28.8 security smells per thousand lines of code, and Puppet 13.6 compared to 31.1. The main limitation of the study is that it looked only at the most popular and widely used tools at the time of writing, Ansible and Puppet. Further mitigation of the smells reported in both studies can be achieved through training and education, as well as the use of tools such as SonarQube for static code analysis against custom rulesets before the scripts are pushed to public repositories.
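As a sketch of what such a rule-based security-smell check looks like, the following minimal Python linter scans an Ansible-style YAML file for two of the smells named above; the regexes are simplified assumptions, not the rules of the actual Security Linter for Infrastructure as Code:

```python
# Minimal rule-based security-smell scan over an IaC script. The two
# rules (hard-coded password, default admin user) mirror smells named
# in the abstract; the patterns themselves are simplified assumptions.
import re

RULES = {
    "hard-coded password": re.compile(r"password\s*[:=]\s*['\"]?\S+", re.I),
    "default admin user":  re.compile(r"user\s*[:=]\s*['\"]?(admin|root)\b", re.I),
}

def scan(path: str) -> list[tuple[int, str]]:
    findings = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            for smell, pattern in RULES.items():
                if pattern.search(line):
                    findings.append((lineno, smell))
    return findings

# Example: scan("playbook.yml") might report
# [(12, "hard-coded password"), (30, "default admin user")]
```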
26

Locating SQL Injection Vulnerabilities in Java Byte Code Using Natural Language Techniques

Jackson, Kevin A., Bennett, Brian T. 01 October 2018
With so much of our daily lives relying on digital devices like personal computers and cell phones, there is a growing demand for code that not only functions properly, but is secure and keeps user data safe. However, ensuring this is no easy task, and many developers do not have the required skills or resources to ensure their code is secure. Many code analysis tools have been written to find vulnerabilities in newly developed code, but this technology tends to produce many false positives and is still not able to identify all of the problems. Other methods of finding software vulnerabilities automatically are required. This proof-of-concept study applied natural language processing to Java bytecode to locate SQL injection vulnerabilities in a Java program. Preliminary findings show that, due to the high number of terms in the dataset, single decision trees will not produce a suitable model for locating SQL injection vulnerabilities, while random forest structures proved more promising. Still, further work is needed to determine the best classification tool.
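The pipeline shape implied here can be sketched in a few lines of Python: treat sequences of bytecode instructions as "documents", vectorize their terms, and train a random forest. The toy instruction strings and labels below are assumptions for illustration; the study's actual features and corpus are not reproduced:

```python
# Natural-language-style classification of bytecode: TF-IDF over
# instruction tokens, random forest as the classifier.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

methods = [
    "aload_0 ldc invokevirtual append invokevirtual executeQuery",   # string concat into query
    "aload_0 ldc invokevirtual prepareStatement setString executeQuery",  # parameterized
]
labels = [1, 0]  # 1 = vulnerable, 0 = safe (hypothetical ground truth)

vectorizer = TfidfVectorizer(token_pattern=r"\S+")
X = vectorizer.fit_transform(methods)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)

new = vectorizer.transform(["ldc invokevirtual append executeQuery"])
print(clf.predict(new))  # e.g. [1]: flagged as potentially vulnerable
```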
27

Identifying and documenting false positive patterns generated by static code analysis tools

Reynolds, Zachary P. 18 July 2017
Indiana University-Purdue University Indianapolis (IUPUI) / Static code analysis tools are known to flag a large number of false positives. A false positive is a warning message generated by a static code analysis tool for a location in the source code that does not have any known problems. This thesis presents our approach and results in identifying and documenting false positives generated by static code analysis tools. The goal of our study was to understand the different kinds of false positives generated so we can (1) automatically determine if a warning message from a static code analysis tool truly indicates an error, and (2) reduce the number of false positives developers must triage. We used two open-source tools and one commercial tool in our study. Our approach led to a hierarchy of 14 core false positive patterns, with some patterns appearing in multiple variations. We implemented checkers to identify the code structures of false positive patterns and to eliminate them from the output of the tools. Preliminary results showed that we were able to reduce the number of warnings by 14.0%-99.9% with a precision of 94.2%-100.0% by applying our false positive filters in different cases.
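The checkers described here can be pictured as filters over a tool's warning stream. The following Python sketch shows the shape of such a filter; the single pattern it recognizes (a null-dereference warning on a variable that was null-checked on the preceding line) is a hypothetical example, not one of the thesis's 14 documented patterns:

```python
# A "false positive pattern" checker: suppress warnings whose
# surrounding code matches a structure known to be benign.
import re

def is_known_false_positive(warning_type: str, lines: list[str], lineno: int) -> bool:
    if warning_type == "NULL_DEREFERENCE" and lineno >= 2:
        # The flagged line dereferences a variable...
        m = re.match(r"\s*(\w+)\.", lines[lineno - 1])
        # ...that the line above already guarded against null.
        if m and f"if ({m.group(1)} != null)" in lines[lineno - 2]:
            return True
    return False

def filter_warnings(warnings, lines):
    """Drop (type, lineno) warnings matching known false positive patterns."""
    return [(t, n) for t, n in warnings
            if not is_known_false_positive(t, lines, n)]
```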
28

Using Machine Learning Techniques to Improve Static Code Analysis Tools Usefulness

Alikhashashneh, Enas A. 08 1900
Indiana University-Purdue University Indianapolis (IUPUI) / This dissertation proposes an approach that uses Machine Learning (ML) techniques to reduce, as much as possible, the cost of manually inspecting the large number of false positive warnings reported by Static Code Analysis (SCA) tools. The proposed approach neither assumes the use of particular SCA tools nor depends on the specific programming language of the target source code or application. To reduce the number of false positive warnings, we first evaluated a number of SCA tools in terms of software engineering metrics using a well-known synthetic source code benchmark, the Juliet test suite. From this evaluation, we concluded that the SCA tools report plenty of false positive warnings that need manual inspection. We then generated a number of datasets from source code that forced the SCA tool to generate either true positive, false positive, or false negative warnings. The datasets were then used to train four ML classifiers to classify the warnings collected from the synthetic source code. From the experimental results, we observed that the classifier built using the Random Forests (RF) technique outperformed the rest. Lastly, using this classifier and an instance-based transfer learning technique, we ranked a number of warnings aggregated from various open-source software projects. The experimental results show that the proposed approach to reducing the cost of manually inspecting false positive warnings outperformed the random ranking algorithm and was highly correlated with the ranked list that the optimal ranking algorithm generated.
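The ranking step can be sketched briefly: once a classifier estimates how likely each warning is to be a true positive, warnings are sorted by that probability so the most actionable ones are inspected first. The feature values below are hypothetical; the dissertation's actual feature set and transfer learning step are not reproduced:

```python
# Rank SCA warnings by a random forest's estimated probability of
# being a true positive, so manual triage starts with the best leads.
from sklearn.ensemble import RandomForestClassifier

# Training data: one feature row per warning, label 1 = true positive.
X_train = [[3, 120, 0], [1, 15, 1], [7, 300, 0], [2, 40, 1]]
y_train = [1, 0, 1, 0]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

new_warnings = ["W101", "W102", "W103"]
X_new = [[5, 200, 0], [1, 10, 1], [4, 90, 0]]

scores = clf.predict_proba(X_new)[:, 1]  # P(true positive) per warning
ranked = sorted(zip(new_warnings, scores), key=lambda p: -p[1])
print(ranked)  # most likely true positives first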
29

MLpylint: Automating the Identification of Machine Learning-Specific Code Smells

Hamfelt, Peter January 2023
Background. Machine learning (ML) has rapidly grown in popularity, becoming a vital part of many industries. This swift expansion has brought about new challenges to technical debt, maintainability and the general software quality of ML systems. With ML applications becoming more prevalent, there is an emerging need for extensive research to keep up with the pace of developments. Currently, research on code smells in ML applications is limited, and there is a lack of tools and studies that address these issues in depth. This gap highlights the necessity for a focused investigation into the validity of ML-specific code smells in ML applications, setting the stage for this research study. Objectives. Addressing the limited research on ML-specific code smells within Python-based ML applications. To achieve this, the study begins with the identification of these ML-specific code smells. Once recognized, the next objective is to choose suitable methods and tools to design and develop a static code analysis tool based on code smell criteria. After development, an empirical evaluation assesses both the tool's efficacy and performance. Additionally, feedback from industry professionals is sought to measure the tool's feasibility and usefulness. Methods. This research employed Design Science Methodology. In the problem identification phase, a literature review was conducted to identify ML-specific code smells. In solution design, a secondary literature review and consultations with experts were performed to select methods and tools for implementing the tool. Additionally, 160 open-source ML applications were sourced from GitHub. The tool was empirically tested against these applications, with a focus on assessing its performance and efficacy. Furthermore, using the static validation method, feedback on the tool's usefulness was gathered through an expert survey involving 15 ML professionals from Ericsson. Results. The study introduced MLpylint, a tool designed to identify 20 ML-specific code smells in Python-based ML applications. MLpylint effectively analyzed 160 ML applications within 36 minutes, identifying 5380 code smells in total, while also highlighting the need for further refinement of each code smell checker to accurately identify specific patterns. In the expert survey, 15 ML professionals from Ericsson acknowledged the tool's usefulness, user-friendliness and efficiency. However, they also indicated room for improvement in fine-tuning the tool to avoid ambiguous smells. Conclusions. Current studies on ML-specific code smells are limited, and few tools address them. The development and evaluation of MLpylint is a significant advancement in the ML software quality domain, enhancing reliability and reducing the associated technical debt of ML applications. As the industry integrates such tools, it is vital that they evolve to detect code smells from new ML libraries, aiding developers in upholding superior software quality and promoting further research in the ML software quality domain.
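A pylint-style checker for an ML-specific smell can be sketched with Python's ast module. The smell shown below (calling scikit-learn's train_test_split without fixing random_state, which makes data splits non-reproducible) is an illustrative assumption; MLpylint's actual 20 smells and checkers are not reproduced here:

```python
# AST-based static check for one plausible ML code smell:
# an unseeded train_test_split call.
import ast

def find_unseeded_splits(source: str) -> list[int]:
    """Return line numbers of train_test_split calls lacking random_state."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "train_test_split"
                and not any(kw.arg == "random_state" for kw in node.keywords)):
            findings.append(node.lineno)
    return findings

code = "X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2)\n"
print(find_unseeded_splits(code))  # [1]
```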
30

On the notions and predictability of Technical Debt

Dalal, Varun January 2023
Technical debt (TD) is a by-product of short-term optimisation that results in long-term disadvantages. Because every system gets more complicated while it is evolving, technical debt can emerge naturally. Technical debt has a great impact on the financial cost of development, management, and deployment, and also on the time needed to maintain the project. As technical debt affects all parts of a development cycle for any project, it is believed to be a major aspect of measuring the long-term quality of a software project. It is still not clear which aspects of a project impact and build upon the existing measure of technical debt. Hence, the ultimate task of this experiment is to estimate the generalisation error in predicting technical debt using software metrics and an adaptive learning methodology, since software metrics are considered to be absolute regardless of how they are estimated. The software metrics were compiled from an established data set, the Qualitas.class Corpus, and the notions of technical debt were collected from three different static analysis tools: SonarQube, Codiga, and CodeClimate. The adaptive learning methodology uses multiple parameters and multiple machine learning models to remove any form of bias in the estimation. The outcome suggests that it is not feasible, for now, to predict technical debt for small-sized projects using software-level metrics, but for bigger projects it can be a good idea to have a rough estimate in terms of the number of hours needed to maintain the project.
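The "multiple models, estimate generalisation error" idea can be sketched as a cross-validated comparison of regressors predicting a technical debt figure (e.g. remediation hours) from software metrics. The metric names and values below are hypothetical; the thesis's actual features, tools, and tuning are not reproduced:

```python
# Compare cross-validated error of several regressors predicting a
# technical debt estimate from project-level software metrics.
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Rows: projects; columns: e.g. lines of code, classes, mean complexity.
X = [[12000, 80, 3.1], [500, 9, 1.8], [90000, 610, 4.5],
     [3000, 25, 2.2], [45000, 300, 3.9], [700, 12, 2.0]]
y = [310, 12, 2600, 70, 1150, 18]  # hypothetical remediation hours

for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    # Negative MAE under 3-fold CV: an estimate of out-of-sample error.
    scores = cross_val_score(model, X, y, cv=3,
                             scoring="neg_mean_absolute_error")
    print(type(model).__name__, -scores.mean())
```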
