461

Automated Software Defect Localization

Ye, Xin 23 September 2016
No description available.
462

Optimum Part Build Orientation in Additive Manufacturing for Minimizing Part Errors and Build Time

Das, Paramita 12 September 2016
No description available.
463

Solving Stochastic Differential Equations Using General Purpose Graphics Processing Unit

Neiman, Lev Alexandrovich 18 April 2012
No description available.
464

Building a Modular Analytics Platform for 3D Home Design

Öhman, Jesper January 2022
With society undergoing a large-scale technological revolution, companies often find themselves having to adapt by developing digitized products and applications. However, these applications typically produce large amounts of data. An additional problem that companies in the 3D Home Design industry face is having to provide a massive number of options for their products, which, without proper usage metrics, can lead to large amounts of wasted resources. This thesis aims to combat these problems by designing a prototype of a modular analytics platform, consisting of a data parser, a back-end server and API, and a web interface. The system can display highly customizable visual graphs of user statistics, as well as breakdowns of how often each product is picked by the client's users. The system is built on a MERN stack, consisting of MongoDB, Express.js, React.js and Node.js, and is written purely in JavaScript. The thesis achieved moderate success, implementing a modular analytics platform and providing tools that correctly identify obsolete products, which in turn could reduce the resources wasted by companies that adopt the solution.
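The obsolete-product breakdown reduces to a simple aggregation over pick events. As a language-agnostic illustration only (the thesis system is written purely in JavaScript, and the event fields, product names, and threshold below are all hypothetical), a minimal Python sketch of that aggregation might look like:

```python
from collections import Counter

# Hypothetical event records, as the data parser might emit them: each
# event names the product a user picked in the 3D home-design tool.
events = [
    {"user": "u1", "product": "oak-door-120"},
    {"user": "u2", "product": "oak-door-120"},
    {"user": "u3", "product": "pine-window-60"},
]

def obsolete_products(events, catalog, min_picks=1):
    """Return catalog entries picked fewer than `min_picks` times."""
    picks = Counter(e["product"] for e in events)
    return [p for p in catalog if picks[p] < min_picks]

catalog = ["oak-door-120", "pine-window-60", "teak-door-90"]
print(obsolete_products(events, catalog))  # ['teak-door-90']
```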
465

Code Clone Detection for Equivalence Assurance

Ersson, Sara January 2020
To support multiple programming languages, the concept of offering application programming interfaces (APIs) in multiple programming languages has become commonplace. However, this also brings the challenge of ensuring that the APIs are equivalent regarding their interface. To achieve this, code clone detection techniques were adapted to match similar function declarations in the APIs. Firstly, existing code clone detection tools were investigated. As they did not perform well, a tree-based syntactic approach was used, where all header files were compiled with Clang. The abstract syntax trees, which were obtained during the compilation, were then traversed to locate the function declaration nodes, and to store function names and parameter variable names. When matching the function names, a textual approach was used, transforming the function names according to a set of implemented rules. A strict rule compares transformations of full function names in a precise way, whereas a loose rule only compares transformations of parts of function names, and matches anything for the remainder. The rules were applied both by themselves and in different combinations, starting with the strictest rule, followed by the second strictest rule, and so forth. The best-matching rules proved to be the ones that are strict and are not affected by the order of the functions in which they are matched. These rules also proved very robust to API evolution, meaning an increase in the number of public functions. Rules that are less strict and stable, and not robust to API evolution, can still be used, such as matching functions on the first or last word in the function names, but preferably as a complement to the stricter and more stable rules, once most of the functions have already been matched. The tool has been evaluated on the two APIs in King's software development kit, and covered 94% of the 124 available function matches.
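To make the strict/loose rule distinction concrete, here is a minimal sketch assuming one plausible name transformation (splitting camel case and underscores, then lowercasing); the thesis defines its own rule set, so the normalization and the example function names below are assumptions:

```python
import re

def normalize(name):
    # Assumed transformation: split on camel-case boundaries and
    # underscores, compare case-insensitively.
    words = re.findall(r"[A-Za-z][a-z0-9]*", name)
    return [w.lower() for w in words]

def strict_match(a, b):
    """Strict rule: the full transformed names must agree exactly."""
    return normalize(a) == normalize(b)

def loose_first_word(a, b):
    """Loose rule: only the first word must agree; the remainder
    matches anything."""
    na, nb = normalize(a), normalize(b)
    return bool(na) and bool(nb) and na[0] == nb[0]

print(strict_match("getPlayerScore", "get_player_score"))  # True
print(loose_first_word("getScore", "getLevel"))  # True: both start with 'get'
```

Applying the strict rule first and falling back to looser rules for the leftovers mirrors the cascading order described above.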
466

An Exact Assessment of the Two-Stage EPI Sampling Method

Bharaj, Atinder 07 1900
The Expanded Program on Immunization sampling method (known simply as EPI sampling) is a two-stage sampling procedure originally intended for quick estimation of disease prevalence in large geographical regions. The method was developed in the 1970s, and all subsequent assessments of its performance have been conducted by simulation. In her master's thesis, Reyes (2016) studied the second-stage sampling of the method in detail by developing formulas for the exact calculation of the household inclusion probabilities when sectors are used to identify the initial household that generates the EPI samples. The inclusion probabilities were used in turn to obtain the exact mean, bias, variance and mean square error of any estimator of disease prevalence in the population. Thus, no extensive simulations are required, and the results are exact rather than just estimates. This thesis is an extension of Reyes' (2016) work. The extension is two-fold: (a) employing strips rather than sectors to select the first household for the EPI sample at the secondary stage, because strips narrow the geographic area for field workers; and (b) carrying out an analysis on simulated populations and sampling plans, using both stages of the EPI method. Analyzing the simulated populations showed that the equal-weight estimator that samples primary units with replacement with probability proportional to size (EW1) should be used when the target characteristic is thought to be spread randomly throughout the population, and that the Horvitz-Thompson estimator that samples primary units systematically with replacement (HTSYS) should be used when the disease is believed to spread from a central location or through pocketing. Comparing the strip and sector sampling methods at the secondary stage using their effective areas leads to a comparative basis in which the inclusion probabilities are identical for both methods. / Thesis / Master of Science (MSc)
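The thesis computes household inclusion probabilities exactly; a Monte Carlo sketch of a strip-based second stage conveys the quantity being computed. Everything below is an assumed toy setup rather than the thesis's model: the unit-square village, the strip width, and the rule that the initial household is the first one inside the strip walking outward from the centre.

```python
import math, random

random.seed(1)

# Toy village: households scattered around a central point (assumed geometry).
households = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]

def initial_household(theta, width=0.1):
    """Strip second stage: walk outward from the centre along direction
    `theta` and return the first household whose perpendicular distance
    to the ray lies within the strip half-width."""
    dx, dy = math.cos(theta), math.sin(theta)
    best, best_t = None, float("inf")
    for i, (x, y) in enumerate(households):
        t = x * dx + y * dy            # distance along the ray
        perp = abs(-x * dy + y * dx)   # perpendicular distance to the ray
        if t >= 0 and perp <= width and t < best_t:
            best, best_t = i, t
    return best

# Monte Carlo estimate of each household's initial-selection probability;
# the exact formulas in the thesis replace this simulation entirely.
trials, hits = 100_000, [0] * len(households)
for _ in range(trials):
    i = initial_household(random.uniform(0, 2 * math.pi))
    if i is not None:
        hits[i] += 1
probs = [h / trials for h in hits]
print(max(probs), min(probs))  # unequal chances are what exact weights correct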
467

Evaluation of Unsupervised Anomaly Detection in Structured API Logs : A Comparative Evaluation with a Focus on API Endpoints

Hult, Gabriel January 2024
With large quantities of API logs being stored, it becomes difficult to manually inspect them and determine whether requests are benign or anomalous, the latter indicating incorrect access to an application or perhaps actions with malicious intent. Today, companies can rely on third-party penetration testers who occasionally attempt various techniques to find vulnerabilities in software applications. However, to be a self-sustainable company, implementing a system capable of detecting abnormal, potentially malicious traffic would be beneficial. By doing so, attacks can be proactively prevented, mitigating risks faster than waiting for third parties to detect these issues. A potential solution is machine learning, specifically anomaly detection, which detects patterns that do not conform to normal standards. This thesis covers the process of going from structured log data to anomalies detected in that data. Various unsupervised anomaly detection models were evaluated on their capability to detect anomalies in API logs: K-means, Gaussian Mixture Model, Isolation Forest and One-Class Support Vector Machine. The evaluation shows that the Gaussian Mixture Model was the best baseline model, reaching a precision of 63% and a recall of 72%, resulting in an F1-score of 0.67, an AUC score of 0.76 and an accuracy of 0.71. After tuning, Isolation Forest performed best, with a precision of 67% and a recall of 80%, resulting in an F1-score of 0.73, an AUC score of 0.83 and an accuracy of 0.75. The pros and cons of each model are presented and discussed, along with insights related to anomaly detection and its applicability in API log analysis and API security.
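A minimal sketch of the four model families named above, using scikit-learn on synthetic two-dimensional stand-ins for featurized log rows; the feature engineering, thresholds, and tuning from the thesis are not reproduced, so the contamination rate and the scoring rules for the clustering models below are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Synthetic stand-in for featurized API-log rows (e.g. request size,
# latency): one benign cluster plus a few outliers.
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(6, 1, (10, 2))])

# Isolation Forest and One-Class SVM flag anomalies directly (label -1).
for model in (IsolationForest(contamination=0.02, random_state=0),
              OneClassSVM(nu=0.02)):
    labels = model.fit_predict(X)
    print(type(model).__name__, (labels == -1).sum(), "flagged")

# K-means and GMM need a scoring rule: here, the 2% of points with the
# lowest likelihood (or farthest from their centroid) count as anomalous.
gmm = GaussianMixture(n_components=1, random_state=0).fit(X)
scores = gmm.score_samples(X)
print("GaussianMixture", (scores < np.quantile(scores, 0.02)).sum(), "flagged")

km = KMeans(n_clusters=1, n_init=10, random_state=0).fit(X)
dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
print("KMeans", (dists > np.quantile(dists, 0.98)).sum(), "flagged")
```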
468

Visualizing Memory Utilization for the Purpose of Vulnerability Analysis

McConnell, William Charles 02 July 2008
The expansion of the internet over recent years has resulted in an increase in digital attacks on computers. Most attacks, including the more dangerous ones, directly target program vulnerabilities. The increase in attacks has prompted a need to develop new ways to classify, detect, and avoid vulnerabilities. The effectiveness of these goals relies on the development of new methods and tools that facilitate the process of detecting vulnerabilities and exploits. This thesis presents the development of a tool that provides a visual representation of main memory for the purpose of security analysis. The tool provides new insight into memory utilization by software: users are able to see memory utilization as execution time progresses, visually distinguish between memory behaviors (allocations, writes, etc.), and visually observe spatial relationships between memory locations. This insight enables users to search for visual evidence that software is vulnerable, has been violated, or is utilizing memory incorrectly. The development process for our visual tool has three stages: (1) identifying the memory utilization policies of the Windows 32-bit operating system; (2) identifying the data required for visual representations of memory, and then implementing one possible method to capture the data; and (3) enumerating and implementing requirements for a memory tool that generates visual representations of memory for the purpose of vulnerability and exploit analysis. / Master of Science
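To illustrate the kind of view such a tool produces (not its actual Windows instrumentation), here is a hedged sketch that renders a synthetic event trace as an address-versus-time scatter plot; the trace generator, event kinds, and address ranges are all invented for the example:

```python
import random
import matplotlib.pyplot as plt

random.seed(0)

# Synthetic memory trace: (time, address, kind) tuples standing in for
# the allocation/write events the thesis captures from a real process.
trace = []
heap = 0x1000
for t in range(300):
    if random.random() < 0.1:
        heap += 0x40                      # simulated allocation grows the heap
        trace.append((t, heap, "alloc"))
    trace.append((t, random.randrange(0x1000, heap + 0x40, 4), "write"))

# Plot each behavior in its own color so allocations and writes are
# visually distinguishable, as in the thesis's tool.
for kind, color in (("alloc", "red"), ("write", "blue")):
    pts = [(t, a) for t, a, k in trace if k == kind]
    plt.scatter([p[0] for p in pts], [p[1] for p in pts], s=4, c=color, label=kind)
plt.xlabel("time (event index)")
plt.ylabel("address")
plt.legend()
plt.show()
```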
469

From Theory to Practice: Deployment-grade Tools and Methodologies for Software Security

Rahaman, Sazzadur 25 August 2020
Following proper guidelines and recommendations is crucial in software security, but doing so is often obstructed by accidental human error. Automatic screening tools have great potential to reduce the gap between theory and practice. However, the goal of scalable automated code screening is largely hindered by the practical difficulty of reducing false positives without compromising analysis quality. To enable compile-time security checking of cryptographic vulnerabilities, I developed highly precise static analysis tools (CryptoGuard and TaintCrypt) that developers can use routinely. The main technical enabler for CryptoGuard is a set of detection algorithms that refine program slices by leveraging language-specific insights, whereas TaintCrypt relies on symbolic execution-based path-sensitive analysis to reduce false positives. Both CryptoGuard and TaintCrypt uncovered numerous vulnerabilities in real-world software, which demonstrates their effectiveness. Oracle has implemented our cryptographic code screening algorithms for Java in its internal code analysis platform, Parfait, and detected numerous vulnerabilities that were previously unknown. I also designed a specification language named SpanL to easily express rules for automated code screening. SpanL enables domain experts to create domain-specific security checking. Unfortunately, tools and guidelines are not sufficient to ensure baseline security in internet-wide ecosystems. I found that the lack of proper compliance checking has induced a huge gap in the payment card industry (PCI) ecosystem. I showed that none of the six PCI scanners we tested are fully compliant with the guidelines, issuing certificates to merchants that still have major vulnerabilities. Consequently, 86% of the 1,203 e-commerce websites we tested are non-compliant. To improve the testbeds in light of our work, the PCI Security Council shared a copy of our PCI measurement paper with the companies that host, manage, and maintain the PCI certification testbeds. / Doctor of Philosophy
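The cryptographic misuses such tools flag can be pictured with a toy screening pass. The sketch below is not CryptoGuard's slice-refinement algorithm; it is a regex-level illustration over Java source, and the rule set is an assumption covering a few classic misuses (DES, ECB mode, predictable seeds):

```python
import re

# Toy screening pass over Java source: flags a few constant patterns of
# the kind crypto-misuse detectors target. Real tools like CryptoGuard
# refine program slices interprocedurally; this is only illustrative.
RULES = [
    (re.compile(r'Cipher\.getInstance\(\s*"DES'), "broken cipher: DES"),
    (re.compile(r'Cipher\.getInstance\(\s*"[^"]*ECB'), "insecure mode: ECB"),
    (re.compile(r'new\s+SecureRandom\(\s*".*"\.getBytes'), "predictable seed"),
]

def screen(java_source):
    findings = []
    for lineno, line in enumerate(java_source.splitlines(), 1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

sample = 'Cipher c = Cipher.getInstance("DES/ECB/PKCS5Padding");'
print(screen(sample))  # [(1, 'broken cipher: DES'), (1, 'insecure mode: ECB')]
```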
470

Secure Coding Practice in Java: Automatic Detection, Repair, and Vulnerability Demonstration

Zhang, Ying 12 October 2023
The Java platform and third-party open-source libraries provide various Application Programming Interfaces (APIs) to facilitate secure coding. However, using these APIs securely is challenging for developers who lack cybersecurity training. Prior studies show that many developers use APIs insecurely, thereby introducing vulnerabilities in their software. Despite the availability of various tools designed to identify insecure API usage, their effectiveness in helping developers with secure coding practices remains unclear. This dissertation focuses on two main objectives: (1) exploring the strengths and weaknesses of the existing automated detection tools for API-related vulnerabilities, and (2) creating better tools that detect, repair, and demonstrate these vulnerabilities. Our research started with investigating the effectiveness of current tools in helping with developers' secure coding practices. We systematically explored the strengths and weaknesses of existing automated tools for detecting API-related vulnerabilities. Through comprehensive analysis, we observed that most existing tools merely report misuses, without suggesting any customized fixes. Moreover, developers often rejected tool-generated vulnerability reports due to their concerns about the correctness of detection and the exploitability of the reported issues. To address these limitations, the second work proposed SEADER, an example-based approach to detect and repair security-API misuses. Given an exemplar ⟨insecure, secure⟩ code pair, SEADER compares the snippets to infer any API-misuse template and the corresponding fixing edit. Based on the inferred information, given a program, SEADER performs inter-procedural static analysis to search for security-API misuses and to propose customized fixes. The third work leverages ChatGPT-4.0 to automatically generate security test cases. These test cases can demonstrate how vulnerable API usage facilitates supply chain attacks on specific software applications. By running such test cases during software development and maintenance, developers can gain more relevant information about exposed vulnerabilities, and may better create secure-by-design and secure-by-default software. / Doctor of Philosophy / The Java platform and third-party open-source libraries provide various Application Programming Interfaces (APIs) to facilitate secure coding. However, using these APIs securely can be challenging, especially for developers who aren't trained in cybersecurity. Prior work shows that many developers use APIs insecurely, consequently introducing vulnerabilities in their software. Despite the availability of various tools designed to identify insecure API usage, it is still unclear how well they help developers with secure coding practices. This dissertation focuses on (1) exploring the strengths and weaknesses of the existing automated detection tools for API-related vulnerabilities, and (2) creating better tools that detect, repair, and demonstrate these vulnerabilities. We first systematically evaluated the strengths and weaknesses of the existing automated API-related vulnerability detection tools. We observed that most existing tools merely report misuses, without suggesting any customized fixes. Additionally, developers often reject tool-generated vulnerability reports due to their concerns about the correctness of detection, and whether the reported vulnerabilities are truly exploitable. To address the limitations found in our study, the second work proposed a novel example-based approach, SEADER, to detect and repair insecure API usage. The third work leverages ChatGPT-4.0 to automatically generate security test cases, and to demonstrate how vulnerable API usage facilitates supply chain attacks on given software applications.
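SEADER's example-based idea, drastically simplified, is to derive a find/replace template from one ⟨insecure, secure⟩ pair and apply it elsewhere. The string-level sketch below is only an illustration under that simplification (SEADER itself infers AST-level templates and runs inter-procedural static analysis), and the cipher strings are illustrative:

```python
# Toy example-based repair: derive a minimal find/replace template from
# one <insecure, secure> pair, then apply it to new code. An assumed
# simplification: the pair differs in a single contiguous span.
insecure_example = 'Cipher.getInstance("DES")'
secure_example = 'Cipher.getInstance("AES/GCM/NoPadding")'

def infer_template(insecure, secure):
    """Trim the common prefix and suffix to isolate the differing span."""
    i = 0
    while i < min(len(insecure), len(secure)) and insecure[i] == secure[i]:
        i += 1
    j = 0
    while (j < min(len(insecure), len(secure)) - i
           and insecure[len(insecure) - 1 - j] == secure[len(secure) - 1 - j]):
        j += 1
    return insecure[i:len(insecure) - j], secure[i:len(secure) - j]

def repair(code, template):
    # Plain substitution; a real tool would match against the program's
    # AST rather than raw text to avoid spurious replacements.
    find, replace = template
    return code.replace(find, replace)

template = infer_template(insecure_example, secure_example)
print(template)  # ('DES', 'AES/GCM/NoPadding')
buggy = 'Cipher c = Cipher.getInstance("DES");'
print(repair(buggy, template))  # Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
```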
