51

An Exploratory Study of the Remixing Practices in the Scratch Programming Community: Trends, Causalities, and Influences

Khawas, Prapti Prakash 11 June 2019 (has links)
One of the greatest achievements of Scratch as an educational tool is the eager willingness of programmers to use existing projects as the starting point for their own projects, a practice known as remixing. Despite the importance of remixing as a foundation of collaborative and communal learning, the practice remains poorly understood. Without a clear picture of how and why Scratch programmers remix existing projects, this programming community would remain in the dark about which programming practices encourage and facilitate remixing, and the designers of block-based programming environments would lack feedback on how the remixing facility is used in the wild. To gain deeper insight into remixing, this thesis presents the results of a comprehensive study of this practice in Scratch that investigates the following heretofore unexplored dimensions of remixing: (1) the prevailing modifications that remixes perform on existing projects, (2) the impact of the original project's code quality on the granularity, extent, and development time of the modifications in the remixes, and (3) the propensity of the dominant programming practices in the original project to remain so in the remixes. Our findings can be used to promote those programming practices in the Scratch community that encourage remixing while also improving this practice's effectiveness, thus benefiting the educational and end-user programming communities. / Master of Science / The Scratch programming language has become an important tool in introductory CS education. A visual, block-based language, Scratch is web-based and features an enormous online programming community through which projects are eagerly shared. One of the unique learning provisions of Scratch is the ability to easily start a project by modifying someone else's project, a practice referred to as remixing. Despite the central role that remixing plays in enabling the communal and collaborative learning styles of the Scratch community, the practice remains inadequately understood. This knowledge gap leaves the Scratch community in the dark about which programming practices encourage and facilitate remixing, and deprives Scratch environment designers of actionable feedback on how the remixing facility is used in the wild. To address this problem, this thesis reports the results of an exploratory study of remixing in Scratch that investigates three heretofore unexplored dimensions of this practice. First, we study general remixing trends in terms of how remixes modify the original projects. Second, we infer the impact of a project's code quality on the modifications in its remixes and on their development time. Finally, we investigate whether programmers adopt the techniques and practices of the remixed projects. Computing educators can apply our findings to enhance the educational effectiveness of Scratch by encouraging the practice of remixing and increasing its extent.
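
To make the notion of remix modifications concrete, here is a minimal sketch of one way the extent of a remix's changes could be quantified, assuming projects are reduced to flat lists of block opcodes. This representation is an illustrative assumption, not the thesis's actual analysis pipeline.

    from collections import Counter

    def modification_extent(original_blocks, remix_blocks):
        # Compare per-opcode block counts between an original project
        # and its remix; Counter subtraction keeps positive counts only.
        orig = Counter(original_blocks)
        remix = Counter(remix_blocks)
        added = sum((remix - orig).values())    # blocks present only in the remix
        removed = sum((orig - remix).values())  # blocks dropped from the original
        return added, removed

    # Hypothetical projects: the remix adds two motion blocks, drops a sound block.
    original = ["motion_movesteps", "looks_say", "sound_play"]
    remix = ["motion_movesteps", "motion_turnright", "motion_turnright", "looks_say"]
    print(modification_extent(original, remix))  # (2, 1)
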
52

Anomaly Detection Through System and Program Behavior Modeling

Xu, Kui 15 December 2014 (has links)
Vulnerabilities in software applications are easy targets for attackers, and a constant trend in the evolution of modern exploits is their growing sophistication and stealth. Code-reuse attacks such as return-oriented programming allow intruders to execute malicious instruction sequences on a victim machine without injecting external code. Successful exploitation leads to hijacked applications or the download of malicious software (a drive-by download attack), which usually happens without the user's notice or permission. In this dissertation, we address the problem of host-based system anomaly detection, specifically by predicting the expected behaviors of programs and detecting run-time deviations and anomalies. We first introduce an approach for detecting the drive-by download attack, one of the major vectors for malware infection. Our tool enforces the dependencies between user actions and system events, such as file-system access and process execution. It can be used to provide real-time protection of a personal computer, as well as to diagnose and evaluate untrusted websites for forensic purposes. We perform an extensive experimental evaluation, including a user study with 21 participants, thousands of legitimate websites (for testing false alarms), 84 malicious websites in the wild, and lab-reproduced exploits. Our solution demonstrates a usable host-based framework for controlling and enforcing access to system resources. Second, we present a new anomaly-based detection technique that probabilistically models and learns a program's control flows for high-precision behavioral reasoning and monitoring. Existing solutions suffer from either incomplete behavioral modeling (dynamic models) or overestimating the likelihood of call occurrences (static models). We introduce a new probabilistic anomaly detection method for modeling program behaviors; its uniqueness is the ability to quantify the static control flow in programs and to integrate that control-flow information into probabilistic machine learning algorithms. The advantage of our technique is significantly improved detection accuracy: we observed 11- to 28-fold improvements in detection accuracy compared to state-of-the-art HMM-based anomaly models. We further integrate context information into our detection model, achieving both strong flow-sensitivity and context-sensitivity. Our context-sensitive approach gives, on average, more than a 10-fold improvement for system-call monitoring, and three orders of magnitude for library-call monitoring, over existing regular HMM methods. Evaluated with a large number of program traces and real-world exploits, our findings confirm that the probabilistic modeling of program dependences provides a significant source of behavior information for building high-precision models for real-time system monitoring. Abnormal traces (obtained by reproducing exploits and by synthesizing abnormal traces) are well distinguished from normal traces by our model. / Ph. D.
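
Since the dissertation benchmarks against HMM-based anomaly models, a minimal sketch of that baseline may help: score an integer-encoded system-call trace by its likelihood under a hidden Markov model (via the scaled forward algorithm) and flag traces whose likelihood falls below a threshold learned from normal runs. All model parameters below are illustrative toys, not values from the dissertation.

    import numpy as np

    def trace_log_likelihood(obs, start_p, trans_p, emit_p):
        # Scaled forward algorithm: log-likelihood of an integer-encoded
        # call sequence `obs` under an HMM (start, transition, emission).
        alpha = start_p * emit_p[:, obs[0]]
        scale = alpha.sum()
        log_like = np.log(scale)
        alpha = alpha / scale
        for o in obs[1:]:
            alpha = (alpha @ trans_p) * emit_p[:, o]
            scale = alpha.sum()
            log_like += np.log(scale)
            alpha = alpha / scale
        return log_like

    # Toy 2-state model over 3 call types (all numbers illustrative).
    start = np.array([0.6, 0.4])
    trans = np.array([[0.7, 0.3],
                      [0.2, 0.8]])
    emit  = np.array([[0.50, 0.40, 0.10],
                      [0.10, 0.85, 0.05]])

    normal_trace   = [0, 1, 0, 1, 0]   # resembles trained behavior
    abnormal_trace = [2, 2, 2, 2, 2]   # a run of rare calls
    for t in (normal_trace, abnormal_trace):
        print(trace_log_likelihood(t, start, trans, emit))
    # A trace is flagged anomalous when its (length-normalized)
    # log-likelihood falls below a threshold learned from normal runs;
    # here the rare-call trace scores far lower.
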
53

User-Intention Based Program Analysis for Android Security

Elish, Karim Omar Mahmoud 29 July 2015 (has links)
The number of mobile applications (i.e., apps) is growing rapidly as mobile computing becomes an integral part of the modern user experience. Malicious apps have infiltrated open marketplaces for mobile platforms; these apps can exfiltrate users' private data, abuse system resources, or disrupt regular services. Despite recent advances in mobile security, detecting vulnerable and malicious mobile apps with high accuracy remains an open problem. In this thesis, we address the problem of Android security by presenting a new quantitative program analysis framework for security vetting of Android apps. We first introduce a highly accurate proactive detection solution for detecting individual malicious apps. Our approach enforces a benign property instead of chasing malware signatures, and uses one complex feature rather than the many features used in existing malware detection methods. In particular, we statically extract a data-flow feature capturing how user inputs trigger sensitive critical operations, a property referred to as user-trigger dependence. This feature is extracted through nontrivial Android-specific static program analysis and can be used in various quantitative analytical methods. Our evaluation on thousands of malicious apps and free popular apps gives detection accuracy (2% false-negative and false-positive rates) that is better than, or at least competitive with, the state of the art. Furthermore, our method discovers new malicious apps in the Google Play store that had not been previously detected by anti-virus scanning tools. Second, we present a new app-collusion detection approach and algorithms to analyze pairs or groups of communicating apps. App collusion is a technique attackers use to evade standard detection: a threat in which two or more apps, each appearing benign, communicate to perform a malicious task. Most existing solutions assume the attack model of a stand-alone malicious app and hence cannot detect app collusion. We first present experimental evidence of the technical challenges associated with detecting app collusion. Then, we address these challenges by introducing a scalable, in-depth cross-app static flow analysis that identifies the risk level associated with communicating apps. Our approach statically analyzes the sensitivity and context of each inter-app communication with low analysis complexity, and defines fine-grained security policies for detecting risky inter-app communication. Our evaluation on thousands of free popular apps indicates that our technique is effective: it generates four times fewer false positives than the state-of-the-art collusion-detection solution, enhancing detection capability. The advantages of our inter-app communication analysis are its scalability and low complexity, and its substantially improved detection accuracy compared to the state-of-the-art solution. Such proactive defense solutions allow defenders to stay ahead of constantly evolving malware threats. / Ph. D.
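
A minimal sketch of the user-trigger dependence idea: check whether each sensitive API call is reachable from a user-interface event handler in a call graph. The graph and method names below are hypothetical, and plain call-graph reachability is only a coarse stand-in for the thesis's nontrivial Android-specific static data-flow analysis.

    from collections import deque

    def triggered_by_user(call_graph, ui_handlers, sensitive_apis):
        # BFS over the call graph from UI event handlers; returns the
        # sensitive APIs that some user action can reach.
        seen = set(ui_handlers)
        queue = deque(ui_handlers)
        while queue:
            method = queue.popleft()
            for callee in call_graph.get(method, ()):
                if callee not in seen:
                    seen.add(callee)
                    queue.append(callee)
        return seen & sensitive_apis

    # Toy app: sendTextMessage is called from a background timer, not a
    # click handler -- the kind of user-unintended behavior flagged as risky.
    graph = {
        "onClick":      ["showDialog"],
        "onTimerFired": ["sendTextMessage"],   # no user trigger
        "showDialog":   [],
    }
    risky = {"sendTextMessage"} - triggered_by_user(graph, {"onClick"}, {"sendTextMessage"})
    print(risky)  # {'sendTextMessage'}: a sensitive call with no user trigger
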
54

From Theory to Practice: Deployment-grade Tools and Methodologies for Software Security

Rahaman, Sazzadur 25 August 2020 (has links)
Following proper guidelines and recommendations is crucial in software security, yet doing so is often obstructed by accidental human error. Automatic screening tools have great potential to reduce the gap between theory and practice. However, the goal of scalable automated code screening is largely hindered by the practical difficulty of reducing false positives without compromising analysis quality. To enable compile-time security checking of cryptographic vulnerabilities, I developed highly precise static analysis tools (CryptoGuard and TaintCrypt) that developers can use routinely. The main technical enabler for CryptoGuard is a set of detection algorithms that refine program slices by leveraging language-specific insights, while TaintCrypt relies on symbolic-execution-based path-sensitive analysis to reduce false positives. Both CryptoGuard and TaintCrypt uncovered numerous vulnerabilities in real-world software, demonstrating their effectiveness. Oracle has implemented our cryptographic code screening algorithms for Java in its internal code analysis platform, Parfait, and detected numerous previously unknown vulnerabilities. I also designed a specification language named SpanL to express rules for automated code screening easily. SpanL enables domain experts to create domain-specific security checks. Unfortunately, tools and guidelines are not sufficient to ensure baseline security in internet-wide ecosystems. I found that the lack of proper compliance checking has induced a huge gap in the payment card industry (PCI) ecosystem. I showed that none of the six PCI scanners we tested is fully compliant with the guidelines, issuing certificates to merchants that still have major vulnerabilities. Consequently, 86% of the 1,203 e-commerce websites we tested are non-compliant. To improve the testbeds in light of our work, the PCI Security Council shared a copy of our PCI measurement paper with the companies that host, manage, and maintain the PCI certification testbeds. / Doctor of Philosophy / Automatic screening tools have great potential to reduce the gap between the theory and the practice of software security. However, the goal of scalable automated code screening is largely hindered by the practical difficulty of reducing false positives without compromising analysis quality. To enable compile-time security checking of cryptographic vulnerabilities, I developed highly precise static analysis tools (CryptoGuard and TaintCrypt) that developers can use routinely. Both CryptoGuard and TaintCrypt uncovered numerous vulnerabilities in real-world software, demonstrating their effectiveness. Oracle has implemented our cryptographic code screening algorithms for Java in its internal code analysis platform, Parfait, and detected numerous previously unknown vulnerabilities. I also designed a specification language named SpanL to express rules for automated code screening easily. SpanL enables domain experts to create domain-specific security checks. Unfortunately, tools and guidelines are not sufficient to ensure baseline security in internet-wide ecosystems. I found that the lack of proper compliance checking has induced a huge gap in the payment card industry (PCI) ecosystem. I showed that none of the six PCI scanners we tested is fully compliant with the guidelines, issuing certificates to merchants that still have major vulnerabilities. Consequently, 86% of the 1,203 e-commerce websites we tested are non-compliant.
To improve the testbeds in light of our work, the PCI Security Council shared a copy of our PCI measurement paper with the companies that host the PCI certification testbeds.
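
As a flavor of the rules such screening enforces, here is a minimal sketch that flags two classic cryptographic misuses in Java source: ECB mode and hardcoded key material. The pattern matching below is an illustrative stand-in; CryptoGuard's actual algorithms refine program slices rather than match source text.

    import re

    # Two illustrative rules over Java crypto APIs.
    RULES = [
        (re.compile(r'Cipher\.getInstance\("AES/ECB'), "ECB mode is insecure"),
        (re.compile(r'SecretKeySpec\(\s*".*"\.getBytes'), "hardcoded key material"),
    ]

    def screen_source(java_source):
        # Report (line number, message) for each rule match.
        findings = []
        for lineno, line in enumerate(java_source.splitlines(), start=1):
            for pattern, message in RULES:
                if pattern.search(line):
                    findings.append((lineno, message))
        return findings

    snippet = '''
    Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
    SecretKeySpec k = new SecretKeySpec("s3cr3t".getBytes(), "AES");
    '''
    for lineno, msg in screen_source(snippet):
        print(f"line {lineno}: {msg}")
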
55

TECHNIQUES TO SECURE AND MONITOR CLIENT DATABASE APPLICATIONS

Daren Khaled Fadolalkarim (19200958) 23 July 2024 (has links)
In this thesis, we aim to secure database applications in several ways. We have designed, implemented, and experimentally evaluated two systems, AD-PROM and DCAFixer. AD-PROM monitors database applications as they run to detect changes in application behavior at run time. DCAFixer focuses on securing database applications at the early development stages, i.e., coding and testing.
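
A minimal sketch of the run-time monitoring idea behind a system like AD-PROM, assuming application behavior is profiled as the set of SQL statement shapes the application normally issues. The normalization and profile design below are illustrative assumptions, not the thesis's actual mechanism.

    import re

    def query_skeleton(sql):
        # Normalize a query by stripping literals, so structurally
        # identical queries map to the same profile entry.
        sql = re.sub(r"'[^']*'", "?", sql)   # string literals
        sql = re.sub(r"\b\d+\b", "?", sql)   # numeric literals
        return " ".join(sql.split()).upper()

    class AppProfile:
        def __init__(self):
            self.known = set()

        def train(self, sql):
            self.known.add(query_skeleton(sql))

        def check(self, sql):
            # True if the query matches learned application behavior.
            return query_skeleton(sql) in self.known

    profile = AppProfile()
    profile.train("SELECT name FROM users WHERE id = 42")

    print(profile.check("SELECT name FROM users WHERE id = 7"))  # True
    print(profile.check("SELECT * FROM salaries"))               # False: a deviation
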
56

Collecting and representing parallel programs with high performance instrumentation

Railing, Brian Paul 07 January 2016 (has links)
Computer architecture faces looming challenges: finding program parallelism, process-technology limits, and a limited power budget. Navigating these challenges requires a deeper understanding of parallel programs. I will discuss the task graph representation and how it enables programmers and compiler optimizations to understand and exploit the dynamic aspects of a program. I will present Contech, a high-performance framework for generating dynamic task graphs from arbitrary parallel programs. The Contech framework supports a variety of languages and parallelization libraries, and has been tested on both x86 and ARM. I will demonstrate how this framework supports a diversity of program analyses, in particular by modeling a dynamically reconfigurable, heterogeneous multi-core processor.
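
A minimal sketch of a dynamic task graph in the spirit of what Contech produces: nodes are units of work and edges are ordering dependencies such as spawn and join. The node contents and API below are illustrative assumptions, not Contech's actual format.

    from dataclasses import dataclass, field

    @dataclass
    class Task:
        task_id: int
        actions: list = field(default_factory=list)     # e.g., memory ops, basic blocks
        successors: list = field(default_factory=list)  # ordering edges

    class TaskGraph:
        def __init__(self):
            self.tasks = {}

        def add_task(self, task_id):
            self.tasks[task_id] = Task(task_id)
            return self.tasks[task_id]

        def add_dependence(self, src, dst):
            # Record that task `dst` may not start before `src` (spawn/join/sync).
            self.tasks[src].successors.append(dst)

    # A parent task spawns two workers and joins them.
    g = TaskGraph()
    for t in (0, 1, 2, 3):      # 0 = parent, 1-2 = workers, 3 = continuation
        g.add_task(t)
    g.add_dependence(0, 1)      # spawn
    g.add_dependence(0, 2)      # spawn
    g.add_dependence(1, 3)      # join
    g.add_dependence(2, 3)      # join
    print({t: g.tasks[t].successors for t in g.tasks})
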
57

Postavení atletiky ve školním vzdělávacím programu na 1.stupni základní školy / Position of Athletics in the School Curricula in the First Five Years of Primary School

Šašková, Veronika January 2014 (has links)
Based on an analysis of school educational programmes, as well as on statements gathered from teachers, I seek to determine and verify the representation of athletics in the first five years of primary school. Title of diploma thesis: The position of athletics in the school curricula in the first five years of primary school. Student: Veronika Šašková. Supervisor: Mgr. Zdeňka Engelthalerová. Objective of the work: Determine the position of athletics in the first five years of primary school. Key words: athletics, school age, framework educational program, primary school curriculum, school educational program
58

Dynamic program analysis algorithms to assist parallelization

Kim, Minjang 24 August 2012 (has links)
All market-leading processor vendors have started to pursue multicore processors as an alternative to high-frequency single-core processors for better energy and power efficiency. This transition to multicore processors no longer gives programmers the free performance gains once provided by increased clock frequencies. Parallelization of existing serial programs has become the most powerful approach to improving application performance. Not surprisingly, parallel programming is still extremely difficult for many programmers, mainly because thinking in parallel does not come naturally. However, we believe that software tools based on advanced analyses can significantly reduce this parallelization burden. Much active research and many tools address already-parallelized programs, for example by finding concurrency bugs. Instead, we focus on program analysis algorithms that assist the actual parallelization steps: (1) finding parallelization candidates, (2) understanding the parallelizability and profitability of the candidates, and (3) writing parallel code. A few commercial tools exist for these steps, and a number of researchers have proposed various methodologies and techniques to assist parallelization, but many weaknesses and limitations remain. To assist the parallelization steps more effectively and efficiently, this dissertation proposes Prospector, which consists of several new and enhanced program analysis algorithms. First, an efficient loop profiling algorithm is implemented. Frequently executed loops can be candidates for profitable parallelization, and detailed execution profiling of loops provides a guide for selecting initial parallelization targets. Second, an efficient and rich data-dependence profiling algorithm is presented. Data dependence is the most essential factor that determines parallelizability. Prospector exploits dynamic data-dependence profiling, an alternative and complementary approach to traditional static-only analyses. However, even state-of-the-art dynamic dependence analysis algorithms can only successfully profile programs with small memory footprints. Prospector introduces an efficient data-dependence profiling algorithm that supports large programs and inputs while providing highly detailed profiling information. Third, a new speedup prediction algorithm is proposed. Although loop profiling gives a qualitative estimate of the expected profit, obtaining accurate speedup estimates requires more sophisticated analysis. Prospector introduces a new dynamic emulation method to predict parallel speedups from annotated serial code. Prospector also provides a memory performance model to predict speedup saturation due to increased memory traffic. Compared to the latest related work, Prospector significantly improves both prediction accuracy and coverage. Finally, Prospector provides algorithms that extract hidden parallelism and advise on writing parallel code. We present a number of case studies showing how Prospector assists manual parallelization in particular cases, including privatization, reduction, mutex, and pipelining.
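
A minimal sketch of the shadow-memory idea behind dynamic data-dependence profiling: record the last iteration that wrote each address and report a loop-carried dependence when a later iteration reads that address. The trace format below is an assumption for illustration; a real profiler such as Prospector instruments the running program rather than consuming a prebuilt event list.

    def find_loop_carried_deps(trace):
        # `trace` is a list of (iteration, op, address) events, op in {'R','W'}.
        # Returns (writer_iter, reader_iter, address) tuples for
        # cross-iteration read-after-write dependences.
        last_writer = {}   # shadow memory: address -> last writing iteration
        deps = set()
        for iteration, op, addr in trace:
            if op == "R":
                w = last_writer.get(addr)
                if w is not None and w != iteration:
                    deps.add((w, iteration, addr))
            else:
                last_writer[addr] = iteration
        return deps

    # for i in 1..3: a[i] = a[i-1] + 1  -- a classic loop-carried dependence.
    trace = [
        (1, "R", 0x100), (1, "W", 0x104),
        (2, "R", 0x104), (2, "W", 0x108),
        (3, "R", 0x108), (3, "W", 0x10C),
    ]
    print(find_loop_carried_deps(trace))
    # {(1, 2, 260), (2, 3, 264)}: iterations 2 and 3 read earlier iterations' writes,
    # so the loop cannot be parallelized as-is.
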
59

Statistical causal analysis for fault localization

Baah, George Kofi 08 August 2012 (has links)
The ubiquitous nature of software demands that software be released without faults. However, software developers inadvertently introduce faults into software during development. To remove these faults, developers perform debugging, which is a difficult, tedious, and time-consuming process. Several semi-automated techniques have been developed to reduce the burden on the developer during debugging; these include experimental, statistical, and program-structure-based techniques. Most debugging techniques address the part of the debugging process that concerns finding the location of the fault, referred to as fault localization. Current fault-localization techniques have several limitations, including (1) problems with program semantics, (2) the requirement for automated oracles, which in practice are difficult if not impossible to develop, and (3) the lack of a theoretical basis for addressing the fault-localization problem. The thesis of this dissertation is that statistical causal analysis combined with program analysis is a feasible and effective approach to finding the causes of software failures. The overall goal of this research is to significantly extend the state of the art in fault localization. To that end, a novel probabilistic model that combines program-analysis information with statistical information in a principled manner is developed. The model, known as the probabilistic program dependence graph (PPDG), is applied to the fault-localization problem. The insights gained from applying the PPDG to fault localization fuel the development of a novel theoretical framework for fault localization based on established causal-inference methodology. This framework enables current statistical fault-localization metrics to be analyzed from a causal perspective. The analysis shows that the metrics are related to each other, allowing their unification. It also reveals that current statistical techniques do not find the causes of program failures; instead, they find the program elements most associated with failures. The fault-localization problem, however, is a causal problem, and statistical association does not imply causation. Several empirical studies conducted on several software subjects (1) confirm our analytical results and (2) demonstrate the efficacy of our causal technique for fault localization. The results demonstrate that the research in this dissertation significantly improves on the state of the art in fault localization.
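
To illustrate the dissertation's distinction between association and causation, here is a minimal sketch of a standard association-based metric, Ochiai suspiciousness, computed from pass/fail coverage counts. The coverage data are invented for illustration; a high score means a statement's execution is correlated with failure, not that it causes the failure.

    import math

    def ochiai(failed_cov, passed_cov, total_failed):
        # Ochiai suspiciousness for one statement: how strongly its
        # execution is *associated* with failing runs (not a causal measure).
        if failed_cov == 0:
            return 0.0
        return failed_cov / math.sqrt(total_failed * (failed_cov + passed_cov))

    # Per-statement coverage: (failing runs that executed it,
    # passing runs that executed it), with 2 failing runs overall.
    coverage = {
        "s1: x = input()": (2, 8),
        "s2: y = x / 0":   (2, 0),   # executed only by failing runs
        "s3: print(y)":    (1, 5),
    }
    for stmt, (f, p) in sorted(coverage.items(),
                               key=lambda kv: -ochiai(*kv[1], 2)):
        print(f"{ochiai(f, p, 2):.2f}  {stmt}")
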
60

Program analysis to support quality assurance techniques for web applications

Halfond, William G. J. 20 January 2010 (has links)
As web applications play an increasingly important role in the day-to-day lives of millions of people, testing and analysis techniques that ensure these applications function with a high level of quality are becoming even more essential. However, many software quality assurance techniques are not directly applicable to modern web applications. Certain characteristics, such as the use of HTTP and generated object programs, can make it difficult to identify the software abstractions used by traditional quality assurance techniques. More generally, many of these abstractions are implemented differently in web applications, and the lack of techniques to identify them complicates the application of existing quality assurance techniques to web applications. This dissertation describes the development of program analysis techniques for modern web applications and shows that these techniques can be used to improve quality assurance. The first part of the research focuses on developing a suite of program analysis techniques that identify useful abstractions in web applications. The second part evaluates whether these program analysis techniques can be used to successfully adapt traditional quality assurance techniques to web applications, improve existing web application quality assurance techniques, and develop new techniques focused on web-application-specific issues. The work on quality assurance techniques focuses on improving three areas: generating test inputs, verifying interface invocations, and detecting vulnerabilities. The evaluations of the resulting techniques show that the use of the program analyses results in significant improvements to existing quality assurance techniques and facilitates the development of new useful techniques.
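
A minimal sketch of one vulnerability-detection check in the spirit of this line of work: toy taint tracking that flags SQL queries built from unsanitized request parameters. The simple statement encoding below is an illustrative assumption, not the dissertation's actual analysis.

    def find_tainted_queries(statements, sources):
        # Toy flow-sensitive taint tracking over (target, operands, kind)
        # statements, where kind is 'assign', 'sanitize', or 'query'.
        # A query is flagged when any operand carries unsanitized input.
        tainted = set(sources)
        findings = []
        for lineno, (target, operands, kind) in enumerate(statements, start=1):
            operand_tainted = any(op in tainted for op in operands)
            if kind == "sanitize":
                tainted.discard(target)    # sanitizer output is considered clean
            elif operand_tainted:
                tainted.add(target)
                if kind == "query":
                    findings.append(lineno)
        return findings

    # name = request.getParameter("name"); q = "SELECT ..." + name; execute(q)
    program = [
        ("name", ["request.param"], "assign"),
        ("q",    ["name"],          "assign"),
        ("q",    ["q"],             "query"),
    ]
    print(find_tainted_queries(program, {"request.param"}))  # [3]
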
