61

Verification of programs with Z3

Romanowicz, Ewa 06 1900
Fixing errors in programs is usually labor-intensive and therefore expensive; it is also prone to human error and thus not fully reliable. Many program-verification methods have been developed, but they still require a lot of human input and interaction throughout the process. There is an increasing need for an automated software-verification tool that would reduce human interaction to a minimum. Satisfiability Modulo Theories (SMT) solvers, which build on SAT solvers, initially looked like a suitable and easy-to-use tool; Z3 in particular has fairly uncomplicated syntax and seems quite efficient. In this thesis, Z3 is used to find loop invariants, prove some properties of concurrent programs written in the Owicki-Gries style, and prove some properties of recursive programs. It appears that, in general, Z3 does not work as well as expected in all areas to which it was applied. / Master of Science (MS)
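For flavor, here is a minimal z3py sketch of the kind of query involved in invariant checking (an illustration, not code from the thesis): it asks Z3 whether a candidate invariant for a simple counting loop is inductive, that is, preserved by one loop iteration.

```python
# Minimal sketch (not from the thesis): check that a candidate invariant
# for the loop `while i < n: i += 1; s += 1`, started from i = s = 0,
# is inductive, i.e. preserved by one iteration.
from z3 import Ints, And, Not, Solver, unsat

i, s, n, i2, s2 = Ints("i s n i2 s2")
inv = And(i <= n, s == i)                    # candidate invariant: s tracks i
step = And(i < n, i2 == i + 1, s2 == s + 1)  # loop guard plus one iteration
inv_next = And(i2 <= n, s2 == i2)            # invariant on the new state

solver = Solver()
# Look for a counterexample to: inv /\ step => inv'. unsat = none exists.
solver.add(inv, step, Not(inv_next))
print("inductive" if solver.check() == unsat else "not inductive")
```

Z3 reports unsat here, meaning no state that satisfies the invariant can violate it after one step, so the candidate is inductive (the initial state i = s = 0 would still need a separate base-case check).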
62

A Generative Approach to Meshing Geometry

Elsheikh, Mustafa 09 1900
This thesis presents the design and implementation of a generative geometric kernel suitable for supporting a family of mesh-generation programs. The kernel is designed as a program generator that is generic, parametric, type-safe, and maintainable. The generator can produce specialized code that carries minimal traces of the design abstractions. We achieve genericity, understandability, and maintainability in the generator through a layered design that adopts its concepts from the affine-geometry domain. We achieve parametricity and type safety by using MetaOCaml's module system and its support for higher-order modules. The cost of adopting natural domain abstractions is reduced by combining MetaOCaml's support for multi-stage programming with the technique of abstract interpretation. / Master of Applied Science (MASc)
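To give a sense of what generating specialized code with minimal traces of the design abstractions means, here is a loose Python analogue (the thesis itself uses MetaOCaml's typed staging; everything below, including the function names, is an invented illustration): a generator that, for a fixed point count and dimension, emits an affine-combination routine with all loops unrolled away.

```python
# Loose Python analogue of staging (the thesis uses MetaOCaml; this example
# is invented): specialize the affine combination sum_k w[k] * pts[k] for a
# fixed point count and dimension, so the generated code contains no loops.
def gen_affine_combination(num_points: int, dim: int):
    terms = [
        " + ".join(f"w[{k}] * pts[{k}][{d}]" for k in range(num_points))
        for d in range(dim)
    ]
    src = "def combine(w, pts):\n    return (" + ", ".join(terms) + ")\n"
    namespace = {}
    exec(src, namespace)  # "run" the generated program, as staging would
    return namespace["combine"], src

combine, source = gen_affine_combination(num_points=3, dim=2)
print(source)  # fully unrolled code, no trace of the generator's loops
print(combine([0.2, 0.3, 0.5], [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]))  # (0.3, 0.5)
```

Unlike this string-based sketch, MetaOCaml's quoting and splicing make the generated code well-typed by construction, which is where the thesis's type-safety claim comes from.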
63

A Model-Based Approach to Formal Assurance Cases

Annable, Nicholas January 2020
The rapidly increasing complexity of safety-critical embedded systems has made it difficult to assure their safety and manage their documentation. More specifically, current approaches to safety assurance are struggling to keep up with the complex relationships between the ever-growing number of components and the sheer amount of code underlying safety-critical embedded systems such as road vehicles. We believe that an approach to safety assurance able to cope with this complexity must: i) have sound mathematical foundations on which safety assurance can be built; and ii) provide a formal framework with precisely defined semantics in which the assurance can be represented. In doing this, assurance can be made less ad hoc, more precise and more repeatable. Sound mathematical foundations also facilitate the creation of tools that automate many aspects of assurance, which will be invaluable in coping with the complexity of modern-day and future embedded systems. The model-based framework that achieves this is Workflow+. This framework is rigorous, built on proven notations from model-based methodologies, comprehensively integrates assurance within the development activities, and provides the basis for more formal assurance cases. / Thesis / Master of Applied Science (MASc)
64

Developing Scientific Computing Software: Current Processes and Future Directions

Tang, Jin January 2008
Considerable emphasis in scientific computing (SC) software development has been placed on the software qualities of performance and correctness. However, other software qualities have received less attention, such as usability, maintainability, testability and reusability.

Presented in this work is a survey titled "Survey on Developing Scientific Computing Software", which is apparently the first conducted to explore current approaches to SC software development and to determine which qualities of SC software are most in need of improvement. From the survey we found that a systematic development process is frequently not adopted in the SC software community: 58% of respondents indicated that their entire development process potentially consists only of coding and debugging. Moreover, semi-formal and formal specifications are rarely used when developing SC software, as suggested by the fact that 70% of respondents indicated that they use only informal specifications.

To address the problems in SC software development revealed by the survey, a solution is proposed to improve the quality of SC software by using software engineering methodologies, concretely a modified Parnas' Rational Design Process (PRDP) and the Unified Software Development Process (USDP). A comparison of the two candidate processes is provided to help SC software practitioners determine which of the two processes fits their particular situation. To clarify the discussion of PRDP and USDP for SC software and to help practitioners better understand how to use them, a completely documented one-dimensional numerical integration solver (ONIS) example is presented for both PRDP and USDP. / Master of Applied Science (MASc)
65

POWER-AWARE SCHEDULING FOR SERVER CLUSTERS

AL-DAOUD, HADIL January 2010
For the past few years, research in the area of computer clusters has been a hot topic, with the main focus on how to achieve the best performance in such systems. While this problem has been well studied, many of the solutions maximize performance at the expense of increasing the amount of power consumed by the cluster, and consequently raise the cost of power usage. Power management (PM) in such systems has therefore become necessary, and many PM policies have been proposed in the literature for both homogeneous and heterogeneous clusters.

In this work, for homogeneous clusters, we review two applicable policies that have been proposed in the literature for reducing power consumption. We also propose a power-saving policy, based on queueing-theory formulas, that attempts to minimize power consumption while satisfying given performance constraints. We evaluate this policy using simulation and compare it to other applicable policies.

Our main contribution is for heterogeneous clusters. We suggest a task-distribution policy to reduce power consumption; it requires solving two linear programming problems (LPs). Our simulation experiments show that the proposed policy achieves significant power savings compared to other distribution policies, especially for highly heterogeneous clusters. / Master of Applied Science (MASc)
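To illustrate the linear-programming flavor of such a task-distribution policy, here is a hypothetical sketch (the parameter values and this single-LP formulation are invented, not the thesis's actual model): route an incoming arrival rate across heterogeneous servers so that total power cost is minimized while no server exceeds a utilization cap.

```python
# Hypothetical sketch of an LP-based task-distribution policy (invented
# formulation and numbers): split an arrival rate lam across servers with
# different service rates mu[i] and power costs power[i] per unit of routed
# load, minimizing total power subject to a per-server utilization cap.
from scipy.optimize import linprog

lam = 90.0                   # total arrival rate (jobs/s), assumed
mu = [40.0, 60.0, 100.0]     # per-server service rates, assumed
power = [3.0, 4.0, 9.0]      # power cost per job/s routed to each server
util_cap = 0.8               # keep each server below 80% utilization

n = len(mu)
# Variables: lam_i = rate routed to server i. Minimize sum(power[i] * lam_i).
res = linprog(
    c=power,
    A_eq=[[1.0] * n], b_eq=[lam],                        # all work assigned
    bounds=[(0.0, util_cap * mu[i]) for i in range(n)],  # lam_i <= cap * mu_i
)
print("routing:", [round(x, 1) for x in res.x], "power:", round(res.fun, 1))
```

With these invented numbers the LP fills the cheapest servers up to their caps first, routing 32, 48, and 10 jobs/s respectively.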
66

FixEval: Execution-based Evaluation of Program Fixes for Competitive Programming Problems

Haque, Md Mahim Anjum 14 November 2023
In the software life cycle, source code repositories serve as vast storage areas for program code, ensuring its maintenance and version control throughout the development process. It is not uncommon for these repositories to house programs with hidden errors that manifest only under specific input conditions, causing the program to deviate from its intended functionality. The growing intricacy of software design has amplified the time and resources required to pinpoint and rectify these issues. These errors, often unintended by developers, can be challenging to identify and correct. While there are techniques to auto-correct faulty code, the expansive realm of potential solutions for a single bug means there is a scarcity of tools and datasets for effective evaluation of the corrected code. This study presents FIXEVAL, a benchmark that includes flawed code entries from competitive coding challenges and their corresponding corrections. FIXEVAL offers an extensive test suite that not only gauges the accuracy of fixes generated by models but also allows for the assessment of a program's functional correctness, shedding light on time and memory limits and on acceptance based on specific outcomes. We use cutting-edge language models trained on code as our reference point and compare them using match-based (essentially token-similarity) and execution-based (functional-assessment) criteria. Our research indicates that while match-based criteria may not truly represent the functional precision of model-generated fixes, execution-based approaches offer a comprehensive evaluation tailored to the solution. Consequently, we posit that FIXEVAL paves the way for practical automated error correction and assessment of model-generated code. The dataset and models for all of our experiments are made publicly available at https://github.com/mahimanzum/FixEval. / Master of Science / Think of source code repositories as big digital libraries where computer programs are kept safe and updated. Sometimes these programs have hidden mistakes, which we call bugs or errors, that only show up under certain conditions, making the program act differently than planned. As software gets more complex, it takes more time and effort to find and fix these mistakes. Even though there are ways to automatically fix these errors, finding the best solution can be like looking for a needle in a haystack, which is why there aren't many tools to check whether the automatic fixes are right. Enter FIXEVAL: our new tool that tests and compares faulty computer code from coding competitions and their fixes. It has a set of tests to see how well the fixed code works and gives insights into its performance and results. We used the latest computer language tools to see how well they fix code, comparing them in two ways: by looking at the code's structure and by testing its function. Our findings? Just looking at the code's structure isn't enough; we need to test how it works in action. We believe FIXEVAL is a big step forward in making sure automatic code fixes are spot-on. The dataset and models for all of our experiments are made publicly available at https://github.com/mahimanzum/FixEval.
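A minimal sketch of what execution-based evaluation means here (illustrative only; the actual FixEval harness lives in the linked repository): a candidate fix counts as correct only if, for every test case, running it on the input reproduces the expected output within a time limit.

```python
# Illustrative sketch of execution-based evaluation (the real FixEval
# harness is in the linked repository): a candidate fix passes only if,
# for every test case, running it on the input reproduces the expected
# output within a time limit.
import subprocess
import sys

def passes_all_tests(candidate_src: str, tests, time_limit=2.0) -> bool:
    for stdin_text, expected in tests:
        try:
            run = subprocess.run(
                [sys.executable, "-c", candidate_src],
                input=stdin_text, capture_output=True,
                text=True, timeout=time_limit,
            )
        except subprocess.TimeoutExpired:
            return False                      # time-limit exceeded
        if run.returncode != 0:
            return False                      # runtime error
        if run.stdout.strip() != expected.strip():
            return False                      # wrong answer
    return True

fix = "print(sum(int(x) for x in input().split()))"
print(passes_all_tests(fix, [("1 2 3", "6"), ("10 -4", "6")]))  # True
```

Match-based metrics, by contrast, would compare the candidate's tokens against a reference fix without ever running it.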
67

Computer-Supported Collaborative Work and Its Application to Software Engineering in a Case Environment

Bailey, Janet L. 05 1900
This study investigated, in the context of a field-based case study, the possibility of a synergistic union between computer-supported collaborative work (CSCW) and computer-aided software engineering (CASE) tools. A major dimension of today's software challenge is gearing up for large-scale system development, which necessitates large teams of systems engineers. The principal goal of this research was to advance the body of knowledge regarding the nature of collaborative technological support in the software development process. Specifically, the study was designed to evaluate the potential for using a CSCW tool as an effective front end to a CASE tool in the furtherance of SDLC goals.
68

A Cognitively Motivated System for Software Component Reuse

Mateas, Michael Joseph 30 July 1993
Software reuse via component libraries suffers from the twin problems of code location and comprehension. The Intelligent Code Object Planner (ICOP) is a cognitively motivated system that facilitates code reuse by answering queries about how to produce an effect with the library. It can plan for effects that are not primitive with respect to the library by building a plan that incorporates multiple components. The primary subsystems of ICOP are a knowledge base that describes the ontology of the library, a natural-language interface that translates user queries into a formal effect language (predicates), a planner that accepts the effect and produces a plan utilizing the library components, and an explanation generator that accepts the plan and produces example code illustrating the plan. ICOP is currently implemented in Prolog and supports a subset of the Windows 3.0 API.
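As a toy illustration of planning for non-primitive effects by composing components (an invented Python example; the actual system is written in Prolog and uses a richer ontology): model each library component by preconditions and effects, then search for a shortest component sequence that achieves the requested effect.

```python
# Toy version of ICOP-style planning (invented example; the real system is
# in Prolog): each library component has preconditions and effects, and a
# plan is a shortest component sequence whose combined effects achieve a
# requested, non-primitive effect.
from collections import deque

COMPONENTS = {  # name: (preconditions, effects) -- invented examples
    "create_window": (set(), {"window_exists"}),
    "load_bitmap":   (set(), {"bitmap_loaded"}),
    "draw_bitmap":   ({"window_exists", "bitmap_loaded"}, {"bitmap_shown"}),
}

def plan(goal: str):
    """Breadth-first search over sets of achieved effects."""
    queue = deque([(frozenset(), [])])
    seen = {frozenset()}
    while queue:
        state, steps = queue.popleft()
        if goal in state:
            return steps
        for name, (pre, eff) in COMPONENTS.items():
            if pre <= state:                 # component is applicable
                nxt = state | eff
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None                              # goal unreachable

print(plan("bitmap_shown"))  # ['create_window', 'load_bitmap', 'draw_bitmap']
```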
69

A Case Study of a Very Large Organization

Werner, Colin Mark 20 December 2011
The Very Large Organization (VLO) is an organization that produces hardware and software, which together form products. VLO granted access to data pertaining to seven different products and their development projects. One particular product is of interest to VLO since it was not as successful as the others. The focus of this thesis is to study the problematic product and compare it to the other six products in order to draw conclusions about it; the goal is to indicate areas of improvement that can help VLO improve future products. This thesis explores and answers the following research questions centered on the problematic product: Was the product indeed a failure? If so, what caused it to fail? What indications that the product would fail were evident during its development? What could VLO have done to prevent the product from becoming a failure? What can VLO learn from the failure? Are there data from the non-problematic products that indicate what VLO excels at? This thesis analyzes the data from all seven products and their projects in order to answer these questions; analyzing the non-problematic products is important in order to draw comparisons with the problematic one. As a result of this research, this thesis uncovers a variety of issues with the problematic product and identifies six areas for possible improvement: hardware research and development, decoupling of software from hardware, requirements management, maximal use of resources, developer order and priority of vital features, and schedule alignment. This thesis concludes that even though none of these six problematic areas can be pinpointed as the singular root cause of the failure, addressing them will improve the likelihood of product success.
