71

Compliance verification of a design model with respect to its specification model in the context of software defined radios : a model transformation approach /

Zamora Zapata, Juan Pablo, January 1900 (has links)
Thesis (Ph.D.) - Carleton University, 2005. / Includes bibliographical references (p. 255-258). Also available in electronic format on the Internet.
72

Verification and validation in software product line engineering

Addy, Edward A. January 1999 (has links)
Thesis (Ph. D.)--West Virginia University, 1999. / Title from document title page. Document formatted into pages; contains vi, 75 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 35-39).
73

Programontwikkelingsmetodologieë [Program development methodologies]

17 March 2015 (has links)
M.Sc. (Computer Science) / The data processing crisis in software development today can be ascribed firstly to insufficient requirements definition, which results from a lack of communication between developer and user, and secondly to insufficient project management. During the last decade we succeeded in adding more control and discipline to the traditional software development life cycle, but requirements specification remains a problem. The traditional software development life cycle is long and inflexible, and its results do not satisfy the requirements of the user. The prototyping approach can be part of a solution to these problems. The author proposes a four-dimensional conceptual model as a framework for the prototyping methodology that was developed as the basis for this study. In business practice today, confusion exists as to which prototypes are best to use: prototypes that are developed to become the complete system, or prototypes that are thrown away. Dimension one of the model is discussed in terms of type of prototype, meaning one of the different approaches to prototyping in the software development process; the author standardized on throw-away prototypes and evolutionary prototypes. The most general and well-known use of prototyping is during the requirements definition phase, but this is not its only use. Dimension two of the model describes the different areas in which prototyping is used, e.g. requirements definition, as a technique during JAD sessions, during simulation, during the minimizing of risk, and in the development of working models. The development of prototypes should be an easy and rapid process; however, this depends on the tools that are used. Dimension three of the model is discussed in terms of tools.
74

An experimental study of cost cognizant test case prioritization

Goel, Amit 02 December 2002 (has links)
Test case prioritization techniques schedule test cases for regression testing in an order that increases their ability to meet some performance goal. One performance goal, rate of fault detection, measures how quickly faults are detected within the testing process. The APFD metric has been proposed for measuring the rate of fault detection. This metric applies, however, only in cases in which test costs and fault costs are uniform. In practice, fault costs and test costs are not uniform. For example, some faults which lead to system failures might be more costly than faults which lead to minor errors. Similarly, a test case that runs for several hours is much more costly than a test case that runs for a few seconds. Previous work has thus provided a second metric, APFD_c, for measuring rate of fault detection, which incorporates test costs and fault costs. However, studies of this metric thus far have been limited to abstract distribution models of costs. These distribution models did not represent actual fault costs and test costs for software systems. In this thesis, we describe some practical ways to estimate real fault costs and test costs for software systems, based on operational profiles and test execution timings. Further, we define some new cost-cognizant prioritization techniques which focus on the APFD_c metric. We report results of an empirical study investigating the rate of "units-of-fault-cost-detected-per-unit-test-cost" across various cost-cognizant prioritization techniques and the tradeoffs between techniques. The results of our empirical study indicate that cost-cognizant test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among various prioritization techniques. For example: (1) techniques incorporating feedback information (information from previous tests) outperformed those without any feedback information; (2) technique effectiveness differed most when faults were relatively difficult to detect; (3) in most cases, technique performance was similar at the function and statement levels; (4) surprisingly, techniques considering change location did not perform as well as expected. The study also reveals several practical issues that might arise in applying test case prioritization, as well as opportunities for future work. / Graduation date: 2003
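To make the metric concrete, here is a minimal Python sketch of a cost-cognizant rate-of-fault-detection score in the spirit of APFD_c, using the commonly cited formulation in which each fault's cost is weighted by the test cost remaining when the fault is first detected; the function and variable names are illustrative and not taken from the thesis.

```python
def apfd_c(order, test_costs, fault_costs, detected_by):
    """Cost-cognizant APFD_c sketch (higher is better).

    order        -- list of test ids in prioritized execution order
    test_costs   -- dict: test id -> execution cost (e.g. seconds)
    fault_costs  -- dict: fault id -> fault cost/severity
    detected_by  -- dict: fault id -> set of test ids that reveal it
    """
    position = {t: i for i, t in enumerate(order)}
    total_test_cost = sum(test_costs[t] for t in order)
    total_fault_cost = sum(fault_costs.values())

    weighted = 0.0
    for fault, revealers in detected_by.items():
        # index of the first test in the prioritized order that reveals this fault
        first = min(position[t] for t in revealers)
        # test cost remaining from that point on, counting only half of the
        # revealing test itself (the usual trapezoidal correction)
        remaining = sum(test_costs[order[j]] for j in range(first, len(order)))
        remaining -= 0.5 * test_costs[order[first]]
        weighted += fault_costs[fault] * remaining

    return weighted / (total_test_cost * total_fault_cost)


# Tiny illustrative example: two orderings of the same three tests.
costs = {"t1": 2.0, "t2": 10.0, "t3": 1.0}
severities = {"f1": 5.0, "f2": 1.0}
reveals = {"f1": {"t3"}, "f2": {"t2"}}

print(apfd_c(["t3", "t1", "t2"], costs, severities, reveals))  # cheap, severe-fault-revealing test first
print(apfd_c(["t2", "t1", "t3"], costs, severities, reveals))  # expensive test first scores much lower
```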
75

Test case prioritization

Chu, Chengyun, 1974- 01 June 1999 (has links)
Prioritization techniques are used to schedule test cases to execute in a specific order to maximize some objective function. There are a variety of possible objective functions, such as a function that measures how quickly faults can be detected within the testing process, or a function that measures how fast coverage of the program can be increased. In this paper, we describe several test case prioritization techniques, and empirical studies performed to investigate their relative abilities to improve how quickly faults can be detected by test suites. An improved rate of fault detection during regression testing can provide faster feedback about a system under regression test and let debuggers begin their work earlier than might otherwise be possible. The results of our studies indicate that test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among various prioritization techniques. / Graduation date: 2000
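As an illustration of the kind of technique studied in this line of work, the following Python sketch implements a simple greedy "additional coverage" prioritization, assuming per-test coverage sets as input; the names and data layout are assumptions made for the example, not details from the thesis.

```python
def prioritize_additional_coverage(coverage):
    """Greedy 'additional' prioritization sketch.

    coverage -- dict: test id -> set of covered statements (or branches)
    Returns the test ids ordered so that each step picks the test adding the
    most not-yet-covered elements; ties fall back to total coverage size.
    """
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        best = max(
            remaining,
            key=lambda t: (len(remaining[t] - covered), len(remaining[t])),
        )
        order.append(best)
        covered |= remaining.pop(best)
    return order


# Example: t2 covers the most overall, but once it has run, t3 contributes
# more new statements than t1, so the greedy order becomes t2, t3, t1.
cov = {
    "t1": {1, 2, 3},
    "t2": {1, 2, 3, 4, 5},
    "t3": {6, 7},
}
print(prioritize_additional_coverage(cov))  # ['t2', 't3', 't1']
```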
76

Measurement based continuous assessment of software engineering processes /

Järvinen, Janne. January 2000 (has links) (PDF)
Thesis (doctoral)--University of Oulu, 2000. / Includes bibliographical references. Also available on the World Wide Web.
77

Marketing of fourth generation software products in Hong Kong /

To, Chi-cheung, Solomon. January 1987 (has links)
Thesis (M.B.A.)--University of Hong Kong, 1987.
78

Integration of model checking into software development processes

Xie, Fei 28 August 2008 (has links)
Not available
79

Testing concurrent software systems

Kilgore, Richard Brian 28 August 2008 (has links)
Not available
80

Design metrics forensics : an analysis of the primitive metrics in the Zage design metrics

Kwan, Pak Leung January 1994 (has links)
The Software Engineering Research Center (SERC) Design Metrics Research Team at Ball State University has developed a design metric D(G) of the form: D(G) = De + Di, where De is the architectural design metric (external design metric) and Di is the detailed design metric (internal design metric). Questions to be investigated in this thesis are: Why can De be an indicator of the potential error modules? Why can Di be an indicator of the potential error modules? Are there any significant factors that dominate the design metrics? In this thesis, the report of the STANFINS data is evaluated by using correlation analysis, regression analysis, and several other statistical techniques. The STANFINS study is chosen because it contains approximately 532 programs, 3,000 packages and 2,500,000 lines of Ada. The design metrics study was completed on 21 programs (approximately 24,000 lines of code) which were selected by CSC development teams. Error reports were also provided by CSC personnel. / Department of Computer Science
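A minimal sketch of the additive structure D(G) = De + Di described above; the primitive counts and weights used here are placeholders for illustration only and are not the actual definitions of the Zage design metrics.

```python
def design_metric(external, internal, e_weights=(1.0, 1.0), i_weights=(1.0, 1.0, 1.0)):
    """Sketch of an additive design metric D(G) = De + Di.

    external -- tuple of external (architectural) primitive counts, e.g.
                placeholder names like flow products and fan-in/fan-out products
    internal -- tuple of internal (detailed) primitive counts, e.g. placeholder
                names like call counts, data-structure references, I/O references
    The weights and primitive names are illustrative; identifying which
    primitives actually dominate the metric is the thesis's question.
    """
    d_e = sum(w * p for w, p in zip(e_weights, external))
    d_i = sum(w * p for w, p in zip(i_weights, internal))
    return d_e + d_i, d_e, d_i


d_g, d_e, d_i = design_metric(external=(12, 8), internal=(3, 5, 2))
print(d_g, d_e, d_i)  # 30.0 20.0 10.0
```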
