471

Design-time performance testing

Hopkins, Ian Keith 01 April 2009 (has links)
Software designers choose between alternate approaches early in the development of a software application, and these decisions can be difficult to change later. Designers make these decisions based on estimates of how the alternatives affect software qualities. One software quality that is difficult to predict is performance, that is, the efficient use of resources in the system. It is particularly challenging to estimate the performance of large, interconnected software systems composed of components. With the proliferation of class libraries, middleware systems, web services, and third-party components, many software projects rely on third-party services to meet their requirements, and choosing between services often involves considering both their functionality and their performance. To help software developers compare their designs and third-party services, I propose using performance prototypes of the alternatives, together with test suites, to estimate performance trade-offs early in the development cycle, a process called Design-Time Performance Testing (DTPT).

Providing software designers with performance evidence based on prototypes allows them to make informed decisions regarding performance trade-offs. To show how DTPT can inform real design decisions, this thesis presents a process for DTPT, a framework implementation written in Java, and experiments to verify and validate the process and implementation. The framework assists in designing, running, and documenting performance test suites, allowing designers to make accurate comparisons between alternate approaches. Performance metrics are captured by instrumenting and running the prototypes.

This thesis describes the process and framework for gathering software performance estimates at design time using prototypes and test suites.
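As an illustration of the idea, the sketch below times one fixed workload against two competing prototype implementations and compares the results. It is a minimal sketch, assuming a harness of this general shape; the names (Prototype, measure) and the workloads are invented for this example and are not the thesis's actual framework API.

```java
// Hypothetical sketch of design-time performance testing: run the same test
// workload against two prototypes of a design decision and compare timings.
public class DtptSketch {

    interface Prototype {
        void handleRequest(int payloadSize);
    }

    // Time a fixed workload against one prototype; returns mean ns per call.
    static double measure(Prototype p, int iterations) {
        for (int i = 0; i < 10_000; i++) p.handleRequest(64);  // JIT warm-up
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) p.handleRequest(64);
        return (System.nanoTime() - start) / (double) iterations;
    }

    public static void main(String[] args) {
        // Two stand-ins for competing design alternatives.
        Prototype alternativeA = size -> { long s = 0; for (int i = 0; i < size; i++) s += i; };
        Prototype alternativeB = size -> { long s = 0; for (int i = 0; i < size * 4; i++) s += i; };
        System.out.printf("A: %.1f ns/op, B: %.1f ns/op%n",
                measure(alternativeA, 100_000), measure(alternativeB, 100_000));
    }
}
```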
472

Study on an Architecture-Oriented Software Testing Management Model

Li, Fu-shiau 07 June 2007 (has links)
There are many approaches to testing software. Effectively testing a complicated software product requires not merely assessing the software development process but managing quality in all its aspects, so managing software testing is an extremely important topic. Today the V-Model is the most widely used software testing management model, and this thesis targets the V-Model as the model to be improved. This research argues that software testing needs a simple and clear management model to follow: when a testing effort does not know what to do or how to proceed, the testing results often violate the requirements, which in turn causes serious software quality problems. By collecting academic literature, system and architecture definitions, software test management theory, software testing standards, and existing software testing management models, this thesis brings the concept of architecture into software test management and proposes the Architecture-Oriented Software Testing Management Model (AOSTMM). AOSTMM can describe both what to do and how to proceed in software testing; this is the contribution of this research.
473

Integration of a Standard-Based Quality Assessment into the VizzAnalyzer

Ruiz de Azua, David January 2006 (has links)
More than half of the total cost of owning a software system is maintenance cost. Reverse engineering is becoming more important, and more complex, for huge systems, and tools for reverse engineering are necessary for system evaluation. The ISO/IEC 9126 standard defines software quality, and the VizzAnalyzer Framework is a stand-alone tool for analyzing and visualizing the structure of large software systems.

In this thesis, we describe the design and implementation of plug-ins for the VizzAnalyzer Framework, a system for reverse engineering, extending its features with a quality assessment based on the software quality standard. The work has proven useful, making the VizzAnalyzer Framework the first such tool to include a software quality model.
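As a rough illustration of a standard-based quality assessment, the sketch below aggregates normalized code metrics into a score for one ISO/IEC 9126 characteristic. The metric names and weights are invented for this example and do not reflect the plug-in's actual quality model.

```java
import java.util.Map;

// Hypothetical sketch: combine low-level metrics into a maintainability-risk
// score, one of the ISO/IEC 9126 quality characteristics. Weights are made up.
public class QualityModelSketch {

    // Weighted linear aggregation of normalized metric values in [0, 1].
    static double maintainabilityRisk(Map<String, Double> normalizedMetrics) {
        return 0.4 * normalizedMetrics.getOrDefault("cyclomaticComplexity", 0.0)
             + 0.3 * normalizedMetrics.getOrDefault("coupling", 0.0)
             + 0.3 * normalizedMetrics.getOrDefault("lackOfCohesion", 0.0);
    }

    public static void main(String[] args) {
        double score = maintainabilityRisk(Map.of(
                "cyclomaticComplexity", 0.7, "coupling", 0.5, "lackOfCohesion", 0.6));
        System.out.printf("maintainability risk: %.2f%n", score);
    }
}
```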
474

The role of software engineering process in research & development and prototyping organizations

Willis, Michael Brian, 1980- 05 January 2011 (has links)
Software Research and Development Organizations (or SRDs) have unique goals that differ from the goals of Production Software Organizations. SRDs focus on exploring the unknown, while Production Software Organizations focus on implementing solutions to known problems. These unique goals call for reevaluating the role of Software Engineering Process for SRDs. This paper presents six common Software Engineering Processes and then analyzes their strengths and weaknesses for SRDs. The processes presented are: Waterfall, Rational Unified Process (RUP), Evolutionary Delivery Life Cycle (EDLC), Team Software Process (TSP), Agile Development, and Extreme Programming (XP). The results indicate that an ideal software process for SRDs is iterative, emphasizes visual models, uses a simple organization structure, produces working software (with limited functionality) early in the lifecycle, exploits individual capabilities, minimizes artifacts, adapts to new discoveries and requirements, and utilizes collective code ownership among developers. The results also indicate that an ideal software process for SRDs does NOT define rigid personnel roles or rigid artifacts, is NOT metric-driven, and does NOT implement pair programming. This paper justifies why SRDs require a unique software process, outlines the ideal SRD software process, and shows how to tailor existing software processes to meet the unique needs of SRDs.
475

Exploiting structure for scalable software verification

Babić, Domagoj 11 1900 (has links)
Software bugs are expensive. Recent estimates by the US National Institute of Standards and Technology claim that the cost of software bugs to the US economy alone is approximately 60 billion USD annually. As society becomes increasingly software-dependent, bugs also reduce our productivity and threaten our safety and security. Decreasing these direct and indirect costs represents a significant research challenge as well as an opportunity for businesses. Automatic software bug-finding and verification tools have the potential to revolutionize the software engineering industry by improving reliability and decreasing development costs. Since software analysis is in general undecidable, automatic tools have to use various abstractions to make the analysis computationally tractable. Abstraction is a double-edged sword: coarse abstractions, in general, yield easier verification but less precise results. This thesis focuses on exploiting the structure of software for abstracting away irrelevant behavior. Programmers tend to organize code into objects and functions, which effectively represent natural abstraction boundaries. Humans use such structural abstractions to simplify their mental models of software and to construct informal explanations of why a piece of code should work. A natural question to ask is: how can automatic bug-finding tools exploit the same natural abstractions? This thesis offers possible answers. More specifically, I present three novel ways to exploit structure at three different steps of the software analysis process. First, I show how symbolic execution can preserve the data-flow dependencies of the original code while constructing compact symbolic representations of programs. Second, I propose structural abstraction, which exploits the structure preserved by the symbolic execution and solves a long-standing open problem: scalable interprocedural path- and context-sensitive program analysis. Finally, I present an automatic tuning approach that exploits the fine-grained structural properties of software (namely, data- and control-dependency) for faster property checking. This approach resulted in a 500-fold speedup over the best previous techniques. Automatic tuning not only redefined the limits of automatic software analysis tools but has also found its way into other domains (like model checking), demonstrating the generality and applicability of the idea.
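One standard technique for building compact symbolic representations with shared subterms is hash-consing, where structurally identical subexpressions are stored as a single node. The sketch below illustrates that flavor of sharing; it is an assumption-laden stand-in, not the thesis's actual data structures.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of hash-consed expression nodes: identical subexpressions are built
// only once, so the representation forms a compact DAG rather than a tree.
public class HashConsSketch {

    static final class Expr {
        final String op;            // e.g. "+", or a leaf like "x"
        final Expr left, right;     // null for leaves
        Expr(String op, Expr left, Expr right) {
            this.op = op; this.left = left; this.right = right;
        }
    }

    private static final Map<List<Object>, Expr> pool = new HashMap<>();

    // Return the canonical node for (op, left, right), creating it only once.
    // Children are themselves canonical, so identity equality suffices.
    static Expr mk(String op, Expr left, Expr right) {
        return pool.computeIfAbsent(Arrays.asList(op, left, right),
                k -> new Expr(op, left, right));
    }

    public static void main(String[] args) {
        Expr x = mk("x", null, null);
        Expr a = mk("+", x, x);
        Expr b = mk("+", x, x);
        System.out.println(a == b);  // true: the subexpression is shared
    }
}
```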
476

A Mode-Based Pattern for Feature Requirements, and a Generic Feature Interface

Dietrich, David January 2013 (has links)
Feature-oriented requirements decompose a system's requirements into individual bundles of functionality called features, where each feature's behaviour can be expressed as a state-machine model. However, state machines are difficult to write; determining how to decompose behaviour into states is not obvious, different stakeholders will have different opinions on how to structure the state machine, and the state machines can easily become too complex. This thesis proposes a pattern for decomposing and structuring the model of a feature's behavioural requirements, based on modes of operation (e.g., Active, Inactive, Failed) that are common to features in multiple domains. Interestingly, the highest-level modes of the pattern can serve as a generic behavioural interface for all features that adhere to the pattern. The thesis also proposes several pattern extensions that provide guidance on how to structure the Active and Inactive behaviour of the feature. The pattern was applied to model the behavioural requirements of 21 automotive features that were specified in 7 production-grade requirements documents. The pattern was applicable to all 21 features, and the proposed generic feature interface was applicable to 50 of 58 inter-feature references. A user study with 18 participants evaluated whether use of the pattern made it easier than otherwise to write state machines for features and whether feature state machines written with the help of the pattern are more readable than those written without it. The results of the study indicate that use of the pattern facilitates the writing of feature state machines.
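A minimal sketch of what such a generic mode interface might look like follows. Only the mode names (Active, Inactive, Failed) come from the abstract; the feature, its events, and the transition rules are invented examples, not the thesis's pattern definition.

```java
// Sketch: top-level modes as a generic feature interface that other
// features can query, with one hypothetical feature implementing it.
public class FeatureModeSketch {

    enum Mode { INACTIVE, ACTIVE, FAILED }

    interface Feature {
        Mode mode();                 // generic behavioural interface
        void onEvent(String event);
    }

    static class CruiseControl implements Feature {
        private Mode mode = Mode.INACTIVE;

        public Mode mode() { return mode; }

        public void onEvent(String event) {
            switch (event) {
                case "engage"      -> { if (mode == Mode.INACTIVE) mode = Mode.ACTIVE; }
                case "disengage"   -> { if (mode == Mode.ACTIVE) mode = Mode.INACTIVE; }
                case "sensorFault" -> mode = Mode.FAILED;  // failure dominates
                default            -> { /* ignore unknown events */ }
            }
        }
    }

    public static void main(String[] args) {
        Feature cc = new CruiseControl();
        cc.onEvent("engage");
        cc.onEvent("sensorFault");
        System.out.println(cc.mode());  // FAILED
    }
}
```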
477

Software reliability prediction based on design metrics

Stineburg, Jeffrey January 1999 (has links)
This study presents a new model for predicting software reliability based on design metrics. An introduction to the problem of software reliability is followed by a brief overview of software reliability models, including a description of the models and a discussion of some of the issues associated with them. The intractability of validating life-critical software is presented: such validation is shown to require extended periods of test time that are impractical in real-world situations. This problem is also inherent in the fault-tolerant software systems currently being implemented in critical applications. The design metrics developed at Ball State University are proposed as the basis of a new model for predicting software reliability from information available during the design phase of development. The thesis investigates the proposition that a relationship exists between the design metric D(G) and the errors that are found in the field. A study, performed on a subset of a large defense software system, found evidence to support the proposition.
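The abstract does not define D(G), so the sketch below substitutes a classic design-level metric of the same flavor: a simplified Henry-Kafura information-flow measure, (fanIn x fanOut)^2 per module, computed from a module dependency map. It illustrates the kind of design-phase input such a reliability model consumes, not the study's actual metric.

```java
import java.util.List;
import java.util.Map;

// Stand-in for a design-phase metric: information flow per module, derived
// from which modules call which. (Simplified: the module-length factor of
// the full Henry-Kafura measure is omitted.)
public class DesignMetricSketch {

    static long informationFlow(String module, Map<String, List<String>> calls) {
        long fanOut = calls.getOrDefault(module, List.of()).size();
        long fanIn = calls.values().stream()
                .filter(targets -> targets.contains(module)).count();
        long flow = fanIn * fanOut;
        return flow * flow;
    }

    public static void main(String[] args) {
        Map<String, List<String>> calls = Map.of(
                "parser", List.of("lexer", "ast"),
                "lexer", List.of(),
                "ast", List.of("lexer"));
        System.out.println(informationFlow("lexer", calls));  // fanIn=2, fanOut=0 -> 0
    }
}
```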
478

Using the Design Metrics Analyzer to improve software quality

Wilburn, Cathy A. January 1994 (has links)
Effective software engineering techniques are needed to increase the reliability of software systems, to increase the productivity of development teams, and to reduce the costs of software development. Companies search for an effective software engineering process as they strive to reach higher process maturity levels and produce better software. To aid in this quest for better methods of software engineering, the Design Metrics Research Team at Ball State University has analyzed university and industry software in order to detect error-prone modules. The research team has developed, tested, and validated its design metrics and found them to be highly successful. These metrics were typically collected and calculated by hand, so the Design Metrics Analyzer for Ada (DMA) was created to collect them more consistently, more accurately, and faster. The DMA collects metrics from the submitted files at the subprogram level. The metric results are then analyzed to yield a list of stress points: modules that are considered error-prone or difficult for developers. This thesis describes the Design Metrics Analyzer, explains its output, and describes how it functions. It also discusses ways the DMA can be used in the software development life cycle.
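The sketch below illustrates the analyzer's final step in hypothetical form: flagging modules whose metric values exceed a cutoff as stress points. The metric values and the cutoff here are invented; the real DMA's metrics and thresholds were empirically validated against error data.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical stress-point detection: any module whose metric value
// exceeds the cutoff is reported as error-prone.
public class StressPointSketch {

    static List<String> stressPoints(Map<String, Double> metricByModule, double cutoff) {
        return metricByModule.entrySet().stream()
                .filter(e -> e.getValue() > cutoff)
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Double> metrics = Map.of(
                "Scheduler", 42.0, "Logger", 3.5, "Router", 57.0);
        System.out.println(stressPoints(metrics, 40.0));  // [Router, Scheduler]
    }
}
```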
479

Improving Recurrent Software Development: A Contextualist Inquiry Into Release Cycle Management

Kamran, Syed M 15 April 2014 (has links)
Software development is increasingly conducted in a recurrent fashion, where the same product or service is continuously being developed for the marketplace. Still, we lack detailed studies of this particular context of software development. Against this backdrop, this dissertation presents an action research study of Software Inc., a large multi-national software provider. The research addressed the challenges the company faced in managing releases and organizing software process improvement (SPI) to help recurrently develop and deliver a specific product, Secure-on-Request, to its customers and the wider marketplace. The initial problem situation was characterized by the recent acquisition of additional software, complexity of service delivery, new engineering and product management teams, and low software development process maturity. Asking how release management can be organized and improved in the context of recurrent development of software, we draw on Pettigrew's contextualist inquiry, focusing on the ongoing interaction between content, context, and process to organize and improve release cycle practices and outcomes. As a result, the dissertation offers two contributions. Practically, it contributes to the resolution of the problem situation at Software Inc. Theoretically, it introduces a new software engineering discipline, release cycle management (RCM), which focuses on the recurrent delivery of software, includes SPI as an integral part, and is grounded in the specific experiences at Software Inc.
480

A comprehensive approach for software dependency resolution

Zhang, Hanyu 28 July 2011 (has links)
Software reuse is prevalent in software development, and it is not uncommon for one software product to depend on numerous libraries or products in order to build, install, or run. Software reuse is difficult due to the complex interdependency relationships between software packages. In this work, we present four approaches to retrieving such dependency information, each focusing on a specific source: source code, build scripts, binary files, and Debian package specifications. The techniques are realized in a prototype tool, DEx, which we applied to a large collection of Debian projects in a comprehensive evaluation that assesses the techniques and compares them from various aspects.
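One of the four sources is Debian package metadata. The sketch below shows a simplified parse of a control stanza's Depends field into dependency edges: version constraints in parentheses are stripped, and alternatives ("a | b") become separate candidate edges. It is illustrative only, not the DEx tool's actual parser.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified parse of a Debian control file's Depends field into a list of
// package names, e.g. "libc6 (>= 2.17), default-jre | openjdk-11-jre".
public class DebianDepsSketch {

    static List<String> parseDepends(String dependsField) {
        List<String> deps = new ArrayList<>();
        for (String clause : dependsField.split(",")) {
            for (String alternative : clause.split("\\|")) {
                // Drop version constraints such as "(>= 2.31)".
                String name = alternative.replaceAll("\\(.*?\\)", "").trim();
                if (!name.isEmpty()) deps.add(name);
            }
        }
        return deps;
    }

    public static void main(String[] args) {
        String field = "libc6 (>= 2.17), zlib1g, default-jre | openjdk-11-jre";
        System.out.println(parseDepends(field));
        // [libc6, zlib1g, default-jre, openjdk-11-jre]
    }
}
```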
