41

Moops: A web implementation of the Personal Software Process reporting system

Gigler, Thomas Russell, III. 01 January 2008 (has links)
The purpose of Moops is to bridge the gap between PSP Scriber, which is geared very specifically to the CSCI655 class, and other available PSP implementations, which are so general that they are difficult to use immediately without spending valuable time learning the software. Moops is a PHP/MySQL-based web application designed to provide students taking the CSCI655 graduate software engineering course at CSUSB with an intuitive, easy-to-use tool for implementing the Personal Software Process (PSP). Moops eliminates the possibility of calculation errors by completing all calculations for the user.
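The abstract does not spell out the formulas Moops automates, but the standard PSP derived metrics behind such a tool look roughly like the following sketch (the function name and inputs are illustrative, not taken from the thesis):

```python
# A minimal sketch (not from the thesis) of the kind of PSP derived
# metrics a tool like Moops would compute automatically for a student.

def psp_summary(loc_added_modified, minutes_spent, defects_found):
    """Compute standard PSP derived metrics from raw log data."""
    hours = minutes_spent / 60.0
    productivity = loc_added_modified / hours  # LOC per hour
    defect_density = defects_found / (loc_added_modified / 1000.0)  # defects/KLOC
    return {
        "productivity_loc_per_hour": round(productivity, 1),
        "defect_density_per_kloc": round(defect_density, 1),
    }

print(psp_summary(loc_added_modified=250, minutes_spent=300, defects_found=4))
# {'productivity_loc_per_hour': 50.0, 'defect_density_per_kloc': 16.0}
```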
42

LDPL: A Language Designer's Pattern Language

Winn, Tiffany Rose, winn@infoeng.flinders.edu.au January 2006 (has links)
Patterns provide solutions to recurring design problems in a variety of domains, including that of software design. The best patterns are generative: they show how to build the solution they propose, rather than just explaining it. A collection of patterns that work together to generate a complex system is called a pattern language. Pattern languages have been written for domains as diverse as architecture and computer science, but the process of developing pattern languages is not well understood. This thesis focuses on defining both the structure of pattern languages and the processes by which they are built. The theoretical foundation of the work is existing theory on symmetry breaking. The form of the work is itself a pattern language: a Language Designer's Pattern Language (LDPL). LDPL itself articulates the structure of pattern languages and the key processes by which they form and evolve, and thus guides the building of a properly structured pattern language. LDPL uses multidisciplinary examples to validate the claims made, and an existing software pattern language is analyzed using the material developed.

A key assumption of this thesis is that a pattern language is a structural entity; a pattern is not just a transformation on system structure, but also the resultant structural configuration. Another key assumption is that it is valid to treat a pattern language itself as a complex, designed system, and therefore valid to develop a pattern language for building pattern languages.

One way of developing a pattern language for building pattern languages would be to search for underlying commonality across a variety of existing, well known pattern languages. Such underlying commonality would form the basis for patterns in LDPL. This project has not directly followed this approach, simply because very few pattern languages that are genuinely structural have currently been explicitly documented. Instead, given that pattern languages articulate structure and behavior of complex systems, this research has investigated existing complex systems theory - in particular, symmetry-breaking - and used that theory to underpin the pattern language. The patterns in the language are validated by examples of those patterns within two well known pattern languages, and within several existing systems whose pattern languages have not necessarily been explicitly documented as such, but the existence of which is assumed in the analysis.

In addition to developing LDPL, this project has used LDPL to critique an existing software pattern language, and to show how that software pattern language could potentially have been generated using LDPL. Existing relationships between patterns in the software language have been analyzed and, in some cases, changes to patterns and their interconnections have been proposed as a way of improving the language.

This project makes a number of key contributions to pattern language research. It provides a basis for semantic analysis of pattern languages and demonstrates the validity of using a pattern language to articulate the structure of pattern languages and the processes by which they are built. The project uses symmetry-breaking theory to analyze pattern languages and applies that theory to the development of a language. The resulting language, LDPL, provides language developers with a tool they can use to help build pattern languages.
43

Towards guidelines for development of energy conscious software / Mot riktlinjer för utveckling av energisnål mjukvara

Carlstedt-Duke, Edward, Elfström, Erik January 2009 (has links)
In recent years, the drive for ever-increasing energy efficiency has intensified. The main driving forces behind this development are the increased innovation and adoption of mobile battery-powered devices, rising energy costs, environmental concerns, and the push for denser systems.

This work is meant to serve as a foundation for exploration of energy conscious software. We present an overview of previous work and a background to energy concerns from a software perspective. In addition, we describe and test a few methods for decreasing energy consumption, with emphasis on using software parallelism. The experiments are conducted using both a simulation environment and real hardware. Finally, a method for measuring energy consumption on a hardware platform is described.

We conclude that energy conscious software is very dependent on what hardware energy saving features, such as frequency scaling and power management, are available. If the software performs a lot of unnecessary or overcomplicated work, energy consumption can be lowered to some extent by optimizing the software and reducing the overhead. If the hardware provides software-controllable energy features, energy consumption can be lowered dramatically.

For suitable workloads, using parallelism and multi-core technologies seems very promising for producing low-power software. Realizing this potential requires a very flexible hardware platform. Most important is fine-grained control over power management and voltage and frequency scaling, preferably on a per-core basis.
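As one concrete illustration of the "software-controllable energy features" the thesis depends on, the sketch below reads and sets a per-core CPU frequency through the standard Linux cpufreq sysfs interface. This is an assumed example platform, not the hardware used in the thesis, and writing these files requires root privileges:

```python
# Hypothetical sketch: per-core frequency scaling from software via the
# standard Linux cpufreq sysfs layout (assumed platform, not the thesis's).

from pathlib import Path

def cpufreq_path(core: int, attr: str) -> Path:
    return Path(f"/sys/devices/system/cpu/cpu{core}/cpufreq/{attr}")

def current_frequency_khz(core: int) -> int:
    return int(cpufreq_path(core, "scaling_cur_freq").read_text())

def set_frequency_khz(core: int, khz: int) -> None:
    # The "userspace" governor hands frequency choice over to software.
    cpufreq_path(core, "scaling_governor").write_text("userspace")
    cpufreq_path(core, "scaling_setspeed").write_text(str(khz))

if __name__ == "__main__":
    print(f"core 0 running at {current_frequency_khz(0)} kHz")
```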
44

Mixed-fidelity prototyping of user interfaces

Petrie, Jennifer 08 February 2006
This research presents a new technique for user interface prototyping, called mixed-fidelity prototyping. Mixed-fidelity prototyping combines low-, medium-, and high-fidelity interface elements within a single prototype in a lightweight manner, supporting independent refinement of individual elements. The approach allows designers to investigate alternate designs, including more innovative designs, and elicit feedback from stakeholders without having to commit too early in the process. As well, the approach encourages collaboration among a diverse group of stakeholders throughout the design process. For example, individuals who specialize in specific fidelities, such as high-fidelity components, are able to become involved earlier on in the process.

We developed a conceptual model called the Region Model and implemented a proof-of-concept system called ProtoMixer. We demonstrated the mixed-fidelity approach by using ProtoMixer to design an example application.

ProtoMixer has several benefits over other existing prototyping tools. With ProtoMixer, prototypes can be composed of multiple fidelities, and elements are easily refined and transitioned between different fidelities. Individual elements can be tied into data and functionality, and can be executed inside prototypes. As well, traditional informal practices such as sketching and storyboarding are supported. Furthermore, ProtoMixer is designed for collaborative use on a high-resolution, large display workspace.
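The abstract does not describe the Region Model's internals, so the following is a hypothetical sketch of the core data structure a mixed-fidelity canvas implies: each region of the prototype carries its own fidelity level and can be refined independently of its neighbours. All names are illustrative, not from the thesis:

```python
# Hypothetical sketch of a mixed-fidelity prototype as a set of regions,
# each refinable independently. Names are invented for illustration.

from dataclasses import dataclass
from enum import Enum

class Fidelity(Enum):
    LOW = "sketch"        # hand-drawn placeholder
    MEDIUM = "wireframe"  # real layout, fake data
    HIGH = "functional"   # live component tied to data and behaviour

@dataclass
class Region:
    name: str
    fidelity: Fidelity
    bounds: tuple[int, int, int, int]  # x, y, width, height on the canvas

    def refine(self, new_fidelity: Fidelity) -> None:
        """Swap this region's fidelity without touching its neighbours."""
        self.fidelity = new_fidelity

prototype = [
    Region("navigation", Fidelity.HIGH, (0, 0, 800, 60)),
    Region("search results", Fidelity.LOW, (0, 60, 600, 540)),
]
prototype[1].refine(Fidelity.MEDIUM)  # independent refinement of one element
```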
45

Improving maintainability on modern cross-platform projects

Berglund, Dan January 2013 (has links)
As software systems grow in size, they also grow in complexity. If the increased complexity is not managed, the system becomes increasingly difficult to maintain. The effect of unmaintainable software is even more distinct when using an agile development process. By increasing the maintainability of the system, these problems are dealt with and the system can be extended with sustained efficiency. This thesis evaluates the development process of a modern, agile company in order to find changes that will promote increased maintainability. The result is a modified process that increases maintainability with the smallest possible overhead for the development organisation. The result is based on earlier studies of development technologies that have been shown to increase maintainability. The implementation of these technologies is adjusted to fit the development team, and technologies that are not suitable for the team are rejected.
46

Detection of Feature Interactions in Automotive Active Safety Features

Juarez Dominguez, Alma L. January 2012 (has links)
With the introduction of software into cars, many functions are now realized with reduced cost, weight and energy. The development of these software systems is done in a distributed manner independently by suppliers, following the traditional approach of the automotive industry, while the car maker takes care of the integration. However, the integration can lead to unexpected and unintended interactions among software systems, a phenomenon regarded as feature interaction. This dissertation addresses the problem of the automatic detection of feature interactions for automotive active safety features. Active safety features control the vehicle's motion control systems independently from the driver's request, with the intention of increasing passengers' safety (e.g., by applying hard braking in the case of an identified imminent collision), but their unintended interactions could instead endanger the passengers (e.g., simultaneous throttle increase and sharp narrow steering, causing the vehicle to roll over).

My method decomposes the problem into three parts: (I) creation of a definition of feature interactions based on the set of actuators and domain expert knowledge; (II) translation of automotive active safety features designed using a subset of Matlab's Stateflow into the input language of the model checker SMV; (III) analysis using model checking at design time to detect a representation of all feature interactions based on partitioning the counterexamples into equivalence classes. The key novel characteristic of my work is exploiting domain-specific information about the feature interaction problem and the structure of the model to produce a method that finds a representation of all different feature interactions for automotive active safety features at design time.

My method is validated by a case study with the set of non-proprietary automotive feature design models I created. The method generates a set of counterexamples that represent the whole set of feature interactions in the case study. By showing only a set of representative feature interaction cases, the information is concise and useful for feature designers. Moreover, because these results are generated from feature models designed in Matlab's Stateflow and translated into SMV models, the feature designers can trace the counterexamples generated by SMV and understand the results in terms of the Stateflow model. I believe that my results and techniques will have relevance to the solution of the feature interaction problem in other cyber-physical systems, and have a direct impact in assessing the safety of automotive systems.
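As a toy illustration of part (I), the actuator-based interaction definition, the sketch below flags two features that command the same actuator with conflicting values in the same step. This is a simplification for intuition only, not the thesis's Stateflow-to-SMV translation or its model-checking analysis:

```python
# Toy sketch of the actuator-based definition of a feature interaction:
# two features interact when, in the same step, they command the same
# actuator with conflicting values. Feature and actuator names invented.

def detect_interactions(commands):
    """commands: list of (feature, actuator, value) issued in one step."""
    seen = {}       # actuator -> (feature, value) first seen this step
    conflicts = []
    for feature, actuator, value in commands:
        if actuator in seen and seen[actuator][1] != value:
            conflicts.append((seen[actuator][0], feature, actuator))
        else:
            seen[actuator] = (feature, value)
    return conflicts

step = [
    ("collision_avoidance", "brake_torque", "max"),
    ("cruise_control", "brake_torque", "none"),  # conflicting request
]
print(detect_interactions(step))
# [('collision_avoidance', 'cruise_control', 'brake_torque')]
```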
48

Practical software testing for an FDA-regulated environment

Vadysirisack, Pang Lithisay 27 February 2012 (has links)
Unlike hardware, software does not degrade over time or with frequent use, which works in software's favor. Also unlike hardware, software can be easily changed. This unique characteristic gives software much of its power, but it is also responsible for possible failures in software applications. When software is used within medical devices, software failures may result in bodily injury or death. As a result, regulations have been imposed on the makers of medical devices to ensure their safety, including the safety of the devices' software. The U.S. Food and Drug Administration requires the establishment of systems and control processes to ensure quality devices. A principal part of the quality assurance effort is testing. This paper explores the unique role of software testing in the design, development, and release of software used for medical devices and applications. It also provides practical, industry-driven guidance on medical device software testing techniques and strategies.
49

Cutout Manager : a stand-alone software system to calculate output factors for arbitrarily shaped electron beams using Monte Carlo simulation

Last, Jürgen. January 2008 (has links)
In external electron beam therapy, arbitrarily shaped inserts (cutouts) are used to define the contours of the irradiated field. This thesis describes the implementation and verification of a software system to calculate output factors for cutouts using Monte Carlo simulations. The design goals were: (1) a stand-alone software system running on a single workstation; (2) a task-oriented graphical user interface with shape input capability; (3) implementation on Mac OS X (10.4.x, Tiger); (4) CPU multicore support by job splitting; (5) EGSnrc (patch level V4-r2-2-5) for particle transport and dose scoring; (6) validation for clinical use.

The system, called Cutout Manager, can calculate output factors with 1% statistical error in 20 minutes on a Mac Pro computer (Intel Xeon, 4 cores). When the BEAMnrc linac model correctly reproduces percentage depth doses in the buildup region and around R100, calculated and measured output factors are in good agreement with precision measurements of circular cutouts at 100 cm source-to-surface distance (SSD) and at extended SSD. Cutout Manager simulations are consistent with measurements of clinical cutouts within a 2% error margin.
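For readers unfamiliar with the quantity being computed: an output factor is the dose delivered by the shaped field divided by the dose delivered by the reference field under the same conditions. The sketch below shows that ratio together with a simple propagation of the 1% Monte Carlo statistical error; the dose values are placeholders, not data from the thesis:

```python
# Minimal sketch of the quantity Cutout Manager computes: the output
# factor is the dose for the shaped field divided by the dose for the
# reference field. Dose inputs stand in for Monte Carlo scoring results.

def output_factor(dose_cutout_gy: float, dose_reference_gy: float,
                  stat_error: float = 0.01) -> tuple[float, float]:
    """Return the output factor and its propagated statistical error."""
    of = dose_cutout_gy / dose_reference_gy
    # Relative errors of independent MC estimates add in quadrature.
    error = of * (2 * stat_error**2) ** 0.5
    return of, error

of, err = output_factor(dose_cutout_gy=0.95, dose_reference_gy=1.00)
print(f"OF = {of:.3f} +/- {err:.3f}")  # OF = 0.950 +/- 0.013
```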
50

Automated test of evolving software

Shaw, Hazel Anne January 2005 (has links)
Computers and the software they run are pervasive, yet released software is often unreliable, which has many consequences. Loss of time and earnings can be caused by application software (such as word processors) behaving incorrectly or crashing. Serious disruption can occur, as in the 14th August 2003 blackouts in the North East USA and Canada, and serious injury or death can be caused, as in the Therac-25 overdose incidents. One way to improve the quality of software is to test it thoroughly. However, software testing is time consuming, the resources, capabilities and skills needed to carry it out are often not available, and the time required is often curtailed because of pressures to meet delivery deadlines. Automation should allow more thorough testing in the time available and improve the quality of delivered software, but there are some problems with automation that this research addresses.

Firstly, it is difficult to determine if the system under test (SUT) has passed or failed a test. This is known as the oracle problem and is often ignored in software testing research. Secondly, many software development organisations use an iterative and incremental process, known as evolutionary development, to write software. Following release, software continues evolving as customers demand new features and improvements to existing ones. This evolution means that automated test suites must be maintained throughout the life of the software.

A contribution of this research is a methodology that addresses automatic generation of the test cases, execution of the test cases and evaluation of the outcomes from running each test. "Predecessor" software is used to solve the oracle problem. This is software that already exists, such as a previous version of evolving software, or software from a different vendor that solves the same, or similar, problems. However, the resulting oracle is assumed not to be perfect, so rules are defined in an interface, which are used by the evaluator in the test evaluation stage to handle the expected differences. The interface also specifies functional inputs and outputs to the SUT. An algorithm has been developed that creates a Markov Chain Transition Matrix (MCTM) model of the SUT from the interface. Tests are then generated automatically by making a random walk of the MCTM. This means that instead of maintaining a large suite of tests, or a large model of the SUT, only the interface needs to be maintained.
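As a minimal sketch of the test-generation step (the states and probabilities below are invented for illustration; the thesis derives the MCTM from the maintained interface), a random walk over a transition matrix yields a test sequence:

```python
# Minimal sketch: generating a test case by a random walk over a Markov
# Chain Transition Matrix whose states are SUT operations. The states
# and probabilities are illustrative, not from the thesis.

import random

# Transition probabilities between SUT operations (each row sums to 1.0).
mctm = {
    "start": {"open": 1.0},
    "open":  {"edit": 0.7, "close": 0.3},
    "edit":  {"edit": 0.5, "save": 0.4, "close": 0.1},
    "save":  {"edit": 0.6, "close": 0.4},
    "close": {},  # terminal state
}

def random_walk(mctm, start="start", rng=random):
    state, test_case = start, []
    while mctm[state]:  # stop at a terminal state
        nxt = rng.choices(list(mctm[state]), weights=mctm[state].values())[0]
        test_case.append(nxt)
        state = nxt
    return test_case

print(random_walk(mctm))  # e.g. ['open', 'edit', 'save', 'edit', 'close']
```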
