61

An exploratory study of software development measures across COBOL programs

Veeder, Nadine M January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Computer Science.
62

A requirements specification software cost estimation tool

Schneider, Gary David January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Computer Science.
63

A numerical procedure for computing probability of detection for a wideband pulse receiver

Briles, Scott D January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Electrical and Computer Engineering.
64

Making Software More Reliable by Uncovering Hidden Dependencies

Bell, Jonathan Schaffer January 2016 (has links)
As software grows in size and complexity, it also becomes more interdependent. Multiple internal components often share state and data. Whether these dependencies are intentional or not, we have found that their mismanagement often poses several challenges to testing. This thesis seeks to make it easier to create reliable software by making testing more efficient and more effective through explicit knowledge of these hidden dependencies. The first problem that this thesis addresses, reducing testing time, directly impacts the day-to-day work of every software developer. The frequency with which code can be built (compiled, tested, and packaged) directly impacts the productivity of developers: longer build times mean a longer wait before determining if a change to the application being built was successful. We have discovered that in the case of some languages, such as Java, the vast majority of build time is spent running tests. Therefore, it is especially important to focus on approaches to accelerating testing, while simultaneously making sure that we do not inadvertently cause tests to erratically fail (i.e. become flaky). Typical techniques for accelerating tests (like running only a subset of them, or running them in parallel) often can't be applied soundly, since there may be hidden dependencies between tests. While we might think that each test should be independent (i.e. that a test's outcome isn't influenced by the execution of another test), we and others have found many examples in real software projects where tests truly have these dependencies: some tests require others to run first, or else their outcome will change. Previous work has shown that these dependencies are often complicated, unintentional, and hidden from developers. We have built several systems, VMVM and ElectricTest, that detect different sorts of dependencies between tests and use that information to soundly reduce testing time by several orders of magnitude. 
In our first approach, Unit Test Virtualization, we reduce the overhead of isolating each unit test with a lightweight, virtualization-like container, preventing these dependencies from manifesting. Our realization of Unit Test Virtualization for Java, VMVM, eliminates the need to run each test in its own process, reducing test suite execution time by an average of 62% in our evaluation (compared to execution time when running each test in its own process). However, not all test suites isolate their tests: in some, dependencies are allowed to occur between tests. In these cases, common test acceleration techniques such as test selection or test parallelization are unsound in the absence of dependency information. When dependencies go unnoticed, tests can unexpectedly fail when executed out of order, causing unreliable builds. Our second approach, ElectricTest, soundly identifies data dependencies between test cases, allowing for sound test acceleration. To enable broader use of general dependency information for testing and other analyses, we created Phosphor, the first and only portable and performant dynamic taint tracking system for the JVM. Dynamic taint tracking is a form of data flow analysis that applies labels to variables and propagates those labels to every variable derived from the tagged ones. Taint tracking has many applications to software engineering and software testing, and in addition to our own work, researchers across the world are using Phosphor to build their own systems. Towards making testing more effective, we also created Pebbles, which makes it easy for developers to specify data-related test oracles on mobile devices by thinking in terms of high-level objects such as emails, notes or pictures.
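The taint-propagation rule this abstract describes (a label attaches to a value and flows to every value derived from it) can be sketched in a few lines. This is an illustrative toy, not Phosphor's actual JVM-bytecode implementation; the class and label names here are invented for the example:

```python
# Toy sketch of dynamic taint tracking: each value carries a set of labels
# ("taints"), and any value derived from tainted inputs inherits the union
# of its sources' labels. Phosphor applies this rule at the JVM bytecode
# level; this sketch only mirrors the propagation logic.
class Tainted:
    def __init__(self, value, labels=None):
        self.value = value
        self.labels = frozenset(labels or ())

    def __add__(self, other):
        # A derived value carries the union of its sources' labels.
        return Tainted(self.value + other.value, self.labels | other.labels)

secret = Tainted(42, {"secret"})   # tagged input
public = Tainted(8)                # untagged input
derived = secret + public          # result inherits the "secret" label
```

Here `derived.value` is 50 and `derived.labels` still contains `"secret"`, so an analysis can tell that the result depends on tagged data.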
65

The systems resource dictionary : a synergism of artificial intelligence, database management and software engineering methodologies

Salberg, Randall N January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Computer Science.
66

Application of project management software and its influence on project success : a case of NPOs in the Western Cape

Magwali, Silibaziso Nobukhosi January 2018 (has links)
Thesis (MTech (Business Administration in Project Management))--Cape Peninsula University of Technology, 2018. / Though strides have been taken to ensure the availability and application of technology, there still exists some disparity between the envisaged use and the actual one (Ross, Romich & Pena, 2016:48). The application of technology, such as project management software (PMS), could be the answer to unlocking success in projects, especially where a large scope and high degree of complexity can prove very challenging. The research explored how the application of PMS influences project success. A case of NPOs in the Western Cape Province, South Africa, was used. The research objectives were to (1) establish if PMS is applied in the NPO's work, (2) determine employees' interactions with PMS relative to project success, and (3) identify the limitations of the current PMS being used. A non-experimental and quantitative approach was taken to conduct the research. Out of a potential 200 units of analysis, a sample group consisting of 132 project-implementing NPOs in the Western Cape was used. Ninety-four responses were received, setting the response rate at 71%. The research instruments used were questionnaires, which were administered physically and online. The data was analysed using the Statistical Package for the Social Sciences (SPSS) software. There is a high project success rate of 77% among NPO projects in the Western Cape. The research revealed that PMS is utilised in a significant number of organisations, the most popular being Microsoft Project, Project Manager and Jira. Most project offices utilise PMS on a weekly or monthly basis, especially during the project planning and execution stages. The limitations of the software include that it can over-complicate issues, be time-consuming, and be costly. In light of the above, respondents revealed that they believe PMS does have a positive influence on project success. 
Furthermore, based on the findings and conclusions derived from this study, the researcher made a few recommendations. For example, researchers in academia should widen the scope of the study to different geographical locations and use different research approaches. Another is that software engineers/developers should consider localised support for PMS as well as improve on scalability issues. To NPOs, recommendations were made on potential training sessions to capacitate the sector to be more adept at information and communication technology (ICT) and eventually make more use of PMS.
67

A collaboration framework of selecting software components based on behavioural compatibility with user requirements

Wang, Lei Unknown Date (has links)
Building software systems from previously existing components can save time and effort while increasing productivity. The key to successful Component-Based Development (CBD) is to get the required components. However, components obtained from other developers often show different behaviours than those required. Thus adapting the components into the system being developed becomes an extra development and maintenance cost. This cost often offsets the benefits of CBD. Our research goal is to maximise the possibility of finding components that have the required behaviours, so that the component adaptation cost can be minimised. Imprecise component specifications and user requirements are the main reasons for the difficulty of finding the required components. Furthermore, there is little support for component users and developers to collaborate and clear up misunderstandings when selecting components, as CBD has two separate development processes for them. In this thesis, we aim at building a framework in which component users and developers can collaborate to select components with tool support, by exchanging component and requirement specifications. These specifications should be precise enough that behavioural mismatches can be detected. We have defined the Simple Component Interface Language (SCIL) as the communication and specification language to capture component behaviours. A combined SCIL specification of component and requirement can be translated to various existing modelling languages. Thus various properties that are supported by those languages can be checked by the related model checking tools. If all the user-required properties are satisfied, then the component is compatible with the user requirement at the behavioural level, and the component can be selected. 
Based on SCIL, we have developed a prototype component selection system and used it in two case studies: finding a spell checker component and searching for the components for a generic e-commerce application. The results of the case studies indicate that our approach can indeed find components that have the required behaviours. Compared to the traditional way of searching by keywords, our approach is able to get more relevant results, so the cost of component adaptation can be reduced. Furthermore, with a collaborative selection process this cost can be minimised. However, our approach has not achieved complete automation, due to modelling inconsistencies between different people. Some manual work to adjust user requirements is needed when using our system. Future work will focus on solving this remaining problem of inconsistent modelling and on automatically selecting the proper tools.
68

Structured graphs: a visual formalism for scalable graph based tools and its application to software structured analysis

January 1996 (has links)
Very large graphs are difficult for a person to browse and edit on a computer screen. This thesis introduces a visual formalism, structured graphs, which supports the scalable browsing and editing of very large graphs. This approach is relevant to a given application when it incorporates a large graph which is composed of named nodes and links, and abstraction hierarchies which can be defined on these nodes and links. A typical browsing operation is the selection of an arbitrary group of nodes and the display of the network of nodes and links for these nodes. Typical editing operations are adding a new link between two nodes, adding a new node to the hierarchy, and moving sub-graphs to a new position in the node hierarchy. These operations are scalable when the number of user steps involved remains constant regardless of how large the graph is. This thesis shows that with structured graphs, these operations typically take one user step. We demonstrate the utility of the structured graph formalism in an application setting. Computer aided software engineering tools, and in particular, structured analysis tools, are the chosen application area for this thesis, as they are graph based, and existing tools, though adequate for medium sized systems, lack scalability. In this thesis, examples of an improved design for a structured analysis tool, based on structured graphs, are given. These improvements include scalable browsing and editing operations to support an individual software analyst, and component composition operations to support the construction of large models by a group of software analysts. Finally, we include proofs of key properties and descriptions of two text based implementations.
69

Advanced separation of concerns and the compatibility of aspect-orientation

Dechow, Doug 18 March 2005 (has links)
The appropriate separation of concerns is a fundamental engineering principle. A concern, for software developers, is that which must be represented by code in a program; by extension, separation of concerns is the ability to represent a single concern in a single appropriate programming language construct. Advanced separation of concerns is a relatively recent technique in software development for dealing with the complexity of systems that contain crosscutting concerns, namely those individual concerns that cut across programs. Aspect-oriented programming (AOP), which is the area of this dissertation, offers a form of advanced separation of concerns in which primary and crosscutting concerns can be separated during problem solving. An aspect gathers into one place a concern that is or would otherwise be scattered throughout an object-oriented program or system. The primary aim of this dissertation, the AOPy project, is to investigate the usefulness of the advanced separation of concerns that aspect-oriented programming offers. In other words, the AOPy Project determines whether the potential usefulness of aspect-oriented programming is currently actualized in practice. In determining its current practical usefulness, this dissertation also determines characteristics of and obstacles to the usefulness of aspect-orientation in software development. Perhaps the most important contribution that this dissertation makes to understanding and addressing the problem of complexity in software systems is that the AOPy research project establishes a definition of compatibility of aspect-orientation and provides an analysis of sample instances during problem solving that indicate evidence of compatibility between object-orientation and aspect-orientation. Compatibility, as defined by the AOPy Project, exists when aspect-oriented ideas, terminology, and techniques are appropriately employed in the experimental problem-solving session. 
The primary scientific contribution of this dissertation, therefore, is a narrative description of the actual use of aspect-oriented programming in a series of controlled, problem-solving scenarios. Theories describing the use of aspect-oriented ideas, terminology, and techniques are generated and refined by means of Grounded Theory, a qualitative data analysis technique. Because this dissertation 1) analytically explores areas of compatibility of aspect-orientation with object-orientation and 2) defines areas of compatibility thwarted in practice, this research project can serve as a foundation for the development of aspect-oriented programming-based design methodologies that encourage compatibility and discourage non-compatibility. Therefore, the AOPy Project establishes a foundation for future research in both its methodology and its results and for future software development in practice. By contributing a definition of aspect-oriented compatibility and a framework within which it can be understood, this dissertation fosters the progression toward a seamless use of aspect-orientation between developer and task. / Graduation date: 2005
70

Strategies and behaviors of end-user programmers with interactive fault localization

Prabhakararao, Shreenivasarao 03 December 2003 (has links)
End-user programmers are writing an unprecedented number of programs, due in large part to the significant effort put forth to bring programming power to end users. Unfortunately, this effort has not been supplemented by a comparable effort to increase the correctness of these often faulty programs. To address this need, we have been working towards bringing fault localization techniques to end users. In order to understand how end users are affected by and interact with such techniques, we conducted a think-aloud study, examining the interactive, human-centric ties between end-user debugging and a fault localization technique for the spreadsheet paradigm. Our results provide insights into the contributions such techniques can make to an interactive end-user debugging process. / Graduation date: 2004
