161

A framework for a successful collaboration culture in software development and operations (DevOps) environments

Masombuka, Koos Themba 03 1900
Traditional software development methodologies historically placed the creation of software products in two separate departments: a development department, which codes and tests the software, and an operations department, which is responsible for its deployment. This siloed arrangement is not aligned with modern practice, which requires a timeous response to changes without necessarily delaying the product release. DevOps culture addresses the silo problem by creating an enabling environment for the two departments to collaborate throughout the software development life cycle. Successful implementation of DevOps culture should give an organisation a competitive advantage over its rivals by enabling it to respond to changes much faster than traditional methodologies allow. However, there is no coherent framework for how organisations should implement DevOps culture. Hence, this study aimed to develop a framework for the implementation of DevOps culture by identifying the important factors that such a framework should include. The literature survey revealed that open communication, alignment of roles and responsibilities, respect and trust are the main factors that constitute DevOps collaboration culture. The proposed framework was underpinned by the Information System Development Model, which suggests that the acceptance of a new technology by software developers is influenced by social norm, organisational usefulness and perceived behavioural control. A sequential mixed method was used to survey and interview respondents from South Africa, who were selected using convenience and purposive sampling. Statistical analysis of the quantitative data acquired through the questionnaire was followed by a qualitative analysis of the interviews. The results showed that open communication, respect and trust are the key success factors to be included in the framework; the roles-and-responsibilities factor was not statistically significant. This study contributes to the understanding of the factors necessary for the acceptance of DevOps culture in a software development organisation. DevOps managers can use the results to adopt and implement DevOps culture successfully, and the study adds to the theoretical literature on software development by identifying the factors that matter in the acceptance of DevOps collaboration culture. / School of Computing / Ph. D. (Computer Science)
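The abstract does not specify the statistical model used. As a minimal sketch of the kind of significance test described, assuming Likert-scale questionnaire scores and an ordinary least-squares fit (all variable names and data below are hypothetical):

```python
# Illustrative only: hypothetical Likert-scale survey data testing whether
# collaboration-culture factors predict acceptance of DevOps culture.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120  # hypothetical number of respondents
# Columns: open communication, respect, trust, roles/responsibilities
factors = rng.integers(1, 6, size=(n, 4)).astype(float)  # 5-point Likert items
# Simulated outcome: the fourth factor contributes nothing, mirroring the
# finding that roles/responsibilities was not statistically significant.
acceptance = (0.5 * factors[:, 0] + 0.4 * factors[:, 1]
              + 0.3 * factors[:, 2] + rng.normal(0, 1, n))

X = sm.add_constant(factors)
model = sm.OLS(acceptance, X).fit()
print(model.summary())  # per-factor p-values indicate significance
```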
162

Programming the INTEL 8086 microprocessor for GRADS: a graphic real-time animation display system

Haag, Roger. January 1985
No description available.
163

MDE-URDS: A Mobile Device Enabled Service Discovery System

Pradhan, Ketaki A. 16 August 2011
Indiana University-Purdue University Indianapolis (IUPUI) / Component-Based Software Development (CBSD) has gained widespread importance in recent times due to its broad applicability in software development. System developers can now pick and choose from pre-existing components to suit their requirements when building a system. For the purpose of developing a quality-aware system, finding suitable components offering the required services is an essential step. Hence, service discovery is an important step in the development of systems composed from already existing quality-aware software services. Currently, there is a plethora of new-age devices, such as PDAs and cell phones, that automate daily activities and provide pervasive connectivity to users. The special characteristics of these devices (e.g., mobility, heterogeneity) make them attractive choices for hosting services, so they need to be considered and integrated in the service discovery process. However, their limited battery life, intermittent connectivity and constrained processing capabilities make this task far from simple. This research addresses the challenge of including resource-constrained devices by enhancing the UniFrame Resource Discovery System (URDS) architecture. The enhanced architecture is called the Mobile Device Enabled Service Discovery System (MDE-URDS). Experimental validation suggests that MDE-URDS is a scalable and quality-aware system that handles the limitations of mobile devices using existing and well-established algorithms and protocols, such as Mobile IP.
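The URDS internals are not described in the abstract. As a rough illustration of the discovery step it enhances, a minimal in-memory registry in which mobile-hosted services advertise themselves with quality attributes and clients filter on them (all names and fields below are hypothetical, not the actual MDE-URDS protocol):

```python
# Illustrative sketch of quality-aware service discovery; names hypothetical.
from dataclasses import dataclass, field

@dataclass
class ServiceAd:
    name: str
    host: str          # e.g. a mobile device's care-of address under Mobile IP
    battery_pct: int   # advertised resource constraint of the hosting device
    latency_ms: float  # advertised quality attribute

@dataclass
class Registry:
    ads: list[ServiceAd] = field(default_factory=list)

    def register(self, ad: ServiceAd) -> None:
        self.ads.append(ad)

    def discover(self, name: str, max_latency_ms: float, min_battery_pct: int):
        # Filter out services whose host is too constrained to be reliable.
        return [a for a in self.ads
                if a.name == name
                and a.latency_ms <= max_latency_ms
                and a.battery_pct >= min_battery_pct]

reg = Registry()
reg.register(ServiceAd("weather", "10.0.0.7", battery_pct=80, latency_ms=40.0))
reg.register(ServiceAd("weather", "10.0.0.9", battery_pct=15, latency_ms=25.0))
print(reg.discover("weather", max_latency_ms=50.0, min_battery_pct=30))
```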
164

The identification of semantics for the file/database problem domain and their use in a template-based software environment

Shubra, Charles John January 1984
No description available.
165

Supervisory methodology and notation (SUPERMAN) for human-computer system development

Yunten, Tamer January 1985
The underlying goal of SUPERvisory Methodology And Notation (SUPERMAN) is to enhance the productivity of human-computer system developers by providing easy-to-use concepts and automated tools for developing high-quality (e.g., human-engineered, cost-effective, easy-to-maintain) target systems. The supervisory concept of the methodology integrates the functions of many modeling techniques and allows a complete representation of the designer's conceptualization of a system's operation. The methodology views humans as functional elements of a system alongside its computer elements. The parts of the software that implement human-computer interaction are separated from the rest of the software, and a single, unified system representation is used throughout the system life cycle. The concepts of the methodology are built notationally into a graphical programming language; using this language to develop a system leads to a natural and orderly application of the methodology. / Ph. D.
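As a rough illustration of the separation the methodology prescribes, a sketch in which dialogue (human-computer interaction) functions are kept apart from computational functions and sequenced by a supervisory function; this is illustrative only, not SUPERMAN's graphical notation:

```python
# Sketch of separating interaction code from computation, coordinated by a
# supervisory function. Purely illustrative; not SUPERMAN's notation.

def compute_total(prices: list[float]) -> float:
    """Computational function: no I/O, reusable regardless of interface."""
    return sum(prices)

def dialogue_read_prices() -> list[float]:
    """Dialogue function: all human-computer interaction lives here."""
    raw = input("Enter prices separated by spaces: ")
    return [float(tok) for tok in raw.split()]

def dialogue_show_total(total: float) -> None:
    print(f"Total: {total:.2f}")

def supervisor() -> None:
    """Supervisory function: sequences human and computer functions."""
    prices = dialogue_read_prices()   # human function
    total = compute_total(prices)     # computer function
    dialogue_show_total(total)        # human function

if __name__ == "__main__":
    supervisor()
```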
166

An object-oriented software development environment for geometric modeling in intelligent computer aided design

Lin, Wenhsyong 14 December 2006
The concept of intelligent CAD systems that assist a designer by automating the design process has been discussed for years. It has been recognized that knowledge engineering techniques and the study of design theory can contribute solutions to this problem. A major issue in developing intelligent CAD systems for geometric modeling is the integration of the design geometry with the representation of the design constraints. Current commercial computer aided design (CAD) systems are used primarily for recording the results of the design process. Using a conventional CAD system, a design engineer must either create the geometry of the design object with precise coordinates and dimensions, or start from the existing geometry of a previous design. It is difficult to propagate a dimensional change throughout an entire model, especially a solid model. This rigidity discourages a designer from exploring different approaches when creating a novel product. / Ph. D.
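The propagation problem the abstract describes can be illustrated with a toy parametric model in which dimensions are derived from shared parameters, so one change propagates through the whole geometry. This is purely illustrative, not the author's system:

```python
# Toy parametric model: dimensions are functions of shared parameters, so
# changing one parameter propagates through the geometry. Illustrative only.

class Parameter:
    def __init__(self, value: float):
        self.value = value

class Block:
    """A rectangular solid whose dimensions are derived from parameters."""
    def __init__(self, width: Parameter, aspect: float, height: Parameter):
        self.width = width
        self.aspect = aspect      # depth is constrained to width * aspect
        self.height = height

    def dimensions(self) -> tuple[float, float, float]:
        return (self.width.value, self.width.value * self.aspect, self.height.value)

w, h = Parameter(10.0), Parameter(4.0)
block = Block(w, aspect=0.5, height=h)
print(block.dimensions())   # (10.0, 5.0, 4.0)

w.value = 20.0              # one change propagates: depth follows width
print(block.dimensions())   # (20.0, 10.0, 4.0)
```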
167

A validation software package for discrete simulation models

Florez, Rossanna E. January 1986
This research examined the simulation model validation process. After a model is developed, its reliability should be evaluated using validation techniques. This research was concerned with the validation of discrete simulation models that simulate an existing physical system. While many validation techniques are available in the literature, only those that compare available real-system data to model data were considered. Three of these techniques were selected and automated in a microcomputer software package. The package consists of six programs intended to aid the user in the model validation process. DATAFILE accepts real and model data and creates files in DIF format. DATAGRAF plots real against model system responses and provides histograms of the variables. These two programs are based on the approach used in McNichol's statistical software. Hypothesis tests comparing real and model responses are conducted using TESTHYPO. The potential cost of using an invalid model, in conjunction with the choice of the alpha level of significance, is analyzed in COSTRISK. A non-parametric hypothesis test can be performed using NOTPARAM. Finally, a global validity measure can be obtained using VALSCORE. The software includes brief explanations of each technique and its use. It was written in BASIC and demonstrated using a simulation model together with hypothetical but realistic system data. The hardware chosen was the IBM Personal Computer with 256K of memory. / M.S.
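The abstract does not name the specific tests. A minimal sketch of the comparison that TESTHYPO and NOTPARAM automate, using a two-sample t-test and a non-parametric counterpart on hypothetical response data:

```python
# Comparing real-system responses with simulation-model responses, in the
# spirit of TESTHYPO (parametric) and NOTPARAM (non-parametric).
# Data are hypothetical; the thesis's exact test choices are not specified.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
real = rng.normal(loc=12.0, scale=2.0, size=30)    # observed system responses
model = rng.normal(loc=12.4, scale=2.1, size=30)   # simulated responses

t_stat, t_p = stats.ttest_ind(real, model, equal_var=False)  # Welch's t-test
u_stat, u_p = stats.mannwhitneyu(real, model)                # non-parametric

alpha = 0.05  # COSTRISK would weigh this choice against the cost of error
print(f"t-test p={t_p:.3f}, Mann-Whitney p={u_p:.3f}")
print("model consistent with system" if t_p > alpha else "reject model")
```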
168

Behavioral demonstration: an approach to rapid prototyping and requirements execution

Callan, James E. January 1985
This thesis presents an approach to rapid prototyping called behavioral demonstration, which allows a system to be demonstrated at any point during its development. The approach is based on the operational specification approach to software design and uses a new, automation-based life-cycle paradigm. The work describes a supporting tool, the behavioral demonstrator, which collects and manages information that is typically lost during software system design but critically needed during maintenance. The tool also supports project-personnel management as well as software complexity and cost estimation. The research takes place in the context of a dialogue management system and a software design methodology featuring the logical and physical separation of the input, processing, and output components of interactive systems. / M.S.
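One way to make "demonstrable at any point during development" concrete is to let unimplemented functions fall back to stub behavior so the whole system can still be exercised. The sketch below is illustrative only, not the thesis's tool:

```python
# Sketch: unimplemented functions fall back to stub results so a partially
# built system can be demonstrated end to end. Illustrative only.

def demonstrable(stub_result):
    """Run the real implementation if it exists, else the stub."""
    def wrap(fn):
        def inner(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except NotImplementedError:
                print(f"[demo] stubbing {fn.__name__}{args}")
                return stub_result
        return inner
    return wrap

@demonstrable(stub_result=42.0)
def price_quote(item: str) -> float:
    raise NotImplementedError  # not yet built; the demo still runs

@demonstrable(stub_result=None)
def record_order(item: str, price: float) -> None:
    print(f"order recorded: {item} at {price}")  # already implemented

price = price_quote("widget")
record_order("widget", price)
```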
169

Flexible manufacturing system software development using simulation

Martin, Timothy Patrick January 1985
This paper presents a hierarchical modeling method that can be used to simulate a Flexible Manufacturing System (FMS) at all levels of detail. The method was developed specifically to aid the software development needed for the hierarchy of computers present in an FMS, and it was developed by modeling an existing FMS. The resulting models are described in detail as an example of how to model other FMSs, and the basic building blocks needed to design other FMSs with this method are provided. The models were written in the SIMAN simulation language, which was found to be an easy language to use for the hierarchical modeling of FMSs. / M.S.
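The SIMAN source is not reproduced in the abstract. As a rough analogue of hierarchical FMS modeling, a minimal discrete-event sketch using the SimPy library, in which a cell-level model composes machine-level sub-models (the structure and processing times are hypothetical):

```python
# Hierarchical FMS sketch: a cell model composed of machine sub-models.
# Illustrative analogue of the thesis's SIMAN models; details hypothetical.
import simpy

def machine(env, name, proc_time, in_store, out_store):
    """Lowest level of the hierarchy: one machine processing parts."""
    while True:
        part = yield in_store.get()
        yield env.timeout(proc_time)              # machining time
        print(f"{env.now:5.1f}  {name} finished {part}")
        yield out_store.put(part)

def cell(env, n_parts):
    """Cell level: wires two machines into a serial line."""
    buf_in, buf_mid, buf_out = (simpy.Store(env) for _ in range(3))
    env.process(machine(env, "mill ", 3.0, buf_in, buf_mid))
    env.process(machine(env, "drill", 2.0, buf_mid, buf_out))
    for i in range(n_parts):
        yield buf_in.put(f"part-{i}")

env = simpy.Environment()
env.process(cell(env, n_parts=3))
env.run(until=30)
```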
170

The application of structure and code metrics to large scale systems

Canning, James Thomas January 1985
This work extends the area of research termed software metrics by applying measures of system structure and measures of system code to three realistic software products. Previous research in this area has typically been limited to code metrics such as lines of code, McCabe's cyclomatic number, and Halstead's software science variables. This research also investigates the relationship of four structure metrics (Henry's Information Flow measure, Woodfield's Syntactic Interconnection Model, Yau and Collofello's Stability measure, and McClure's Invocation Complexity) to observed measures of complexity such as ERRORS, CHANGES and CODING TIME. These metrics are referred to as structure measures because they measure the control-flow and data-flow interfaces between system components. Spearman correlations between the metrics revealed that the code metrics were similar measures of system complexity, while the structure metrics typically measured different dimensions of the software. Furthermore, correlating the metrics with the observed measures of complexity indicated that the Information Flow metric and the Invocation measure typically performed as well as the three code metrics once project and subsystem factors were taken into consideration. However, no single metric satisfactorily explained the variation in the data for any one observed measure of complexity. Trends between many of the metrics and the observed data emerged when individual components were grouped together: code metrics typically formed groups of increasing complexity that corresponded to increases in the mean values of the observed data, while the strength of the Information Flow metric and the Invocation measure lies in their ability to form a group of highly complex components that was populated by outliers in the observed data. / Ph. D.
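As a small illustration of the analysis described, computing Spearman rank correlations between per-component metric values and observed error counts (all data below are hypothetical):

```python
# Spearman rank correlation between software metrics and an observed measure
# of complexity (ERRORS), as in the analysis above. Data are hypothetical.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n = 40  # hypothetical number of system components
loc = rng.integers(20, 500, n)                       # lines of code
cyclomatic = np.maximum(1, (loc / 25 + rng.normal(0, 3, n)).astype(int))
info_flow = rng.integers(1, 10, n) ** 2 * loc        # (fan-in * fan-out)^2 style
errors = 0.01 * loc + 0.2 * cyclomatic + rng.poisson(2, n)  # observed ERRORS

for name, metric in [("LOC", loc), ("cyclomatic", cyclomatic),
                     ("info flow", info_flow)]:
    rho, p = spearmanr(metric, errors)
    print(f"{name:10s} rho={rho:+.2f} p={p:.3f}")
```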
