About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Real-time software development for data storage and event recording of a satellite ground control station

Patel, Prashant R. 01 July 2003
No description available.
172

The application of structure and code metrics to large scale systems

Canning, James Thomas January 1985
This work extends the area of research termed software metrics by applying measures of system structure and measures of system code to three realistic software products. Previous research in this area has typically been limited to the application of code metrics such as lines of code, McCabe's Cyclomatic number, and Halstead's software science variables. This research, however, also investigates the relationship of four structure metrics (Henry's Information Flow measure, Woodfield's Syntactic Interconnection Model, Yau and Collofello's Stability measure, and McClure's Invocation complexity) to various observed measures of complexity such as ERRORS, CHANGES, and CODING TIME. These metrics are referred to as structure measures since they measure control-flow and data-flow interfaces between system components. Spearman correlations between the metrics revealed that the code metrics were similar measures of system complexity, while the structure metrics typically measured different dimensions of software. Furthermore, correlating the metrics with the observed measures of complexity indicated that the Information Flow metric and the Invocation measure typically performed as well as the three code metrics when project factors and subsystem factors were taken into consideration. In general, however, no single metric satisfactorily explained the variation in the data for any single observed measure of complexity. Trends between many of the metrics and the observed data were identified when individual components were grouped together. Code metrics typically formed groups of increasing complexity which corresponded to increases in the mean values of the observed data. The strength of the Information Flow metric and the Invocation measure is their ability to form a group of highly complex components, which was found to be populated by outliers in the observed data. / Ph. D.
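As an editorial illustration of the kind of analysis this abstract describes (none of the code or data below is from the thesis), the sketch computes McCabe's cyclomatic number V(G) = E - N + 2P for a few invented components and checks its Spearman rank correlation against invented error counts; scipy is assumed to be available.

```python
# Hypothetical illustration (not from the thesis): correlate a code metric
# with observed error counts, in the spirit of the analysis described above.
from scipy.stats import spearmanr

def cyclomatic_number(edges: int, nodes: int, components: int = 1) -> int:
    """McCabe's cyclomatic number V(G) = E - N + 2P of a control-flow graph."""
    return edges - nodes + 2 * components

# Invented (edges, nodes) pairs for five components, plus invented error counts.
flow_graphs = [(12, 10), (25, 18), (7, 7), (40, 28), (15, 13)]
observed_errors = [3, 9, 1, 14, 4]

v_g = [cyclomatic_number(e, n) for e, n in flow_graphs]
rho, p = spearmanr(v_g, observed_errors)
print(f"V(G) = {v_g}, Spearman rho = {rho:.2f}, p = {p:.3f}")
```

A structure metric such as Information Flow would enter the same analysis simply as another column of per-component values correlated against the observed data.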
173

Formal Verification of Quantum Software

Tao, Runzhou January 2024
Real applications of near-term quantum computing are around the corner, and quantum software is a key component. Unlike classical computing, quantum software is threatened both by quantum hardware errors and by human bugs arising from the unintuitive nature of quantum physics. Trustworthiness and reliability are therefore critical to the success of quantum computation. However, most traditional methods for ensuring software reliability, such as testing, do not transfer to quantum at scale because of the destructive and probabilistic nature of quantum measurement and the exponentially sized state space. In this thesis, I introduce a series of frameworks that ensure the trustworthiness of quantum computing software through automated formal verification. First, I present Giallar, a fully automated verification toolkit for quantum compilers that formally proves a compiler is bug-free. Giallar requires no manual specifications, invariants, or proofs, and can automatically verify that a compiler pass preserves the semantics of quantum circuits. To deal with unbounded loops in quantum compilers, Giallar abstracts three loop templates whose loop invariants can be inferred automatically. To efficiently check the equivalence of arbitrary input and output circuits, whose matrix semantics are complicated to represent, Giallar introduces a symbolic representation for quantum circuits and a set of rewrite rules for showing the equivalence of symbolic quantum circuits. With Giallar, I implemented and verified 44 (out of 56) compiler passes in 13 versions of the Qiskit compiler, the open-source quantum compiler standard, in the course of which three bugs were detected in and confirmed by Qiskit. The evaluation shows that most Qiskit compiler passes can be verified automatically in seconds and that verification imposes only a modest overhead on compilation performance. Second, I introduce Gleipnir, an error analysis framework for quantum programs that enables scalable and adaptive verification of quantum error through the application of tensor networks. Gleipnir introduces the (ρ̂, δ)-diamond norm, an error metric constrained by a quantum predicate consisting of the approximate state ρ̂ and its distance δ to the ideal state ρ. This predicate (ρ̂, δ) can be computed adaptively using tensor networks based on Matrix Product States. Gleipnir features a lightweight logic for reasoning about error bounds in noisy quantum programs, based on the (ρ̂, δ)-diamond norm metric. Experimental results show that Gleipnir efficiently generates tight error bounds for real-world quantum programs with 10 to 100 qubits, and can be used to evaluate the error-mitigation performance of quantum compiler transformations. Finally, I present QSynth, a quantum program synthesis framework that synthesizes verified recursive quantum programs; it comprises a new inductive quantum programming language, its specification, a sound logic for reasoning, and an encoding of the reasoning procedure into SMT instances. By leveraging existing SMT solvers, QSynth successfully synthesizes 10 quantum unitary programs, including quantum arithmetic programs, quantum eigenvalue inversion, quantum teleportation, and the Quantum Fourier Transform, which can readily be transpiled to executable programs on major quantum platforms, e.g., Q#, IBM Qiskit, and AWS Braket.
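To make the matrix-semantics equivalence check concrete, here is a toy sketch (not Giallar's symbolic algorithm, which deliberately avoids explicit matrices) verifying the standard rewrite rule H·Z·H = X up to a global phase with NumPy.

```python
# Toy matrix-semantics check of a quantum rewrite rule (H Z H == X).
# Giallar reasons symbolically; this sketch uses concrete 2x2 unitaries.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
Z = np.array([[1, 0], [0, -1]])               # Pauli-Z gate
X = np.array([[0, 1], [1, 0]])                # Pauli-X gate

def equal_up_to_global_phase(U, V, tol=1e-9):
    """True if U = exp(i*theta) * V for some real theta."""
    i, j = np.unravel_index(np.argmax(np.abs(V)), V.shape)
    phase = U[i, j] / V[i, j]            # candidate global phase factor
    return np.allclose(U, phase * V, atol=tol)

# The circuit H-then-Z-then-H has unitary H @ Z @ H (a palindrome, so
# operator ordering does not matter here); the rule says it equals X.
print(equal_up_to_global_phase(H @ Z @ H, X))  # True
```

Explicit matrices grow exponentially with qubit count, which is exactly why a symbolic representation with rewrite rules is needed at compiler scale.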
174

Program management practices in context of Scrum : a case study of two South African software development SMMEs

Singh, Alveen January 2015
Submitted in fulfilment of the requirements for the degree of Doctor of Technology: Information Technology, Durban University of Technology, Durban, 2015. / Agile approaches have proliferated within the software development arena over the past decade. Derived mainly from Lean manufacturing principles, agile planning and control mechanisms appear minimal and fluid when compared to more traditional software engineering approaches. Scrum ranks among the more popular permutations of agile. Contemporary literature offers a rich source of contributions for agile in areas such as practice guidelines, experience reports, and methodology tailoring, but the vast majority of these publications focus on the individual project level only, leaving much uncertainty and persistent questions in the multi-project space. Questions have recently been raised, by academics and practitioners alike, concerning the ability of Scrum to scale from the individual project level to the multi-project space. Program management encompasses the practice and research areas concerned mainly with harmonizing competing simultaneous projects. Existing literature on program management essentially perceives projects as endeavours that can be carefully planned at the outset and controlled with a strong emphasis on economic and schedule considerations. This complexion seems to be largely a result of well-established and ingrained management frameworks such as those of the Project Management Institute (PMI), and is largely at odds with emerging practices like Scrum. This disparity represents a gap in the literature and supports the need for deeper exploration. The conduit for this exploration was found in two South African software development small, medium and micro enterprises (SMMEs) practicing Scrum. The practical realities and constraints faced by these SMMEs elicited the need for more dynamic program management practices in support of their quest to maximize the use of limited resources. This thesis examines these practices with the aim of providing new insights into the program management discourse in the context of Scrum software development environments. The research approach is qualitative and interpretive in nature. The in-depth exploratory case study employed the two software SMMEs as units of analysis. Traditional ethnographic techniques were used alongside minimal researcher participation in project activities. Activity Theory honed the data analysis effort and helped to unearth the interrelationships between SMME characteristics, program management practices, and Scrum software development. The results of the data analysis were further refined and fashioned into eleven knowledge areas that serve as containers of program management practices, the product of a thematic analysis of the literature and of data generated from fieldwork. Since the observed practices were highly dynamic in nature, concept analysis provided a mechanism by which to depict them as snapshots in time. As a theoretical contribution, frameworks are proposed to show how program management practices might be understood in the context of organizations striving towards agile implementation. Furthermore, representations of the mutually influential interfaces between SMME characteristics and Scrum techniques that give rise to the observed fluid nature of program management practices are brought to the fore.
175

Multi-user game development

Hung, Cheng-Yu 01 January 2007
This project included the development of a multi-user game that takes place in a three-dimensional world modelled on the computer science department. The game allows prospective students to meet existing students and faculty in a virtual open house that takes place on the third floor of Jack Brown Hall. Users can walk around Jack Brown Hall and type text messages to chat with each other.
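The chat feature described here reduces to a message broadcast loop. The following is a minimal hypothetical sketch in Python using non-blocking sockets; the project itself targeted a 3D engine and its actual code is not reproduced here.

```python
# Minimal hypothetical sketch of the broadcast loop behind multi-user chat;
# every line a user types is relayed to all other connected users.
import selectors
import socket

sel = selectors.DefaultSelector()
server = socket.socket()                 # TCP/IPv4 by default
server.bind(("localhost", 5000))         # port chosen arbitrarily
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)
clients = []

while True:
    for key, _ in sel.select():
        if key.fileobj is server:        # a new user joins the open house
            conn, _ = server.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
            clients.append(conn)
        else:
            data = key.fileobj.recv(1024)
            if data:                     # relay the chat line to everyone else
                for c in clients:
                    if c is not key.fileobj:
                        c.sendall(data)
            else:                        # user disconnected
                sel.unregister(key.fileobj)
                clients.remove(key.fileobj)
                key.fileobj.close()
```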
176

Towards the elicitation of hidden domain factors from clients and users during the design of software systems

Friendrich, Wernher Rudolph 11 1900
This dissertation focuses on how requirements for a new software development system are elicited and on the pitfalls that could cause a software development project to fail if those requirements are not captured correctly. A number of existing requirements elicitation methods are covered, namely JAD (Joint Application Design), RAD (Rapid Application Development), a formal specification language (Z), natural language, UML (Unified Modelling Language), and prototyping. These techniques are then integrated into existing software development life cycle models, such as the Waterfall model, the Rapid Prototyping model, the Build-and-Fix model, the Spiral model, the Incremental model, and the V-Process model. Differences between the domain (knowledge and experience of an environment) of a client and that of the software development team are highlighted diagrammatically using Venn diagrams. The dissertation also refers to a case study highlighting a number of problems in the requirements elicitation process, among others the problem of tacit knowledge not surfacing during elicitation. Two new requirements elicitation methodologies are proposed, namely the SRE (Solitary Requirements Elicitation) and the DDI (Developer Domain Interaction) methodologies. These two methods could be more time consuming than existing requirements elicitation methods, but the benefits could outweigh the cost of their implementation, since the proposed methods have the potential to further facilitate the successful completion of a software development project. The new methods are then applied to the aforementioned case study to show how the hidden domain of the client may become more visible once the software development team has gained a deeper understanding of the client's working environment, and hence of how the final product needs to function in order to fulfil the stated requirements correctly. The dissertation closes with a summary, conclusions, and future work that could be undertaken in this area. / Computer Science / M. Sc. (Computer Science)
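The Venn-diagram view of client and developer domains can be illustrated with plain sets; the domain items below are invented examples, not drawn from the case study. The hidden domain is whatever the client knows that the development team does not.

```python
# Invented example domains; the "hidden domain" is client knowledge that
# never surfaces to the development team during elicitation.
client_domain = {"order capture", "invoice approval chain",
                 "month-end freeze", "regulatory audit trail"}
developer_domain = {"order capture", "regulatory audit trail",
                    "database design", "web frameworks"}

shared = client_domain & developer_domain   # surfaces naturally in workshops
hidden = client_domain - developer_domain   # tacit knowledge at risk of loss

print("Shared understanding:", sorted(shared))
print("Hidden domain factors:", sorted(hidden))
```

On this reading, the SRE and DDI methodologies are ways of shrinking the hidden set before design begins.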
177

A model using ICT adoption and training to improve the research productivity of academics

Basak, Sujit Kumar January 2015
Submitted in fulfilment of the requirements of the Doctor of Technology degree in Information Technology, Durban University of Technology, Durban, South Africa, 2015. / Research productivity is one of the core functions of a university and plays a crucial role in a nation's development and global standing. This study examined the effect of ICT adoption and training on the research productivity of university academics. Much research has been done on using technology in research with a view to increasing productivity; however, hardly any research could be found on the use of ICT combined with ICT training for this purpose. This study addressed that gap in the literature. The study sought to design a model that can increase the research productivity of academics while optimizing the effects of ICT adoption and training. It was conducted at four public universities in KwaZulu-Natal, South Africa, while the ICT training component was conducted at one of the four universities. The study combined a survey of 103 university academics with experimental sessions in which research tasks were performed using ICT tools (EndNote, NVivo, AMOS, SPSS, and Turnitin) with training, using the same tools without training, and finally using a manual system (without research software/tools or training). The overall aim was to investigate and design a model for increasing the research productivity of university academics after they have adopted ICTs. The final results revealed that using the ICT tools with training increases research productivity compared with using the tools without training or using a manual system. A statistically validated model is recommended with a view to increasing the research productivity of academics.
178

FeatureIT : a platform for collaborative software development

Siller, Gavin George 24 October 2013
The development of enterprise software is a complex activity that requires a diverse set of stakeholders to communicate and coordinate in order to achieve a successful outcome. In this dissertation I introduce a high-level physical architecture for a platform, titled FeatureIT, whose goal is to support collaboration between stakeholders throughout the entire Software Development Life Cycle (SDLC). FeatureIT is the result of unifying the theoretical foundations of the multi-disciplinary field of Computer Supported Cooperative Work (CSCW) with the paradigm and associated technologies of Web 2.0. The architecture was borne out of a study of the literature in the fields of CSCW, Web 2.0, and software engineering, which facilitated the identification of the functional and non-functional requirements necessary for the platform. The design science research methodology was employed to construct the architecture iteratively so as to satisfy these requirements while validating its efficacy against a comprehensive set of scenarios that typically occur in the SDLC. / Computing / M. Sc. (Information Systems)
179

An instrument analysis system based on a modern relational database and distributed software architecture

Brand, Jacobus Edwin 03 1900
Thesis (MBA)--Stellenbosch University, 2003. / ENGLISH ABSTRACT: This document discusses the development of a personal computer based financial instrument analysis system, built on information from a relatively old sequential-file-based data source. The aim is to modernise the system to use the latest software and data storage technology. The principles used in the design of the system are discussed in Chapter 2: first the principles for the development of relational databases, and thereafter the development of personal computer based software architecture, to explain the choices made in the design of the system. Chapter 3 discusses the design and implementation of the system in more detail, based on the principles set out in Chapter 2. Recommendations include a possible shift in architectural layout as well as recommendations for expansion of both the data stored and the analysis performed on the information. / AFRIKAANS ABSTRACT (translated): This document discusses the development of a personal computer based financial instrument analysis system, based on information from a relatively old sequential-file-based data source. The aim is to modernise the system so as to make use of the latest software and hardware technology. The principles used in the design of the system are briefly discussed in Chapter 2: the principles for the design of a relational database are covered, after which the development of personal computer based software architecture is discussed to shed more light on the choices made in designing the system's architecture. Chapter 3 discusses the design and implementation of the system in more detail, based on the principles discussed in Chapter 2. Suggestions for improving the system include detailed changes to the system's architecture, as well as proposals for extending the system in terms of the types of data stored and its analytical capabilities.
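As a sketch of the relational modernisation the abstract describes (the schema below is invented for illustration; the thesis does not publish its tables), here is a minimal normalised layout for instrument price data using Python's built-in sqlite3 module.

```python
# Hypothetical relational schema for financial-instrument data, replacing
# a sequential-file layout with normalised tables (illustration only).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE instrument (
    instrument_id INTEGER PRIMARY KEY,
    ticker        TEXT NOT NULL UNIQUE,
    name          TEXT NOT NULL
);
CREATE TABLE price (
    instrument_id INTEGER REFERENCES instrument(instrument_id),
    trade_date    TEXT NOT NULL,          -- ISO-8601 date
    close_price   REAL NOT NULL,
    PRIMARY KEY (instrument_id, trade_date)
);
""")
conn.execute("INSERT INTO instrument VALUES (1, 'ABC', 'ABC Holdings')")
conn.execute("INSERT INTO price VALUES (1, '2003-01-31', 12.34)")
row = conn.execute("""
    SELECT i.ticker, p.trade_date, p.close_price
    FROM price p JOIN instrument i USING (instrument_id)
""").fetchone()
print(row)  # ('ABC', '2003-01-31', 12.34)
```

Normalising instruments away from their price history is what lets analysis queries join, filter, and aggregate without rescanning a sequential file.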
