  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
681

GitHub Uncovered: Revealing the Social Fabric of Software Development Communities

Al Rubaye, Abduljaleel 01 January 2024 (has links) (PDF)
The proliferation of open-source software development platforms has given rise to various online social communities where developers can seamlessly collaborate, showcase their projects, and exchange knowledge and ideas. GitHub stands out as a preeminent platform within this ecosystem. It offers developers a space to host and disseminate their code, participate in collaborative ventures, and engage in meaningful dialogues with fellow community members. This dissertation embarks on a comprehensive exploration of various facets of software development communities on GitHub, with a specific focus on innovation diffusion, repository popularity dynamics, code quality enhancement, and user commenting behaviors. This dissertation introduces a popularity-based model that elucidates the diffusion of innovation on GitHub. We scrutinize the influence of a repository's popularity on the transfer of knowledge and the adoption of innovative practices, relying on a dataset encompassing GitHub fork events. Through a meticulous analysis of developers' collaborative coding efforts, this dissertation furnishes valuable insights into the impact of social factors, particularly popularity, on the diffusion of innovation. Furthermore, we introduce a novel approach to computing a weight-based popularity score, denoted as the Weighted Trend Popularity Score (WTPS), derived from the historical trajectory of repository popularity indicators, such as fork and star counts. The accuracy of WTPS as a comprehensive repository popularity indicator is assessed, and the significance of having a singular metric to represent repository popularity is underscored. We delve into the realm of code quality on GitHub by examining it from the perspective of code reviews. Our analysis centers on understanding the code review process and presents an approach rooted in regularity to foster superior code quality by enforcing coding standards. 
In the concluding phase of our research, we investigate the intricacies of communication within technology-related online communities. Our attention is drawn to the impact of user popularity on communication, as elucidated through an examination of comment timelines and commenting communities. To contextualize our findings, we compare the behavioral patterns of GitHub developers and users on other platforms, such as Reddit and Stack Overflow.
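The abstract above does not give the WTPS formula itself; as a purely illustrative sketch, a weight-based trend score over fork and star histories might look like the following (the weighting scheme, coefficient names, and data are all invented for illustration, not the dissertation's definition):

```python
# Hypothetical sketch of a weighted trend popularity score. Recent
# observations receive linearly increasing weights; fork_weight and
# star_weight are invented coefficients, not the dissertation's WTPS.

def weighted_trend_score(history, fork_weight=0.6, star_weight=0.4):
    """Combine fork/star time series into one score, weighting recent
    periods more heavily (weights 1..n, normalized to sum to 1)."""
    n = len(history)
    total = sum(range(1, n + 1))
    weights = [(i + 1) / total for i in range(n)]
    return sum(w * (fork_weight * forks + star_weight * stars)
               for w, (forks, stars) in zip(weights, history))

# (forks, stars) per month, oldest first -- illustrative data only
history = [(10, 40), (15, 55), (30, 90)]
score = weighted_trend_score(history)  # weights most recent month highest
```

The point of such a score, as the abstract argues, is to collapse several popularity indicators into a single comparable metric per repository.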
682

An approach to evaluate UML CASE tools and their current limitations

Elkhawalka, Shaimaa 01 April 2003 (has links)
No description available.
683

Real-time software development for data storage and event recording of a satellite ground control station

Patel, Prashant R. 01 July 2003 (has links)
No description available.
684

The application of structure and code metrics to large scale systems

Canning, James Thomas January 1985 (has links)
This work extends the area of research termed software metrics by applying measures of system structure and measures of system code to three realistic software products. Previous research in this area has typically been limited to the application of code metrics such as: lines of code, McCabe's Cyclomatic number, and Halstead's software science variables. However, this research also investigates the relationship of four structure metrics: Henry's Information Flow measure, Woodfield's Syntactic Interconnection Model, Yau and Collofello's Stability measure and McClure's Invocation complexity, to various observed measures of complexity such as, ERRORS, CHANGES and CODING TIME. These metrics are referred to as structure measures since they measure control flow and data flow interfaces between system components. Spearman correlations between the metrics revealed that the code metrics were similar measures of system complexity, while the structure metrics were typically measuring different dimensions of software. Furthermore, correlating the metrics to observed measures of complexity indicated that the Information Flow metric and the Invocation Measure typically performed as well as the three code metrics when project factors and subsystem factors were taken into consideration. However, it was generally true that no single metric was able to satisfactorily identify the variations in the data for a single observed measure of complexity. Trends between many of the metrics and the observed data were identified when individual components were grouped together. Code metrics typically formed groups of increasing complexity which corresponded to increases in the mean values of the observed data. The strength of the Information Flow metric and the Invocation measure is their ability to form a group containing highly complex components which was found to be populated by outliers in the observed data. / Ph. D.
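For readers unfamiliar with the code metrics named above, McCabe's cyclomatic number is commonly approximated as one plus the number of decision points in a routine. A minimal sketch of that simplification (not the thesis's measurement tooling, which targeted large systems in other languages):

```python
import ast

def cyclomatic_complexity(source):
    """Approximate McCabe's cyclomatic number as 1 + the number of
    decision points (if/while/for/boolean ops/except handlers), a
    common simplification of V(G) = E - N + 2P."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, (ast.If, ast.While, ast.For,
                                      ast.BoolOp, ast.ExceptHandler))
                    for node in ast.walk(tree))
    return 1 + decisions

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0:
            pass
    return "done"
"""
# Three decision points (if, for, if) give a cyclomatic number of 4.
```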
685

Formal Verification of Quantum Software

Tao, Runzhou January 2024 (has links)
Real applications of near-term quantum computing are around the corner and quantum software is a key component. Unlike classical computing, quantum software is under the threat of both quantum hardware errors and human bugs due to the unintuitiveness of quantum physics theory. Therefore, trustworthiness and reliability are critical for the success of quantum computation. However, most traditional methods to ensure software reliability, like testing, do not transfer to quantum at scale because of the destructive and probabilistic nature of quantum measurement and the exponential-sized state space. In this thesis, I introduce a series of frameworks to ensure the trustworthiness of quantum computing software by automated formal verification. First, I present Giallar, a fully-automated verification toolkit for quantum compilers to formally prove that the compiler is bug-free. Giallar requires no manual specifications, invariants, or proofs, and can automatically verify that a compiler pass preserves the semantics of quantum circuits. To deal with unbounded loops in quantum compilers, Giallar abstracts three loop templates, whose loop invariants can be automatically inferred. To efficiently check the equivalence of arbitrary input and output circuits that have complicated matrix semantics representation, Giallar introduces a symbolic representation for quantum circuits and a set of rewrite rules for showing the equivalence of symbolic quantum circuits. With Giallar, I implemented and verified 44 (out of 56) compiler passes in 13 versions of the Qiskit compiler, the open-source quantum compiler standard, during which three bugs were detected in and confirmed by Qiskit. The evaluation shows that most of Qiskit compiler passes can be automatically verified in seconds and verification imposes only a modest overhead to compilation performance. 
Second, I introduce Gleipnir, an error analysis framework for quantum programs that enables scalable and adaptive verification of quantum errors through the application of tensor networks. Gleipnir introduces the (𝜌̂, 𝛿)-diamond norm, an error metric constrained by a quantum predicate consisting of the approximate state 𝜌̂ and its distance 𝛿 to the ideal state 𝜌. This predicate (𝜌̂, 𝛿) can be computed adaptively using tensor networks based on Matrix Product States. Gleipnir features a lightweight logic for reasoning about error bounds in noisy quantum programs, based on the (𝜌̂, 𝛿)-diamond norm metric. The experimental results show that Gleipnir is able to efficiently generate tight error bounds for real-world quantum programs with 10 to 100 qubits, and can be used to evaluate the error mitigation performance of quantum compiler transformations. Finally, I present QSynth, a quantum program synthesis framework that synthesizes verified recursive quantum programs, including a new inductive quantum programming language, its specification, a sound logic for reasoning, and an encoding of the reasoning procedure into SMT instances. By leveraging existing SMT solvers, QSynth successfully synthesizes 10 quantum unitary programs including quantum arithmetic programs, quantum eigenvalue inversion, quantum teleportation and Quantum Fourier Transformation, which can be readily transpiled to executable programs on major quantum platforms, e.g., Q#, IBM Qiskit, and AWS Braket.
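The circuit-equivalence problem Giallar addresses can be illustrated at toy scale: an n-qubit circuit denotes a 2^n x 2^n unitary, so naive matrix comparison blows up exponentially, which is what motivates a symbolic representation. A hand-rolled 2x2 example checking the standard H-X-H = Z rewrite that a compiler pass must preserve (illustrative only, not Giallar's machinery):

```python
# Verify a textbook gate rewrite by direct matrix semantics: applying
# Hadamard, Pauli-X, Hadamard in sequence equals Pauli-Z. For n qubits
# these matrices are 2^n x 2^n, which is why direct comparison does not
# scale and symbolic checking is needed.

def matmul(a, b):
    """2x2 real matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s = 2 ** -0.5                      # 1/sqrt(2)
H = [[s, s], [s, -s]]              # Hadamard gate
X = [[0, 1], [1, 0]]               # Pauli-X gate
Z = [[1, 0], [0, -1]]              # Pauli-Z gate

hxh = matmul(matmul(H, X), H)
rewritten_ok = all(abs(hxh[i][j] - Z[i][j]) < 1e-9
                   for i in range(2) for j in range(2))
```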
686

An automatic test data generation from UML state diagram using genetic algorithm.

Doungsa-ard, Chartchai, Dahal, Keshav P., Hossain, M. Alamgir, Suwannasart, T. January 2007 (has links)
Yes / Software testing is a part of the software development process; however, it is often the first part to be dropped by software developers when there is limited time to complete the project. Developers frequently finish construction close to the delivery deadline and rarely have enough time to create effective test cases for their programs, and creating test cases manually under such time pressure is a substantial amount of work. A tool that automatically generates test cases and test data can help developers derive test cases from software designs/models at an early stage of development (before coding). Heuristic techniques can be applied to create quality test data. In this paper, a GA-based test data generation technique is proposed to generate test data from a UML state diagram, so that test data can be generated before coding. The paper details the GA implementation for generating sequences of triggers for a UML state diagram as test cases. The proposed algorithm is demonstrated manually for an example of a vending machine.
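A minimal sketch of the idea, not the paper's implementation: evolve trigger sequences against a toy vending-machine state machine, using state coverage as the fitness function (the machine, trigger names, and GA parameters below are invented for illustration):

```python
import random

# Toy state diagram: (state, trigger) -> next state. Unknown triggers
# leave the state unchanged. All names here are invented.
TRANSITIONS = {
    ("idle", "coin"): "paid",
    ("paid", "select"): "dispensing",
    ("dispensing", "take"): "idle",
    ("paid", "cancel"): "idle",
}
TRIGGERS = ["coin", "select", "take", "cancel"]

def fitness(seq):
    """Number of distinct states a trigger sequence visits."""
    state, visited = "idle", {"idle"}
    for trig in seq:
        state = TRANSITIONS.get((state, trig), state)
        visited.add(state)
    return len(visited)

def evolve(pop_size=20, length=6, generations=30, seed=1):
    """Simple generational GA: truncation selection, one-point
    crossover, and per-child point mutation."""
    rng = random.Random(seed)
    pop = [[rng.choice(TRIGGERS) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for p in parents:
            cut = rng.randrange(1, length)
            mate = rng.choice(parents)
            child = p[:cut] + mate[cut:]
            if rng.random() < 0.2:  # mutation
                child[rng.randrange(length)] = rng.choice(TRIGGERS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()  # a trigger sequence usable as a test case
```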
687

[en] SDIFF: A COMPARISON TOOL BASED IN SYNTACTICAL DOCUMENT STRUCTURE / [pt] SDIFF: UMA FERRAMENTA PARA COMPARAÇÃO DE DOCUMENTOS COM BASE NAS SUAS ESTRUTURAS SINTÁTICAS

THIAGO PINHEIRO DE ARAUJO 15 September 2010 (has links)
[pt] Associado a cada sistema de controle de versão existe uma ferramenta de comparação responsável pela extração das diferenças entre duas versões de um documento. Estas ferramentas costumam realizar a comparação baseando-se na informação textual dos documentos, em que o elemento indivisível na comparação é a linha ou a palavra. Porém, o conteúdo versionado normalmente é fortemente estruturado (como exemplo, linguagens de programação) e a utilização deste mecanismo pode desrespeitar limites sintáticos e outras propriedades do documento, dificultando a interpretação das alterações. Nesse trabalho foi construída uma ferramenta para identificar as diferenças entre duas versões de um documento utilizando um mecanismo de comparação baseado na sua estrutura sintática. Desta forma, é possível identificar com maior precisão as diferenças relevantes ao leitor, reduzindo o esforço para compreender a semântica das alterações. A ferramenta construída é capaz de suportar diferentes tipos de documentos a partir da implementação de componentes que tratem das sintaxes desejadas. O componente implementado como exemplo neste trabalho trata a sintaxe da linguagem de programação C++. / [en] Associated with each version control system there is a comparison tool responsible for extracting the differences between two versions of a document. These tools usually compare documents based on their textual content, where the indivisible element of the comparison is the line or the word. However, versioned content is typically highly structured (for example, programming languages), and this mechanism can violate syntactic boundaries and other properties of the document, making it harder to interpret what actually changed. In this work we built a tool that identifies the differences between two versions of a document using a comparison mechanism based on its syntactic structure. Thus, it is possible to identify the differences relevant to the reader more precisely, reducing the effort needed to understand the semantics of the changes. The tool supports different types of documents through components that handle the desired syntaxes. The example component implemented in this work handles the syntax of the C++ programming language.
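The core observation can be demonstrated in a few lines. The tool described above handles C++; this sketch uses Python's ast module only to show that two versions which differ textually (pure reformatting) can be syntactically identical:

```python
import ast
import difflib

# Two versions of the same function: whitespace and line layout differ,
# but the syntactic structure is the same.
old = "def area(w, h):\n    return w * h\n"
new = "def area(w,h): return w*h\n"

# Line-based textual comparison reports a difference...
textually_equal = list(difflib.unified_diff(old.splitlines(),
                                            new.splitlines())) == []

# ...while comparing parsed syntax trees reports equivalence
# (ast.dump omits line/column positions by default).
syntactically_equal = ast.dump(ast.parse(old)) == ast.dump(ast.parse(new))
```

A syntax-aware diff built on this principle reports only changes that cross syntactic boundaries, which is exactly the reduction in reader effort the abstract describes.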
688

Exploring User-Centered Agile Design : An Autoethnographic study

Sjöberg, Sebastian January 2024 (has links)
In the world of software development, the most common framework is the agile framework. It first arose as a counter-reaction to waterfall development, partly to incorporate user-centered design. Today, however, user-centered design and the user themselves are often overlooked. Still, some believe that integrating user-centered design with agile development could help improve software development. This integration is called user-centered agile design, or UCAD for short. The main reason for considering UCAD over plain agile development is that software stands and falls with good user experience, something the user-centered perspective can help with. This thesis therefore sought to research the usage of UCAD in daily work with an autoethnographic approach, meaning that the author could use their experiences from developing a piece of software as the basis of the research. This software was a webshop application for Quintus Technologies AB that acts as a way for their customers to buy spare and wear parts. The results accordingly take the form of a first-person story covering the whole development period of about 20 weeks. The project found, among other things, that UCAD does hold merit, and the author also enjoyed this way of working. There were of course some problems, the biggest challenge being the act of trying to balance UX and functional tasks; to address it, the notion of gear-switching was conceptualized. One specific factor brought up by earlier research was also found to be important: the task factor, which most notably raised the consideration of tasks over roles. While this project indicates that UCAD might be a good evolution of the agile framework, a lot more research still needs to be done, mostly in different settings such as bigger teams and more mature software. Even so, this project shows great potential for the UCAD framework.
689

Large language models and various programming languages : A comparative study on bug detection and correction

Gustafsson, Elias, Flystam, Iris January 2024 (has links)
This bachelor’s thesis investigates the efficacy of cutting-edge Large Language Models (LLMs) — GPT-4, Code Llama Instruct (7B parameters), and Gemini 1.0 — in detecting and correcting bugs in Java and Python code. Through a controlled experiment using standardized prompts and the QuixBugs dataset, each model's performance was analyzed and compared. The study highlights significant differences in the ability of these LLMs to correctly identify and fix programming bugs, showcasing a comparative advantage in handling Python over Java. Results suggest that while all these models are capable of identifying bugs, their effectiveness varies significantly between models. The insights gained from this research aim to aid software developers and AI researchers in selecting appropriate LLMs for integration into development workflows, enhancing the efficiency of bug management processes.
690

Content and Temporal Analysis of Communications to Predict Task Cohesion in Software Development Global Teams

Castro Hernandez, Alberto 05 1900 (has links)
Virtual teams in industry are increasingly being used to develop software, create products, and accomplish tasks. However, analyzing those collaborations under same-time/different-place conditions is well-known to be difficult. In order to overcome some of these challenges, this research was concerned with the study of collaboration-based, content-based and temporal measures and their ability to predict cohesion within global software development projects. Messages were collected from three software development projects that involved students from two different countries. The similarities and quantities of these interactions were computed and analyzed at individual and group levels. Results of interaction-based metrics showed that the collaboration variables most related to Task Cohesion were Linguistic Style Matching and Information Exchange. The study also found that Information Exchange rate and Reply rate have a significant and positive correlation to Task Cohesion, a factor used to describe participants' engagement in the global software development process. This relation was also found at the Group level. All these results suggest that metrics based on rate can be very useful for predicting cohesion in virtual groups. Similarly, content features based on communication categories were used to improve the identification of Task Cohesion levels. This model showed mixed results, since only Work similarity and Social rate were found to be correlated with Task Cohesion. This result can be explained by how a group's cohesiveness is often associated with fairness and trust, and that these two factors are often achieved by increased social and work communications. Also, at a group-level, all models were found correlated to Task Cohesion, specifically, Similarity+Rate, which suggests that models that include social and work communication categories are also good predictors of team cohesiveness. 
Finally, temporal interaction similarity measures were calculated to assess their predictive capabilities in a global setting. Results showed a significant negative correlation between Pacing Rate and Task Cohesion, which suggests that frequent communication increases the cohesion between team members. The study also found a positive correlation between Coherence Similarity and Task Cohesion, which indicates the importance of establishing a rhythm within a team. In addition, the temporal models at the individual and group levels were found to be good predictors of Task Cohesion, which indicates a strong effect of frequent and rhythmic communication on task-related cohesion. The contributions of this dissertation are threefold: 1) the novel use of temporal measures to describe a team's rhythmic interactions; 2) the development of new, quantifiable factors for analyzing different characteristics of a team's communications; and 3) the identification of interesting factors for predicting Task Cohesion levels among global teams.
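As an illustration of the kind of content-based measure discussed above, a Linguistic Style Matching score can be computed per function-word category. The sketch below uses a single category (articles) and a standard LSM-style formula; the dissertation's actual feature set and formula may differ:

```python
# Illustrative LSM-style score between two messages: compare the rate at
# which each message uses one function-word category (articles here).
# Real LSM averages this over many categories (pronouns, prepositions,
# conjunctions, etc.); this single-category version is for brevity.
ARTICLES = {"a", "an", "the"}

def category_rate(message):
    """Fraction of words in the message that are articles."""
    words = message.lower().split()
    return sum(w in ARTICLES for w in words) / len(words)

def lsm(m1, m2):
    """1 means identical category usage, values near 0 mean mismatch."""
    p1, p2 = category_rate(m1), category_rate(m2)
    if p1 + p2 == 0:
        return 1.0
    return 1 - abs(p1 - p2) / (p1 + p2)

score = lsm("I pushed the fix to the branch",
            "the tests on the branch pass now")
```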
