101

A CONCEPTUAL FRAMEWORK FOR DISTRIBUTED SOFTWARE QUALITY NETWORK

ANUSHKA HARSHAD PATIL (7036883) 12 October 2021 (has links)
The advancement in technology has revolutionized the role of software in recent years. Software is used in practically all areas of industry and has become a prime factor in the overall working of companies. With this increase in the use of software, software quality assurance parameters have become more crucial and complex. Currently, the quality measurement approaches, standards, and models applied in the software industry are extremely divergent; often the correct approach turns out to be a combination of concepts and techniques from different software assurance approaches [1]. Thus, a platform that provides a single workspace for incorporating multiple software quality assurance approaches will ease the overall software quality process. In this thesis we propose a theoretical framework for distributed software quality assurance that continuously monitors a source code repository and creates a snapshot of the system for a given commit (both past and present). The snapshot can be used to build a multi-granular blockchain of the system and its metrics (i.e., metadata), which we believe will let tool developers and vendors participate continuously in assuring the quality and security of systems, remain accessible when required, and be rewarded for their services.
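As a rough illustration of the snapshot-and-chain idea (not the thesis's actual design; the commit ids and metrics below are hypothetical), a per-commit quality snapshot can be linked into a tamper-evident hash chain:

```python
import hashlib
import json

def block_hash(payload: dict, prev_hash: str) -> str:
    """Hash a block's payload together with the previous block's hash."""
    body = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(body.encode()).hexdigest()

class SnapshotChain:
    """Append-only chain of per-commit quality snapshots."""
    def __init__(self):
        self.blocks = []

    def add_snapshot(self, commit_id: str, metrics: dict) -> dict:
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = {"commit": commit_id, "metrics": metrics}
        block = {"payload": payload, "prev": prev,
                 "hash": block_hash(payload, prev)}
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every hash; any tampered metric breaks the chain."""
        prev = "0" * 64
        for b in self.blocks:
            if b["prev"] != prev or b["hash"] != block_hash(b["payload"], prev):
                return False
            prev = b["hash"]
        return True

chain = SnapshotChain()
chain.add_snapshot("a1b2c3", {"loc": 1200, "coverage": 0.81})
chain.add_snapshot("d4e5f6", {"loc": 1250, "coverage": 0.79})
print(chain.verify())  # True
```

The append-only property is what would let third-party tool vendors audit past quality claims without trusting the repository owner.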
102

Automated Measurement and Change Detection of an Application’s Network Activity for Quality Assistance / Automatisk mätning och förändringsdetektering av en applikations nätverksaktivitet för kvalitetsstöd

Nissa Holmgren, Robert January 2014 (has links)
Network usage is an important quality metric for mobile apps. Slow networks, low monthly traffic quotas and high roaming fees restrict mobile users' amount of usable Internet traffic. Companies wanting their apps to stay competitive must be aware of their network usage and of changes to it. Short feedback loops for the impact of code changes are key in agile software development. To notify stakeholders of changes as they happen without being prohibitively expensive in terms of manpower, change detection must be fully automated. To further decrease the manpower overhead of implementing network usage change detection, the system needs to have low configuration requirements and keep the false positive rate low while still detecting larger changes. This thesis proposes an automated change detection method for network activity to quickly notify stakeholders with relevant information to begin a root cause analysis after a change in network activity is introduced. With measurements of Spotify's iOS app we show that the tool achieves a low rate of false positives while detecting relevant changes in network activity, even for apps with network usage patterns as dynamic as Spotify's. / Nätverksaktivitet är ett viktigt kvalitetsmått för mobilappar. Mobilanvändare begränsas ofta av långsamma nätverk, låg månatlig trafikkvot och höga roamingavgifter. Företag som vill ha konkurrenskraftiga appar behöver vara medveten om deras nätverksaktivitet och förändringar av den. Snabb återkoppling för effekten av kodändringar är vitalt för agil programutveckling. För att underrätta intressenter om ändringar när de händer utan att vara avskräckande dyrt med avseende på arbetskraft måste ändringsdetekteringen vara fullständigt automatiserad.
För att ytterligare minska arbetskostnaderna för ändringsdetektering av nätverksaktivitet måste detekteringssystemet vara snabbt att konfigurera, hålla en låg grad av felaktig detektering samtidigt som den lyckas identifiera stora ändringar. Den här uppsatsen föreslår ett automatiserat förändringsdetekteringsverktyg för nätverksaktivitet för att snabbt meddela stakeholders med relevant information för påbörjan av grundorsaksanalys när en ändring som påverkar nätverksaktiviteten introduceras. Med hjälp av mätningar på Spotifys iOS-app visar vi att verktyget når en låg grad av felaktiga detekteringar medan den identifierar ändringar i nätverksaktiviteten även för appar med så dynamisk nätverksanvändning som Spotify.
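The detection criterion described above (low configuration, low false-positive rate, still catching large changes) can be sketched as a simple baseline-deviation check; the byte counts and the 3-sigma rule here are illustrative, not the thesis's actual method:

```python
from statistics import mean, stdev

def detect_change(baseline, new_builds, k=3.0):
    """Flag builds whose network usage deviates more than k standard
    deviations from the baseline mean (a minimal change detector)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [(i, x) for i, x in enumerate(new_builds) if abs(x - mu) > k * sigma]

# Hypothetical MB transferred per test run of earlier builds
baseline = [10.1, 9.8, 10.4, 10.0, 9.9, 10.2]
print(detect_change(baseline, [10.3, 10.1, 14.9]))  # [(2, 14.9)]
```

Only the baseline window and `k` need configuring, which matches the low-configuration requirement; a real detector would also need to tolerate gradual drift.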
103

Propuesta de implementación de un modelo para la evaluación de la calidad del producto de software para una empresa consultora TI

Huamaní Vargas, Andre Henry, Watanabe Navarro, Javier Danilo 14 May 2020 (has links)
El presente trabajo de investigación tiene como objetivo el estudio de la línea de servicio de Certificación de Software de la empresa consultora TI y requiere resolver la problemática en la evaluación de los productos de desarrollo de software que impactan directamente en el cumplimiento de sus SLA, generando grandes pérdidas económicas. En el análisis cuantitativo, se identificó que existen inadecuadas técnicas para la evaluación con un impacto en el cumplimiento de los umbrales definidos en cada SLA, lo cual ha generado grandes pérdidas económicas en los últimos años por penalización. Ante esta situación caótica, se propone la implementación de un modelo de evaluación de calidad del producto de Software, la cual propone lineamientos de acuerdo con estándares y prácticas internacionales para la evaluación de la calidad, ayudando a incrementar la calidad de sus productos y la satisfacción del cliente, debido que se evidencia pérdidas económicas que se están incrementando anualmente. En este trabajo de investigación se hace una revisión general de los estándares de evaluación de calidad de producto de Software, se realiza una evaluación del cumplimiento de la norma ISO/IEC 25010 en la empresa y se propone un plan de mejora. Como conclusión, se recomienda la ejecución de la propuesta de implementación como apoyo estratégico al cumplimiento de los objetivos estratégicos de la empresa, reduciendo el riesgo de pérdidas económicas e incrementar la capacidad para ejecutar nuevas STD (Solicitudes Técnicas de Desarrollo), que permitirá a la empresa ser más rentable y brindar un servicio de mejor calidad. / This research work studies the Software Certification service line of the IT consulting company and addresses the problems in evaluating software development products that directly impact compliance with its SLAs, generating large economic losses.
In the quantitative analysis, it was identified that the evaluation techniques in use are inadequate, with an impact on compliance with the thresholds defined in each SLA, which has generated large economic losses in recent years due to penalties. Given this situation, the implementation of a software product quality evaluation model is proposed, which provides guidelines in accordance with international standards and practices for quality evaluation, helping to increase product quality and customer satisfaction, since the evidenced economic losses are increasing annually. This research work provides a general review of existing software product quality evaluation standards, carries out an evaluation of the company's compliance with the ISO/IEC 25010 standard, and proposes an improvement plan. In conclusion, the implementation of the proposal is recommended as strategic support for the fulfillment of the company's strategic objectives, reducing the risk of economic losses and increasing the capacity to execute new STDs (Technical Development Requests), which will allow the company to be more profitable and provide a better quality service. / Trabajo de investigación
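A quality evaluation model of this kind typically aggregates measured scores for ISO/IEC 25010 characteristics into a single figure checked against an SLA threshold. A minimal sketch, with illustrative weights and scores (the standard defines the characteristics, not the weights, which each organization chooses):

```python
# Hypothetical weights for a few ISO/IEC 25010 product-quality
# characteristics; they must sum to 1.0.
WEIGHTS = {"functional_suitability": 0.3, "reliability": 0.25,
           "performance_efficiency": 0.2, "maintainability": 0.15,
           "security": 0.1}

def quality_score(measured: dict) -> float:
    """Weighted aggregate of per-characteristic scores in [0, 1]."""
    return sum(WEIGHTS[c] * measured[c] for c in WEIGHTS)

def meets_sla(measured: dict, threshold: float = 0.8) -> bool:
    """Compare the aggregate score with a contractual SLA threshold."""
    return quality_score(measured) >= threshold

scores = {"functional_suitability": 0.9, "reliability": 0.85,
          "performance_efficiency": 0.8, "maintainability": 0.7,
          "security": 0.95}
print(round(quality_score(scores), 3), meets_sla(scores))
```

In practice each characteristic's score would itself be derived from sub-characteristic measurements and the thresholds would come from the SLA text.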
104

Studying the Relationship between Architectural Smells and Maintainability

Berglund, Alexander, Karlsson, Simon January 2023 (has links)
In recent years, there has been a surge in research on the impact of architectural smells on software maintainability. Maintainability in turn encompasses several other quality attributes as sub-characteristics, such as modularity and testability. However, empirical evidence establishing a clear relationship between these quality attributes and architectural smells has been lacking. This study aims to fill this gap by examining the correlation between seven architectural smells and testability/modularity across 378 versions of eight open-source projects. A self-developed tool, ASAT, was used to collect data on architectural smells and metrics relating to modularity and testability. The collected data was analyzed to reveal correlations at both the project level and within packages. Contrary to expectations, the findings show that, generally, there is no negative correlation between smells and modularity at the project level, except for the Dense Structure smell. Remarkably, project-level testability showed the opposite result. However, a rival explanation proposes that the increasing size of a project may be a stronger factor in this relationship. Similarly, package-level smells, as a whole, did not exhibit a negative correlation with testability. However, most smells demonstrated a stronger negative relationship with the quality attributes they were claimed to impair, in comparison to their counterparts. This empirical evidence substantiates the assertion that specific architectural smells indeed relate to distinct quality attributes, which had previously only been supported by argument.
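A project-level correlation of the kind the study computes can be sketched with Spearman's rank correlation in plain Python (the per-version data below is invented for illustration):

```python
def ranks(xs):
    """Average ranks, 1-based; ties share their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-version data: smell count vs. a testability score
smells      = [2, 4, 5, 7, 9, 12]
testability = [0.9, 0.85, 0.8, 0.7, 0.72, 0.6]
print(round(spearman(smells, testability), 3))  # -0.943
```

Rank correlation is the usual choice here because smell counts and quality metrics are rarely normally distributed across versions.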
105

IMPROVING MICROSERVICES OBSERVABILITY IN CLOUD-NATIVE INFRASTRUCTURE USING EBPF

Bhavye Sharma (15345346) 26 April 2023 (has links)
Microservices have emerged as a popular pattern for developing large-scale applications in cloud environments for their flexibility, scalability, and agility benefits. However, microservices make management more complex due to their scale, multiple languages, and distributed nature. Orchestration and automation tools like Kubernetes help deploy microservices running simultaneously, but it can be difficult for an operator to understand their behaviors, interdependencies, and interactions. In such a complex and dynamic environment, performance problems (e.g., slow application responses and high resource usage) require significant human effort spent on diagnosis and recovery. Moreover, manual diagnosis of cloud microservices tends to be tedious, time-consuming, and impractical. Effective and automated performance analysis and anomaly detection require an observable system, which means an application's internal state can be inferred by observing and tracking metrics, traces, and logs. Traditional APM uses libraries and SDKs to improve application monitoring and tracing but has the additional overhead of rewriting, recompiling, and redeploying the application's code base. Therefore, there is a critical need for a standardized, automated microservices observability solution that does not require rewriting or redeploying the application to keep up with the agility of microservices.

This thesis studies observability for microservices and implements an automated Extended Berkeley Packet Filter (eBPF) based observability solution. eBPF is a Linux feature that allows us to write extensions to the Linux kernel for security and observability use cases. eBPF does not require modifying the application layer or instrumenting the individual microservices. Instead, it instruments the kernel-level API calls, which are common across all hosts in the cluster. eBPF programs provide observability information from the lowest-level system calls and can export data without additional performance overhead. The Prometheus time-series database is leveraged to store all the captured metrics and traces for analysis. With the help of our tool, a DevOps engineer can easily identify abnormal behavior of microservices and enforce appropriate countermeasures. Using Chaos Mesh, we inject anomalies at the network and host layers, which we can identify and root-cause using the proposed solution. The Chameleon cloud testbed is used to deploy our solution and test its capabilities and limitations.
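On top of metrics exported to a time-series store like Prometheus, the anomaly flagging described above can be sketched with a simple EWMA detector (illustrative only; the thesis's data comes from eBPF programs, and the latency series and thresholds here are hypothetical):

```python
def ewma_anomalies(series, alpha=0.3, threshold=3.0):
    """Flag indices deviating from an exponentially weighted moving
    average by more than `threshold` EWMA standard deviations."""
    anomalies, avg, var = [], series[0], 0.0
    for i, x in enumerate(series[1:], start=1):
        if var > 0 and abs(x - avg) > threshold * var ** 0.5:
            anomalies.append(i)
        # update EWMA mean and (approximate) EWMA variance
        var = (1 - alpha) * (var + alpha * (x - avg) ** 2)
        avg = alpha * x + (1 - alpha) * avg
    return anomalies

# p99 latency (ms) of one service; the spike mimics an injected fault
latency = [12, 14, 11, 15, 12, 13, 95, 14, 12]
print(ewma_anomalies(latency))  # [6]
```

An EWMA detector adapts to slow drift in a service's baseline while still flagging the abrupt spikes that fault injection (e.g., with Chaos Mesh) produces.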
106

Smart Security System Based on Edge Computing and Face Recognition

Heejae Han (9226565) 27 April 2023 (has links)
Physical security is one of the most basic human needs. People care about it for various reasons: for the safety and security of personnel, to protect private assets, to prevent crime, and so forth. With the recent proliferation of AI, various smart physical security systems are being introduced to the world. Many researchers and engineers are working on developing AI-driven physical security systems that can identify potential security threats by monitoring and analyzing data collected from various sensors. One of the most popular ways to detect unauthorized entrance to a restricted space is face recognition. With a collected stream of images and a proper algorithm, security systems can recognize detected faces and send an alert when unauthorized faces are recognized. In recent years there has been active research and development on neural networks for face recognition; FaceNet, for example, is one of the more advanced algorithms. However, not much work has been done to show what kind of end-to-end system architecture is effective for running heavyweight computational loads such as neural network inference. Thus, this study explores different hardware options that can be used in security systems powered by a state-of-the-art face recognition algorithm and proposes that an edge computing based approach can significantly reduce overall system latency and enhance system reactiveness. To analyze the pros and cons of the proposed system, this study presents two different end-to-end system architectures. The first is an edge computing based system that performs most computational tasks at the edge node; the other is a traditional application server based system that performs the core computational tasks at the application server. Both systems adopt domain-specific hardware, Tensor Processing Units, to accelerate neural network inference. This paper walks through the implementation details of each system and explores its effectiveness, providing a performance analysis of each with regard to accuracy and latency and outlining the pros and cons of each system.
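Whichever architecture hosts the inference, the recognition decision itself reduces to comparing embedding distances against a threshold. A minimal sketch with toy 4-D embeddings and an assumed cutoff (real FaceNet embeddings are 128-D, and the cutoff is tuned on validation data):

```python
import math

def l2(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_authorized(embedding, enrolled, threshold=1.1):
    """Accept if the query embedding is within `threshold` L2 distance
    of any enrolled face embedding (FaceNet-style verification).
    The 1.1 cutoff is an illustrative assumption, not a tuned value."""
    return any(l2(embedding, e) < threshold for e in enrolled)

enrolled = [[0.1, 0.9, 0.2, 0.3], [0.8, 0.1, 0.4, 0.2]]
print(is_authorized([0.12, 0.88, 0.22, 0.31], enrolled))  # True
print(is_authorized([0.9, 0.9, 0.9, 0.9], enrolled))      # False
```

The edge-versus-server trade-off in the thesis is about where the embedding network runs; this decision step is cheap either way.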
107

[pt] RELEVANDO FATORES INTERATIVOS NA DEGRADAÇÃO DO DESIGN DE SOFTWARE / [en] REVEALING INTERACTING FACTORS IN DECAY OF SOFTWARE DESIGN

DANIEL JOSE BARBOSA COUTINHO 28 December 2021 (has links)
[pt] Desenvolvedores realizam mudanças de código constantemente durante a vida de um projeto de software. Essas mudanças podem induzir a degradação progressiva do design. A degradação do design pode ser reduzida ou acelerada por fatores que interagem em cada mudança. Esses fatores podem variar desde uma mudança ou ação de reparo específica – e.g., refatorações – até a maneira como os desenvolvedores contribuem e discutem mudanças. Entretanto, estudos anteriores não exploram como esses fatores interagem e influenciam na degradação do design. Eles apenas focam em alguns fatores e tendem a os investigar em isolamento. Estudar os fatores em isolamento pode não explicar adequadamente qual é o conjunto mais relevante de interações entre fatores e qual sua influência na degradação do design. Isso pode indicar que abordagens existentes para evitar ou mitigar a degradação do design são incompletas, já que elas não consideram interações entre fatores que podem ser relevantes. Portanto, essa dissertação relata uma investigação que almeja aumentar a compreensão sobre como uma ampla gama de interações entre fatores pode afetar a degradação do design, para que consequentemente possam ser investigadas práticas efetivas para evitar ou mitigar esse fenômeno. Para tal fim, nós realizamos uma análise aprofundada buscando preencher lacunas no conhecimento existente sobre dois tipos de fatores: fatores relacionados ao processo (i.e. relacionados às mudanças e seus resultados produzidos) e fatores relacionados ao desenvolvedor (i.e. relacionados ao desenvolvedor trabalhando nas mudanças). Nós focamos em analisar os efeitos de possíveis interações entre os fatores previamente mencionados e uma série de sub-fatores, no que diz respeito como essas interações afetam módulos que sofreram diferentes níveis de degradação. 
Por exemplo, nós observamos que: (1) individualmente, tanto o sub-fator relacionado ao desenvolvedor que representa um desenvolvedor novato (que está contribuindo pela primeira vez), quanto o sub-fator relacionado ao processo que representa tamanho de uma mudança, não se mostraram relacionados a efeitos negativos na qualidade de código das classes alteradas. Porém, analisando interações entre fatores, nós observamos que mudanças em que esses dois fatores interagem tendem a ter um efeito negativo no código, causando degradação. Interessantemente, esse comportamento não se alterou mesmo quando mudança foi introduzida através de uma pull request (o que frequentemente inicia um processo de revisão de código), (2) surpreendentemente, refatorações de extração frequentemente não tem um efeito positivo na qualidade do código, enquanto, em contrapartida, as refatorações de movimentação foram predominantemente positivas. Nós também discutimos como esses achados apresentados na dissertação podem ajudar desenvolvedores e pesquisadores na melhoria de suas diretrizes sobre como evitar e monitorar a degradação do design. / [en] Developers constantly perform code changes throughout the lifetime of a project. These changes may induce design decay over time. Design decay may be reduced or accelerated by interacting factors that underlie each change. These factors range from specific change or repair actions, e.g., refactorings, to how developers contribute and discuss the changes. However, existing studies do not explain how these factors interact and influence design decay. They tend to focus on only a few types of factors, and often consider them in isolation. Interactions between factors may cause different outcomes than those previously studied. Studying factors in isolation may not properly explain which set of interacting factors most influences design decay.
This may indicate that existing approaches to avoid or mitigate design decay are misleading, since they do not consider potentially relevant interactions between various factors. Thus, this dissertation reports an investigation that aims to increase the understanding of how a wide range of interacting factors can influence design decay, in order to facilitate the investigation of practices that can be used to avoid or mitigate it. To this end, we performed an in-depth analysis to fill knowledge gaps on two types of factors: process-related factors (i.e., related to changes and their produced outcomes) and developer-related factors (i.e., related to the developer working on the changes). We focused on analyzing the effects of potential interactions between the aforementioned factors and 12 sub-factors with regard to how they affected modules with different levels of decay. We observed diverging decay patterns in these modules. Our results indicate that both types of factors can be used to distinguish between different decay levels in classes. We also observed that: (1) individually, the developer-related sub-factor that represented first-time contributors, as well as the process-related one that represented the size of changes, did not exert negative effects on the changed classes; however, when analyzing specific factor interactions, we saw that changes where both of these factors interacted tended to have a negative effect and led to decay. Interestingly, this behaviour did not change even when the change was introduced via a pull request (which usually triggers a code review process); (2) surprisingly, extraction-type refactorings often do not have a positive effect on code quality, while, by contrast, move refactorings were mostly positive. We also discuss how the findings in this dissertation can aid developers and researchers in improving their guidelines for avoiding and monitoring design decay.
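The core methodological point, that factors must be analyzed jointly rather than in isolation, can be illustrated with a toy aggregation (the change records and decay values are invented; the dissertation mines real commit histories):

```python
def mean_decay_by_group(changes):
    """Average decay delta per (first_time_contributor, large_change)
    combination, exposing effects that single-factor views miss."""
    groups = {}
    for c in changes:
        key = (c["first_time"], c["large"])
        groups.setdefault(key, []).append(c["decay_delta"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

changes = [
    {"first_time": False, "large": False, "decay_delta": 0.0},
    {"first_time": False, "large": True,  "decay_delta": 0.1},
    {"first_time": True,  "large": False, "decay_delta": 0.0},
    {"first_time": True,  "large": True,  "decay_delta": 0.9},
    {"first_time": True,  "large": True,  "decay_delta": 0.7},
]
print(mean_decay_by_group(changes))
```

In this toy data each factor alone looks harmless (the `(True, False)` and `(False, True)` groups show little decay), while the interaction group `(True, True)` shows strong decay, which is exactly the pattern the dissertation reports for first-time contributors making large changes.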
108

A Life Cycle Software Quality Model Using Bayesian Belief Networks

Beaver, Justin 01 January 2006 (has links)
Software practitioners lack a consistent approach to assessing and predicting quality within their products. This research proposes a software quality model that accounts for the influences of development team skill/experience, process maturity, and problem complexity throughout the software engineering life cycle. The model is structured using Bayesian Belief Networks and, unlike previous efforts, uses widely-accepted software engineering standards and in-use industry techniques to quantify the indicators and measures of software quality. Data from 28 software engineering projects was acquired for this study, and was used for validation and comparison of the presented software quality models. Three Bayesian model structures are explored and the structure with the highest performance in terms of accuracy of fit and predictive validity is reported. In addition, the Bayesian Belief Networks are compared to both Least Squares Regression and Neural Networks in order to identify the technique best suited to modeling software product quality. The results indicate that Bayesian Belief Networks outperform both Least Squares Regression and Neural Networks in terms of producing modeled software quality variables that fit the distribution of actual software quality values, and in accurately forecasting 25 different indicators of software quality. Between the Bayesian model structures, the simplest structure, which relates software quality variables to their correlated causal factors, was found to be the most effective in modeling software quality. In addition, the results reveal that the collective skill and experience of the development team, over process maturity or problem complexity, has the most significant impact on the quality of software products.
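The kind of inference a Bayesian Belief Network supports can be sketched with a minimal two-parent network; the structure mirrors the model's causal-factor idea, but the probabilities are illustrative, not the dissertation's fitted values:

```python
# Minimal Bayesian network sketch:
#   TeamSkill -> Quality <- ProcessMaturity
# All CPT numbers below are invented for illustration.
P_skill = {"high": 0.6, "low": 0.4}
P_mat   = {"high": 0.5, "low": 0.5}
P_quality_good = {  # P(Quality=good | skill, maturity)
    ("high", "high"): 0.9, ("high", "low"): 0.7,
    ("low",  "high"): 0.5, ("low",  "low"): 0.2,
}

def p_good_given_skill(skill):
    """P(Quality=good | TeamSkill=skill), marginalizing over maturity."""
    return sum(P_mat[m] * P_quality_good[(skill, m)] for m in P_mat)

print(round(p_good_given_skill("high"), 3))  # 0.8
print(round(p_good_given_skill("low"), 3))   # 0.35
```

The large gap between the two conditional probabilities is the kind of evidence behind the finding that team skill/experience dominates process maturity and problem complexity; a full model would fit all conditional probability tables from project data.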
109

MLpylint: Automating the Identification of Machine Learning-Specific Code Smells

Hamfelt, Peter January 2023 (has links)
Background. Machine learning (ML) has rapidly grown in popularity, becoming a vital part of many industries. This swift expansion has brought new challenges to technical debt, maintainability and the general software quality of ML systems. With ML applications becoming more prevalent, there is an emerging need for extensive research to keep up with the pace of development. Currently, research on code smells in ML applications is limited and there is a lack of tools and studies that address these issues in depth. This gap highlights the necessity for a focused investigation into the validity of ML-specific code smells in ML applications, setting the stage for this study. Objectives. This thesis addresses the limited research on ML-specific code smells within Python-based ML applications. To achieve this, the study begins with the identification of these ML-specific code smells. Once they are recognized, the next objective is to choose suitable methods and tools to design and develop a static code analysis tool based on code smell criteria. After development, an empirical evaluation assesses both the tool's efficacy and performance. Additionally, feedback from industry professionals is sought to measure the tool's feasibility and usefulness. Methods. This research employed the Design Science Methodology. In the problem identification phase, a literature review was conducted to identify ML-specific code smells. In solution design, a secondary literature review and consultations with experts were performed to select methods and tools for implementing the tool. Additionally, 160 open-source ML applications were sourced from GitHub. The tool was empirically tested against these applications, with a focus on assessing its performance and efficacy. Furthermore, using the static validation method, feedback on the tool's usefulness was gathered through an expert survey involving 15 ML professionals from Ericsson. Results.
The study introduced MLpylint, a tool designed to identify 20 ML-specific code smells in Python-based ML applications. MLpylint analyzed 160 ML applications within 36 minutes, identifying 5,380 code smells in total, while also highlighting the need for further refinement of each code smell checker to accurately identify specific patterns. In the expert survey, 15 ML professionals from Ericsson acknowledged the tool's usefulness, user-friendliness and efficiency. However, they also indicated room for improvement in fine-tuning the tool to avoid ambiguous smells. Conclusions. Current studies on ML-specific code smells are limited, and few tools address them. The development and evaluation of MLpylint is a significant advancement in the ML software quality domain, enhancing reliability and reducing associated technical debt in ML applications. As the industry integrates such tools, it is vital that they evolve to detect code smells from new ML libraries. Such tools aid developers in upholding software quality and promote further research in the ML software quality domain.
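A static checker of this kind typically walks the Python AST looking for smell patterns. A minimal sketch of one hypothetical checker (MLpylint's actual checkers and smell catalog differ):

```python
import ast

# One ML-specific smell from the literature is uncontrolled randomness:
# calling data-splitting APIs without fixing a seed makes runs
# non-reproducible. scikit-learn's `train_test_split` is the example here.
SEEDED_CALLS = {"train_test_split"}

def find_missing_seed(source: str):
    """Return (line, callee) for calls that omit a random_state keyword."""
    smells = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = getattr(node.func, "id", getattr(node.func, "attr", ""))
            if name in SEEDED_CALLS:
                if "random_state" not in {kw.arg for kw in node.keywords}:
                    smells.append((node.lineno, name))
    return smells

code = """
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2)
a, b, c, d = train_test_split(X, y, random_state=42)
"""
print(find_missing_seed(code))  # [(2, 'train_test_split')]
```

Because the check is purely syntactic, it is fast enough to run over hundreds of repositories, but, as the survey feedback notes, such pattern matching needs fine-tuning to avoid flagging ambiguous cases.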
110

A Software Vulnerability Prediction Model Using Traceable Code Patterns And Software Metrics

Sultana, Kazi Zakia 10 August 2018 (has links)
Software security is an important aspect of software quality. The goal of this study is to help developers evaluate software security at an early stage of development using traceable patterns and software metrics. The concept of traceable patterns is similar to design patterns, but traceable patterns can be automatically recognized and extracted from source code. If these patterns can predict vulnerable code better than traditional software metrics, they can be used to develop a vulnerability prediction model that classifies code as vulnerable or not. By analyzing and comparing the performance of traceable patterns with that of metrics, we propose a vulnerability prediction model. Objective: This study explores the performance of code patterns in vulnerability prediction and compares them with traditional software metrics. We used the findings to build an effective vulnerability prediction model. Method: We designed and conducted experiments on the security vulnerabilities reported for Apache Tomcat (releases 6, 7 and 8), Apache CXF and three stand-alone Java web applications from Stanford SecuriBench. We used machine learning and statistical techniques to predict vulnerabilities in these systems using traceable patterns and metrics as features. Result: We found that patterns have a lower false negative rate and higher recall in detecting vulnerable code than traditional software metrics. We also found a set of patterns and metrics that shows higher recall in vulnerability prediction. Conclusion: Based on the results of the experiments, we proposed a prediction model using patterns and metrics to predict vulnerable code with a higher recall rate. We evaluated the model on the systems under study and also evaluated its performance in cross-dataset validation.
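The recall and false-negative-rate comparison central to the results can be sketched as follows; the labels and predictions are invented for illustration:

```python
def recall_and_fnr(predicted, actual):
    """Recall and false-negative rate over binary vulnerable labels.
    For vulnerability prediction, false negatives (missed vulnerable
    code) are the costly error, so recall is the headline metric."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    fn = sum((not p) and a for p, a in zip(predicted, actual))
    total = tp + fn
    if total == 0:
        return 0.0, 0.0
    return tp / total, fn / total

# Hypothetical labels for 8 files: 1 = vulnerable
actual        = [1, 1, 1, 1, 0, 0, 0, 0]
pattern_preds = [1, 1, 1, 0, 1, 0, 0, 0]  # traceable-pattern classifier
metric_preds  = [1, 1, 0, 0, 0, 1, 0, 0]  # metrics-only classifier
print(recall_and_fnr(pattern_preds, actual))  # (0.75, 0.25)
print(recall_and_fnr(metric_preds, actual))   # (0.5, 0.5)
```

In this toy comparison the pattern-based predictor misses fewer vulnerable files, mirroring the study's finding that patterns yield higher recall and a lower false-negative rate than traditional metrics.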
