  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Arquitetura de sistemas hipermídia adaptativos baseada em atributos de qualidade. / Architecture of adaptive hypermedia systems based on quality attributes.

Takikawa, Fernando Kazuyoshi 28 April 2010 (has links)
Adaptive hypermedia supports the development of systems able to provide personalized content for each user, based on his or her individual characteristics. This capability is valuable in areas such as e-learning, where the learning content can be presented individually, according to the progress and interests of the student. Among the several models of Adaptive Hypermedia Systems, the most relevant are the AHAM and Munich models. However, the known proposals for Adaptive Hypermedia Systems focus only on the functional aspects of the system and show little concern for non-functional requirements, i.e. those requirements more deeply related to quality aspects. The development of a software architecture has to consider both functional and non-functional aspects; ignoring this premise risks producing low-quality systems. Given this scenario, this dissertation proposes the development of architectural views for the SHASIM system. SHASIM is an adaptive hypermedia system, derived from the Munich model, proposed as an adaptive Web system focused on education: it adapts the domain content according to the cognitive style and multiple intelligences of the student. Based on a survey of the functional and non-functional requirements of Adaptive Hypermedia Systems, this dissertation proposes architectural views that complement the architecture originally proposed for the system. These new views cover a set of desirable quality attributes that were not considered in the system's initial implementation but are essential to ensure minimum quality for this class of systems.
72

Ranking source code static analysis warnings for continuous monitoring of free/libre/open source software repositories / Ranqueamento de avisos de análise estática de código fonte para monitoramento de repositórios de software livre

Ribeiro, Athos Coimbra 22 June 2018 (has links)
While there is a wide variety of both open source and proprietary source code static analyzers available, each usually performs better on a small set of problems, making it hard to rely on a single tool when examining a program. Combining the analyses of different tools may reduce the number of false negatives, but yields a corresponding increase in the number of false positives (which is already high for many tools). An interesting solution, then, is to filter these results to identify the issues least likely to be false positives. This work presents kiskadee, a system that supports the use of static analysis during software development by providing carefully ranked static analysis reports. First, it runs multiple static analyzers on the source code. Then, using a classification model, the potential bugs detected by the static analyzers are ranked by importance, with critical flaws ranked first and likely false positives ranked last. To train kiskadee's classification model, we post-analyze the reports generated by three tools on synthetic test cases provided by the US National Institute of Standards and Technology (NIST). To keep the technique as general as possible, we limit our data to the reports themselves, excluding other information such as change histories or code metrics. The features extracted from these reports are used to train a set of decision trees with AdaBoost, yielding a stronger classifier that achieves 0.8 classification accuracy (the combined false positive rate of the tools used was 0.61). Finally, we use this classifier to rank static analyzer alarms by the probability that a given alarm is an actual bug. Our experimental results show that, on average, when inspecting warnings ranked by kiskadee, one hits 5.2 times fewer false positives before each bug than with a randomly sorted warning list.
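The ranking step described in this abstract can be illustrated with a minimal sketch. This is not the kiskadee codebase: it hand-rolls AdaBoost over decision stumps (one-level trees standing in for the decision trees mentioned above), and the warning features and labels are invented for the example.

```python
# Minimal sketch (not kiskadee): AdaBoost over decision stumps, used to
# rank warnings by a "likely real bug" score. Data below is invented.
import math

def train_adaboost(X, y, rounds=10):
    """Train AdaBoost over decision stumps. Labels must be +1/-1."""
    n = len(X)
    w = [1.0 / n] * n                        # example weights
    stumps = []                              # (feature, threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        for f in range(len(X[0])):
            for t in sorted({x[f] for x in X}):
                for pol in (1, -1):
                    err = sum(wi for xi, yi, wi in zip(X, y, w)
                              if (1 if pol * (xi[f] - t) >= 0 else -1) != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, pol)
        err, f, t, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        preds = [1 if pol * (xi[f] - t) >= 0 else -1 for xi in X]
        w = [wi * math.exp(-alpha * yi * pi)
             for wi, yi, pi in zip(w, y, preds)]
        total = sum(w)
        w = [wi / total for wi in w]         # renormalize weights
        stumps.append((f, t, pol, alpha))
    return stumps

def bug_score(stumps, x):
    """Ensemble margin: higher means more likely a true bug."""
    return sum(alpha * (1 if pol * (x[f] - t) >= 0 else -1)
               for f, t, pol, alpha in stumps)

# Hypothetical warning features: (severity, warning-category id)
X = [[3, 1], [1, 4], [2, 2], [1, 3]]
y = [1, -1, 1, -1]                           # +1 = real bug, -1 = false positive
model = train_adaboost(X, y)

# Rank unseen warnings: likely bugs first, likely false positives last.
warnings = [[1, 4], [3, 2], [2, 1]]
ranked = sorted(warnings, key=lambda x: bug_score(model, x), reverse=True)
print(ranked)
```

The thesis itself ranks by the classifier's predicted probability; the ensemble margin above plays the same ordering role in this sketch.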
73

Maintenance of the Quality Monitor Web-Application

Ponomarenko, Maksym January 2013 (has links)
Applied Research in System Analysis (ARiSA) is a company specialized in the development of customer-specific quality models and applied research. In order to improve the quality of its projects and to reduce maintenance costs, ARiSA developed Quality Monitor (QM), a web application for quality analysis. The QM application was originally developed as a basic program to enable customers to evaluate the quality of their sources. The business logic of the application was therefore simplified and certain limitations were imposed on it, which in turn led to a number of issues related to user experience, performance, and architecture design. These aspects are important both for the application as a product and for its future promotion, and they matter to customers as end users. The main issues added to the maintenance list were: manual data upload, insufficient server resources to handle long-running and resource-consuming operations, no background processing or status reporting, simplistic presentation of analysis results together with known usability issues, and weak integration between the analysis back-ends and the front-end. To address these issues and improve on the existing limitations, a maintenance phase of the QM application was initiated. Its first aim was to stabilize the current version and improve the user experience. It was also needed for refactoring and for implementing more efficient processing of data uploads in the background. In addition, extended functionality would fulfill customer needs and transform QM from a project into a product. The extended functionality includes automated data upload from different build processes, new data visualizations, and improvement of the current functionality according to customer comments. The maintenance phase of the QM application was successfully completed and the thesis goals were met.
The current version is more stable and more responsive from a user experience perspective. Data processing is more efficient and is now implemented as background analysis with automatic data import. The user interface has been updated with visualizations for client-side interaction and progress reporting. The solution was evaluated and tested in close cooperation with QM application customers. This thesis describes the requirements analysis, the technology stack with the rationale for its choice, and the implementation, in order to show the maintenance results.
74

Software Quality Attributes - A Resource-Based Perspective

Wang, Hsiang-Ying 06 September 2011 (has links)
Many software development projects are unable to meet their deadlines with the expected quality and within budget. Basically, this is because the stakeholders and the development team hold different expectations and misunderstand the essence of how software is built. In most cases, the budget and resources are allocated according to the functional requirements, which describe the features of the software. The development teams, on the contrary, struggle with decisions about how to satisfy the non-functional requirements, which were not included in the budget during the development cycle. However, satisfying non-functional requirements is not without a price, and letting the development team allocate resources arbitrarily makes software budget estimation more complicated and less reliable. This research pointed out which non-functional requirements should abide by the international standard ISO 9126 and offered a relatively accurate budget estimation framework. The framework shows how the stakeholders and the development team can reach consensus about the software, how to integrate with cost accounting to monitor the budget distribution, and how software should be scheduled and its cost estimated for upcoming projects.
75

A Pre-enactment Model For Measuring Process Quality

Guceglioglu, A.selcuk 01 June 2006 (has links) (PDF)
Most process measurement studies are concerned with time- and cost-based models. Although quality is the other conventional aspect, there are no widely used models for measuring process quality in the literature. In order to provide complementary information about quality, a process quality measurement model was developed, and studies on process characteristics were surveyed within the scope of this thesis. Moreover, by exploiting the similarities between processes and software, studies in software quality were investigated. In the light of this research, a model was built on the basis of the ISO/IEC 9126 Software Product Quality Model. Some of the quality attributes are redefined in the model according to process characteristics, and new attributes unique to processes are developed. A case study was performed and its results are discussed from the perspectives of applicability, understandability, and suitability.
76

The Effects Of Test Driven Development On Software Productivity And Software Quality

Unlu, Cumhur 01 September 2008 (has links) (PDF)
In the 1990s, software projects became larger in size and more complicated in structure. Traditional development processes were not able to meet the needs of these growing projects. The comprehensive documentation required by traditional methodologies made processes slow and discouraged developers. Testing after all the code was written was time-consuming, too costly, and made error correction and debugging much harder. Fixing the code at the end of the project also affects the internal quality of the software. Agile software development processes evolved to bring quick solutions to these problems. Test Driven Development (TDD) is a technique, used in many agile methodologies, that suggests minimizing documentation, writing automated tests before implementing the code, and frequently running the tests to get immediate feedback. The aim is to increase software productivity by shortening error correction time and to increase software quality by providing rapid feedback to the developer. In this thesis, a software project is developed with TDD and compared, in terms of software productivity and software quality, with a control project developed using traditional techniques. In addition, the TDD project is compared with earlier work in terms of product quality. The benefits and challenges of TDD are also investigated throughout the process.
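The test-first cycle that this abstract describes can be sketched in a few lines. The function, its behavior, and the test names below are invented for illustration; the example assumes Python's standard unittest module.

```python
# A minimal sketch of the TDD cycle: the test is written first (it fails
# while the production code does not yet exist, the "red" step), then
# just enough code is written to make it pass ("green"). Names and
# behavior are invented for the example.
import unittest

def round_to_step(value, step):
    """Production code, written second: round value to the nearest multiple of step."""
    return round(value / step) * step

class RoundToStepTest(unittest.TestCase):
    """The test, written first, encodes the expected behavior."""
    def test_rounds_down_below_midpoint(self):
        self.assertEqual(round_to_step(7, 5), 5)

    def test_rounds_up_above_midpoint(self):
        self.assertEqual(round_to_step(8, 5), 10)

# Run the suite programmatically to get the immediate feedback TDD relies on.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RoundToStepTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

In practice the test would be committed and seen failing before `round_to_step` exists; only then is the implementation added.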
77

An Automated Quality Measurement Approach For Business Process Models

Gurbuz, Ozge 01 September 2011 (has links) (PDF)
Business process modeling has become a common need for organizations, so process quality also plays an important role for them. Most quality studies are based on cost and time, which can be analyzed during or after the execution of the business processes. There are also quality measures that can be analyzed before execution; measures of this type can give early feedback about the processes. Three frameworks are defined in the literature for a more comprehensive measurement. The first is adapted from software programs and aims to make process designs less error-prone, more understandable, and more maintainable. The second is adapted from object-oriented software design and provides an object-oriented view of the business process design. The last is adapted from the ISO/IEC Software Product Quality standard and makes it possible to measure the quality of the process itself rather than of its design. By conducting a case study, the measures defined in these frameworks were explored in terms of applicability, automation potential, and required time and effort on a set of business process models. This study showed that manual measurement takes time, requires effort, and is error-prone. Therefore, an approach was implemented that automates the measures with automation potential, in order to decrease the required time and effort and to increase the accuracy of the measurement. A second case study was then conducted on another set of business process models in order to validate the approach.
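Automating a pre-execution measure of the kind this abstract targets can look roughly like the sketch below. The process model is an invented adjacency map of activities, and the two measures (branching fan-out at decision points and average coupling) are simple illustrations, not the thesis's own metrics.

```python
# Hedged sketch: two simple pre-execution measures computed automatically
# over a made-up business process model (activity -> successor activities).
process = {
    "receive_order": ["check_stock"],
    "check_stock": ["ship", "backorder"],  # decision point with two branches
    "ship": ["send_invoice"],
    "backorder": ["send_invoice"],
    "send_invoice": [],
}

def branching_measure(model):
    """Total outgoing branches at decision points (fan-out > 1)."""
    return sum(len(succ) for succ in model.values() if len(succ) > 1)

def coupling_measure(model):
    """Average number of outgoing connections per activity."""
    edges = sum(len(succ) for succ in model.values())
    return edges / len(model)

print(branching_measure(process), coupling_measure(process))
```

Because such measures only read the model, they can run on every saved revision of a process design, giving the early feedback the abstract mentions.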
78

An empirical study on software quality : developer perception of quality, metrics, and visualizations

Wilson, Gary Lynn 09 December 2013 (has links)
Software tends to decline in quality over time, causing development and maintenance costs to rise. However, by measuring, tracking, and controlling quality during the lifetime of a software product, its technical debt can be held in check, reducing the total cost of ownership. The measurement of quality faces challenges due to disagreement about the meaning of software quality, the inability to measure quality factors directly, and the lack of measurement practice in the software industry. This report addresses these challenges through a literature survey, a metrics derivation process, and a survey of professional software developers. Definitions of software quality from the literature are presented and evaluated against responses from software professionals. A goal-question-metric process is used to derive quality-targeted metrics tracing back to a set of seven code-quality subgoals, while the survey of software professionals shows that, despite agreement that metrics and metric visualizations would be useful for improving software quality, these techniques are underutilized in practice.
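The goal-question-metric step mentioned above can be made concrete with a small sketch: from a goal such as "improve maintainability" and a question such as "is the code documented?", one candidate metric is comment density. The function and the sample source below are invented for the example, not taken from the report.

```python
# Hedged GQM sketch: goal "improve maintainability" -> question "is the
# code documented?" -> metric "comment density" (invented example).
def comment_density(source: str) -> float:
    """Fraction of non-blank lines that are comment lines."""
    lines = [line.strip() for line in source.splitlines() if line.strip()]
    if not lines:
        return 0.0
    comments = sum(1 for line in lines if line.startswith("#"))
    return comments / len(lines)

sample = """\
# Compute the factorial of n iteratively.
def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
"""
print(f"comment density: {comment_density(sample):.2f}")
```

A visualization layer, of the kind the survey respondents found useful, would then plot this value per file or per commit over time.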
79

Μεθοδολογία έγκαιρης εκτίμησης της γνώμης των χρηστών για την ποιότητα λογισμικού / A methodology for early estimation of users' opinions of software quality

Σταυρινούδης, Δημήτριος 25 June 2007 (has links)
In this dissertation, a number of methods and techniques that contribute to the early estimation of users' opinions of software quality are proposed. These are: a) the selection and use of software metrics and the analysis of their results; b) the design and analysis of questionnaires measuring users' opinions of software quality; c) the correlation between internal software metrics and the external quality characteristics of a software program; d) the change in a user's opinion of the quality of a software program over time, in relation to the user's level of experience; and e) the use and adaptation of rules and models from Belief Revision theory. The combination of these methods results in the proposed methodology of this dissertation.
80

Διεξαγωγή μετρήσεων ποιότητας με στόχο τη βελτίωση της συντηρησιμότητας σε λογισμικό αλληλεπίδρασης με Βάση Δεδομένων / Applying metrics to an object-oriented software interacting with a database to ensure its maintainability

Πέρδικα, Πολυτίμη 16 May 2007 (has links)
Although there is no single definition of "software quality", its value is clearly understood, especially through its absence. Software quality assurance is closely related to the concept of metrics, a process essential for estimating the state of the products, procedures, and resources of software production. By applying metrics to a software product, the characteristics that contribute significantly to its quality can be measured, and conclusions can be drawn about the degree to which the software fulfills quality criteria. This thesis presents a methodology for applying quality metrics to object-oriented software that implements interaction with a database, in order to draw conclusions mainly about its maintainability and, by extension, its reusability.
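One object-oriented metric of the kind such a methodology applies can be computed automatically, as in the sketch below: the number of methods per class, a simple proxy for the weighted-methods-per-class (WMC) metric, extracted with Python's ast module. The sample data access class is invented for the example.

```python
# Hedged sketch: count methods per class (a simple WMC proxy) using the
# standard-library ast module. The analyzed source is an invented sample.
import ast

def methods_per_class(source: str) -> dict:
    """Map each class name to its number of directly defined methods."""
    tree = ast.parse(source)
    return {
        node.name: sum(isinstance(item, ast.FunctionDef) for item in node.body)
        for node in ast.walk(tree)
        if isinstance(node, ast.ClassDef)
    }

sample = """
class OrderDAO:
    def insert(self, order): ...
    def update(self, order): ...
    def delete(self, order_id): ...
"""
print(methods_per_class(sample))
```

Classes with unusually high method counts are candidates for refactoring, which is how such a measurement feeds back into maintainability.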
