41 |
[en] SUPPORT TO THE SYNTHESIS OF STRUCTURAL MODELS OF OBJECT-ORIENTED SOFTWARE USING CO-EVOLUTIONARY GENETIC ALGORITHMS / [pt] APOIO À SÍNTESE DE MODELOS ESTRUTURAIS DE SOFTWARE ORIENTADO A OBJETOS UTILIZANDO ALGORITMOS GENÉTICOS CO-EVOLUCIONÁRIOS
Guimaraes, Thiago Souza Mendes, 25 October 2005 (has links)
[pt] This dissertation investigates the use of Co-evolutionary Genetic Algorithms in automating the development process of object-oriented software systems. The final quality of the software depends mainly on the quality of the design developed for it. During the design phase, several models are developed that anticipate different views of the final product and make it possible to evaluate the software even before it is implemented. The synthesis of a software model can therefore be seen as an optimization problem in which one searches for the best configuration of the elements covered by the object-oriented paradigm, such as classes, methods and attributes, that meets design quality criteria. The goal of this work was to study a way of synthesizing higher-quality designs through evolution with Co-evolutionary Genetic Algorithms. To evaluate the software design, software quality metrics such as Reusability, Flexibility, Understandability, Functionality, Extensibility and Effectiveness were investigated. These metrics were applied in the evaluation function, which in turn was defined with the aim of synthesizing an object-oriented software design of higher quality. Since this problem involves more than one objective at the same time, the Pareto technique for multi-objective problems was used. The results obtained were compared with designs produced by specialists and their characteristics were analyzed. The performance of the GA in the optimization process was compared with that of random search and, in all cases, the results obtained by the model were superior. /
[en] This work investigates the use of Co-evolutionary Genetic Algorithms in the automation of the development process of object-oriented software systems. The final quality of the software depends mainly on the quality of its design. During the design phase, different models are developed that anticipate various views of the end product, making it possible to evaluate the software before it is implemented. The synthesis of a software model can therefore be seen as an optimization problem in which one seeks the best configuration of the elements of the object-oriented paradigm, such as classes, methods and attributes, that satisfies design quality criteria. The goal of the work was to study a way to synthesize designs of better quality through their evolution by Co-evolutionary Genetic Algorithms. In order to assess software quality, software quality metrics such as Reusability, Flexibility, Understandability, Functionality, Extensibility and Effectiveness were also investigated. These metrics were applied in an evaluation function that, in turn, was defined with the aim of synthesizing an object-oriented design of better quality. Since this problem involves more than one objective at a time, the Pareto technique for multi-objective problems was used. The results were compared with designs produced by specialists and their characteristics were analyzed. The GA's performance in the optimization process was compared with that of random search and, in all cases, the model's results were superior.
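As a rough illustration of the Pareto technique mentioned above, the sketch below applies a dominance test to candidate designs scored by the six quality metrics; the scoring values and the dictionary encoding of a design are illustrative assumptions, not the dissertation's actual representation.

```python
from typing import Dict, List

# Quality objectives named in the abstract; all are treated as "higher is better".
OBJECTIVES = ["reusability", "flexibility", "understandability",
              "functionality", "extensibility", "effectiveness"]

def dominates(a: Dict[str, float], b: Dict[str, float]) -> bool:
    """True if design `a` Pareto-dominates `b`: no objective worse, at least one strictly better."""
    no_worse = all(a[m] >= b[m] for m in OBJECTIVES)
    strictly_better = any(a[m] > b[m] for m in OBJECTIVES)
    return no_worse and strictly_better

def pareto_front(population: List[Dict[str, float]]) -> List[Dict[str, float]]:
    """Keep only the non-dominated candidate designs."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Hypothetical example: three candidate class models scored by the six metrics.
candidates = [
    {"reusability": 0.7, "flexibility": 0.6, "understandability": 0.5,
     "functionality": 0.8, "extensibility": 0.6, "effectiveness": 0.7},
    {"reusability": 0.5, "flexibility": 0.5, "understandability": 0.4,
     "functionality": 0.6, "extensibility": 0.5, "effectiveness": 0.6},
    {"reusability": 0.6, "flexibility": 0.8, "understandability": 0.7,
     "functionality": 0.5, "extensibility": 0.7, "effectiveness": 0.6},
]
print(len(pareto_front(candidates)))  # 2: the middle candidate is dominated by the first
```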
|
42 |
UMA ABORDAGEM PARA AVALIAÇÃO DA QUALIDADE DE ARTEFATOS DE SOFTWARE / AN APPROACH FOR ASSESSING THE QUALITY OF SOFTWARE ARTIFACTS
Bertuol, Gelson, 27 August 2014 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / While applications and software systems have evolved and become more complex, mainly due to the increasing demands of customers and users, the organizations that produce or acquire them have sought alternatives to reduce costs and delivery times without affecting the quality of the final product. However, in order to make the evaluation of these products more effective, it is important to use a quality model that allows structuring the evaluation in a way that satisfies, among other requirements, the heterogeneous expectations of stakeholders. At the same time, it is recommended to start this evaluation as early as possible, in the first stages of the development process, in order to detect and fix problems before they propagate. In this sense, this work presents a study of quality models used in the evaluation of software products and, at the same time, proposes the assessment of the software artifacts generated and/or transformed by activities throughout the lifecycle of a software process. The proposal is based on a quality framework, structured from a metamodel, which relates the evaluation process to the several characteristics that involve the artifacts, such as their purposes, stakeholders, methods and corresponding metrics. The work also includes a supporting tool whose purpose is to guide evaluators in defining a plan for assessing the quality of those artifacts. Finally, the proposal was validated through a case study involving graduate students of the Federal University of Santa Maria. /
At the same time as applications and software systems have evolved and become more complex, owing mainly to the growing demands of customers and users, the organizations that produce or acquire them have looked for alternatives to reduce costs and delivery times without affecting the quality of the final product. However, for the evaluation of these products to be more effective, it is important to use a quality model that allows structuring the evaluation so that it satisfies, among other requirements, the heterogeneous expectations of stakeholders. In parallel, it is recommended to start this evaluation as early as possible, already in the first stages of a development process, with the goal of detecting and correcting the problems found before they propagate. In this sense, this work presents a study of quality models employed in the evaluation of software products and, at the same time, proposes the evaluation of the artifacts generated and/or transformed by the activities along the life cycle of a development process. The proposal is based on a quality framework, structured from a metamodel, which relates the evaluation process to the several characteristics that involve the artifacts, such as their purposes, stakeholders, methods and corresponding metrics. The work also comprises a supporting tool whose goal is to guide evaluators in defining a quality evaluation plan for such artifacts. Finally, the proposal was assessed and validated through a case study in which graduate students in computer science evaluated three real applications developed by undergraduate students at the Federal University of Santa Maria.
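To make the metamodel idea concrete, the following sketch captures, as plain data classes, the kind of relations the framework describes between artifacts, their purposes, stakeholders, and metrics; all class and field names here are illustrative assumptions rather than the thesis's actual metamodel.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Metric:
    name: str
    unit: str
    compute: Callable[[object], float]  # how the measurement is taken on an artifact

@dataclass
class Stakeholder:
    name: str
    expectation: str  # e.g. "requirements are unambiguous"

@dataclass
class Artifact:
    name: str                      # e.g. "use-case specification"
    purpose: str                   # why the artifact exists in the process
    stakeholders: List[Stakeholder] = field(default_factory=list)
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class EvaluationPlan:
    """Relates the artifacts under evaluation to the metrics applied to each of them."""
    artifacts: List[Artifact]

    def report(self) -> List[str]:
        # One summary line per artifact: its name and the metrics planned for it.
        return [f"{a.name}: {[m.name for m in a.metrics]}" for a in self.artifacts]
```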
|
43 |
A Goal-Driven Methodology for Developing Health Care Quality Metrics
Villar Corrales, Carlos, January 2011 (has links)
The definition of metrics capable of reporting on quality issues is a difficult task in the health care sector. This thesis proposes a goal-driven methodology for the development, collection, and analysis of health care quality metrics that expose, in a quantifiable way, the progress toward measurement goals stated by interested stakeholders. In other words, this methodology produces reports containing metrics that turn health care data into understandable information. The resulting Health Care Goal Question Metric (HC-GQM) methodology is based on the Goal Question Metric (GQM) approach, a methodology originally created for the software development industry and adapted here to the context and specificities of the health care sector. HC-GQM benefits from a double-loop validation process in which the methodology is first implemented, then analysed, and finally improved. The validation takes place in the context of adverse event management and incident reporting initiatives at a Canadian teaching hospital, where HC-GQM provides a set of meaningful metrics and reports on the occurrence of adverse events and incidents to the stakeholders involved. The results of a survey suggest that the users of HC-GQM found it beneficial and would use it again.
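As a rough sketch of the goal-question-metric refinement that HC-GQM builds on, the example below states a measurement goal, refines it into questions, and attaches metrics to each question; the adverse-event goal and metric names are hypothetical, not those produced in the hospital study.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Metric:
    name: str
    description: str

@dataclass
class Question:
    text: str
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class Goal:
    """A measurement goal stated by a stakeholder, refined into questions and metrics."""
    purpose: str
    viewpoint: str
    questions: List[Question] = field(default_factory=list)

# Hypothetical example in the spirit of the adverse-event reporting case.
goal = Goal(
    purpose="Understand the occurrence of adverse events on the ward",
    viewpoint="patient-safety officer",
    questions=[
        Question(
            text="How often are adverse events reported per month?",
            metrics=[Metric("adverse_events_per_month", "count of incident reports")],
        ),
        Question(
            text="How long does it take to close an incident report?",
            metrics=[Metric("mean_time_to_closure", "days from report to resolution")],
        ),
    ],
)
```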
|
44 |
Testování a kvalita softwaru v metodikách vývoje softwaru / Testing and quality assurance in software development methodologies
Vachalec, Vladan, January 2013 (has links)
The subject of this thesis is testing and quality assurance during software development. The theoretical part explains the meaning of software quality and then describes the metrics used to evaluate it. The following part explains the differences between software quality assurance in agile and traditional software development methodologies, including criteria for comparing the methodologies. Throughout the thesis, basic concepts are briefly summarized, including the differences between static and dynamic testing and between manual and automated testing, as well as the role of the quality assurance engineer in software development. The practical part extends the testing area of an existing software development methodology for small software projects (MMSP). New testing activities, artifacts, and roles are introduced so that the methodology matches real software-testing requirements and remains usable when developing more robust applications in bigger teams. Test management tools and test automation tools are described, followed by recommendations on a selected few of them for use with the methodology.
|
45 |
Datová kvalita v prostředí otevřených a propojitelných dat / Data quality in the context of open and linked data
Tomčová, Lucie, January 2014 (has links)
The master's thesis deals with data quality in the context of open and linked data. One of the goals is to define the specifics of data quality in this context. The specifics are considered mainly with respect to data quality dimensions (i.e. the data characteristics studied in data quality) and the possibilities of measuring them. The thesis also defines the effect on data quality connected with transforming data into linked data; the effect is described in terms of the possible risks and benefits that can influence data quality. For the data quality dimensions considered relevant in the context of open and linked data, a list of metrics is composed and verified on real data (open linked data published by a government institution). The thesis points to the need to recognize the differences specific to this context when assessing and managing data quality. At the same time, it offers possibilities for further study of this question and presents subsequent directions for both theoretical and practical development of the topic.
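As one possible illustration of measuring a data quality dimension on linked data, the sketch below computes a simple completeness score with rdflib; the chosen class, required properties, and file name are assumptions for illustration, not the metrics defined in the thesis.

```python
from rdflib import Graph
from rdflib.namespace import RDF, FOAF

def property_completeness(g: Graph, cls, required_props) -> float:
    """Share of instances of `cls` that carry a value for every required property."""
    subjects = list(g.subjects(RDF.type, cls))
    if not subjects:
        return 0.0
    complete = sum(
        1 for s in subjects
        if all(g.value(s, p) is not None for p in required_props)
    )
    return complete / len(subjects)

# Hypothetical usage on an open-data dump in Turtle format.
g = Graph()
g.parse("open_dataset.ttl", format="turtle")
score = property_completeness(g, FOAF.Person, [FOAF.name, FOAF.mbox])
print(f"completeness: {score:.2%}")
```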
|
46 |
Porovnání objektivních a subjektivních metrik kvality videa pro Ultra HDTV videosekvence / Comparison of objective and subjective video quality metrics for Ultra HDTV sequences
Bršel, Boris, January 2016 (has links)
This master's thesis deals with the assessment of the quality of Ultra HDTV video sequences using objective metrics. The thesis theoretically describes coding with the selected codecs H.265/HEVC and VP9, objective video quality metrics, and subjective methods for assessing video sequence quality. The next chapter deals with applying the H.265/HEVC and VP9 codecs to selected raw-format video sequences, from which the database of test sequences is built. The quality of these videos is then measured with objective metrics and a selected subjective method. The results are compared in order to find the objective metrics that correlate most consistently with the subjective assessment.
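A typical way to quantify how consistently an objective metric tracks subjective judgments is a correlation between the metric's scores and mean opinion scores over the same sequences; the sketch below shows this with hypothetical PSNR and MOS values, not the thesis's measured data.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-sequence scores: objective PSNR (dB) and subjective MOS (1-5).
psnr = np.array([41.2, 38.5, 35.1, 33.0, 30.4, 28.8])
mos = np.array([4.6, 4.2, 3.7, 3.1, 2.5, 2.1])

# Pearson measures linear agreement, Spearman measures monotonic (rank) agreement.
plcc, _ = pearsonr(psnr, mos)
srocc, _ = spearmanr(psnr, mos)
print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}")
```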
|
47 |
Metodický přístup k evaluaci výpočtů transportu světla / A Methodical Approach to the Evaluation of Light Transport Computations
Tázlar, Vojtěch, January 2020 (has links)
Photorealistic rendering has a wide variety of applications, and so there are many rendering algorithms and variations of them tailored to specific use cases. Even though practically all of them perform physically based simulation of light transport, their results on the same scene often differ, sometimes because of the nature of a given algorithm and, in the worse case, because of bugs in its implementation. It is difficult to compare these algorithms, especially across different rendering frameworks, because no standardized testing software or dataset is available. The only ways to obtain an unbiased comparison are therefore to create and use one's own dataset or to reimplement the algorithms in a single rendering framework of choice, and both solutions can be difficult and time-consuming. We address these problems with a test suite based on a rigorously defined methodology for evaluating light transport algorithms. We present a scripting framework for automated testing and fast comparison of rendering results and provide a documented set of non-volumetric test scenes for the most popular research-oriented rendering frameworks. Our test suite is easily extensible to support additional renderers and scenes.
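In the spirit of such automated comparison, the sketch below loads a converged reference image and several renderers' outputs of the same scene and reports a per-renderer error; the directory layout, file names, and the RMSE choice are assumptions, not the test suite's actual protocol.

```python
import numpy as np
from pathlib import Path
import imageio.v3 as iio

def rmse(img: np.ndarray, ref: np.ndarray) -> float:
    """Root-mean-square error between a rendering and the reference image."""
    return float(np.sqrt(np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)))

# Hypothetical layout: one reference per scene, one output image per renderer.
reference = iio.imread("scenes/cornell_box/reference.png")
for output in sorted(Path("scenes/cornell_box/outputs").glob("*.png")):
    rendering = iio.imread(output)
    print(f"{output.stem}: RMSE = {rmse(rendering, reference):.4f}")
```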
|
48 |
Effect of Education on Adult Sepsis Quality Metrics In Critical Care TransportSchano, Gregory R. 21 June 2019 (has links)
No description available.
|
49 |
Repousser les limites de l'identification faciale en contexte de vidéo-surveillance / Breaking the limits of facial identification in video-surveillance context
Fiche, Cécile, 31 January 2012 (has links)
Person identification systems based on the face are becoming more and more widespread and find very varied applications, in particular in the field of video surveillance. In this context, the performance of facial recognition algorithms depends largely on the image acquisition conditions, in particular when the pose varies, but also because the acquisition methods themselves can introduce artifacts. The main issues are focusing errors, which can blur the image, and compression errors, which produce block effects. The work carried out during this thesis therefore concerns the recognition of faces in images acquired by video surveillance cameras that exhibit blur or block artifacts, or that show faces in varying poses. We first propose a new approach that significantly improves the recognition of faces with a high level of blur or with strong block effects. Using specific metrics, the method evaluates the quality of the input image and adapts the training database of the recognition algorithms accordingly. We then focused on estimating the pose of the face. It is generally very difficult to recognize a face that is not frontal, and most face identification algorithms considered insensitive to this parameter still need to know the pose to reach an interesting recognition rate in a relatively short time. We therefore developed a pose estimation method based on recent recognition approaches in order to obtain a fast and sufficient estimate of this parameter. /
The person identification systems based on face recognition are becoming increasingly widespread and are being used in very diverse applications, particularly in the field of video surveillance. In this context, the performance of facial recognition algorithms largely depends on the image acquisition conditions, especially because the pose can vary, but also because the acquisition methods themselves can introduce artifacts. The main issues are focus imprecision, which can lead to blurred images, and errors related to compression, which can introduce block artifacts. The work done during the thesis focuses on facial recognition in images taken by video surveillance cameras, in cases where the images contain blur or block artifacts or show various poses. First, we propose a new approach that significantly improves facial recognition in images with high blur levels or with strong block artifacts. The method, which makes use of specific no-reference metrics, starts by evaluating the quality level of the input image and then adapts the training database of the recognition algorithms accordingly. Second, we focus on facial pose estimation. It is normally very difficult to recognize a face in an image taken from a viewpoint other than the frontal one, and the majority of facial identification algorithms that are robust to pose variation need to know the pose in order to achieve a satisfying recognition rate in a relatively short time. We have therefore developed a fast and satisfying pose estimation method based on recent recognition techniques.
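As a loose illustration of quality-adapted recognition, the sketch below estimates input sharpness with the variance of the Laplacian and selects a matching training database; the measure and the thresholds are stand-in assumptions, since the thesis relies on its own dedicated blur and block metrics.

```python
import cv2
import numpy as np

def sharpness_score(gray: np.ndarray) -> float:
    """Variance of the Laplacian: low values indicate a blurred image."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def pick_training_set(gray: np.ndarray) -> str:
    """Select a training database matched to the estimated degradation level."""
    score = sharpness_score(gray)
    if score < 50:        # heavily blurred probe image (hypothetical threshold)
        return "train_strong_blur"
    elif score < 150:     # moderate blur (hypothetical threshold)
        return "train_mild_blur"
    return "train_sharp"  # clean images

# Hypothetical probe image captured by a surveillance camera.
probe = cv2.imread("probe_face.png", cv2.IMREAD_GRAYSCALE)
print(pick_training_set(probe))
```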
|
50 |
Structural Characterization of Fibre Foam Materials Using Tomographic Data
Satish, Shwetha, January 2024 (has links)
Plastic foams, such as Styrofoam, protect items during transport. Recognising the recycling challenges of these foams, there is growing interest in developing alternatives from renewable resources, particularly cellulose fibres, for packaging. A deep understanding of the foam's structure, specifically achieving a uniform distribution of small pore sizes, is crucial to enhancing its mechanical properties. Prior work highlights the need for improved X-ray techniques and image-processing techniques to address challenges in data acquisition and analysis. In this study, X-ray microtomography equipment was used to capture images of the fibre foam sample, and software such as XMController and XMReconstructor provided 2D projection images at different magnifications (2X, 4X, 10X, and 20X). ImageJ and Python algorithms were then used to distinguish pores from fibres in the obtained images and to characterize the pores. This included bilateral filtering, which reduced background noise while preserving the fibres in the grayscale images. The Otsu threshold method converted the grayscale image to a binary image, and the inverted binary image was used to form the local thickness image. The local thickness image represented fibres with pixel value zero and blown-up spheres of different intensities representing the pores and their characteristics. As the magnification of the local thickness images increased, the pore area, pore volume, pore perimeter, and total number of pores decreased, indicating a shift towards a more uniform distribution of smaller pores. Histograms, scatter plots, and pore intensity distribution histograms visually represented this trend. Similarly, pore density increased, porosity decreased, and the specific surface area remained constant with increasing magnification, suggesting a more compact structure. Objective image quality metrics, such as PSNR, RMSE, SSIM, and NCC, were used: grayscale images of different magnifications were compared, and it was noted that, as the number of projections increased, the 10X vs. 20X and 2X vs. 4X pairs consistently performed well in terms of image quality. The applied methodologies, comprising pore analysis and image quality metrics, exhibit significant strengths in characterising porous structures and evaluating image quality. /
Plastic foams, such as Styrofoam, protect items during transport. Recognising the recycling challenges of these foams, there is growing interest in developing alternatives from renewable resources, particularly cellulose fibres, for packaging. A deep understanding of the structure, specifically achieving a uniform distribution of small pore sizes, is crucial for improving the mechanical properties of the foam. Previous work highlights the need to improve X-ray techniques and image-processing techniques to meet challenges in data acquisition and analysis. In this study, X-ray microtomography equipment was used to capture images of the fibre foam sample, and software such as XMController and XMReconstructor produced 2D projection images at different magnifications (2X, 4X, 10X and 20X). ImageJ and Python algorithms were then used to separate pores and fibres in the obtained images and to characterize the pores; this included bilateral filtering, which helped reduce background noise and preserve the fibres in the grayscale images. The Otsu threshold method converted the grayscale image into a binary image, and the inverted binary image was used to build the local thickness image. The local thickness image represented fibres with pixel value zero and blown-up spheres of different intensities representing the pores and their characteristics. As the magnification of the local thickness images increased, the pore area, pore volume, pore perimeter and total number of pores decreased, indicating a shift towards a more uniform distribution of smaller pores. Histograms, scatter plots and pore intensity distribution histograms visually represented this trend. Similarly, properties such as pore density increased, porosity decreased and the specific surface area remained constant with increasing magnification, suggesting a more compact structure. Objective measurements of image quality metrics such as PSNR, RMSE, SSIM and NCC were used: grayscale images at different magnifications were compared, and it was noted that, as the number of projections increased, the 10X vs. 20X and 2X vs. 4X pairs consistently performed well in terms of image quality. The applied methodologies, comprising pore analysis and image quality metrics, show significant strengths in characterizing porous structures and evaluating image quality.
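A rough approximation of the described processing chain with standard scientific-Python tools is sketched below; the Euclidean distance transform stands in for the local-thickness step and the file name is hypothetical, so this is not the exact ImageJ/Python workflow used in the study.

```python
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import denoise_bilateral
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from scipy import ndimage as ndi

# Load one reconstructed grayscale slice (hypothetical file name).
slice_img = img_as_float(io.imread("fibre_foam_slice.tif", as_gray=True))

# 1. Bilateral filter: suppress background noise while preserving fibre edges.
smoothed = denoise_bilateral(slice_img)

# 2. Otsu threshold separates fibres (foreground) from pores (background).
binary_fibres = smoothed > threshold_otsu(smoothed)
pores = ~binary_fibres  # inverted image: pore space becomes foreground

# 3. Distance transform as a simple stand-in for the local-thickness map.
thickness_map = ndi.distance_transform_edt(pores)

# 4. Per-pore statistics: pore count, porosity, and mean pore area.
labelled = label(pores)
regions = regionprops(labelled)
porosity = pores.mean()
mean_area = np.mean([r.area for r in regions]) if regions else 0.0
print(f"pores: {len(regions)}, porosity: {porosity:.3f}, mean pore area: {mean_area:.1f} px")
```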
|