1 | A Comparative Evaluation Between Two Design Solutions for an Information Dashboard
Gannholm, Lovisa (January 2013)
This study is a software usability design case concerning information presentation in a software dashboard. The dashboard is intended to present system information about an enterprise resource planning system. The study evaluates whether the intended users of the dashboard prefer a list-based or an object-based presentation of the information, and why. It also investigates whether the opportunity to become familiar with a prototype affects the evaluation's result.

The study was performed using parallel prototypes evaluated with users. The use of parallel prototypes is a rather unexplored area; likewise, little research has been done on how user experience changes over time. Two prototypes were created, presenting the same information in two different design solutions: one list-based and one object-based. The prototypes were evaluated for usability with ten prospective users. The evaluation consisted of two parts, one quantitative and one qualitative. Half of the respondents were given the chance to become familiar with the list-based prototype, and half with the object-based prototype, after which they evaluated both sequentially.

The evaluation showed that seven of the ten respondents preferred the list-based prototype. The two primary reasons were that they were more used to the list-based concept from their work, and that the list-based prototype presented all information about an application at once, whereas the object-based prototype required a separate request for each type of information, each opening in a new pop-up window. The primary reason the remaining three respondents preferred the object-based prototype was that it had a more modern look and gave a cleaner impression, since it presented only the information the respondent was interested in at each point in time.

The results also indicated that becoming familiar with a prototype by testing it for a couple of days affected the outcome. Eight of the ten respondents preferred the prototype they had become familiar with, and the only respondents who liked or preferred the object-based prototype were those who had become familiar with it. These findings support the research of Dow et al. (2010) on the use of parallel prototypes, i.e. creating several prototypes in parallel, and conform with the results of Karapanos et al. (2009) on how user experience changes over time.

Additional findings were that all but one of the respondents considered the prototype they had become familiar with to have an acceptable level of usability. The study also confirmed that all respondents were positive toward using a dashboard in their work, that the presented information was sufficient for a first version of the dashboard, and that the different groups of users would use the dashboard differently and therefore need slightly different information.
2 | Evaluating Unsupervised Methods for Out-of-Distribution Detection on Semantically Similar Image Data
Pierrau, Magnus (January 2021)
Out-of-distribution detection concerns methods used to detect data that deviates from the underlying data distribution used to train a machine learning model. This is an important topic, as artificial neural networks have been shown to be capable of producing arbitrarily confident predictions, even for anomalous samples that deviate from the training distribution. Previous work has developed many reportedly effective methods for out-of-distribution detection, but these are often evaluated on data that is semantically different from the training data, and therefore do not necessarily reflect the true performance these methods would show under more challenging conditions.

In this work, six unsupervised out-of-distribution detection methods are evaluated and compared under such conditions, in the context of classifying semantically similar image data with deep neural networks. The performance of all methods varies significantly across the tested datasets, and no single method is consistently superior. Encouraging results are found for a method using ensembles of deep neural networks, but overall, the observed performance of all methods is considerably lower than in many related works, where easier tasks are used to evaluate them.
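The abstract does not name the six methods it evaluates, but a common unsupervised approach in this family scores samples by the predictive uncertainty of a deep ensemble. The sketch below is a minimal, hypothetical illustration of that idea, not the thesis's implementation: the network architecture, ensemble size, and threshold are all placeholder assumptions.

    # A minimal, hypothetical sketch of ensemble-based OOD scoring (illustrative
    # only; the thesis's actual methods, models, and data are not specified here).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def make_classifier(in_dim=32, num_classes=10):
        # Stand-in for a trained deep network; in practice each ensemble member
        # is trained independently on the in-distribution training data.
        return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                             nn.Linear(64, num_classes))

    ensemble = [make_classifier() for _ in range(5)]  # assumed ensemble size

    @torch.no_grad()
    def ood_score(x, models):
        # Average the softmax outputs over the ensemble members, then score each
        # sample by the entropy of the averaged distribution: high entropy
        # (low or disagreeing confidence) suggests the sample is OOD.
        probs = torch.stack([F.softmax(m(x), dim=-1) for m in models]).mean(dim=0)
        return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

    x = torch.randn(8, 32)         # a batch of (here random) feature vectors
    scores = ood_score(x, ensemble)
    threshold = 1.5                # placeholder; chosen on held-out data in practice
    print(scores > threshold)     # True -> flagged as out-of-distribution

In practice, each ensemble member would be trained independently on the in-distribution data, and the entropy threshold would be chosen on held-out data to trade off false alarms against missed detections, which is exactly the kind of calibration that becomes harder when the out-of-distribution data is semantically similar to the training data.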