51

A guide to motivating students to twist to better spelling

Du Cloux, Kim Elaine 01 January 2003 (has links)
Students learn and retain more when they enjoy the learning process. Students have fun learning to spell when visual, auditory, and hands-on activities are included in the process. In addition, the intervention project can be used to support and assist second-language learners. The benefits of this intervention project will not only strengthen students' phonemic and spelling foundation but will also enhance their reading comprehension and writing effectiveness.
52

Robotic process automation - An evaluative model for comparing RPA-tools

Bornegrim, Lucas, Holmquist, Gustav January 2020 (has links)
This research studies the three market-leading RPA-tools, Automation Anywhere, Blue Prism and UiPath, in order to address the lack of literature on methods for evaluating and comparing RPA-tools. Design science research was performed by designing and creating artefacts in the form of process implementations and an evaluative model. A typical process representing a common area of use was implemented in each of the three RPA-tools in order to create the evaluative model. Official documentation, along with the three implementations, was studied. Evaluative questions specific to RPA-tool evaluation were created based on the product quality model in the ISO/IEC 25010 standard. Characteristics dependent on organisational context were excluded from the evaluation, so that the evaluative model does not depend on any specific business environment. The results of the research provide knowledge of (1) how RPA-tools can be implemented and (2) the differences that exist between the three market-leading RPA-tools. The research also contributes a method for investigating and evaluating RPA-tools. When creating the evaluative model, some of the criteria in the ISO/IEC 25010 quality model were found to be of low relevance and were therefore not included in the model. By analysing and evaluating the created evaluative model using a theoretical concept of digital resources and their evaluation, the validity of the model was reinforced. From an evaluative perspective, this research emphasises the need to adapt and change existing evaluative methods in order to successfully evaluate the most relevant characteristics of RPA-tools.
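As a rough illustration of the kind of evaluative model the abstract describes, the sketch below scores tools against a subset of ISO/IEC 25010 product-quality characteristics and aggregates the scores with weights. The characteristic subset, the 0-5 scale, the weights and all scores are invented placeholders, not the model or the findings of the thesis:

```python
from dataclasses import dataclass

# Hypothetical subset of ISO/IEC 25010 product-quality characteristics;
# the thesis's actual question set and weighting are not reproduced here.
CHARACTERISTICS = ["functional suitability", "reliability", "usability",
                   "performance efficiency", "maintainability"]

@dataclass
class ToolEvaluation:
    name: str
    scores: dict  # score per characteristic on an assumed 0-5 ordinal scale

def weighted_score(ev: ToolEvaluation, weights: dict) -> float:
    """Aggregate the per-characteristic scores into one comparable figure."""
    total = sum(weights.values())
    return sum(ev.scores[c] * weights[c] for c in CHARACTERISTICS) / total

# Illustrative, invented numbers -- not measurements from the thesis.
weights = {c: 1.0 for c in CHARACTERISTICS}
tools = [
    ToolEvaluation("Automation Anywhere", {c: 3 for c in CHARACTERISTICS}),
    ToolEvaluation("Blue Prism",          {c: 4 for c in CHARACTERISTICS}),
    ToolEvaluation("UiPath",              {c: 4 for c in CHARACTERISTICS}),
]
for t in sorted(tools, key=lambda t: weighted_score(t, weights), reverse=True):
    print(f"{t.name}: {weighted_score(t, weights):.2f}")
```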
53

Teplotně vlhkostní namáhaní stěny dřevostavby / Hygrothermal processes in walls of wooden houses

Veselá, Lucie January 2018 (has links)
The diploma thesis deals with the hygrothermal stress of a timber-frame wall, focusing on the connection of the wall to the base structure of the building. Three details were chosen. The work concentrates on the detail with the most common external-wall composition used in the Czech Republic: a load-bearing structure of KVH studs filled with mineral insulation, sheathed with board elements, and insulated from the exterior with an ETICS system using expanded polystyrene. This detail was assessed in simulation software. To compare the software-calculated results with reality, an experimental model was built and subjected to measurements. Part of the diploma thesis is a comparison of the stress on the detail under different design boundary conditions, with and without anchoring.
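The steady-state side of such an assessment can be sketched as a temperature profile through the wall build-up, in the spirit of the Glaser method. The layer data and design temperatures below are assumptions for illustration (the surface resistances are the common EN ISO 6946 defaults), not the detail examined in the thesis:

```python
# Steady-state temperature profile through an assumed timber-frame wall
# build-up. Each layer: (name, thickness d [m], conductivity lambda [W/mK]).
layers = [
    ("gypsum board",       0.0125, 0.21),
    ("mineral insulation", 0.14,   0.035),
    ("sheathing board",    0.015,  0.13),
    ("EPS (ETICS)",        0.10,   0.039),
]
R_si, R_se = 0.13, 0.04       # surface resistances [m2K/W], EN ISO 6946
T_in, T_out = 20.0, -15.0     # assumed design temperatures [degC]

R = [R_si] + [d / lam for _, d, lam in layers] + [R_se]
q = (T_in - T_out) / sum(R)   # heat flux density [W/m2]

T = T_in
print(f"heat flux q = {q:.2f} W/m2")
for name, r in zip(["inner surface film"] + [n for n, _, _ in layers], R[:-1]):
    T -= q * r                # temperature drop across each resistance
    print(f"after {name}: {T:6.2f} degC")
```

Interstitial condensation risk would then be judged by comparing the dew-point profile against these interface temperatures.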
54

An Implementation of Splitting for Dung Style Argumentation Frameworks

Wong, Renata 19 February 2018 (has links)
Argumentation and reasoning have been an area of research in disciplines such as philosophy, logic and artificial intelligence for quite some time. In AI, the knowledge needed for reasoning can be represented in various kinds of representation systems. The natural problem this poses is the possible incompatibility between heterogeneous systems as far as communication between them is concerned, which limits the possibility of extending smaller knowledge bases into larger ones. To facilitate a common platform for exchange across systems, unified formalisms for the different approaches to knowledge representation are required. This was the motivation for Dung [11] to propose, in his 1995 paper, an approach that later came to be known as an abstract argumentation framework. Roughly speaking, Dung's arguments are abstract entities related to each other by means of conflicts between them. An intuitive graphical representation of a Dung-style framework is a graph whose nodes stand for arguments and whose edges stand for conflicts. A framework postulated this way is, on the one hand, too general to be used on its own, but on the other hand general enough to allow for varied extensions increasing its expressiveness, which indeed have been proposed. They include value-based argumentation frameworks by Bench-Capon et al. [6], preference-based argumentation frameworks by Amgoud and Cayrol [1] and bipolar argumentation frameworks by Brewka and Woltran [7], to name a few. The present thesis is concerned with yet another variation of Dung's framework: the concept of splitting. It was developed by Baumann [4], one underlying purpose being that frameworks split into two parts and computed separately may show improved computation time compared with the unsplit variant. One of the main tasks of my work was to develop an efficient algorithm for the splitting operation based on the theoretical framework given in [4]; beyond that, I hoped to confirm the expectation that splitting can indeed make computation perform better.
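To make the splitting idea concrete, here is a minimal brute-force sketch for stable semantics only: compute the extensions of the first part, reduce the second part by the arguments already defeated, and combine. Baumann's theory covers further semantics and a more refined reduct; this toy framework and all names are illustrative, not the thesis's implementation:

```python
from itertools import combinations

def stable_extensions(args, attacks):
    """Brute force: conflict-free sets that attack every outside argument."""
    exts = []
    for r in range(len(args) + 1):
        for E in map(set, combinations(args, r)):
            conflict_free = not any((a, b) in attacks for a in E for b in E)
            attacks_rest = all(any((a, b) in attacks for a in E)
                               for b in args - E)
            if conflict_free and attacks_rest:
                exts.append(E)
    return exts

def split_stable(a1, a2, attacks):
    """Splitting under stable semantics, assuming no attacks from a2 into a1:
    solve part one, remove what it defeats from part two, solve the rest."""
    att1 = {(x, y) for (x, y) in attacks if x in a1 and y in a1}
    results = []
    for e1 in stable_extensions(a1, att1):
        defeated = {y for y in a2 if any((x, y) in attacks for x in e1)}
        rest = a2 - defeated
        att2 = {(x, y) for (x, y) in attacks if x in rest and y in rest}
        results.extend(e1 | e2 for e2 in stable_extensions(rest, att2))
    return results

# Toy framework: a attacks b, b attacks c; split into {a, b} and {c}.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
print(stable_extensions(args, attacks))          # -> [{'a', 'c'}]
print(split_stable({"a", "b"}, {"c"}, attacks))  # same result via splitting
```

The brute-force search is exponential in the number of arguments, which is exactly why computing two smaller parts separately can pay off.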
55

On the Modularity of a System

Johansson, Per, Holmberg, Henric January 2010 (has links)
This thesis considers the problem of creating and designing an architecture for a software project that will result in a system, called Melencolia, for treating depression and other mental illnesses over the Internet. One of the requirements for this project is to create a system that can be extended in the future. From this requirement we have derived the concept of modularity. In order to create a modular architecture, we have concluded that modularity is a quality characteristic of multiple quality attributes such as "maintainability" and "reusability". We deploy Attribute-Driven Design (ADD) in the Melencolia project; by doing this, an architecture focused on modularity can be created. Since modularity is not a quality attribute but rather a quality characteristic, we had to change the input to ADD from a quality attribute to a quality characteristic. Furthermore, we derive and propose a new method for evaluating quality characteristics of software architectures. Finally, we apply this method to the architecture of Melencolia and thereby obtain an indication of how well our proposed architecture satisfies modularity.
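As one illustration of what a quantitative modularity indicator might look like (this is not the evaluation method derived in the thesis), the sketch below scores a module decomposition by the share of class dependencies that stay inside a module; all module and class names are invented:

```python
# Illustrative measure only: a modular architecture keeps most dependencies
# inside a module (high cohesion) and few between modules (low coupling).
modules = {
    "web":  {"LoginView", "SessionView"},
    "core": {"TreatmentPlan", "Scheduler"},
    "data": {"PatientRepo"},
}
dependencies = [  # (from_class, to_class), hypothetical
    ("LoginView", "SessionView"),
    ("SessionView", "TreatmentPlan"),
    ("TreatmentPlan", "Scheduler"),
    ("Scheduler", "PatientRepo"),
    ("TreatmentPlan", "PatientRepo"),
]

owner = {cls: m for m, classes in modules.items() for cls in classes}
internal = sum(owner[a] == owner[b] for a, b in dependencies)
print(f"modularity ratio: {internal / len(dependencies):.2f}")  # -> 0.40
```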
56

Desenvolvimento de um software para implantação do processo de enfermagem / Development of software for the implementation of the nursing process

Silva, Claudir Lopes da 22 May 2015 (has links)
The nursing process (NP) is a tool that guides professional nursing care. The integration of computer systems into the nursing field should support its application and enable all of its steps to be carried out, as required by current legislation. OBJECTIVES: to develop a software application for the nursing process in health services and to evaluate its potential for recording the nursing process. MATERIALS AND METHODS: a technological development study producing software that collects patient information for carrying out the NP. The software was built with an object-oriented methodology; the NP steps follow COFEN Resolution 358/2009 and are grounded in Maslow's Theory of Basic Human Needs. The evaluation followed the ABNT NBR ISO/IEC 14598-6 (2004) guidelines and the quality model described in ISO/IEC 25010 (2008). The resulting Nursing Process Application System (SISAPENF) was validated by 21 experts (10 nurses and 11 IT professionals). RESULTS: among the nurse experts, agreement was 100% for functional suitability, 93.7% for reliability, 94.5% for usability, 100% for performance efficiency, 100% for compatibility, and 100% for security. Among the IT experts, agreement was 95.1% for functional suitability, 93.3% for reliability, 97.4% for usability, 98.9% for performance efficiency, 93.7% for compatibility, 100% for security, 97.3% for maintainability, and 94.4% for portability. SISAPENF therefore exceeded the stipulated target of 70% agreement on every characteristic. CONCLUSION: the study showed that the SISAPENF software meets the need to apply the nursing process in all of its steps, is easy to access, and can also be used for realistic simulation, training, and daily practice.
57

An Empirical Evaluation & Comparison of Effectiveness & Efficiency of Fault Detection Testing Techniques

Natraj, Shailendra January 2013 (has links)
Context: This thesis analyses a replication of a software experiment conducted by Natalia and Sira at the Technical University of Madrid, Spain. The empirical study was conducted to verify and validate the experimental data and to evaluate the effectiveness and efficiency of the testing techniques. The analysis blocks considered were observable faults, failure visibility and observed faults. The statistical data analysis used the ANOVA and classification packages of SPSS. Objective: To evaluate and compare the results obtained from the statistical data analysis, and to verify and validate the effectiveness and efficiency of the testing techniques using ANOVA and classification tree analysis for percentage subject, percentage defect-subject and yes/no values for each of the blocks. RQ1: Empirical evaluation of the effectiveness of fault detection testing techniques for the blocks (observable faults, failure visibility and observed faults) using ANOVA and classification trees. RQ2: Empirical evaluation of the efficiency of fault detection techniques, based on time and number of test cases, using ANOVA. RQ3: Comparison of, and inference from, the results obtained for both effectiveness and efficiency. Method: The research focuses on statistical data analysis to empirically evaluate the effectiveness and efficiency of the fault detection techniques for the experimental data collected at UPM (Technical University of Madrid, Spain). Empirical strategy used: software experiment. Results: The analysis results obtained for the observable-faults block were standardized (Ch. 5); within this block, the functional and structural techniques were equally effective. In the failure-visibility block, the results were partially standardized; fault detection was more effective in the program types nametbl and ntree than in cmdline. The results for the observed-faults block were partially standardized and diverse; the significant factors in this block were program type, fault type and technique. In the efficiency block, subjects took less time to isolate faults in the program type cmdline, and fault-detection efficiency with the generated test cases was also highest for cmdline. Conclusion: This research will help practitioners in industry and academia understand the factors influencing the effectiveness and efficiency of testing techniques. The work also presents a comprehensive analysis and comparison of the results for the blocks observable faults, failure visibility and observed faults, and discusses the factors influencing the efficiency of the fault detection techniques.
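A minimal sketch of the two analysis instruments named here, one-way ANOVA and a classification tree, on invented data standing in for the UPM experiment's blocks:

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical replication data: % of seeded faults detected per subject,
# grouped by testing technique (the thesis's real data is not reproduced).
functional = rng.normal(70, 10, 30)
structural = rng.normal(68, 10, 30)

F, p = f_oneway(functional, structural)   # one-way ANOVA across techniques
print(f"ANOVA: F={F:.2f}, p={p:.3f}")

# Classification tree: do (technique, program, fault type) predict whether
# a fault was observed (yes/no)?  Features are invented placeholders.
X = rng.integers(0, 3, size=(60, 3))      # encoded technique/program/fault
y = rng.integers(0, 2, size=60)           # fault observed: yes/no
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(f"tree training accuracy: {tree.score(X, y):.2f}")
```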
58

Quality Assurance of Exposure Models for Environmental Risk Assessment of Substances / Qualitätssicherung von Expositionsmodellen zur Umweltrisikoabschätzung von Substanzen

Schwartz, Stefan 04 September 2000 (has links)
Environmental risk assessment of chemical substances in the European Union is based on a harmonised scheme. The required models and parameters are laid down in the Technical Guidance Document (TGD) and are implemented in the EUSES software. An evaluation study of the TGD exposure models was carried out; in particular, the models for estimating chemical intake by humans were investigated. The objective of this study was two-fold: first, to develop an evaluation methodology, since no appropriate approach is available in the scientific literature; second, to elaborate the applicability and limitations of the models and to provide proposals for their improvement. The principles of model evaluation in terms of quality assurance, model validation and software evaluation were elaborated, and a suitable evaluation protocol for chemical risk assessment models was developed. Quality assurance of a model includes internal validation (e.g. an investigation of the underlying theory) and external validation (e.g. a comparison of the results with experimental data), and addresses the evaluation of the respective software. It should focus not only on the predictive capability of a model, but also on the strength of the theoretical underpinnings, the evidence supporting the model's conceptualisation, the database and the software. The external validation was performed using a set of reference substances with different physico-chemical properties and use patterns. Additionally, sensitivity and uncertainty analyses were carried out, and alternative models were discussed. Recommendations for improvement and maintenance of the risk assessment methodology were presented. To perform the software evaluation, quality criteria for risk assessment software were developed. From a theoretical point of view, it was shown that the models strongly depend on the lipophilicity of the substance, that the underlying assumptions drastically limit their applicability, and that realistic concentrations may seldom be expected. If the models are applied without adjustment, high uncertainties must inevitably be expected. However, many cases were found in which the models deliver highly valuable results. The overall system was classified as a good compromise between complexity and practicability, but several restrictions for particular chemicals and classes of chemicals were revealed: the investigated models for assessing indirect human exposure are currently not fully applicable to dissociating compounds, very polar compounds, very lipophilic compounds, ions, some surfactants, compounds whose metabolites cause the problems, and mixtures. In a strict sense, the method is only applicable to persistent, non-dissociating chemicals of intermediate lipophilicity; further limitations may exist. Regarding the software, it was found that EUSES basically fulfils the postulated criteria but is highly complex and non-transparent. To overcome these inadequacies, a more modular design is proposed.
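The sensitivity and uncertainty analyses mentioned here can be illustrated by Monte Carlo propagation through a simple lipophilicity-driven fish-intake module. The regression, the distributions and the intake defaults below are assumed stand-ins for illustration, not the TGD's actual equation set:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Illustrative fish-intake module in the spirit of the TGD models.
log_kow = rng.normal(4.0, 0.5, N)               # substance lipophilicity
log_bcf = 0.85 * log_kow - 0.70                 # Veith-type BCF regression (assumed)
c_water = rng.lognormal(np.log(1e-3), 0.8, N)   # mg/L in surface water (assumed)
c_fish = 10 ** log_bcf * c_water                # mg/kg in fish
intake = c_fish * 0.115 / 70.0                  # mg/kg bw/d (115 g fish, 70 kg adult)

for q in (5, 50, 95):
    print(f"P{q:02d} intake: {np.percentile(intake, q):.2e} mg/kg bw/d")
```

The spread between the 5th and 95th percentiles makes visible how strongly the lipophilicity assumption dominates the predicted human intake.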
59

Τεχνικές εξόρυξης δεδομένων και εφαρμογές σε προβλήματα διαχείρισης πληροφορίας και στην αξιολόγηση λογισμικού / Data mining techniques and their applications in data management problems and in software systems evaluation

Τσιράκης, Νικόλαος 20 April 2011 (has links)
The World Wide Web has gradually transformed into a large data repository holding vast amounts of data of many different types. These data roughly double every year, yet the useful information in them seems to decrease. The area of data mining has arisen over the last decade to address this problem and has become not only an important research area but also one with large real-world potential. Data mining has many directions and handles various types of data; when the data are, for example, data streams or XML documents, the problems become particularly challenging and interesting. Contemporary systems and applications such as communities of practice also need appropriate data mining techniques and algorithms to help their members. Finally, software evaluation benefits greatly from data mining used to facilitate the comprehension and maintainability evaluation of a software system's source code: source-code artifacts and measurement values can serve as input to data mining algorithms that provide insights into a system's structure or group artifacts with similar software measurements.
First, data streams are large volumes of data arriving continuously, and data mining techniques have been proposed and studied to help users better understand and analyze this information. Clustering is a useful and ubiquitous tool in data analysis. With the rapid increase in web traffic and e-commerce, understanding user behaviour from interaction with a website is becoming more and more important for website owners, and clustering, combined with personalization techniques over this information space, has become a necessity. The knowledge obtained by learning user preferences can help improve web content, expose usability issues related to the content and its structure, ensure the security of provided data, analyze the groups of users that can be derived from web access logs, and extract patterns, profiles and trends. This thesis investigates the application of a new model for clustering and analyzing click-stream data on the World Wide Web with two different approaches.
The next part of the thesis deals with data mining techniques for communities of practice: groups of people taking part in a collaborative way of learning and exchanging ideas. Systems supporting argumentative collaboration have become more and more popular in the digital world, and there are many research attempts regarding collaborative filtering and recommendation systems.
Depending on the system and its needs, developers have to deal with special cases in order to provide a useful service to users. Data mining can play an important role in collaboration systems that want to provide decision-support functionality; in such systems, it can be defined as the effort to generate actionable models through automated analysis of their databases, and it can only be deployed successfully when it generates insights substantially deeper than what a plain view of the data can give. This thesis introduces a framework, applicable to a wide range of software platforms, that aims at facilitating collaboration and learning among users; more precisely, it presents an approach integrating techniques from the data mining and social network analysis disciplines.
The next part of the thesis deals with XML data and ways to handle the huge volumes of data they may hold. Data written in a sophisticated markup language such as XML have lately made great strides in many domains. Processing and management of XML documents have become popular research issues, the main problem being the need to index them optimally for storage and retrieval. This thesis first presents a unified clustering algorithm for both homogeneous and heterogeneous XML documents, and then uses this algorithm in an XML P2P system that efficiently distributes a set of clustered XML documents in a P2P network in order to speed up user queries.
Ultimately, data mining, with its ability to handle large amounts of data and uncover hidden patterns, has the potential to facilitate the comprehension and maintainability evaluation of a software system. This thesis investigates the applicability and suitability of data mining techniques to this end, focusing on their ability either to produce overviews of a software system (thus supporting a top-down approach) or to point out specific parts of the system that require further attention (thus supporting a bottom-up approach).
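A minimal sketch of click-stream clustering as described in the first part: k-means over hypothetical session feature vectors (hits per page category). The features, the two behaviour groups and all numbers are invented:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical session features extracted from web access logs
# (counts of hits per page category); not the thesis's actual model.
rng = np.random.default_rng(1)
sessions = np.vstack([
    rng.poisson([8, 1, 0, 1], size=(50, 4)),   # "reader" behaviour
    rng.poisson([1, 6, 5, 0], size=(50, 4)),   # "shopper" behaviour
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(sessions)
print("cluster sizes:", np.bincount(km.labels_))
print("centroids (hits per page category):")
print(km.cluster_centers_.round(1))
```

The recovered centroids summarize the dominant browsing profiles, which is the kind of pattern a site owner would feed into personalization.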
60

GARREC: ferramenta de apoio no processo de certificação de software da CERTICS / GARREC: Supporting tool on the process of software's certification of CERTICS

Medeiros, Adriana Gonçalves Silva de 01 September 2017 (has links)
The CERTICS certification was developed as a public policy instrument that seeks to contribute to sustainable national development and can support national software companies in the evolution required to become more competitive against foreign software. However, this certification, like others, requires professional effort and financial resources, which is a problem notably for small software companies. This work presents GARREC, a Guide for Meeting the Requirements of the Expected Results of CERTICS (Guia para Atendimento dos Requisitos dos Resultados Esperados da CERTICS), a tool developed to support the understanding and attainment of the CERTICS certification, complementing the existing documentation. GARREC was built to ease the understanding of CERTICS concepts and the fulfilment of its expected results by proposing evidence suited to small-company scenarios; it thereby helps reduce the investment required for certification. The research method involved analyzing the CERTICS Reference Model for Evaluation and detailing the specific requirements of its expected results, for which evidence classified by relevance was proposed. In this way, all evaluated aspects are considered, ensuring full coverage of the certification requirements.
For the evaluation of GARREC, an experiment was carried out in which participants used the guide to meet predetermined expected results and then answered a survey. Three companies with different levels of familiarity with CERTICS took part: one already certified, one in the process of certification, and one with no previous knowledge of it. Based on the results of the evaluation survey, GARREC achieves its objectives of assisting in the understanding and fulfilment of the CERTICS certification requirements, with 91.3% acceptance of the effectiveness items and 97.5% acceptance of the applicability items. Broader validation in the field is still needed for a more consistent assessment of the tool.
