21

Validation of Machine Learning and Visualization based Static Code Analysis Technique

Mahmood, Waqas, Akhtar, Muhammad Faheem January 2009 (has links)
Software security has always been an afterthought in software development, which results in insecure software. Companies rely on penetration testing to detect security vulnerabilities in their software, yet incorporating security at an early stage of development reduces cost and overhead. Static code analysis can be applied at the implementation phase of the software development life cycle. Applying machine learning and visualization to static code analysis is a novel idea: the technique learns patterns using the normalized compression distance (NCD) and classifies source code as correct or faulty usage on the basis of training instances. Visualization also helps to classify code fragments according to their associated colors. A prototype called the Code Distance Visualizer (CDV) was developed to implement this technique. Empirical validation is required to test the efficiency of this technique. In this research we conduct a series of experiments to test its efficiency, using real-life open source software as our test subjects. We collected bugs from the corresponding bug-reporting repositories, along with faulty and corrected versions of the source code. We train CDV by marking correct and faulty versions of code fragments; on the basis of this training, CDV classifies other code fragments as correct or faulty. We measured its fault detection ratio, false negative ratio, and false positive ratio. The outcome shows that this technique is efficient in defect detection and produces a low number of false alarms.
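The normalized compression distance at the heart of this technique is straightforward to compute with an off-the-shelf compressor. The sketch below is illustrative only, not the authors' CDV implementation; the choice of zlib and the nearest-neighbour `classify` helper are assumptions made for the example.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(fragment: str, faulty: list[str], correct: list[str]) -> str:
    """Nearest-neighbour classification against marked training fragments."""
    f = fragment.encode()
    d_faulty = min(ncd(f, t.encode()) for t in faulty)
    d_correct = min(ncd(f, t.encode()) for t in correct)
    return "faulty" if d_faulty < d_correct else "correct"
```

Because NCD needs no parsing or feature engineering, the same helper works on any language's source text, which is what makes compression-based classification attractive for this task.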
22

DefectoFix: An interactive defect fix logging tool.

Hameed, Muhammad Muzaffar, Haq, Muhammad Zeeshan ul January 2008 (has links)
Despite the large efforts made during the development phase to produce fault-free systems, most software implementations still require testing of the entire system. The main problem in software testing is automation that can verify the system without manual intervention. Recent work in software testing concerns automated fault injection using fault models from a repository. This requires a lot of effort, which adds to the complexity of the system. To solve this issue, this thesis suggests the DefectoFix framework. DefectoFix is an interactive defect fix logging tool that contains five components, namely a Version Control System (VCS), source code files, a differencing algorithm, Defect Fix Model (DFM) creation, and additional information (project name, class name, file name, revision number, diff model). The proposed differencing algorithm extracts detailed information by detecting differences in source code files, performing the comparison at the sub-tree level. The extracted differences, together with the additional information, are stored as DFMs in a repository; the DFMs can later be used for the automated fault injection process. The DefectoFix framework is validated by a tool developed in the Ruby programming language. Our case study confirms that the proposed framework generates correct DFMs and is useful in automated fault injection and software validation activities.
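A defect fix model of this kind can be pictured as a record pairing the textual difference between a faulty and a fixed revision with the identifying metadata listed above. The sketch below is a simplified line-level illustration using Python's difflib, not the sub-tree comparison or the Ruby tool described in the thesis; all names are hypothetical.

```python
import difflib
from dataclasses import dataclass, field

@dataclass
class DefectFixModel:
    """Hypothetical DFM record: metadata plus the extracted difference."""
    project: str
    file_name: str
    revision: str
    diff: list[str] = field(default_factory=list)

def build_dfm(project: str, file_name: str, revision: str,
              faulty_src: str, fixed_src: str) -> DefectFixModel:
    # Line-level stand-in for the thesis' sub-tree differencing.
    diff = list(difflib.unified_diff(faulty_src.splitlines(),
                                     fixed_src.splitlines(),
                                     lineterm=""))
    return DefectFixModel(project, file_name, revision, diff)
```

Stored DFM records like this one could then be replayed in reverse (applying the faulty side over fixed code) to drive an automated fault injection campaign.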
23

OPS-SAT Software Simulator

Suteu, Silviu Cezar January 2016 (has links)
OPS-SAT is an in-orbit laboratory mission designed to allow experimenters to deploy new on-board software and perform in-orbit demonstrations of new technology and concepts related to mission operations. The NanoSat MO Framework facilitates the process of developing experimental on-board software for OPS-SAT by abstracting the complexities related to communication across the space-to-ground link as well as the details of low-level device access. The objective of this project is to implement functional simulation models of OPS-SAT peripherals and orbit/attitude behavior, which, integrated together with the NanoSat MO Framework, provide a sufficiently realistic runtime environment for OPS-SAT on-board software experiment development. Essentially, the simulator exposes communication interfaces for executing commands which affect the payload instruments and/or retrieve science data and telemetry. The commands can be run either from the MO Framework or manually, from an intuitive GUI which performs syntax checking; in this case, the output is displayed for advanced debugging. The end result of the thesis work is a virtual machine which has all the tools installed to develop cutting-edge technology space applications.
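A peripheral simulation model of this kind reduces, at its simplest, to a command interface backed by internal state. The sketch below illustrates that pattern generically; it is not the NanoSat MO Framework API, and the `Camera` model and its command names are invented for the example.

```python
class Camera:
    """Hypothetical simulated payload instrument with a command interface."""
    def __init__(self):
        self.powered = False

    def execute(self, command: str) -> str:
        if command == "POWER_ON":
            self.powered = True
            return "OK"
        if command == "TAKE_PICTURE":
            # Return dummy science data only when the instrument is powered.
            return "IMG:0xDEADBEEF" if self.powered else "ERR:POWER_OFF"
        return "ERR:UNKNOWN_COMMAND"

camera = Camera()
print(camera.execute("TAKE_PICTURE"))  # ERR:POWER_OFF
print(camera.execute("POWER_ON"))      # OK
print(camera.execute("TAKE_PICTURE"))  # IMG:0xDEADBEEF
```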
24

Information Visualization and Machine Learning Applied on Static Code Analysis

Kacan, Denis, Sidlauskas, Darius January 2008 (has links)
Software engineers will possibly never see perfect source code in their lifetimes, but they are seeing much better analysis tools for finding defects in software. The approaches used in static code analysis have evolved from simple code crawling to the use of statistical and probabilistic frameworks. This work presents a new technique that incorporates machine learning and information visualization into static code analysis. The technique learns patterns in a program's source code using a normalized compression distance and applies them to classify code fragments as faulty or correct. Since the classification is frequently imperfect, the training process plays an essential role. A visualization element is used in the hope that it lets the user better understand the inner state of the classifier, making the learning process transparent. An experimental evaluation is carried out in order to prove the efficacy of an implementation of the technique, the Code Distance Visualizer. The outcome of the evaluation indicates that the proposed technique is reasonably effective in learning to differentiate between faulty and correct code fragments. Moreover, the visualization element enables the user to discern when the tool is correct in its output and when it is not, and to take corrective action (further training or retraining) interactively, until the desired level of performance is reached.
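The color coding that such a visualization relies on can be as simple as mapping each fragment's classification margin onto a red-green scale. The sketch below shows one way to derive such a color from the nearest-neighbour NCD distances computed in the sketch under entry 21; it is an illustration of the idea, not the Code Distance Visualizer's actual rendering.

```python
def fragment_color(d_faulty: float, d_correct: float) -> str:
    """Map the classification margin to an RGB hex color:
    red for fragments closer to faulty training examples,
    green for fragments closer to correct ones."""
    # Margin in [-1, 1]: negative means closer to the faulty class.
    margin = max(-1.0, min(1.0, d_faulty - d_correct))
    red = int(255 * (1 - margin) / 2)
    green = int(255 * (1 + margin) / 2)
    return f"#{red:02x}{green:02x}00"

print(fragment_color(0.30, 0.90))  # mostly red: looks faulty
print(fragment_color(0.85, 0.40))  # mostly green: looks correct
```

A user watching these colors shift as training examples are added can see at a glance when the classifier's inner state has stabilized, which is the transparency the abstract argues for.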
25

Incremental Lifecycle Validation of Knowledge-Based Systems through CommonKADS

Batarseh, Feras 01 January 2011 (has links)
This dissertation introduces a novel validation method for knowledge-based systems (KBS). Validation is an essential phase in the development lifecycle of knowledge-based systems: it ensures that the system is valid and reliable, reflects the knowledge of the expert, and meets the specifications. Although many validation methods have been introduced for knowledge-based systems, there is still a need for an incremental validation method based on a lifecycle model. Lifecycle models provide a general framework for the developer and a mapping technique from the system into the validation process. They support reusability and modularity and offer guidelines for knowledge engineers to achieve high-quality systems. CommonKADS is a set of models that helps to represent and analyze knowledge-based systems, and it offers a de facto standard for building them. Additionally, CommonKADS is knowledge-representation independent, with powerful models that can represent many domains. Defining an incremental validation method based on a conceptual lifecycle model (such as CommonKADS) has a number of advantages: it reduces time and effort, eases implementation by providing a template to follow, yields a well-structured design, and allows better tracking of errors when they occur. Moreover, the validation method introduced in this dissertation is based on case testing and on selecting an appropriate set of test cases to validate the system. The method makes use of the results of prior test cases in an incremental validation procedure, which facilitates defining a minimal set of test cases that provides complete and effective system coverage. CommonKADS does not define validation, verification, or testing in any of its models; this research seeks to establish a direct relation between validation and lifecycle models, and introduces a validation method for KBS embedded into CommonKADS.
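Selecting a minimal set of test cases that still covers the whole system is, in essence, a set-cover problem, and a greedy pass over prior test results is a standard approximation. The sketch below illustrates that idea generically; it is not the dissertation's actual procedure, and the rule-coverage representation is an assumption.

```python
def minimal_test_set(coverage: dict[str, set[str]], targets: set[str]) -> list[str]:
    """Greedy set-cover: repeatedly pick the test case whose prior run
    covered the most still-uncovered targets (e.g., KBS rules)."""
    selected, uncovered = [], set(targets)
    while uncovered:
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:           # remaining targets are unreachable
            break
        selected.append(best)
        uncovered -= gained
    return selected

# Example: three test cases with the rules each one exercised previously.
cov = {"tc1": {"r1", "r2"}, "tc2": {"r2", "r3", "r4"}, "tc3": {"r4"}}
print(minimal_test_set(cov, {"r1", "r2", "r3", "r4"}))  # ['tc2', 'tc1']
```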
26

A validation software package for discrete simulation models

Florez, Rossanna E. January 1986 (has links)
This research examined the simulation model validation process. After a model is developed, its reliability should be evaluated using validation techniques. This research was concerned with the validation of discrete simulation models that simulate an existing physical system. While many validation techniques are available in the literature, only techniques that compare available real-system data to model data were considered. Three of these techniques were selected and automated in a microcomputer software package. The package consists of six programs intended to aid the user in the model validation process. DATAFILE allows for real and model data input and creates files using the DIF format. DATAGRAF plots real against model system responses and provides histograms of the variables; these two programs are based on the approach used in McNichol's statistical software. Hypothesis tests comparing real and model responses are conducted using TESTHYPO. The potential cost of using an invalid model, in conjunction with the determination of the alpha level of significance, is analyzed in COSTRISK. A non-parametric hypothesis test can be performed using NOTPARAM. Finally, a global validity measure can be obtained using VALSCORE. The software includes brief explanations of each technique and its use. The software was written in the BASIC computer language and was demonstrated using a simulation model and hypothetical but realistic system data. The hardware chosen for the package was the IBM Personal Computer with 256K of memory. / M.S.
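The hypothesis-testing step that TESTHYPO automates amounts to comparing a sample of real-system responses against a sample of model responses. The sketch below shows such a comparison with a two-sample t-test; it is a modern stand-in, not the original BASIC program, and the example data are invented.

```python
from scipy import stats

# Hypothetical samples of the same response measured on the real
# system and produced by the simulation model.
real_response  = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3]
model_response = [12.0, 12.2, 11.7, 12.1, 12.4, 11.8]

# H0: the real and model response means are equal.
t_stat, p_value = stats.ttest_ind(real_response, model_response)
alpha = 0.05  # chosen in light of the cost of using an invalid model
if p_value < alpha:
    print(f"Reject H0 (p={p_value:.3f}): model and system responses differ.")
else:
    print(f"Fail to reject H0 (p={p_value:.3f}): no evidence of a difference.")
```

Tying the choice of alpha to the cost of accepting an invalid model is exactly the trade-off the COSTRISK program is described as analyzing.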
27

Test data generation from UML state machine diagrams using GAs

Doungsa-ard, Chartchai, Dahal, Keshav P., Hossain, M. Alamgir, Suwannasart, T. January 2008 (has links)
Automatic test data generation helps testers to validate software against user requirements more easily. Test data can be generated from many sources, for example the experience of testers, the source program, or the software specification. Selecting a proper test data set is a decision-making task: testers have to decide what test data they should use, and a heuristic technique is needed to solve this problem automatically. In this paper, we propose a framework for generating test data from software specifications. The selected specification is the Unified Modeling Language (UML) state machine diagram, which describes a system in terms of states that change when an action occurs in the system. The generated test data is a sequence of these actions, and such sequences help testers to know how they should test the system. The quality of the generated test data is measured by the number of transitions fired using the test data: the more transitions the test data can fire, the better its quality. The number of covered transitions is also used as feedback for a heuristic search for a better test set. Genetic algorithms (GAs) are selected for searching for the best test data. Our experimental results show that the proposed GA-based approach can work well for generating test data for some types of UML state machine diagrams.
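The fitness function described here counts how many state machine transitions an action sequence fires. The sketch below wraps that fitness in a minimal GA; the toy state machine, population size, and operators are all assumptions for illustration, not the paper's implementation.

```python
import random

# Toy state machine: (state, action) -> next state.
TRANSITIONS = {("idle", "start"): "running", ("running", "pause"): "paused",
               ("paused", "start"): "running", ("running", "stop"): "idle"}
ACTIONS = ["start", "pause", "stop"]

def fitness(seq: list[str]) -> int:
    """Count distinct transitions fired by replaying the action sequence."""
    state, fired = "idle", set()
    for action in seq:
        nxt = TRANSITIONS.get((state, action))
        if nxt is not None:
            fired.add((state, action))
            state = nxt
    return len(fired)

def evolve(pop_size: int = 30, seq_len: int = 8, generations: int = 50) -> list[str]:
    pop = [[random.choice(ACTIONS) for _ in range(seq_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, seq_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:            # point mutation
                child[random.randrange(seq_len)] = random.choice(ACTIONS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, "fires", fitness(best), "of", len(TRANSITIONS), "transitions")
```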
28

Validation of a method for three-dimensional analysis using cone beam computed tomography

Bianchi, Jonas [UNESP] 12 December 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Cone beam computed tomography (CBCT) has been widely used in the clinical and scientific medical and dental fields. Various software packages from different manufacturers offer options for image processing, segmentation, and quantitative three-dimensional analysis. One point that still generates controversy in this area is the reliability of computer-analyzed data, owing to the limitations of the algorithms used, the complexity of the structure being assessed, the magnitude of the measurement, the spatial resolution, and variations in analysis methodology. Many of these limitations arise because analyses are conducted in non-standardized ways that are highly dependent on the operator. Thus, the general objective of this study was to develop and validate a new application for the automatic measurement of bone displacements from CBCT scans of a macerated human skull. To evaluate the method's reliability, we created a prototype in which physical displacements were applied to the skull and followed by CBCT examinations, and then performed the same displacements virtually. Measurements were obtained from registrations on the maxilla and the skull base using our application and the 3D Slicer software, respectively. We also performed a visual analysis after semi-automatic segmentation using ITK-SNAP to detect the smallest bone defect in a bovine bone fragment. Our results showed that the tools tested could detect physical displacements smaller than the spatial resolution of the image, with results comparable to 3D Slicer. For the virtual displacements, precise results were obtained, with the displacements limited by the image resolution. Furthermore, we observed that the detection and visualization of small bone defects, even those larger than the spatial resolution of the image, can be compromised by the image segmentation process. We conclude that the automatic analysis application developed is reliable for three-dimensional measurements in the craniomaxillofacial area.
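Measuring a bone displacement between two registered CBCT volumes reduces, at its simplest, to comparing corresponding 3D landmark coordinates. The sketch below computes displacement magnitudes that way; it is a schematic illustration, not the application developed in the thesis, and the landmark data are invented.

```python
import numpy as np

# Hypothetical landmark coordinates (mm) on the same anatomy,
# before and after displacement, in a common reference frame
# (i.e., after registration on the skull base).
before = np.array([[10.2, 35.1, 22.8],
                   [14.7, 33.9, 25.0]])
after  = np.array([[10.9, 35.0, 22.9],
                   [15.4, 33.8, 25.1]])

# Euclidean displacement of each landmark.
displacement = np.linalg.norm(after - before, axis=1)
for i, d in enumerate(displacement):
    print(f"landmark {i}: {d:.2f} mm")
```

Sub-voxel displacements like these can still be resolved when the registration averages information over many voxels, which is consistent with the study's finding that detected physical displacements were smaller than the image's spatial resolution.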
29

Validation of a method for three-dimensional analysis using cone beam computed tomography

Bianchi, Jonas. January 2016 (has links)
Advisor: João Roberto Gonçalves / Master's
30

Test techniques applied to embedded software in optical networks

Fadel, Aline Cristine, 1984- 19 August 2018 (has links)
Advisors: Regina Lúcia de Oliveira Moraes, Eliane Martins / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Tecnologia / Abstract: This work presents the details and results of automated and manual tests that used the fault injection technique and were applied to a GPON network. In the first experiment, the test was automated and emulated physical faults based on the state machine of the network's embedded software; this test used an optical switch controlled by a test robot. The second experiment was a manual test that injected faults into the protocol communication messages exchanged through the optical network, in order to validate the fault tolerance mechanisms of the network's central software. This experiment used the Conformance and Fault injection methodology to prepare, execute, and report the results of the test cases. In both experiments, a standard test documentation was used to facilitate the reproduction of the tests, so that they can be applied in other environments. By applying both tests, the optical network achieves greater reliability, availability, and robustness, attributes that are essential for systems requiring high dependability. / Master's in Technology
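Message-level fault injection of the kind used in the second experiment can be pictured as a mutator sitting between sender and receiver, corrupting selected bytes before delivery. The sketch below illustrates that pattern generically; it is not the thesis' tooling, and the frame contents are invented.

```python
import random

def inject_fault(frame: bytes, flip_probability: float = 0.05) -> bytes:
    """Corrupt a protocol frame by flipping random bits, emulating
    faults in messages exchanged over the optical link."""
    corrupted = bytearray(frame)
    for i in range(len(corrupted)):
        if random.random() < flip_probability:
            corrupted[i] ^= 1 << random.randrange(8)
    return bytes(corrupted)

# A receiver under test should detect the corruption (e.g., via a
# checksum) and exercise its fault tolerance path instead of crashing.
original = b"\x01\x10MSG:ACTIVATE_ONU\x7f"
mutated = inject_fault(original, flip_probability=0.2)
print(original.hex(), "->", mutated.hex())
```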
