1 |
Interoperability of wireless communication technologies in hybrid networks: evaluation of end-to-end interoperability issues and quality of service requirements. Abbasi, Munir A. January 2011
Hybrid Networks employing wireless communication technologies have nowadays brought closer the vision of communication “anywhere, any time with anyone”. Such communication technologies consist of various standards, protocols, architectures, characteristics, models, devices, and modulation and coding techniques. These different technologies naturally share some common characteristics, but there are also many important differences. New advances in these technologies are emerging very rapidly, with the advent of new models, characteristics, protocols and architectures. This rapid evolution imposes many challenges and issues to be addressed, and of particular importance are the interoperability issues of the following wireless technologies: Wireless Fidelity (Wi-Fi) IEEE 802.11, Worldwide Interoperability for Microwave Access (WiMAX) IEEE 802.16, Single Channel per Carrier (SCPC), Digital Video Broadcasting via Satellite (DVB-S/DVB-S2), and Digital Video Broadcasting Return Channel via Satellite (DVB-RCS). Because of these differences, such technologies do not generally interoperate easily with each other, giving rise to various interoperability and Quality of Service (QoS) issues.
The aim of this study is to assess and investigate end-to-end interoperability issues and QoS requirements, such as bandwidth, delay, jitter, latency, packet loss, throughput, TCP performance, UDP performance, unicast and multicast services, and availability, on hybrid wireless communication networks (employing both satellite broadband and terrestrial wireless technologies). The thesis provides an introduction to wireless communication technologies, followed by a review of previous research on Hybrid Networks (both satellite and terrestrial wireless technologies, particularly Wi-Fi, WiMAX, DVB-RCS, and SCPC). Previous studies have discussed Wi-Fi, WiMAX, DVB-RCS, SCPC and 3G technologies and their standards, as well as their properties and characteristics, such as operating frequency, bandwidth, data rate, basic configuration, coverage, power, interference, social issues, security problems, and physical and MAC layer design and development issues. Although some previous studies provide valuable contributions to this area of research, they are limited to link layer characteristics, TCP performance, delay, bandwidth, capacity, data rate, and throughput. None of the studies covers all aspects of end-to-end interoperability issues and QoS requirements, such as bandwidth, delay, jitter, latency, packet loss, link performance, TCP and UDP performance, and unicast and multicast performance, at the end-to-end level on hybrid wireless networks.
Interoperability issues are discussed in detail, and a comparison of the different technologies and protocols was carried out using appropriate testing tools, assessing various performance measures including bandwidth, delay, jitter, latency, packet loss, throughput and availability. The standards, protocol suites/models and architectures for Wi-Fi, WiMAX, DVB-RCS and SCPC, along with different platforms and applications, are discussed and compared. Using a robust approach, which includes a new testing methodology and a generic test plan, the testing was conducted using various realistic test scenarios on real networks, comprising variable numbers and types of nodes. The data, traces, packets, and files were captured from various live scenarios and sites.
The test results were analysed in order to measure and compare the characteristics of wireless technologies, devices, protocols and applications. The motivation of this research is to study all the end-to-end interoperability issues and Quality of Service requirements for rapidly growing Hybrid Networks in a comprehensive and systematic way. The significance of this research is that it is based on a comprehensive and systematic investigation of issues and facts, instead of hypothetical ideas/scenarios or simulations, and that this informed the design of a test methodology for empirical data gathering by real network testing, suitable for the measurement of hybrid network single-link or end-to-end issues using proven test tools. This systematic investigation encompasses an extensive series of tests measuring delay, jitter, packet loss, bandwidth, throughput, availability, the performance of audio and video sessions, multicast and unicast performance, and stress testing. The testing covers the most common test scenarios in hybrid networks, and recommendations are given for achieving good end-to-end interoperability and QoS in hybrid networks. Contributions of the study include the identification of gaps in the research, a description of interoperability issues, a comparison of the most common test tools, the development of a generic test plan, a new testing process and methodology, and analysis and network design recommendations for end-to-end interoperability issues and QoS requirements. This covers the complete cycle of this research. It is found that UDP is more suitable than TCP for hybrid wireless networks, particularly for the demanding applications considered, since TCP presents significant problems for multimedia and live traffic, which impose strict QoS requirements on delay, jitter, packet loss and bandwidth. The main bottleneck for satellite communication is the delay of approximately 600 to 680 ms when communicating over geostationary satellites, due to the long distance involved and the finite speed of light. The delay and packet loss can be controlled using various methods, such as traffic classification, traffic prioritization, congestion control, buffer management, delay compensators, protocol compensators, automatic repeat request techniques, flow scheduling, and bandwidth allocation.
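For orientation, the delay figure quoted above can be checked with a back-of-the-envelope propagation calculation (a minimal sketch, not taken from the thesis; the nominal altitude and the neglect of slant range, coding and queueing are simplifying assumptions):

    # Rough estimate of geostationary round-trip delay (illustrative assumption,
    # not a result from the thesis).
    C_KM_PER_S = 299_792.458       # speed of light in vacuum, km/s
    GEO_ALTITUDE_KM = 35_786       # nominal GEO altitude; real slant range is somewhat longer

    up_and_down = 2 * GEO_ALTITUDE_KM / C_KM_PER_S   # ground -> satellite -> ground
    round_trip = 2 * up_and_down                     # request and response both cross the link

    print(f"one-way (up + down): {up_and_down * 1000:.0f} ms")   # about 239 ms
    print(f"round trip:          {round_trip * 1000:.0f} ms")    # about 477 ms

Propagation alone thus accounts for roughly 480 to 560 ms of round-trip delay once slant-range geometry is included; processing, coding and queueing plausibly make up the rest of the 600 to 680 ms reported above.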
|
2 |
SPLMT-TE: a software product lines system test case tool. Lima Neto, Crescencio Rodrigues 31 January 2011
Nowadays, the decision whether or not to work with Software Product Lines (SPL) has become a mandatory item in the strategic planning of companies that work within a specific domain. SPL enables organisations to achieve significant reductions in development and maintenance costs, and quantitative improvements in productivity, quality and customer satisfaction.
On the other hand, the drawbacks of adopting SPL include the extra investment required to create reusable artefacts, to make organisational changes, and so on. Moreover, testing is more complicated and critical in product lines than in single systems, yet it remains the most effective means of quality assurance in SPL. Therefore, learning how to choose the right tools for SPL testing is a benefit that helps reduce some of the problems faced by companies. Despite the growing number of available tools, SPL testing still lacks tools that support the system test level while managing the variability of test artefacts.
In this context, this work presents a software product line testing tool for building system tests from use cases, addressing challenges for SPL testing identified in the literature review. The tool was developed with the aim of reducing the effort required to perform testing activities in an SPL environment.
In addition, this dissertation presents a systematic exploratory study whose goal is to investigate the state of the art regarding testing tools, synthesising the available evidence and identifying gaps among the tools available in the literature. This work also presents a controlled experimental study to evaluate the effectiveness of the proposed tool.
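As a loose illustration of what managing the variability of test artefacts can mean at the system test level (a generic sketch in Python, not the SPLMT-TE tool itself; the feature names and steps are invented), a use-case-derived test case can carry variation points that are resolved per product configuration:

    # Generic sketch (assumption): a system test derived from use-case steps,
    # where optional steps are bound to product-line features.
    from dataclasses import dataclass

    @dataclass
    class TestStep:
        action: str
        expected: str
        feature: str | None = None   # None means the step is common to every product

    USE_CASE_STEPS = [
        TestStep("open the media library", "the library screen is shown"),
        TestStep("play a protected track", "playback starts", feature="DRM"),
        TestStep("share the track", "the share dialog opens", feature="SOCIAL"),
    ]

    def derive_test_case(selected_features: set[str]) -> list[TestStep]:
        """Keep the common steps plus the steps whose feature is selected for this product."""
        return [s for s in USE_CASE_STEPS
                if s.feature is None or s.feature in selected_features]

    # A product configured without the SOCIAL feature gets a shorter test case.
    for step in derive_test_case({"DRM"}):
        print(step.action, "->", step.expected)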
|
3 |
Porovnání komerčních a open source nástrojů pro testování softwaru / Commercial and open source software testing tool comparison. Štolc, Robin January 2010
The subject of this thesis is software testing tools, specifically tools for manual testing, automatic testing, bug tracking and test management. The aim of this thesis is to introduce the reader to several testing tools from each category and to compare these tools. This objective relates to the secondary aim of creating a set of criteria for comparing testing tools. The contribution of this thesis is the description and comparison of the chosen testing tools and the creation of a set of criteria that can be used to compare any other testing tools. The thesis is divided into four main parts. The first part briefly describes the theoretical foundations of software testing, the second part deals with descriptions of the various categories of testing tools and their role in the testing process, the third part defines the method of comparison and the comparison criteria, and in the last, fourth part the selected testing tools are described and compared.
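One plausible way to operationalise such a criteria-based comparison (a minimal sketch assuming a simple weighted-sum model; the criteria, weights and scores below are invented and not taken from the thesis):

    # Minimal weighted-sum comparison of testing tools (illustrative values only).
    CRITERIA_WEIGHTS = {"price": 0.2, "ease_of_use": 0.3, "reporting": 0.2, "integrations": 0.3}

    # Scores on a 0-10 scale, assigned by the evaluator for each tool and criterion.
    SCORES = {
        "Tool A": {"price": 9, "ease_of_use": 6, "reporting": 7, "integrations": 5},
        "Tool B": {"price": 4, "ease_of_use": 8, "reporting": 8, "integrations": 9},
    }

    def weighted_score(tool: str) -> float:
        return sum(CRITERIA_WEIGHTS[c] * SCORES[tool][c] for c in CRITERIA_WEIGHTS)

    for tool in SCORES:
        print(tool, round(weighted_score(tool), 2))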
|
4 |
Network Security Issues, Tools for Testing Security in Computer Network and Development Solution for Improving Security in Computer Network. Skaria, Sherin; Reza Fazely Hamedani, Amir January 2010
No description available.
|
6 |
Příprava a zavedení výkonnostních testů / Preparation and implementation of performance tests. Dosoudil, Tomáš January 2017
This master thesis is dedicated to testing and to introducing it into a new environment. The theoretical part is devoted to testing and projects: it defines what a project is, treats tests as a subject of project management, and describes the types of tests, together with tool selection issues for stress tests. The practical part focuses on describing the processes necessary for the preparation and implementation of performance tests, and the steps needed to manage the implementation project.
|
7 |
Security vulnerability verification through contract-based assertion monitoring at runtime. Hoole, Alexander M. 08 January 2018
In this dissertation we seek to identify ways in which the systems development life cycle (SDLC) can be augmented with improved software engineering practices to measurably address security concerns that have arisen relating to security vulnerability defects in software. We propose a general model for identifying potential vulnerabilities (weaknesses) and use runtime monitoring to verify their reachability and exploitability during development and testing, thereby reducing security risk in delivered products.
We propose a form of contract for our monitoring framework that is used to specify the environmental and system security conditions necessary for the generation of probes, which monitor security assertions at runtime to verify suspected vulnerabilities. Our assertion-based security monitoring framework, based on contracts and probes and known as the Contract-Based Security Assertion Monitoring Framework (CB_SAMF), can be employed for verifying and reacting to suspected vulnerabilities in the application and kernel layers of the Linux operating system. Our methodology for integrating CB_SAMF into the SDLC during development and testing to verify suspected vulnerabilities reduces human effort by allowing developers to focus on fixing verified vulnerabilities. Metrics for weighting and prioritizing potential vulnerability categories, establishing confidence, and assessing detectability are also introduced. These metrics and weighting approaches identify deficiencies in security assurance programs/products and also help focus resources on a class of suspected vulnerabilities, or on a detection method, that may presently lie outside the requirements and priorities of the system.
Our empirical evaluation demonstrates the effectiveness of using contracts to verify the exploitability of suspected vulnerabilities across five input-validation-related vulnerability types, combining our contracts with existing static analysis detection mechanisms, and measurably improving security assurance processes/products used in an enhanced SDLC. As a result of this evaluation we introduced, through collaborations with the National Institute of Standards and Technology (NIST), two new security assurance test suites that replace existing test suites. The new and revised test cases provide numerous improvements in consistency, accuracy, and precision, along with enhanced test case metadata to aid researchers using the Software Assurance Reference Dataset (SARD).
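To give a flavour of assertion-based monitoring of a suspected input-validation weakness (a minimal, generic sketch; it is not the CB_SAMF contract notation, and the probe placement, names and conditions are assumptions), a runtime probe can assert the bound that the suspect code is believed to violate:

    # Generic sketch of an assertion "probe" guarding a suspected input-validation
    # weakness (assumed example; not the CB_SAMF notation).
    MAX_NAME_LEN = 64   # security condition taken from the (hypothetical) contract

    def probe_username(raw: str) -> None:
        # The assertion fires at runtime if the suspected weakness is reachable
        # with input that would make it exploitable.
        assert len(raw) <= MAX_NAME_LEN, "contract violated: over-long username reached the sink"
        assert raw.isprintable(), "contract violated: non-printable characters reached the sink"

    def store_username(raw: str) -> None:
        probe_username(raw)          # probe inserted during development/testing
        # ... original application logic would run here ...

    store_username("alice")          # passes silently
    # store_username("A" * 1000)     # would trip the probe, confirming reachability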
|
8 |
Främjande av inklusiva webbupplevelser: En jämförelsestudie av automatiserade tillgänglighetstestverktyg i en E2E-integrerad mjukvaruprocess / Promoting inclusive web experiences: a comparative study of automated accessibility testing tools in an E2E-integrated software process. Engström, Angelica January 2023
Creating inclusive web experiences means that users, regardless of their circumstances, should be able to perceive, understand, interact with, and contribute to services on the web. To ensure web accessibility, there are established standards to detect and minimize limitations. However, despite these standards, most authorities provide inaccessible services, due to incompetence and resource constraints. The goal of the study is thus to contribute knowledge about the effectiveness of automated accessibility testing tools. These tools are implemented within and for software processes and are integrated with an End-To-End testing tool. The study achieves its goals through quantitative data collection inspired by Brajnik’s definition of effectiveness, measuring the tools’ execution time, completeness, specificity, and correctness. Based on measurements taken from four of the World Wide Web Consortium’s demonstration websites, following WCAG 2.1 according to DIGG’s supervision manual, the study shows that the accessibility testing tools Pa11y, QualWeb, IBM Equal Access, and Google Lighthouse perform better and worse in different areas. The study highlights that QualWeb has the shortest average execution time, at approximately 3933 milliseconds. QualWeb also has the highest average completeness (80.94%) and specificity (58.26%). The tool with the highest correctness rate is Google Lighthouse (99.02%). None of the tools is considered perfect, as all of them make misjudgements. The study’s conclusion is therefore that QualWeb is the more effective accessibility testing tool, but that it still needs to be complemented by additional testing methods such as manual and user-centred testing.
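One plausible way to compute the effectiveness measures named above from a tool's findings against a known set of accessibility problems (a sketch; the formulas follow common definitions of completeness, correctness and specificity and are an assumption, not necessarily the exact operationalisation used in the study):

    # Sketch: effectiveness metrics for an accessibility checker, given the set of
    # real problems on a page and the set of problems the tool reported.
    def effectiveness(real_problems: set[str], reported: set[str], all_checks: set[str]):
        true_pos = reported & real_problems
        true_neg = (all_checks - real_problems) - reported

        completeness = len(true_pos) / len(real_problems)                  # share of real problems found
        correctness = len(true_pos) / len(reported) if reported else 1.0   # share of reports that are real
        specificity = len(true_neg) / len(all_checks - real_problems)      # share of non-problems not flagged
        return completeness, correctness, specificity

    real = {"img-alt", "label", "contrast", "lang"}
    reported = {"img-alt", "contrast", "aria-role"}        # one false positive, two misses
    print(effectiveness(real, reported, all_checks=real | {"aria-role", "title", "headings"}))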
|
9 |
A step forward in using QSARs for regulatory hazard and exposure assessment of chemicals / Ett steg framåt i användandet av QSARs för regulatorisk riskbedömning och bedömning av exponeringen till kemikalier. Rybacka, Aleksandra January 2016
According to the REACH regulation, chemicals produced in or imported to the European Union need to be assessed to manage the risk of potential hazard to human health and the environment. An increasing number of chemicals in commerce prompts the need for faster and cheaper alternative methods for this assessment, such as quantitative structure-activity or property relationships (QSARs or QSPRs). QSARs and QSPRs are models that seek correlations between data on chemicals' molecular structure and a specific activity or property, such as environmental fate characteristics and (eco)toxicological effects. The aim of this thesis was to evaluate and develop models for the hazard assessment of industrial chemicals and the exposure assessment of pharmaceuticals. In focus were the identification of chemicals potentially demonstrating carcinogenic (C), mutagenic (M), or reprotoxic (R) effects, and endocrine disruption, the importance of metabolism in hazard identification, and the understanding of adsorption of ionisable chemicals to sludge, with implications for the fate of pharmaceuticals in waste water treatment plants (WWTPs). Issues related to QSARs, including consensus modelling, applicability domain, and ionisation of input structures, were also addressed.
The main findings presented herein are as follows. QSARs were successful in identifying almost all carcinogens and most mutagens, but worse at predicting chemicals toxic to reproduction. Metabolic activation is a key event in the identification of potentially hazardous chemicals, particularly for chemicals demonstrating estrogen (E) and transthyretin (T) related alterations of the endocrine system, but also for mutagens. The accuracy of currently available metabolism simulators is rather low for industrial chemicals; however, when combined with QSARs, the tool was found useful in identifying chemicals that demonstrated E- and T-related effects in vivo. We recommend using a consensus approach in the final judgement about a compound's toxicity, that is, combining QSAR-derived data to reach a consensus prediction. This is particularly useful for models based on data of slightly different molecular events or species. QSAR models need to have well-defined applicability domains (AD) to ensure their reliability, which can be achieved by, for example, the conformal prediction (CP) method. By providing confidence metrics, CP allows better control over the predictive boundaries of QSAR models than other distance-based AD methods. Pharmaceuticals can interact with sewage sludge through different intermolecular forces, on which the ionisation state also has an impact. The developed models showed that sorption of neutral and positively charged pharmaceuticals was mainly hydrophobicity-driven but also affected by pi-pi and dipole-dipole forces. In contrast, negatively charged molecules predominantly interacted via covalent bonding and ion-ion, ion-dipole, and dipole-dipole forces. Using ionised structures in multivariate modelling of sorption to sludge did not improve model performance for positively and negatively charged species, but we noted an improvement for neutral chemicals, which may be due to a more correct description of zwitterions.
Overall, the results provided insights into the current weaknesses and strengths of QSAR approaches in hazard and exposure assessment of chemicals. QSARs have great potential to serve as commonly used tools in hazard identification to predict the various responses demanded in chemical safety assessment. In combination with other tools they can provide foundations for integrated testing strategies that gather and generate information about a compound's toxicity and provide insights into its potential hazard. The obtained results also show that QSARs can be utilized for pattern recognition that facilitates a better understanding of phenomena related to the fate of chemicals in WWTPs.
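As a rough illustration of the consensus idea recommended above (a generic sketch, not the models built in the thesis; the model outputs and the majority-vote rule are assumptions), predictions from several QSAR models for one compound can be combined into a single call:

    # Sketch: consensus prediction from several (hypothetical) QSAR models,
    # each returning 1 for "predicted toxic" and 0 for "predicted non-toxic".
    def consensus(predictions: list[int], threshold: float = 0.5) -> tuple[int, float]:
        """Majority vote; also return the fraction of models agreeing with the call."""
        positive_fraction = sum(predictions) / len(predictions)
        call = 1 if positive_fraction >= threshold else 0
        agreement = positive_fraction if call == 1 else 1 - positive_fraction
        return call, agreement

    # Outputs of, say, three mutagenicity models for one compound (invented values).
    print(consensus([1, 1, 0]))   # -> (1, 0.666...): predicted mutagenic, 2 of 3 models agree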
|
10 |
Orientação a objeto: definição, implementação e análise de recursos de teste e validação / Object-oriented: definition, implementation and analysis of validation and testing resources. Vincenzi, Auri Marcelo Rizzo 05 May 2004
The development of Object-Oriented (OO) and component-based software is a reality. This work investigates software testing and validation in this context. Several studies related to OO testing have been carried out. In spite of being a controversial point, some researchers state that procedural testing criteria can be easily extended to OO program testing, for instance to the testing of methods. There are still few initiatives aiming to apply data-flow and mutation-based criteria, traditionally used for procedural testing, to the testing of OO programs. The present work aims to contribute to identifying and defining resources for OO program testing and validation, with emphasis on data-flow and mutation-based testing criteria, covering the unit and integration testing phases. An integrated environment for testing and validation has been developed to support the application of these criteria. This environment provides the means for comparative studies amongst the criteria and for technology transfer to industry. In summary, this work makes theoretical contributions, with the definition of testing criteria; empirical contributions, with the conduction of empirical studies; and automation contributions, with the specification and implementation of an integrated environment for testing and validation of OO programs. Examples are provided to illustrate the ideas and tools presented in this work.
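To make the mutation-based criteria mentioned above concrete (a minimal sketch of the general idea, not of the environment developed in the dissertation; the class and the mutation operator are invented examples):

    # Sketch of mutation testing for one method of a small class: a mutant changes
    # a relational operator; an adequate test suite should "kill" it (fail on the mutant).
    class Account:
        def __init__(self, balance: float) -> None:
            self.balance = balance

        def can_withdraw(self, amount: float) -> bool:
            return amount <= self.balance        # original

        def can_withdraw_mutant(self, amount: float) -> bool:
            return amount < self.balance         # mutant: "<=" replaced by "<"

    def test_boundary(withdraw) -> bool:
        """Boundary test: withdrawing exactly the balance must be allowed."""
        return withdraw(Account(100.0), 100.0) is True

    print("original passes:", test_boundary(Account.can_withdraw))             # True
    print("mutant killed:  ", not test_boundary(Account.can_withdraw_mutant))  # True -> mutant detected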
|