21

Architectural Rules Conformance with ArCon and Open-Source Modeling Tools

Fridell, Emil January 2012 (has links)
In software development it is often crucial that the system implementation follows the architecture defined through design patterns and a constraint set. In Model-Driven Development most artefacts are created using models, but architectural design rules are one area where no standard for modeling the rules exists. ArCon, the Architecture Conformance Checker, is a tool that checks conformance to architectural design rules on a system model, defined in UML, that implements the system or application. The architectural design rules are themselves defined in a UML model, but with a specific meaning, different from standard UML, proposed by the authors of ArCon. Within this thesis ArCon was extended to check models created by the open-source modeling tool Papyrus, and was integrated as a plug-in on the Eclipse platform. The method used by ArCon to define architectural rules was also given a short evaluation during the project to get a hint of its potential and future use. The case study revealed some problems and potential improvements in the implementation of ArCon and its supported method.
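ArCon's own rule notation is not reproduced in the abstract, but the underlying idea — checking a system model against dependency constraints derived from an architecture model — can be sketched minimally as below. The flat dictionary model, the layer names, and the rule format are illustrative assumptions, not ArCon's UML-based metamodel.

```python
# Minimal sketch of layered-dependency conformance checking (hypothetical
# model representation; ArCon itself works on UML models with a richer rule set).

# System model: each component declares its layer and its dependencies.
components = {
    "gui":       {"layer": "presentation", "uses": ["orders"]},
    "orders":    {"layer": "business",     "uses": ["orders_db"]},
    "orders_db": {"layer": "data",         "uses": []},
    "reports":   {"layer": "presentation", "uses": ["orders_db"]},  # breaks the rules
}

# Architectural rule set: a layer may only depend on the layers listed here.
allowed = {
    "presentation": {"business"},
    "business":     {"data"},
    "data":         set(),
}

def check_conformance(components, allowed):
    """Return a list of (component, dependency) pairs that break the layer rules."""
    violations = []
    for name, comp in components.items():
        for dep in comp["uses"]:
            if components[dep]["layer"] not in allowed[comp["layer"]]:
                violations.append((name, dep))
    return violations

if __name__ == "__main__":
    for src, dst in check_conformance(components, allowed):
        print(f"Rule violation: {src} may not depend on {dst}")
```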
22

Architecture-Based Verification of Software-Intensive Systems

Johnsen, Andreas January 2010 (has links)
Development of software-intensive systems such as embedded systems for telecommunications, avionics and automotive applications occurs under severe quality, schedule and budget constraints. As the size and complexity of software-intensive systems increase dramatically, the problems originating from the design and specification of the system architecture become increasingly significant. Architecture-based development approaches promise to improve the efficiency of software-intensive system development processes by reducing costs and time while increasing quality. This paradox is partially explained by the fact that the system architecture abstracts away unnecessary details, so that developers can concentrate both on the system as a whole and on its individual pieces, whether they are the components, the components' interfaces, or the connections among components. The use of architecture description languages (ADLs) provides an important basis for verification, since an ADL describes how the system should behave, in a high-level view and in a form from which automated tests can be generated. Analysis and testing based on architecture specifications allow detection of problems and faults early in the development process, even before the implementation phase, thereby saving a significant amount of cost and time. Furthermore, tests derived from the architecture specification can later be applied to the implementation to check its conformance with respect to the specification. This thesis extends the knowledge base in the area of architecture-based verification. In this thesis report, an airplane control system is specified using the Architecture Analysis and Design Language (AADL). This specification serves as the starting point of a system development process in which the developed architecture-based verification algorithms are applied.
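The abstract does not spell out how tests are derived from an AADL specification; the sketch below only illustrates the general idea of turning the connections declared in an architecture description into test obligations. The component names and the dictionary representation are invented stand-ins, not AADL syntax or the thesis' algorithms.

```python
# Illustrative sketch: derive connection-coverage test obligations from a
# simplified architecture description (a hand-rolled stand-in, not AADL).

architecture = {
    "components": ["pilot_input", "sensor_suite", "flight_controller", "elevator_actuator"],
    "connections": [
        ("pilot_input", "flight_controller"),
        ("sensor_suite", "flight_controller"),
        ("flight_controller", "elevator_actuator"),
    ],
}

def connection_test_obligations(arch):
    """One obligation per declared connection: stimulate the source and
    verify that the destination component observes the data."""
    obligations = []
    for src, dst in arch["connections"]:
        obligations.append(
            f"Stimulate output of '{src}' and verify corresponding input observed at '{dst}'"
        )
    return obligations

for i, obligation in enumerate(connection_test_obligations(architecture), start=1):
    print(f"TC{i}: {obligation}")
```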
23

Detection of performance anomalies through Process Mining

Marra, Carmine January 2022 (has links)
Anomaly detection in computer systems operating within complex environments, such as cyber-physical systems (CPS), has become increasingly popular in recent years due to the useful insights this process can provide about a computer system's health condition against known nominal reference states. As performance anomalies lead to degraded service delivery and, eventually, system-wide failures, promptly detecting such anomalies may trigger timely recovery responses. In this thesis, Process Mining, a discipline aiming at connecting data science with process science, is broadly explored and employed for detecting performance anomalies in complex computer systems, proposing a methodology for connecting event data to high-level process models in order to validate functional and non-functional requirements, evaluate system performance, and detect anomalies. The proposed methodology is applied to the industry-relevant European Rail Traffic Management System/European Train Control System (ERTMS/ETCS) case study. Experimental results sampled from an ERTMS/ETCS system demonstrator implementing one of the scenarios the standard prescribes have shown that Process Mining allows characterizing nominal system performance and detecting deviations from such nominal conditions, opening the opportunity to apply recovery routines for steering system performance back to acceptable levels.
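As a rough illustration of the detection idea (not the thesis' actual Process Mining pipeline or the ERTMS/ETCS data), the sketch below compares observed activity durations against nominal reference statistics and flags strong deviations; the log format, activity names, and threshold are assumptions.

```python
# Sketch: flag observed activity durations that deviate from nominal behaviour.
from statistics import mean, stdev

# Nominal (reference) durations per activity, e.g. measured on a healthy system.
nominal_log = {
    "movement_authority": [1.1, 1.0, 1.2, 0.9, 1.1],
    "position_report":    [0.4, 0.5, 0.4, 0.5, 0.4],
}

# New observations to check, as (case id, activity, duration in seconds).
observations = [
    ("c101", "movement_authority", 1.0),
    ("c102", "movement_authority", 4.8),   # slow response -> should be flagged
    ("c103", "position_report", 0.5),
]

def performance_anomalies(nominal, observed, z_threshold=3.0):
    """Flag observations more than z_threshold standard deviations away from
    the nominal mean duration of their activity."""
    stats = {activity: (mean(ds), stdev(ds)) for activity, ds in nominal.items()}
    anomalies = []
    for case, activity, duration in observed:
        mu, sigma = stats[activity]
        if sigma > 0 and abs(duration - mu) / sigma > z_threshold:
            anomalies.append((case, activity, duration))
    return anomalies

print(performance_anomalies(nominal_log, observations))
# -> [('c102', 'movement_authority', 4.8)]
```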
24

Design-paradigm-oriented software development: specification-driven programming / Leveraging software architectures to guide and verify the development of sense–compute–control applications

Cassou, Damien 17 March 2011 (has links)
The main behavior of numerous applications is to wait for information coming from an external environment, to prepare a result, and to execute actions on this environment. Graphical user interfaces and avionic systems are two examples. The SCC paradigm, for Sense-Compute-Control, is dedicated to the description of such applications. Developing applications with this paradigm is made difficult by the lack of a conceptual framework and of tool support. This thesis proposes a conceptual framework dedicated to the SCC paradigm, materialized by an architecture description language named DiaSpec. This language provides a framework to support the development of an SCC application, assigning roles to the stakeholders and providing separation of concerns. This thesis also proposes dedicated programming support. Indeed, from DiaSpec descriptions a dedicated programming framework is generated in a target language. This programming framework guides the implementation of an SCC application and raises the level of abstraction of this implementation with both high-level and dedicated mechanisms. The programming framework is designed to ensure conformance of the implementation to its architecture described in DiaSpec by leveraging the target language's type system. The contributions of this thesis are evaluated through three criteria: expressiveness, usability and productivity.
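The generated DiaSpec framework is not shown in the abstract; the sketch below only suggests, under invented names, how a sense-compute-control frame can constrain an implementation by leaving the compute step abstract and letting the language's type and abstract-method machinery enforce conformance. This captures the spirit, not the letter, of the approach described above.

```python
# Rough sketch of an SCC-style programming frame. The abstract base class plays
# the role of the generated framework, and Python's abstract-method machinery
# stands in for the conformance checks DiaSpec delegates to the target
# language's type system. Names are illustrative, not DiaSpec's generated API.
from abc import ABC, abstractmethod

class TemperatureController(ABC):
    """Frame for one sense-compute-control loop: subclasses must implement the
    compute step; sensing and acting are wired by the frame."""

    def run_once(self, sensed_temperature: float) -> None:
        command = self.compute(sensed_temperature)   # compute
        self.act(command)                            # control

    @abstractmethod
    def compute(self, temperature: float) -> str:
        """Turn a sensed value into an actuation command."""

    def act(self, command: str) -> None:
        print(f"actuator <- {command}")              # stand-in for the real actuator

class SimpleThermostat(TemperatureController):
    def compute(self, temperature: float) -> str:
        return "heat_on" if temperature < 19.0 else "heat_off"

SimpleThermostat().run_once(17.5)   # prints: actuator <- heat_on
```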
25

Structural coverage analysis of test sets derived from formal specifications: a comparative study in the space applications context

Herculano, Paula Fernanda Ramos 24 April 2007 (has links)
Testing techniques can be divided, at a first level, into those based on the code (white box) and those based on the specification (black box, or functional). Neither is complete, since they aim to identify different types of defects, and their combined use can raise the confidence level of applications. Studies that contribute to a better understanding of the relationship between functional and structural techniques, how they complement each other and how they can be used together, are therefore important. This work was developed in the context of the PLAVIS project (PLAtform of software Validation & Integration on Space systems) and aims to carry out a comparative study between functional test-case generation techniques (based on formal specifications) and structural criteria based on control flow and data flow, applied to the implementations. In a specific context, this study should provide data on how these two techniques (functional and structural) relate to each other, supporting their combined use. In a broader context, that of the PLAVIS project, it aims to establish a testing strategy based on functional and structural criteria which, together with the tools that support them, can compose a testing environment available for use in space applications at INPE.
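As a toy illustration of measuring a functionally derived test set against a structural criterion (the study's actual tools, criteria, and programs are not reproduced here), the sketch below records which branches of a small function are exercised by specification-based tests; the hand-rolled instrumentation stands in for a real coverage tool.

```python
# Toy sketch: measure branch coverage achieved by specification-based tests.
# The function under test and the branch ids are illustrative.

executed_branches = set()

def classify_triangle(a, b, c):
    """Classify a triangle from its side lengths (the 'implementation')."""
    if a <= 0 or b <= 0 or c <= 0:
        executed_branches.add("invalid-sides"); return "invalid"
    if a + b <= c or a + c <= b or b + c <= a:
        executed_branches.add("inequality-violated"); return "invalid"
    if a == b == c:
        executed_branches.add("equilateral"); return "equilateral"
    if a == b or b == c or a == c:
        executed_branches.add("isosceles"); return "isosceles"
    executed_branches.add("scalene"); return "scalene"

ALL_BRANCHES = {"invalid-sides", "inequality-violated",
                "equilateral", "isosceles", "scalene"}

# Functional (black-box) test set derived from the specification.
functional_tests = [((3, 3, 3), "equilateral"), ((3, 3, 5), "isosceles"),
                    ((3, 4, 5), "scalene"), ((0, 1, 1), "invalid")]

for args, expected in functional_tests:
    assert classify_triangle(*args) == expected

covered = len(executed_branches) / len(ALL_BRANCHES)
print(f"branch coverage: {covered:.0%}; missed: {ALL_BRANCHES - executed_branches}")
# -> branch coverage: 80%; missed: {'inequality-violated'}
```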
26

Methodology for the certification of Canal Azul da Carne (Blue Path) system equipment - MHECAC.

Sergio, Leandro Ruzene 28 April 2016 (has links)
This research presents MHECAC (Metodologia para a Homologação dos Equipamentos do Sistema Canal Azul da Carne), a methodology for the certification of the equipment of the Canal Azul da Carne (Blue Path) system. The proposed methodology complements the development of the Canal Azul da Carne system and aims to support MAPA (the Brazilian Ministry of Agriculture, Livestock and Food Supply) in establishing an equipment conformance assessment process, seeking to guarantee the interoperability, performance and safety of its hardware components. The Canal Azul system is an initiative of MAPA together with GAESI at the Polytechnic School of the University of São Paulo (EPUSP) and the private sector, and aims to reduce the time spent in meat export processes in Brazil. MHECAC is based on the conformity assessment process structure established by the Brazilian Conformity Assessment System (SBAC), and its general requirements can be applied to product conformity assessment in a variety of sectors. In the development of MHECAC, the main technical references and standards corresponding to the equipment that composes the Canal Azul system architecture were applied. In addition, the certification, audit and inspection models, the sampling plans, the minimum requirements and the testing methodology were defined. MHECAC is divided into two main segments. The first presents the general requirements for establishing conformity assessment and product certification systems, whose application is not limited to the Canal Azul system; the second presents requirements specific to the system established by MAPA. The application of MHECAC favors the equal treatment of suppliers and is an important reference for the selection of equipment, since it allows solutions to be qualified and compared on a technical, quality-driven basis.
27

Traffic Management of the ABR Service Category in ATM Networks

Cerdà Alabern, Llorenç 13 January 2000 (has links)
Data traffic has emerged as a big challenge for the standardization of traffic management mechanisms in ATM networks. In April 1996 the ATM Forum published the first version of the Available Bit Rate (ABR) Service Category to support this kind of traffic. ABR was designed with ambitious objectives: high network efficiency, fairness, and inter-operability of different ABR switch mechanisms. The major part of this PhD Thesis has been devoted to ABR. Instead of focusing on one aspect of ABR, the main research topics involved in ABR have been covered, namely: (i) switching mechanisms, (ii) the conformance definition, (iii) charging, and (iv) ABR support for TCP traffic. In the following the main conclusions are summarized. Switch algorithms have perhaps been the most investigated topic of ABR, because the specification of ABR given by the ATM Forum allows a diversity of switch algorithms to be implemented, ranging from the simplest binary switches to the more complex ER switches. In the PhD Thesis three of these switch algorithms are analyzed by means of simulation, showing the different degrees of performance and complexity that can be achieved. The behavior of ER switches is also analyzed by means of real traces obtained with a commercial ER switch. The conformance definition is the formalism established to decide whether the source transmits according to the traffic contract. The conformance algorithm standardized for ABR is the Dynamic Generic Cell Rate Algorithm (DGCRA). The PhD Thesis gives a detailed description of the DGCRA. Furthermore, traces obtained by simulation show that the algorithm given by the ATM Forum suffers decreasing accuracy of rate conformance with increasing feedback delay. A "UPC based on the CCR" is proposed to overcome this drawback. The parameter dimensioning of the DGCRA is addressed in the PhD Thesis by means of two analytical approaches, and numerical results calculated with the analytical models are validated by simulation. The analytical approaches are based on a novel queuing model of the DGCRA. The first approach is based on a renewal assumption of the cell inter-arrival process at the UPC; it gives a simple but coarse approximation of the cell rejection probability at the UPC. The second analytical method consists of a Markov chain that accurately describes the stochastic variables involved in the queuing model of the DGCRA. The Markov chain is solved by applying the matrix-geometric technique. The complexity of this mathematical approach only allows a simple network topology to be investigated. However, the accuracy of the model makes it possible to take into account the influence of the delay bounds that are negotiated with the DGCRA. This study shows that a major degradation of the cell rejection probability may result if these delay bounds are not properly set. Another issue investigated in the PhD Thesis is the charging of ABR. Charging may have a decisive impact on the deployment, success and growth of a network, and the research community has paid great attention to this topic in recent years. Furthermore, pricing may be an essential consideration for users when submitting traffic. Some authors have used this fact to propose congestion control mechanisms based on dynamic pricing. In such schemes, prices vary according to the demand for network resources by the sources, and new prices are conveyed to the sources by means of a feedback mechanism.
This charging scheme seems to fit well with ABR, since the RM-cells can be used to communicate the prices dynamically. In the PhD Thesis a dynamic pricing scheme is proposed and an analytical model is used to study the evolution of the prices. Additionally, several charging schemes are compared by simulation; this comparison shows that dynamic pricing gives the best expected charging. Finally, ABR support for the traffic generated with the TCP protocol used in the Internet is investigated by simulation. Currently, data communications are dominated by Internet traffic transported over a variety of networks. The deployment of ATM technology has been confined to backbone networks, and end-to-end ATM systems appear remote. In fact, it is not clear whether the universal multi-service network will be built on the Internet rather than on the B-ISDN. Simulations performed in the PhD Thesis compare the transport of TCP traffic in different scenarios using ABR and the simpler UBR Service Category. The main conclusion is that ABR can solve the severe fairness problems that can arise using UBR.
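The DGCRA itself, which updates its rate parameters as RM-cell feedback takes effect, is not reproduced here; the sketch below shows only the static GCRA (virtual-scheduling form) on which it builds, with illustrative parameter values.

```python
# Sketch of the (static) Generic Cell Rate Algorithm, virtual-scheduling form.
# The DGCRA used for ABR additionally changes the increment I when rate updates
# signalled by RM-cells take effect; that dynamic part is omitted here.

class GCRA:
    def __init__(self, increment, limit):
        self.increment = increment      # I: expected inter-cell time (1 / allowed rate)
        self.limit = limit              # L: tolerance (cell delay variation tolerance)
        self.tat = 0.0                  # theoretical arrival time of the next cell

    def arrival(self, t):
        """Return True if a cell arriving at time t is conforming."""
        if t < self.tat - self.limit:   # cell arrived too early: non-conforming
            return False                # TAT is left unchanged for non-conforming cells
        self.tat = max(t, self.tat) + self.increment
        return True

policer = GCRA(increment=10.0, limit=2.0)   # illustrative time units
for t in [0, 10, 18, 25, 30]:
    print(t, "conforming" if policer.arrival(t) else "non-conforming")
# 25 is flagged non-conforming: it arrives earlier than the tolerance allows.
```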
28

A Conformance And Interoperability Test Suite For Turkey

Sinaci, Ali Anil 01 June 2009 (has links)
Conformance to standards and interoperability is a major challenge for today's applications in all domains. Several standards have been developed, and some are still under development, to address the various layers of the interoperability stack. Conformance and interoperability testing involves checking whether applications conform to the standards so that they can interoperate with other conformant systems. Only through testing can correct information exchange among applications be guaranteed. The National Health Information System (NHIS) of Turkey aims to provide a nation-wide infrastructure for sharing Electronic Health Records (EHRs). In order to guarantee interoperability, the Ministry of Health (MoH), Turkey, developed an Implementation/Integration/Interoperability Profile based on HL7 standards. TestBATN - Testing Business Process, Application, Transport and Network Layers - is a domain- and standards-independent set of tools which can be used to test all layers of the interoperability stack, namely the Communication Layer, the Document Content Layer and the Business Process Layer. In this thesis work, the requirements for conformance and interoperability testing of the NHIS are analyzed, a testing approach is designed, test cases for several NHIS services are developed and deployed, and a test execution control and monitoring environment within TestBATN is designed and implemented based on the identified testing requirements. The work presented in this thesis is part of the TestBATN system supported by the TÜBİTAK TEYDEB Project No. 7070191 and by the Ministry of Health, Turkey.
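As a simplified illustration of document-content-layer conformance checking (not TestBATN's rule language or the actual NHIS profiles), the sketch below verifies that a message carries the fields a profile marks as required; the field names are invented.

```python
# Toy sketch of document-content conformance checking: verify that a message
# contains every field required by a profile. Field names are hypothetical;
# real NHIS messages are HL7 documents validated against far richer rules.

profile_required_fields = {
    "patient_id", "encounter_date", "diagnosis_code", "physician_id",
}

message = {
    "patient_id": "12345678901",
    "encounter_date": "2009-06-01",
    "diagnosis_code": "J45",
    # "physician_id" is missing -> the message does not conform
}

def check_required_fields(msg, required):
    """Return the set of required fields missing or empty in the message."""
    return {field for field in required if msg.get(field) in ("", None)}

missing = check_required_fields(message, profile_required_fields)
print("conformant" if not missing else f"non-conformant, missing: {sorted(missing)}")
```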
29

Modeling conformance control and chemical EOR processes using different reservoir simulators

Goudarzi, Ali 16 September 2015 (has links)
A successful field waterflood is a crucial prerequisite for improving performance before EOR methods, such as ASP, SP, and P flooding, are applied in the field. Excess water production is a major problem in mature waterflooded oil fields that leads to early well abandonment and unrecovered hydrocarbon. Gel treatments at the injection and production wells to preferentially plug the thief zones are cost-effective methods to improve sweep efficiency in reservoirs and reduce excess water production during hydrocarbon recovery. Extensive experimental studies have been performed in the past to investigate the performance of gels in conformance control and in decreasing water production in mature waterflooded reservoirs, but no substantial modeling work has been done to simulate these experiments and predict the results for large field cases. We developed a novel, robust, three-dimensional chemical compositional general reservoir simulator (UTGEL) to model gel treatment processes. The simulator has the capability to model different types of microgels, such as preformed particle gels (PPG), thermally active polymers (TAP), pH-sensitive microgels, and colloidal dispersion gels (CDG). The simulator has been validated for gel flooding using laboratory and field-scale data, and it helps to design and optimize flowing-gel injection for conformance control processes in larger field cases. Gel rheology, adsorption, resistance factor and residual resistance factor with salinity effects, gel viscosity, gel kinetics, and swelling ratio were implemented in UTGEL. Several simulation case studies in fractured and heterogeneous reservoirs were performed to illustrate the effect of gel on production behavior and water control. Laboratory results of homogeneous and heterogeneous sandpacks and Berea sandstone corefloods were used to validate the PPG transport models. Simulations of different heterogeneous field cases were performed, and the results showed that PPG can improve oil recovery by 5-10% OOIP compared to waterflooding. In recovery from fractured reservoirs by waterflooding, injected water flows easily through the fractures and most of the reservoir oil remains unrecovered in the matrix blocks. Recovery from these reservoirs depends on matrix permeability, wettability, fracture intensity, temperature, pressure, and fluid properties. Chemical processes such as polymer (P) flooding, surfactant/polymer (SP) flooding and alkali/surfactant/polymer (ASP) flooding are being used to enhance reservoir energy and increase recovery. Chemical flooding has a much broader range of applicability than in the past, including high-temperature reservoirs, formations with extreme salinity and hardness, naturally fractured carbonates, and sandstone reservoirs with heavy and viscous crude oils. Recovery from fractured carbonate reservoirs is frequently considered to be dominated by spontaneous imbibition; therefore, any chemical process which can enhance the rate of imbibition has to be studied carefully. Wettability alteration using chemicals such as surfactants and alkalis has been studied by many researchers in past years and is recognized as one of the most effective recovery methods in fractured carbonate reservoirs.
Injected surfactant alters the wettability of the matrix blocks from oil-wet to water-wet and also reduces the interfacial tension to ultra-low values; consequently, more oil is recovered by spontaneous co-current or counter-current imbibition, depending on the dominant recovery mechanism. Accurate and reliable up-scaling of chemical enhanced oil recovery (CEOR) processes is among the most important issues in reservoir simulation. The important challenges in up-scaling CEOR processes are the predictability of the developed dimensionless numbers and the inclusion of all required mechanisms, including wettability alteration and interfacial tension reduction. Thus, developing new dimensionless numbers with improved predictability at larger scales is of utmost importance for CEOR processes. Some scaling groups were developed in the past for either imbibition or coreflood experiments, but none of them were predictive because not all the physics related to chemical EOR processes (interfacial tension reduction and wettability alteration) were included. Furthermore, most commercial reservoir simulators do not have the capability to model imbibition tests due to the lack of some physics, such as surfactant molecular diffusion. The modeling of imbibition cell tests can aid in understanding the mechanisms behind wettability alteration and consequently aid in up-scaling the process. Modeling coreflood experiments for fractured vuggy carbonates is also challenging. Different approaches of random permeability distribution and explicit fractures were used to model the experiments, which demonstrate the validity and ranges of applicability of the upscaling procedures and also indicate the importance of viscous and capillary forces at larger scales. The simulation models were then used to predict the recovery response times for larger cores.
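For readers unfamiliar with two of the gel-treatment quantities mentioned above, the resistance factor and the residual resistance factor are commonly expressed as pressure-drop ratios measured at the same injection rate; the sketch below states those ratios with made-up numbers and is not taken from UTGEL.

```python
# Back-of-the-envelope sketch of two quantities used when modelling gel
# treatments, expressed as pressure-drop ratios at the same injection rate.
# Values are illustrative only.

def resistance_factor(dp_gelant, dp_water_before):
    """Fr: pressure drop while injecting gelant / pressure drop of the initial
    waterflood at the same rate (mobility reduction while the gel flows)."""
    return dp_gelant / dp_water_before

def residual_resistance_factor(dp_water_after, dp_water_before):
    """Frr: pressure drop of water injection after gel placement / before it
    (permanent permeability reduction left by the gel)."""
    return dp_water_after / dp_water_before

dp_water_before, dp_gelant, dp_water_after = 2.0, 36.0, 14.0   # e.g. psi, made up
print("Fr  =", resistance_factor(dp_gelant, dp_water_before))                 # 18.0
print("Frr =", residual_resistance_factor(dp_water_after, dp_water_before))   # 7.0
```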
30

Model-Based Test Case Generation for Real-Time Systems

Hessel, Anders January 2007 (has links)
Testing is the dominant verification technique used in the software industry today. The use of automatic test case execution increases, but the creation of test cases remains manual and thus error prone and expensive. To automate generation and selection of test cases, model-based testing techniques have been suggested. In this thesis two central problems in model-based testing are addressed: the problem of how to formally specify coverage criteria, and the problem of how to generate a test suite from a formal timed system model, such that the test suite satisfies a given coverage criterion. We use model checking techniques to explore the state-space of a model until a set of traces is found that together satisfy the coverage criterion. A key observation is that a coverage criterion can be viewed as consisting of a set of items, which we call coverage items. Each coverage item can be treated as a separate reachability problem. Based on our view of coverage items we define a language, in the form of parameterized observer automata, to formally describe coverage criteria. We show that the language is expressive enough to describe a variety of common coverage criteria described in the literature. Two algorithms for test case generation with observer automata are presented. The first algorithm returns a trace that satisfies all coverage items with a minimum cost. We use this algorithm to generate a test suite with minimal execution time. The second algorithm explores only states that may increase the already found set of coverage items. This algorithm works well together with observer automata. The developed techniques have been implemented in the tool CoVer. The tool has been used in a case study together with Ericsson where a WAP gateway has been tested. The case study shows that the techniques have industrial strength.
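The observer-automata formalism and the CoVer tool are not reproduced here, but the key observation — each coverage item is a separate reachability problem over the model's state space — can be illustrated with a breadth-first exploration of a toy transition system; the model, labels, and edge-coverage items are invented, and no attempt is made to minimize trace cost as the first algorithm does.

```python
# Sketch: treat each coverage item (here: "edge label is taken") as a
# reachability target and explore the model's state space breadth-first,
# recording one trace per newly covered item. The toy model below is a plain
# transition system standing in for a timed automaton.
from collections import deque

# Toy model: state -> list of (edge label, successor state)
model = {
    "idle":    [("request", "waiting")],
    "waiting": [("grant", "active"), ("timeout", "idle")],
    "active":  [("release", "idle")],
}
coverage_items = {"request", "grant", "timeout", "release"}   # edge-coverage criterion

def traces_for_coverage(model, initial, items):
    """Return {coverage item: trace of edge labels reaching it}, found by BFS."""
    found = {}
    queue = deque([(initial, [])])
    visited = {initial}
    while queue and len(found) < len(items):
        state, trace = queue.popleft()
        for label, nxt in model.get(state, []):
            if label in items and label not in found:
                found[label] = trace + [label]
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, trace + [label]))
    return found

for item, trace in traces_for_coverage(model, "idle", coverage_items).items():
    print(f"{item}: covered by trace {trace}")
```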
