31

Facilitating dynamic flexibility and exception handling for workflows

Adams, Michael James. January 2007
Workflow Management Systems (WfMSs) are used to support the modelling, analysis, and enactment of business processes. The key benefits WfMSs seek to bring to an organisation include improved efficiency, better process control and improved customer service, which are realised by modelling rigidly structured business processes that in turn derive well-defined workflow process instances. However, the proprietary process definition frameworks imposed by WfMSs make it difficult to support (i) dynamic evolution and adaptation (i.e. modifying process definitions during execution) following unexpected or developmental change in the business processes being modelled; and (ii) exceptions, or deviations from the prescribed process model at runtime, even though it has been shown that such deviations are a common occurrence for almost all processes. These limitations imply that a large subset of business processes does not easily translate to the 'system-centric' modelling frameworks imposed. This research re-examines the fundamental theoretical principles that underpin workflow technologies to derive an approach that moves forward from the production-line paradigm and thereby offers workflow management support for a wider range of work environments. It develops a sound theoretical foundation based on Activity Theory to deliver an implementation of an approach for dynamic and extensible flexibility, evolution and exception handling in workflows, based not on proprietary frameworks, but on accepted ideas of how people actually perform their work activities. The approach produces a framework called worklets to provide an extensible repertoire of self-contained selection and exception-handling processes, coupled with an extensible ripple-down rule set. Using a Service-Oriented Architecture (SOA), a selection service provides workflow flexibility and adaptation by allowing the substitution of a task at runtime with a sub-process, dynamically selected from its repertoire depending on the context of the particular work instance. Additionally, an exception-handling service uses the same repertoire and rule set framework to provide targeted and multi-functional exception-handling processes, which may be dynamically invoked at the task, case or specification level, depending on the context of the work instance and the type of exception that has occurred. Seven different types of exception can be handled by the service. Both expected and unexpected exceptions are catered for in real time. The work is formalised through a series of Coloured Petri Nets and validated using two exemplary studies: one involving a structured business environment and the other a more creative setting. It has been deployed as a discrete service for the well-known, open-source workflow environment YAWL, and, having a service orientation, its applicability is in no way limited to that environment, but may be regarded as a case study in service-oriented computing whereby dynamic flexibility and exception handling for workflows, orthogonal to the underlying workflow language, is provided. Also, being open-source, it is freely available for use and extension.
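The ripple-down rule (RDR) sets that drive worklet selection can be pictured with a short sketch. An RDR set is a binary tree of condition nodes: evaluation starts at the root, follows the true branch when a node's condition holds on the current case and the false branch otherwise, and the conclusion of the last satisfied node wins. The sketch below is a minimal illustration under that reading, not the YAWL worklet service's API; the conditions, case attributes, and worklet names are hypothetical.

```python
# Minimal ripple-down rule (RDR) tree for worklet selection.
# Illustrative only: conditions, case data, and worklet names are invented.

class RDRNode:
    def __init__(self, condition, worklet, true_branch=None, false_branch=None):
        self.condition = condition        # predicate over the case data
        self.worklet = worklet            # conclusion if this node is satisfied
        self.true_branch = true_branch    # tried when the condition holds
        self.false_branch = false_branch  # tried when it does not

def select_worklet(node, case):
    """Return the conclusion of the last satisfied node on the path."""
    selected = None
    while node is not None:
        if node.condition(case):
            selected = node.worklet       # remember the last satisfied conclusion
            node = node.true_branch       # refine with exception rules, if any
        else:
            node = node.false_branch      # try an alternative rule
    return selected

# Hypothetical rule set for a "TreatPatient" task.
rules = RDRNode(
    lambda c: True, "StandardTreatment",
    true_branch=RDRNode(
        lambda c: c.get("fever", 0) > 39.0, "FeverTreatment",
        true_branch=RDRNode(lambda c: c.get("allergic"), "AllergySafeFeverTreatment"),
    ),
)

print(select_worklet(rules, {"fever": 39.5, "allergic": True}))
# -> AllergySafeFeverTreatment
```

Adding a new rule as a child of the node that gave the wrong answer is what makes the repertoire extensible without disturbing existing selections.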
32

Um framework para coordenação do tratamento de exceções em sistemas tolerantes a falhas / A framework for exception handling coordination in fault-tolerant systems

David Paulo Pereira. 09 March 2007
The widespread adoption of computer networks and database management systems has contributed to the emergence of complex information systems. Nowadays, these systems have become essential elements of everyday life, supporting business processes and enterprise services indispensable to society, such as banking automation and telephony. The use of components in structuring these systems promotes higher quality and flexibility in the product and accelerates the software development process. However, in order for these benefits to be fully realised, it is essential that the suppliers of COTS (commercial off-the-shelf) components design precise, complete and consistent specifications. Generally, specifications omit or neglect the behavior of components in exceptional situations. Therefore, the use of untrustworthy components, whose behavior cannot be entirely foreseen, seriously compromises the design of fault-tolerant systems. One strategy for the specification of fault-tolerant components is to report the occurrence of errors through exceptions and to recover from them via corresponding exception handling routines. The specification should clearly separate the normal behavior from the exceptional one, the latter designed for error recovery. However, in concurrent and distributed systems, specifying only this local handling is not enough. An exception may be raised as a result of systemic errors (e.g., network problems) that affect the entire system, so certain exceptions must be handled at the architectural level, involving the other components in the handling. The conceptual model of Coordinated Atomic actions (CA actions), often applied in the structuring of fault-tolerant systems, defines a general mechanism for coordinating the exception handling of components that cooperate in executing activities and compete for shared resources. The CA action model therefore offers a potentially viable solution for the specification of exception handling at the architectural level. This work proposes a framework for the specification of exception handling at the architectural level, based on the nesting model of CA actions and using the event-oriented language CSP (Communicating Sequential Processes). Its main characteristic is to provide a standardized protocol for the coordination of exception handling involving the cooperation of system components. In addition, a strategy is presented for the formal verification of such systems in the FDR (Failure Divergence Refinement) tool, based on the traces refinement model.
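The architectural-level coordination the framework targets can be sketched independently of CSP. In a CA action, exceptions raised concurrently by several participants are first resolved to a single covering exception (commonly the nearest common ancestor in a resolution hierarchy), which all participants then handle cooperatively. Below is a minimal sketch of that resolution step, assuming an invented exception hierarchy; it is not the thesis's CSP protocol.

```python
# Sketch of exception resolution in a coordinated (CA-action-like) setting.
# The exception hierarchy and participant names are hypothetical.

# Resolution tree: child -> parent. The root covers every exception.
PARENT = {
    "DiskFull": "StorageError",
    "ChecksumMismatch": "StorageError",
    "Timeout": "NetworkError",
    "StorageError": "SystemError",
    "NetworkError": "SystemError",
    "SystemError": None,
}

def ancestors(exc):
    chain = []
    while exc is not None:
        chain.append(exc)
        exc = PARENT[exc]
    return chain

def resolve(raised):
    """Nearest common ancestor of all concurrently raised exceptions."""
    common = set(ancestors(raised[0]))
    for exc in raised[1:]:
        common &= set(ancestors(exc))
    # The nearest common ancestor is the first shared entry on any chain.
    return next(a for a in ancestors(raised[0]) if a in common)

# Two participants raise different exceptions at the same time...
resolved = resolve(["DiskFull", "Timeout"])
print(resolved)  # -> SystemError

# ...and every participant of the action handles the *resolved* exception.
for participant in ("OrderService", "PaymentService"):
    print(f"{participant} invokes its handler for {resolved}")
```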
33

CatchML: a modeling language for context aware exception handling verification and specification in ubiquitous systems / CatchML: uma linguagem de domínio específico para modelagem do tratamento de exceção sensível ao contexto

Rafael de Lima. 28 August 2013
Conselho Nacional de Desenvolvimento Científico e Tecnológico / In ubiquitous systems, owing to the complexity added by the use of contextual information, applying context-aware exception handling (CAEH) techniques poses many challenges, and several approaches in the literature define concepts and abstractions useful for modeling CAEH. However, only one of these approaches proposes a method for the specification and verification of models in the domain of ubiquitous systems; it provides a tool for specifying the CAEH model through a Java API and generates an error report in a text file. The disadvantage of this approach is that the designer must grapple with programming details that are irrelevant to the analysis of the system's exceptional behavior. This work therefore proposes a domain-specific language for modeling CAEH, offering abstractions and constructs that express the relevant concepts and make the task of designing CAEH models simpler and more intuitive. In addition, the language is integrated with the aforementioned tool, enabling automatic model verification. The errors generated by the verifier are now shown directly in the source code, making them easier for the designer to identify and correct. In order to evaluate the language, a case study is conducted to provide evidence of its viability as an alternative for modeling CAEH.
34

Projeto e implementação de um mecanismo de tratamento de exceções coordenadas para arquiteturas de componentes de serviços / Design and implementation of a coordinated exception handling mechanism for service component architecture

Leite, Douglas Siqueira. 17 August 2018
Advisor: Cecília Mary Fischer Rubira / Master's thesis - Universidade Estadual de Campinas, Instituto de Computação / Service-Oriented Architecture (SOA) is an architectural model that aims to enhance the efficiency, agility, and productivity of an enterprise by structuring services in terms of service compositions, which can be executed either synchronously or asynchronously. Different software technologies can be used to implement SOA, such as Web services and Service Component Architecture (SCA). The former is based on XML standards, while the latter provides a component model for implementing services and service compositions. In particular, when asynchronous service compositions are executed, one or more errors can occur concurrently, possibly at the same time, affecting the composition's dependability. Fault tolerance mechanisms are therefore necessary in order to prevent service compositions from reaching a failure state. In this work, we present the design and implementation of a coordinated exception handling mechanism, applicable to service-oriented architectures, which allows the creation of fault-tolerant asynchronous service compositions in a flexible way. More specifically, our solution is based on a global exception handling mechanism defined by the Guardian model, since it is more general and flexible than other approaches, such as solutions based on coordinated atomic actions. Our framework, named Guardian-SCA, was implemented as part of the Apache Tuscany SCA project, using the Tuscany extension model and aspect-oriented programming with the aim of increasing the framework's flexibility. / Master of Computer Science (Information Systems)
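The guardian's distinguishing feature relative to CA actions is that it may raise a different exception in each participant, chosen by recovery rules that consult the participant's current context. Below is a rough sketch of that dispatch with invented participants, contexts, and rules; it is not the Guardian-SCA or Apache Tuscany API.

```python
# Sketch of a guardian-style global exception handler.
# Participants, contexts, and recovery rules are invented for illustration.

class Guardian:
    def __init__(self):
        self.contexts = {}   # participant -> current context label
        self.rules = []      # (signaled exception, context, exception to raise)

    def update_context(self, participant, context):
        self.contexts[participant] = context

    def add_rule(self, signaled, context, to_raise):
        self.rules.append((signaled, context, to_raise))

    def signal(self, signaled):
        """On a global exception, tell each participant what to raise locally."""
        for participant, context in self.contexts.items():
            for exc, ctx, to_raise in self.rules:
                if exc == signaled and ctx == context:
                    print(f"raise {to_raise} in {participant} (context {context})")
                    break
            else:
                print(f"{participant}: no rule matches; raise {signaled} as-is")

g = Guardian()
g.update_context("Booking", "reserving")
g.update_context("Payment", "charging")
g.add_rule("PaymentFailed", "reserving", "CancelReservation")
g.add_rule("PaymentFailed", "charging", "RefundCharge")
g.signal("PaymentFailed")
```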
35

Um método para modelagem de exceções em desenvolvimento baseado em componentes / A method for modelling exceptions in component-based software development

Brito, Patrick Henrique da Silva. 14 October 2005
Advisor: Cecília Mary Fischer Rubira / Master's thesis - Universidade Estadual de Campinas, Instituto de Computação / Due to its wide adoption, Component-Based Development (CBD) has also been employed in the development of critical software systems. Using CBD to build dependable systems highlights the need to develop software components that are robust, with stronger guarantees of correct behavior. Exception handling is a well-known technique for detecting and handling errors in software systems. However, despite its popularity, its design and implementation are very complex tasks that do not receive adequate attention from existing development processes. The situation is even more critical in the context of CBD methods. This work presents MDCE+, a method that assists the modeling of exceptional behavior in component-based software development. Based on a refinement of the MDCE methodology, MDCE+ has two important distinguishing features that strengthen its robustness: (i) it combines top-down and bottom-up strategies for the development of dependable systems; and (ii) it is centered on the software architecture. As a consequence of this focus on the software architecture, the exceptions that flow between system components are better defined and analyzed. This structured way of detecting and handling exceptions in the presence of faults is particularly important for systems with stringent dependability requirements. MDCE+ is a generic method that can be applied together with modern development processes. In particular, in this master's thesis, MDCE+ was adapted to the UML Components process and to a software testing methodology. In order to evaluate the method, a case study of a real financial system with fault-tolerance requirements was developed. Given its importance, the evaluation of MDCE+ was divided into three stages: (i) preparation; (ii) execution; and (iii) analysis of the results. In this study it was necessary to handle exceptions in the software architecture in order to increase service availability. / Master of Computer Science (Software Engineering)
36

Validação do fluxo excepcional a partir do diagrama de atividades da UML 2.0 / Validation of exceptional flow in the UML 2.0 activity diagram

Ferreira, Jeferson, 1973-. 18 August 2018
Advisor: Eliane Martins / Master's thesis - Universidade Estadual de Campinas, Instituto de Computação / In order to develop robust software, fault-tolerance techniques, typically realised through exception handling mechanisms, must be employed. These mechanisms allow possible exceptions to be handled, or even allow the system's functionality to continue executing in the presence of an exception. The use of exception handling mechanisms in large-scale software systems, together with the fact that several modern programming languages provide them, confirms their importance in practice. On the other hand, these mechanisms have their drawbacks, mainly by adding to the complexity of the systems. One problem that occurs very often is validating the exceptional flow only during the implementation phase: detecting a specification problem at this stage of the process can increase costs and delay software delivery. This work presents an approach that uses static analysis techniques, usually employed to detect faults in source code, to bring the validation of a software component's exceptional flow forward in the development cycle. The proposed solution uses control flow and data flow information obtained from a behavioral model. The model used in this approach is the UML activity diagram, which undergoes a series of transformations to generate an interprocedural control flow graph. During this process, data flow analyses are performed to infer precisely which types of exceptions can be thrown at a given point of the model. This work also presents a tool to support the validation of the exceptional flow. This tool, called ADEX (Activity Diagram EXceptional flow analyzer), implements the algorithms used to convert the activity diagram into the interprocedural control flow graph. The tool also provides features for visualizing the normal and exceptional control flow of the model. / Master of Computer Science
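Inferring which exception types can be pending at each node of the derived interprocedural control flow graph is, at heart, a forward data flow problem solvable with a worklist until a fixed point is reached. The following toy sketch assumes an invented graph, raise sets, and handler sets; the actual ADEX tool operates on UML activity diagrams rather than this structure.

```python
# Worklist data flow sketch: which exception types may be pending at each node
# of a control flow graph. Graph, raise sets, and handler sets are invented.

RAISES = {"parse": {"SyntaxError"}, "fetch": {"IOError", "Timeout"}}
HANDLES = {"on_io": {"IOError"}}  # handler nodes clear what they catch
EDGES = {
    "start": ["parse"],
    "parse": ["fetch"],
    "fetch": ["on_io"],
    "on_io": ["end"],
    "end": [],
}

def exception_flow(edges):
    pending = {n: set() for n in edges}   # exceptions possibly live at node entry
    worklist = list(edges)
    while worklist:
        node = worklist.pop()
        out = (pending[node] | RAISES.get(node, set())) - HANDLES.get(node, set())
        for succ in edges[node]:
            if not out <= pending[succ]:  # propagate until a fixed point
                pending[succ] |= out
                worklist.append(succ)
    return pending

for node, excs in exception_flow(EDGES).items():
    print(node, sorted(excs))
# 'end' ends up with {'SyntaxError', 'Timeout'}: IOError was handled, the rest escape.
```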
37

Sistemas de informação cientes de processos, robustos e confiáveis / Robust and reliable process-aware information systems

André Luis Schwerz. 08 December 2016
Nowadays, many corporations and organizations are increasingly making efforts to quickly and effectively transform their potential ideas into products and services. These efforts have also stimulated the evolution of information systems, which are now supported by higher-level abstract models that describe the process logic. In this context, several sophisticated Process-Aware Information Systems (PAIS) have been successfully proposed for managing business processes and automating large-scale scientific (e-Science) processes. Much of this success is due to their ability to provide generic functionality for modeling, executing, and monitoring processes. These functionalities work well when process models have a well-behaved path towards achieving their objectives. However, anomalous situations that fall outside the well-behaved execution path still pose a significant challenge to PAIS. Because of the many types of failures that may deviate execution away from expected behavior, providing robust and reliable execution is a complex task for current PAIS, since not all failure situations can be efficiently modeled within the traditional flow structure. As a consequence, the treatment of such situations usually involves manual interventions in the systems by human operators, which results in significant additional costs for businesses. In this work, we introduce a cost/benefit-aware recovery composition method that is able to find and follow alternative paths that reduce the financial side effects of exception handling. From a practical point of view, this method provides automated and optimized exception handling by calculating the costs and benefits of each recovery path and choosing the path with the best cost/benefit ratio available. More specifically, our recovery method extends the WED-flow (Workflow, Event processing and Data-flow) approach to enable the cost/benefit-aware composition of forward and/or backward transactional recovery steps. Finally, the experiments show that this recovery method can be suitably incorporated into exception handling within a wide variety of processes.
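The selection step of a cost/benefit-aware recovery can be illustrated with a toy: enumerate candidate recovery paths (sequences of backward compensation and/or forward steps), score each by net benefit, and pick the best. The steps, costs, and benefits below are invented, and the sketch omits the WED-flow machinery that would generate the candidate paths.

```python
# Toy sketch of cost/benefit-aware recovery composition: score candidate
# recovery paths and pick the one with the best net benefit. All values invented.

from dataclasses import dataclass

@dataclass
class RecoveryStep:
    name: str
    kind: str      # "backward" (compensate) or "forward" (alternative action)
    cost: float    # e.g., refunds, fees, rework
    benefit: float # e.g., revenue preserved, penalty avoided

CANDIDATE_PATHS = [
    [RecoveryStep("cancel_order", "backward", cost=10, benefit=0),
     RecoveryStep("refund_customer", "backward", cost=50, benefit=20)],
    [RecoveryStep("reship_from_other_warehouse", "forward", cost=35, benefit=120)],
    [RecoveryStep("offer_voucher", "forward", cost=15, benefit=60),
     RecoveryStep("reschedule_delivery", "forward", cost=5, benefit=30)],
]

def net_benefit(path):
    return sum(step.benefit - step.cost for step in path)

def choose_recovery(paths):
    """Pick the recovery path with the highest net benefit."""
    return max(paths, key=net_benefit)

best = choose_recovery(CANDIDATE_PATHS)
print([s.name for s in best], "net:", net_benefit(best))
# -> ['reship_from_other_warehouse'] net: 85
```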
38

AdaptFlow: Protocol-based Medical Treatment Using Adaptive Workflows

Greiner, U., Müller, R., Rahm, E., Ramsch, J., Heller, B., Löffler, M. 25 January 2019
Objectives: In many medical domains, investigator-initiated clinical trials are used to introduce new treatments and hence act as implementations of guideline-based therapies. Trial protocols contain detailed instructions for conducting the therapy and additionally specify reactions to exceptional situations (for instance an infection or a toxicity). To increase quality in health care and raise the number of patients treated according to trial protocols, a consultation system is needed that efficiently supports the handling of these complex trial therapy processes. Our objective was to design and evaluate a consultation system that should 1) observe the status of the therapies currently being applied, 2) offer automatic recognition of exceptional situations and appropriate decision support, and 3) provide automatic adaptation of affected therapy processes to handle exceptional situations. Methods: We applied a hybrid approach that combines the process support for timely and efficient execution of therapy processes offered by workflow management systems with a knowledge and rule base, together with a mechanism for dynamic workflow adaptation that changes running therapy processes when the patient's condition changes. Results and Conclusions: This approach has been implemented in the AdaptFlow prototype. We performed several evaluation studies on the practicability of the approach and the usefulness of the system. These studies show that the AdaptFlow prototype offers adequate support for the execution of real-world investigator-initiated trial protocols and is able to handle a large number of exceptions.
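The combination of a rule base with dynamic workflow adaptation can be pictured as event-condition-action rules that rewrite the not-yet-executed part of a running therapy process. The events, conditions, and adaptation operations below are invented for illustration and do not reflect AdaptFlow's actual rule language.

```python
# Sketch of rule-driven dynamic workflow adaptation (event-condition-action).
# Events, conditions, thresholds, and step names are invented.

workflow = ["chemo_cycle_1", "chemo_cycle_2", "chemo_cycle_3"]

RULES = [
    # (event, condition over patient data, adaptation of the remaining workflow)
    ("lab_result", lambda p: p["wbc"] < 1.0,
     lambda wf: ["treat_infection_risk"] + wf),           # insert a step in front
    ("lab_result", lambda p: p["toxicity_grade"] >= 3,
     lambda wf: [t for t in wf if t != "chemo_cycle_3"]), # drop the last cycle
]

def on_event(event, patient, remaining):
    """Apply every matching rule to the not-yet-executed part of the workflow."""
    for ev, cond, adapt in RULES:
        if ev == event and cond(patient):
            remaining = adapt(remaining)
    return remaining

remaining = workflow[1:]  # cycle 1 has already executed
remaining = on_event("lab_result", {"wbc": 0.7, "toxicity_grade": 3}, remaining)
print(remaining)  # -> ['treat_infection_risk', 'chemo_cycle_2']
```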
39

Low-Level Static Analysis for Memory Usage and Control Flow Recovery

Bockenek, Joshua Alexander. 07 March 2023
Formal characterization of the memory used by a program is an important basis for security analyses, compositional verification, and identification of noninterference. However, soundly proving memory usage requires operating on the assembly level due to the semantic gap between high-level languages and the code that processors actually execute. Automated methods, such as model checking, would not be able to handle many interesting functions due to the undecidability of memory usage. Fully-interactive methods do not scale well either. Sound control flow recovery (CFR) is also important for binary decompilation, verification, patching, and security analysis. It lifts raw unstructured data into a form that allows reasoning over behavior and semantics. However, doing so requires interpreting the behavior of the program when indirect or dynamic control flow exists, creating a recursive dependency. This dissertation tackles the first property with two contributions that perform proof generation combined with interactive theorem proving in a semi-automated manner: an untrusted tool extracts as much information as it can from the functions under test and then generates all the necessary proofs to be completed in a theorem prover. The first, Floyd-style approach still requires significant manual effort but provides good flexibility and ensures no paths are analyzed more than once. In contrast, the second, Hoare-style approach sacrifices some flexibility and avoidance of repeated path evaluation in order to achieve much greater automation. However, neither approach can handle the dynamic control flow caused by indirect branching. The second property is handled by the second set of contributions of this dissertation. These two contributions provide fully-automated methods of recovering control flow from binaries even in the presence of indirect branching. When such dynamic control flow cannot be overapproximatively resolved, it is clearly noted in the resultant output. In the first approach to control flow recovery, a structured memory representation allows for general analysis of control flow in the presence of indirection, gaining scalability by utilizing context-free function analysis. It supports various aliasing conditions via the usage of nondeterminism, with multiple output states potentially being produced from a given input state. The second approach adds function context and abstract interpretation-inspired modeling of the C++ exception handling (EH) application binary interface (ABI), allowing for the discovery of previously-unknown paths while maintaining or increasing automation. / Doctor of Philosophy / Modern computer programs are so complicated that individual humans cannot manually check all but the smallest programs to make sure they are correct and secure. This is even worse if you want to reduce the trusted computing base (TCB), the stuff that you have to assume is working right in order to say a program will execute correctly. The TCB includes your computer itself, but also whatever tools were used to take the programs written by programmers and transform them into a form suitable for running on a computer. Such tools are often called compilers. One method of reducing the TCB is to examine the lowest-level representation of that program, the assembly or even machine code that is actually run by your computer. This poses unique challenges, because operating on such a low level means you do not have a lot of the structure that a more abstract, higher-level representation provides. 
Also, sometimes you want to formally state things about a program's behavior; that is, say things about what it does with a high degree of confidence based on mathematical principles. You may also want to verify that one or more of those statements are true. If you want to be detailed about that behavior, you may need to know all of the chunks, or regions, in random-access memory (RAM) that are used by that program. RAM, henceforth referred to as just "memory", is your computer's first place of storage for the information used by running programs. This is distinct from long-term storage devices like hard disk drives (HDDs) or solid-state drives (SSDs), which programs do not normally have direct access to. Unfortunately, there is no one single approach that can automatically determine with absolute certainty for all cases the exact regions of memory that are read or written. This is called undecidability, and means that you need to approximate those memory regions a lot of the time if you want to have a significant degree of automation. An underapproximation, an approach that only gives you some of the regions, is not useful for formal statements as it might miss out on some behavior; it is unsound. This means that you need an overapproximation, an approach that is guaranteed to give you at least the regions read or written. Therefore, the first contribution of this dissertation is a preliminary approach to such an overapproximation. This approach is based on the work of Robert L. Floyd, focusing on the direct control flow (where the steps of a program go) in an individual function (structured program component). It still requires a lot of user effort, including having to manually specify the regions in memory that were possibly used and do a lot of work to prove that those regions are (overapproximatively) correct, so our tests were limited in scope. The second contribution automated a lot of the manual work done for the first approach. It is based on the work of Charles Antony Richard Hoare, who developed a verification approach focusing on the syntax (the textual form) of programs. This contribution produces what we call formal memory usage certificates (FMUCs), which are formal statements that the regions of memory they describe are the only ones possibly affected by the functions under test. These statements also come with proofs, which for our work are like scripts used to verify that the things the FMUCs assert about the corresponding functions can be shown to be true given the assumptions our FMUCs have. Sometimes those proofs are incomplete, though, such as when there is a loop (repeated bit of code) in a function under test or one function calls (executes) another. In those cases, a user has to finish the proof, in the first case by weakening (removing information from) the FMUC's statements about the loop and in the second by composing, or combining, the FMUCs of the two functions. Additionally, this second approach cannot handle dynamic control flow. Such control flow occurs when the low-level instructions a program uses to move to another place in that program do not have a pre-stored location to go to. Instead, that location is supplied as the program is running. This is opposed to direct control flow, where the place to go to is hard-coded into the program when it is compiled.
The tool also cannot deal with aliasing, which is when different state parts (value-holding components) of a program contain the same value and that value is used as the numeric address or identifier of a location in memory. Specifically, it cannot deal with potential aliasing, when there is not enough information available to determine if the state parts alias or not. Because of that, we had to add extra assumptions to the FMUCs that limited them to those cases where ambiguous memory-referencing state parts referred to separate memory locations. Finally, it specifically requires assembly as input; you cannot directly supply a binary to it. This is also true of the first contribution. Because of this, we were able to test on more functions than before, but not a lot more. Not being able to deal with dynamic control flow is a big problem, as almost all programs use it. For example, when a function reaches its end, it has to figure out where to return to based on the current state of the program (in the previous contribution, this was done manually). This means that control flow recovery (CFR) is very important for many applications, including decompilation (converting a program back into a higher-level form), patching (updating a program in place without modifying the original code and recompiling it), and low-level analysis or verification in general. However, as you may have noticed from earlier in this paragraph, in order to deal with such dynamic control flow you need to figure out what the possible destinations are for the individual control flow transfers. That can require knowing where you came from in the program, which means that analysis of dynamic control flow requires context (here, information previously obtained in the program). Even worse, it is another undecidable problem that requires overapproximation. To soundly recover control flow, we developed Hoare graphs (HGs), the third contribution of this dissertation. HGs use memory models that take the form of forests, or collections of tree data structures. A single tree represents a region in memory that may have multiple symbolic references, or abstract representations of a value. The children of the tree represent regions used in the program that are enclosed within their parent tree elements. Now, instead of assuming that all ambiguous memory regions are separate, we can use them under various aliasing conditions. We have also implemented support for some forms of dynamic control flow. Those that are not supported are clearly marked in the resultant HG. No user interaction is required even when loops are present thanks to a methodology that automatically reduces the amount of information present at a re-executed instruction until the information stabilizes. Function composition is also automatic now thanks to a method that treats each function as its own context in a safe and automated way, reducing memory consumption of our tool and allowing larger programs to be examined. In the process we did lose the ability to deal with recursion (functions that call themselves or call other functions that call back to the original), though. Lastly, we provided the ability to directly load binaries into the tool, no external disassembly (converting machine code into human-readable instructions) needed. This all allowed much greater testing than before, with applications to multiple programs and program libraries.
The fourth and final contribution of this dissertation iterates on the HG work by narrowing focus to the concept of exceptional control flow. Specifically, it models the kind of exception handling used by C++ programs. This is important as, if you want to explore a program's behavior, you need to know all the places it goes to. If you use a tool that does not model exception handling, you may end up missing paths of execution caused by unwinding. This is when an exception is thrown and propagates up through the program's current stack of function calls, potentially reaching programmer-supplied handling for that exception. Despite this, commonplace tools for static, low-level program analysis do not model such unwinding. The control flow graphs (CFGs) produced by our exception-aware tool are called exceptional interprocedural control flow graphs (EICFGs). These provide information about the exceptions being thrown and what paths they take in the program when they are thrown. Additional improvements are a better methodology for handling dynamic control flow as well as adding back in support for recursion. All told, this allowed us to explore even more programs than ever before.
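The sound treatment of indirect control flow described above can be caricatured as a worklist over instruction addresses in which unresolved indirect jumps are recorded rather than guessed. The instruction encoding, addresses, and fixed 4-byte instruction width below are all invented; the dissertation's tools operate on real binaries with far richer semantics.

```python
# Caricature of sound control flow recovery: explore successors from a known
# entry point; when an indirect jump cannot be resolved, record it instead of
# guessing. The instruction encoding here is invented.

PROGRAM = {
    0x00: ("jmp", 0x08),            # direct jump
    0x04: ("ret", None),
    0x08: ("call", 0x10),
    0x0C: ("jmp_indirect", "rax"),  # target depends on runtime state
    0x10: ("ret", None),
}

def successors(addr, op, arg):
    if op == "jmp":
        return [arg], False
    if op == "call":
        return [arg, addr + 4], False  # callee plus fall-through after return
    if op == "ret":
        return [], False
    if op == "jmp_indirect":
        return [], True                # cannot be soundly resolved here
    return [addr + 4], False

def recover_cfg(entry):
    edges, unresolved, worklist, seen = [], [], [entry], set()
    while worklist:
        addr = worklist.pop()
        if addr in seen or addr not in PROGRAM:
            continue
        seen.add(addr)
        succs, is_unresolved = successors(addr, *PROGRAM[addr])
        if is_unresolved:
            unresolved.append(addr)    # clearly noted, never guessed
        for s in succs:
            edges.append((addr, s))
            worklist.append(s)
    return edges, unresolved

edges, unresolved = recover_cfg(0x00)
print(edges)                         # [(0, 8), (8, 16), (8, 12)]
print([hex(a) for a in unresolved])  # ['0xc']
```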
40

Exception handling in object-oriented analysis and design

Van Rensburg, Annelise Janse. 01 January 2002
This dissertation investigates current trends concerning exceptions. Exceptions influence the reliability of software systems. In order to develop software systems that are more robust, thus delivering higher availability at lower development and operating cost, the occurrence of exceptions needs to be reduced and the effects of exceptions controlled. To do this, issues such as the detection, identification, classification, propagation, handling, language implementation, software testing and reporting of exceptions must be attended to. Although some of these areas are well researched, problems remain. The quest is to establish whether a unified exception-handling framework that can address these issues and problems throughout the software development life cycle is possible and viable, and if so, what the requirements for such a framework are. / Computing / M.Sc. (Information Systems)
