221

Facilitating dynamic flexibility and exception handling for workflows

Adams, Michael James January 2007 (has links)
Workflow Management Systems (WfMSs) are used to support the modelling, analysis, and enactment of business processes. The key benefits WfMSs seek to bring to an organisation include improved efficiency, better process control and improved customer service, which are realised by modelling rigidly structured business processes that in turn derive well-defined workflow process instances. However, the proprietary process definition frameworks imposed by WfMSs make it difficult to support (i) dynamic evolution and adaptation (i.e. modifying process definitions during execution) following unexpected or developmental change in the business processes being modelled; and (ii) exceptions, or deviations from the prescribed process model at runtime, even though it has been shown that such deviations are a common occurrence for almost all processes. These limitations imply that a large subset of business processes does not easily translate to the 'system-centric' modelling frameworks imposed. This research re-examines the fundamental theoretical principles that underpin workflow technologies to derive an approach that moves forward from the production-line paradigm and thereby offers workflow management support for a wider range of work environments. It develops a sound theoretical foundation based on Activity Theory to deliver an implementation of an approach for dynamic and extensible flexibility, evolution and exception handling in workflows, based not on proprietary frameworks, but on accepted ideas of how people actually perform their work activities. The approach produces a framework called worklets to provide an extensible repertoire of self-contained selection and exception-handling processes, coupled with an extensible ripple-down rule set. Using a Service-Oriented Architecture (SOA), a selection service provides workflow flexibility and adaptation by allowing the substitution of a task at runtime with a sub-process, dynamically selected from its repertoire depending on the context of the particular work instance. Additionally, an exception-handling service uses the same repertoire and rule-set framework to provide targeted and multi-functional exception-handling processes, which may be dynamically invoked at the task, case or specification level, depending on the context of the work instance and the type of exception that has occurred. Seven different types of exception can be handled by the service. Both expected and unexpected exceptions are catered for in real time. The work is formalised through a series of Coloured Petri Nets and validated using two exemplary studies: one involving a structured business environment and the other a more creative setting. It has been deployed as a discrete service for the well-known, open-source workflow environment YAWL, and, having a service orientation, its applicability is in no way limited to that environment; rather, it may be regarded as a case study in service-oriented computing in which dynamic flexibility and exception handling for workflows is provided orthogonally to the underlying workflow language. Also, being open-source, it is freely available for use and extension.
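The selection mechanism described above, in which a worklet is chosen from a repertoire by traversing a ripple-down rule set against the data of the current case, can be illustrated with a minimal sketch. The rule-tree structure, attribute names and example conditions below are assumptions for illustration, not the YAWL worklet service's actual data model or API.

# A minimal sketch (not the YAWL worklet service API) of how a ripple-down
# rule (RDR) tree might choose a worklet for a task based on case data.
class RdrNode:
    def __init__(self, condition, worklet, true_child=None, false_child=None):
        self.condition = condition        # predicate over the case context
        self.worklet = worklet            # worklet selected if the condition holds
        self.true_child = true_child      # refinement ("exception") branch
        self.false_child = false_child    # alternative branch

    def select(self, context, last_true=None):
        """Return the worklet of the last node whose condition was satisfied."""
        if self.condition(context):
            last_true = self.worklet
            return self.true_child.select(context, last_true) if self.true_child else last_true
        return self.false_child.select(context, last_true) if self.false_child else last_true

# Hypothetical repertoire for a "TreatPatient" task: the default choice is
# refined as more specific context conditions are met.
root = RdrNode(lambda c: True, "StandardTreatment",
               true_child=RdrNode(lambda c: c.get("fever", 0) > 39.0, "FeverTreatment",
                                  true_child=RdrNode(lambda c: c.get("allergy") == "penicillin",
                                                     "AlternativeAntibiotic")))

print(root.select({"fever": 39.5, "allergy": "penicillin"}))  # AlternativeAntibiotic
print(root.select({"fever": 37.0}))                           # StandardTreatment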
222

Interface adaptation for conversational services

Wang, Kenneth W.S. January 2008 (has links)
The proliferation of services on the web is leading to the formation of service ecosystems wherein services interact with one another in ways not foreseen during their development or deployment. This means that over its lifetime, a service is likely to be reused across multiple interactions, such that in each of them a different interface is required from it. Implementing, testing, deploying, and maintaining adapters to deal with this multiplicity of required interfaces can be costly and error-prone. The problem is compounded in the case of services that do not follow simple request-response interactions, but instead engage in conversations comprising arbitrary patterns of message exchanges. A key challenge in this setting is service mediation: the act of retrofitting existing services by intercepting, storing, transforming, and (re-)routing messages going into and out of these services so they can interact in ways not originally foreseen. This thesis addresses one aspect of service mediation, namely service interface adaptation. This problem arises when the interface that a service provides does not match the interface that it is expected to provide in a given interaction. Specifically, the thesis focuses on the reconciliation of mismatches between behavioural interfaces, that is, interfaces that capture ordering constraints between message exchanges. We develop three complementary proposals. Firstly, we propose a visual language for specifying adapters for conversational services. The language is based on an algebra of operators that are composed to define links between provided and required interfaces. These expressions are fed into an execution engine that intercepts, buffers, transforms and forwards messages to enact the adapter specification. Secondly, we endow such adapter specifications with a formal semantics defined in terms of Petri nets. The formal semantics is used to statically check the correctness of adapter specifications. Finally, we propose an alternative approach to service interface adaptation that does not require hard-wired links between provided and required interfaces. This alternative approach is based on the definition of mapping rules between message types, and is embodied in an adaptation machine. The adaptation machine sits between pairs of services and manipulates the exchanged messages according to a repository of mapping rules. The adaptation machine is also able to detect deadlocks and information loss at runtime.
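The adaptation machine described above can be illustrated with a minimal sketch in which intercepted messages are buffered and mapping rules between message types produce the messages the required interface expects. The rule format and message shapes are assumptions for illustration, not the thesis's concrete design.

# A hedged sketch of an "adaptation machine" mediating between two services
# by buffering intercepted messages and applying mapping rules between
# message types.
from collections import defaultdict

class AdaptationMachine:
    def __init__(self, rules):
        # rules: {target_type: (required_source_types, transform(parts) -> payload)}
        self.rules = rules
        self.buffer = defaultdict(list)

    def receive(self, msg_type, payload):
        """Store an intercepted message and emit any target messages that
        can now be produced from the buffered parts."""
        self.buffer[msg_type].append(payload)
        emitted = []
        for target, (sources, transform) in self.rules.items():
            if all(self.buffer[s] for s in sources):
                parts = {s: self.buffer[s].pop(0) for s in sources}
                emitted.append((target, transform(parts)))
        return emitted

# Example: the required interface expects one PlaceOrder message, while the
# provided interface sends OrderHeader and OrderLines separately.
rules = {
    "PlaceOrder": (["OrderHeader", "OrderLines"],
                   lambda p: {**p["OrderHeader"], "lines": p["OrderLines"]}),
}
machine = AdaptationMachine(rules)
print(machine.receive("OrderHeader", {"customer": "C42"}))      # [] (still waiting for lines)
print(machine.receive("OrderLines", [{"item": "A", "qty": 2}]))  # [('PlaceOrder', {...})]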
223

Acceptance and Use of the Service Oriented Computing Paradigm: the IT Professionals’ Perspective

Ilse Baumgartner Unknown Date (has links)
The thesis “Acceptance and use of the Service Oriented Computing paradigm: the IT professionals’ perspective” focuses on the question: what are the critical factors that influence IT professionals’ intentions to accept and use the Service Oriented Computing (SOC) approach to systems development? This work considers IT professionals as the key stakeholders in the SOC acceptance and use process and argues that the acceptance and practical use of SOC depends, at an early acceptance stage, primarily on the individual-level acceptance decisions made by senior IT professionals working for an organisation. Consequently, SOC acceptance and use (in its early stage) is seen as a bottom-up process driven, and to a high degree controlled, by the “early adopters” (Rogers 1995) of this technological paradigm (i.e. the senior IT professionals involved). Although SOC is considered the enabling technological approach in many different future areas (e.g. eBusiness, eGovernment, eScience etc.), very little research exists on the process of practical acceptance of this paradigm, in particular from the perspective of the “early stage” key stakeholders of this acceptance process, namely the IT professionals. This thesis consists of four major parts. First, it reviews the existing literature on technology acceptance and use and confirms the absence of an established theoretical framework in the domain of individual-level technology acceptance in the IT industry. Second, based on data collected in a series of exploratory interviews with senior IT practitioners, an initial model explaining the acceptance and use of SOC among IT professionals is proposed. Third, the derived model is revised and reformulated using an eGovernment case study. And fourth, based on the refined model, a survey instrument is developed, pilot-tested and administered to senior IT professionals currently using the SOC approach to systems development in their professional work. This thesis makes a contribution to IS research in several ways. While there exists extensive, well-grounded and well-accepted research in the domain of “IT end-user” individual-level technology acceptance, research on technology acceptance in the IT industry (i.e. technology acceptance by IT professionals) is very limited, and nearly all studies carried out in this IS research field are concerned with established approaches or technologies. The current study is among the few examining the perspective of “early adopters” or “innovators” (Rogers 1995) instead of investigating the acceptance process of the “early majority” or even “late majority”. Moreover, to the author’s knowledge it is the first study examining the process of individual-level SOC acceptance with a particular focus on the perspective of the “early stage” key stakeholders of this acceptance process, namely the IT professionals. An additional strength of the study is the use of multiple research methodologies: exploratory open-ended interviews, a qualitative case study and a web-based survey. This research is expected to be of interest to researchers focusing on technology acceptance in general and on technology acceptance in the IT industry in particular. It might also be of interest to IT practitioners considering accepting and using the SOC approach in their future applications.
225

A Co-Design Modeling Methodology for Simulation of Service Oriented Computing Systems

January 2011 (has links)
The adoption of the Service Oriented Architecture (SOA) as the foundation for developing a new generation of software systems, known as Service Based Software Systems (SBS), poses new challenges in system design. While simulation as a methodology serves a principal role in design, there is a growing recognition that simulation of SBS requires modeling capabilities beyond those that are developed for traditional distributed software systems. In particular, while different component-based modeling approaches may lend themselves to simulating the logical process flows in Service Oriented Computing (SOC) systems, they are inadequate in terms of supporting SOA-compliant modeling. Furthermore, composite services must satisfy multiple QoS attributes under constrained service reconfigurations and hardware resources. A key desired capability, therefore, is to model and simulate not only the services consistent with SOA concepts and principles, but also the hardware and network components on which services must execute. This dissertation develops SOC-DEVS, a novel co-design modeling methodology that enables simulation of the software and hardware aspects of SBS for early architectural design evaluation. A set of abstractions representing important service characteristics and service relationships are modeled. The proposed software/hardware co-design simulation capability is introduced into the DEVS-Suite simulator. Exemplar simulation models of a communication-intensive Voice Communication System and a computation-intensive Encryption System are developed and then validated using data from an existing real system. The applicability of the SOC-DEVS methodology is demonstrated in a simulation testbed aimed at facilitating the design and development of SBS. Furthermore, the simulation testbed is extended by integrating an existing prototype monitoring and adaptation system with the simulator to support basic experimentation towards the design and development of Adaptive SBS. / Dissertation/Thesis / Ph.D. Computer Science 2011
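A rough idea of the modeling abstraction can be conveyed with a toy DEVS-style atomic model of a service with a fixed processing time. This is a hedged sketch, not DEVS-Suite or SOC-DEVS code, and the model structure and the service-time parameter are assumptions.

# A toy DEVS-style atomic model: a service processes queued requests one at a
# time, illustrating the external/internal transition, output and time-advance
# functions that DEVS-based co-design simulation builds on.
INFINITY = float("inf")

class ServiceModel:
    def __init__(self, service_time=2.0):
        self.service_time = service_time
        self.queue = []          # pending requests (software aspect)
        self.sigma = INFINITY    # time until the next internal event

    def ext_transition(self, elapsed, request):
        """External event: a request arrives from the network/hardware layer."""
        self.queue.append(request)
        self.sigma = self.service_time if len(self.queue) == 1 else self.sigma - elapsed

    def int_transition(self):
        """Internal event: finish the request at the head of the queue."""
        self.queue.pop(0)
        self.sigma = self.service_time if self.queue else INFINITY

    def output(self):
        return ("response", self.queue[0])

    def time_advance(self):
        return self.sigma

# Tiny event loop: two requests arrive, responses come out service_time apart.
model, now, arrivals = ServiceModel(), 0.0, [(0.0, "req-1"), (0.5, "req-2")]
for t, req in arrivals:
    model.ext_transition(t - now, req)
    now = t
while model.time_advance() != INFINITY:
    now += model.time_advance()
    print(now, model.output())
    model.int_transition()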
226

Integration of OPC Unified Architecture with IIoT Communication Protocols in an Arrowhead Translator

Rönnholm, Jesper January 2018 (has links)
This thesis details the design of a protocol translator between the industrial-automation protocol OPC UA and HTTP. The design is based on the architecture of the protocol translator of the Arrowhead framework, and is interoperable with all of its associated protocols. The design requirements are defined to comply with a service-oriented architecture (SOA) and RESTful interaction through HTTP, requiring minimal familiarity with OPC UA semantics on the part of the consuming client. Effort is put into making the translation as transparent as possible, but the scope of this work excludes a complete semantic translation. The solution presented in this thesis satisfies structural and foundational interoperability, and bridges interaction so that it is independent of OPC UA services. The resulting translator is capable of accessing the content of any OPC UA server with simple HTTP requests, where addressing is oriented around OPC UA nodes.
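The translation idea, addressing OPC UA nodes through plain HTTP requests, can be sketched as follows. The URL scheme and the read_node stub are assumptions for illustration; an actual translator would call an OPC UA client SDK and integrate with the Arrowhead framework rather than serve a hard-coded value.

# A hedged sketch: expose OPC UA node values through plain HTTP GETs, with
# URLs addressing nodes by their OPC UA node id.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import unquote

def read_node(node_id):
    # Placeholder for an actual OPC UA read (e.g. via an OPC UA client SDK).
    fake_address_space = {"ns=2;s=Boiler/Temperature": 73.4}
    return fake_address_space.get(node_id)

class TranslatorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if not self.path.startswith("/nodes/"):
            self.send_error(404)
            return
        node_id = unquote(self.path[len("/nodes/"):])
        value = read_node(node_id)
        if value is None:
            self.send_error(404, "unknown node")
            return
        body = json.dumps({"nodeId": node_id, "value": value}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # GET /nodes/ns=2;s=Boiler%2FTemperature -> {"nodeId": ..., "value": 73.4}
    HTTPServer(("localhost", 8080), TranslatorHandler).serve_forever()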
227

Método para modelagem de processos de negócios na engenharia de requisitos de software / A method for business process modeling in software requirements engineering

Santos, Sheila Leal January 2014 (has links)
Advisor: Prof. Dr. Fabiana Soares Santana / Master's dissertation - Universidade Federal do ABC, Graduate Program in Computer Science, 2014. / Software-producing companies need effective methods to achieve competitive results. A major cause of adverse outcomes in software projects is deficiencies in software requirements engineering. Inadequate or incomplete requirements specification can lead to the construction of systems that do not conform to customer needs, resulting in increased costs, schedule delays, and the execution of unnecessary activities. In order to minimize problems in the requirements specification, best practices in software engineering recommend a proper understanding of the information technology (IT) environment and of the business rules. The use of business processes has been adopted by many organizations to map their needs and to align knowledge between business and IT teams. BPMN (Business Process Modeling Notation) is the notation most commonly adopted by software companies for business process modeling, and various software tools are available for process mapping and simulation. In addition to the concern with business processes, many organizations are adopting service-oriented architectures (SOA) in order to facilitate the integration between processes and technology, resulting in more flexible solutions that meet ever-changing IT needs and new business opportunities. The combination of BPMN and SOA allows a better understanding of the systems to be developed by mapping and modeling business processes, from which it is possible to identify the services that should be encapsulated within a particular technological environment. Results include increased productivity, improved quality of software (QoS) and cost reduction. This work proposes a method for including process modeling as part of requirements engineering, formally incorporating the use of business processes in the software requirements specification. A case study was developed to evaluate the proposed method and to illustrate its application. Although further experiments are recommended, the results of the case study are promising and show that a thorough analysis of the business processes during the requirements specification phase helps in understanding and obtaining a more accurate identification of the system requirements, improving the potential for successful software production.
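One step of the proposed method, deriving candidate services from a BPMN process model, can be sketched as follows. The namespace is the standard BPMN 2.0 model namespace; the file name and the one-task-per-candidate-service heuristic are illustrative assumptions rather than the method's actual derivation rules.

# A hedged sketch: read a BPMN 2.0 XML model and list its tasks as candidate
# services for the requirements specification.
import xml.etree.ElementTree as ET

BPMN_NS = {"bpmn": "http://www.omg.org/spec/BPMN/20100524/MODEL"}

def candidate_services(bpmn_file):
    """Return (process id, task name) pairs found in a BPMN 2.0 model."""
    root = ET.parse(bpmn_file).getroot()
    candidates = []
    for process in root.findall("bpmn:process", BPMN_NS):
        for tag in ("task", "serviceTask", "userTask"):
            for task in process.findall(f"bpmn:{tag}", BPMN_NS):
                candidates.append((process.get("id"), task.get("name")))
    return candidates

if __name__ == "__main__":
    # "order_handling.bpmn" is a hypothetical model file.
    for process_id, task_name in candidate_services("order_handling.bpmn"):
        print(process_id, "->", task_name)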
228

Implementação de uma arquitetura de controle distribuído para sistema produtivo. / Implementation of a distributed control architecture for productive system.

Caio Cesar Fattori 20 August 2010 (has links)
Markets are becoming independent of geographic barriers, and industries have sought new configurations of productive systems, moving from centralized to distributed structures and shifting their production plants to countries with energy reserves and low operating costs. To allow the coordination and management of this type of dispersed productive system, advances in mechatronic and information technologies are exploited, enabling greater cooperation between the parts of the system and among the stakeholders involved (customers, operators, administrators, etc.). Each part of a dispersed productive system, which is itself a productive system, has its own level of operational autonomy. This type of system presents new problems of integration and coordination of components that must be overcome to achieve an effective implementation. The lack of data from tests already carried out with distributed structures hinders the practical development of dispersed productive systems. This work initially adopts a control architecture for negotiation between users of a dispersed productive system. For the implementation of the architecture, computational models were developed exploring the potential of Petri nets (PN) and the production flow schema (PFS) to systematize the construction of the models. Through analysis of the models based on PN properties, the control architecture was evaluated and the specifications adopted for its practical implementation were established. The implementation and the tests were performed considering the autonomous subsystems of a flexible assembly system that emulates a dispersed productive system. The studies, analyses and tests performed were essential for acquiring practical experience in the conception, design, implementation and operation of distributed control architectures applied to dispersed productive systems.
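The kind of Petri net model used to analyse the control architecture can be illustrated with a minimal token-game sketch. The request/grant net below is an illustrative assumption, not one of the dissertation's actual models.

# A minimal Petri net: places hold tokens, and a transition fires when all of
# its input places are marked.
class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)        # place -> token count
        self.transitions = transitions      # name -> (input places, output places)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# A hypothetical negotiation step: a pending request is granted a free resource.
net = PetriNet(
    marking={"request_pending": 1, "resource_free": 1},
    transitions={
        "grant": (["request_pending", "resource_free"], ["resource_busy"]),
        "release": (["resource_busy"], ["resource_free"]),
    },
)
net.fire("grant")
print(net.marking)   # {'request_pending': 0, 'resource_free': 0, 'resource_busy': 1}
net.fire("release")
print(net.marking)   # resource_free is back to 1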
229

Service-oriented middleware for dynamic, real-time management of heterogeneous geosensors in flood management / Middleware orientado a serviços para gerenciar dinamicamente e em tempo-real geosensores heterogêneos na gestão de inundações

Luiz Fernando Ferreira Gomes de Assis 16 December 2015 (has links)
Natural disasters such as floods, droughts and storms cause many deaths and a great deal of damage worldwide. Recently, several countries have suffered from an increased number of floods. This has led government agencies to seek to improve flood risk management by providing historical data obtained from stationary sensor networks to help communities that live in hazardous areas. However, these sensor networks can only help to check specific features (e.g. temperature and pressure), and are unable to contribute significantly to supplying the missing information that is required. In addition to stationary sensors, mobile sensors have also been used to monitor floods, since they can provide images and reach distances that are not within the coverage of stationary sensors. To combine these heterogeneous sensors, an initiative called Sensor Web Enablement (SWE) seeks to free applications from the idiosyncrasies that affect the implementation of such sensors. However, SWE cannot always be applied effectively in a context where sensors are added and removed dynamically. This dynamic context makes it a complex task to handle, control, access and discover sensors. In view of this, the aim of this work is to dynamically manage, in near real time, the heterogeneous sensors involved in flood risk management, by enabling interoperable access to their data and using open and reusable components. To achieve this goal, a service-oriented middleware was designed that contains a common message protocol, a dynamic sensor management component and a repository. This approach was evaluated by employing an application that geographically prioritizes social media messages based on sensor data.
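The evaluation application, which prioritizes social media messages geographically based on sensor data, can be sketched as follows. The registry fields, the water-level threshold and the nearest-alerting-sensor heuristic are assumptions for illustration, not the middleware's actual components.

# A hedged sketch: rank social media messages higher when they were posted
# near sensors currently reporting high water levels.
import math

def distance_km(a, b):
    """Approximate great-circle distance between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def prioritize(messages, sensors, level_threshold=4.0):
    """Sort messages by proximity to the nearest sensor above the threshold."""
    alerting = [s for s in sensors if s["water_level_m"] >= level_threshold]
    def score(msg):
        if not alerting:
            return float("inf")
        return min(distance_km(msg["location"], s["location"]) for s in alerting)
    return sorted(messages, key=score)

# Hypothetical registry entries and messages.
sensors = [
    {"id": "river-01", "location": (-22.01, -47.89), "water_level_m": 4.6},
    {"id": "river-02", "location": (-22.30, -47.50), "water_level_m": 1.2},
]
messages = [
    {"text": "street flooded near the bridge", "location": (-22.02, -47.90)},
    {"text": "light rain downtown", "location": (-22.35, -47.45)},
]
for m in prioritize(messages, sensors):
    print(m["text"])   # the message near the alerting sensor is listed first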
230

Investigação de modelo de auditoria contínua para tribunais de contas / Investigation of a continuous auditing model for courts of accounts

Pacheco Motta Junior, Eury 31 January 2010 (has links)
Tribunal de Contas de Pernambuco / Pressure to improve control and transparency mechanisms has been driving the modernization of auditing techniques. In this pursuit, Information Technology resources have proved to be the main allies, used on an ever larger scale and with ever greater sophistication. In this respect, the use of so-called Continuous Auditing (CA) is one of the main advances under way in the private sector. Aimed at the analysis of data in electronic format, the approach has been increasingly adopted, driven by the growth of paperless transactions and by legal requirements such as the Sarbanes-Oxley (SOX) Act of 2002, which seeks to ensure that companies have reliable control mechanisms, reinforcing their governance and transparency as a means of restoring investor confidence after financial scandals involving large American corporations. Recent changes in Brazilian legislation create transparency obligations for the public sector that are similar to those created by the SOX Act. The change requires that information on the budgetary and financial execution of public entities be published in real time. With this change, the conditions arise for the Courts of Accounts (Tribunais de Contas, TCs) to use CA approaches to oversee the application of public resources in real time. The models proposed for CA are aimed at the private sector, and often at internal control. The present work investigates a CA model appropriate to the role of the TCs in the exercise of external control. With this technological update, the Courts of Accounts can considerably advance the effectiveness of their work, generating better results for society and benefits for the Brazilian public sector as a whole. As a result of the investigation, a model of a CA Environment for TCs was built. The proposal describes the institutions participating in the environment and their roles; the technological architecture that supports the operation of the environment; and the design of the environment's main processes. Additionally, evolution scenarios and suggested criteria for planning the environment are presented, as well as the benefits that the approach can bring.
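A continuous-auditing rule of the kind such an environment would run over published budget execution records can be sketched as follows. The record fields, the rule and the data are illustrative assumptions, not the model's actual audit rules or data layout.

# A hedged sketch of a continuous-auditing check: flag payments that exceed
# their committed amount as the execution records are published.
def audit_payments(records):
    """Return findings for payments larger than the committed value."""
    findings = []
    for r in records:
        if r["paid"] > r["committed"]:
            findings.append({
                "entity": r["entity"],
                "commitment": r["commitment_id"],
                "excess": round(r["paid"] - r["committed"], 2),
            })
    return findings

# Hypothetical budget execution records.
records = [
    {"entity": "Municipality A", "commitment_id": "2010NE000123",
     "committed": 10_000.00, "paid": 10_000.00},
    {"entity": "Municipality B", "commitment_id": "2010NE000456",
     "committed": 5_000.00, "paid": 7_250.00},
]
for finding in audit_payments(records):
    print(finding)   # only the Municipality B payment is flagged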
