671

Conception d’une architecture de services d’intelligence ambiante pour l’optimisation de la qualité de service de transmission de messages en e-santé / Design of an ambient intelligence services architecture for optimizing quality of service of message transmission in eHealth

Guizani, Nachoua 30 September 2016 (has links)
La gestion de l'acheminement de messages d'e-santé en environnement ubiquitaire soulève plusieurs défis majeurs liés à la diversité et à la spécificité des cas d'usage et des acteurs, à l'évolutivité des contextes médical, social, logistique, environnemental... Nous proposons une méthode originale d'orchestration autonome et auto-adaptative de services visant à optimiser le flux des messages et à personnaliser la qualité de transmission, en les adressant aux destinataires les plus appropriés dans les délais requis. Notre solution est une architecture générique dirigée par des modèles du domaine d'information considéré et des données contextuelles, basés sur l'identification des besoins et des contraintes soulevées par notre problématique. Notre approche consiste en la composition de services de fusion et de gestion dynamique en temps réel d'informations hétérogènes provenant des écosystèmes source, cible et message, pilotés par des méthodes d'intelligence artificielle pour l'aide à la prise de décision de routage. Le but est de garantir une communication fiable, personnalisable et sensible à l'évolution du contexte, quel que soit le scénario et le type de message (alarme, technique, etc.). Notre architecture, applicable à divers domaines, a été consolidée par une modélisation des processus métiers (BPM) explicitant le fonctionnement des services qui la composent. Le cadriciel proposé est basé sur des ontologies et est compatible avec le standard HL7 V3. L'auto-adaptation du processus décisionnel d'acheminement est assurée par un réseau bayésien dynamique et la supervision du statut des messages par une modélisation mathématique utilisant des réseaux de Petri temporels. / Routing policy management of eHealth messages in a ubiquitous environment raises several key issues, such as taking into account the diversity and specificity of the different use cases and actors, as well as the dynamicity of the medical, social, logistic and environmental contexts. We propose an original, autonomous and adaptive service orchestration methodology aiming to optimize message flow and personalize transmission quality by sending messages to the most appropriate recipients within the required time. Our solution consists of a generic, model-driven architecture in which domain information and context models were designed according to user needs and requirements. Our approach consists of composing, in real time, services for dynamic fusion and management of heterogeneous information from the source, target and message ecosystems, driven by artificial intelligence methods for routing decision support. The aim is to ensure reliable, personalized and dynamically context-aware communication, whatever the scenario and the message type (alarm, technical, etc.). Our architecture is applicable to various domains and has been strengthened by business process modeling (BPM) to make the operation of its component services explicit. The proposed framework is based on ontologies and is compatible with the HL7 V3 standard. Self-adaptation of the routing decision process is performed by means of a dynamic Bayesian network, and message status supervision is based on timed Petri nets.
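The abstract names a dynamic Bayesian network as the engine behind the routing decision. The Python sketch below is only a loose, single-time-slice illustration of context-aware recipient selection, not the architecture or model described in the thesis; the recipients, context variables and probabilities are invented for the example.

```python
# Hypothetical, simplified illustration of context-aware routing scoring.
# It is NOT the thesis's architecture; recipient names, context variables
# and probabilities are invented for the example.

# P(context) and P(recipient is appropriate | context) for a single time slice;
# a dynamic Bayesian network would additionally carry these beliefs over time.
PRIOR = {"urgent": 0.2, "routine": 0.8}
APPROPRIATE = {
    "on_call_physician": {"urgent": 0.9, "routine": 0.3},
    "ward_nurse":        {"urgent": 0.6, "routine": 0.8},
    "technical_support": {"urgent": 0.1, "routine": 0.4},
}

def route(message_type: str, evidence_urgent: float) -> str:
    """Pick the recipient with the highest expected appropriateness.

    evidence_urgent is the observed likelihood that the current context
    is 'urgent' (e.g. derived from an alarm message type).
    """
    # Posterior over contexts after observing the message-related evidence.
    unnorm = {
        "urgent": PRIOR["urgent"] * evidence_urgent,
        "routine": PRIOR["routine"] * (1.0 - evidence_urgent),
    }
    z = sum(unnorm.values())
    posterior = {c: p / z for c, p in unnorm.items()}

    # Expected appropriateness of each candidate recipient.
    scores = {
        r: sum(posterior[c] * p_c for c, p_c in table.items())
        for r, table in APPROPRIATE.items()
    }
    best = max(scores, key=scores.get)
    print(f"{message_type}: posterior={posterior} -> {best}")
    return best

route("alarm", evidence_urgent=0.95)     # likely routed to the on-call physician
route("technical", evidence_urgent=0.05)
```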
672

Modelo cooperativo construtivista para autoria de cursos a distância usando tecnologia de Workflow / Cooperative constructivist model for e-learning authoring using workflow technology

Zeve, Carlos Mario Dal Col January 2003 (has links)
Este trabalho tem por fim especificar um modelo de descrição de tarefas, usando a tecnologia de workflow para representar a automação das atividades de autoria de cursos em ambientes distribuídos como a web, baseado em um modelo interacionista de cooperação. Busca, também, obter respostas relevantes às necessidades de especificação de workflow, tendo em vista a possibilidade de agregar ou modificar alguns elementos, para que possam expressar situações não previstas nos modelos atuais. O interesse deste trabalho em workflow deve-se ao fato de as tarefas nele desenvolvidas se relacionarem à autoria de documentos multimídia, tais como os utilizados em educação a distância, que compreendem não somente a construção, mas os processos cooperativos que sugerem as decisões, as escolhas, as preferências durante o processo de autoria. A abordagem proposta trata o problema da concepção do workflow de forma declarativa, através de um modelo que permita especificar tarefas, assim como sua ordenação temporal. A ordenação temporal pode ser obtida através do sequenciamento, seleção e interação de atividades, bem como através de propriedades que identificam o início e o fim de cada atividade. Por último, este trabalho visa estender as possibilidades da construção dos modelos de workflow, propondo uma técnica de planejamento que possibilite uma política de alocação dos autores associando a disponibilidade de tempo e as competências envolvidas na execução das atividades. Assim, o objetivo que se busca é um modelo de processo de autoria que possibilite expressar a interação e cooperação entre os autores, através de uma política de alocação que seja orientada pelas competências para execução de determinadas atividades. / This work aims to specify a task description model that uses workflow technology to represent the automation of e-learning authoring activities in distributed environments such as the web, based on an interactionist model of cooperation. It also aims to obtain relevant answers to workflow specification needs, considering the possibility of adding or modifying some elements so that they can express situations not foreseen in current models. This work's interest in workflow lies in the fact that the tasks developed in it relate to the authoring of multimedia documents, such as those used in distance education; these comprise not only construction but also the cooperative processes that suggest the decisions, choices and preferences made during the authoring process. The proposed approach deals with the problem of workflow design in a declarative way, through a model that allows the specification of tasks as well as their temporal ordering. The temporal ordering can be obtained through the sequencing, selection and interaction of activities, as well as through properties that identify the beginning and the end of each activity. Finally, this work seeks to extend the possibilities of workflow model construction by proposing a planning technique that enables an author allocation policy associating time availability with the competences involved in executing the activities. Therefore, the objective of this work is an authoring process model capable of expressing the interaction and cooperation among authors, through an allocation policy guided by the competences required to perform certain activities.
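As a rough illustration of the competence- and availability-based author allocation mentioned in the abstract, the sketch below assigns activities to authors. The classes, field names and allocation rule are hypothetical and do not come from the dissertation's workflow model.

```python
# Hypothetical sketch of competence- and availability-based author allocation.
# Names and the allocation rule are invented for illustration only.
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class Author:
    name: str
    competences: Set[str]
    available_hours: int

@dataclass
class Activity:
    name: str
    required_competence: str
    effort_hours: int

def allocate(activity: Activity, authors: List[Author]) -> Optional[Author]:
    """Pick a competent author with enough remaining availability."""
    candidates = [
        a for a in authors
        if activity.required_competence in a.competences
        and a.available_hours >= activity.effort_hours
    ]
    if not candidates:
        return None
    # Prefer the author with the most spare time, to balance the workload.
    chosen = max(candidates, key=lambda a: a.available_hours)
    chosen.available_hours -= activity.effort_hours
    return chosen

authors = [Author("Ana", {"video", "text"}, 10), Author("Bruno", {"text"}, 4)]
workflow = [Activity("write script", "text", 3), Activity("edit video", "video", 6)]
for act in workflow:                      # activities handled in sequence order
    who = allocate(act, authors)
    print(act.name, "->", who.name if who else "unassigned")
```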
673

Um middleware para execução de processos estruturados em grades computacionais / A middleware for execution of structured processes in computer grids

Cicerre, Fábio Rodrigo de Lima 12 July 2007 (has links)
Orientadores: Luiz Eduardo Buzato, Edmundo Roberto Mauro Madeira / Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Computação / Made available in DSpace on 2018-08-16T20:16:50Z (GMT). No. of bitstreams: 1 Cicerre_FabioRodrigodeLima_D.pdf: 1417286 bytes, checksum: 642d37f1cb522ec4a356bbb4e9b55b96 (MD5) Previous issue date: 2010 / Resumo: O conceito de grade surgiu com a necessidade crescente de se aproveitar recursos computacionais disponíveis em uma ou mais organizações para resolver problemas que exigem compartilhamento de dados e um grande poder de processamento. Uma grade computacional tem como objetivo principal permitir a execução distribuída e paralela de tarefas em recursos compartilhados. Uma grade é constituída de infra-estrutura física, composta de uma ou mais redes autônomas de computadores, e de um sistema de suporte (middleware), que provê serviços de gerenciamento de informações sobre os recursos da grade, controle de acesso e execução de tarefas sobre esses recursos e mecanismos de comunicação. Atualmente existem diversos sistemas que suportam a execução de tarefas independentes em uma grade computacional, mas poucos consideram a execução de processos de workflow, que permitem a definição de dependência explícita de dados e controle entre tarefas, o que impede um melhor aproveitamento de recursos, escalabilidade, desempenho de execução e recuperação automática de processos com manutenção de consistência. O sistema Xavantes, proposto e descrito nesse trabalho, procura suprir essas deficiências, tendo como principal objetivo suportar a execução distribuída de processos de workflow em máquinas heterogêneas, em uma ou mais organizações autônomas e dinâmicas, provendo um middleware que forneça uma melhor escalabilidade, desempenho e confiabilidade para a execução de aplicações em grades computacionais. / Abstract: The grid concept has emerged from the increasing need to use the computational resources available in one or more organizations in order to solve problems that require data sharing and large processing power. The main goal of a computational grid is to allow the distributed and parallel execution of tasks on shared resources. A grid is composed of a physical infrastructure, with one or more autonomous computer networks, and a middleware, which provides information management services about grid resources, access control and task execution on these resources, and communication mechanisms. Nowadays, there are several systems that support the execution of independent tasks in a computational grid, but few consider the execution of workflow processes, which allow the explicit definition of data and control dependencies among tasks; this restricts better use of the available resources, scalability, execution performance, and automatic recovery of processes with correct consistency maintenance. The Xavantes system, proposed and described in this work, is designed to address these deficiencies. Its main goal is to support the distributed execution of workflow processes on heterogeneous machines, in one or more autonomous and dynamic organizations, providing a middleware that delivers better scalability, performance and reliability for application execution in computational grids. / Doutorado / Sistemas Distribuídos e Redes de Computadores / Doutor em Ciência da Computação
674

Uma heurística de agrupamento de caminhos para escalonamento de tarefas em grades computacionais / A path clustering heuristic for task scheduling in computational grids

Bittencourt, Luiz Fernando, 1981- 15 March 2006 (has links)
Orientador: Edmundo Roberto Mauro Madeira / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Made available in DSpace on 2018-08-06T12:20:00Z (GMT). No. of bitstreams: 1 Bittencourt_LuizFernando_M.pdf: 1217558 bytes, checksum: dcbdeb1eaf538ae17a83304451a73126 (MD5) Previous issue date: 2006 / Resumo: Uma grade computacional é um sistema heterogêneo colaborativo, geograficamente distribuído, multi-institucional e dinâmico, onde qualquer recurso computacional ligado a uma rede, local ou não, é um potencial colaborador. Grades computacionais são atualmente um grande foco de estudos relacionados à execução de aplicações paralelas, tanto aquelas que demandam grande poder computacional quanto aquelas que se adaptam bem a ambientes distribuídos. Como os recursos de uma grade pertencem a vários domínios administrativos diferentes com políticas diferentes, cada recurso tem autonomia para participar ou deixar de participar da grade em qualquer momento. Essa característica dinâmica e a heterogeneidade tornam o escalonamento de aplicações, a gerência de recursos e a tolerância a falhas grandes desafios nesses sistemas. Particularmente, o escalonamento desempenha um papel de suma importância, pois é determinante no tempo de execução das aplicações. O escalonamento de tarefas é um problema NP-Completo [6], o que levou ao desenvolvimento de uma heurística para o problema de otimização associado. Neste trabalho apresentamos um escalonador de tarefas em grades computacionais baseado no Xavantes [3], um middleware que oferece suporte a execução de tarefas dependentes através de estruturas de controle hierárquicas chamadas controladores. O algoritmo desenvolvido, chamado de Path Clustering Heuristic (PCH), agrupa as tarefas com o objetivo de minimizar a comunicação entre os controladores e as tarefas, diminuindo o tempo de execução total do processo. / Abstract: A computational grid is a collaborative, heterogeneous, geographically distributed, multi-institutional and dynamic system, where any computational resource with a network connection, local or remote, is a potential collaborator. In computational grids, problems related to the execution of parallel applications, both those that demand large computational power and those that fit well in distributed environments, are widely studied nowadays. As grid resources belong to several different administrative domains with different policies, each resource has the autonomy to join or leave the grid at any time. These dynamic and heterogeneous characteristics make application scheduling, resource management and fault tolerance major challenges in these systems. In particular, the scheduler plays an important role, since it determines the execution time of an application. The task scheduling problem is NP-complete [6], which led to the development of a heuristic for the associated optimization problem. In this work we present a task scheduler for a computational grid based on Xavantes [3], a middleware that supports dependent task execution through hierarchical control structures called controllers. The developed algorithm, called Path Clustering Heuristic (PCH), clusters tasks in order to minimize the communication between controllers and tasks, reducing the overall process execution time. / Mestrado / Sistemas de Computação / Mestre em Ciência da Computação
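To give a flavour of the path-clustering idea, the sketch below greedily groups DAG tasks that lie on a common dependency path, so that their intermediate data would stay on one resource. It is a simplified illustration only, not the published PCH algorithm; the example DAG and cost figures are invented.

```python
# Simplified illustration of the *general* path-clustering idea (group tasks
# that lie on the same dependency path so they run on the same resource and
# their intermediate data never crosses the network). This is not the
# published PCH algorithm; the DAG and cost figures are invented.

# DAG: task -> list of successor tasks
dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
work = {"a": 4, "b": 3, "c": 6, "d": 2}          # computation cost per task

def path_clusters(dag, work):
    """Greedily grow clusters by always following the heaviest unclustered successor."""
    clustered, clusters = set(), []
    for start in dag:                              # deterministic dict order
        if start in clustered:
            continue
        cluster, task = [], start
        while task is not None and task not in clustered:
            cluster.append(task)
            clustered.add(task)
            succs = [s for s in dag[task] if s not in clustered]
            task = max(succs, key=lambda s: work[s]) if succs else None
        clusters.append(cluster)
    return clusters

print(path_clusters(dag, work))   # [['a', 'c', 'd'], ['b']]
```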
675

GPO: um middleware para orquestração de serviços em grades computacionais / A middleware for service orchestration in computational grids

Senna, Carlos Roberto, 1956- 27 February 2007 (has links)
Orientador: Edmundo Roberto Mauro Madeira / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Made available in DSpace on 2018-08-09T11:43:09Z (GMT). No. of bitstreams: 1 Senna_CarlosRoberto_M.pdf: 1604896 bytes, checksum: 4d91ce46c46772043ce75490d16c3b98 (MD5) Previous issue date: 2007 / Resumo: No ambiente colaborativo das grades computacionais são poucas as ferramentas para gerência de processos e serviços orientadas ao usuário. Esta dissertação apresenta o Grid Process Orchestration (GPO), uma infraestrutura que faz orquestração de serviços e processos em grades computacionais, permitindo ao usuário criar e gerenciar fluxos complexos, com tarefas fortemente acopladas, sem suporte adicional. O GPO é baseado na OGSA (Open Grid Services Architecture) e descreve os fluxos usando o conceito de orquestração de serviços Web aplicado aos serviços das grades computacionais. A dissertação descreve a arquitetura da infraestrutura proposta, detalha seus principais componentes, suas funcionalidades para gerência de fluxos e alguns aspectos do protótipo implementado. Além disso, propõe uma linguagem compacta para descrever os workflows. Uma aplicação exemplo é apresentada ilustrando as facilidades da infraestrutura proposta. / Abstract: In the collaborative environment of computational grids, there are few user-oriented tools for process and service management. This work presents the Grid Process Orchestration (GPO), an infrastructure for service and process orchestration in computational grids which allows users to create and manage complex workflows composed of strongly coupled jobs without additional support. The GPO is based on the Open Grid Services Architecture (OGSA) and describes workflows using Web Services orchestration concepts applied to computational grid services. This work describes the architecture of the proposed infrastructure, detailing its main components and its workflow management functionalities, and presents some aspects of the implemented prototype. In addition to the architecture, it proposes a compact language for describing workflows. An application example is presented to illustrate the facilities of the proposed infrastructure. / Mestrado / Redes de Computadores / Mestre em Ciência da Computação
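The abstract mentions a compact language for describing workflows but does not show its syntax. The toy sketch below, with invented node kinds and service names, merely illustrates what a compact workflow description plus a minimal orchestrator could look like; it is not GPO's language or implementation.

```python
# A toy, hypothetical example of a "compact workflow description" plus a
# minimal orchestrator. The syntax and service names below are invented and
# do not reflect GPO's actual language.

workflow = ("seq",                      # run children in order
            ("task", "stage_input"),
            ("par",                     # children composed in parallel
             ("task", "render_frames"),
             ("task", "extract_audio")),
            ("task", "merge_results"))

def run(node, invoke):
    """Walk the workflow tree, delegating each task to an `invoke` callback
    (in a real grid middleware this would call a remote service)."""
    kind, *rest = node
    if kind == "task":
        invoke(rest[0])
    elif kind in ("seq", "par"):        # sequential vs. parallel composition
        for child in rest:              # (parallelism is not simulated here)
            run(child, invoke)
    else:
        raise ValueError(f"unknown node kind: {kind}")

run(workflow, invoke=lambda name: print("invoking service:", name))
```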
676

Compartilhamento de objetos compostos entre bases de dados orientadas a objetos / Sharing composite objects in object-oriented databases

João Eduardo Ferreira 05 July 1996 (has links)
Este trabalho apresenta uma proposta para o compartilhamento de dados entre bases de dados orientadas a objetos, em ambientes de desenvolvimento de projetos. O processo de compartilhamento é realizado através de três fases: separação, evolução e integração de dados. Esta forma de compartilhamento atua através de vínculos entre os objetos da base original e a base produto. Foram definidos seis tipos de vínculos, que são estabelecidos no processo de separação: apenas leitura, isolado, flagrante, mutuamente exclusivo, independente e on-line. Com isso, ambas as bases, respeitando as limitações impostas pelo tipo de vínculo entre as mesmas, podem evoluir separadamente e, depois de um determinado tempo, realizar, se conveniente, um processo de reintegração. O processo de compartilhamento de dados tem por unidade de gerenciamento os objetos compostos da base de dados. Os conceitos apresentados podem ser universalmente aplicados em qualquer base de dados que efetue gerenciamento sobre a composição de seus objetos. Neste trabalho os conceitos de compartilhamento de dados são exemplificados através do modelo de dados SIRIUS. / This work presents a technique for sharing data stored in object-oriented databases, aimed at design environments. Three processes enable the sharing of data between databases: separation, evolution and data integration. Whenever a block of data needs to be shared between the original and the product database, it is spread among both, resulting in two blocks: one in the original database and another in the receiving one, identified as the product of the sharing process. During the evolution phase of the sharing process, these blocks are not required to be kept identical. Six types of links to drive the updates were defined: read only, isolated, snapshot, mutually exclusive, independent and on-line. The original and product databases, both restricted by the rules imposed by the type of link, can evolve separately. After a while they may enter a reintegration process, which uses the composite objects as the control units. The presented concepts can be applied to any data model supporting composite objects. The SIRIUS data model is used to exemplify these concepts.
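As a purely hypothetical reading of the six link types named in the abstract, the sketch below encodes them as an enumeration and guesses a write rule for the product copy during the evolution phase; the precise semantics are defined in the dissertation, not here.

```python
# Hypothetical sketch of the six link types named in the abstract and of a
# rule deciding whether the product copy may be modified during the
# "evolution" phase. The rules below are a plausible reading of the names
# only, not the semantics defined in the dissertation.
from enum import Enum, auto

class LinkType(Enum):
    READ_ONLY = auto()           # product copy may only be read
    ISOLATED = auto()
    SNAPSHOT = auto()            # "flagrante": frozen image of the original
    MUTUALLY_EXCLUSIVE = auto()  # only one side may change at a time
    INDEPENDENT = auto()
    ONLINE = auto()              # changes propagate immediately

def product_may_write(link: LinkType, original_locked: bool) -> bool:
    if link in (LinkType.READ_ONLY, LinkType.SNAPSHOT):
        return False
    if link is LinkType.MUTUALLY_EXCLUSIVE:
        return not original_locked    # only if the original side is idle
    return True                       # isolated, independent, on-line

print(product_may_write(LinkType.MUTUALLY_EXCLUSIVE, original_locked=True))  # False
```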
677

Evaluating the user experience in mobile games using session recording tools / Utvärdering av användarupplevelsen av mobilspel med hjälp av sessionsinspelningsverktyg

Börjesson, Veronica, Jonsson, Karolin January 2015 (has links)
This thesis examines how the user experience of mobile games can be evaluated with the use of session recording tools. The thesis project was carried out at the mobile games development company MAG Interactive, and the aim was to produce a workflow for the company with guidelines on how to conduct user testing with session recording tools for mobile devices. In order to evaluate the tools and services, and to develop the workflow, several user tests were conducted. When using mobile session recording tools, it is possible to record the screen of the device, the microphone input and, in some tools, also the front camera input while the user is playing the game. Recording the test session makes it easier to understand and evaluate the player experience of the game and to identify usability issues. The thesis also covers the other parts necessary when conducting user testing besides the session recording tool itself: test setup (instructions, tasks, etc.), integration, distribution of the test and the application, and analysis of the recorded test session.
678

The Use of Patterns in Information System Engineering

Backlund, Per January 2001 (has links)
The aims of this dissertation are to investigate the use and usefulness of patterns in Information Systems Engineering and to identify future areas of research. In order to do this, there is a need to survey different types of patterns and find a common concept of patterns. A pattern is based on experience found in the real world. A text, a model, or a combination of both can describe a pattern. A pattern is typically described in terms of context, forces, problem, and solution. These can be explicitly expressed or implicitly found in the description of the pattern. The types of patterns dealt with are: object-oriented patterns; design patterns; analysis patterns; data model patterns; domain patterns; business patterns; workflow patterns; and the deontic pattern. The different types of patterns are presented using the authors' own terminology. The patterns described in the survey are classified with respect to different aspects. The intention of this analysis is to form a taxonomy for patterns and to bring order to the vast number of patterns. This is an important step towards finding out how patterns are used and can be used in Information Systems Engineering. The aspects used in the classification are: level of abstraction; text or model emphasis; product or process emphasis; life cycle stage usage; and combinations of these aspects. Finally, an outline of future areas of research is presented. The areas considered of interest are: patterns and Information Systems Engineering methods; patterns and tools (tool support for patterns); patterns as a pedagogical aid; the extraction and documentation of patterns; and patterns and novel applications of information technology. Each future area of research is sketched out.
679

From machine learning to learning with machines: remodeling the knowledge discovery process

Tuovinen, L. (Lauri) 19 August 2014 (has links)
Abstract Knowledge discovery (KD) technology is used to extract knowledge from large quantities of digital data in an automated fashion. The established process model represents the KD process in a linear and technology-centered manner, as a sequence of transformations that refine raw data into more and more abstract and distilled representations. Any actual KD process, however, has aspects that are not adequately covered by this model. In particular, some of the most important actors in the process are not technological but human, and the operations associated with these actors are interactive rather than sequential in nature. This thesis proposes an augmentation of the established model that addresses this neglected dimension of the KD process. The proposed process model is composed of three sub-models: a data model, a workflow model, and an architectural model. Each sub-model views the KD process from a different angle: the data model examines the process from the perspective of different states of data and transformations that convert data from one state to another, the workflow model describes the actors of the process and the interactions between them, and the architectural model guides the design of software for the execution of the process. For each of the sub-models, the thesis first defines a set of requirements, then presents the solution designed to satisfy the requirements, and finally, re-examines the requirements to show how they are accounted for by the solution. The principal contribution of the thesis is a broader perspective on the KD process than what is currently the mainstream view. The augmented KD process model proposed by the thesis makes use of the established model, but expands it by gathering data management and knowledge representation, KD workflow and software architecture under a single unified model. Furthermore, the proposed model considers issues that are usually either overlooked or treated as separate from the KD process, such as the philosophical aspect of KD. The thesis also discusses a number of technical solutions to individual sub-problems of the KD process, including two software frameworks and four case-study applications that serve as concrete implementations and illustrations of several key features of the proposed process model. / Tiivistelmä Tiedonlouhintateknologialla etsitään automoidusti tietoa suurista määristä digitaalista dataa. Vakiintunut prosessimalli kuvaa tiedonlouhintaprosessia lineaarisesti ja teknologiakeskeisesti sarjana muunnoksia, jotka jalostavat raakadataa yhä abstraktimpiin ja tiivistetympiin esitysmuotoihin. Todellisissa tiedonlouhintaprosesseissa on kuitenkin aina osa-alueita, joita tällainen malli ei kata riittävän hyvin. Erityisesti on huomattava, että eräät prosessin tärkeimmistä toimijoista ovat ihmisiä, eivät teknologiaa, ja että heidän toimintansa prosessissa on luonteeltaan vuorovaikutteista eikä sarjallista. Tässä väitöskirjassa ehdotetaan vakiintuneen mallin täydentämistä siten, että tämä tiedonlouhintaprosessin laiminlyöty ulottuvuus otetaan huomioon. Ehdotettu prosessimalli koostuu kolmesta osamallista, jotka ovat tietomalli, työnkulkumalli ja arkkitehtuurimalli. Kukin osamalli tarkastelee tiedonlouhintaprosessia eri näkökulmasta: tietomallin näkökulma käsittää tiedon eri olomuodot sekä muunnokset olomuotojen välillä, työnkulkumalli kuvaa prosessin toimijat sekä niiden väliset vuorovaikutukset, ja arkkitehtuurimalli ohjaa prosessin suorittamista tukevien ohjelmistojen suunnittelua. 
Väitöskirjassa määritellään aluksi kullekin osamallille joukko vaatimuksia, minkä jälkeen esitetään vaatimusten täyttämiseksi suunniteltu ratkaisu. Lopuksi palataan tarkastelemaan vaatimuksia ja osoitetaan, kuinka ne on otettu ratkaisussa huomioon. Väitöskirjan pääasiallinen kontribuutio on se, että se avaa tiedonlouhintaprosessiin valtavirran käsityksiä laajemman tarkastelukulman. Väitöskirjan sisältämä täydennetty prosessimalli hyödyntää vakiintunutta mallia, mutta laajentaa sitä kokoamalla tiedonhallinnan ja tietämyksen esittämisen, tiedon louhinnan työnkulun sekä ohjelmistoarkkitehtuurin osatekijöiksi yhdistettyyn malliin. Lisäksi malli kattaa aiheita, joita tavallisesti ei oteta huomioon tai joiden ei katsota kuuluvan osaksi tiedonlouhintaprosessia; tällaisia ovat esimerkiksi tiedon louhintaan liittyvät filosofiset kysymykset. Väitöskirjassa käsitellään myös kahta ohjelmistokehystä ja neljää tapaustutkimuksena esiteltävää sovellusta, jotka edustavat teknisiä ratkaisuja eräisiin yksittäisiin tiedonlouhintaprosessin osaongelmiin. Kehykset ja sovellukset toteuttavat ja havainnollistavat useita ehdotetun prosessimallin merkittävimpiä ominaisuuksia.
680

Corporate publishing in South African banks : focus on formal, external publications

Mostert, Aleta 06 December 2004 (has links)
“What constitutes corporate publishing?” is the question that motivated the research for this study. It is not easily defined, but can be contextualised as part of the communications and marketing strategy of an organisation. In essence it entails the conceptualisation, planning and realisation of professional publications in an organisation. By conducting interviews with publishing personnel in selected South African banks, best practices pertaining to corporate publishing structures and processes were derived. It was found that traditional book publishing activities, such as commissioning; planning and creating content; reviewing, copy-editing and proofreading; design and layout; production; marketing; printing; and distribution can be used as a basis for a corporate publishing venture. The convergence of media, however, is challenging publishers to rethink traditional methods of publishing. Electronic publishing is opening new vistas for organisations as it is an efficient tool for them to build and strengthen their corporate identity and to reach wider markets. To accommodate electronic dissemination, the adoption of an integrated, parallel publishing workflow is proposed in the study. Utilising a single source document for creating multiple formats enhances the publishing process and ensures the longevity of information. In order to draw all the publishing activities in an organisation together in a consistent and cohesive way, a centralised publishing strategy seems to be the most effective solution. The golden thread running through this study is the important role of corporate publishers as service providers in information-rich organisations. / Dissertation (MA (Publishing))--University of Pretoria, 2005. / Information Science / unrestricted
