191

Modelo e framework para o desenvolvimento de ferramentas analíticas de apoio ao ensino, aprendizagem e gestão educacional

Rosales, Gislaine Cristina Micheloti, 04 September 2014
The use of new information and communication technologies in education, going beyond traditional Learning Management Systems (LMS), has generated a growing volume of data, making the analysis of that data to support decision making in teaching, learning, and management challenging and complex. Despite high expectations for data analysis in education, current research in the area focuses mostly on student data and on learning processes and behaviors, even when the goal of the research is to improve teaching or institutional-level actions. To facilitate and extend data analysis in teaching, learning, and management for different stakeholders, this thesis presents a conceptual model to guide the construction of context-aware educational analytics applications that support educational decision making at the micro, meso, and macro levels. The conceptual model proposes the decentralized collection of educational data from multiple, heterogeneous sources using logical and physical sensors, and supports three levels of analysis over the collected data: descriptive, predictive, and prescriptive. The model was implemented as an open, extensible, and reusable framework architecture that offers a simpler, unified path both for acquiring user behavior in online learning and for modeling and analyzing the collected contexts.
To validate the proposed conceptual model, three applications were developed: ViTrackeR, which supports self-regulated learning by providing visualization of tracking data and personalized recommendations; ViMonitor, which supports teaching and academic-management teams in real time by providing key information on students and tutors; and ViAssess, which supports secure online assessments. The conceptual model was evaluated and validated in a real environment with students, tutors, teachers, and administrators. The framework was rated by developers of educational analytics tools and by expert researchers in the field, with very positive results. The evaluation results indicate that the proposed conceptual model supports the development of educational analytics applications at the three decision-making levels (micro, meso, and macro) and at the three analysis levels provided (descriptive, predictive, and prescriptive).
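The three analysis levels named above can be illustrated with a toy sketch. This is not the thesis's framework: the quiz scores and the passing threshold are invented, and each level is reduced to a single function.

```python
from statistics import mean

# Hypothetical weekly quiz scores collected for one student (0-100).
scores = [55, 60, 58, 65, 70]

def descriptive(scores):
    """Descriptive level: summarize what has happened."""
    return {"mean": mean(scores), "min": min(scores), "max": max(scores)}

def predictive(scores):
    """Predictive level: extrapolate the next score with a least-squares line."""
    n = len(scores)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(scores)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, scores)) \
            / sum((x - x_bar) ** 2 for x in xs)
    return y_bar + slope * (n - x_bar)  # value of the fitted line at x = n

def prescriptive(predicted, passing=60):
    """Prescriptive level: recommend an action based on the prediction."""
    return "no intervention needed" if predicted >= passing else "recommend tutoring"
```

For the sample scores, `descriptive` summarizes the series, `predictive` projects the trend forward one step, and `prescriptive` turns that projection into an action.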
192

A Model for Action Prediction and Risk Situation Inference in Context-Aware Environments

Fabro Neto, Alfredo Del, 31 July 2015
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The availability of low-cost sensors and mobile devices has enabled many advances in ubiquitous and pervasive computing research. By capturing the contextual data provided by the sensors attached to these devices, it is possible to obtain information about the user's state and the environment, and thus to map the relationship between them. One way to map these relationships is through the activities performed by the user, which are themselves part of the context. However, even though human activities can cause injuries, there is little academic discussion of how ubiquitous computing could assess the risk related to them. In this light, the Activity Project aims to determine the risk situations related to activities performed by people in a context-aware environment, through a middleware that considers both the risk of the actions that compose an activity and the user's performance while carrying it out. This thesis specifies the Activity Manager layer of the middleware proposed for the Activity Project, whose goal is to address the prediction of actions and activities and the detection of risk situations in the actions performed by a user. The model developed for the composition and prediction of activities is based on Activity Theory, while the risk of actions is determined by changes in the user's physiological context caused by the user's own actions, modeled through the Hyperspace Analogous to Context model. In the tests conducted, the developed models outperformed existing proposals both for action prediction, with an accuracy of 78.69%, and for risk situation detection, with an accuracy of 98.94%, demonstrating the effectiveness of the proposed solution.
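As a rough illustration of the two capabilities described above, the sketch below pairs a first-order frequency model for action prediction with a threshold test on a physiological reading. The action log, heart-rate values, and tolerance are hypothetical, and this is far simpler than the Activity Theory-based model the thesis proposes.

```python
from collections import Counter, defaultdict

def train_predictor(action_log):
    """First-order model: count which action follows which."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(action_log, action_log[1:]):
        transitions[prev][nxt] += 1
    return transitions

def predict_next(transitions, current):
    """Predict the most frequent successor of the current action."""
    followers = transitions.get(current)
    return followers.most_common(1)[0][0] if followers else None

def risk_detected(heart_rate, baseline, tolerance=25):
    """Flag a risk situation when the physiological context deviates
    too far from the baseline expected for the current action."""
    return abs(heart_rate - baseline) > tolerance

# Hypothetical log of daily actions observed by the middleware.
log = ["wake", "shower", "breakfast", "wake", "shower", "breakfast"]
model = train_predictor(log)
```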
193

A tool for the automatic analysis of context-aware software product line feature diagrams

Paulo Alexandre da Silva Costa, 27 November 2012
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Software product lines (SPLs) are a way to maximize software reuse, since they provide mass software customization. Recently, SPLs have been used to support the development of context-aware applications, in which adaptability at runtime is an important requirement. In this case, SPLs are called context-aware software product lines (CASPLs). The success of a CASPL therefore depends on the modeling of its features and of the context relevant to it; in this work, that modeling is done with a feature diagram and a context diagram. However, a manual process for building and configuring these models can introduce several errors, such as replicated features, cycles, dead features, and false optionals, so consistency-verification techniques are needed. Consistency verification plays an important role in this application domain, since applications use context both to provide services and to self-adapt when necessary. Context-triggered adaptations may thus lead the application to an undesired state. Moreover, in some cases, whether a context-triggered adaptation leads to an undesired state can only be established at runtime, because the error depends on the current product configuration. Since such applications are subject to a large volume of contextual changes, manual verification is impracticable, so consistency verification should be automated so that a computational entity can perform these operations.
Given the limited automated support for these processes, the objective of this work is to automate them completely with a tool, called FixTure, that verifies the construction of feature models for CASPLs and the configuration of products from those models. The FixTure tool also supports the simulation of context situations during the lifecycle of a CASPL application, in order to identify inconsistencies that would occur at runtime.
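The kind of anomaly such a tool checks for can be sketched by brute-force enumeration over a tiny feature model. The features and the deliberately contradictory constraints below are invented for illustration; a real tool would use a constraint solver rather than enumerate every configuration.

```python
from itertools import product

features = ["gps", "wifi", "offline_maps"]

# Cross-tree constraints as predicates over a configuration; this pair is
# deliberately contradictory, so "offline_maps" becomes a dead feature.
constraints = [
    lambda c: not c["offline_maps"] or c["gps"],     # offline_maps requires gps
    lambda c: not (c["offline_maps"] and c["gps"]),  # offline_maps excludes gps
]

def valid_configs(features, constraints):
    """Enumerate all feature selections satisfying every constraint."""
    for bits in product([False, True], repeat=len(features)):
        config = dict(zip(features, bits))
        if all(rule(config) for rule in constraints):
            yield config

def dead_features(features, constraints):
    """A feature selected in no valid configuration is dead."""
    configs = list(valid_configs(features, constraints))
    return [f for f in features if not any(c[f] for c in configs)]
```

Here `dead_features` reports `offline_maps`, since no valid configuration can ever select it.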
194

REFlex: rule engine for flexible processes

Silva, Natália Cabral, 31 January 2014
Declarative business process modeling is a flexible approach to business process management in which participants can decide the order in which activities are performed. Business rules are employed to determine the restrictions and obligations that must be satisfied during execution; such rules describe what must or must not be done during process execution, but do not prescribe how. In this way, complex control flows are simplified and participants have more flexibility to handle unpredicted situations. The methods and tools currently available to model and execute declarative processes have several limitations that impair their use. In particular, the well-known approach that employs Linear Temporal Logic (LTL) suffers from state-space explosion as the size of the process model grows.
Although memory-efficient approaches have been proposed in the literature, they cannot properly guarantee the correct termination of the process, since they allow the user to reach forbidden or deadlock states. Moreover, current implementations of declarative business process engines focus only on manual activities; automatic communication with external applications to exchange data and reuse functionality is barely supported. Such automation opportunities could be better exploited by a declarative engine that integrates with existing service-oriented computing (SOC) technologies. This work proposes a novel graph-based rule engine called REFlex that does not share the problems of other engines and is therefore better suited to modeling declarative business processes. In addition, the engine fills the gap between declarative processes and SOC: the REFlex orchestrator is an efficient, data-aware declarative Web service orchestrator that enables participants to call external Web services to perform automated tasks. Unlike related work, the REFlex algorithm does not depend on generating all reachable states, which makes it well suited to large and complex business processes. Moreover, REFlex supports data-dependent business rules, which brings context awareness and modeling power to the declarative paradigm.
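Declarative rules of the kind described, which constrain what may happen without prescribing an order, can be sketched as predicates over an execution trace. The rule names (`precedence`, `existence`) follow common declarative-process terminology; the activities and rule set are hypothetical, and REFlex's graph-based algorithm is not reproduced here.

```python
def precedence(a, b):
    """b may occur only after at least one a has occurred."""
    def check(trace):
        seen_a = False
        for act in trace:
            if act == a:
                seen_a = True
            elif act == b and not seen_a:
                return False
        return True
    return check

def existence(a):
    """a must occur at least once before the process completes."""
    return lambda trace: a in trace

# Hypothetical rule set for an order-handling process.
rules = [precedence("approve", "ship"), existence("invoice")]

def compliant(trace, rules):
    """A trace satisfies the process model iff it satisfies every rule."""
    return all(rule(trace) for rule in rules)
```

Note that the rules say nothing about where `invoice` occurs relative to the other activities: any ordering that respects the constraints is allowed, which is the flexibility the declarative style provides.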
195

A Smart-Dashboard : Augmenting safe & smooth driving

Akhlaq, Muhammad, January 2010
Annually, road accidents cause more than 1.2 million deaths, 50 million injuries, and US$518 billion in economic cost globally. About 90% of accidents are due to human errors such as poor awareness, distraction, drowsiness, inadequate training, and fatigue. These errors can be reduced by an advanced driver assistance system (ADAS), which actively monitors the driving environment and alerts the driver to forthcoming danger; examples include adaptive cruise control, blind-spot detection, parking assistance, forward collision warning, lane departure warning, driver drowsiness detection, and traffic sign recognition. Unfortunately, such systems ship only with modern luxury cars, because the numerous sensors they employ make them very expensive. Camera-based ADAS are therefore seen as an alternative: a camera costs much less, is widely available, can serve multiple applications, and can be integrated with other systems. Aiming to develop a camera-based ADAS, we performed an ethnographic study of drivers to find out what information about the surroundings could help drivers avoid accidents. Our study shows that information on the speed, distance, relative position, direction, and size and type of nearby vehicles and other objects would be useful for drivers and sufficient for implementing most ADAS functions. After considering available technologies such as radar, sonar, lidar, GPS, and video-based analysis, we conclude that video-based analysis is the most suitable technology, providing all the essential support required for implementing ADAS functions at very low cost. Finally, we propose a Smart-Dashboard system that combines a camera, a digital image processor, and a thin display into a smart system offering all advanced driver assistance functions.
A basic prototype demonstrating three of these functions was implemented in MATLAB to show that a full-fledged camera-based ADAS can be built.
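One of the ADAS functions mentioned, forward collision warning, follows directly from the speed and distance information the study identifies. A minimal sketch, with an assumed 3-second warning threshold (not a figure from the thesis):

```python
def time_to_collision(distance_m, own_speed_ms, lead_speed_ms):
    """Seconds until impact, assuming constant speeds; None if not closing in."""
    closing_speed = own_speed_ms - lead_speed_ms
    return distance_m / closing_speed if closing_speed > 0 else None

def collision_warning(distance_m, own_speed_ms, lead_speed_ms, threshold_s=3.0):
    """Warn when the time to collision drops below the threshold."""
    ttc = time_to_collision(distance_m, own_speed_ms, lead_speed_ms)
    return ttc is not None and ttc < threshold_s
```

A vehicle 20 m ahead while closing at 10 m/s gives a 2-second time to collision and triggers the warning; a lead vehicle pulling away never does.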
196

Context-aware media services delivery in heterogeneous environments for future media networks

Ait Chellouche, Soraya, 09 December 2011
Users' willingness to consume media services, along with the compelling proliferation of mobile devices interconnected via multiple wired and wireless networking technologies, places high requirements on the Future Internet.
It is a common belief today that the Internet should evolve toward providing end users with ubiquitous, high-quality media services in a scalable, reliable, efficient, and interoperable way. However, enabling such seamless media delivery raises a number of challenges. On one hand, services should be more context-aware so they can be delivered to a large and disparate set of computational contexts. On the other hand, current Internet media delivery infrastructures need to scale to meet the continuously growing number of users while keeping quality at a satisfying level. In this context, we introduce a novel architecture enabling a collaborative framework for sharing and consuming media services within the Future Internet. The architecture comprises a number of environments and layers aimed at improving today's media delivery networks and systems toward a better user experience. In this thesis, we are particularly interested in context-aware multimedia service provisioning that meets both users' expectations and needs and the exponentially growing demand these services experience. Two major challenges are therefore faced: (1) designing a context-awareness framework that allows adaptive multimedia service provisioning, and (2) enhancing the media delivery platform to support large-scale media services. The proposed solutions are built on the virtual layer of Home-Boxes (evolved residential gateways) introduced in the proposed architecture. First, to achieve context awareness, two types of frameworks are proposed, based on the two main models for context representation. The markup-scheme-based framework aims at lightweight context management to ensure responsiveness. The second framework uses ontologies and rules to model and manage context, allowing greater formality and better expressiveness, sharing, and reuse of context information.
However, ontologies are known to be complex and thus difficult to scale; the aim of our work is therefore to prove the feasibility of such a solution for multimedia service provisioning by distributing context management across the Home-Box layer. Concerning the enhancement of media delivery, the idea is to leverage the disk storage and upload capacity of the already deployed, participating Home-Boxes to improve the performance, scalability, and reliability of media delivery platforms at low cost. To this end, we address two issues commonly induced by content replication: (1) request redirection, for which we propose a two-level anycast-based server selection strategy consisting of a rule-based preliminary filtering that personalizes services according to the context of their consumption, followed by a second stage based on network metrics (server load and client-server delay); and (2) content placement and replacement in caches, for which we design an adaptive online popularity-based video caching strategy within the introduced Home-Box overlay.
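The two-stage server selection described above can be sketched as a context filter followed by a ranking on load and delay. The server records, the supported-format context attribute, and the equal weighting are all invented for illustration.

```python
# Hypothetical replica servers with their capabilities and network metrics.
servers = [
    {"name": "s1", "formats": {"mp4", "webm"}, "load": 0.9, "delay_ms": 20},
    {"name": "s2", "formats": {"mp4"},         "load": 0.3, "delay_ms": 40},
    {"name": "s3", "formats": {"webm"},        "load": 0.1, "delay_ms": 10},
]

def select_server(servers, client_format, w_load=0.5, w_delay=0.5, max_delay=100.0):
    # Stage 1: rule-based filtering on the client's context
    # (here, just the media format the client's device can play).
    candidates = [s for s in servers if client_format in s["formats"]]
    if not candidates:
        return None

    # Stage 2: rank candidates by a combined load/delay metric (lower is better).
    def score(s):
        return w_load * s["load"] + w_delay * s["delay_ms"] / max_delay

    return min(candidates, key=score)["name"]
```

An `mp4` client skips `s3` at the filtering stage and then picks the lightly loaded `s2` over the nearby but overloaded `s1`.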
197

Context discovery for the automatic adaptation of services in ambient intelligence

Benazzouz, Yazid, 26 August 2011
This thesis addresses the problem of the dynamic adaptation of services in ambient intelligence applications. A study of the literature shows that context-awareness plays a central role in the design and implementation of adaptive services. However, its use is still limited to elementary descriptions of situations and predefined models. To allow adaptation to changes in user habits, to the dynamics of the environment, and to heterogeneous sources of context, we propose mechanisms to discover the contexts and situations that trigger adaptation. These mechanisms rely on data-mining techniques and are integrated within an architecture for the automatic adaptation of services. This work was carried out and applied in ambient intelligence projects for assisting people, in particular the elderly, in their daily lives, notably within the ITEA-MIDAS project.
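Context discovery by data mining, as proposed here, can be caricatured with a frequent-itemset count over context snapshots: attribute combinations that recur often become candidate situations that may trigger adaptation. The sensor attributes and support threshold are hypothetical, and the thesis's actual mining techniques are not reproduced.

```python
from collections import Counter
from itertools import combinations

# Hypothetical context snapshots from home sensors (one set per observation).
observations = [
    {"evening", "tv_on", "sitting"},
    {"evening", "tv_on", "sitting"},
    {"morning", "kettle_on"},
    {"evening", "tv_on"},
]

def frequent_situations(observations, min_support=2, size=2):
    """Pairs of context attributes that co-occur at least min_support times
    become candidate 'situations' for triggering service adaptation."""
    counts = Counter()
    for obs in observations:
        for pair in combinations(sorted(obs), size):
            counts[pair] += 1
    return {pair for pair, n in counts.items() if n >= min_support}
```

On this toy data, `("evening", "tv_on")` is discovered as a recurring situation, while the single morning snapshot falls below the support threshold.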
198

Achieving Autonomic Web Service Compositions with Models at Runtime

Alférez Salinas, Germán Harvey, 26 December 2013
Over the last years, Web services have become increasingly popular. It is because they allow businesses to share data and business process (BP) logic through a programmatic interface across networks. In order to reach the full potential of Web services, they can be combined to achieve specifi c functionalities. Web services run in complex contexts where arising events may compromise the quality of the system (e.g. a sudden security attack). As a result, it is desirable to count on mechanisms to adapt Web service compositions (or simply called service compositions) according to problematic events in the context. Since critical systems may require prompt responses, manual adaptations are unfeasible in large and intricate service compositions. Thus, it is suitable to have autonomic mechanisms to guide their self-adaptation. One way to achieve this is by implementing variability constructs at the language level. However, this approach may become tedious, difficult to manage, and error-prone as the number of con figurations for the service composition grows. The goal of this thesis is to provide a model-driven framework to guide autonomic adjustments of context-aware service compositions. This framework spans over design time and runtime to face arising known and unknown context events (i.e., foreseen and unforeseen at design time) in the close and open worlds respectively. At design time, we propose a methodology for creating the models that guide autonomic changes. Since Service-Oriented Architecture (SOA) lacks support for systematic reuse of service operations, we represent service operations as Software Product Line (SPL) features in a variability model. As a result, our approach can support the construction of service composition families in mass production-environments. In order to reach optimum adaptations, the variability model and its possible con figurations are verifi ed at design time using Constraint Programming (CP). 
At runtime, when problematic events arise in the context, the variability model is leveraged to guide autonomic changes of the service composition. The activation and deactivation of features in the variability model result in changes to a composition model that abstracts the underlying service composition. Changes in the variability model are reflected in the service composition by adding or removing fragments of Web Services Business Process Execution Language (WS-BPEL) code, which are deployed at runtime. Model-driven strategies guide the safe migration of running service-composition instances. Under the closed-world assumption, the possible context events are fully known at design time; these events eventually trigger the dynamic adaptation of the service composition. Nevertheless, it is difficult to foresee all possible situations arising in the uncertain contexts where service compositions run. We therefore extend our framework to cover the dynamic evolution of service compositions to deal with unexpected events in the open world. If model adaptations cannot resolve the uncertainty, the supporting models self-evolve according to abstract tactics that preserve expected requirements. / Alférez Salinas, GH. (2013). Achieving Autonomic Web Service Compositions with Models at Runtime [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/34672 / TESIS
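The design-time verification step the abstract describes can be sketched as follows. This is a minimal illustration only: the feature names and constraints are hypothetical, and a brute-force enumeration stands in for the full Constraint Programming solver used in the thesis.

```python
from itertools import product

# Hypothetical service-operation features in a variability model.
FEATURES = ["payment", "fraud_check", "express_shipping", "standard_shipping"]

# Constraints over a configuration (a dict mapping feature -> selected?).
CONSTRAINTS = [
    ("payment is mandatory",         lambda c: c["payment"]),
    ("fraud_check requires payment", lambda c: not c["fraud_check"] or c["payment"]),
    ("exactly one shipping mode",    lambda c: c["express_shipping"] != c["standard_shipping"]),
]

def valid_configurations():
    """Enumerate every feature selection that satisfies all constraints."""
    for bits in product([False, True], repeat=len(FEATURES)):
        config = dict(zip(FEATURES, bits))
        if all(pred(config) for _, pred in CONSTRAINTS):
            yield config

configs = list(valid_configurations())
```

Each configuration that survives the check corresponds to one valid member of the service-composition family; a real CP solver would prune the search space instead of enumerating it exhaustively.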
199

Dynamic Management of Heterogeneous Context Sources in Globally Distributed Systems

Hamann, Thomas 05 December 2008 (has links)
As part of this dissertation, a middleware service was designed and implemented. It enables the dynamic management of heterogeneous context sources. The underlying component model of self-describing context providers allows the loose coupling of context sources and sinks. It is complemented by filter and converter components for generic provider selection based on domain-specific attributes. The distributed service instances are coupled through a hybrid peer-to-peer system. This accounts for the heterogeneity of end devices and allows the scalable, distributed management of context sources in global scenarios.
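The provider-selection idea from this abstract can be sketched as follows. The provider names and attribute keys are hypothetical; the point is only that self-describing providers expose attributes and a generic filter selects among them by domain-specific criteria.

```python
from dataclasses import dataclass, field

@dataclass
class ContextProvider:
    """A self-describing context provider (names and attributes invented)."""
    name: str
    attributes: dict = field(default_factory=dict)

def select_providers(providers, filters):
    """Generic selection: keep providers whose attributes match every filter."""
    return [p for p in providers
            if all(p.attributes.get(k) == v for k, v in filters.items())]

providers = [
    ContextProvider("gps",    {"type": "location", "accuracy": "high"}),
    ContextProvider("wifi",   {"type": "location", "accuracy": "low"}),
    ContextProvider("sensor", {"type": "temperature"}),
]

# A context sink asks for a high-accuracy location source, not a provider by name.
selected = select_providers(providers, {"type": "location", "accuracy": "high"})
```

Because sinks express only attribute requirements, providers can appear and disappear at runtime without the sinks being rewired, which is the loose coupling the component model aims for.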
200

Strategies for context reasoning in assistive livings for the elderly

Tiberghien, Thibaut 18 November 2013 (has links)
Leveraging our experience with the traditional approach to ambient assisted living (AAL), which relies on a large spread of heterogeneous technologies in deployments, this thesis studies the possibility of a more "stripped down" and complementary approach, where only a reduced hardware subset is deployed, probing a transfer of complexity towards the software side and enhancing the large-scale deployability of the solution.
Focused on the reasoning aspects of AAL systems, this work has led to a semantic inference engine suited to the particular demands of these systems, responding to a need in this scientific community. Considering the coarse granularity of the available situational data, dedicated rule sets with adapted inference strategies are proposed, implemented, and validated using this engine. A novel semantic reasoning mechanism is proposed, based on a cognitively inspired reasoning architecture. Finally, the whole reasoning system is integrated into a fully featured context-aware service framework, powering its context awareness by performing live event processing through complex ontological manipulation. The overall system is validated through in-situ deployments in a nursing home as well as in private homes over a period of a few months, which is itself notable in a mainly laboratory-bound research domain.
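The rule-based inference over coarse-grained situational data described above can be sketched as naive forward chaining over triples. The facts, the rule, and the derived conclusion are all hypothetical stand-ins; the thesis works with full ontological reasoning rather than this toy fact set.

```python
# Coarse-grained situational facts as (subject, predicate, object) triples.
facts = {
    ("resident", "located_in", "bathroom"),
    ("bathroom", "motion", "none"),
}

# A rule pairs a condition over the fact set with the fact it derives.
rules = [
    (lambda f: ("resident", "located_in", "bathroom") in f
               and ("bathroom", "motion", "none") in f,
     ("resident", "status", "possible_fall")),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition(facts) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

inferred = forward_chain(facts, rules)
```

A dedicated rule set of this shape, run by a semantic engine against live events, is what lets a reduced sensor deployment still raise meaningful situations such as a suspected fall.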
