  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Low-Level Haskell Code: Measurements and Optimization Techniques

Peixotto, David 06 September 2012 (has links)
Haskell is a lazy functional language with a strong static type system and excellent support for parallel programming. The language features of Haskell make it easier to write correct and maintainable programs, but execution speed often suffers from the high levels of abstraction. While much past research focuses on high-level optimizations that take advantage of the functional properties of Haskell, relatively little attention has been paid to the optimization opportunities in the low-level imperative code generated during translation to machine code. One problem with current low-level optimizations is that their effectiveness is limited by the obscured control flow caused by Haskell's high-level abstractions. My thesis is that trace-based optimization techniques can be used to improve the effectiveness of low-level optimizations for Haskell programs. I claim three unique contributions in this work. The first contribution is to expose some properties of low-level Haskell code by looking at the mix of operations performed by the selected benchmark codes and comparing them to the low-level code generated from traditional programming languages. The low-level measurements reveal that the control flow is obscured by indirect jumps caused by the implementation of lazy evaluation, higher-order functions, and the separately managed stacks used by Haskell programs. My second contribution is a study of the effectiveness of a dynamic binary trace-based optimizer running on Haskell programs. My results show that while viable program traces frequently occur in Haskell programs, the overhead associated with maintaining the traces in a dynamic optimization system outweighs the benefits we get from running the traces. To reduce the runtime overheads, I explore a way to find traces in a separate profiling step. My final contribution is to build and evaluate a static trace-based optimizer for Haskell programs. 
The static optimizer uses profiling data to find traces in a Haskell program and then restructures the code around the traces to increase the scope available to the low-level optimizer. My results show that we can successfully build traces in Haskell programs, and the optimized code yields a speedup over existing low-level optimizers of up to 86% with an average speedup of 5% across 32 benchmarks.
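The trace-forming step described above can be pictured as profile-guided hot-path selection: follow the most frequent successor edge from a starting block until the path goes cold or loops. The Python sketch below is illustrative only; the block names, profile format, and growth threshold are assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch of profile-guided trace selection. Block names,
# edge counts and the 70% growth heuristic are illustrative assumptions.

def grow_trace(start, edge_counts, min_ratio=0.7):
    """Grow a trace from `start` by repeatedly following the most
    frequent successor edge, as long as it carries at least
    `min_ratio` of the block's outgoing profile weight."""
    trace, seen = [start], {start}
    block = start
    while True:
        succs = edge_counts.get(block, {})
        total = sum(succs.values())
        if total == 0:
            break
        nxt, count = max(succs.items(), key=lambda kv: kv[1])
        if count / total < min_ratio or nxt in seen:
            break  # cold edge or loop back: stop the trace here
        trace.append(nxt)
        seen.add(nxt)
        block = nxt
    return trace

# Profile: A -> B -> D is hot, with a cold side exit toward C.
edges = {"A": {"B": 95, "C": 5}, "B": {"D": 90, "C": 10}, "D": {}}
print(grow_trace("A", edges))  # -> ['A', 'B', 'D']
```

Restructuring the code around such a trace then gives the low-level optimizer a longer straight-line region to work on.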
2

EnergyBox: A Trace-driven Tool for Data Transmission Energy Consumption Studies

Vergara Alonso, Ekhiotz Jon, Nadjm-Tehrani, Simin January 2013 (has links)
Although evolving mobile technologies bring millions of users closer to the vision of information anywhere-anytime, device battery depletion hampers the quality of experience to a great extent. We argue that the design of energy-efficient solutions starts with energy awareness, and propose EnergyBox, a tool that provides accurate and repeatable energy consumption studies for 3G and WiFi transmissions at the user end. We recognize that the energy consumption of data transmission is highly dependent on the traffic pattern, and provide the means for trace-based iterative packet-driven simulation to derive the operation states of wireless interfaces. The strength of EnergyBox is that it allows users to modularly set the 3G network parameters specified at the operator level, the adaptive power save mode mechanism for a WiFi device, and the power levels of the operation states for different handheld devices. EnergyBox enables efficient energy consumption studies using real data, which complements laborious device-dependent physical power measurements. Using real application transmission traces, we have validated EnergyBox, showing an accuracy range of 94-99% for 3G and 93-99% for WiFi compared to the energy consumption measured on a 3G modem and a smartphone with WiFi.
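The packet-driven derivation of operation states can be pictured as a timer-driven state machine replayed over packet timestamps. In the minimal Python sketch below, the states (DCH, FACH, IDLE), the inactivity-timer values, and the power levels are assumed example parameters, not EnergyBox's actual operator or device settings.

```python
# A minimal, illustrative 3G state-machine energy estimator. The timer
# values (t_dch, t_fach) and power levels (p_*) are assumed examples.

def energy_3g(packet_times, t_dch=5.0, t_fach=12.0,
              p_dch=0.6, p_fach=0.4, p_idle=0.0):
    """Replay packet timestamps (seconds): each packet promotes the
    radio to DCH; inactivity demotes DCH->FACH after t_dch seconds and
    FACH->IDLE after a further t_fach. Returns tail energy in joules."""
    energy, prev = 0.0, None
    for t in packet_times:
        if prev is not None:
            gap = t - prev
            dch = min(gap, t_dch)                      # tail spent in DCH
            fach = min(max(gap - t_dch, 0.0), t_fach)  # then FACH tail
            idle = max(gap - t_dch - t_fach, 0.0)      # then IDLE
            energy += dch * p_dch + fach * p_fach + idle * p_idle
        prev = t
    return energy

# Two packets 30 s apart: full DCH tail (5 s) + full FACH tail (12 s) + idle.
print(round(energy_3g([0.0, 30.0]), 2))  # -> 7.8
```

Swapping in a different operator's timer settings or a device's measured power levels is then just a matter of changing the parameters, which is the modularity the abstract emphasizes.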
3

Modeling and exploitation of interaction traces in the collaborative working environment

Li, Qiang 09 July 2013 (has links)
Human science and social progress cannot continue without collaboration. With the rapid development of information technologies and the popularity of smart devices, collaborative work is much simpler and more common than ever. People can work together irrespective of their geographical location or time limitations. In recent years, Web-based Collaborative Working Environments (CWE) have been designed and devoted to supporting both individual and group work in various areas: research, business, learning, etc. Any activity in an information system produces a set of traces. 
In a collaborative working context, such traces may be very voluminous and heterogeneous. For a typical Web-based Collaborative Working Environment, traces are mainly produced by collaborative activities or interactions and can be recorded. The modeled traces represent not only knowledge but also experience concerning the interactions among the actors or between actors and the system. With the increasing complexity of group structure and frequent collaboration needs, the existing interactions become more difficult to grasp and to analyze. For their future work, people often need to retrieve information from their previous collaborative activities. This thesis focuses on defining, modeling and exploiting the various traces in the context of CWE and, in particular, Collaborative Traces (CTs) in the group shared/collaborative workspace. A model of collaborative traces that can efficiently enrich group experience and assist group collaboration is proposed and detailed. In addition, we introduce and define a type of complex filter as a possible means to exploit the traces. Several basic scenarios of collaborative trace exploitation are presented, describing their effects and advantages in CWE. Furthermore, a general trace exploitation framework is introduced and implemented in a CWE. Three possible trace-based collaborative approaches are discussed with comprehensive examples: SWOT Analysis, Capability Maturity Model Integration (CMMI) and a Group Recommendation System. As a practical experience, we tested our model in the context of the E-MEMORAe2.0 collaborative platform. Practical cases show that our proposed CT model and the exploitation framework for CWE can facilitate both personal and group work. This approach can be applied as a generic way of addressing different types of collaboration and trace issues in CWE.
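A collaborative-trace entry and a simple composite filter over it can be sketched as follows. The field names and the field=value filter grammar are assumptions for illustration; the thesis's CT model and complex filters are richer than this.

```python
# Illustrative sketch of a collaborative-trace record and a composite
# filter. Field names and the filter grammar are assumed, not the
# thesis's actual CT model.

from dataclasses import dataclass

@dataclass(frozen=True)
class TraceEntry:
    actor: str    # who acted
    action: str   # what they did
    target: str   # shared resource acted on
    group: str    # collaborative workspace / group

def complex_filter(trace, **criteria):
    """Keep entries matching every given field=value criterion."""
    return [e for e in trace
            if all(getattr(e, k) == v for k, v in criteria.items())]

log = [
    TraceEntry("alice", "annotate", "report.doc", "g1"),
    TraceEntry("bob", "edit", "report.doc", "g1"),
    TraceEntry("alice", "edit", "slides.ppt", "g2"),
]
# All activity on report.doc inside workspace g1:
print(len(complex_filter(log, group="g1", target="report.doc")))  # -> 2
```

Composing such filters is one way a group recommendation scenario could retrieve only the shared-workspace interactions relevant to the current task.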
4

Trace-based multi-criteria decision making in interactive applications for adaptive execution

Ho, Hoang Nam 04 December 2015 (has links)
Our work deals with software architectures for adaptive interactive applications. 
We assume that an application is structured with contextual interaction sequences called situations. Users perform actions in successive situations to reach one or more objectives predefined by the designer. During execution, it can happen that the user cannot follow the designer's logic because of system blockings or missing data. Our challenge is to propose a method that chooses the most appropriate next situation given the current one. We propose to improve the decision-making process by using the traces generated during previous executions. These traces represent users' interactions and system activity logs. A trace-based system collects and manages all the traces (logs) generated by users. Our main contributions are: the design of a trace-based algorithm for weighting decision criteria; the design of an algorithm for determining alternatives; the design and formalisation of the user's choice (using trace-based subjective logic) and the system's choice (using trace-based PROMETHEE II) to rank the identified alternatives; and the aggregation of the different choices to suggest a final option for the user to follow. A Tamagotchi case study is presented to validate our contributions.
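PROMETHEE II, which the system's choice relies on above, ranks alternatives by net outranking flow. The Python sketch below is a bare-bones illustration: the preference function is simplified to a strict 0/1 comparison, and the criteria weights, which in the thesis would be derived from traces, are given directly.

```python
# Compact PROMETHEE II net-flow ranking. The 0/1 preference function is
# a simplification; weights stand in for the trace-derived ones.

def promethee2(alternatives, weights):
    """alternatives: {name: [criterion scores]}, higher is better.
    Returns names sorted by decreasing net outranking flow (phi)."""
    names = list(alternatives)
    n = len(names)

    def pref(a, b):  # weighted share of criteria where a beats b
        return sum(w for w, x, y in
                   zip(weights, alternatives[a], alternatives[b]) if x > y)

    phi = {a: sum(pref(a, b) - pref(b, a) for b in names if b != a) / (n - 1)
           for a in names}
    return sorted(names, key=lambda a: -phi[a])

# Three candidate situations scored on two criteria.
alts = {"s1": [3, 5], "s2": [4, 2], "s3": [1, 1]}
print(promethee2(alts, weights=[0.3, 0.7]))  # -> ['s1', 's2', 's3']
```

The top-ranked alternative would then be aggregated with the user's subjective-logic choice before a final suggestion is made.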
5

Organizational agent-based approach for the synthesis and reuse of knowledge in collaborative engineering

Darwich Akoum, Hind 10 October 2014 (has links)
It is well known in enterprises that each new study to be carried out is usually similar to certain previous studies. Such studies can therefore be structured according to a common reference process depending on their type. In this dissertation, we call this reference process, which captures the enterprise's good practices, the "expertise business process". The main difficulty lies in the formalization of the expertise business process. Traditional knowledge capitalization approaches based on experts' debriefings have shown their limits: the experts often leave out details which may be relevant, because the debriefings are usually carried out outside the actual activity. Our thesis relies on the idea that it is possible to construct the operational process implemented during the collaborative activities of a product development study from the traces recorded by the IT tools used. The constructed operational process allows business actors and experts to step back from their actual work and to formalize and enrich the enterprise's expertise business processes. Our work took place in the ERPI (Équipe de Recherche sur les Processus Innovatifs) laboratory of the Université de Lorraine, in partnership with the company TDC Software under a CIFRE agreement. 
This dissertation offers five key contributions: • A double capitalization cycle for instrumented activities. • A global approach for the management of expertise business processes. • An ontology, OntoProcess, modeling generic organizational aspects (clearly separating the concepts related to traces from those related to business processes) with business extensions specific to the tool used. • A multi-agent system based on the OntoProcess ontology that supports the global approach to expertise business process management. • A trace-based system that allows the construction of the operational process from the traces recorded during a study.
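One elementary way to construct an operational process from recorded tool traces is to mine a directly-follows graph over the activities of each study session. The sketch below is a hypothetical illustration of that step, not the dissertation's trace-based system; the activity names are invented.

```python
# Hypothetical sketch: mine a directly-follows graph from per-session
# activity traces recorded by an IT tool. Activity names are invented.

from collections import defaultdict

def directly_follows(sessions):
    """sessions: list of ordered activity lists from recorded traces.
    Returns {(a, b): count} of how often b directly follows a."""
    graph = defaultdict(int)
    for acts in sessions:
        for a, b in zip(acts, acts[1:]):
            graph[(a, b)] += 1
    return dict(graph)

logs = [["open", "edit", "review", "save"],
        ["open", "edit", "save"]]
g = directly_follows(logs)
print(g[("open", "edit")])  # -> 2
```

Experts reviewing such a graph could spot the real ordering of activities, including steps a debriefing would have omitted, and fold them back into the expertise business process.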
6

Exploiting Energy Awareness in Mobile Communication

Vergara Alonso, Ekhiotz Jon January 2013 (has links)
Although evolving mobile technologies bring millions of users closer to the vision of information anywhere-anytime, device battery depletion hampers the quality of experience to a great extent. The massive explosion of mobile applications, with the ensuing data exchange over the cellular infrastructure, is not only a blessing to the mobile user but also has a price in terms of rapid discharge of the device battery. Wireless communication is a large contributor to the energy consumption. Thus, the current call for energy economy in mobile devices poses the challenge of reducing the energy consumption of wireless data transmissions at the user end by developing energy-efficient communication. This thesis addresses the energy efficiency of data transmission at the user end in the context of cellular networks. We argue that the design of energy-efficient solutions starts with energy awareness, and propose EnergyBox, a parametrised tool that enables accurate and repeatable energy quantification at the user end using real data traffic traces as input. EnergyBox abstracts the underlying operation states of the wireless interfaces and allows estimation of the energy consumption for different operator settings and device characteristics. Next, we devise an energy-efficient algorithm that schedules the packet transmissions at the user end based on knowledge of the network parameters that impact the handset energy consumption. The solution focuses on the characteristics of a given traffic class with the lowest quality of service requirements. The cost of running the solution itself is studied, showing that the proposed cross-layer scheduler uses a small amount of energy to significantly extend the battery lifetime, at the cost of some added latency.  
Finally, the benefit of employing EnergyBox to systematically study the different design choices that developers face with respect to data transmissions of applications is shown in the context of location sharing services and instant messaging applications. The results show that quantifying energy consumption of communication patterns, protocols, and data formats can aid the design of tailor-made solutions with a significantly smaller energy footprint.
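The effect of such a scheduler can be pictured as batching delay-tolerant packets so that the radio pays one tail period instead of many. The following Python sketch is a simplified illustration under an assumed per-packet deadline model, not the thesis's actual cross-layer algorithm.

```python
# Simplified batching scheduler: hold delay-tolerant packets and release
# each batch at its deadline. The max_delay deadline model is assumed.

def batch_schedule(arrivals, max_delay):
    """arrivals: sorted packet arrival times (s). Each packet tolerates
    up to max_delay seconds of delay. Returns one send instant per batch."""
    sends = []
    batch_deadline = None
    for t in arrivals:
        if batch_deadline is None or t > batch_deadline:
            if batch_deadline is not None:
                sends.append(batch_deadline)  # flush the previous batch
            batch_deadline = t + max_delay    # open a new batch
    sends.append(batch_deadline)
    return sends

# Packets at 0, 1, 2 s share one batch; the packet at 10 s needs its own:
print(batch_schedule([0.0, 1.0, 2.0, 10.0], max_delay=3.0))  # -> [3.0, 13.0]
```

Two transmissions instead of four means two radio tail periods instead of four, which is where the energy saving described above comes from, at the cost of the added latency.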
7

Trace-based reasoning for user assistance and recommendations

Zarka, Raafat 04 December 2013 (has links)
In the field of digital environments, a particular challenge is to build systems that enable users to share and reuse their experiences. In this thesis, we are interested in the general problem of contextual recommendations for web applications in a particular context: complex tasks, large amounts of data, various types of users (from novice to professional), etc. We focus on providing user assistance that takes into account the context and the dynamics of users' tasks. We seek to provide dynamic recommendations that are enriched by new experiences over time. To provide these dynamic recommendations, we make use of Trace-Based Reasoning (TBR), a recent artificial intelligence paradigm that draws its inspiration from Case-Based Reasoning. In TBR, interaction traces act as an important knowledge container. They help to understand users' behaviors and their activities, and therefore reflect the context of the activity. 
Traces can thus feed an experience-based assistant with adequate and appropriate knowledge. In this thesis, we present a state of the art of dynamic assistance systems and the general concepts of trace-based systems. In order to provide experience-based assistance, we make several contributions. First, we propose a formal representation of modeled traces and a description of the processes involved in their manipulation. Notably, we define a method for computing similarity measures for comparing modeled traces. These proposals have been implemented in a framework named TStore for the storage, transformation, management and reuse of modeled traces. Next, we describe a trace replay mechanism enabling users to go back to a particular state of the application; this mechanism supports the propagation of the impact of changes during the replay process. Last, we define a recommendation approach based on interaction traces. The recommendation engine is fed by the interaction traces left by previous users of the application and stored in a manager such as TStore. This approach facilitates knowledge sharing between communities of users and relies, among other things, on the similarity measures mentioned above. We have validated our theoretical contributions on two different web applications: SAP BusinessObjects Explorer for data reporting and Wanaclip for generating video clips. The trace replay mechanism is demonstrated in SAP BusinessObjects Explorer. Trace-based recommendations are illustrated with Wanaclip, guiding users both in video selection and in the actions to perform to make quality video clips. In the last part of this manuscript, we measure the performance of TStore and the quality of the recommendations and similarity measures it implements. We also discuss the results of a survey answered by Wanaclip users to measure their satisfaction. Our evaluations show that our approach offers satisfactory recommendations and good response times.
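One common family of similarity measures for comparing modeled traces is a normalized edit distance over their sequences of observed elements. The Python sketch below illustrates that idea only; the action names are invented and TStore's actual measures may differ.

```python
# Possible trace-similarity measure: 1 - normalized edit distance over
# observed-element sequences. Action names below are invented examples.

def trace_similarity(t1, t2):
    """1.0 for identical traces, 0.0 for completely different ones."""
    m, n = len(t1), len(t2)
    # d[i][j] = edit distance between t1[:i] and t2[:j]
    d = [[i + j if i * j == 0 else 0 for j in range(n + 1)]
         for i in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,          # delete
                          d[i][j - 1] + 1,          # insert
                          d[i - 1][j - 1] + (t1[i - 1] != t2[j - 1]))
    return 1.0 - d[m][n] / max(m, n, 1)

a = ["search", "select", "add_clip", "publish"]
b = ["search", "select", "preview", "publish"]
print(trace_similarity(a, b))  # -> 0.75
```

A recommendation engine can then rank stored traces by similarity to the current user's trace and suggest the next actions taken in the closest matches.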
8

Network Emulation, Pattern Based Traffic Shaping and KauNET Evaluation

Awan, Zafar Iqbal, Azim, Abdul January 2008 (has links)
Quality of Service is a major factor for a successful business in modern and future network services. Assuring a minimum level of service underpins the Quality of Experience of modern real-time communication, that is, user satisfaction with the perceived service quality. Traffic engineering can be applied to provide better services, maintaining or enhancing user satisfaction through reactive and preventive traffic control mechanisms. Preventive traffic control can be more effective at managing network resources, using admission control, scheduling, policing and traffic shaping to maintain a minimum service level before it degrades and affects user perception. Accuracy, dynamicity, uniformity and reproducibility are the objectives of much research on network traffic, and real-time tests, simulation and network emulation are applied to evaluate them. Network emulation is performed over an experimental network to test real-time applications, protocols and traffic parameters. DummyNet is a network emulator and traffic shaper which allows non-deterministic placement of packet losses, delays and bandwidth changes. KauNet is a network emulator which creates traffic patterns and applies them for exact deterministic placement of bit errors, packet losses, delay changes and bandwidth changes. An evaluation of KauNet with different patterns for packet losses, delay changes and bandwidth changes in an emulated environment is part of this work. The main motivation for this work is to check the possibility of delaying and dropping the packets of a transfer/session in the same way as happened before (during the observation period). This goal is achieved to some extent using KauNet, but some issues with pattern repetition still need to be solved to get better results. The idea of history- and trace-based traffic shaping using KauNet is presented to make this possibility a reality.
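The history-based shaping idea can be sketched as two small steps: record which packet positions were lost during the observation period, then replay exactly those drops. The pattern format below (a list of 1-based packet indices) is an illustrative stand-in, not KauNet's actual pattern file format.

```python
# Hedged sketch of history-based shaping: derive a deterministic loss
# pattern from an observed session and replay it. The index-list pattern
# format is an illustrative stand-in for KauNet's own pattern files.

def loss_pattern(observed):
    """observed: per-packet booleans from the observation period
    (True = lost). Returns the 1-based packet indices to drop on replay."""
    return [i + 1 for i, lost in enumerate(observed) if lost]

def replay(pattern, n_packets):
    """Apply the pattern deterministically: return the surviving
    1-based packet indices of a new n-packet transfer."""
    drops = set(pattern)
    return [i + 1 for i in range(n_packets) if i + 1 not in drops]

seen = [False, True, False, False, True]  # packets 2 and 5 were lost
pat = loss_pattern(seen)
print(pat)             # -> [2, 5]
print(replay(pat, 5))  # -> [1, 3, 4]
```

Deterministic placement like this is what makes experiments exactly reproducible, in contrast to DummyNet's probabilistic drops; the pattern-repetition issue noted above arises when the new transfer is longer than the recorded pattern.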
9

A contribution to the specification and development of a knowledge-oriented maintenance platform

Karray, Mohamed Hedi 09 March 2012 (has links)
Keeping industrial equipment in operational condition is a major challenge for companies, and has turned maintenance from a cost center into a profit center. This shift has led to a proliferation of maintenance support software, from CMMS (GMAO) systems to e-maintenance platforms. These support systems provide the various maintenance actors with decision support and a set of services allowing computerized management of the core activities of the maintenance process (e.g. intervention, planning, diagnosis). However, users' needs evolve over time with new constraints, their growing expertise and new knowledge, while the services provided do not evolve and require updating. So that these support systems can take this evolving knowledge into account, meet users' needs, and offer on-demand and evolving services, this thesis reviews the advantages and limits of existing support systems, in particular e-maintenance platforms (the most advanced maintenance systems today). To overcome the shortcomings of existing systems, we propose the concept of s-maintenance, characterized mainly by collaborative exchanges between applications and users and by shared knowledge of the maintenance domain. 
To implement this concept, we propose a knowledge-oriented platform providing auto-x functionalities (auto-traceability, auto-learning, auto-management) that meet the characteristics of s-maintenance. The component-based architecture of this platform relies on a knowledge base shared between the components it integrates, for the benefit of semantic interoperability as well as knowledge capitalization. We have also developed a maintenance domain ontology on which this knowledge base rests. Finally, to develop the auto-x functionalities provided by the platform, we propose a trace-based system exploiting the knowledge base and the associated ontology.
10

Model-Checking Infinite-State Systems For Information Flow Security Properties

Raghavendra, K R 12 1900 (has links) (PDF)
Information flow properties are a way of specifying security properties of systems, dating back to the work of Goguen and Meseguer in the eighties. In this framework, a system is modeled as having high-level (or confidential) events as well as low-level (or public) events, and a typical property requires that the high-level events should not “influence” the occurrence of low-level events. In other words, the sequence of low-level events observed from a system execution should not reveal “too much” information about the high-level events that may have taken place. For example, the trace-based “non-inference” property states that for every trace produced by the system, its projection to low-level events must also be a possible trace of the system. For a system satisfying non-inference, a low-level adversary (who knows the language generated by the system) viewing only the low-level events in any execution cannot infer any information about the occurrence of high-level events in that execution. Other well-known properties include separability, generalized non-interference, non-deducibility of outputs, etc. These properties are trace-based. There is also another class of properties, based on the structure of the transition system, called bisimulation-based information flow properties, defined by Focardi and Gorrieri in 1995. In our thesis we study the problem of model-checking the well-known trace-based and bisimulation-based properties for some popular classes of infinite-state system models. We first consider trace-based properties. We define some language-theoretic operations that help characterize language inclusion in terms of satisfaction of these properties. This gives us a reduction of the language-inclusion problem for a class of system models F to the model-checking problem for F, whenever F is effectively closed under these language-theoretic operations.
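The trace-based non-inference property described above can be sketched on a finite approximation of a system's trace set. This is a minimal illustration, not the thesis's model-checking algorithm; the event names, the example systems, and the helpers `project` and `satisfies_non_inference` are all hypothetical.

```python
def project(trace, low_events):
    """Keep only the low-level (public) events of a trace."""
    return tuple(e for e in trace if e in low_events)

def satisfies_non_inference(traces, low_events):
    """Non-inference: for every trace t of the system, the projection
    of t to low-level events must itself be a trace of the system."""
    trace_set = set(traces)
    return all(project(t, low_events) in trace_set for t in trace_set)

# Low (public) events: l1, l2; high (confidential) event: h.
LOW = {"l1", "l2"}

# System A: every low observation is possible both with and without
# the high event, so observing low events reveals nothing about "h".
system_a = {(), ("l1",), ("h", "l1"), ("l1", "l2"), ("h", "l1", "l2")}

# System B: "l2" occurs only after "h", so a low-level observer who
# sees "l2" can infer that the high event took place.
system_b = {(), ("l1",), ("h", "l1"), ("h", "l1", "l2")}

print(satisfies_non_inference(system_a, LOW))  # → True
print(satisfies_non_inference(system_b, LOW))  # → False
```

In system B the trace `("h", "l1", "l2")` projects to `("l1", "l2")`, which is not itself a trace of the system, so the property fails; this is exactly the kind of leak the low-level adversary exploits.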
We apply this result to show that the model-checking problem is undecidable for Petri nets, for pushdown systems, and for some properties on deterministic pushdown systems. We also consider the class of visibly pushdown systems and show that their model-checking problem is undecidable in general (for some properties). We then show that for the restricted class of visibly pushdown systems in which all the high (confidential) events are internal, the model-checking problem becomes decidable. Similarly, we show that the problem of model-checking bisimulation-based properties is undecidable for Petri nets, pushdown systems, and process algebras. Next we consider the problem of detecting information leakage in programs. Here programs are modeled as having low and high inputs and low outputs. The well-known definition of “non-interference” on programs says that in no execution should the low outputs depend on the high inputs. However, this definition was shown to be too strong to be used in practice, with a simple (and considered safe) “password-checking” program failing it. “Abstract non-interference (ANI)” and its variants were proposed in the literature to generalize or weaken non-interference. We call these definitions qualitative refinements of non-interference. We study the problem of model-checking many classes of finite-data programs (variables taking values from a bounded domain) for these refinements. We give algorithms and show that this problem is in PSPACE for while programs, EXPTIME for recursive programs, and EXPSPACE for asynchronous finite-data programs. We finally study different quantitative refinements of non-interference proposed in the literature. We first characterize these measures in terms of preimages. These characterizations potentially help in designing analyses that compute over- and under-approximations of these measures. We then investigate the applicability of these measures to standard cryptographic functions.
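The password-checking example mentioned above can be made concrete with a small sketch. This is an illustration of why classical non-interference rejects such a program, not the thesis's formalization; the functions `check_password` and `is_non_interfering` are hypothetical names.

```python
def check_password(secret_password, guess):
    """High (confidential) input: secret_password.
    Low (public) input: guess. Low output: the boolean result."""
    return guess == secret_password

def is_non_interfering(program, high_inputs, low_input):
    """Non-interference for a deterministic program: varying only the
    high input must never change the low-observable output."""
    outputs = {program(h, low_input) for h in high_inputs}
    return len(outputs) == 1

# With the same low input "1234", two different secrets yield two
# different low outputs, so the checker violates non-interference --
# even though this one-bit leak is usually considered acceptable.
leaks = not is_non_interfering(check_password, ["1234", "5678"], "1234")
print(leaks)  # → True
```

This is precisely the motivation for the qualitative refinements such as abstract non-interference: they weaken the property so that "safe" leaks of this kind are tolerated while genuine information flows are still flagged.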
