11

Evaluation of functional data models for database design and use

Kulkarni, Krishnarao Gururao January 1983
The problems of design, operation, and maintenance of databases using the three most popular database management systems (Hierarchical, CODASYL/DBTG, and Relational) are well known. Users wishing to use these systems have to make conscious and often complex mappings between the real-world structures and the data structuring options (data models) provided by these systems. In addition, much of the semantics associated with the data either does not get expressed at all or gets embedded procedurally in application programs in an ad hoc way. In recent years, a large number of data models (called semantic data models) have been proposed with the aim of simplifying database design and use. However, the lack of usable implementations of these proposals has so far inhibited the widespread use of these concepts. The present work reports on an effort to evaluate and extend one such semantic model by means of an implementation. It is based on the functional data model proposed earlier by Shipman (SHIP81); we call it the 'Extended Functional Data Model' (EFDM). EFDM, like Shipman's proposal, is a marriage of three of the advanced modelling concepts found in both database and artificial intelligence research: the concept of entity to represent an object in the real world, the concept of type hierarchy among entity types, and the concept of derived data for modelling procedural knowledge. The functional notation of the model lends itself to high-level data manipulation languages, in which data selection is expressed simply as function application. Further, the functional approach makes it possible to incorporate general-purpose computation facilities in the data languages without having to embed them in procedural languages. In addition to providing the usual database facilities, the implementation also provides a mechanism to specify multiple user views of the database.
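As a rough illustration of the functional style this abstract describes (entities as objects, properties as functions, data selection as function application), a minimal Python sketch, using invented names rather than EFDM's or DAPLEX's actual syntax, could look like this:

```python
# Illustrative sketch only: entities are plain objects, properties are
# functions from entities to values or to other entities, and a query is
# just function application/composition.
class Entity:
    def __init__(self, **props):
        self.__dict__.update(props)

person = Entity(name="J. Doe")
dept = Entity(dname="Computer Science", head=person)
course = Entity(title="Databases", dept=dept)

def Dept(c): return c.dept      # Course -> Department
def Head(d): return d.head      # Department -> Person
def Name(p): return p.name      # Person -> string

# A derived function: procedural knowledge stored as a function over entities.
def HeadOfCourseDept(c):
    return Name(Head(Dept(c)))

print(HeadOfCourseDept(course))  # -> "J. Doe"
```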
12

Máquina e modelo de dados dedicados para aplicações de engenharia / A Data model and a database machine for engineering applications

Caetano Traina Junior 03 December 1986
This work spans two areas: data modeling for Database Management Systems, and the development of Database Machines; the thesis is accordingly divided into two parts. The first part analyzes existing data models and, starting from the shortcomings they show for engineering database applications, defines the Object Representation Model. The second part analyzes existing database machine architectures and proposes a new dedicated architecture intended to support an implementation able to exploit the parallelism that the presented model allows. Both parts include a survey of relevant work in the respective areas and show how the proposed solutions meet the requirements inherent to each part; a thorough discussion concludes the work.
13

A semantic data model for intellectual database access

Watanabe, Toyohide, Uehara, Yuusuke, Yoshida, Yuuji, Fukumura, Teruo 03 1900
No description available.
14

Semantic Web mechanisms in Cloud Environment

Haddadi Makhsous, Saeed January 2014
Virtual Private Ontology Server (VPOS) is a middleware focused on ontologies (semantic models). VPOS offers its users a smart way to access the relevant part of an ontology depending on their context; a user's context can be an expertise level, a level of experience, or a job position in a hierarchical structure. Instead of keeping numerous ontologies associated with different user contexts, VPOS keeps only one ontology but offers sub-ontologies to users on the basis of their context. VPOS also supports reasoning to infer new consequences from the assertions stated in the ontology; these consequences are likewise visible only to contexts that have access to enough assertions in the ontology to deduce them. There are some issues with the current implementation of VPOS. The application loads the ontology into the random-access memory of the local machine, which can cause scalability problems when the ontology size exceeds the available memory. Also, since each user of VPOS holds her own instance of the application, this can lead to maintainability issues such as inconsistency between the ontologies of different users and a waste of computational resources. This thesis project sets out to find practical solutions to the issues of the current implementation, first by upgrading the application architecture with a new framework to address the scalability issue, and then by moving to the cloud to address the maintainability issues. The final product of this thesis project is Cloud-VPOS, an application built to handle semantic web mechanisms and run on a cloud platform. Cloud-VPOS is where the semantic web meets cloud computing, employing semantic web mechanisms as cloud services. / ebbits project (Enabling business-based Internet of Things and Services)
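A minimal sketch of the sub-ontology mechanism this abstract describes, written in Python with rdflib and assuming an invented 'minExpertiseLevel' annotation that ties each class to the expertise level required to see it (the real VPOS mechanism and vocabulary may differ):

```python
# Hedged sketch: filter an ontology down to the part visible to a user context.
# The ex:minExpertiseLevel annotation and all class names are invented here.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/vpos#")
g = Graph()
for cls, level in [(EX.Device, 1), (EX.Sensor, 2), (EX.CalibrationProcedure, 3)]:
    g.add((cls, RDF.type, RDFS.Class))
    g.add((cls, EX.minExpertiseLevel, Literal(level)))
g.add((EX.Sensor, RDFS.subClassOf, EX.Device))

def sub_ontology(graph, user_level):
    """Keep only the classes (and their triples) visible at the given level."""
    levelled = {s for s, _, _ in graph.triples((None, EX.minExpertiseLevel, None))}
    visible = {s for s, _, lvl in graph.triples((None, EX.minExpertiseLevel, None))
               if int(lvl) <= user_level}
    sub = Graph()
    for s, p, o in graph:
        # keep a triple if its subject is visible and its object is not a hidden class
        if s in visible and (o not in levelled or o in visible):
            sub.add((s, p, o))
    return sub

print(len(sub_ontology(g, user_level=2)))  # number of triples a level-2 user may see
```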
15

INTERACTIVE VISUAL QUERYING AND ANALYSIS FOR URBAN TRAJECTORY DATA

AL-Dohuki, Shamal Mohammed Ameen 16 April 2019
No description available.
16

Using Semantic Data for Penetration Testing : A Study on Utilizing Knowledge Graphs for Offensive Cybersecurity / Användning av Semantisk Teknologi för Sårbarhetstestning : En Studie för att Applicera Kunskapsgrafer för Offensiv Cybersäkerhet

Wei, Björn January 2022
Cybersecurity is an expanding and prominent field in the IT industry. As the number of vulnerabilities and breaches continues to increase, there is a need to properly test systems for internal weaknesses in order to keep intruders out proactively. Penetration testing is the act of emulating an adversary in order to test a system's behaviour. However, given the number of possible vulnerabilities and attack methods that exist, efficiently choosing a viable weakness to test, or selecting an adequate attack method, becomes a cumbersome task for the penetration tester. The main objective of this thesis is to explore and show how the semantic data concept of knowledge graphs can assist a penetration tester during decision-making and vulnerability analysis, for example by providing insight into the attacks a system could experience based on a set of discovered vulnerabilities, and by emulating these attacks in order to test the system. Additionally, design aspects for developing a knowledge-graph-based penetration testing system are presented, and challenges and complications of combining the two fields are discussed. In this work, three design proposals are made, drawing on knowledge graph standards and related work. A prototype is also created, based on OWASP ZAP, a penetration testing tool for web applications, which is connected to a vulnerability database in order to gain access to various cybersecurity-related data, such as attack descriptions for specific types of vulnerabilities. The analysis of the implemented prototype illustrates that knowledge graphs show potential for improving the data extracted from a vulnerability scan. By connecting a knowledge graph to a vulnerability database, penetration testers can extract information and receive attack suggestions, reducing their cognitive burden. The drawbacks of this work's prototype indicate that, for a knowledge graph penetration testing system to work, the method of extracting information needs a more user-friendly interface. Additionally, the reliance on specific standardizations creates the need to develop several integration modules.
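The suggestion mechanism this abstract describes can be pictured with a small, purely illustrative knowledge graph; the node names, relations and findings below are invented, whereas the thesis's actual graph is built from OWASP ZAP output and a vulnerability database:

```python
# Hedged sketch: map vulnerability-scan findings to candidate attack techniques
# by following edges in a tiny, hand-built knowledge graph.
import networkx as nx

kg = nx.DiGraph()
# vulnerability class --"enables"--> attack technique (all entries illustrative)
kg.add_edge("SQL injection", "Authentication bypass via crafted login form", relation="enables")
kg.add_edge("SQL injection", "Database exfiltration through UNION-based queries", relation="enables")
kg.add_edge("Reflected XSS", "Session hijacking via injected script", relation="enables")
kg.add_edge("Outdated TLS configuration", "Protocol downgrade / man-in-the-middle", relation="enables")

def suggest_attacks(scan_findings):
    """Return attack suggestions reachable from the scanner's findings."""
    suggestions = {}
    for finding in scan_findings:
        if finding in kg:
            suggestions[finding] = list(kg.successors(finding))
    return suggestions

# findings would normally be parsed from a scan report; hard-coded for the sketch
print(suggest_attacks(["SQL injection", "Reflected XSS"]))
```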
17

Interopérabilité des systèmes distribués produisant des flux de données sémantiques au profit de l'aide à la prise de décision / Interoperability of distributed systems producing semantic data stream for decision-making

Belghaouti, Fethi 26 January 2017
The Internet is an infinite source of data coming from sources such as social networks or sensors (home automation, smart cities, autonomous vehicles, etc.). These heterogeneous and increasingly large data can be managed through Semantic Web technologies, which propose to homogenize and link the data and reason over them, and through data stream management systems, which mainly address the problems related to volume, volatility and continuous querying. The alliance of these two disciplines has seen the growth of semantic data stream management systems, also called RSP (RDF Stream Processing) systems. The objective of this thesis is to allow these systems, via new approaches and low-cost algorithms, to remain operational, or even become more efficient, even for large input data volumes and/or with limited system resources. To reach this goal, the thesis is mainly focused on the issue of processing semantic data streams in a context of computer systems with limited resources. It addresses the following research questions: (i) How to represent a semantic data stream? And (ii) How to deal with incoming semantic data streams when their rates and/or volumes exceed the capabilities of the target system? As a first contribution, we propose an analysis of the data in semantic data streams in order to consider a succession of star graphs instead of a succession of independent triples, thus preserving the links between the triples. Using this approach, we significantly improved the response quality of some well-known sampling algorithms for load shedding; analysis of the continuous query further optimizes this solution by identifying irrelevant data to be shed first. In the second contribution, we propose FreGraPaD (Frequent RDF Graph Patterns Detection), an algorithm for detecting frequent RDF graph patterns in RDF data streams. It is a one-pass, memory-oriented, low-cost algorithm that uses two main data structures: a bit vector to build and identify the RDF graph pattern, providing memory space optimization, and a hash table for storing the patterns. The third contribution is a deterministic load-shedding solution for RSP systems called POL (Pattern Oriented Load-shedding for RDF Stream Processing systems). It applies very low-cost boolean operators to the binary patterns built from the data and from the continuous query in order to determine, and eject upstream of the system, the data that is not relevant. It guarantees a recall of 100%, reduces the system load and improves response time. Finally, the fourth contribution is Patorc (Pattern Oriented Compression for RSP systems), an online compression tool for RDF streams based on the frequent patterns present in the streams, which it factorizes. It is a lossless compression solution over which querying without decompression is feasible. The solutions provided by this thesis extend existing RSP systems and make them able to scale in a Big Data context: they can handle one or more semantic data streams arriving at different speeds without losing response quality, while ensuring availability even beyond their physical limits. The experiments conducted show that extending existing systems with these solutions improves their performance, considerably decreasing response time and increasing the input throughput threshold while optimizing the use of system resources.
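A minimal sketch of the pattern-oriented idea behind POL as summarized above: encode the predicates of an incoming star graph and of the continuous query as bit vectors, and keep a graph only if its pattern covers the query's pattern. The encoding below is illustrative; the thesis's exact construction may differ.

```python
# Hedged sketch: one bit per predicate in the vocabulary; a cheap boolean test
# decides whether an incoming star graph can contribute to the continuous query.
PREDICATES = ["ex:temperature", "ex:humidity", "ex:location", "ex:timestamp"]
BIT = {p: 1 << i for i, p in enumerate(PREDICATES)}

def pattern(predicates):
    """Binary pattern of a star graph or query: set the bit of each predicate used."""
    bits = 0
    for p in predicates:
        bits |= BIT.get(p, 0)
    return bits

query_pattern = pattern(["ex:temperature", "ex:location"])

def is_relevant(star_graph_predicates):
    """Keep the star graph only if it covers every predicate the query needs."""
    data_pattern = pattern(star_graph_predicates)
    return data_pattern & query_pattern == query_pattern

print(is_relevant(["ex:temperature", "ex:location", "ex:timestamp"]))  # True  -> keep
print(is_relevant(["ex:humidity", "ex:timestamp"]))                    # False -> shed
```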
18

An analysis of semantic data quality deficiencies in a national data warehouse: a data mining approach

Barth, Kirstin 07 1900
This research determines whether data quality mining can be used to describe, monitor and evaluate the scope and impact of semantic data quality problems in the learner enrolment data on the National Learners’ Records Database. Previous data quality mining work has focused on anomaly detection and has assumed that the data quality aspect being measured exists as a data value in the data set being mined. The method for this research is quantitative in that the data mining techniques and model that are best suited for semantic data quality deficiencies are identified and then applied to the data. The research determines that unsupervised data mining techniques that allow for weighted analysis of the data would be most suitable for the data mining of semantic data deficiencies. Further, the academic Knowledge Discovery in Databases model needs to be amended when applied to data mining semantic data quality deficiencies. / School of Computing / M. Tech. (Information Technology)
19

Multi-utilisation de données complexes et hétérogènes : application au domaine du PLM pour l’imagerie biomédicale / Multi-use of complex and heterogenous data : application in the domain of PLM for biomedical imaging

Pham, Cong Cuong 15 June 2017
The emergence of Information and Communication Technologies (ICT) in the early 1990s, especially the Internet, made it easy to produce data and disseminate them to the rest of the world. The strength of new Database Management Systems (DBMS) and the reduction of storage costs have led to an exponential increase in the volume of data within enterprise information systems. The large number of correlations (visible or hidden) between data makes them more intertwined and complex. The data are also heterogeneous, as they can come from many sources and exist in many formats (text, image, audio, video, etc.) or at different levels of structuring (structured, semi-structured, unstructured). All companies now have to deal with data sources that are increasingly massive, complex and heterogeneous. Growing complexity, globalization and collaborative work mean that an industrial project (product design) requires the participation of actors from several domains and workplaces, and all actors must work on a shared common repository to ensure data quality and avoid redundancies and data-flow malfunctions. In this environment of multiple data use, each user introduces his or her own point of view when adding new data and technical information. The data may either have different denominations or may not have verifiable provenances. Consequently, these data are difficult for other actors to interpret and access, and they remain unexploited, or not fully exploited, for the purpose of sharing and reuse. Data access (or data querying) is, by definition, the process of extracting information from a database using queries in order to answer a specific question. Extracting information is an indispensable function for any information system, yet it is never easy and remains a major bottleneck for all organizations (Soylu et al. 2013). In an environment of multiple uses of complex and heterogeneous data, providing all users with easy and simple access to the data becomes more difficult for two reasons. (i) Lack of technical skills: to correctly formulate a query, a user must know the structure of the data, i.e. how the data are organized and stored in the database. When data are large and complex, it is not easy to have a thorough understanding of all the dependencies and interrelationships between data, even for information-system technicians; moreover, this understanding is not necessarily linked to domain competences, so it is very rare that end users have sufficient skills. (ii) Different user perspectives: in the multi-use environment, each user introduces his or her own point of view when adding new data and technical information; data can be named in very different ways and data provenances are not sufficiently recorded, so they become difficult for other actors to interpret and access, since those actors lack a sufficient understanding of the data semantics. The thesis work presented in this manuscript aims to improve the multi-use of complex and heterogeneous data by expert business actors by providing them with semantic and visual access to the data. We observe that, although the initial design of a database may take the logic of the domain into account (using the entity-association model, for example), it is common practice to modify this design to meet specific technical needs. As a result, the final design often diverges from the original conceptual structure, and there is a clear distinction between the technical knowledge needed to extract data and the knowledge that expert actors have to interpret, process and produce data (Soylu et al. 2013). Based on bibliographical studies of data management tools, knowledge representation, visualization techniques and Semantic Web technologies (Berners-Lee et al. 2001), and in order to provide easy data access to different expert actors, we propose to use a comprehensive and declarative representation of the data that is semantic, conceptual and integrates domain knowledge close to the expert actors.
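A small sketch of the kind of semantic, domain-level access argued for here, assuming an invented PLM vocabulary and using rdflib's SPARQL support; it is purely illustrative and not the thesis's actual model:

```python
# Hedged sketch: expert users query in domain terms (device, component, designer)
# rather than against the physical table layout.  Vocabulary and data are invented.
from rdflib import Graph, Namespace, Literal, RDF

PLM = Namespace("http://example.org/plm#")
g = Graph()
g.add((PLM.scanner01, RDF.type, PLM.ImagingDevice))
g.add((PLM.scanner01, PLM.hasComponent, PLM.detectorA))
g.add((PLM.detectorA, PLM.designedBy, Literal("Team Optics")))

# A conjunctive question asked directly in the domain vocabulary.
q = """
PREFIX plm: <http://example.org/plm#>
SELECT ?device ?designer WHERE {
    ?device a plm:ImagingDevice ;
            plm:hasComponent ?part .
    ?part plm:designedBy ?designer .
}
"""
for device, designer in g.query(q):
    print(device, designer)
```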
20

Automatic sensor discovery and management to implement effective mechanism for data fusion and data aggregation / Découverte et gestion autonomique des capteurs pour une mise en oeuvre de mécanismes efficaces de fusion et d’agrégation de données

Nachabe Ismail, Lina 06 October 2015
The constant evolution of technology in terms of inexpensive, embedded wireless interfaces and powerful chipsets has led to the massive usage and development of wireless sensor networks (WSNs). This potentially affects all aspects of our lives, ranging from home automation (e.g. smart buildings), e-Health applications, environmental observation and broadcasting, food sustainability, energy management and Smart Grids, and military services to many other applications. WSNs are formed of an increasing number of sensor/actuator/relay/sink devices, generally self-organized in clusters and dedicated to a domain, provided by an increasing number of manufacturers, which leads to interoperability problems (e.g., heterogeneous interfaces and/or grounding, heterogeneous descriptions, profiles, models, etc.). Moreover, these networks are generally implemented as vertical solutions unable to interoperate with each other. The data provided by these WSNs are also very heterogeneous because they come from sensing nodes with various abilities (e.g., different sensing ranges, formats, coding schemes). To tackle these heterogeneity and interoperability problems, the WSN nodes, as well as the data sensed and/or transmitted, need to be consistently and formally represented and managed through suitable abstraction techniques and generic information models: an explicit semantics should be assigned to every term, and an open data model dedicated to WSNs should be introduced. SensorML, proposed by OGC in 2010, has been considered an essential step toward data modeling specification in WSNs; nevertheless, it is based on XML schema, permitting only a basic hierarchical description of the data and neglecting any semantic representation. Furthermore, most of the research that has used semantic techniques for developing data models has focused on modeling merely sensors and actuators (this is, e.g., the case of SSN-XG), while other research dealt with data provided by WSNs but without modeling the data type, quality and states (like, e.g., OntoSensor). The main aim of this thesis is therefore to specify and formalize an open data model for WSNs in order to mask the aforementioned heterogeneity and interoperability problems between different systems and applications. This model will also facilitate data fusion and aggregation within an open, service-oriented management architecture. The thesis thus has two main objectives: 1) to formalize a semantic open data model generically describing a WSN, its sensors/actuators and their corresponding data; the model should be light enough to respect the low-power and low-energy limitations of such networks, generic enough to describe the wide variety of WSNs, and extensible so that it can be modified and adapted to the application; and 2) to propose an upper service model and standardized enablers for enhancing sensor/actuator discovery, data fusion, data aggregation, and WSN control and management. These service-layer enablers will be used to improve data collection in large-scale networks and will facilitate the implementation of more efficient routing protocols (e.g., information-based routing) as well as decision-making mechanisms in WSNs. Optimization and modelling aspects are also a strong component of this thesis, with open issues including the choice of the best-suited description language (a trade-off between richness, complexity and flexibility), the definition of the optimal structure for the sensor/actuator discovery and management architecture, and the identification of an optimal solution to the problem of large-scale collection of sensor/actuator data.
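As a purely illustrative sketch, written with Python and rdflib, of what a light, generic semantic description of a sensor and one of its observations could look like (the vocabulary below is invented and is not the ontology proposed in the thesis):

```python
# Hedged sketch: describe a sensor and an observation with a tiny, invented vocabulary,
# then run a trivial aggregation over the uniformly described values.
from rdflib import Graph, Namespace, Literal, RDF, XSD

WSN = Namespace("http://example.org/wsn#")
g = Graph()
g.bind("wsn", WSN)

g.add((WSN.node42, RDF.type, WSN.TemperatureSensor))
g.add((WSN.node42, WSN.locatedIn, Literal("living room")))

g.add((WSN.obs1, RDF.type, WSN.Observation))
g.add((WSN.obs1, WSN.producedBy, WSN.node42))
g.add((WSN.obs1, WSN.hasValue, Literal(21.5, datatype=XSD.decimal)))
g.add((WSN.obs1, WSN.hasUnit, Literal("degC")))

# Uniform descriptions are what makes generic fusion/aggregation possible, e.g.:
values = [float(v) for _, _, v in g.triples((None, WSN.hasValue, None))]
print(sum(values) / len(values))   # average over all described observations
```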
