51

Contextual integration of heterogeneous data in an open and opportunistic smart environment: application to humanoid robots

Ramoly, Nathan 02 July 2018
Personal robots associated with ambient intelligence are an upcoming solution for domestic care: helped by devices dispatched in the environment, robots could provide better care to users. However, such robots face challenges of perception, cognition and action. The association brings issues of variety, data quality and conflicts, leading to heterogeneous and uncertain data. These are challenges for both perception, i.e. context acquisition, and cognition, i.e. reasoning and decision making. With knowledge of the context, the robot can intervene through actions. However, it may encounter task failures due to a lack of knowledge or to context changes, causing it to cancel or delay its agenda. While the literature addresses these topics, it fails to provide complete solutions. In this thesis, we propose contributions, exploring both reasoning and learning approaches, to cover the whole spectrum of problems. First, we designed a novel context acquisition tool that supports and models the uncertainty of data. Secondly, we proposed a cognition technique that detects anomalous situations over uncertain data and makes decisions accordingly. Then, we proposed a dynamic planner that takes the latest context changes into consideration. Finally, we designed an experience-based reinforcement learning approach to proactively avoid failures. All our contributions were implemented and validated through simulations and/or with a small robot in a smart home platform.
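The abstract does not detail the underlying algorithms, but the pipeline it outlines (uncertain observations fused into a context model, then matched against situation rules) can be sketched minimally as follows; the Observation structure, the additive fusion rule, and the 0.6 confidence threshold are illustrative assumptions, not the thesis's actual design.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """A context fact reported by an ambient device, with confidence in [0, 1]."""
    entity: str       # e.g. "kitchen.stove"
    state: str        # e.g. "on"
    confidence: float

def fuse(observations):
    """Per entity, keep the state with the highest accumulated confidence."""
    scores = {}
    for obs in observations:
        key = (obs.entity, obs.state)
        scores[key] = scores.get(key, 0.0) + obs.confidence
    context = {}
    for (entity, state), score in scores.items():
        if score > context.get(entity, ("", 0.0))[1]:
            context[entity] = (state, score)
    return context

def anomalous(context, rules, threshold=0.6):
    """A rule (a list of entity/state pairs) fires when every pair holds
    with confidence at or above the threshold."""
    return [rule for rule in rules
            if all(context.get(e, ("", 0.0))[0] == s and
                   context.get(e, ("", 0.0))[1] >= threshold
                   for e, s in rule)]

obs = [Observation("kitchen.stove", "on", 0.9),
       Observation("kitchen.presence", "empty", 0.8)]
rules = [[("kitchen.stove", "on"), ("kitchen.presence", "empty")]]
print(anomalous(fuse(obs), rules))  # the unattended-stove situation fires
```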
52

AMAN-DA: A knowledge-reuse-based approach to domain-specific security requirements engineering

Souag, Amina 13 November 2015
In recent years, security in Information Systems (IS) has become an important concern that must be taken into account in all stages of IS development, including the early phase of Requirements Engineering (RE). Considering security during the early stages of IS development allows developers to envisage threats, their consequences and countermeasures before a system is in place. Security requirements are known to be "the most difficult of requirements types", and potentially the ones causing the greatest risk if they are not correct. Moreover, requirements engineers are not primarily interested in, or knowledgeable about, security. Their tacit knowledge about security and their primitive knowledge about the domain for which they elicit security requirements make the resulting security requirements poor and too generic.
This thesis explores the approach of eliciting requirements based on the reuse of explicit knowledge. First, it proposes an extensive systematic mapping study of the literature on knowledge reuse in security requirements engineering, identifying the different forms of knowledge; this is followed by a review and classification of security ontologies as the main reuse form. In the second part, AMAN-DA, the method developed in this thesis, is presented. It allows the elicitation of domain-specific security requirements of an information system by reusing knowledge encapsulated in domain and security ontologies. The thesis then presents the different elements of AMAN-DA: (i) a core security ontology, (ii) a multi-level domain ontology, (iii) syntactic models of security goals and requirements, and (iv) a set of rules and mechanisms for exploring and reusing the knowledge encapsulated in the ontologies and producing security requirements specifications. The last part reports the evaluation of the method. AMAN-DA was implemented in a prototype tool; its feasibility was evaluated and applied in case studies of three different domains (maritime, web applications, and sales). The ease of use and usability of the method and its tool were also evaluated in a controlled experiment, which revealed that the method is beneficial for the elicitation of domain-specific security requirements and that the tool is friendly and easy to use.
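As a rough illustration of how requirements could be produced from ontologies, the sketch below instantiates a sentence template with concepts drawn from toy security and domain fragments; the template, the ontology fragments, and the maritime terms are all invented here and do not reproduce AMAN-DA's actual syntactic models or rules.

```python
# Toy ontology fragments: threats mapped to countermeasures (security side)
# and assets of the target domain (maritime, one of the case-study domains).
threats = {"spoofing": "mutual authentication",
           "eavesdropping": "end-to-end encryption"}
assets = ["cargo manifest", "vessel tracking feed"]

# A single invented syntactic model for a security requirement.
TEMPLATE = ("The system shall enforce {measure} to protect the {asset} "
            "against {threat}.")

for asset in assets:
    for threat, measure in threats.items():
        print(TEMPLATE.format(measure=measure, asset=asset, threat=threat))
```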
53

Aligning and Merging Biomedical Ontologies

Tan, He January 2006
Due to the explosion in the amount of biomedical data, knowledge and tools that are often publicly available over the Web, biomedical researchers experience a number of difficulties. For instance, it is difficult to find, retrieve and integrate information relevant to their research tasks. Ontologies and the vision of a Semantic Web for the life sciences alleviate these difficulties. In recent years many biomedical ontologies have been developed, and many of them contain overlapping information. To be able to use multiple ontologies, they have to be aligned or merged. A number of systems have been developed for aligning and merging ontologies, and various alignment strategies are used in these systems. However, there are no general methods to support building such tools, and very few evaluations of these strategies exist. In this thesis we give an overview of the existing systems and propose a general framework for aligning and merging ontologies; most existing systems can be seen as instantiations of this framework. Further, we develop SAMBO (System for Aligning and Merging Biomedical Ontologies) according to this framework. We implement different alignment strategies and their combinations, and evaluate them within SAMBO in terms of quality and processing time. We also compare SAMBO with two other systems. The work in this thesis is a first step towards a general framework that can be used for comparative evaluations of alignment strategies and their combinations. Report code: LiU-Tek-Lic-2006:6.
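The evaluation of alignment strategies and their combinations can be pictured with a toy matcher along the following lines; the weights, the parent-label heuristic, and the 0.5 suggestion threshold are assumptions made for the sketch, not SAMBO's actual strategies.

```python
from difflib import SequenceMatcher

def name_sim(a, b):
    """Terminological matcher: string similarity between concept labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def combined_sim(c1, c2, parents1, parents2, w_name=0.7, w_struct=0.3):
    """Weighted combination of a terminological matcher and a crude
    structural matcher (similarity of the parents' labels)."""
    structural = name_sim(parents1.get(c1, ""), parents2.get(c2, ""))
    return w_name * name_sim(c1, c2) + w_struct * structural

onto1 = {"Blood Vessel": "Cardiovascular System"}    # concept -> parent
onto2 = {"BloodVessel": "Cardiovascular Structure"}

for c1 in onto1:
    for c2 in onto2:
        score = combined_sim(c1, c2, onto1, onto2)
        if score >= 0.5:                             # suggestion threshold
            print(f"suggested mapping: {c1} ~ {c2} ({score:.2f})")
```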
54

Semantic Analysis in Web Usage Mining

Norguet, Jean-Pierre E 20 March 2006
With the emergence of the Internet and of the World Wide Web, the Web site has become a key communication channel in organizations. To satisfy the objectives of the Web site and of its target audience, adapting the Web site content to the users' expectations has become a major concern. In this context, Web usage mining, a relatively new research area, and Web analytics, the part of Web usage mining that has emerged most strongly in the corporate world, offer many Web communication analysis techniques. These techniques include prediction of the user's behaviour within the site, comparison between expected and actual Web site usage, adjustment of the Web site with respect to the users' interests, and mining and analyzing Web usage data to discover interesting metrics and usage patterns. However, Web usage mining and Web analytics suffer from significant drawbacks when it comes to supporting the decision-making process at the higher levels of the organization. Indeed, according to organization theory, the higher levels of an organization need summarized and conceptual information to take fast, high-level, and effective decisions. For Web sites, these levels include the organization managers and the Web site chief editors. At these levels, the results produced by Web analytics tools are mostly useless, since most of them target Web designers and Web developers. Summary reports like the number of visitors and the number of page views can be of some interest to the organization manager, but these results are poor. Finally, page-group and directory hits give the Web site chief editor conceptual results, but these are limited by several problems such as page synonymy (several pages contain the same topic), page polysemy (a page contains several topics), page temporality, and page volatility.
Web usage mining research projects have, for their part, mostly left Web analytics and its limitations aside and focused on other research paths, such as usage pattern analysis, personalization, system improvement, site structure modification, marketing business intelligence, and usage characterization. A potential contribution to Web analytics can be found in research on reverse clustering analysis, a technique based on self-organizing feature maps that integrates Web usage mining and Web content mining to rank the Web site pages according to an original popularity score. However, the algorithm is not scalable and does not answer the page-polysemy, page-synonymy, page-temporality, and page-volatility problems. As a consequence, these approaches fail at delivering summarized and conceptual results. An interesting attempt to obtain such results has been the Information Scent algorithm, which produces a list of term vectors representing the visitors' needs. These vectors provide a semantic representation of the visitors' needs and can be easily interpreted. Unfortunately, the results suffer from term polysemy and term synonymy, are visit-centric rather than site-centric, and are not scalable to produce. Finally, according to a recent survey, no Web usage mining research project has proposed a satisfying solution to provide site-wide summarized and conceptual audience metrics.
In this dissertation, we present our solution to answer the need for summarized and conceptual audience metrics in Web analytics. We first describe several methods for mining the Web pages output by Web servers: content journaling, script parsing, server monitoring, network monitoring, and client-side mining. These techniques can be used alone or in combination to mine the Web pages output by any Web site. The occurrences of taxonomy terms in these pages can then be aggregated to provide concept-based audience metrics. To evaluate the results, we implemented a prototype and ran a number of test cases with real Web sites. According to the first experiments with our prototype and SQL Server OLAP Analysis Service, concept-based metrics prove highly summarized and much more intuitive than page-based metrics. As a consequence, they can be exploited at higher levels in the organization. For example, organization managers can redefine the organization strategy according to the visitors' interests. Concept-based metrics also give an intuitive view of the messages delivered through the Web site and make it possible to adapt the Web site communication to the organization objectives. The Web site chief editor, for their part, can interpret the metrics to redefine the publishing orders and the sub-editors' writing tasks. As decisions at higher levels in the organization should be more effective, concept-based metrics should significantly contribute to Web usage mining and Web analytics.
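The central aggregation step (counting taxonomy-term occurrences in mined pages, weighted by traffic) is simple enough to sketch; the page data and the two-concept taxonomy below are invented for illustration.

```python
from collections import Counter

page_views = {"/products": 120, "/support": 45}          # from the server log
page_text = {"/products": "laptop tablet laptop phone",  # mined page content
             "/support": "warranty repair phone"}
taxonomy = {"hardware": {"laptop", "tablet", "phone"},
            "service": {"warranty", "repair"}}

def concept_metrics(page_views, page_text, taxonomy):
    """Aggregate taxonomy-term occurrences, weighted by page views,
    into concept-based audience metrics."""
    metrics = Counter()
    for page, views in page_views.items():
        words = page_text.get(page, "").split()
        for concept, terms in taxonomy.items():
            hits = sum(1 for word in words if word in terms)
            metrics[concept] += hits * views
    return metrics

print(concept_metrics(page_views, page_text, taxonomy))
# Counter({'hardware': 525, 'service': 90})
```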
55

A framework for the management of heterogeneous models in Systems Engineering

Simon-Zayas, David 08 June 2012
Nowadays, the complexity of systems frequently implies the participation of different engineering teams in the management of descriptive models. As each team has its own experience, domain knowledge and modeling practices, the heterogeneity of the models themselves is a logical consequence. Thus, even when each model is well managed individually, their variability becomes a problem when engineers need to share their models to perform global validations. We defend the use of implicit knowledge as an important means of reducing heterogeneity. This knowledge is implicit because it is in the engineers' heads but has not been formalized in the models, although it is essential for understanding them. After analyzing current approaches to model integration and to making implicit knowledge explicit, we propose a methodology for completing (annotating) the functional and design models of a system with shared domain knowledge formalized as ontologies. These annotations facilitate model integration and the validation of inter-model constraints. Moreover, the approach is non-intrusive, since the original models are not directly modified. Instead, they are exported into a unified framework by expressing their meta-models in a shared modeling language that permits syntactic homogenization. The approach was formally validated using the EXPRESS modeling language as the shared language. Then, to validate it from an industrial point of view, three aircraft-domain case studies were implemented by applying the approach. This industrial aspect was completed by the development of a prototype allowing engineers to work from a process perspective.
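A minimal sketch of the annotation idea follows: elements of two independently built models are tagged with concepts of a shared domain ontology, and an inter-model constraint is checked on the annotations. All names and the coverage constraint are invented for illustration and do not reflect the thesis's actual meta-models.

```python
functional_model = {"F1": "regulate cabin pressure"}   # element -> description
design_model = {"D7": "pressure control valve"}

annotations = {                # model element -> shared ontology concept
    "F1": "CabinPressurization",
    "D7": "CabinPressurization",
}

def unrealized_functions(functional_model, design_model, annotations):
    """Inter-model constraint: every annotated function must be realized by
    at least one design element carrying the same ontology concept."""
    design_concepts = {annotations[d] for d in design_model}
    return [f for f in functional_model
            if annotations.get(f) not in design_concepts]

print(unrealized_functions(functional_model, design_model, annotations))
# [] -> the constraint holds across the two models
```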
56

Software agents for Internet-based knowledge engineering

Crow, Louise Rebecca January 2000
No description available.
57

Contributions to OWL ontology alignment using similarity aggregation

Zghal, Sami 21 December 2010
In this thesis, we propose three ontology alignment methods: EDOLA (Extended Diameter OWL-Lite Alignment), SODA (Structural Ontology OWL-DL Alignment) and OACAS (Ontologies Alignment using Composition and Aggregation of Similarities). These methods rely on the aggregation and composition of similarities and exploit the structure of the ontologies to be aligned. EDOLA aligns OWL-Lite ontologies, whereas SODA and OACAS handle OWL-DL ontologies. All three methods begin by transforming each of the two ontologies to be aligned into a graph, called an O-Graph, which reproduces the OWL ontology in a form that is easy to manipulate during the alignment process. The obtained graphs describe all the information contained in the ontologies: entities, relations between entities, and instances. EDOLA computes local and global similarities using a technique that propagates similarity values through the O-Graphs: the model explores the structure of the graphs to compute similarity values between the nodes of both ontologies, and the alignment model associates an aggregation function with each category of nodes. This function takes into consideration all the similarity measures between the couples of neighboring nodes of the couple of nodes to match, exploring all their descriptive information. EDOLA operates in two successive steps: the first computes the local, terminological similarity, while the second computes the global one. SODA is an improved version of EDOLA that aligns OWL-DL ontologies instead of ontologies described in OWL-Lite. It is a structural approach operating in three successive steps over the O-Graphs. The first step computes linguistic similarity using similarity measures better adapted to the descriptors of the ontological entities to match. The second step computes structural similarity by exploiting the structure of the two O-Graphs. The third step deduces semantic similarity by combining the two similarities already computed. Finally, OACAS operates in three successive steps to produce the alignment: the first computes the composed linguistic similarity, which takes into consideration all the descriptors of the ontological entities to align; the second computes the neighborhood similarity level by level; and the third aggregates the components of the composed linguistic similarity and the level-wise neighborhood similarity to determine the aggregated similarity.
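The final aggregation step of OACAS can be pictured with the toy functions below; the 0.6/0.4 weights and the best-match averaging are assumptions made for the sketch, not the method's actual formulas.

```python
def neighborhood_sim(neigh1, neigh2, pair_sim):
    """Average best-match similarity between two sets of neighbor labels
    (a stand-in for the level-wise neighborhood similarity)."""
    if not neigh1 or not neigh2:
        return 0.0
    best = [max(pair_sim(a, b) for b in neigh2) for a in neigh1]
    return sum(best) / len(best)

def aggregate(linguistic, neighborhood, weights=(0.6, 0.4)):
    """Combine the composed linguistic similarity with the neighborhood
    similarity into one aggregated score."""
    w_ling, w_neigh = weights
    return w_ling * linguistic + w_neigh * neighborhood

exact = lambda a, b: 1.0 if a == b else 0.0
n_sim = neighborhood_sim({"Person", "Address"}, {"Person", "Location"}, exact)
print(aggregate(linguistic=0.8, neighborhood=n_sim))  # 0.6*0.8 + 0.4*0.5 = 0.68
```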
58

SemIndex: Semantic-Aware Inverted Index

Chbeir, Richard, Luo, Yi, Tekli, Joe, Yetongnon, Kokou, Raymundo Ibañez, Carlos Arturo, Traina, Agma J. M., Traina Jr, Caetano, Al Assad, Marc, Universidad Peruana de Ciencias Aplicadas (UPC) 10 February 2015
This paper focuses on the important problem of semantic-aware search in textual (structured, semi-structured, NoSQL) databases. This problem has emerged as a required extension of the standard containment keyword-based query to meet user needs in textual databases and IR applications. We provide a new approach, called SemIndex, that extends the standard inverted index by constructing a tightly coupled inverted index graph combining two main resources: a general-purpose semantic network and a standard inverted index on a collection of textual data. We also provide an extended query model and related processing algorithms built on SemIndex. To investigate its effectiveness, we set up experiments to test the performance of SemIndex. Preliminary results have demonstrated the effectiveness, scalability and optimality of our approach.
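The coupling the paper describes can be approximated in a few lines: a plain inverted index over a document collection, plus a small semantic network used to expand query terms with their neighbors. The three documents and the one-entry network below are invented for the sketch, and SemIndex itself builds a tightly coupled index graph rather than expanding at query time as done here.

```python
from collections import defaultdict

docs = {1: "car engine repair", 2: "automobile insurance", 3: "bike repair"}
semantic_net = {"car": {"automobile", "vehicle"}}   # toy semantic network

# Standard inverted index: term -> set of document ids.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def semantic_search(term):
    """Look up the query term and its semantic neighbors in the index."""
    hits = set()
    for t in {term} | semantic_net.get(term, set()):
        hits |= index.get(t, set())
    return sorted(hits)

print(semantic_search("car"))   # [1, 2]: document 2 matched via "automobile"
```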
59

Integration of ontological knowledge in sequential pattern mining with application to Web personalization

Adda, Mehdi January 2008
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
60

An Ontology Centric Architecture For Mediating Interactions In Semantic Web-Based E-Commerce Environments

Thomas, Manoj 07 March 2008
Information freely generated, widely distributed and openly interpreted is a rich source of creative energy in the digital age we live in. As we move further into this irrevocable relationship with self-growing and actively proliferating information spaces, we also find ourselves overwhelmed, disheartened and powerless in the presence of so much information. We are at a point where, without domain familiarity or expert guidance, sifting through the copious volumes of information to find relevance quickly turns into a mundane task that often requires enormous patience. The sense of accomplishment soon becomes a matter of extensive cognitive load, serendipity or just plain luck. This dissertation describes a theoretical framework for analyzing user interactions based on mental representations, in a medium where the nature of the problem-solving task emphasizes the interaction between the internal task representation and the external problem domain. The framework is established by relating to work in behavioral science, sociology, cognitive science and knowledge engineering, particularly Herbert Simon's (1957; 1989) notion of satisficing under bounded rationality and Schön's (1983) reflective model. Mental representations mediate situated actions in our constrained digital environment and provide the opportunity for completing a task. Since assistive aids that guide situated actions reduce complexity in the task environment (Vessey 1991; Pirolli et al. 1999), the framework is used as the foundation for developing mediating structures to express the internal, external and mental representations. Interaction aids superimposed on mediating structures that model thought and action help guide the "perpetual novice" (Borgman 1996) through vast digital information spaces by orchestrating a better cognitive fit between the task environment and the task solution. The dissertation presents an ontology-centric architecture for mediating interactions in a Semantic Web-based e-commerce environment, applying the Design Science approach for this purpose. The potential of the framework is illustrated as a functional model by using it to model the hierarchy of tasks in a consumer decision-making process as it applies in an e-commerce setting. Ontologies are used to express the perceptual operations on the external task environment, the intuitive operations on the internal task representation, and the constraint satisfaction and situated actions conforming to reasoning from the cognitive fit. It is maintained that actions themselves cannot be enforced, but when the meaning from mental imagery and the task environment are brought into coordination, situated actions follow that change the present situation into one closer to what is desired. To test the usability of the ontologies, we use the Web Ontology Language (OWL) to express the semantics of the three representations; we also use OWL to validate the knowledge representations and to make rule-based logical inferences on the ontological semantics. An e-commerce application was also developed to show how effective guidance can be provided by constructing semantically rich target pages from the knowledge manifested in the ontologies.
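As a stand-in for the OWL machinery the dissertation uses, the sketch below runs one rule-based inference (type propagation along subclass links) over a toy triple store; the triples and the single rule are invented, and real OWL reasoning over the three representations is far richer.

```python
triples = {("LaptopX", "isA", "Laptop"),
           ("Laptop", "subClassOf", "Computer"),
           ("Computer", "subClassOf", "Product")}

def infer_types(triples):
    """Forward-chain the rule isA(x, c) & subClassOf(c, d) => isA(x, d)
    until a fixpoint is reached."""
    changed = True
    while changed:
        changed = False
        for s, p, o in list(triples):
            if p != "isA":
                continue
            for s2, p2, o2 in list(triples):
                if p2 == "subClassOf" and s2 == o and (s, "isA", o2) not in triples:
                    triples.add((s, "isA", o2))
                    changed = True
    return triples

print(("LaptopX", "isA", "Product") in infer_types(triples))  # True
```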
