  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
391

A BitTorrent-Based Semantic Search System for Digital Media on Peer-to-Peer Networks

張易修 Unknown Date (has links)
Peer-to-peer (P2P) applications play an important role on today's Internet. Under a P2P architecture the number and the sources of files in a file-sharing system grow rapidly, so users must spend more and more time locating the resources they want, which makes search functionality especially important. BitTorrent (BT), a P2P file-sharing protocol with an ever-growing user base, has become one of the main consumers of network bandwidth, yet the protocol provides no search capability: shared files are scattered across many torrent-publishing sites, and these sites cannot effectively search one another's resources, so user queries are inefficient. This research builds a semantic search mechanism to address these problems. Using Semantic Web technology, we design an ontology for the BT file-sharing domain to describe resources and establish a simple taxonomy, and we use file metadata both to provide search and to aggregate the resources of the different publishing sites, so that users can find more complete resources more efficiently.
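The abstract stops at the level of the architecture; as a rough sketch of the underlying idea (torrent metadata described in RDF and queried across publishing sites), the following Python example uses rdflib with an invented `bt:` namespace and properties (`bt:title`, `bt:category`, `bt:publishedBy`) that are illustrative assumptions, not the thesis's actual ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical mini-ontology for BT metadata (not the thesis's actual vocabulary).
BT = Namespace("http://example.org/bt-ontology#")

g = Graph()
g.bind("bt", BT)

# Describe two torrents published on different sites.
g.add((BT.torrent001, RDF.type, BT.Torrent))
g.add((BT.torrent001, BT.title, Literal("Introduction to the Semantic Web (lecture video)")))
g.add((BT.torrent001, BT.category, Literal("education")))
g.add((BT.torrent001, BT.publishedBy, Literal("site-A")))

g.add((BT.torrent002, RDF.type, BT.Torrent))
g.add((BT.torrent002, BT.title, Literal("Semantic Web tools collection")))
g.add((BT.torrent002, BT.category, Literal("software")))
g.add((BT.torrent002, BT.publishedBy, Literal("site-B")))

# A single SPARQL query now searches metadata aggregated from both sites.
q = """
PREFIX bt: <http://example.org/bt-ontology#>
SELECT ?torrent ?title ?site WHERE {
    ?torrent a bt:Torrent ;
             bt:title ?title ;
             bt:publishedBy ?site .
    FILTER(CONTAINS(LCASE(?title), "semantic"))
}
"""
for row in g.query(q):
    print(row.torrent, row.title, row.site)
```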
392

Using Text Mining Techniques to Assist in Constructing a Semantic Network of Legal Provisions: A Case Study of the Company Act

張露友 Unknown Date (has links)
This thesis applies text-mining techniques to automatically calculate the similarity between legal provisions, helping domain experts derive rules from the many articles of the Company Act and establish links among them, so that the statute is no longer a collection of isolated article numbers and texts but a network of articles connected by semantics. From the process of analysing and interpreting these links, the thesis also examines the difficulties and limitations of applying text mining to legal provisions, as a reference for subsequent research. On the positive side, the results show how text mining can assist legal knowledge extraction; on the negative side, if the results indicate that text mining is not well suited to extracting legal knowledge, the conclusions and suggestions still serve as an important reference for professionals seeking to improve related techniques.
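The abstract does not say how the similarity between articles is computed; a standard baseline for this kind of task is TF-IDF weighting with cosine similarity, sketched below in plain Python. The article numbers and token lists are invented for illustration, and real Chinese statutes would first need word segmentation (for example with a tool such as jieba).

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for a list of token lists (toy implementation)."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return vectors

def cosine(u, v):
    common = set(u) & set(v)
    num = sum(u[t] * v[t] for t in common)
    den = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

# Legal articles, already segmented into words (contents simplified and invented here).
articles = {
    "Art. 8":   ["company", "responsible", "person", "director", "duty"],
    "Art. 23":  ["responsible", "person", "duty", "loyalty", "care", "damages"],
    "Art. 128": ["company", "incorporation", "promoters", "shares"],
}
names = list(articles)
vecs = tfidf_vectors(list(articles.values()))
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(names[i], "~", names[j], round(cosine(vecs[i], vecs[j]), 3))
```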
393

Semantic Enforcement of DRM Policies to Approximate Fair Use

林光德, Lin, Guang De Unknown Date (has links)
Copyright law grants users a certain degree of fair use of copyrighted works, such as photocopying part of a book for teaching. Fair use is difficult to enforce computationally, however, and current digital rights management (DRM) systems rarely realize it, for two reasons: existing XML-based rights expression languages (RELs) cannot describe fair-use rights, and the architecture of DRM systems itself interferes with fair use. This research layers an ontology language on top of ODRL 2.0 to supply the semantics the language lacks, combines the ontology with a rule language to express policies, and uses an existing reasoning engine to enforce those policies correctly. Within this enforcement mechanism, basic fair-use policies defined by trusted third parties can be expressed and managed, and the improved REL together with the new procedure approximates the spirit of fair use.
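The actual system expresses policies in ODRL 2.0 plus an ontology and rules evaluated by a reasoner; the toy Python check below only illustrates the general idea of rule-based approximation of fair-use factors. The thresholds and factor names are assumptions, not the thesis's rule set and not legal advice.

```python
from dataclasses import dataclass

@dataclass
class UseRequest:
    purpose: str        # e.g. "education", "commentary", "commercial"
    portion: float      # fraction of the work being copied, 0.0-1.0
    transformative: bool

# Toy stand-ins for fair-use factors; the real system encodes these as
# ODRL 2.0 policies plus ontology axioms and rules evaluated by a reasoner.
def approximate_fair_use(req: UseRequest) -> bool:
    if req.purpose == "commercial" and not req.transformative:
        return False                      # factor 1: purpose and character of the use
    if req.portion > 0.2:
        return False                      # factor 3: amount and substantiality
    return True

print(approximate_fair_use(UseRequest("education", portion=0.08, transformative=False)))   # True
print(approximate_fair_use(UseRequest("commercial", portion=0.05, transformative=False)))  # False
```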
394

Role of description logic reasoning in ontology matching

Reul, Quentin H. January 2012 (has links)
Semantic interoperability is essential on the Semantic Web to enable different information systems to exchange data. Ontology matching has been recognised as a means to achieve semantic interoperability on the Web by identifying similar information in heterogeneous ontologies. Existing ontology matching approaches have two major limitations. The first relates to similarity metrics, which return overly pessimistic values for complex objects such as strings and conceptual entities. The second relates to the role of description logic reasoning: in particular, most approaches disregard implicit information about entities as a source of background knowledge. In this thesis, we first present a new similarity function, called the degree of commonality coefficient, which computes the overlap between two sets based on the similarity between their elements. The results of our evaluations show that the degree of commonality performs better than traditional set similarity metrics in the ontology matching task. Secondly, we have developed the Knowledge Organisation System Implicit Mapping (KOSIMap) framework, which differs from existing approaches by using description logic reasoning (i) to extract implicit information as background knowledge for every entity, and (ii) to remove inappropriate correspondences from an alignment. The results of our evaluation show that the use of description logic reasoning in the ontology matching task can increase coverage. We identify people interested in ontology matching and reasoning techniques as the target audience of this work.
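The abstract does not give the formula of the degree of commonality coefficient; the sketch below assumes one common formulation (averaging each element's best match in the other set, in both directions) purely to illustrate how a set-overlap score can be driven by an element-level similarity function.

```python
from difflib import SequenceMatcher

def degree_of_commonality(set_a, set_b, sim):
    """Overlap of two sets based on an element-level similarity sim(a, b) in [0, 1].

    Assumed formulation: average each element's best match in the other set,
    in both directions (the thesis's exact definition may differ).
    """
    if not set_a or not set_b:
        return 0.0
    a_to_b = sum(max(sim(a, b) for b in set_b) for a in set_a) / len(set_a)
    b_to_a = sum(max(sim(a, b) for a in set_a) for b in set_b) / len(set_b)
    return (a_to_b + b_to_a) / 2

def string_sim(x, y):
    # Element similarity on strings: normalized match ratio from the standard library.
    return SequenceMatcher(None, x.lower(), y.lower()).ratio()

labels_1 = {"postal code", "street name", "city"}
labels_2 = {"zip code", "street", "town"}
print(round(degree_of_commonality(labels_1, labels_2, string_sim), 3))
```

A classical set metric such as Jaccard would return 0 here because no label matches exactly, which is the kind of pessimism on complex objects that the coefficient is meant to avoid.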
395

XML-Based Agent Scripts and Inference Mechanisms

Sun, Guili 08 1900 (has links)
Natural language understanding has been a persistent challenge to researchers in various computer science fields, in applications ranging from user support systems to entertainment and online teaching. A long-term goal of the artificial intelligence field is to implement mechanisms that enable computers to emulate human dialogue. ALICEbots, virtual agents developed by the A.L.I.C.E. Foundation, use underlying AIML scripts - an XML-based language - as the pattern database for question answering. Their goal is to enable pattern-based, stimulus-response knowledge content to be served, received and processed over the Web, or offline, in a manner similar to HTML and XML. In this thesis, we describe a system that converts AIML scripts to Prolog clauses and reuses them as part of a knowledge processor. The inference mechanism developed in this thesis successfully matches an input pattern against our clause database even when words are missing. We also emulate the pattern-deduction algorithm of the original logic deduction mechanism. Our rules, compatible with Semantic Web standards, bring structure to the meaningful content of Web pages and support interactive content retrieval using natural language.
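As an illustration of the AIML-to-Prolog idea (not the thesis's actual translator), the following Python sketch parses a minimal AIML document and emits Prolog facts; mapping `*` to an anonymous Prolog variable is a simplification, since AIML wildcards can span several words.

```python
import xml.etree.ElementTree as ET

AIML = """
<aiml>
  <category>
    <pattern>WHAT IS YOUR NAME</pattern>
    <template>My name is Alice.</template>
  </category>
  <category>
    <pattern>I LIKE *</pattern>
    <template>Why do you like it?</template>
  </category>
</aiml>
"""

def aiml_to_prolog(aiml_text):
    """Translate AIML categories into Prolog facts of the form rule(PatternWords, Response)."""
    root = ET.fromstring(aiml_text)
    clauses = []
    for cat in root.iter("category"):
        pattern = cat.findtext("pattern", "").strip()
        template = cat.findtext("template", "").strip()
        # '*' becomes an unbound Prolog variable so an unknown word can still match.
        words = ", ".join("_" if w == "*" else f"'{w.lower()}'" for w in pattern.split())
        clauses.append(f"rule([{words}], '{template}').")
    return clauses

for clause in aiml_to_prolog(AIML):
    print(clause)
# rule(['what', 'is', 'your', 'name'], 'My name is Alice.').
# rule(['i', 'like', _], 'Why do you like it?').
```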
396

Structuring Online Debates with Socio-Semantic Annotations: Toward a Social Network Analysis Centred on Interaction

Seilles, Antoine 25 January 2012 (has links)
This thesis deals with socio-semantic annotation for e-democracy, and for online debates in particular. Socio-semantic annotation is used here to structure the debates: the data representation is designed to make it easy to extract and analyse the social network of the participants, in particular to extract groups sharing an opinion. The work was carried out within the ANR Intermed project, which aims to produce tools supporting online public consultation, notably for coastal zone management. Building on the Web 2.0 trend, we define the notion of a "debate 2.0": a large-scale debate, at least at the scale of a local authority, that relies on Web 2.0 technologies to facilitate interactions among citizens. In this context interoperability is a crucial issue: while discursive annotations fit the 2.0 trend and let citizens discuss online, processing the resulting data to structure debates, summarize discussions, moderate them, or assess representativeness becomes increasingly complex as the volume of data grows. We propose to use Semantic Web technologies, that is annotations that are both discursive and semantic (socio-semantic annotations), to represent the data citizens produce in a debate-2.0 tool, to make that data interoperable, and to ease the creation of further services such as social network analysis, recommendation, or debate visualization. We present an annotation mechanism that structures the discussions, the result of an incremental process of implementation and field experimentation.
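The abstract mentions extracting opinion groups from the annotation data; the toy Python sketch below illustrates one naive way to do this (grouping participants connected by shared "agrees" annotations), using invented annotation tuples rather than the thesis's RDF model.

```python
from collections import defaultdict
from itertools import combinations

# Toy annotation data: (participant, annotation_type, target_proposal).
# In the thesis this information is carried by socio-semantic (RDF) annotations.
annotations = [
    ("alice", "agrees",    "proposal-1"),
    ("bob",   "agrees",    "proposal-1"),
    ("bob",   "disagrees", "proposal-2"),
    ("carol", "agrees",    "proposal-2"),
    ("dave",  "agrees",    "proposal-2"),
]

# Link two participants when they agree on at least one common proposal.
supporters = defaultdict(set)
for person, kind, target in annotations:
    if kind == "agrees":
        supporters[target].add(person)

links = defaultdict(set)
for people in supporters.values():
    for a, b in combinations(sorted(people), 2):
        links[a].add(b)
        links[b].add(a)

def opinion_groups(links, everyone):
    """Opinion groups = connected components of the agreement graph."""
    seen, result = set(), []
    for start in sorted(everyone):
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:
            node = stack.pop()
            if node not in component:
                component.add(node)
                stack.extend(links.get(node, ()))
        seen |= component
        result.append(component)
    return result

everyone = {p for p, _, _ in annotations}
print(opinion_groups(links, everyone))  # e.g. [{'alice', 'bob'}, {'carol', 'dave'}]
```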
397

Individual Information Adaptation Based on Content Description

Wallin, Erik Oskar January 2004 (has links)
Today's increasing information supply raises the need for more effective and automated information processing, where individual information adaptation (personalization) is one possible solution. Earlier computer systems for personalization lacked the ability to easily define and measure the effectiveness of personalization efforts. Numerous projects failed to live up to their expectations, and the demand for evaluation increased. This thesis presents some underlying concepts and methods for implementing personalization in order to increase stated business objectives. A personalization system was developed that utilizes descriptions of information characteristics (metadata) to perform content-based filtering in a non-intrusive way. Most of the measurement methods for personalization described in the literature focus on improving the utility for the customer. The evaluation function of the personalization system described in this thesis takes the business operator's standpoint and pragmatically focuses on one or a few measurable business objectives. In order to verify operation of the personalization system, a function called bifurcation was created. The bifurcation function divides the customers stochastically into two or more controlled groups with different personalization configurations. By giving one of the controlled groups a personalization configuration that deactivates the personalization, a reference group is created. The reference group is used to quantitatively measure the objectives by comparison with the groups with active personalization. Two different companies had their websites personalized and evaluated: one of Sweden's largest recruitment services and the second largest Swedish daily newspaper. The purpose of the implementations was to define, measure, and increase the business objectives. The results of the two case studies show that, under propitious conditions, personalization can be made to increase stated business objectives. Keywords: metadata, semantic web, personalization, information adaptation, one-to-one marketing, evaluation, optimization, personification, customization, individualization, internet, content filtering, automation.
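The thesis abstract does not describe how the bifurcation function assigns customers to groups; a common, stable way to implement such a stochastic split is deterministic hashing of the customer identifier, sketched below with invented group names and traffic shares.

```python
import hashlib

GROUPS = ["reference", "personalized_a", "personalized_b"]  # hypothetical configurations
WEIGHTS = [0.2, 0.4, 0.4]                                    # share of traffic per group

def bifurcate(customer_id: str) -> str:
    """Stochastic but stable assignment of a customer to a controlled group."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    u = int(digest[:8], 16) / 0xFFFFFFFF        # pseudo-uniform value in [0, 1]
    threshold = 0.0
    for group, weight in zip(GROUPS, WEIGHTS):
        threshold += weight
        if u <= threshold:
            return group
    return GROUPS[-1]

# The reference group receives no personalization, so any lift in a measurable
# business objective (e.g. conversions per visit) can be attributed to personalization.
assignments = {cid: bifurcate(cid) for cid in ("u1001", "u1002", "u1003", "u1004")}
print(assignments)
```

Because the assignment depends only on the customer identifier, a returning customer always lands in the same group, which keeps the comparison between the reference group and the personalized groups clean.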
398

Creation of new decisional knowledge for an organization by analysing its social and documentary resources

Deparis, Etienne 19 December 2013 (has links)
Decision support rests partly on observing a dynamic, evolving environment whose events are monitored (situation awareness). These events can be of different kinds, including new connections created within a network of actors. Observing documentary databases alone no longer seems sufficient to feed decision support: the new communication and collaboration tools spreading rapidly within organizations produce new forms of information, such as the organization's social network, that current organizational decision-support systems exploit poorly or not at all. The aim of this thesis is to design (model and develop) a platform that gives an organization's members social media with which to collaborate, and gives its decision makers decision-support tools that take into account all the types of resources circulating on that platform.
399

Semantic interoperability of knowledge in feature-based CAD models

Abdul Ghafour, Samer 09 July 2009 (has links)
In a collaborative product development environment, actors with different viewpoints, intervening in several phases of the product life cycle, must communicate and exchange knowledge. This knowledge, which exists in heterogeneous formats, potentially includes design history, component structure, features, parameters, constraints and other product information. Industrial requirements to reduce production time and cost call for better semantic interoperability between development processes, in order to overcome heterogeneity at the syntactic, structural and semantic levels. In CAD, most existing methods for exchanging product-model data are based on transferring geometric data; current standards such as ISO 10303 (STEP) define only a syntactic data representation, so the semantics attached to the model, such as design intent, is lost during translation and the models can no longer be edited after exchange. Our research therefore addresses the exchange of "intelligent" models, that is, models defined in terms of their construction history and of intelligent design functions called features, including parameters and constraints, using Semantic Web technologies such as OWL DL ontologies and the SWRL rule language. We propose an exchange approach based on a common design-features ontology, called CDFO (Common Design Features Ontology), which serves as an interlingua between CAD systems. The approach rests on two main steps. The first homogenizes the representation formats of CAD models into a pivot format, OWL DL, which handles the syntactic heterogeneity between model formats. The second defines axioms and mapping rules that align the entities of the CAD application ontologies with the common ontology; the mapping process relies on the reasoning capabilities of description-logic inference engines to recognize additional semantic correspondences automatically, and it is further enhanced with a semantic similarity measure suited to OWL DL, based on the components of the entities concerned, such as their description and their context, in order to detect similar design features. This also enables data analysis and the management and discovery of implicit relationships among product data through semantic modeling and reasoning.
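The CDFO ontology itself is not reproduced in the abstract; the Python sketch below only illustrates the flavour of the mapping step, asserting an owl:equivalentClass link between two hypothetical CAD feature classes when a label-based similarity exceeds a threshold. The namespaces, class names and threshold are assumptions, not the thesis's actual vocabulary or measure.

```python
from difflib import SequenceMatcher
from rdflib import Graph, Literal, Namespace, OWL, RDF, RDFS

# Hypothetical application ontologies and common ontology namespaces.
CAD_A = Namespace("http://example.org/cad-a#")
CAD_B = Namespace("http://example.org/cad-b#")
CDFO = Namespace("http://example.org/cdfo#")

g = Graph()
for cls, label in [(CAD_A.ThroughHole, "through hole"),
                   (CAD_B.TrouDebouchant, "through hole feature"),
                   (CDFO.Hole, "hole")]:
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.label, Literal(label)))

def label_sim(graph, c1, c2):
    # Toy similarity: normalized match ratio between the two rdfs:label strings.
    l1, l2 = (str(graph.value(c, RDFS.label)) for c in (c1, c2))
    return SequenceMatcher(None, l1, l2).ratio()

# Assert an equivalence when the label similarity is above a threshold;
# a DL reasoner could then propagate further correspondences from it.
if label_sim(g, CAD_A.ThroughHole, CAD_B.TrouDebouchant) > 0.7:
    g.add((CAD_A.ThroughHole, OWL.equivalentClass, CAD_B.TrouDebouchant))

print(g.serialize(format="turtle"))
```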
400

Automatic and semantic formalization of business rules

Kacfah Emani, Cheikh Hito 01 December 2016 (has links)
This thesis focuses on the automatic, semantic transformation of business rules into formal rules. These business rules are originally drafted as natural-language text, tables and images. Our goal is to provide business experts with a set of services allowing them to build corpora of formal business rules. The work is carried out in the field of building construction: having formal, executable versions of the business rules makes it possible to perform automatic compliance checking of the digital mock-ups of construction projects under design. To this end, the two main contributions of the thesis are made available to business experts. The first is a controlled natural language, called RAINS, which lets business experts rewrite business rules as formal rules. A RAINS rule consists of terms from the business vocabulary and reserved words such as comparison predicates, negation and universal-quantification markers, and literals; each RAINS rule has a unique formal semantics based on Semantic Web standards. The second major contribution is a service for the formalization of business rules. It implements a formalization approach proposed in this thesis, called FORSA, and proposes RAINS versions of the natural-language business rules submitted to it, using natural-language-processing tools and heuristics. To evaluate FORSA, we set up a benchmark suited to the business-rule formalization task, whose data come from standards in the field of construction.
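The RAINS grammar is not given in the abstract; the sketch below invents a tiny RAINS-like pattern and shows how one controlled-language sentence could be mapped to a formal rule structure. The pattern, the example sentence and the comparator vocabulary are all assumptions for illustration only.

```python
import re
from dataclasses import dataclass

@dataclass
class FormalRule:
    subject: str
    property: str
    comparator: str
    value: float
    unit: str

# Invented controlled-language pattern; the real RAINS grammar is defined in the thesis.
PATTERN = re.compile(
    r"every (?P<subject>[\w ]+?) has a (?P<property>[\w ]+?) "
    r"(?P<comparator>at least|at most|equal to) (?P<value>[\d.]+) (?P<unit>\w+)",
    re.IGNORECASE,
)

COMPARATORS = {"at least": ">=", "at most": "<=", "equal to": "=="}

def parse_rule(sentence: str) -> FormalRule:
    m = PATTERN.fullmatch(sentence.strip().rstrip("."))
    if not m:
        raise ValueError(f"not a recognized controlled-language rule: {sentence!r}")
    return FormalRule(
        subject=m["subject"].strip(),
        property=m["property"].strip(),
        comparator=COMPARATORS[m["comparator"].lower()],
        value=float(m["value"]),
        unit=m["unit"],
    )

print(parse_rule("Every escape door has a clear width at least 0.90 m."))
# FormalRule(subject='escape door', property='clear width', comparator='>=', value=0.9, unit='m')
```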
