201

Implementation of Network Services Supporting Multi-Party Policies

Proddatoori, Santosh C 01 January 2009 (has links) (PDF)
Next-generation network architectures support complex services in the data-path of routers. A key challenge is the integration of multiple policy constraints from senders, receivers, and network providers when using such services. We introduce a multi-party service specification framework based on our “service socket” API. We illustrate the operation of this approach in an IPTV scenario that uses a video transcoding service implemented on a Cisco ISR platform.
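As a purely hypothetical illustration (the thesis's "service socket" API itself is not specified here, and the names below are invented), the multi-party aspect can be pictured as intersecting constraints contributed by the sender, the receiver, and the network provider before a data-path service such as transcoding is configured:

```python
# Hypothetical sketch of combining policy constraints from multiple parties;
# the types and field names are invented, not the thesis's service-socket API.
from dataclasses import dataclass

@dataclass
class Policy:
    max_bitrate_kbps: int        # the highest bitrate this party will accept
    allowed_codecs: frozenset    # codecs this party can handle

def combine(policies):
    """Intersect the constraints of sender, receiver, and network provider."""
    return Policy(
        max_bitrate_kbps=min(p.max_bitrate_kbps for p in policies),
        allowed_codecs=frozenset.intersection(*(p.allowed_codecs for p in policies)),
    )

sender   = Policy(8000, frozenset({"h264", "mpeg2"}))
receiver = Policy(2500, frozenset({"h264"}))
provider = Policy(4000, frozenset({"h264", "mpeg2"}))

effective = combine([sender, receiver, provider])
print(effective)   # e.g. transcode the IPTV stream to h264 at <= 2500 kbps
```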
202

An Automated Methodology For A Comprehensive Definition Of The Supply Chain Using Generic Ontological Components

Fayez, Mohamed 01 January 2005 (has links)
Today, worldwide business communities are in the era of the Supply Chain. A Supply Chain is a collection of several independent enterprises that partner together to achieve specific goals. These enterprises may plan, source, produce, deliver, or transport materials to satisfy an immediate or projected market demand, and may provide after-sales support, warranty services, and returns. Each enterprise in the Supply Chain has roles and elements. The roles include supplier, customer, or carrier, and the elements include functional units, processes, information, information resources, materials, objects, decisions, practices, and performance measures. Each enterprise, individually, manages these elements in addition to their flows, their interdependencies, and their complex interactions. Since a Supply Chain brings several enterprises together to complement each other in achieving a unified goal, the elements in each enterprise have to complement each other and have to be managed together as one unit to achieve that goal efficiently. Moreover, since there are a large number of elements to be defined and managed in a single enterprise, the number of elements to be defined and managed when considering the whole Supply Chain is massive. The supply chain community uses the Supply Chain Operations Reference model (SCOR model) to define its supply chains. However, the SCOR model methodology is limited in defining the supply chain: it defines the supply chain only in terms of processes, performance metrics, and best practices. In fact, the supply chain community, SCOR users in particular, exerts massive effort to render an adequate supply chain definition that includes the other elements besides those covered in the SCOR model. Also, the SCOR model is delivered to the user as a document, which puts a tremendous burden on the user and makes it difficult to share the definition within the enterprise or across the supply chain. This research is directed towards overcoming the limitations and shortcomings of the current supply chain definition methodology. It proposes a methodology and a tool that enable an automated and comprehensive definition of the Supply Chain at any level of detail. The proposed comprehensive definition methodology captures all the constituent parts of the Supply Chain at four different levels: the supply chain level, the enterprise level, the elements level, and the interaction level. At the supply chain level, the various enterprises that constitute the supply chain are defined. At the enterprise level, the enterprise elements are identified. At the elements level, each element in the enterprise is explicitly defined. At the interaction level, the flows, interdependencies, and interactions that exist between and within the other three levels are identified and defined. The methodology utilized several modeling techniques to generate generic explicit views and models that represent the four levels. The developed views and models were transformed into a series of questions and answers, where the questions correspond to what a view provides and the answers are the knowledge captured and generated from the view. The questions and answers were integrated to render a generic multi-view of the supply chain. The methodology and the multi-view were implemented in an ontology-based tool.
The ontology includes sets of generic supply chain ontological components that represent the supply chain elements, together with a set of automated procedures that can be utilized to define a specific supply chain. A specific supply chain can be defined by re-using the generic components and customizing them to that supply chain's specifics. The ontology-based tool was developed to function in the dynamic, information-intensive, geographically dispersed, and heterogeneous environment of the supply chain. To that end, the tool was designed to be generic, sharable, automated, customizable, extensible, and scalable.
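To make the idea of re-usable generic ontological components concrete, the following minimal sketch (using the Python rdflib library; all class, property, and instance names are invented for illustration and are not taken from the thesis) declares generic supply chain classes once and then customizes them to one specific enterprise:

```python
# Illustrative sketch only: generic supply chain components as OWL classes,
# then customized to a specific supply chain by adding instances.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

SC = Namespace("http://example.org/supplychain#")   # hypothetical namespace
g = Graph()
g.bind("sc", SC)

# Generic components: classes for the supply chain, enterprises, roles, and elements.
for cls in ("SupplyChain", "Enterprise", "Role", "Element", "Process", "PerformanceMeasure"):
    g.add((SC[cls], RDF.type, OWL.Class))
g.add((SC.hasRole, RDF.type, OWL.ObjectProperty))
g.add((SC.hasElement, RDF.type, OWL.ObjectProperty))

# Customizing the generic components to one specific supply chain.
g.add((SC.AcmeParts, RDF.type, SC.Enterprise))
g.add((SC.Supplier, RDF.type, SC.Role))
g.add((SC.AcmeParts, SC.hasRole, SC.Supplier))
g.add((SC.AcmeParts, RDFS.label, Literal("Acme Parts Co. (supplier)")))

print(g.serialize(format="turtle"))
```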
203

Interchanging Discrete Event Simulation Process Interaction Models Using the Web Ontology Language - OWL

Lacy, Lee 01 January 2006 (has links)
Discrete event simulation development requires significant investments in time and resources. Descriptions of discrete event simulation models are associated with world views, including the process interaction orientation. Historically, these models have been encoded using high-level programming languages or special-purpose, typically vendor-specific, simulation languages. These approaches complicate simulation model reuse and interchange. The current document-centric World Wide Web is evolving into a Semantic Web that communicates information using ontologies. The Web Ontology Language (OWL) was used to encode a Process Interaction Modeling Ontology for Discrete Event Simulations (PIMODES). The PIMODES ontology was developed using ontology engineering processes. Software was developed to demonstrate the feasibility of interchanging models from commercial simulation packages using PIMODES as an intermediate representation. The purpose of PIMODES is to provide a vendor-neutral open representation to support model interchange. Model interchange enables reuse and provides an opportunity to improve simulation quality, reduce development costs, and reduce development times.
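As a rough sketch of what a vendor-neutral intermediate representation can look like (the pim: terms below are invented for illustration and are not the actual PIMODES vocabulary), a toy process-interaction model can be encoded as OWL individuals with rdflib:

```python
# Illustrative sketch only: a toy process-interaction model encoded as OWL
# individuals. A translator for a specific simulation package would read such
# a graph and emit the package's native model format, and vice versa.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

PIM = Namespace("http://example.org/pimodes#")   # hypothetical namespace
g = Graph()
g.bind("pim", PIM)

for cls in ("Model", "CreateBlock", "ProcessBlock", "DisposeBlock"):
    g.add((PIM[cls], RDF.type, OWL.Class))
g.add((PIM.nextBlock, RDF.type, OWL.ObjectProperty))

# A simple queueing model: entities arrive, are served, and leave.
g.add((PIM.BankModel, RDF.type, PIM.Model))
g.add((PIM.Arrivals, RDF.type, PIM.CreateBlock))
g.add((PIM.TellerQueue, RDF.type, PIM.ProcessBlock))
g.add((PIM.Exit, RDF.type, PIM.DisposeBlock))
g.add((PIM.Arrivals, PIM.nextBlock, PIM.TellerQueue))
g.add((PIM.TellerQueue, PIM.nextBlock, PIM.Exit))
g.add((PIM.TellerQueue, RDFS.comment, Literal("Mean service time: 5 minutes")))

print(g.serialize(format="turtle"))
```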
204

Nesting ecology of the great horned owl Bubo virginianus in central western Utah

Smith, Dwight Glenn 01 August 1968 (has links)
Information was collected on the nesting ecology of the Great Horned Owl, with particular emphasis placed on aspects of its population and distribution, territoriality, and predation. The study was conducted over two years, 1967 and 1968, in the Thorpe and Topliff hills of central western Utah. Nesting densities on the study area were 0.36 pairs per square mile in 1967 and 0.40 pairs per square mile in 1968. Nests averaged one mile apart and were distributed along the periphery of the hills, overlooking the desert valleys. Favored nest sites were cliff niches, but abandoned quarries and junipers were also utilized. Territorial studies of three nesting pairs indicated that these owls maintained hunting areas ranging from 172 to 376 acres. Owls ranged as far as one mile into the adjacent desert valleys, but extended little activity into the mountainous interior. The black-tailed jackrabbit and desert cottontail contributed the bulk of the Great Horned Owl's food, followed by the kangaroo rat. Other mammals, birds, and invertebrates were also taken, but to a lesser extent.
205

Discovery and Prioritization of Drug Candidates for Repositioning Using Semantic Web-based Representation of Integrated Diseasome-Pharmacome Knowledge

Qu, Xiaoyan Angela January 2009 (has links)
No description available.
206

OWL query answering using machine learning

Huster, Todd 21 December 2015 (has links)
No description available.
207

A Performance Analysis Framework for Coreference Resolution Algorithms

Patel, Chandankumar Johakhim 29 August 2016 (has links)
No description available.
208

Test case extraction using hierarchical Petri Nets and results validation using OWL

Baumgartner Neto, August 27 April 2015 (has links)
This work proposes two methods for software system testing: the first extracts test ideas from a model developed as a hierarchical Petri net, and the second validates the results after test execution using an OWL-S model. Both processes increase the quality of the developed system by reducing the risk of insufficient coverage or incomplete testing of a functionality. The first technique consists of five steps: i) evaluation of the system and identification of separable modules and entities, ii) identification of states and transitions, iii) modeling of the system (bottom-up), iv) validation of the created model by evaluating the flow of each functionality, and v) extraction of test cases using one of the three test coverage criteria presented.
The second method is applied after the tests have been run and also has five steps: i) first, an OWL (Web Ontology Language) model of the system is built containing all significant information about the application's business rules, identifying the classes, properties, and axioms that govern it; ii) the initial status before execution is then represented in the model by inserting the instances (individuals) present; iii) after the test cases are executed, the model is updated by inserting (without deleting the existing instances) the instances that represent the new state of the application; iv) the next step uses a reasoner to make inferences over the OWL model and verify that it remains consistent, i.e. that there are no errors in the application; v) finally, the instances of the initial status are compared with those of the final status to verify that elements were changed, created, or deleted correctly. The proposed process is intended mainly for black-box functional testing but can easily be adapted to white-box testing. The test cases obtained were similar to those that would be produced by a manual analysis, while maintaining the same system coverage. The validation proved consistent with the expected results, and the ontological model proved easy and intuitive to maintain.
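A minimal sketch of the second (validation) method, assuming the Python owlready2 package and a Java-based reasoner such as HermiT are available; all class and instance names below are invented for illustration and are not taken from the thesis:

```python
# Hedged sketch: build an OWL model of the application, record instances before
# and after the tests, check consistency with a reasoner, and diff the instances.
from owlready2 import (Thing, get_ontology, sync_reasoner,
                       default_world, OwlReadyInconsistentOntologyError)

onto = get_ontology("http://example.org/app-under-test.owl")

with onto:
    class Order(Thing): pass          # i) business-rule model of the application
    class ApprovedOrder(Order): pass

# ii) initial status: instances present before the test cases are executed
order_1 = Order("order_1")
order_2 = Order("order_2")
pre_test = {"order_1", "order_2"}

# iii) after executing the tests, update the model without deleting instances
order_1.is_a.append(ApprovedOrder)    # order_1 was approved by the tested workflow
order_3 = Order("order_3")            # a new order created during the tests
post_test = {"order_1", "order_2", "order_3"}

# iv) run a reasoner; an inconsistent model signals an error in the application
try:
    sync_reasoner()                   # requires Java (HermiT) on the machine
    consistent = not list(default_world.inconsistent_classes())
except OwlReadyInconsistentOntologyError:
    consistent = False

# v) compare the initial and final instance sets to see what the tests created
created = post_test - pre_test
print("model consistent:", consistent, "| instances created by the tests:", created)
```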
209

Automatic key discovery for Data Linking

Symeonidou, Danai 09 October 2014 (has links)
In recent years, the Web of Data has grown significantly and now contains a huge number of RDF triples. Integrating data described in different RDF datasets and creating semantic links among them has become one of the most important goals of RDF applications. These links express semantic correspondences between ontology entities or between data. Among the different kinds of semantic links that can be established, identity links express that different resources refer to the same real-world entity. Comparing the number of resources published on the Web with the number of declared identity links shows that the goal of building a Web of Data is still not accomplished. Several data linking approaches infer identity links using keys. A key represents a set of properties that uniquely identifies every resource described in the data.
Nevertheless, in most datasets published on the Web, keys are not available, and it can be difficult, even for an expert, to declare them. The aim of this thesis is to study the problem of automatic key discovery in RDF data and to propose new, efficient approaches to tackle this problem. Data published on the Web are usually generated automatically and may therefore be voluminous, incomplete, erroneous, or contain duplicates. We thus focus on developing key discovery approaches that can handle datasets with numerous, incomplete, or erroneous descriptions. Our objective is to discover as many keys as possible, even ones that are valid only in subparts of the data. We first introduce KD2R, an approach for the automatic discovery of composite keys in RDF datasets that may conform to different schemas. KD2R can handle datasets that are incomplete and for which the Unique Name Assumption is fulfilled. To deal with the incompleteness of the data, KD2R proposes two heuristics that offer different interpretations of absent information, and it uses pruning techniques to reduce the search space. However, this approach is overwhelmed by the huge amounts of data found on the Web. We therefore present a second approach, SAKey, which scales to very large datasets by using effective filtering and pruning techniques. Moreover, SAKey can discover keys in datasets where erroneous data or duplicates exist. More precisely, the notion of "almost keys" is proposed to describe sets of properties that fail to be keys because of only a few exceptions.
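The distinction between a key and an "almost key" can be illustrated with a small, naive sketch (not the KD2R or SAKey algorithms themselves, which rely on filtering and pruning to scale): a property set is a key if no two resources share the same value combination, and an almost key if at most a given number of resources violate that condition.

```python
# Toy illustration of key discovery with tolerated exceptions ("almost keys").
# Minimality pruning (dropping supersets of discovered keys) is omitted here.
from collections import defaultdict
from itertools import combinations

data = {  # toy RDF-like descriptions: resource -> {property: value}
    "p1": {"name": "Anna",  "birthdate": "1980-01-01", "city": "Paris"},
    "p2": {"name": "Anna",  "birthdate": "1975-06-12", "city": "Lyon"},
    "p3": {"name": "Marco", "birthdate": "1980-01-01", "city": "Paris"},
}

def exceptions(properties, data):
    """Resources whose value combination on `properties` is shared with another resource."""
    groups = defaultdict(list)
    for res, desc in data.items():
        if all(p in desc for p in properties):          # skip incomplete descriptions
            groups[tuple(desc[p] for p in properties)].append(res)
    return {r for group in groups.values() if len(group) > 1 for r in group}

def discover_keys(data, n_exceptions=0):
    props = sorted({p for desc in data.values() for p in desc})
    found = []
    for size in range(1, len(props) + 1):
        for candidate in combinations(props, size):
            if len(exceptions(candidate, data)) <= n_exceptions:
                found.append(candidate)
    return found

print(discover_keys(data))                   # exact keys, e.g. ('birthdate', 'name')
print(discover_keys(data, n_exceptions=2))   # almost keys tolerating two exceptions
```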
210

Secretarial support for university faculties: Development of a website using Semantic Web technologies

Φωτεινός, Γεώργιος 30 April 2014 (has links)
A subset of the vast amount of information on the Web concerns Open Data: information, public or otherwise, that anyone can access and reuse for any purpose in order to add value to it. The potential of open data becomes apparent when the datasets of public bodies are turned into truly open data, i.e. data without legal, financial, or technological restrictions on their further use by third parties. The open data of a university department or faculty can create added value and have a positive impact in many different areas: participation, innovation, improvement of the efficiency and effectiveness of university services, the generation of new knowledge by combining data, and more. The ultimate goal is for open data to become Linked Open Data. Linked data acquire meaning that machines can understand and process because they are semantically described using ontologies; the data thus become "smarter" and more useful through the structure they acquire. In this thesis, a prototype web portal is implemented with the Drupal content management system (CMS), which incorporates Semantic Web technologies in its core, in order to convert the data of a university department or faculty into Linked Open Data available on the third generation of the Web, the Semantic Web.
