About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD).

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
431

Automatic Composition Of Semantic Web Services With The Abductive Event Calculus

Kirci, Esra (01 September 2008)
In today's world, composite web services are widely used in service-oriented computing, web mashups, B2B applications, etc. Most of these services are composed manually. However, the complexity of manually composing web services increases exponentially with the increase in the number of available web services, the need for dynamically created/updated/discovered services, and the necessity for larger numbers of data bindings and type mappings in longer compositions. Therefore, current highly manual web service composition techniques are far from being the answer to the web service composition problem. Automatic web service composition methods are recent research efforts to tackle the issues with manual techniques. Broadly, these methods fall into two groups: (i) workflow-based methods and (ii) methods using AI planning. This thesis investigates the application of AI planning techniques to the web service composition problem and, in particular, proposes the use of the abductive event calculus in this domain. Web service compositions are defined as templates using OWL-S ("OWL for Services"). These generic composition definitions are converted into Prolog axioms for the abductive event calculus planner, and the solutions found by the planner constitute the specific result plans for the generic composition plan. In this thesis it is shown that the abductive planning capabilities of the event calculus can be used to generate the web service composition plans that realize the generic procedure.
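As a rough illustration of the planning view of composition (a sketch, not the thesis's actual event calculus encoding in Prolog), the Python fragment below treats each service as a planning operator with typed inputs and outputs and searches for a sequence that produces the goal types; the repository, service names, and types are all hypothetical.

```python
from typing import NamedTuple

class Service(NamedTuple):
    name: str
    inputs: frozenset   # types the service consumes
    outputs: frozenset  # types the service produces

# Hypothetical repository standing in for OWL-S service descriptions.
REPO = [
    Service("geocode",  frozenset({"Address"}),     frozenset({"Coordinates"})),
    Service("forecast", frozenset({"Coordinates"}), frozenset({"Weather"})),
]

def compose(available, goal, plan=()):
    """Search for a service sequence whose chained outputs make every
    goal type available, starting from the initially available types."""
    if goal <= available:
        return list(plan)
    for svc in REPO:
        # Apply only services whose inputs are satisfied and which still
        # contribute something new; this also guarantees termination.
        if svc.inputs <= available and not svc.outputs <= available:
            result = compose(available | svc.outputs, goal, plan + (svc.name,))
            if result is not None:
                return result
    return None

print(compose(frozenset({"Address"}), frozenset({"Weather"})))
# -> ['geocode', 'forecast']
```

The abductive planner in the thesis instead abduces event occurrences and temporal orderings from event calculus axioms; the state-space search above only mirrors the input/output chaining that such plans realize.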
432

Abductive Planning Approach For Automated Web Service Composition Using Only User Specified Inputs And Outputs

Kuban, Esat Kaan (01 February 2009)
In recent years, web services have become an emerging technology for communication and integration between applications in many areas such as business-to-business (B2B) or business-to-consumer (B2C). In this growing technology, it is hard to compose web services manually because of the increasing number and complexity of web services. Therefore, automation of this composition process has gained a considerable amount of popularity. Automated web service composition can be achieved either by generating the composition plan dynamically using given inputs and outputs, or by locating the correct services if an abstract process model is given. This thesis investigates the former method, which dynamically generates the composition by using the abductive planning capabilities of the Event Calculus. Event calculus axioms in Prolog are generated from the OWL-S web service descriptions available in the service repository, from values given to inputs selected from the ontologies used by those semantic web services, and from desired output types selected, again, from the ontologies. The Abductive Theorem Prover, the AI planner used in this thesis, generates composition plans and execution results according to the generated event calculus axioms. In this thesis, it is shown that the abductive event calculus can be used for generating web service composition plans automatically and for returning the results of the generated plans by executing the necessary web services.
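Returning results, as opposed to only a plan, additionally requires executing the discovered sequence and threading outputs into later inputs. A minimal sketch of that execution step, with stub functions standing in for real web service invocations (all names hypothetical):

```python
# Hypothetical stubs standing in for real web service calls.
def geocode(bindings):
    return {"Coordinates": (38.7, 27.1)}

def forecast(bindings):
    return {"Weather": "sunny"}

SERVICES = {"geocode": geocode, "forecast": forecast}

def execute_plan(plan, initial):
    """Run each planned service in order, threading produced values
    into a shared binding environment for later steps."""
    bindings = dict(initial)
    for step in plan:
        bindings.update(SERVICES[step](bindings))
    return bindings

print(execute_plan(["geocode", "forecast"], {"Address": "Ankara"}))
# -> {'Address': 'Ankara', 'Coordinates': (38.7, 27.1), 'Weather': 'sunny'}
```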
433

Automatic Web Service Composition With AI Planning

Kuzu, Mehmet (01 July 2009)
In this thesis, some novel ideas are presented for solving the automated web service composition problem. Mechanisms are provided for possible real-world problems such as partial observability of the environment, nondeterministic effects of web services, and service execution failures. In addition to automated web service composition, the automated web service invocation task is handled in this thesis by using a reflection mechanism. The proposed approach is based on AI planning. The web service composition problem is translated into an AI planning problem, and a novel AI planner, "Simplanner", designed to work in highly dynamic environments under time constraints, is adapted to the proposed system. World-altering service calls are made in conformance with the WS-Coordination and WS-Business Activity web service transaction specifications in order to physically repair failure situations and prevent undesired side effects of aborted web service composition efforts.
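The transactional repair of failure situations resembles a compensation pattern: if a world-altering step fails, the completed steps are undone in reverse order. The sketch below illustrates only that idea with invented actions and compensators; it does not implement the WS-Coordination or WS-Business Activity protocols.

```python
class BookingFailed(Exception):
    pass

# Each step pairs a world-altering call with its compensator.
# All functions are hypothetical stand-ins for web services.
def reserve_hotel():   print("hotel reserved")
def cancel_hotel():    print("hotel cancelled")
def charge_card():     raise BookingFailed("payment declined")
def refund_card():     print("card refunded")

STEPS = [(reserve_hotel, cancel_hotel), (charge_card, refund_card)]

def run_with_compensation(steps):
    """Execute steps in order; on failure, run the compensators of
    every completed step in reverse order to repair the world state."""
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except BookingFailed as err:
        print(f"aborting: {err}")
        for compensate in reversed(done):
            compensate()

run_with_compensation(STEPS)
# hotel reserved / aborting: payment declined / hotel cancelled
```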
434

Mining Frequent Semantic Event Patterns

Soztutar, Enis (01 September 2009)
Especially with the wide use of dynamic page generation and richer user interaction on the Web, traditional web usage mining methods, which are based on the pageview concept, are of limited usability. To overcome the difficulty of capturing usage behaviour, we define the concept of semantic events. Conceptually, events are higher-level actions of a user in a web site that are technically independent of pageviews. Events are modelled as objects in the domain of the web site, with associated properties. A sample event from a video web site is the 'play video' event with properties 'video', 'length of video', 'name of video', etc. When the event objects belong to the domain model of the web site's ontology, they are referred to as semantic events. In this work, we propose a new algorithm and an associated framework for mining patterns of semantic events from usage logs. We present a method for tracking and logging domain-level events of a web site, adding semantic information to events, an ordering of events with respect to their genericity, and an algorithm for computing sequences of frequent events.
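As a toy illustration of mining frequent event sequences from usage logs (not the algorithm proposed in the thesis), the sketch below counts contiguous event pairs across sessions and keeps those meeting a minimum support; the event names and sessions are invented:

```python
from collections import Counter

# Hypothetical session logs: ordered lists of domain-level events.
sessions = [
    ["search", "play_video", "rate", "play_video"],
    ["search", "play_video", "share"],
    ["browse", "play_video", "rate"],
]

def frequent_ngrams(sessions, n=2, min_support=2):
    """Count length-n contiguous event sequences, one count per session,
    and keep those reaching the minimum support."""
    counts = Counter()
    for events in sessions:
        grams = {tuple(events[i:i + n]) for i in range(len(events) - n + 1)}
        counts.update(grams)
    return {g: c for g, c in counts.items() if c >= min_support}

print(frequent_ngrams(sessions))
# e.g. {('search', 'play_video'): 2, ('play_video', 'rate'): 2}
```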
435

A Monolithic Approach To Automated Composition Of Semantic Web Services With The Event Calculus

Okutan, Cagla (01 September 2009)
In this thesis, a web service composition and execution framework is presented for semantically annotated web services. A monolithic approach to the automated web service composition and execution problem is chosen, which provides some benefits by separating the composition and execution phases. An AI planning method using a logical formalism called the Event Calculus is chosen for the composition phase. This formalism allows one to generate a narrative of actions and temporal orderings using abductive planning techniques, given a goal. Functional properties of services, namely input/output/precondition/effect (IOPE) properties, are taken into consideration in the composition phase, and non-functional properties, namely quality of service (QoS) parameters, are used in selecting the most appropriate solution to be executed. The repository of OWL-S semantic web services is translated into Event Calculus axioms, and the resulting plans found by the Abductive Event Calculus Planner are converted to graphs. These graphs can be sorted according to a score calculated from the defined quality of service parameters of the atomic services in the composition to determine the optimal solution. The selected graph is converted to an OWL-S file, which is subsequently executed.
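The QoS-based selection step can be pictured as scoring each candidate composition by aggregating the parameters of its atomic services, for example adding response times and multiplying reliabilities. The sketch below is one plausible scoring scheme, not the thesis's actual formula; the services, values, weight, and normalization budget are all invented:

```python
# Hypothetical QoS records: response time (ms) is additive along a
# plan, reliability is multiplicative.
QOS = {
    "geocode":   {"time": 120, "reliability": 0.99},
    "forecast":  {"time": 300, "reliability": 0.95},
    "forecast2": {"time": 80,  "reliability": 0.90},
}

def plan_score(plan, time_weight=0.5):
    total_time = sum(QOS[s]["time"] for s in plan)
    reliability = 1.0
    for s in plan:
        reliability *= QOS[s]["reliability"]
    # Lower time and higher reliability are better; time is normalized
    # against an assumed 1000 ms budget so the terms are comparable.
    return (1 - time_weight) * reliability + time_weight * (1 - total_time / 1000)

candidates = [["geocode", "forecast"], ["geocode", "forecast2"]]
print(max(candidates, key=plan_score))
# -> ['geocode', 'forecast2']  (fast and reliable enough to win)
```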
436

Providing Scalability For An Automated Web Service Composition Framework

Kaya, Ertay (01 June 2010)
In this thesis, some enhancements to an existing automatic web service composition and execution system are described which give the existing framework practical significance through scalability, i.e., the ability to operate on large service sets in reasonable time. In addition, the service storage mechanism utilized in the enhanced system presents an effective method for maintaining large service sets. The described enhanced system provides scalability by implementing a pre-processing phase that extracts service chains and dependencies on the problem's initial and goal states from service descriptions. The service storage mechanism is used to store this extracted information together with the descriptions of the available services. The extracted information is used in a forward chaining algorithm which selects the potentially useful services for a given composition problem and eliminates the irrelevant ones according to the given initial and goal states. Only the selected services are used during the AI planning and execution phases, which generate the composition and execute the services, respectively.
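The pruning idea, keeping only services that are reachable from the initial state and that contribute toward the goal, can be sketched as two fixed-point sweeps over service input/output types. This is a simplified stand-in for the thesis's pre-processing phase, with an invented repository:

```python
# Hypothetical repository: service -> (input types, output types).
REPO = {
    "geocode":  ({"Address"},     {"Coordinates"}),
    "forecast": ({"Coordinates"}, {"Weather"}),
    "stockbot": ({"Ticker"},      {"Quote"}),   # irrelevant to the goal
}

def relevant_services(initial, goal):
    """Forward-chain from the initial state to find applicable services,
    then sweep backward from the goal to drop the useless ones."""
    known, applicable = set(initial), set()
    changed = True
    while changed:
        changed = False
        for name, (ins, outs) in REPO.items():
            if name not in applicable and ins <= known:
                applicable.add(name)
                known |= outs
                changed = True
    needed, kept = set(goal), set()
    changed = True
    while changed:
        changed = False
        for name in applicable - kept:
            ins, outs = REPO[name]
            if outs & needed:
                kept.add(name)
                needed |= ins
                changed = True
    return kept

print(relevant_services({"Address"}, {"Weather"}))
# -> {'geocode', 'forecast'}; 'stockbot' is pruned before planning
```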
437

A Lightweight Framework for Universal Fragment Composition

Henriksson, Jakob (06 January 2009)
Domain-specific languages (DSLs) are useful tools for coping with complexity in software development. DSLs provide developers with appropriate constructs for specifying and solving the problems they are faced with. While the exact definition of DSLs can vary, they can roughly be divided into two categories: embedded and non-embedded. Embedded DSLs (E-DSLs) are integrated into general-purpose host languages (e.g. Java), while non-embedded DSLs (NE-DSLs) are standalone languages with their own tooling (e.g. compilers or interpreters). NE-DSLs can for example be found on the Semantic Web, where they are used for querying or describing shared domain models (ontologies).

A common theme with DSLs is naturally their focused expressive power. However, in many cases they do not support non-domain-specific component-oriented constructs that can be useful for developers. Such constructs are standard in general-purpose languages (procedures, methods, packages, libraries, etc.). While E-DSLs have access to such constructs via their host languages, NE-DSLs do not have this opportunity. Instead, to support such notions, each of these languages has to be extended and its tooling updated accordingly. Such modifications can be costly and must be done individually for each language. A solution method for one language cannot easily be reused for another. There currently exists no appropriate technology for tackling this problem in a general manner.

Apart from identifying the need for a general approach to address this issue, we extend existing composition technology to provide a language-inclusive solution. We build upon fragment-based composition techniques and make them applicable to arbitrary (context-free) languages. We call this process the universalization of the composition techniques. The techniques are called fragment-based since the components they handle (reusable software units with interfaces) are pieces of source code that conform to an underlying (context-free) language grammar. The universalization process is grammar-driven: given a base language grammar and a description of the compositional needs with respect to the composition techniques, an adapted grammar is created that corresponds to the specified needs. The result is thus an adapted grammar that forms the foundation for defining and composing the desired fragments.

We further build upon this grammar-driven universalization approach to allow developers to define the non-domain-specific component-oriented constructs that are needed for NE-DSLs. Developers are able to define both what those constructs should be and how they are to be interpreted (via composition). Thus, developers can effectively define language extensions and their semantics. This solution is presented in a framework that can be reused for different languages, even if their notions of 'components' differ. To demonstrate the approach and show its applicability, we apply it to two Semantic Web related NE-DSLs that are in need of component-oriented constructs. We introduce modules to the rule-based Web query language Xcerpt and role models to the Web Ontology Language OWL.
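A crude way to picture fragment-based composition is as typed slot filling against a grammar: each fragment declares the nonterminal it conforms to, and composition substitutes it into a slot expecting that nonterminal, rejecting ill-typed combinations. The sketch below only illustrates this idea; the toy "grammar", conformance checks, and fragments are invented and far simpler than the dissertation's framework:

```python
import re

# Stand-in conformance predicates; a real implementation would parse
# each fragment against the nonterminal of a context-free grammar.
CONFORMS = {
    "Rule":  lambda s: s.strip().endswith("."),
    "Query": lambda s: s.strip().startswith("?-"),
}

BASE = "MODULE weather\n<<Rule>>\n<<Query>>"

def compose(base, fragments):
    """Fill each <<Nonterminal>> slot with a fragment declared for that
    nonterminal, refusing ill-typed compositions."""
    def fill(match):
        nt = match.group(1)
        frag = fragments[nt]
        if not CONFORMS[nt](frag):
            raise ValueError(f"fragment does not conform to {nt}")
        return frag
    return re.sub(r"<<(\w+)>>", fill, base)

print(compose(BASE, {
    "Rule":  "warm(X) :- temp(X, T), T > 20.",
    "Query": "?- warm(ankara)",
}))
```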
438

Ontologiebasierte Indexierung und Kontextualisierung multimedialer Dokumente für das persönliche Wissensmanagement / Ontology-based Indexing and Contextualization of Multimedia Documents for Personal Information Management

Mitschick, Annett (07 April 2010)
Personal multimedia document management benefits from Semantic Web technologies and the application of ontologies. However, an ontology-based document management system has to meet a number of challenges regarding the flexibility, soundness, and controllability of the semantic data model. The first part of the dissertation proposes the necessary mechanisms for the semi-automatic modeling and maintenance of semantic document descriptions. The second part introduces a component-based, application-independent architecture which forms the basis for the development of innovative, semantics-based solutions for personal document and information management.
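Semi-automatic maintenance of semantic document descriptions can be pictured as annotations stored as triples plus consistency checks that flag entries needing the user's attention, keeping the annotation effort low while preserving data quality. The following is a minimal invented sketch, not the dissertation's architecture:

```python
# Semantic document descriptions as subject-predicate-object triples.
triples = [
    ("photo_042.jpg", "rdf:type",    "ex:Photo"),
    ("photo_042.jpg", "ex:depicts",  "ex:Alice"),
    ("track_7.mp3",   "rdf:type",    "ex:AudioTrack"),
    ("track_7.mp3",   "ex:composer", "ex:UnknownPerson"),  # dangling reference
]

KNOWN_ENTITIES = {"ex:Photo", "ex:AudioTrack", "ex:Alice"}

def maintenance_report(triples):
    """Flag object references pointing outside the known ontology terms,
    so the user can confirm or repair them instead of annotating by hand."""
    return [(s, p, o) for s, p, o in triples
            if o.startswith("ex:") and o not in KNOWN_ENTITIES]

for issue in maintenance_report(triples):
    print("needs review:", issue)
# needs review: ('track_7.mp3', 'ex:composer', 'ex:UnknownPerson')
```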
439

Individual Information Adaptation Based on Content Description

Wallin, Erik Oskar (January 2004)
Today's increasing information supply raises the need for more effective and automated information processing, where individual information adaptation (personalization) is one possible solution. Earlier computer systems for personalization lacked the ability to easily define and measure the effectiveness of personalization efforts. Numerous projects failed to live up to their expectations, and the demand for evaluation increased.

This thesis presents some underlying concepts and methods for implementing personalization in order to increase stated business objectives. A personalization system was developed that utilizes descriptions of information characteristics (metadata) to perform content-based filtering in a non-intrusive way.

Most of the measurement methods for personalization described in the literature focus on improving the utility for the customer. The evaluation function of the personalization system described in this thesis takes the business operator's standpoint and pragmatically focuses on one or a few measurable business objectives. In order to verify operation of the personalization system, a function called bifurcation was created. The bifurcation function divides the customers stochastically into two or more controlled groups with different personalization configurations. By giving one of the controlled groups a personalization configuration that deactivates the personalization, a reference group is created. The reference group is used to quantitatively measure objectives by comparison with the groups with active personalization.

Two different companies had their websites personalized and evaluated: one of Sweden's largest recruitment services and the second largest Swedish daily newspaper. The purpose of the implementations was to define, measure, and increase the business objectives. The results of the two case studies show that, under propitious conditions, personalization can be made to increase stated business objectives.

Keywords: metadata, semantic web, personalization, information adaptation, one-to-one marketing, evaluation, optimization, personification, customization, individualization, internet, content filtering, automation.
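The bifurcation function amounts to a controlled experiment: customers are stochastically but stably assigned to configurations, one of which disables personalization and serves as the reference group, and the stated business objective is compared across groups. A minimal sketch with invented data and metric:

```python
import random

CONFIGS = ["personalized", "reference"]   # reference = personalization off

def assign_group(customer_id, seed=42):
    """Stochastic but stable assignment of a customer to a group."""
    rng = random.Random(f"{seed}:{customer_id}")
    return rng.choice(CONFIGS)

# Hypothetical per-customer outcomes for a business objective,
# e.g. number of job applications submitted.
outcomes = {"c1": 3, "c2": 1, "c3": 4, "c4": 0, "c5": 2, "c6": 5}

groups = {c: [] for c in CONFIGS}
for customer, value in outcomes.items():
    groups[assign_group(customer)].append(value)

for config, values in groups.items():
    mean = sum(values) / len(values) if values else float("nan")
    print(f"{config}: n={len(values)}, mean objective={mean:.2f}")
```

Seeding the generator per customer keeps assignments stable across visits, which is what makes before/after comparisons against the reference group meaningful.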
440

Προηγμένες τεχνικές και αλγόριθμοι εξόρυξης γνώσης για την προσωποποίηση της πρόσβασης σε δικτυακούς τόπους / Advanced techniques and algorithms of knowledge mining from Web Sites

Γιαννακούδη, Θεοδούλα (16 May 2007)
Web personalization is a domain which has gained great momentum not only in the research area, where many research units have addressed the problem from different perspectives, but also in the industrial area, where a variety of modules for the personalization process is available. The objective is, by exploring the information hidden in the web server log files, to discover the interactions between the visitors of web sites and the pages they contain. This information can be further exploited for web site optimization, ensuring more effective navigation for the user and, in the industrial case, client retention. A primary step before personalization is web usage mining, in which the knowledge hidden in the log files is revealed. Web usage mining is the procedure in which the information stored in the web server logs is processed by applying statistical and data mining techniques, such as clustering, association rule discovery, classification, and sequential pattern discovery, in order to reveal useful patterns that can be further analyzed. Recently, there has been an effort to incorporate web content into the web usage mining process in order to enhance the effectiveness of personalization. This thesis focuses on the domain of knowledge mining for the usage of web sites and how this procedure can benefit from the attributes of the semantic web. Initially, techniques and algorithms proposed in recent years for usage mining from web server log files are presented. Then the role of content in this process is introduced, and two works that also take the content of web sites into account are presented: a usage mining technique based on the PLSA model, which can additionally integrate attributes of the site content, and a personalization system which uses the site content to enhance the effectiveness of its recommendation engine. After the field of knowledge mining from logs is analyzed theoretically through the description of current techniques, a new system is proposed: ORGAN, named after Ontology-oRiented usaGe ANalysis. ORGAN concerns the stage of log file analysis and the mining of knowledge about web site usage based on the semantics of the web site. The semantic attributes of the web site have been derived from the set of its pages using data mining techniques and have been annotated by an OWL ontology. ORGAN provides an interface for submitting queries concerning the visitation and the semantics of the pages, exploiting the knowledge about the site as represented in the ontology. The design, development, and experimental evaluation of the system are described in detail and its results are discussed.
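Ontology-oriented usage analysis can be pictured, in miniature, as mapping each logged page request to the ontology concepts annotating that page and aggregating visits at the concept level, so queries about usage can be answered semantically. The page-to-concept mapping and the log below are invented:

```python
from collections import Counter

# Hypothetical annotation of pages with ontology concepts.
PAGE_CONCEPTS = {
    "/jobs/dev-123": {"JobPosting", "SoftwareEngineering"},
    "/jobs/qa-7":    {"JobPosting", "QualityAssurance"},
    "/advice/cv":    {"CareerAdvice"},
}

# Simplified access log: (session, requested page) pairs.
log = [
    ("s1", "/jobs/dev-123"),
    ("s1", "/advice/cv"),
    ("s2", "/jobs/qa-7"),
    ("s2", "/jobs/dev-123"),
]

def concept_visits(log):
    """Aggregate page views up to the ontology concepts they are
    annotated with, enabling semantic-level usage queries."""
    counts = Counter()
    for _, page in log:
        counts.update(PAGE_CONCEPTS.get(page, set()))
    return counts

print(concept_visits(log))
# e.g. Counter({'JobPosting': 3, 'SoftwareEngineering': 2,
#               'CareerAdvice': 1, 'QualityAssurance': 1})
```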
