81

Development of dynamically-generated pages on a website

Wagner, Jodi January 2006 (has links) (PDF)
Thesis (M.S.C.I.T.)--Regis University, Denver, Colo., 2006. / Title from PDF title page (viewed on Sept. 19, 2006). Includes bibliographical references.
82

[en] SEMANTIC MODELING DESIGN OF WEB APPLICATION / [pt] MODELAGEM SEMÂNTICA DE APLICAÇÕES NA WWW

FERNANDA LIMA 13 October 2003 (has links)
In this thesis we present a method for the design and implementation of web applications for the Semantic Web. Building on the Object Oriented Hypermedia Design Method, we use ontology concepts to define the application's conceptual model, extending the expressive power of the original method. The navigational models are defined using a query language capable of querying both the schema and its instances, enabling the specification of flexible and comprehensive access structures. Additionally, we propose the use of faceted access structures to support the selection of navigational objects according to multiple criteria. Finally, we present an implementation architecture that allows the application specification to be used directly when deriving the final application implementation.
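As a rough illustration of the faceted selection idea described above, the sketch below filters a set of navigational objects by combining several facet values. The data, field names, and the facet_filter helper are invented examples, not part of the thesis.

```python
# Minimal sketch (hypothetical data and field names): a faceted access
# structure that selects navigational objects by combining several criteria.
from typing import Any, Dict, Iterable, List

# Example instances of a conceptual class "Paper" (purely illustrative).
papers: List[Dict[str, Any]] = [
    {"title": "OOHDM revisited", "year": 2002, "topic": "hypermedia", "language": "en"},
    {"title": "Ontologias na Web", "year": 2003, "topic": "semantic-web", "language": "pt"},
    {"title": "Faceted navigation", "year": 2003, "topic": "hypermedia", "language": "en"},
]

def facet_filter(items: Iterable[Dict[str, Any]], **facets: Any) -> List[Dict[str, Any]]:
    """Keep items matching every selected facet value (multi-criteria selection)."""
    return [it for it in items if all(it.get(k) == v for k, v in facets.items())]

# Combining two facets narrows the navigation set, as a faceted index would.
print(facet_filter(papers, year=2003, topic="hypermedia"))
```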
83

A quality-centered approach for web application engineering / Une approche centrée sur la qualité pour l'ingénierie des applications Web

Do, Tuan Anh 18 December 2018 (has links)
Web application developers are not all experts. Even if they use methods such as UWE (UML web engineering) and CASE tools, they are not always able to make good decisions regarding the content of the web application, the navigation schema, and/or the presentation of information. The literature provides them with many guidelines for these tasks; however, this knowledge is scattered across many sources and is not structured. In this dissertation, we consolidate the knowledge offered by these guidelines. The contribution is threefold: (i) we propose a meta-model allowing a rich representation of these guidelines, (ii) we propose a grammar enabling the description of existing guidelines, and (iii) based on this grammar, we develop a guideline management tool. We enrich the UWE method with this knowledge base, leading to a quality-based approach. Our tool thus enriches existing UWE-based Computer Aided Software Engineering prototypes with ad hoc guidance.
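To make the guideline idea concrete, here is a minimal, hypothetical sketch of guidelines represented as structured objects and checked against a toy page model. It is not the thesis's meta-model or grammar; the rule identifiers, thresholds, and page attributes are invented.

```python
# Illustrative sketch only (not the thesis's meta-model): a web design
# guideline as structured data, checked against a toy page model.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Guideline:
    identifier: str
    description: str
    check: Callable[[Dict], bool]   # returns True when the page satisfies the rule

# A toy page model: hypothetical attributes a CASE tool might expose.
page = {"navigation_depth": 4, "links_per_page": 55, "has_breadcrumb": False}

guidelines: List[Guideline] = [
    Guideline("NAV-01", "Keep navigation depth at three levels or fewer",
              lambda p: p["navigation_depth"] <= 3),
    Guideline("NAV-02", "Provide a breadcrumb trail on deep pages",
              lambda p: p["navigation_depth"] <= 2 or p["has_breadcrumb"]),
]

for g in guidelines:
    status = "ok" if g.check(page) else "violated"
    print(f"{g.identifier}: {g.description} -> {status}")
```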
84

Evaluation and improvement of semantically-enhanced tagging system

Alsharif, Majdah Hussain January 2013 (has links)
The Social Web, or ‘Web 2.0’, is focused on interaction and collaboration between web site users. It is credited with the emergence of tagging systems, among other things such as blogs and wikis. Tagging systems like YouTube and Flickr offer their users simplicity and freedom in creating and sharing their own content, and folksonomy is therefore a very active research area in which many improvements have been proposed to overcome existing disadvantages such as the lack of semantic meaning, ambiguity, and inconsistency. TE is a tagging system that proposes solutions to the problems of multilingualism, lack of semantic meaning, and shorthand writing (which is very common on the social web) with the aid of semantic and social resources. The current research presents an addition to the TE system in the form of an embedded stemming component, to address the problem of differing lexical forms. Prior to this, the TE system had to be explored thoroughly and its efficiency determined, in order to decide on the practicality of embedding any additional components as performance enhancements. This involved analysing the algorithm's efficiency analytically to determine its time and space complexity. The TE algorithm has a time growth rate of O(N²), which is polynomial, so it is considered efficient; nonetheless, recommended modifications such as batching SQL execution could improve this. Regarding space complexity, the problem size is the number of tags per photo, and as it grows the required memory grows linearly. Based on these findings, the TE system is re-implemented on Flickr instead of YouTube because of a recent YouTube restriction; this is also of greater benefit for a multilingual tagging system, since the language barrier is less of an issue in this case. The re-implementation is achieved using ‘flickrj’ (a Java interface to the Flickr APIs). Next, the stemming component is added to perform tag normalisation prior to querying the ontologies. The component is embedded using a Java implementation of the Porter2 stemmer, which supports many languages including Italian. The impact of the stemming component on the performance of the TE system, in terms of the size of the index table and the number of retrieved results, is investigated in an experiment that showed a 48% reduction in the size of the index table. This also means that search queries have fewer system tags to compare against the search keywords, which can speed up the search. Furthermore, the experiment ran similar search trials on two versions of the TE system, one without the stemming component and one with it, and found that the latter produced more results when working with valid words and valid stems. Embedding the stemming component in the new TE system lessens the storage overhead of the generated system tags by reducing the size of the index table, which makes the system suited to many applications such as text classification, summarization, email filtering, and machine translation.
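The tag-normalisation step described above can be illustrated in a few lines of Python. This is only a sketch under stated assumptions: it uses NLTK's Snowball stemmers (the Porter2 family) rather than the thesis's Java component, and the tag lists are invented.

```python
# Sketch of stemming-based tag normalisation (assumes NLTK's Snowball
# stemmers, i.e. the Porter2 family; the tag lists below are invented).
from nltk.stem.snowball import SnowballStemmer

def normalise_tags(tags, language="english"):
    """Stem each tag and drop duplicates, shrinking the tag index."""
    stemmer = SnowballStemmer(language)
    return sorted({stemmer.stem(t.lower()) for t in tags})

english_tags = ["running", "runs", "run", "beaches", "beach"]
italian_tags = ["montagne", "montagna", "fiori", "fiore"]

# Morphological variants collapse onto shared stems, so the index table
# stores fewer distinct system tags.
print(normalise_tags(english_tags))
print(normalise_tags(italian_tags, language="italian"))
```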
85

Semantic Analysis in Web Usage Mining

Norguet, Jean-Pierre E 20 March 2006 (has links)
With the emergence of the Internet and of the World Wide Web, the Web site has become a key communication channel in organizations. To satisfy the objectives of the Web site and of its target audience, adapting the Web site content to the users' expectations has become a major concern. In this context, Web usage mining, a relatively new research area, and Web analytics, a part of Web usage mining that has emerged mostly in the corporate world, offer many Web communication analysis techniques. These techniques include prediction of the user's behaviour within the site, comparison between expected and actual Web site usage, adjustment of the Web site with respect to the users' interests, and mining and analysis of Web usage data to discover interesting metrics and usage patterns. However, Web usage mining and Web analytics suffer from significant drawbacks when it comes to supporting the decision-making process at the higher levels of the organization. Indeed, according to organization theory, the higher levels of an organization need summarized and conceptual information to make fast, high-level, and effective decisions. For Web sites, these levels include the organization managers and the Web site chief editors. At these levels, the results produced by Web analytics tools are mostly useless, since most of them target Web designers and Web developers. Summary reports such as the number of visitors and the number of page views can be of some interest to the organization manager, but these results are poor. Finally, page-group and directory hits give the Web site chief editor conceptual results, but these are limited by several problems such as page synonymy (several pages contain the same topic), page polysemy (a page contains several topics), page temporality, and page volatility.

Web usage mining research projects, for their part, have mostly left aside Web analytics and its limitations and have focused on other research paths, such as usage pattern analysis, personalization, system improvement, site structure modification, marketing business intelligence, and usage characterization. A potential contribution to Web analytics can be found in research on reverse clustering analysis, a technique based on self-organizing feature maps that integrates Web usage mining and Web content mining in order to rank the Web site pages according to an original popularity score. However, the algorithm is not scalable and does not solve the page-polysemy, page-synonymy, page-temporality, and page-volatility problems. As a consequence, these approaches fail to deliver summarized and conceptual results. An interesting attempt to obtain such results has been the Information Scent algorithm, which produces a list of term vectors representing the visitors' needs. These vectors provide a semantic representation of the visitors' needs and can be easily interpreted; unfortunately, the results suffer from term polysemy and term synonymy, are visit-centric rather than site-centric, and are not scalable to produce. Finally, according to a recent survey, no Web usage mining research project has proposed a satisfying solution for providing site-wide summarized and conceptual audience metrics.

In this dissertation, we present our solution to the need for summarized and conceptual audience metrics in Web analytics. We first describe several methods for mining the Web pages output by Web servers: content journaling, script parsing, server monitoring, network monitoring, and client-side mining. These techniques can be used alone or in combination to mine the Web pages output by any Web site. The occurrences of taxonomy terms in these pages can then be aggregated to provide concept-based audience metrics. To evaluate the results, we implement a prototype and run a number of test cases with real Web sites. According to the first experiments with our prototype and SQL Server OLAP Analysis Service, concept-based metrics prove to be highly summarized and much more intuitive than page-based metrics. As a consequence, concept-based metrics can be exploited at higher levels of the organization. For example, organization managers can redefine the organization strategy according to the visitors' interests. Concept-based metrics also give an intuitive view of the messages delivered through the Web site and make it possible to adapt the Web site communication to the organization's objectives. The Web site chief editor, in turn, can interpret the metrics to redefine the publishing orders and the sub-editors' writing tasks. As decisions at higher levels of the organization should be more effective, concept-based metrics should contribute significantly to Web usage mining and Web analytics.
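A toy illustration of the aggregation step, counting taxonomy-term occurrences in served pages and summing them per concept, is sketched below. The taxonomy, page texts, and function names are invented; the actual prototype works against a Web server and an OLAP back end.

```python
# Sketch of the aggregation idea only (hypothetical taxonomy and page texts):
# occurrences of taxonomy terms in the pages served by the site are summed per
# concept to yield concept-based audience metrics.
from collections import Counter
import re

taxonomy = {                        # concept -> terms attached to it
    "admissions": ["enrolment", "tuition", "application"],
    "research":   ["laboratory", "publication", "grant"],
}

served_pages = [                    # text of pages actually output to visitors
    "Tuition fees and the application deadline for enrolment ...",
    "The laboratory announced a new publication funded by a grant ...",
    "Application forms are available at the admissions office ...",
]

def concept_metrics(pages, taxonomy):
    counts = Counter()
    for text in pages:
        words = re.findall(r"[a-z]+", text.lower())
        for concept, terms in taxonomy.items():
            counts[concept] += sum(words.count(t) for t in terms)
    return counts

print(concept_metrics(served_pages, taxonomy))  # Counter({'admissions': 4, 'research': 3})
```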
86

Automatic composition of protocol-based Web services

Ragab Hassen, Ramy 07 July 2009 (has links) (PDF)
Web services enable the flexible integration and interoperability of autonomous, heterogeneous, and distributed applications. Developing techniques and tools for composing these services automatically while taking their behaviours into account is a crucial issue. This thesis addresses the problem of automatic Web service composition. We describe Web services by their business protocols, formalized as finite state machines. Most existing work on this problem focuses on the particular case in which the number of instances of each service is fixed a priori. We address the general protocol synthesis problem, in which the number of instances of each available service that may take part in the composition is not bounded a priori. More precisely, we consider the following problem: given a set of n available protocols P1, ..., Pn and a new target protocol PT, can the behaviour of PT be synthesized by combining the behaviours described by the available protocols? To this end, we first propose a formal framework based on both the simulation test and the shuffle closure of finite state machines. We prove that the problem is decidable by providing a sound and complete composition algorithm. We then analyse the complexity of the composition problem, providing upper and lower complexity bounds. We also study particular cases of this general problem. Finally, we implement a composition prototype within the ServiceMosaic platform.
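One building block named above, the shuffle (free interleaving) product of two finite-state protocols, can be sketched as follows. The protocol encoding and the two example services are hypothetical; the simulation test and the unbounded-instance closure used by the actual algorithm are not shown.

```python
# Illustrative building block only: the shuffle (free interleaving) product of
# two finite-state protocols. Each protocol is (states, initial, transitions)
# with transitions[state][action] = next_state.

def shuffle_product(p1, p2):
    states1, init1, t1 = p1
    states2, init2, t2 = p2
    transitions = {}
    for s1 in states1:
        for s2 in states2:
            moves = {}
            for action, nxt in t1.get(s1, {}).items():
                moves[action] = (nxt, s2)        # protocol 1 moves, protocol 2 stays
            for action, nxt in t2.get(s2, {}).items():
                moves[action] = (s1, nxt)        # protocol 2 moves, protocol 1 stays
            transitions[(s1, s2)] = moves
    product_states = [(a, b) for a in states1 for b in states2]
    return product_states, (init1, init2), transitions

# Two tiny hypothetical service protocols.
search = (["s0", "s1"], "s0", {"s0": {"search": "s1"}, "s1": {"results": "s0"}})
pay    = (["p0", "p1"], "p0", {"p0": {"pay": "p1"},    "p1": {"receipt": "p0"}})

states, init, trans = shuffle_product(search, pay)
print(init, trans[init])   # from the joint initial state either service can act
```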
87

Facilitating Web Service Discovery and Publishing: A Theoretical Framework, A Prototype System, and Evaluation

Hwang, Yousub January 2007 (has links)
The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this recent transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component-based software development to promote application interaction and integration within and across enterprises. To make Web services for service-oriented computing operational, it is important that Web services repositories not only be well-structured but also provide efficient tools for an environment supporting reusable software components for both service providers and consumers. As the potential of Web services for service-oriented computing is becoming widely recognized, the demand for an integrated framework that facilitates service discovery and publishing is concomitantly growing.

In our research, we propose a framework that facilitates Web service discovery and publishing by combining clustering techniques and leveraging the semantics of the XML-based service specification in WSDL files. We believe that this is one of the first attempts at applying unsupervised artificial neural network-based machine-learning techniques in the Web service domain. Our proposed approach has several appealing features: (1) it minimizes the requirements of prior knowledge from both service providers and consumers, (2) it avoids exploiting domain-dependent ontologies, (3) it is able to visualize the information space of Web services by providing a category map that depicts the semantic relationships among them, (4) it is able to semi-automatically generate Web service taxonomies that reflect both capability and geographic context, and (5) it allows service consumers to combine multiple search strategies in a flexible manner.

We have developed a Web service discovery tool based on the proposed approach using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web services repositories. We believe that both service providers and consumers in a service-oriented computing environment can benefit from our Web service discovery approach.
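The unsupervised neural-network idea mentioned above, mapping service descriptions onto a grid so that similar services land on nearby cells, can be sketched with a tiny self-organizing map. The term-frequency vectors and grid size are invented, and this is a bare-bones illustration rather than the tool described in the abstract.

```python
# Minimal self-organizing map sketch (numpy only, toy data): service
# descriptions reduced to term-frequency vectors are mapped onto a small grid
# so that similar services end up on nearby cells.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical term-frequency vectors for four service descriptions
# (terms: weather, forecast, payment, invoice).
services = np.array([
    [3, 2, 0, 0],   # weather service
    [2, 3, 0, 0],   # forecast service
    [0, 0, 3, 2],   # payment service
    [0, 0, 2, 3],   # invoicing service
], dtype=float)

grid_w, grid_h, dim = 3, 3, services.shape[1]
weights = rng.random((grid_w, grid_h, dim))
coords = np.array([[x, y] for x in range(grid_w) for y in range(grid_h)],
                  dtype=float).reshape(grid_w, grid_h, 2)

steps = 200
for step in range(steps):
    x = services[rng.integers(len(services))]
    # Best-matching unit: the grid cell whose weight vector is closest to x.
    bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)), (grid_w, grid_h))
    lr = 0.5 * (1 - step / steps)                 # decaying learning rate
    dist = ((coords - np.array(bmu)) ** 2).sum(axis=2)
    influence = np.exp(-dist / 2.0)[..., None]    # neighbourhood function
    weights += lr * influence * (x - weights)

for i, vec in enumerate(services):
    bmu = np.unravel_index(np.argmin(((weights - vec) ** 2).sum(axis=2)), (grid_w, grid_h))
    print(f"service {i} maps to grid cell {tuple(int(c) for c in bmu)}")
```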
88

Indexation de documents pédagogiques : Fusionner les approches du web sémantique et du web participatif / Indexing educational documents: merging the Semantic Web and Participative Web approaches

Huynh-Kim-Bang, Benjamin 29 October 2009 (has links)
Current approaches to Web indexing are not satisfactory for learning resources. Automatic indexing, e.g. Google, can hardly go beyond the syntactic level of the content, while indexing by human documentalists is costly. Recent approaches such as the Semantic Web and the Participative Web (Web 2.0) offer promising solutions. The first part of our work studies the Semantic Web applied to learning resources; we explore the possibilities of automated reasoning over educational ontologies. The second part studies the features of participative websites that make it easier for visitors to add content and metadata, and we propose a model of a participative website adapted to communities of teachers. Nevertheless, the Semantic Web and the Participative Web are often set in opposition: formal ontologies, generally produced by a few experts, are contrasted with heterogeneous tags added by many users with varied profiles. In a third part, we therefore propose a model merging the Semantic and Participative approaches. This model aims to support the development of resource-sharing applications, mainly for communities of practice. It is based on Progressive and Multi-viewpoint Indexing (the IPM model), in which users progressively structure metadata, ultimately enabling semantic reasoning by machines, and progressively collaborate, ultimately fostering a shared vision of the domain among humans. The model is implemented in a social bookmarking tool, named SemanticScuttle, offering original features such as tags structured by inclusion and synonymy relations, and wiki spaces for describing tags. The tool was developed and tested with documentalists in sociology over several months; it has been released and is used in several countries. Finally, this work allows us to formulate hypotheses about a socio-technical model supporting sharing between teachers. It also contributes to models that combine different forms of indexing: automatic and human, involving experts and ordinary users, based on structured models (e.g. ontologies) and on flexible metadata (e.g. tags). Keywords: indexing (documentation), teaching – Internet resources, Web 2.0, virtual communities, Semantic Web, ontologies (computer science).
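The structured-tag feature mentioned above, tags related by synonymy and inclusion so that a query matches more than its literal tag, can be illustrated with a small sketch. The tag names, relations, and bookmark data are invented, and the code is not taken from SemanticScuttle.

```python
# Illustrative sketch (invented tags): structured tags related by synonymy and
# inclusion, and a lookup that follows those relations when matching a query.
from collections import defaultdict

synonyms = {"maths": "mathematics", "stats": "statistics"}
included_in = defaultdict(set)            # narrower tag -> broader tags
included_in["algebra"].add("mathematics")
included_in["statistics"].add("mathematics")

bookmarks = {
    "url1": {"algebra"},
    "url2": {"stats"},
    "url3": {"history"},
}

def canonical(tag):
    return synonyms.get(tag, tag)

def matches(query, tags):
    """A bookmark matches if a tag equals the query or is included in it."""
    for t in map(canonical, tags):
        if t == query or query in included_in[t]:
            return True
    return False

query = "mathematics"
print([url for url, tags in bookmarks.items() if matches(query, tags)])  # url1 and url2
```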
89

MetroWeb: logiciel de support à l'évaluation de la qualité ergonomique des sites web / MetroWeb: support software for evaluating the ergonomic quality of web sites

Mariage, Céline 17 March 2005 (has links)
The ergonomic quality of a web site is an important factor in its success with users. Evaluating ergonomic quality is a process in its own right, which can take place at several stages of a web site's life cycle. However, it is preferable to integrate evaluation as early as possible in the development cycle, in order to centre the site's design on the user and to reduce analysis, design, and development costs. Although many usability evaluation methods and tools exist, whether specific to web sites or not, their use is not widespread among designers. Designers' lack of familiarity with evaluation methods and tools, and in particular with their contribution during interface development, is the starting point of this doctoral research. The thesis proposes a software tool that supports the evaluation of the ergonomic quality of web sites by organizing and disseminating knowledge useful to the evaluator. A first external validation step of the software was carried out with web site designers, both novice and experienced.
