591

Referencing a website the APA way

Unruh, Miriam, McLean, Cheryl, Tittenberger, Peter, Roy, Mark 09 March 2006 (has links)
After completing this interactive tutorial you will be able to create a proper American Psychological Association (APA) reference for a webpage. This Flash tutorial requires a screen resolution of 1024 x 768 or higher.
592

A comparative evaluation of Web server systems: taxonomy and performance

Ganeshan, Manikandaprabhu 29 March 2006 (has links)
The Internet is an essential resource to an ever-increasing number of businesses and home users. Internet access is increasing dramatically, and hence the need for efficient and effective Web server systems is on the rise. These systems are information engines that are accessed through the Internet by a rapidly growing client base. They are expected to provide good performance and high availability to the end user, and to be resilient to failures at both the hardware and software levels. These characteristics make them suitable for servicing the present and future information demands of the end consumer. In recent years, researchers have concentrated on taxonomies of scalable Web server system architectures, and on routing and dispatching algorithms for request distribution. However, they have not focused on the classification of commercial products and prototypes, which would be of use to business professionals and software architects. Such a classification would help in selecting appropriate products from the market, based on product characteristics, and in designing new products with different combinations of server architectures and dispatching algorithms. Currently, dispatching algorithms are classified as content-blind, content-aware, and Domain Name Server (DNS) scheduling. These classifications are extended and organized under one tree structure in this thesis. With the help of this extension, this thesis develops a unified product-based taxonomy that identifies product capabilities by relating them to a classification of scalable Web server systems and to the extended taxonomy of dispatching algorithms. As part of a detailed analysis of Web server systems, generic queuing models, which consist of a dispatcher unit and a Web server unit, are built. Performance metrics such as throughput, server performance, mean queue size, mean waiting time, mean service time and mean response time of these generic queuing models are measured for evaluation. Finally, the correctness of the generic queuing models is evaluated with the help of theoretical and simulation analysis. / May 2005
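To make the queuing metrics mentioned above concrete, here is a minimal sketch assuming a single M/M/1 server unit; the thesis's actual generic models (dispatcher plus Web server unit) are not detailed in this abstract, and the arrival/service rates below are purely illustrative.

```python
# Hypothetical illustration: closed-form metrics for an M/M/1 queue, a common
# building block when modelling a Web server unit analytically. The rates are
# made up for the example.

def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Return standard M/M/1 performance metrics (requires arrival < service)."""
    if arrival_rate >= service_rate:
        raise ValueError("Unstable system: arrival rate must be below service rate.")
    rho = arrival_rate / service_rate                    # server utilization
    mean_queue_size = rho ** 2 / (1 - rho)               # jobs waiting (excluding the one in service)
    mean_waiting_time = mean_queue_size / arrival_rate   # by Little's law
    mean_service_time = 1 / service_rate
    mean_response_time = mean_waiting_time + mean_service_time
    return {
        "utilization": rho,
        "throughput": arrival_rate,                      # equals arrival rate when stable
        "mean_queue_size": mean_queue_size,
        "mean_waiting_time": mean_waiting_time,
        "mean_service_time": mean_service_time,
        "mean_response_time": mean_response_time,
    }

print(mm1_metrics(arrival_rate=80.0, service_rate=100.0))  # e.g. 80 req/s into a 100 req/s server
```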
593

An n-gram Based Approach to the Automatic Classification of Web Pages by Genre

Mason, Jane E. 10 December 2009 (has links)
The extraordinary growth in both the size and popularity of the World Wide Web has generated a growing interest in the identification of Web page genres, and in the use of these genres to classify Web pages. Web page genre classification is a potentially powerful tool for filtering the results of online searches. Although most information retrieval searches are topic-based, users are typically looking for a specific type of information with regard to a particular query, and genre can provide a complementary dimension along which to categorize Web pages. Web page genre classification could also aid in the automated summarization and indexing of Web pages, and in improving the automatic extraction of metadata. The hypothesis of this thesis is that a byte n-gram representation of a Web page can be used effectively to classify the Web page by its genre(s). The goal of this thesis was to develop an approach to the problem of Web page genre classification that is effective not only on balanced, single-label corpora, but also on unbalanced and multi-label corpora, which better represent a real world environment. This thesis research develops n-gram representations for Web pages and Web page genres, and based on these representations, a new approach to the classification of Web pages by genre is developed. The research includes an exhaustive examination of the questions associated with developing the new classification model, including the length, number, and type of the n-grams with which each Web page and Web page genre is represented, the method of computing the distance (dissimilarity) between two n-gram representations, and the feature selection method with which to choose these n-grams. The effect of preprocessing the data is also studied. Techniques for setting genre thresholds in order to allow a Web page to belong to more than one genre, or to no genre at all, are also investigated, and a comparison of the classification performance of the new classification model with that of the popular support vector machine approach is made. Experiments are also conducted on highly unbalanced corpora, both with and without the inclusion of noise Web pages.
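As a rough illustration of the byte n-gram idea, the sketch below builds an n-gram frequency profile for a page and compares it to a genre profile using a simple out-of-place rank distance; the n-gram length, profile size, distance measure, and sample inputs are assumptions, not the choices the thesis ultimately settles on.

```python
# Illustrative sketch only: byte n-gram frequency profiles and an out-of-place
# style dissimilarity between them. All parameters and inputs are assumed.

from collections import Counter

def byte_ngram_profile(data: bytes, n: int = 4, top_k: int = 500) -> list[bytes]:
    """Return the top_k most frequent byte n-grams, most frequent first."""
    counts = Counter(data[i:i + n] for i in range(len(data) - n + 1))
    return [gram for gram, _ in counts.most_common(top_k)]

def out_of_place_distance(profile_a: list[bytes], profile_b: list[bytes]) -> int:
    """Sum of rank differences; n-grams missing from one profile get a maximum penalty."""
    rank_b = {gram: i for i, gram in enumerate(profile_b)}
    max_penalty = len(profile_b)
    return sum(abs(i - rank_b.get(gram, max_penalty)) for i, gram in enumerate(profile_a))

# Made-up page and genre samples for demonstration.
page_bytes = b"<html><body>Breaking news: markets rally as tech stocks surge.</body></html>"
genre_bytes = b"<html><body>Latest news headlines and breaking stories from the world.</body></html>"
print(out_of_place_distance(byte_ngram_profile(page_bytes), byte_ngram_profile(genre_bytes)))
```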
594

Composition de services web par appariement de signatures

Alkamari, Aniss January 2008 (has links) (PDF)
Web services have long been presented as the long-awaited answer to the desired interoperability of heterogeneous distributed systems. In the past, several technologies promised to deliver this interoperability: .NET, DCOM, J2EE, CORBA, etc. The promise was never kept, sometimes because the technology in question was not scalable (adaptable to different scales) (DCOM and CORBA), sometimes because it was proprietary (DCOM, .NET, etc.). UDDI (Universal Description, Discovery and Integration) publishes all available web services and thereby facilitates queries for the services offered by different companies. Nevertheless, the way these queries are formulated leaves much to be desired. In particular, UDDI takes for granted that, for each business need, there is a corresponding business service. This reality quickly convinced web service users of the importance of composing them. Consequently, web service composition has attracted considerable interest in recent years. Various approaches have been adopted to compose web services. We advocate a syntactic approach. We believe that searching by context and industry domain would allow an adequate discovery of web services satisfying the client's needs. WSDL makes the task easier, since the types of its elements are business documents that give a good idea of the services they offer. We use the matching of operation signatures to find the set of operations providing the required types. Web service composition thus becomes a composition of functions that, starting from a set of input messages, produce a set of output messages. In this research, we present an algorithm based on different ways of matching types that implements this approach, together with the results obtained. ______________________________________________________________________________ AUTHOR'S KEYWORDS: Web services, .NET, DCOM, J2EE, CORBA, UDDI standard, WSDL, signature matching, composition.
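A minimal sketch of the signature-matching idea described above: operations are reduced to the message types they consume and produce (as one might extract from WSDL), and a greedy forward-chaining step looks for a composition whose outputs cover the requested types. The operation names, types, and matching rule are invented for illustration; the thesis's matching algorithm is richer.

```python
# Toy sketch of composition by signature matching. Operations and types are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Operation:
    name: str
    inputs: frozenset[str]    # message/document types consumed
    outputs: frozenset[str]   # message/document types produced

def compose(available: set[str], goal: set[str], ops: list[Operation]) -> list[str]:
    """Greedy forward chaining: apply any operation whose input types are already satisfied."""
    plan, types = [], set(available)
    progress = True
    while progress and not goal <= types:
        progress = False
        for op in ops:
            if op.name not in plan and op.inputs <= types and not op.outputs <= types:
                plan.append(op.name)
                types |= op.outputs
                progress = True
    return plan if goal <= types else []

ops = [
    Operation("GetQuote", frozenset({"ItemRequest"}), frozenset({"Quote"})),
    Operation("PlaceOrder", frozenset({"Quote"}), frozenset({"OrderConfirmation"})),
]
print(compose({"ItemRequest"}, {"OrderConfirmation"}, ops))  # ['GetQuote', 'PlaceOrder']
```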
595

Estimation de projets web : application et analyse de fiabilité des modèles COCOMO II et WebMo

Ktata, Oualid January 2007 (has links) (PDF)
Ranging from simple Web pages to sophisticated transactional systems, Web applications have evolved considerably and continue to do so. One even speaks of a new software engineering discipline, namely Web engineering [pressman2005]. Rapid time to market and the heterogeneity of the development team are among the main specificities of Web applications and projects. These specificities pose new challenges to current estimation models, even the most mature of them such as COCOMO II. In this work we analyzed the reliability of a new estimation model, namely WebMo. The latter is an adaptation of the early-design version of COCOMO II to the Web context. The originator of WebMo is Donald Reifer, who is also a very active member of the COCOMO community. Reifer presented his new model as a viable alternative to COCOMO II, provided it is equipped with a new metric that accounts for the specificities of Web applications. In this study aimed at analyzing the reliability of WebMo, we developed an estimation tool that estimates and compares development effort for Web projects according to the COCOMO II and WebMo models. Following a well-defined project selection process, we chose five Web projects from the ISBSG project repository. Despite the immaturity of the WebMo model and its predictive nature, the results generated by the tool met our expectations. Indeed, WebMo provides effort estimates closer to reality than its base model (the early-design version of COCOMO II). This is essentially due to the fact that Reifer's new metric, the 'Web Objects', takes into account multimedia objects and other objects specific to Web applications. Another important success factor is the calibration of the model, which is based solely on Web projects. Finally, we offer some recommendations, such as a post-architecture version of WebMo for later phases of the development cycle, taking into account the diversity of programming languages typical of Web applications, feeding the model's database with more projects for better calibration, and bringing its design to a more standard form such as that of COCOMO II. ______________________________________________________________________________ AUTHOR'S KEYWORDS: Estimation, Web project, WebMo, COCOMO II, ISBSG, Web engineering.
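For readers unfamiliar with these models, the sketch below shows the general effort equation shared by COCOMO II and WebMo-style models: effort grows as a power of size and is scaled by a product of cost drivers. The constants follow the published COCOMO II.2000 calibration; WebMo keeps the same shape but uses Web Objects as its size measure and its own drivers and calibration, which are not reproduced here. The project inputs are made up.

```python
# Hedged illustration of a COCOMO II-style effort equation:
#   effort (person-months) = A * Size^(B + 0.01 * sum(scale factors)) * product(effort multipliers)
# A = 2.94 and B = 0.91 are the COCOMO II.2000 values; the scale-factor and
# multiplier inputs below are illustrative nominal-rating values, not real project data.

from math import prod

def estimate_effort(size_ksloc: float, scale_factors: list[float],
                    effort_multipliers: list[float],
                    a: float = 2.94, b: float = 0.91) -> float:
    """Return estimated effort in person-months for a COCOMO II-style model."""
    exponent = b + 0.01 * sum(scale_factors)
    return a * size_ksloc ** exponent * prod(effort_multipliers)

# Made-up example: a 20 KSLOC project with roughly nominal scale factors and drivers.
print(estimate_effort(20.0,
                      scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],
                      effort_multipliers=[1.0] * 7))
```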
596

Dyslexi och Likvärdighet : Finns det en genväg till en likvärdig utbildning? / Dyslexia and Equivalence

Orhagen, Mikael January 2011 (has links)
The purpose of this paper is to find out if technology within pedagogy can help pupils with dyslexia perform on the same level as their peers without using compensatory aids. Is it possible to replace the compensatory aids with the technology in question? Can the same technology also be used to support other pupils as well, and in turn remove the stigma of needing compensatory aids? To answer these questions I have done a qualitative text analysis. This analysis focuses on technology in the form of podcasts and Web 2.0 based teaching platforms. The analysis also focuses on pupils with dyslexia and on what kind of help compensatory aids can provide. The texts are analyzed based on the concept of equivalence and Habermas's theory of technology as ideology. The conclusions I have reached are that it is not possible to help all pupils with dyslexia with these forms of technology, not in such a manner that the compensatory aids in question can be replaced. This is due to the fact that not every pupil with dyslexia needs the same support. The technology does, however, offer aid and shows signs of being able to help the pupils perform.
597

Using JESS for Enforcing Separation of Duties and Binding of Duties in a Web Services-based Workflow

Jang, Yu-Shu 29 July 2010 (has links)
Open distributed environments such as the World Wide Web facilitate information sharing but provide limited support for the protection of sensitive information and resources. Web services have become components for quickly building a business process that satisfies the business goal of an organization, and access control is imperative to prevent illegal access to sensitive information. In recent years, several studies have investigated the access control problem in Web services-based workflows, and selection approaches for choosing the performer of each task so as to satisfy all access control constraints have been proposed. Based on the role-based access control model, we focus on two types of access control: separation of duties (SoD) and binding of duties (BoD). Both role-level and participant-level SoD and BoD constraints that need to be dynamically enforced are considered in this thesis. To deal with complex and flexible business logic, we use a rule engine to reason over the business facts and derive results based on business rules. The proposed approach is evaluated with a workflow scenario and is shown to be flexible enough to develop new processes with dynamic access control constraints, at the cost of higher execution time.
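The sketch below is a toy illustration, in plain Python rather than JESS rules, of what dynamically enforcing SoD and BoD constraints amounts to when a task is about to be assigned; the task names, users, and constraint pairs are invented.

```python
# Toy check of two workflow constraints before assigning a task to a user:
#   - separation of duties (SoD): the paired tasks must NOT be done by the same user
#   - binding of duties (BoD):    the paired tasks MUST be done by the same user
# All names and pairs below are hypothetical.

history: dict[str, str] = {}   # task -> user who performed it

sod_pairs = {("approve_payment", "issue_payment")}
bod_pairs = {("prepare_contract", "sign_contract")}

def can_assign(task: str, user: str) -> bool:
    for a, b in sod_pairs:
        other = b if task == a else a if task == b else None
        if other is not None and history.get(other) == user:
            return False                      # SoD violated: same user on both tasks
    for a, b in bod_pairs:
        other = b if task == a else a if task == b else None
        if other is not None and other in history and history[other] != user:
            return False                      # BoD violated: different user on a bound task
    return True

history["approve_payment"] = "alice"
print(can_assign("issue_payment", "alice"))   # False (SoD)
print(can_assign("issue_payment", "bob"))     # True
```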
598

A Semantic-based Approach to Web Services Discovery

Tsai, Yu-Huai 13 June 2011 (has links)
Service-oriented architecture is now an important issue in program development. However, there is not yet an efficient and effective way for developers to obtain appropriate components. Current research mostly focuses on either the textual meaning or the ontological relations of services. In this research we propose a hybrid approach that integrates both types of information. It starts by defining the important attributes and their weights for Web service discovery using Multiple Criteria Decision Making. A similarity calculation based on both textual and ontological information is then applied. In the experiment, we collect 103 real-world Web services, and the experimental results show that our approach generally performs better than existing ones.
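A minimal sketch of the hybrid ranking idea, assuming the MCDM step has already produced weights for the textual and ontological dimensions; the weights, service names, and similarity scores below are illustrative only.

```python
# Rank candidate services by a weighted combination of a textual similarity score
# and an ontology-based similarity score. All values are assumed for illustration.

def combined_score(textual_sim: float, ontological_sim: float,
                   w_text: float = 0.4, w_onto: float = 0.6) -> float:
    """Weighted sum of the two similarity dimensions (weights sum to 1)."""
    return w_text * textual_sim + w_onto * ontological_sim

candidates = {
    "WeatherLookupService": (0.82, 0.70),
    "ForecastByCityService": (0.75, 0.90),
}
ranked = sorted(candidates.items(),
                key=lambda kv: combined_score(*kv[1]), reverse=True)
print(ranked)   # ForecastByCityService ranks first: 0.84 vs 0.748
```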
599

The System Design and Implementation to Support Dynamic Web Services Selection

Chen, Po-Yuan 09 February 2012 (has links)
Service-Oriented Architecture (SOA) is intended for the integration of heterogeneous applications. Complex business processes are composed of a group of specific Web services using WS-BPEL (Business Process Execution Language), and these Web services may be designed by the enterprise itself or by third-party service providers. Today there are many WS-BPEL engines that support the deployment and execution of WS-BPEL files. However, the WS-BPEL activities have to be pre-defined, and if a Web service call fails at runtime, the entire business process is considered to have failed, thereby jeopardizing the reliability of SOA. Although WS-BPEL supports a compensation mechanism, it is complex and not flexible. In this work, we propose a process design model to support dynamic Web services selection that eases the designer's job. This model has been implemented, and the prototype is evaluated to demonstrate that it indeed improves the overall business process reliability.
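As a rough sketch of the underlying idea, a process step can hold a ranked list of functionally equivalent candidate services and fall back to the next one when an invocation fails, instead of failing the whole process; the endpoints and the minimal invoke() helper below are hypothetical.

```python
# Minimal fallback-selection sketch; endpoints and payloads are hypothetical.

import urllib.request

def invoke(endpoint: str, payload: bytes) -> bytes:
    """Send a payload to a service endpoint and return the raw response body."""
    req = urllib.request.Request(endpoint, data=payload,
                                 headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read()

def invoke_with_fallback(candidates: list[str], payload: bytes) -> bytes:
    """Try each candidate service in ranked order; fail only if all of them fail."""
    last_error = None
    for endpoint in candidates:
        try:
            return invoke(endpoint, payload)
        except OSError as err:          # covers network errors, timeouts, HTTP errors
            last_error = err
    raise RuntimeError("All candidate services failed") from last_error

# Example (hypothetical endpoints):
# invoke_with_fallback(["https://primary.example/ws", "https://backup.example/ws"], b"<soap/>")
```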
600

The Study of Dynamic Web Service Selection Based on Reliability

Chen, Cheng-Hung 11 July 2007 (has links)
With the emergence of the SOA concept, Web services have become a key technology for achieving seamless system interoperability and collaboration with enterprise partners. Since many available Web services provide overlapping or identical functionality, when composing a composite Web service a choice needs to be made to select an appropriate component Web service. Dynamic Web service selection refers to determining a subset of component Web services to be invoked so as to orchestrate a composite Web service. Previous work on Web service selection usually assumes the invocations of Web service operations to be independent of one another. This assumption, however, does not hold in practice, as both the composite and component Web services often impose orderings on the invocation of their operations to represent their business logic. Such orderings constrain the selection of component Web services used to orchestrate the composite Web service. We therefore propose to use a finite state machine (FSM) to model the invocation order of Web service operations. We define a measure, called aggregated reliability, to measure the probability that a given state in the composite Web service will lead to successful execution in a context where each component Web service may fail with some probability. We show that the computation of aggregated reliability is equivalent to an eigenvector computation. We also propose two strategies for selecting component Web services that are likely to successfully complete the execution of a given sequence of operations. For our approach to work in a practical environment, the dominant composition language, BPEL, which specifies the operation invocation orders, is transformed into an abstract FSM. We also propose a prototype for realizing our dynamic Web service selection. Our experiments on a generated set of Web service operation sequences show that our proposed strategies perform better than two baseline selection strategies.
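The following sketch illustrates, under simplifying assumptions rather than the thesis's exact eigenvector formulation, what an aggregated reliability can look like: for each FSM state, the probability of eventually reaching the final state when every transition invokes an operation that succeeds with a given probability, solved here by fixed-point iteration over a small hand-made FSM.

```python
# Simplified aggregated-reliability sketch over a hypothetical FSM. Each transition
# is (next_state, success_probability); a state's reliability is taken from its
# best available outgoing transition, computed by fixed-point iteration.

def aggregated_reliability(transitions, final_state, iterations=200):
    """transitions: {state: [(next_state, success_probability), ...]}"""
    states = set(transitions) | {final_state}
    rel = {s: (1.0 if s == final_state else 0.0) for s in states}
    for _ in range(iterations):
        for s, outs in transitions.items():
            rel[s] = max((p * rel[nxt] for nxt, p in outs), default=0.0)
    return rel

fsm = {
    "start":     [("validated", 0.99)],
    "validated": [("charged_A", 0.95), ("charged_B", 0.90)],
    "charged_A": [("done", 0.98)],
    "charged_B": [("done", 0.99)],
}
print(aggregated_reliability(fsm, "done"))
```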
