About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
581

Combining web data mining techniques for web page access prediction

Khalil, Faten January 2008 (has links)
[Abstract]: Web page access prediction has gained importance with the ever-increasing number of e-commerce Web information systems and e-businesses. Web page prediction, which involves personalising Web users' browsing experiences, assists Web masters in improving the Web site structure and helps Web users navigate the site and access the information they need. The most widely used approach for this purpose is the pattern discovery process of Web usage mining, which entails techniques such as Markov models, association rules and clustering. Implementing such pattern discovery techniques helps predict the next page to be accessed by the Web user based on the user's previous browsing patterns. However, each of the aforementioned techniques has its own limitations, especially when it comes to accuracy and space complexity. This dissertation achieves better accuracy, lower state space complexity and fewer generated rules by performing the following combinations. First, we combine a low-order Markov model and association rules. Markov model analysis is performed on the data sets; if the Markov model prediction results in a tie or no state, association rules are used for prediction. The outcome of this integration is better accuracy, lower Markov model state space complexity and fewer generated rules than using each of the methods individually. Second, we integrate a low-order Markov model and clustering. The data sets are clustered and Markov model analysis is performed on each cluster instead of on the whole data set. The outcome of the integration is better accuracy than the first combination, with lower state space complexity than a higher-order Markov model. The last integration model combines all three techniques: clustering, association rules and a low-order Markov model. The data sets are clustered and Markov model analysis is performed on each cluster; if the Markov model prediction results in close accuracies for the same item, association rules are used for prediction. This integration model achieves better Web page access prediction accuracy, lower Markov model state space complexity and fewer generated rules than the previous two models.
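The first combination lends itself to a short illustration. The sketch below (Python; the session format, rule representation and tie-breaking are simplified assumptions, not the dissertation's exact procedure) trains a first-order Markov model on page transitions and falls back to pre-mined association rules when the Markov prediction ties or the state is unseen:

    from collections import defaultdict

    class HybridPredictor:
        """First-order Markov model with association-rule fallback on ties."""

        def __init__(self):
            # transitions[current_page][next_page] = observed count
            self.transitions = defaultdict(lambda: defaultdict(int))
            # frozenset of antecedent pages -> (consequent page, confidence)
            self.rules = {}

        def train(self, sessions):
            # Count page-to-page transitions across all user sessions.
            for session in sessions:
                for cur, nxt in zip(session, session[1:]):
                    self.transitions[cur][nxt] += 1

        def add_rule(self, antecedent, consequent, confidence):
            self.rules[frozenset(antecedent)] = (consequent, confidence)

        def predict(self, session):
            counts = self.transitions.get(session[-1], {})
            if counts:
                best = max(counts.values())
                candidates = [p for p, c in counts.items() if c == best]
                if len(candidates) == 1:
                    return candidates[0]  # unambiguous Markov prediction
            # Tie or unseen state: fall back to the most confident rule
            # whose antecedent is contained in the session so far.
            matches = [(conf, cons) for ant, (cons, conf) in self.rules.items()
                       if ant <= set(session)]
            return max(matches)[1] if matches else None

For instance, after training on the sessions ["home", "products", "cart"] and ["home", "about"], a session ending at "home" yields a Markov tie between "products" and "about"; a hypothetical rule added via add_rule(["home", "about"], "contact", 0.8) would then decide the prediction for a user who has visited both antecedent pages.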
582

The use of web metrics for online strategic decision-making

Weischedel, Birgit January 2005 (has links)
"I know but one freedom, and that is the freedom of the mind" Antoine de Saint-Exupery. Web metrics offer significant potential for online businesses to incorporate high-quality, real-time information into their strategic marketing decision-making (SDM) process. This SDM process is affected by the firm�s strategic direction, which is critical for web businesses. A review of the widely researched strategy and SDM literature identified that managers use extensive information to support and improve strategic decisions and make informed decisions. Offline SDM processes might be appropriate for the online environment but the limited literature on web metrics has not researched information needs for online SDM. Even though web metrics can be a valuable tool for web businesses to inform strategic marketing decisions, and their collection might be less expensive and easier than offline measures, virtually no published research has combined web metrics and SDM concepts into one research project. To address this gap in the literature, the thesis investigated the differences and commonalities of online and offline SDM process approaches, the use of web metrics categories for online SDM stages, and the issues encountered during that process through four research questions. A preliminary conceptual model based on the literature review was refined through preliminary research, which addressed the research questions and investigated the current state of web metrics. After investigating various methodologies, a multi-stage qualitative methodology was selected. The use of qualitative methods represents a contribution to knowledge regarding methodological approaches to online research. Four stages within the online SDM process were shown to benefit from the use of web metrics: the setting of priorities, the setting of objectives, the pretest stage and the review stage. The results identified the similarity of online and offline SDM processes; demonstrated that Traffic, Transactions, Customer Feedback and Consumer Behaviour categories provide basic metrics used by most companies; identified the Environment, Technology, Business Results and Campaigns categories as supplementary categories that are applied according to the marketing objectives; and investigated the results based on different types of companies (website classification, channel focus, size and cluster association). Three clusters were identified that relate to the strategic importance of the website and web metrics. Modifying the initial conceptual model, six issues were distinguished that affect the use of web metrics: the adoption and use of web metrics by managers; the integration of multiple sources of metrics; the establishment of industry benchmarks; data quality; the differences to offline measures; as well as resource constraints that interfere with the appropriate web metrics analysis. Links to offline marketing strategy literature and established business concepts were explored and explanations provided where the results confirmed or modified these concepts. Using qualitative methods, the research assisted in building theory of web metrics and online SDM processes. The results show that offline theories apply to the online environment and conventional concepts provide guidance for online processes. Dynamic aspects of strategy relate to the online environment, and qualitative research methods appear suitable for online research. Publications during this research project: Weischedel, B., Matear, S. and Deans, K. R. 
(2003) The Use of E-metrics in Strategic Marketing Decisions - A Preliminary Investigation. Business Excellence �03 - 1st International Conference on Performance Measures, Benchmarking and Best Practices in the New Economy, Guimaraes, Portugal; June 10-13, 2003. Weischedel, B., Deans, K. R. and Matear, S. (2004) Emetrics - An Empirical Study of Marketing Performance Measures for Web Businesses. Performance Measurement Association Conference 2004, Edinburgh, UK; July 28-30, 2004. Weischedel, B., Matear, S. and Deans, K. R. (2005) "A Qualitative Approach to Investigating Online Strategic Decision-Making" Qualitative Market Research, Vol. 8 No 1, pp. 61-76. Weischedel, B., Matear, S. and Deans, K. R. (2005) "The Use of Emetrics in Strategic Marketing Decisions - A Preliminary Investigation" International Journal of Internet Marketing and Advertising, Vol. 2 Nos 1/2, p. 109-125.
583

Accessibility of WVU Websites for individuals with vision impairments

Jacobin, Sarah. January 1900 (has links)
Thesis (M.S.)--West Virginia University, 2007. / Title from document title page. Document formatted into pages; contains viii, 40 p. : ill. (some col.). Vita. Includes abstract. Includes bibliographical references (p. 32-35).
584

Evaluating value in the web era: a proposed model for valuing non-commercial projects

Druel, François 14 November 2007 (has links) (PDF)
The arrival of the web enabled the emergence of many non-commercial projects: free, open, or open source, these creations do not base their value on scarcity but, on the contrary, on their openness and abundance. Moreover, these projects do not come from companies but from apparently informal organisations that sell no product and instead invite people to get involved in a project. The objective of our thesis is to propose a methodological framework for evaluating the value of non-commercial products and projects. We study value, the technological phenomenon, and then sharing tools. We then study methods for collecting and processing data. Finally, we examine methods for evaluating intangible assets. This research allows us to propose a model for evaluating the value of non-commercial projects in the web era. We rely on two axes, attractiveness and sustainability, and we define 18 evaluation criteria as well as a weighting scale, yielding a multi-criteria radar chart that supports decision-making. Our model is intended for the general public wishing to get involved in a project.
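As a rough illustration of the model's mechanics, the sketch below (Python) computes weighted per-axis scores that could feed such a radar chart. The criterion names, weights and ratings here are invented for illustration; the thesis defines its own 18 criteria and weighting scale:

    # Hypothetical criteria grouped under the model's two axes; the real
    # criteria and weights are those defined in the thesis.
    criteria = {
        "attractiveness": {"community_size": 3, "documentation": 2, "ease_of_entry": 1},
        "sustainability": {"release_frequency": 3, "governance": 2, "funding": 1},
    }

    def axis_scores(ratings):
        """Weighted average per axis; ratings map criterion -> score on a 0-5 scale."""
        scores = {}
        for axis, weights in criteria.items():
            total_weight = sum(weights.values())
            scores[axis] = sum(ratings[c] * w for c, w in weights.items()) / total_weight
        return scores

    print(axis_scores({
        "community_size": 4, "documentation": 3, "ease_of_entry": 5,
        "release_frequency": 2, "governance": 4, "funding": 1,
    }))  # e.g. {'attractiveness': 3.83..., 'sustainability': 2.5}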
585

Integrating 3G Internet into an active platform

Chamoun, Maroun 05 1900 (has links) (PDF)
Policy-based networks (PBN) define one of the most promising paradigms for managing and controlling network resources, and COPS tends to be the de facto protocol for this type of application. The question is how to deploy such services to enable exchanges between the network's protocol entities (PEP and PDP). This is where we propose to bring in the notion of active networks, since this paradigm makes it possible to embed executable code in transferred packets and/or nodes in order to distribute and dynamically configure network services. At the same time, there is currently great momentum behind deploying Web services on the Internet using the SOAP protocol. Still within the Internet, but from a different perspective, the semantic representation of data, through the definition of ontologies, allows software to interpret the data it manages intelligently rather than acting as mere passive storage. The synergy between these four paradigms (PBN, active networks, Web services, and semantic data representation) offers a highly attractive integrated and portable solution for representing the nodes of the active architecture and for designing, implementing and deploying services, more specifically a controllable, intelligent dynamic management service whose data (policies and rules) are represented by a single ontology.
586

Context Mediation in the Semantic Web: Handling OWL Ontology and Data Disparity through Context Interchange

Tan, Philip Eik Yeow, Tan, Kian Lee, Madnick, Stuart E. 01 1900 (has links)
The COntext INterchange (COIN) strategy is an approach to solving the problem of interoperability of semantically heterogeneous data sources through context mediation. COIN has used its own notation and syntax for representing ontologies. More recently, the OWL Web Ontology Language is becoming established as the W3C recommended ontology language. We propose the use of the COIN strategy to solve context disparity and ontology interoperability problems in the emerging Semantic Web – both at the ontology level and at the data level. In conjunction with this, we propose a version of the COIN ontology model that uses OWL and the emerging rules interchange language, RuleML. / Singapore-MIT Alliance (SMA)
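To give a flavour of what context mediation resolves, consider a toy sketch in Python (not COIN's actual abduction machinery or its OWL/RuleML encoding): two parties report monetary values under different currency and scale-factor contexts, and a mediator rewrites a value from the source context into the receiver context. The context names and exchange rate are invented for illustration.

    # Hypothetical contexts: each party declares the currency and scale
    # factor under which it reports monetary values.
    contexts = {
        "src_singapore": {"currency": "SGD", "scale": 1},
        "rcv_us": {"currency": "USD", "scale": 1000},  # reports in thousands
    }
    fx = {("SGD", "USD"): 0.74}  # illustrative exchange rate

    def mediate(value, source, receiver):
        src, rcv = contexts[source], contexts[receiver]
        value *= src["scale"]                       # normalize to base units
        if src["currency"] != rcv["currency"]:      # resolve currency disparity
            value *= fx[(src["currency"], rcv["currency"])]
        return value / rcv["scale"]                 # re-express in receiver units

    print(mediate(2_500_000, "src_singapore", "rcv_us"))  # -> 1850.0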
587

Learning Applications based on Semantic Web Technologies

Palmér, Matthias January 2012 (has links)
The interplay between learning and technology is a growing field that is often referred to as Technology Enhanced Learning (TEL). Within this context, learning applications are software components that are useful for learning purposes, such as textbook replacements, information gathering tools, communication and collaboration tools, knowledge modeling tools, rich lab environments that allow experiments, etc. When developing learning applications, the choice of technology depends on many factors: who the intended end-users are and how many, whether there are requirements to support in-application collaboration, platform restrictions, the expertise of the developers, requirements to interoperate with other systems or applications, etc. This thesis provides guidance on how to develop learning applications based on Semantic Web technology. The focus on Semantic Web technology is due to its basic design, which allows expression of knowledge at web scale. It also allows keeping track of who said what, providing subjective expressions in parallel with more authoritative knowledge sources. The intended readers of this thesis include practitioners such as software architects and developers as well as researchers in TEL and other related fields. The empirical part of this thesis is the experience from the design and development of two learning applications and two supporting frameworks. The first learning application is the web application Confolio/EntryScape, which allows users to collect files and online material into personal and shared portfolios. The second learning application is the desktop application Conzilla, which provides a way to create and navigate a landscape of interconnected concepts. Based upon this design and development experience as well as on more theoretical considerations outlined in this thesis, three major obstacles have been identified. The first obstacle is a lack of non-expert, user-friendly solutions for presenting and editing Semantic Web data that are not hard-coded to use a specific vocabulary. The thesis presents five categories of tools that support editing and presentation of RDF, and discusses a concrete software solution together with a list of the most important features that have crystallized during six major iterations of development. The second obstacle is a lack of solutions that can handle both private and collaborative management of resources together with related Semantic Web data. The thesis presents five requirements for a reusable read/write RDF framework and a concrete software solution that fulfills these requirements, along with a list of features that have emerged during four major iterations of development. The third obstacle is a lack of recommendations for how to build learning applications based on Semantic Web technology. The thesis presents seven recommendations in terms of architectures, technologies, frameworks, and the type of application to focus on. In addition, as part of the preparatory work to overcome the three obstacles, the thesis also presents a categorization of applications and a derivation of the relations between standards, technologies and application types.
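For readers unfamiliar with the vocabulary-agnostic, read/write style of RDF handling discussed above, here is a minimal sketch using the rdflib Python library (the namespace and resource URI are invented for illustration). The reading loop iterates over whatever triples exist rather than assuming a fixed schema:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS

    EX = Namespace("http://example.org/portfolio/")  # hypothetical namespace

    g = Graph()
    entry = URIRef(EX["entry/42"])

    # Write: describe a portfolio entry using standard Dublin Core terms.
    g.add((entry, DCTERMS.title, Literal("Field notes", lang="en")))
    g.add((entry, DCTERMS.creator, Literal("A. Student")))

    # Read: iterate over whatever properties the entry happens to have,
    # without hard-coding a specific vocabulary.
    for predicate, obj in g.predicate_objects(entry):
        print(predicate, obj)

    print(g.serialize(format="turtle"))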
588

Use Geospatial Web Service to Access Geospatial Data Base on Web2.0

Wu, Tsung-Han 16 August 2007 (has links)
The rapid development of the internet has changed the way people live, and it is widely said that we are now in the Web 2.0 era. This research explores how web GIS can fulfil the spirit of Web 2.0 and what its possible applications are. The first step of the research is to review the techniques and applications related to geospatial web services and Web 2.0. Then, a system with an open GIS data structure is proposed, and a web system is established according to the Web 2.0 spirit of "user participation". The Web Map Service (WMS) and Web Feature Service (WFS), defined by the Open Geospatial Consortium (OGC), are used in the geospatial web services system to search and view geospatial data on the internet. Users can integrate spatial data from various internet sources with their own geospatial data and save the result in the Web Map Context (WMC) file format; the WMC can then be exchanged with other OGC geospatial web services. In addition, the system supports file format transformation from WMC to KML, which is compatible with Google Earth, so users can view the spatial layer information more easily. This study also developed a platform for presenting geospatial information in blogs, so users can share their geospatial data in open GIS formats with other bloggers. The system also uses the Google Maps API and folksonomy in the data sharing process in order to speed up the web flow and let users communicate their comments more easily.
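Since WMS is a plain HTTP interface, requesting a rendered map layer reduces to a single URL. The sketch below (Python; the server address and layer name are invented for illustration) issues a GetMap request using the standard WMS 1.1.1 parameters:

    import urllib.parse
    import urllib.request

    base = "http://example.org/geoserver/wms"  # hypothetical WMS endpoint
    params = {
        "service": "WMS",
        "version": "1.1.1",
        "request": "GetMap",
        "layers": "demo:landuse",              # hypothetical layer name
        "bbox": "120.0,22.0,121.0,23.0",       # minx,miny,maxx,maxy
        "srs": "EPSG:4326",
        "width": "512",
        "height": "512",
        "format": "image/png",
    }
    url = base + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp, open("map.png", "wb") as out:
        out.write(resp.read())                 # save the rendered map image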
589

SunSpot: A Spatial Decision Support Web-Application for Exploring Urban Solar Energy Potential

Blakey, Andrew January 2013 (has links)
The growing necessity for meaningful climate change response has encouraged the development of global warming mitigation and adaptation initiatives. Urban solar energy generation is one opportunity that has been investigated by numerous cities through various solar potential Web-applications. However, as solar feasibility can vary considerably across a small geographic area due to variations in local topography and feature shading, there is no one-size-fits-all solution. This thesis investigates how a Web-based spatial decision support system (SDSS) can enable non-experts to explore urban solar feasibility and, to a lesser extent, issues related to urban heat. First, a conceptual framework is developed that investigates the linkages between SDSS, Web technologies, public participation, volunteered geographic information, and existing green energy initiatives. This framework identifies the relevance between these fields of study as well as a number of opportunities for improving on past work and taking advantage of new technical capabilities. Second, in order to test the opportunities identified, SunSpot was developed: a Web-SDSS that investigates rooftop solar feasibility as well as land cover and surface temperature dynamics relating to the urban heat-island effect in Toronto, Ontario, Canada. A number of solar resource datasets were developed to support SunSpot's decision-making capabilities, using a combination of topographical data sources, atmospheric data, and a raster-based irradiance model called Solar Analyst. Third, a number of in-person workshops were conducted to obtain feedback on SunSpot's usability and on users' ability to understand the visual layers and results. Finally, this feedback was analyzed to identify the successes and challenges of SunSpot's capabilities and design, yielding a number of recommendations for the further development of SunSpot as well as opportunities for future research on developing local-scale solar resource data and similar Web-SDSS applications.
590

M-crawler: Crawling Rich Internet Applications Using Menu Meta-model

Choudhary, Suryakant 27 July 2012 (has links)
Web applications have come a long way, both in their adoption for providing information and services and in the technologies used to develop them. With the emergence of richer and more advanced technologies such as Ajax, web applications have become more interactive, responsive and user-friendly. These applications, often called Rich Internet Applications (RIAs), changed traditional web applications in two primary ways: dynamic manipulation of client-side state and asynchronous communication with the server. At the same time, such techniques introduce new challenges. Among these, an important one is the difficulty of automatically crawling these new applications. Crawling is important not only for indexing content but also for web application assessment, such as testing for security vulnerabilities or accessibility. Traditional crawlers are no longer sufficient for these newer technologies, and crawling support for RIAs is either nonexistent or far from perfect. There is a need for an efficient crawler for web applications developed using these new technologies, and as more and more enterprises use them to provide their services, the requirement for a better crawler becomes inevitable. This thesis studies the problems associated with crawling RIAs, which is fundamentally more difficult than crawling traditional multi-page web applications. The thesis also presents an efficient RIA crawling strategy and compares it with existing methods.
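The difference from traditional crawling can be made concrete with a sketch. A traditional crawler follows URLs; an RIA crawler must instead explore the client-side states reachable through events such as Ajax-triggering clicks. Below is a minimal breadth-first state-exploration loop in Python (the get_events and execute callbacks are abstractions assumed for illustration; the thesis' menu meta-model orders event exploration far more efficiently than plain BFS):

    from collections import deque

    def crawl_ria(initial_state, get_events, execute):
        """Breadth-first exploration of an RIA's client-side state graph.

        get_events(state) -> iterable of events enabled in that state
        execute(state, event) -> resulting state, e.g. the DOM after an
        Ajax round-trip, identified by something hashable like a DOM hash
        """
        seen = {initial_state}
        frontier = deque([initial_state])
        transitions = []
        while frontier:
            state = frontier.popleft()
            for event in get_events(state):
                nxt = execute(state, event)
                transitions.append((state, event, nxt))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return transitions  # the discovered model of the application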
