711

Intelligent Caching to Mitigate the Impact of Web Robots on Web Servers

Rude, Howard Nathan January 2016 (has links)
No description available.
712

A Priority-Based Admission Control Scheme for Commercial Web Servers

Nafea, Ibtehal T., Younas, M., Holton, Robert, Awan, Irfan U. January 2014 (has links)
No / This paper investigates the performance and load management of web servers deployed in commercial websites. Such websites offer services such as flight/hotel booking, online banking, stock trading, and product purchases. Customers increasingly rely on these round-the-clock services, which are easier and (generally) cheaper to use. However, the growing volume of customer requests places greater demand on the web servers, leading to server overload and, consequently, an inadequate level of service. This paper addresses these issues and proposes an admission control scheme based on class-based priorities, which classifies customers' requests into different classes. The proposed scheme is formally specified using the π-calculus and is implemented as a Java-based prototype system. The prototype is used to simulate the behaviour of commercial website servers and to evaluate their performance in terms of response time, throughput, arrival rate, and the percentage of dropped requests. Experimental results demonstrate that the proposed scheme significantly improves the performance of high-priority requests without causing adverse effects on low-priority requests.
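The abstract gives no implementation details, so as a rough illustration of what class-based priority admission control can look like, here is a minimal Python sketch assuming two request classes and a fixed admission capacity; all names, parameters, and the displacement policy are illustrative, not taken from the paper.

```python
import heapq
import itertools

HIGH, LOW = 0, 1   # smaller value = higher priority
CAPACITY = 100     # illustrative server admission limit

class AdmissionController:
    """Admit requests by class priority; drop low-priority work under overload."""

    def __init__(self, capacity=CAPACITY):
        self.capacity = capacity
        self.queue = []                # (priority, seq, request) min-heap
        self.seq = itertools.count()   # tie-breaker preserving arrival order

    def admit(self, request, priority):
        if len(self.queue) < self.capacity:
            heapq.heappush(self.queue, (priority, next(self.seq), request))
            return True
        # Overloaded: a high-priority arrival may displace queued low-priority work.
        worst = max(self.queue)  # numerically largest priority = lowest-priority entry
        if priority < worst[0]:
            self.queue.remove(worst)
            heapq.heapify(self.queue)
            heapq.heappush(self.queue, (priority, next(self.seq), request))
            return True
        return False  # request dropped

    def next_request(self):
        """Serve the highest-priority (then oldest) queued request."""
        return heapq.heappop(self.queue)[2] if self.queue else None
```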
713

Enforcing Trade Secrets among Competitors on the Semantic Web

Malik, Choudhry Muhammad Zaki 25 August 2004 (has links)
In this thesis, we present a novel approach for the preservation of trade secrets in a Business-to-Business (B2B) environment that involves trade among competitors. The Web provides a low-cost medium for B2B collaborations, during which information may be exchanged. The exchanged information may be of a sensitive nature, forming a business trade secret. The open nature of the Web calls for techniques to prevent the disclosure of trade secrets. The emerging Semantic Web is expected to make the challenge of trade secret protection more acute due to the automation of B2B interactions. In this thesis, the different businesses are represented by Web services on the envisioned Semantic Web. We propose a Peer-to-Peer (P2P) approach for preserving trade secrets in B2B interactions and introduce a set of techniques based on data perturbation for preserving data privacy. The techniques presented in this thesis are implemented in WebBIS, a prototype for accessing e-business Web services. Finally, we conduct an extensive performance study (analytical and experimental) of the proposed techniques. / Master of Science
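As a rough illustration of the data-perturbation idea the abstract names (additive noise is one common form), here is a minimal Python sketch; the noise model and parameters are assumptions, not details from the thesis.

```python
import random

def perturb(values, noise_scale=0.1):
    """Return values with zero-mean Gaussian noise added, masking exact
    figures while roughly preserving aggregate statistics."""
    return [v + random.gauss(0.0, noise_scale * abs(v)) for v in values]

# Example: a supplier shares perturbed unit costs instead of the
# trade-secret originals (hypothetical data).
unit_costs = [12.40, 9.85, 15.10]
print(perturb(unit_costs))
```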
714

Florida "A" high school webmaster perceptions on faculty use of internet school websties

Wunderlich, Erwin J. 01 January 2002 (has links)
This exploratory study examined Florida "A" high school Webmasters' perceptions of faculty use of Internet school websites. Variables studied in relation to the extent of use of the WWW site for curricular and instructional support purposes included the following: Webmaster age, Webmaster years of experience, the fraction of time the Webmaster devoted to the function, school student enrollment, and the school's White student fraction. Webmasters were surveyed regarding their perceptions. A total of 33 surveys were returned, for an effective response rate of 61.11%. A model was also presented for World Wide Web (WWW) site development, along with curricular, instructional support, and other content items for consideration on a school WWW site. Further, fifty "A" school WWW sites were reviewed for these items. Collected data were coded and entered into an SPSS database. The data revealed that only a small fraction of the faculty were posting curricular and instructional support information to the school WWW site. Webmasters believed that teachers faced many problems in their use of the school WWW site. Time was the most frequently listed problem, followed in order by willingness to use, knowledge or experience, ease of use or access, and support. The Webmasters believed that existing policies concerning their WWW sites were adequate. Anticipated plans for their websites tended not to involve changes in site purpose, although some schools and districts were implementing changes to allow teachers to post material directly to the school WWW site rather than going through the Webmaster. Finally, the data showed a significant positive correlation between the fraction of time the Webmaster devoted to the Webmaster function and the extent of use of the WWW site for curricular and instructional support purposes. The implication for educational leaders is that many factors over which they have considerable control affect teachers' use of the school WWW site. Educational leaders can help generate receptiveness toward use of the WWW site for curricular and instructional support purposes by influencing pre-service and in-service training programs, by allowing teachers administrative time to maintain the school WWW site, by surveying school personnel regarding improvement of the site, and by updating their technology plans to reflect goals for improved utilization of the site.
715

Traumatic brain injury options web application

Nagulavancha, Sruthi January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Daniel A. Andresen / According to the Division of Injury Response, Centers for Disease Control and Prevention, approximately 1.4 million Americans sustain a traumatic brain injury each year. The aim of the project is to create a web interface linking survivors, family members, and caregivers of individuals with traumatic brain injuries (TBI) to potentially helpful agencies or service centers within their local communities. Because TBI service centers located in remote places are often difficult to find, this website concentrates mainly on small rural centers in the state of Kansas. The portal offers two-dimensional maps and basic information about traumatic brain injury centers and, specifically, about access to resources. Within the portal, a link to an interactive map is provided. A data-entry form lets service centers publish their presence and the regions they serve. The website also includes a search-by-distance feature that interactively finds the TBI service center whose latitude/longitude coordinates are nearest to the user's location, using the haversine formula.
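The abstract mentions using the haversine formula to find the nearest service center. As an illustration, here is a minimal Python sketch of that computation; the function names and record layout are illustrative, not taken from the project.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in miles."""
    r = 3959.0  # mean Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_center(user_lat, user_lon, centers):
    """Return the center record closest to the user's location.

    `centers` is a list of {"name": ..., "lat": ..., "lon": ...} dicts,
    e.g. rows loaded from a hypothetical service-center table.
    """
    return min(centers, key=lambda c: haversine_miles(user_lat, user_lon, c["lat"], c["lon"]))
```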
716

Blue search in Kansas river database

Sama, Haritha Reddy January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Daniel Andresen / Students working as research assistants under Dr. Craig Paukert, Division of Biology, Kansas State University, have problems storing and retrieving their field data directly from a database. Their data generally relate to fishes in the Kansas River and are entered on data sheets during fieldwork. They record fish-related data such as tag numbers (each fish caught is tagged and, after its data are noted, released back into the river), along with the size of the fish, the station where it was caught, and information about shore habitats, water temperature, depth, and so on. They also need the data to track aging in recaptured fish (fish caught already bearing a tag). The data from these sheets is entered into the database manually, and each student enters their own information separately when they should all be working on a single database, which leads to inconsistent data. The database also does not directly show the concentration or location of the fishes; extracting this requires considerable analysis of the data. The main objective of this project is to provide a solution to these problems: a user-friendly interface that stores and retrieves data, displays it on maps for easy analysis, and keeps the data centralized, removing the hassle of entering data into the database manually.
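As a rough sketch of one analysis the abstract implies (identifying recaptured fish from tag numbers and measuring growth between captures), here is a minimal Python example; the record layout and field names are assumptions, not from the project.

```python
from collections import defaultdict

def find_recaptures(records):
    """Group catch records by tag number; tags seen more than once are recaptures.

    `records` is a list of (tag, date, length_mm) tuples, e.g. rows pulled
    from a hypothetical catch table; ISO date strings sort chronologically.
    """
    by_tag = defaultdict(list)
    for tag, date, length in records:
        by_tag[tag].append((date, length))
    return {tag: sorted(catches) for tag, catches in by_tag.items() if len(catches) > 1}

# Example: growth between first capture and latest recapture (hypothetical data).
records = [("T001", "2009-05-01", 412), ("T002", "2009-05-03", 380), ("T001", "2009-08-14", 431)]
for tag, catches in find_recaptures(records).items():
    print(tag, "grew", catches[-1][1] - catches[0][1], "mm between captures")
```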
717

Information retrieval on the world wide web

Lee, Kwok-wai, Joseph, 李國偉 January 2001 (has links)
published_or_final_version / Computer Science and Information Systems / Doctoral / Doctor of Philosophy
718

Web 2.0: A case study of Mindroute Software AB

Fried, Carl, Edlund, Alexander January 2008 (has links)
<p>The paper was written on the premise that knowledge and knowledge management are central competitive business advantages and that IT has developed favorably towards alleviating the process of harnessing intellectual capital. This topic has been frequently discussed with significant variances in theoretical approaches arguing both for and against IT’s impact in knowledge management. The authors have empirically explored the role of IT in this field through findings from a case study which quite originally manages knowledge with the emergence of novel web 2.0 tools. The findings of this study leads us to suggest a revised understanding of IT’s role in knowledge management based on the development of socially interactive web 2.0 which has the potential of replacing less effective and outdated IT tools. Optimal impact is suggested to be accomplished by sequentially integrating these various web 2.0 tools into the knowledge management process.</p>
719

Intelligent Software Agents for Electronic Commerce

Chen, Kristin M., Chen, Hsinchun January 2000 (has links)
Artificial Intelligence Lab, Department of MIS, University of Arizona / Electronic commerce (EC) and software agents are two of the hottest fields of research in information science. As the Internet rapidly becomes a popular marketplace for consumers and sellers of goods and services, combining these two research areas offers lucrative opportunities both for businesses wishing to conduct transactions over the World Wide Web (WWW) and for developers of tools to facilitate this trend. The focus of this chapter is on software agents specifically designed for electronic commerce activities. We will briefly describe the history of agent research in general and the defining characteristics of agents, and will touch on the different types of agents. Following this introduction, we will describe the learning and action mechanisms that make it possible for agents to perform tasks. Finally, we will describe the issues associated with the deployment of electronic commerce agents (ECAs).
720

Hardening the Browser: Protecting Patron Privacy on the Internet

Phetteplace, Eric, Kern, Mary Kathleen January 2012 (has links)
As more and more time is spent accessing and producing content online, libraries need to position themselves to offer Internet privacy to patrons as well. This column reviews tactics for securing web browsers, from selecting a high-quality piece of software, to setting strong defaults, to installing add-ons that extend the capabilities of the browser.
