171

Collection Development in the Syllabus of Library and Information Science: An International Comparative Analysis

Pérez-López, Ana January 2000 (has links)
[English abstract] The objective of this work is to show the current state of education in collection development within the new technological environment in which we are immersed, and to report the results of an international study of faculties and departments of Library and Information Science (LIS) conducted through their Web pages, reviewing both the courses that cover collection development and their specific syllabi. The results indicate that, at present, the English-speaking countries are the ones that offer specialized courses on collection development and that have this education integrated into the syllabus. In most cases, this education takes place in the second cycle and carries enrollment prerequisites. By contrast, in many schools and faculties of Europe, Latin America, Asia and Africa, the proportion of specific courses on collection development is very low. Another emerging field addressed in this work is education on the virtual library; data are provided on the courses proposed at the universities visited, since collection development always appears in their contents.
172

Designing web-based instruction: A human-computer interaction perspective

Dillon, Andrew, Zhu, Erping January 1997 (has links)
This item is not the definitive copy. Please use the following citation when referencing this material: Dillon, A. and Zhu, E. (1997) Designing Web Based Instruction: A Human-Computer Interaction (HCI) Perspective. In: Khan (ed.) Web-Based Instruction. Englewood Cliffs, NJ: Educational Technology Publications, 221-225. Introduction: The general interest in the World Wide Web (WWW) as a medium for sharing and distributing textual and graphic information has brought about an increasing number of instruction-oriented web sites and web-based instructional pages. These range from offering supplemental (or even duplicate) instructional materials to students on campus to providing opportunities for off-campus individuals to complete courses via the WWW. This chapter briefly discusses the design of web-based instruction from an HCI perspective, raising issues that instructors and designers need to consider and suggesting ways in which they can build optimal web instructional sites and pages.
173

Pattern of online library resource usage per user in a distributed graduate education environment

Kramer, Stefan January 2006 (has links)
The frequency distribution of online library resource usage by individual users (mostly students) at a distributed-education graduate school is notably skewed, with a relatively small number of users showing frequent usage and a large number of users showing infrequent usage.
174

BUILDING AN ARTIFICIAL CEREBELLUM USING A SYSTEM OF DISTRIBUTED Q-LEARNING AGENTS

Soto Santibanez, Miguel Angel January 2010 (has links)
About 400 million years ago, sharks developed a separate co-processor in their brains that made them not only faster but also more precisely coordinated. This co-processor, nowadays called the cerebellum, allowed sharks to outperform their peers and survive as one of the fittest. For the last 40 years or so, researchers have been attempting to provide robots and other machines with this type of capability. This thesis discusses currently used methods for creating artificial cerebellums and points out two main shortcomings: 1) framework usability issues and 2) building-block incompatibility issues. This research argues that the framework usability issues hinder the production of good-quality artificial cerebellums for a large number of applications. Furthermore, this study argues that the building-block incompatibility issues make artificial cerebellums less efficient than they could be, given our current technology. To tackle the framework usability issues, this thesis proposes a new framework, which formalizes the task of creating artificial cerebellums and offers a list of simple steps to accomplish this task. To tackle the building-block incompatibility issues, this research proposes thinking of artificial cerebellums as a set of cooperating Q-learning agents, which utilize a new technique called Moving Prototypes to make better use of the available memory and computational resources. In addition, this work describes a set of general guidelines that can be applied to accelerate the training of this type of system. Simulation is used to show examples of the performance improvements resulting from the use of these guidelines. To illustrate the theory developed in this dissertation, a cerebellum is implemented for a real-life application, namely one capable of controlling a type of mining equipment called a front-end loader. Finally, this thesis proposes the creation of a development tool based on this formalization. This research argues that such a development tool would allow engineers, scientists and technicians to quickly build customized cerebellums for a wide range of applications without needing to become experts in Artificial Intelligence, Neuroscience or Machine Learning.
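As an illustrative aside for readers unfamiliar with the building block this abstract names, the sketch below shows a minimal tabular Q-learning agent. It is only a sketch under stated assumptions: the state/action discretization, reward, and learning parameters are hypothetical placeholders, and the thesis's Moving Prototypes memory scheme and the coupling between cooperating agents are not reproduced here.

```python
# Minimal tabular Q-learning agent -- an illustrative sketch only.
# States, actions, and rewards are hypothetical; the thesis's
# "Moving Prototypes" technique is not reproduced here.
import random
from collections import defaultdict


class QLearningAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)   # maps (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

In a multi-agent arrangement of the kind the abstract suggests, several such agents could each own one slice of the control problem (for example, one actuator of the loader); how the thesis actually coordinates them is not captured by this sketch.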
175

Transportation system modeling using the High Level Architecture

Melouk, Sharif 30 September 2004 (has links)
This dissertation investigates the High Level Architecture (HLA) as a possible distributed simulation framework for transportation systems. The HLA is an object-oriented approach to distributed simulations developed by the Department of Defense (DoD) to handle the issues of reuse and interoperability of simulations. The research objectives are as follows: (1) determine the feasibility of making existing traffic management simulation environments HLA compliant; (2) evaluate the usability of existing HLA support software in the transportation arena; (3) determine the usability of methods developed by the military to test for HLA compliance on traffic simulation models; and (4) examine the possibility of using the HLA to create Internet-based virtual environments for transportation research. These objectives were achieved in part via the development of a distributed simulation environment using the HLA. Two independent traffic simulation models (federates) comprised the environment (federation). A CORSIM federate models a freeway feeder road with an on-ramp while an Arena federate models a tollbooth exchange.
176

Scalable approximations to causality and consistency of distributed objects

Torres-Rojas, Francisco Jose 08 1900 (has links)
No description available.
177

A NEW POWER SIGNAL PROCESSOR FOR CONVERTER-INTERFACED DISTRIBUTED GENERATION SYSTEMS

Yazdani, Davood 27 January 2009 (has links)
Environmentally friendly renewable energy technologies such as wind and solar energy systems are among the fleet of new generating technologies driving the demand for distributed generation of electricity. Power electronics has initiated the next technological revolution and enables the connection of distributed generation (DG) systems to the grid. The challenge is to achieve system functionality without extensive custom engineering, yet still have high system reliability and generation placement flexibility. Nowadays, there is a general trend toward increasing electricity production using DG systems. If these systems are not properly controlled, their connection to the utility network can create problems on the grid side. Therefore, power generation, safe operation and grid synchronization must be considered before connecting these systems to the utility network. This thesis introduces a new grid-synchronization tool, or more precisely a new "power signal processor" based on adaptive notch filtering (ANF), that can potentially stimulate much interest in the field and provide improved solutions for grid-connected operation of DG systems. The processor is simple and offers a high degree of immunity and insensitivity to power system disturbances, harmonics and other types of pollution present in the grid signal. The processor is capable of decomposing three-phase quantities into symmetrical components, extracting harmonics, tracking frequency variations, and providing the means for voltage regulation and reactive power control. In addition, this simple and powerful synchronization tool will simplify the control issues currently challenging the integration of distributed energy technologies onto the electricity grid. All converter-interfaced equipment, such as FACTS (flexible AC transmission systems) devices and Custom Power Controllers, will benefit from this technique. The theoretical analysis is presented, and simulation and experimental results confirm the validity of the analytical work. / Thesis (Ph.D, Electrical & Computer Engineering) -- Queen's University, 2009-01-27 11:37:07.279
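For readers unfamiliar with adaptive notch filtering, the sketch below shows one common single-sinusoid ANF formulation from the literature, in which the filter state tracks the input and an adaptation law steers the notch frequency toward the grid frequency. It is a hedged illustration of the general technique, not the specific three-phase processor developed in the thesis; the gains and sampling rate are arbitrary choices for the example.

```python
import math


def anf_track(u, dt, zeta=0.7, gamma=1e4, theta0=2 * math.pi * 50):
    """Track the frequency of a roughly sinusoidal signal with a basic
    adaptive notch filter (a common textbook-style formulation):
        x'' + theta^2 * x = 2*zeta*theta*e,   e = u - x'
        theta' = -gamma * x * theta * e
    Integrated with forward Euler; returns the final estimate in rad/s."""
    x, x_dot, theta = 0.0, 0.0, theta0
    for u_k in u:
        e = u_k - x_dot                       # error between input and filter output
        x_ddot = 2.0 * zeta * theta * e - theta * theta * x
        theta_dot = -gamma * x * theta * e    # frequency adaptation law
        x += x_dot * dt
        x_dot += x_ddot * dt
        theta += theta_dot * dt
    return theta


# Example: one second of a 50.5 Hz test signal sampled at 10 kHz;
# the printed estimate should settle near 50.5 Hz.
fs = 10_000
u = [math.sin(2 * math.pi * 50.5 * k / fs) for k in range(fs)]
print(anf_track(u, dt=1.0 / fs) / (2 * math.pi))
```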
178

Mining frequent sequences in one database scan using distributed computers

Brajczuk, Dale A. 01 September 2011 (has links)
Existing frequent-sequence mining algorithms perform multiple scans of a database, or of a structure that captures the database. In this M.Sc. thesis, I propose a frequent-sequence mining algorithm that mines each database row as it is read, so that it can potentially complete mining in the time it takes to read the database once. I achieve this by having my algorithm enumerate all sub-sequences of each row as it reads it. Since sub-sequence enumeration is a time-consuming process, I create a method to distribute the work over multiple computers, processors, and thread units, while balancing the load across all resources and limiting the amount of communication, so that my algorithm scales well with regard to the number of computers used. Experimental results show that my algorithm is effective and can potentially complete the mining process in nearly the time it takes to perform one scan of the input database.
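As a rough illustration of the single-scan idea described in this abstract, the sketch below enumerates every non-empty sub-sequence of each row as the row is read and accumulates support counts in one pass. The distribution across computers, processors, and threads and the load-balancing scheme from the thesis are not shown, and the tiny database and support threshold are hypothetical.

```python
# Single-scan sketch: enumerate all sub-sequences of each row as it is read,
# tallying support counts in one pass over the database. The multi-machine
# distribution and load balancing proposed in the thesis are omitted.
from collections import Counter
from itertools import combinations


def subsequences(row):
    """Yield all non-empty order-preserving sub-sequences of a row."""
    for length in range(1, len(row) + 1):
        for idx in combinations(range(len(row)), length):
            yield tuple(row[i] for i in idx)


def mine_one_scan(database, min_support):
    counts = Counter()
    for row in database:                          # each row is processed as it is read
        counts.update(set(subsequences(row)))     # count each sub-sequence once per row
    return {seq: c for seq, c in counts.items() if c >= min_support}


# Tiny illustrative database (hypothetical data).
db = [("a", "b", "c"), ("a", "c"), ("b", "c"), ("a", "b", "c", "d")]
print(mine_one_scan(db, min_support=2))
```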
179

Language features for fully distributed processing systems

Maccabe, Arthur Bernard January 1982 (has links)
No description available.
180

Operational survivability in gracefully degrading distributed processing systems

Martin, Edith Waisbrot January 1980 (has links)
No description available.
