  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
661

Circuits and programs for handling nuclear data.

Kuchela, Kavudavadi Seshagiriah January 1968 (has links)
No description available.
662

Computer Go-Muku

Yuen, Jeanne Y. Y. January 1988 (has links)
No description available.
663

Data compression systems.

Husson, Georges Eugene. January 1970 (has links)
No description available.
664

Education and technology : a critical study of introduction of computers in Pakistani public schools

Arshad-Ayaz, Adeela. January 2006 (has links)
The importance of technology in education cannot be overstated, and there are compelling reasons for developing nations like Pakistan to introduce technology in their educational systems. Nevertheless, the approach and methods used in introducing technology in schools are premised on an economic ideology and based on a techno-centric curriculum that leads to new forms of dependency by keeping individuals from controlling the decisions that significantly shape their lives.

The introduction of technology does not automatically guarantee enhanced learning or effective teaching. Technology in education should be used as a tool to increase communication, create awareness, break down existing hierarchies, develop new styles of creating knowledge, and make schooling and education more inclusive. Mere technical use of computers in education does nothing to empower students.

The techno-centric introduction of technology in Pakistani public schools is likely to produce inequality. A number of practices in Pakistan's educational and social structure will have to change for the potential of technology to be fully realized. A shift is needed from 'learning about computers' to 'using computers in learning', from 'acquisition of limited skills' to 'construction of knowledge', from 'teacher dependency' to 'independent inquiry', and from 'teacher-centered' to 'student-centered' teaching methods.

However, such a change can only take place within a critical framework of education. A critical model based on an integrated curriculum treats the computer not as an isolated subject but as a tool that helps learners enhance their critical thinking skills and seek alternative ways to solve problems.

Thus, it is important for educational policy-makers to realize that any effort at introducing technology in the educational realm requires theoretical discussion and a societal dialogue to arrive at a framework for technology's place in socio-educational contexts. Pakistan needs to develop and introduce educational technology in ways that address its particular economic, social, cultural and human development needs, based on its present level of development.
665

TCP Adaptation Framework in Data Centers

Ghobadi, Monia 09 January 2014 (has links)
Congestion control has been extensively studied for many years. Today, the Transmission Control Protocol (TCP) is used in a wide range of networks (LAN, WAN, data center, campus network, enterprise network, etc.) as the de facto congestion control mechanism. Despite its common usage, TCP operates in these networks with little knowledge of the underlying network or traffic characteristics. As a result, it is doomed to continuously increase or decrease its congestion window size in order to handle changes in the network or traffic conditions. Thus, TCP frequently overshoots or undershoots the ideal rate, making it a "Jack of all trades, master of none" congestion control protocol. In light of the emerging popularity of centrally controlled Software-Defined Networks (SDNs), we ask whether we can take advantage of the information available at the central controller to improve TCP. Specifically, in this thesis, we examine the design and implementation of OpenTCP, a dynamic and programmable TCP adaptation framework for SDN-enabled data centers. OpenTCP gathers global information about the status of the network and traffic conditions through the SDN controller, and uses this information to adapt TCP. OpenTCP periodically sends updates to end-hosts which, in turn, update their behaviour using a simple kernel module. In this thesis, we first present two real-world TCP adaptation experiments in depth: (1) using TCP pacing in inter-data center communications with shallow buffers, and (2) using Trickle to rate-limit TCP video streaming. We explain the design, implementation, limitations, and benefits of each TCP adaptation to highlight the potential power of having a TCP adaptation framework in today's networks. We then discuss the architectural design of OpenTCP, as well as its implementation and deployment at SciNet, Canada's largest supercomputer center. Furthermore, we study use cases of OpenTCP using the ns-2 network simulator. We conclude that OpenTCP-based congestion control simplifies the process of adapting TCP to network conditions, leads to improvements in TCP performance, and is practical in real-world settings.
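To make the adaptation loop concrete, the following is a minimal, hypothetical sketch of an OpenTCP-style controller: it polls global network statistics, maps them to TCP parameters, and pushes the result to end-hosts. The statistics source, the policy rule, and the update mechanism here are editorial assumptions for illustration, not the interfaces described in the thesis.

```python
# Hypothetical sketch of an OpenTCP-style adaptation loop. The statistics source,
# policy rule, and end-host update mechanism are illustrative assumptions only.
import time
from dataclasses import dataclass

@dataclass
class LinkStats:
    utilization: float   # fraction of link capacity in use
    loss_rate: float     # observed packet loss rate

def collect_stats() -> LinkStats:
    """Stand-in for querying the SDN controller for global network state."""
    return LinkStats(utilization=0.35, loss_rate=0.0)

def choose_tcp_policy(stats: LinkStats) -> dict:
    """Map global conditions to TCP parameters (illustrative rule only)."""
    if stats.utilization < 0.5 and stats.loss_rate < 1e-4:
        # Lightly loaded network: allow a larger initial window, no pacing.
        return {"init_cwnd": 16, "pacing": False}
    # Congested or lossy network: conservative window, pacing to smooth bursts.
    return {"init_cwnd": 4, "pacing": True}

def push_to_end_hosts(policy: dict) -> None:
    """Stand-in for distributing an update to end-host kernel modules."""
    print(f"update end-hosts: {policy}")

if __name__ == "__main__":
    for _ in range(3):   # periodic adaptation epochs
        push_to_end_hosts(choose_tcp_policy(collect_stats()))
        time.sleep(1)
```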
667

Mining frequent itemsets from uncertain data: extensions to constrained mining and stream mining

Hao, Boyu 19 July 2010 (has links)
Most studies on frequent itemset mining focus on mining precise data. However, there are situations in which the data are uncertain. This leads to the mining of uncertain data. There are also situations in which users are only interested in frequent itemsets that satisfy user-specified aggregate constraints. This leads to constrained mining of uncertain data. Moreover, floods of uncertain data can be produced in many other situations. This leads to stream mining of uncertain data. In this M.Sc. thesis, we propose algorithms to deal with all these situations. We first design a tree-based mining algorithm to find all frequent itemsets from databases of uncertain data. We then extend it to mine databases of uncertain data for only those frequent itemsets that satisfy user-specified aggregate constraints and to mine streams of uncertain data for all frequent itemsets. Experimental results show the effectiveness of all these algorithms.
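A common formal basis for such algorithms is expected support: the expected support of an itemset X is the sum, over all transactions, of the product of the existential probabilities of X's items in that transaction. The sketch below computes it by naive enumeration; the toy database and threshold are illustrative assumptions, and a tree-based algorithm such as the one proposed in the thesis would avoid this exhaustive search.

```python
from itertools import combinations

# Each transaction maps items to existential probabilities (toy example).
uncertain_db = [
    {"a": 0.9, "b": 0.7, "c": 0.2},
    {"a": 0.5, "c": 0.8},
    {"b": 0.6, "c": 0.6},
]

def expected_support(itemset, db):
    """expSup(X) = sum over transactions of the product of P(x in t) for x in X,
    assuming items occur independently (the usual uncertain-data model)."""
    total = 0.0
    for t in db:
        p = 1.0
        for item in itemset:
            p *= t.get(item, 0.0)
        total += p
    return total

# Naive enumeration of all itemsets meeting a minimum expected support.
items = sorted({i for t in uncertain_db for i in t})
minsup = 0.8
frequent = [X for k in range(1, len(items) + 1)
            for X in combinations(items, k)
            if expected_support(X, uncertain_db) >= minsup]
print(frequent)
```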
668

Frequent pattern mining of uncertain data streams

Jiang, Fan January 2011 (has links)
When dealing with uncertain data, users may not be certain about the presence of an item in the database. For example, due to inherent instrumental imprecision or errors, data collected by sensors are usually uncertain. In various real-life applications, uncertain databases are not necessarily static: new data may arrive continuously and at a rapid rate. These uncertain data can come in batches, which form a data stream. To discover useful knowledge in the form of frequent patterns from streams of uncertain data, algorithms have been developed to use the sliding window model for processing and mining data streams. However, for some applications, the landmark window model and the time-fading model are more appropriate. In this M.Sc. thesis, I propose tree-based algorithms that use the landmark window model or the time-fading model to mine frequent patterns from streams of uncertain data. Experimental results show the effectiveness of our algorithms.
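For intuition, the sketch below shows one common way a time-fading model can be realized: previously accumulated expected supports are multiplied by a decay factor whenever a new batch arrives, so older batches contribute less. The decay value and the data are illustrative assumptions, not the thesis's parameters or algorithm.

```python
# Illustrative time-fading update of expected supports over batches of an
# uncertain stream; decay factor and data are assumptions for demonstration.
from collections import defaultdict

DECAY = 0.9  # weight applied to previously accumulated counts per new batch

def fade_and_add(totals, batch):
    """Age existing expected supports, then add this batch's contribution."""
    for item in totals:
        totals[item] *= DECAY
    for transaction in batch:            # transaction: {item: probability}
        for item, prob in transaction.items():
            totals[item] += prob         # expected support contribution
    return totals

stream = [
    [{"a": 0.8, "b": 0.4}, {"a": 0.6}],   # batch 1
    [{"b": 0.9}, {"a": 0.3, "b": 0.5}],   # batch 2
]
totals = defaultdict(float)
for batch in stream:
    fade_and_add(totals, batch)
print(dict(totals))   # items seen recently keep more of their weight
```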
669

Mining frequent patterns from uncertain data with MapReduce

Hayduk, Yaroslav 04 April 2012 (has links)
Frequent pattern mining from uncertain data allows data analysts to mine frequent patterns from probabilistic databases, within which each item is associated with an existential probability representing the likelihood of the presence of the item in the transaction. When compared with precise data, the solution space for mining uncertain data is often much larger due to the probabilistic nature of uncertain databases. Thus, uncertain data mining algorithms usually take substantially more time to execute. Recent studies show that the MapReduce programming model yields significant performance gains for data mining algorithms, which can be mapped to the map and reduce execution phases of MapReduce. An attractive feature of MapReduce is fault-tolerance, which permits detecting and restarting failed jobs on working machines. In this M.Sc. thesis, I explore the feasibility of applying MapReduce to frequent pattern mining of uncertain data. Specifically, I propose two algorithms for mining frequent patterns from uncertain data with MapReduce.
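As a rough illustration of how such a computation splits into MapReduce phases, the sketch below computes the expected support of single items: the map phase emits (item, probability) pairs from each uncertain transaction, and the reduce phase sums them per item. This is plain Python standing in for a MapReduce job, and an editorial simplification rather than either of the thesis's two algorithms.

```python
from collections import defaultdict

def map_phase(transaction):
    """Map: emit (item, existential probability) pairs from one transaction."""
    return list(transaction.items())

def reduce_phase(pairs):
    """Reduce: sum probabilities per item to obtain its expected support."""
    totals = defaultdict(float)
    for item, prob in pairs:
        totals[item] += prob
    return dict(totals)

# Toy uncertain database standing in for the distributed input.
uncertain_db = [{"a": 0.9, "b": 0.7}, {"a": 0.5, "c": 0.8}, {"b": 0.6, "c": 0.6}]
intermediate = [pair for t in uncertain_db for pair in map_phase(t)]
print(reduce_phase(intermediate))   # each item's expected support over the database
```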
670

Business process architectures with multiplicities : transformation and correctness

Eid-Sabbagh, Rami-Habib, Hewelt, Marcin, Weske, Mathias January 2013 (has links)
Business processes are instrumental in managing work in organisations. To study the interdependencies between business processes, Business Process Architectures have been introduced. These express trigger and message flow relations between business processes. When we investigate real-world Business Process Architectures, we find complex interdependencies involving multiple process instances. These aspects have not been studied in detail so far, especially concerning correctness properties. In this paper, we propose a modular transformation of BPAs to open nets for the analysis of behavior involving multiple business processes with multiplicities. For this purpose, we introduce intermediary nets to portray the semantics of multiplicity specifications. We evaluate our approach on a use case from the public sector.
