41 |
Optimising structured P2P networks for complex queriesFurness, Jamie R. January 2014 (has links)
With network-enabled consumer devices becoming increasingly popular, the number of connected devices and available services is growing considerably, with the number of connected devices estimated to surpass 15 billion by 2015. In this increasingly large and dynamic environment it is important that users have a comprehensive, yet efficient, mechanism to discover services. Many existing wide-area service discovery mechanisms are centralised and do not scale to large numbers of users. Additionally, centralised services suffer from issues such as a single point of failure, high maintenance costs, and difficulty of management. As such, this Thesis seeks a Peer-to-Peer (P2P) approach. Distributed Hash Tables (DHTs) are well known for their high scalability, low financial barrier to entry, and ability to self-manage. They can be used to provide not just a platform on which peers can offer and consume services, but also a means for users to discover such services. Traditionally DHTs provide a distributed key-value store with no search functionality. In recent years many P2P systems have been proposed that support a subset of complex query types, such as keyword search, range queries, and semantic search. This Thesis presents a novel algorithm for performing any type of complex query, from keyword search, to complex regular expressions, to full-text search, over any structured P2P overlay. This is achieved by efficiently broadcasting the search query, allowing each peer to process the query locally, and then efficiently routing responses back to the originating peer. Through experimentation, this technique is shown to be successful when the network is stable; however, performance degrades under high levels of network churn. To address the issue of network churn, this Thesis proposes a number of enhancements which can be made to existing P2P overlays in order to improve the performance of both the existing DHT and the proposed algorithm.
Through two case studies these enhancements are shown to improve not only the performance of the proposed algorithm under churn, but also the performance of traditional lookup operations in these networks.
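The broadcast-then-route-back approach described above can be sketched as follows. This is a minimal illustration of partitioned broadcast over a Chord-like ring, not the thesis's actual implementation; the node IDs, the in-memory `network` map standing in for real routing, and the `broadcast` interface are all assumptions:

```python
# Minimal sketch of partitioned broadcast over a Chord-like ring.
# A node forwards the query to its fingers, assigning each finger a
# disjoint slice of the ID space, so every node receives the query
# exactly once. All names here are illustrative.

ID_BITS = 8
RING = 1 << ID_BITS

class Node:
    def __init__(self, node_id, network):
        self.id = node_id
        self.network = network  # id -> Node; stands in for real routing
        self.received = []

    def fingers(self):
        # Successor of id + 2^k for each k, as in Chord.
        ids = sorted(self.network)
        result = []
        for k in range(ID_BITS):
            target = (self.id + (1 << k)) % RING
            succ = min((i for i in ids if i >= target), default=ids[0])
            if succ != self.id and succ not in result:
                result.append(succ)
        return result

    def broadcast(self, query, limit):
        # 'limit' bounds the slice of the ring this node must cover;
        # limit == self.id means the full ring.
        self.received.append(query)
        span = (limit - self.id - 1) % RING + 1
        fingers = [f for f in self.fingers()
                   if (f - self.id) % RING < span]
        for i, f in enumerate(fingers):
            # The next finger (or our own limit) caps the child's slice.
            child_limit = fingers[i + 1] if i + 1 < len(fingers) else limit
            self.network[f].broadcast(query, child_limit)

network = {}
for nid in (0, 32, 64, 100, 150, 200, 250):
    network[nid] = Node(nid, network)
network[0].broadcast("keyword:p2p", limit=0)  # limit=0 -> full ring
assert all(n.received == ["keyword:p2p"] for n in network.values())
```

Because each forwarded copy carries a tighter limit, no node is visited twice, which is what makes broadcasting the query over the overlay efficient; responses would then be routed back to the originator using the overlay's normal lookup path.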
|
42 |
RECORDERS IN NETWORKED DATA ACQUISITION SYSTEMSGrebe, David L. 10 1900 (has links)
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / The role of recorders in telemetry applications has undergone many changes throughout the years. We’ve seen the evolution from multi-track tape to disk to solid-state technologies, both for airborne and ground-based equipment. Data acquisition and collection system design has changed as well, and a recent trend in airborne systems is to merge acquisition and recording. On the ground, increased decentralization of data collection and processing has generated the requirement to provide backup storage to protect against communication circuit outages. This paper explores the trend to adopt network-based data acquisition, collection, and distribution systems for telemetry applications and the impact on recording techniques and equipment. It shows that in this emerging approach the recorder returns to its root mission of attempting to provide the fastest, largest capacity for the least amount of investment. In a network-based architecture the recorder need only accept and reproduce data, operating independently from the acquisition process.
|
43 |
Centralize (media) file sharing within organizations: Design guidelinesLundgren Bjuhr, Peter January 2015 (has links)
File sharing is a significant activity of enterprise computer use. In organizations, files are usually shared using e-mail attachments. However, large media files cannot be shared using e-mail, due to file size limitations. Instead, different external file sharing systems are used to share large files. The use and size of media files will continue to increase, which requires file sharing mechanisms that can handle this. For organizations today, it is difficult to find a file sharing application that fulfills all requirements and needs of the users, especially for large media-rich organizations, where the file sharing scenarios are many and files can be shared internally and externally with dissimilar feature and security requirements. An example of such an organization is Baggie, a fashion company where large media files are shared daily using various file sharing systems. The inconsistency of which system to use leads to confusion and frustration among its users. Additionally, their current file sharing systems do not fulfill all of Baggie’s users’ requirements, and no system is integrated with their media asset management system. This master thesis aims to solve the challenges of file sharing within large organizations, particularly media-rich organizations such as Baggie, by centralizing file sharing into one application. By performing a theoretical study and user studies, thirteen design guidelines for file sharing applications have been established. The guidelines focus on usability, security and users’ requirements regarding media file sharing. Based on the studies, a prototype has been designed for a new file sharing application: BShare. BShare aims to replace Baggie’s current file sharing systems, and the application fulfills all requirements of Baggie users. The BShare prototype can be seen as a reference design for file sharing applications.
|
44 |
Cheetah: An Economical Distributed RAM DriveTingstrom, Daniel 20 January 2006 (has links)
Current hard drive technology shows a widening gap between the ability to store vast amounts of data and the ability to process it. To overcome the problems of this secular trend, we explore the use of available distributed RAM resources to effectively replace a mechanical hard drive. The essential approach is a distributed Linux block device that spreads its blocks throughout spare RAM on a cluster and transfers blocks using network capacity. The presented solution is LAN-scalable, easy to deploy, and faster than a commodity hard drive. The specific driving problem is I/O-intensive applications, particularly digital forensics. The prototype implementation is a Linux 2.4 kernel module that connects to Unix-based clients. It features an adaptive prefetching scheme that fetches likely future data blocks for each read request. We present experimental results, based on generic benchmarks as well as digital forensic applications, that demonstrate significant performance gains over commodity hard drives.
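The adaptive prefetching idea mentioned in the abstract can be sketched roughly as follows. The doubling policy, the window cap, and all names are assumptions for illustration, not details taken from the Cheetah implementation:

```python
# Illustrative sketch of adaptive prefetching: on each read, detect
# sequential access and grow the number of blocks fetched ahead;
# reset the window on a random (non-sequential) access. The doubling
# policy and the cap of 32 blocks are assumed, not from the thesis.

class AdaptivePrefetcher:
    def __init__(self, max_window=32):
        self.last_block = None
        self.window = 1          # blocks to fetch ahead of the demand
        self.max_window = max_window

    def on_read(self, block):
        # Returns the list of blocks to request from remote RAM:
        # the demanded block plus the current prefetch window.
        if self.last_block is not None and block == self.last_block + 1:
            self.window = min(self.window * 2, self.max_window)  # sequential: grow
        else:
            self.window = 1                                      # random: reset
        self.last_block = block
        return list(range(block, block + 1 + self.window))

p = AdaptivePrefetcher()
p.on_read(10)            # cold read: window is 1
p.on_read(11)            # sequential: window doubles to 2
blocks = p.on_read(12)   # sequential again: window 4
assert blocks == [12, 13, 14, 15, 16]
```

Growing the window only on confirmed sequential access keeps random I/O cheap while letting streaming reads amortize network round trips, which matters when every block lives in another machine's RAM.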
|
45 |
Auditing database integrity with special reference to UNIX and INFORMIX17 March 2015 (has links)
M.Com. / Please refer to full text to view abstract
|
46 |
Designing and implementing a computer conferencing system to manage and track articles through the revision processDock, Patricia January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
|
47 |
Locality in logical database systems : a framework for analysis.McCabe, Edward James January 1978 (has links)
Thesis. 1978. M.S.--Massachusetts Institute of Technology. Alfred P. Sloan School of Management. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND DEWEY. / Bibliography: leaves 106-109. / M.S.
|
48 |
Extending the ASSIST sketch recognition systemHitchcock, Rebecca Anne, 1979- January 2003 (has links)
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. / Includes bibliographical references (p. 71-72). / by Rebecca Anne Hitchcock. / M.Eng.
|
49 |
Design and Analysis of a Highly Efficient File Server GroupLiu, Feng-jung 29 January 2005 (has links)
The IT community has increasingly come to view storage as a resource that should be shared among computer systems and managed independently of the computer systems it serves. Moreover, the explosive growth of Web content has drawn increasing attention to two major challenges: the scalability and high availability of network file systems. Therefore, ways to improve the reliability and availability of the system, to achieve the expected reduction in operational expenses, and to reduce system management operations have become essential issues. A basic technique for improving the reliability of a file system is to mask the effects of failures through replication. Consistency control protocols are implemented to ensure consistency among these replicas.
In this dissertation, we leveraged the concept of an intermediate file handle to hide the heterogeneity of the underlying file systems. However, the monolithic server system suffered from poor system utilization due to the lack of dependence checking between writes and management of out-of-order requests. Hence, in this dissertation, we followed the concept of the intermediate file handle and proposed an efficient data consistency control scheme, which attempts to eliminate unnecessary waits for independent NFS writes to improve the efficiency of the file server group. In addition, we also proposed a simple load-sharing mechanism for the NFS client to improve system throughput and the utilization of replicas. Finally, the results of experiments proved the efficiency of the proposed consistency control mechanism and load-sharing policy. Above all, ease of implementation is our main design consideration.
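The dependence check that motivates the scheme above can be sketched simply: two NFS writes conflict only if they target the same file and their byte ranges overlap, and a write with no conflicts need not wait behind pending ones. This is a simplification for illustration; the tuple layout and function names are assumptions, not the dissertation's protocol:

```python
# Sketch of dependence checking between NFS writes: writes to
# different files, or to disjoint byte ranges of the same file, are
# independent and can be issued to replicas without waiting.

def conflicts(w1, w2):
    """Each write is (file_handle, offset, length)."""
    f1, o1, n1 = w1
    f2, o2, n2 = w2
    if f1 != f2:
        return False
    return o1 < o2 + n2 and o2 < o1 + n1   # half-open interval overlap

def schedule(pending, new_write):
    """Return True if new_write may bypass all pending writes."""
    return not any(conflicts(w, new_write) for w in pending)

pending = [("fh1", 0, 4096), ("fh2", 0, 512)]
assert schedule(pending, ("fh1", 8192, 4096)) is True   # disjoint range
assert schedule(pending, ("fh1", 1024, 100)) is False   # overlaps first write
```

Eliminating waits for such independent writes is exactly the "unnecessary waits" the consistency scheme targets: only genuinely conflicting writes must be serialized across replicas.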
|
50 |
Macro-modeling and energy efficiency studies of file management in embedded systems with flash memoryGoyal, Nitesh 16 August 2006 (has links)
Technological advancements in computer hardware and software have made embedded systems highly affordable and widely used. Consumers have ever-increasing demands for powerful embedded devices such as cell phones, PDAs and media players. Such complex and feature-rich embedded devices are strictly limited by their battery lifetime. Embedded systems typically are diskless and use flash for secondary storage due to their low power, persistent storage and small form factor needs. The energy efficiency of a processor and flash in an embedded system heavily depends on the choice of file system in use. To address this problem, it is necessary to provide system developers with energy profiles of file system activities and energy-efficient file systems. In the first part of the thesis, a macro-model for the CRAMFS file system is established which characterizes the processor and flash energy consumption due to file system calls. This macro-model allows a system developer to estimate the energy consumed by CRAMFS without using an actual power setup. The second part of the thesis examines the effects of using non-volatile memory as a write-behind buffer to improve the energy efficiency of JFFS2. Experimental results show that a 4KB write-behind buffer significantly reduces energy consumption, by up to 2-3 times for consecutive small writes. In addition, the write-behind buffer conserves flash space since transient data may never be written to flash.
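The write-behind buffering described above can be sketched as follows. This is an illustrative model only; the flush callback, the coalescing policy, and the class name are assumptions, not the thesis's JFFS2 modification:

```python
# Illustrative write-behind buffer: consecutive small writes are
# coalesced in a 4KB non-volatile buffer and flushed to flash as one
# operation, so transient data overwritten before a flush may never
# cost a flash write at all. All names and sizes are assumed.

BUF_SIZE = 4096

class WriteBehindBuffer:
    def __init__(self, flush_cb):
        self.flush_cb = flush_cb   # called with (offset, bytes) on flush
        self.offset = None
        self.data = bytearray()

    def write(self, offset, data):
        contiguous = (self.offset is not None and
                      offset == self.offset + len(self.data))
        if not contiguous and self.data:
            self.flush()           # non-sequential write: flush first
        if self.offset is None:
            self.offset = offset
        self.data += data
        if len(self.data) >= BUF_SIZE:
            self.flush()           # buffer full: write through to flash

    def flush(self):
        if self.data:
            self.flush_cb(self.offset, bytes(self.data))
        self.offset = None
        self.data = bytearray()

flash_writes = []
buf = WriteBehindBuffer(lambda off, d: flash_writes.append((off, len(d))))
for i in range(8):
    buf.write(i * 64, b"x" * 64)   # eight consecutive 64-byte writes
buf.flush()
assert flash_writes == [(0, 512)]  # one flash write instead of eight
```

Collapsing eight small writes into one flash operation is where the measured 2-3x energy saving for consecutive small writes would come from: flash program and erase cycles dominate, so fewer, larger writes cost less energy.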
|