241

A NEW GENERATION OF RECORDING TECHNOLOGY: THE SOLID STATE RECORDER

Jensen, Peter; Thacker, Christopher (October 1998)
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California / The Test & Evaluation community is starting to migrate toward solid state recording. This paper outlines some of the important areas that are new to solid state recording and examines some of the issues involved in moving to a direct recording methodology. Some of the parameters used to choose a solid state memory architecture are included. A matrix comparing various methods of data recording, such as solid state and magnetic tape recording, is discussed; the methods are evaluated against the following parameters: ruggedness (shock, vibration, temperature), capacity, and reliability (error correction). A short discussion of data formats, with an emphasis on efficiency and usability, is included.
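
The paper's comparison matrix lends itself to a simple weighted-scoring sketch. The ratings and criterion weights below are illustrative placeholders, not values from the paper, which evaluates ruggedness (shock, vibration, temperature), capacity, and reliability (error correction).

```python
# Hypothetical weighted decision matrix for choosing a recording method.
# Ratings (1-5) and criterion weights are illustrative, not from the paper.
CRITERIA = {"ruggedness": 0.4, "capacity": 0.3, "reliability": 0.3}

methods = {
    "solid_state":   {"ruggedness": 5, "capacity": 3, "reliability": 5},
    "magnetic_tape": {"ruggedness": 2, "capacity": 5, "reliability": 3},
}

def score(ratings: dict) -> float:
    """Weighted sum of per-criterion ratings."""
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

# Print methods from best to worst overall score.
for name, ratings in sorted(methods.items(), key=lambda m: -score(m[1])):
    print(f"{name}: {score(ratings):.2f}")
```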
242

Telemetry Definition and Processing (TDAP): Standardizing Instrumentation and EU Conversion Descriptions

Campbell, Daniel A.; Reinsmith, Lee (October 1997)
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Telemetry format descriptions and engineering unit conversion calibrations are generated in an assortment of formats and numbering systems on various media. Usually this information comes to the central telemetry receiving/processing system from multiple sources, fragmented and disjointed. As present-day flight tests require ever more telemetry parameters to be instrumented and processed, standardizing and automating the handling of this growing amount of information becomes increasingly critical. In response to this need, the Telemetry Definition and Processing (TDAP) system has been developed by the Air Force Development Test Center (AFDTC), Eglin AFB, Florida. TDAP standardizes the format of the information required to convert PCM data and MIL-STD-1553 bus data into engineering units: both the format of the data files and the software necessary to display, output, and extract subsets of data. These standardized files are electronically available for TDAP users to review and update, and are then used to automatically set up telemetry acquisition systems. This paper describes how TDAP is used to standardize the development and operational test community's telemetry data reduction process, both real-time and post-test.
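
TDAP's file formats are not reproduced in this abstract; as a hedged sketch of the kind of engineering-unit conversion such a system standardizes, the snippet below applies a polynomial calibration to raw PCM counts. The coefficients and the example channel are invented for illustration; a real system would load them from a standardized calibration description rather than hard-coding them.

```python
# Polynomial engineering-unit (EU) conversion for one telemetry parameter.
# Coefficients here are made up; real ones come from calibration files.
def counts_to_eu(counts: int, coeffs: list[float]) -> float:
    """Evaluate c0 + c1*x + c2*x**2 + ... at x = raw counts (Horner's rule)."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * counts + c
    return result

# A hypothetical pressure channel: EU = -5.0 + 0.01 * counts
pressure_coeffs = [-5.0, 0.01]
print(counts_to_eu(2048, pressure_coeffs))  # 15.48 (hypothetical psi)
```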
243

Systems and applications for persistent memory

Dulloor, Subramanya R. (07 January 2016)
Performance-hungry data center applications demand increasingly higher performance from their storage, in addition to larger-capacity memory at lower cost. While existing storage technologies (e.g., HDD and flash-based SSD) are limited in their performance, the most prevalent memory technology (DRAM) is unable to address the capacity and cost requirements of these applications. Emerging byte-addressable, non-volatile memory technologies (such as PCM and RRAM) offer performance within an order of magnitude of DRAM, prompting their inclusion in the processor memory subsystem. Such load/store-accessible non-volatile or persistent memory (referred to as NVM or PM) introduces an interesting new tier that bridges the performance gap between DRAM and storage, serving the role of fast storage or slower memory. However, PM has several implications for system design, both hardware and software: (i) the hardware caching mechanisms, while necessary for acceptable performance, complicate the ordering and durability of stores to PM; (ii) the high performance of PM (compared to NAND) and the fact that it is byte-addressable necessitate rethinking the system software that manages PM and the interfaces that expose PM to applications; and (iii) future memory-based applications, which will likely employ systems coupling PM with DRAM (for cost and capacity reasons), must be extremely conscious of the performance characteristics of PM and of the challenges of using fast vs. slow memory in ways that best meet their performance demands. The key contribution of our research is a set of technologies that addresses these challenges in a bottom-up fashion. Since the real hardware is not yet available, we first implement a hardware emulator that can faithfully emulate the relative performance characteristics of DRAM and PM in a system with separate DRAM and emulated PM regions. We use this emulator to perform all of our evaluations. Next we explore system software support to enable low-overhead PM access by new and legacy applications. Toward this end, we implement PMFS, an optimized lightweight POSIX file system that exploits PM's byte-addressability to avoid the overheads of block-oriented storage and to enable direct PM access by applications (with memory-mapped I/O). To provide strong consistency guarantees, PMFS requires only a simple hardware primitive that provides software-enforceable guarantees of durability and ordering of stores to PM. We demonstrate that PMFS achieves significant (up to an order of magnitude) gains over traditional file systems (such as ext4) on a RAMDISK-like PM block device. Next, we address the problem of designing memory-based applications for systems with both DRAM and PM by extending our system software to manage both tiers. We demonstrate for several representative large in-memory applications that it is possible to use a small amount of fast DRAM and large amounts of slower PM without a proportional impact on application performance, provided data structures are placed carefully. To simplify application programming, we implement a set of libraries and automatic tools (called X-Mem) that enables programmers to achieve optimal data placement with minimal effort. Finally, we demonstrate the potentially large benefits of application-driven memory tiering with X-Mem across a range of applications.
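
PMFS exposes PM to applications through memory-mapped I/O. Below is a minimal sketch of that access model, assuming an ordinary file as a stand-in for a PM region; on real persistent memory, durability additionally requires the cache-line flush and ordering primitive the thesis describes, which Python's mm.flush() (msync) only approximates here.

```python
# Sketch of direct, memory-mapped access of the kind PMFS enables.
# An ordinary file stands in for a PM region; mm.flush() stands in for
# the hardware durability/ordering primitive on real PM.
import mmap
import os

path = "pm_region.bin"      # hypothetical PM-backed file
size = 4096

with open(path, "wb") as f:
    f.truncate(size)        # create and size the backing region

fd = os.open(path, os.O_RDWR)
mm = mmap.mmap(fd, size)    # load/store access, no read()/write() syscalls
mm[0:5] = b"hello"          # an ordinary store into the mapping
mm.flush()                  # request durability (msync on POSIX)
mm.close()
os.close(fd)
```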
244

“CTB -- catch the bus” : a theatrical examination of cybersuicide and its culture

Scheibmeir, Mark (26 October 2010)
A dramatized account of my discovery of, research into, and inclusion in the subculture of cybersuicide.
245

Internet piracy in Japan : Lessig’s modalities of constraint and Japanese file sharing

Field, Shirley Gene, 1985- (01 November 2010)
The rise of new digital technologies and the Internet has given more people than ever before the ability to copy and share music and video. Even as Japan has adopted stronger copyright protections, the number of Japanese peer-to-peer file sharing network users has multiplied. Though the distribution of copyrighted material online has long been illegal and, as of 2010, downloading copyrighted material is a criminal act as well, illegal file sharing continues apace, with the majority of people active on Japan's most popular file sharing programs remaining unaffected by the new legislation. Clearly the law alone does not constrain file sharing behavior in Japan and, in fact, it is not the only way Japan strives to enforce copyright law on the Internet. What strategies are industries and government taking to curb illegal file sharing, and are these strategies effective? How is unauthorized peer-to-peer file sharing cast as an act both immoral and worthy of criminal prosecution? Of particular interest are the evolution and growth of architectural and social constraints on online behavior alongside these legal constraints.
246

Convert your enemy into a friend : Innovation strategies for collaboration between record companies and BitTorrent networks

Andersen, Axel; Hristov, Emil (January 2009)
Problem: Record companies are facing a downturn in music sales. This is seen as a consequence of the growth of music distribution over the Internet by file sharing networks such as BitTorrent networks. On one side are record companies, which feel threatened by illegal file sharing; on the other, BitTorrent file sharing networks have grown dramatically in number of users since they first appeared. Some record companies have responded with hostile actions toward the BitTorrent networks and their users, pursuing lawsuits and penalties for illegal file sharing. Other record companies and artists have joined forces with BitTorrent networks and see them as an advantage. Purpose: The purpose of this paper is to explore and analyze whether, and how, record companies can collaborate with BitTorrent networks. Method: A hermeneutic inductive approach is used, in combination with qualitative interviews with both record companies and BitTorrent networks. Conclusions: It is argued that record companies can find a way to communicate and cooperate with BitTorrent networks. Instead of adopting hostile approaches and trying to restrict the technologies adopted by end users, companies should open themselves up and accept the current changes initiated and developed by BitTorrent networks. Companies should therefore concentrate on collaborating with BitTorrent networks rather than fiercely protecting old business models. By opening up to users, record companies adopt an open innovation approach, characterized by combining external and internal ideas, as well as internal and external paths to market, and thereby capture future technological developments. As for the BitTorrent networks, by moving from outlaw to crowdsourcing mode, the networks' creative solutions can be further harnessed by record companies. Finally, strengthening relationships between customers and music artists can benefit both record companies and BitTorrent networks: giving customers opportunities to win special items or concert tickets, watch the sound check, eat dinner backstage with the group, take pictures, get autographs, or watch the show from the side of the stage can lead to valuable relationships in the long run.
247

Understanding and Improving Personal File Retrieval

Fitchett, Stephen (January 2013)
Personal file retrieval – the task of locating and opening files on a computer – is a common task for all computer users. A range of interfaces is available to assist users in retrieving files, such as navigation within a file browser, search interfaces, and recent-items lists. This thesis examines two broad goals in file retrieval: understanding current file retrieval behaviour, and improving file retrieval by designing improved user interfaces. A thorough understanding of current file retrieval behaviour is important to the design of any improved retrieval tools; however, there has been surprisingly little research into the ways in which users interact with common file retrieval tools. To address this, the thesis describes a longitudinal field study that logs participants' file retrieval behaviour across a range of methods, using a specially developed logging tool called FileMonitor. Results confirm findings from previous research that search is used as a method of last resort, while providing new results characterising file retrieval. These include analyses of revisitation behaviour, file browser window reuse, and interactions between retrieval methods, as well as detailed characterisations of the use of navigation and search. Knowledge gained from this study assists in the design of three improvements to file navigation: Icon Highlights, Search Directed Navigation and Hover Menus. Icon Highlights highlight the items considered most likely to be accessed next. These highlights are determined using a new algorithm, AccessRank, which is designed to produce a set of results that is both accurate and stable over time. Search Directed Navigation highlights items that match, or contain items that match, a filename search query, allowing users to rehearse the mechanisms of expert performance in order to aid future retrievals, and providing greater context than the results of a traditional search interface. Hover Menus appear when the mouse cursor hovers above a folder, and provide shortcuts to highly ranked files and folders located at any depth within the folder, allowing users to reduce navigation times by skipping levels of the file hierarchy. These interfaces are evaluated in lab and field studies, allowing precise analysis of their relative strengths and weaknesses while also providing a high degree of external validity. Results of the lab study show that all three techniques reduce retrieval times and are subjectively preferred by participants. For the field study, fully functional versions of Icon Highlights and Search Directed Navigation are implemented as part of Finder Highlights, a plugin for OS X's file manager. Results indicate that Icon Highlights significantly reduce file retrieval times, and that Search Directed Navigation was useful to those who used it but faces barriers to adoption. Key contributions of this thesis include a review of previous literature on file management, a thorough characterisation of file retrieval behaviour, improved algorithms for predicting user behaviour, and three improved interfaces for file retrieval. This research has the potential to improve a tedious activity that users perform many times a day, while also providing generalisable algorithms and interface concepts that are applicable to a wide range of interfaces beyond file management.
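
The abstract does not reproduce AccessRank's formulation; as a hedged sketch of the combined recency-and-frequency scoring such a predictor rests on, the snippet below ranks items by exponentially decayed access counts. The half-life and scoring form are assumptions for illustration, not the published algorithm, which also weighs factors such as result stability over time.

```python
# Recency-weighted frequency predictor in the spirit of AccessRank
# (illustrative only; the real algorithm combines several components).
import math
import time

HALF_LIFE = 7 * 24 * 3600  # assumed decay half-life: one week

def score(access_times: list[float], now: float) -> float:
    """Sum accesses, each decayed exponentially with its age."""
    return sum(math.exp(-math.log(2) * (now - t) / HALF_LIFE)
               for t in access_times)

def top_items(history: dict[str, list[float]], n: int = 3) -> list[str]:
    """Items most likely to be accessed next, e.g. for icon highlighting."""
    now = time.time()
    return sorted(history, key=lambda k: score(history[k], now),
                  reverse=True)[:n]
```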
248

Split array and scalar data cache: A comprehensive study of data cache organization.

Naz, Afrin 08 1900
Existing cache organizations suffer from an inability to distinguish different types of locality, caching all data non-selectively rather than attempting to take special advantage of the locality type. This causes unnecessary movement of data among the levels of the memory hierarchy and increases the miss ratio. In this dissertation I propose a split data cache architecture that groups memory accesses as scalar or array references according to their inherent locality and subsequently maps each group to a dedicated cache partition. In this system, because scalar and array references no longer negatively affect each other, cache interference is diminished, delivering better performance. Further improvement is achieved by introducing a victim cache, prefetching, data flattening, and reconfigurability to tune the array and scalar caches for specific applications. The most significant contribution of my work is the introduction of a novel cache architecture for embedded microprocessor platforms. My proposed cache architecture uses reconfigurability coupled with split data caches to reduce the area and power consumed by cache memories while retaining performance gains. My results show excellent reductions in both memory size and memory access times, translating into reduced power consumption. Since there was a large reduction in miss rates at the L1 caches, further power reduction is achieved by partially or completely shutting down the L2 data or L2 instruction caches. The savings in cache size resulting from these designs can be used for other processor activities, including instruction and data prefetching and branch-prediction buffers; the potential benefits of such techniques for embedded applications have been evaluated in my work. I also explore how my cache organization performs for non-numeric data structures. I propose a novel idea called "data flattening", a profile-based memory allocation technique that compresses sparsely scattered pointer data into regular contiguous memory locations, and explore the potential of my proposed split cache organization for data treated with the data flattening method.
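
As a toy sketch of the split-cache idea under simple assumptions (direct-mapped partitions and a reference stream pre-tagged as scalar or array), the simulator below routes each class of reference to its own partition so the two cannot evict each other, then reports per-partition hits. Partition sizes and the tagging are illustrative, not the dissertation's design.

```python
# Toy split data cache: scalar and array references go to separate
# direct-mapped partitions so they cannot evict each other.
LINE = 64  # bytes per cache line

class DirectMappedCache:
    def __init__(self, n_lines: int):
        self.lines = [None] * n_lines
        self.hits = self.misses = 0

    def access(self, addr: int) -> None:
        block = addr // LINE             # which cache line the address maps to
        slot = block % len(self.lines)   # direct-mapped placement
        if self.lines[slot] == block:
            self.hits += 1
        else:
            self.misses += 1
            self.lines[slot] = block     # fill on miss

scalar_cache = DirectMappedCache(n_lines=16)
array_cache = DirectMappedCache(n_lines=64)

# Reference stream pre-tagged as ("scalar" | "array", address).
refs = [("scalar", 0x1000), ("array", 0x8000), ("array", 0x8040),
        ("scalar", 0x1000)]
for kind, addr in refs:
    (scalar_cache if kind == "scalar" else array_cache).access(addr)

for name, c in [("scalar", scalar_cache), ("array", array_cache)]:
    print(f"{name}: {c.hits}/{c.hits + c.misses} hits")
```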
249

Reliable content delivery using persistent data sessions in a highly mobile environment

Pantoleon, Periklis K. 03 1900
Approved for public release; distribution is unlimited / Special Forces are crucial in specific military operations. They usually operate in hostile territory where communications are difficult to establish and preserve, since operations are often carried out in remote environments and the communications need to be highly mobile. The delivery of information about the geographical parameters of the area can be crucial to the completion of their mission. But in that highly mobile environment, the connectivity of the established wireless networks (LANs) can be unstable and intermittently unavailable. Existing content transfer protocols do not adapt to volatile network connectivity: if a physical connection is lost, any information or part of a file already retrieved is discarded, and the same information must be retransmitted after the lost session is reestablished. The intention of this thesis is to develop an application-layer protocol that preserves the already-transmitted part of the file, so that when the session is reestablished the information server can continue sending the rest of the file to the requesting host. Further, if the same content is available from another server through a better route, the new server should be able to continue serving the content, starting from where the session with the previous server ended. / Lieutenant, Hellenic Navy
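
The thesis defines its own application-layer protocol, which is not reproduced here. As a hedged illustration of the resume-from-offset idea, the sketch below uses standard HTTP Range requests so a client can continue a download from the byte where a lost session ended, whether from the original server or from another server holding the same content. The URL and filename are placeholders.

```python
# Resume-from-offset sketch using HTTP Range requests (an analogue of
# the thesis's persistent-session idea, not its actual protocol).
import os
import urllib.request

def resume_download(url: str, path: str) -> None:
    """Fetch the remainder of a file, starting after any bytes on disk."""
    offset = os.path.getsize(path) if os.path.exists(path) else 0
    req = urllib.request.Request(url, headers={"Range": f"bytes={offset}-"})
    with urllib.request.urlopen(req) as resp, open(path, "ab") as out:
        while chunk := resp.read(64 * 1024):  # append the missing tail
            out.write(chunk)

# Any server holding the same content can serve the remainder:
# resume_download("http://mirror.example/map_data.bin", "map_data.bin")
```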
250

Filstorleksoptimering för retuscheringsarbete : En undersökning med fokus på moderetuschering / File size optimization for retouching : A study with a focus on fashion retouching

Liljengård, Anton (January 2017)
When images are processed today, large files are common. With rapid technological development, the demand for quality has kept growing: as photographers' cameras have gained resolution, image file sizes have increased as well. The aim of this thesis has been to arrive at a recommendation for how to work toward a small file size. The recommendation is intended for retouchers who work in the fashion industry with images meant for print. The work attempts to show what in the retouching workflow causes a larger file size, by contacting retouchers who often work with fashion images. The focus has been on layers in Photoshop and the retoucher's editing options. It emerged that the retouchers took similar measures to obtain a small file size, and that a certain similarity can be discerned in their ways of working with respect to what increased file size. It also emerged that file size is affected most by what pixel layers and masks look like, as opposed to adjustment layers.
