1

Variational methods for Bayesian independent component analysis

Choudrey, Rizwan A. (January 2002)
No description available.
2

ACQUISITION AND DISTRIBUTION OF TSPI DATA USING COTS HARDWARE OVER AN ETHERNET NETWORK

James, Russell W.; Bevier, James C. (October 2003)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The Western Aeronautical Test Range (WATR) operates the ground stations for research vehicles operating at the NASA Dryden Flight Research Center (DFRC). Recently, the WATR implemented a new system for distributing Time, Space, and Position Information (TSPI) data. The previous system for processing this data was built on archaic hardware that is no longer supported, running legacy software with no upgrade path. The purpose of the Radar Information Processing System (RIPS) is to acquire TSPI data from a variety of sources and process the data for subsequent distribution to other destinations located at the various DFRC facilities. RIPS is built from commercial, off-the-shelf (COTS) hardware installed in personal computers (PCs). Data is transported between these computers on a Gigabit Ethernet network. The software was developed in C++ with a modular, object-oriented design approach.
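The abstract above describes the data flow only at a high level, so the following minimal Python sketch illustrates the general acquire-and-redistribute pattern it mentions: receive packets on one socket and fan them out to several facility endpoints over Ethernet. It is not the RIPS implementation (which was written in C++), and the port number and destination addresses are invented for illustration.

```python
# Illustrative sketch only (not the RIPS implementation): acquire data
# packets on one UDP port and redistribute them to several destination
# facilities over Ethernet. Ports and addresses are made up.
import socket

ACQUIRE_PORT = 6000                    # hypothetical TSPI source port
DESTINATIONS = [("10.0.1.10", 6001),   # hypothetical facility endpoints
                ("10.0.2.10", 6001)]

def acquire_and_distribute():
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("", ACQUIRE_PORT))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        packet, _src = rx.recvfrom(65535)
        for dest in DESTINATIONS:
            tx.sendto(packet, dest)    # fan the packet out to each facility
```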
3

Lessons Learned in Using COTS for Real Time High Speed Data Distribution

Downing, Bob; Bretz, Jim (October 1993)
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Currently, there is a strong emphasis on using commercial-off-the-shelf (COTS) equipment to satisfy dedicated system requirements, pursued with the aim of reducing overall system development costs. The development activity discussed in this paper consisted of determining some of the boundaries and constraints on the use of COTS equipment for high-speed data distribution. This paper presents some of the lessons learned in developing a real-time, high-speed (greater than 1 MByte/sec) data distribution subsystem using COTS equipment based on industry-accepted standards and POSIX P1003.1 operating system compliance.
4

Improvements in distribution of meteorological data using application layer multicast

Shah, Saurin Bipin (25 April 2007)
The Unidata Program Center is an organization working with the University Corporation for Atmospheric Research (UCAR) in Colorado. It provides a broad variety of meteorological data, which is used by researchers in many real-world applications. This data is obtained from observation stations and distributed to various universities worldwide, using Unidata’s own Internet Data Distribution (IDD) system and software called the Local Data Manager (LDM). The existing solution for data distribution has many limitations, such as high end-to-end latency of data delivery, increased bandwidth usage at some nodes, poor scalability for future needs, and manual intervention for adjusting to changes or faults in the network topology. Since the data is used in so many applications, the impact of these limitations is often substantial. This thesis removes these limitations by suggesting improvements to the IDD system and the LDM. We present new algorithms for constructing an application-layer data distribution network. This distribution network will form the basis of the improved LDM and IDD system and will remove most of the limitations given above. Finally, we perform simulations and show that our algorithms achieve better average end-to-end latency than the existing solution. We also compare the performance of our algorithms with a randomized solution. We find that for smaller topologies (where the number of nodes in the system is less than 38), the randomized solution constructs efficient distribution networks; however, as the number of nodes grows beyond 38, our solution constructs more efficient distribution networks than the randomized solution. We also evaluate the performance of our algorithms as the number of nodes in the system increases and as the number of faults in the system increases. We find that even as the number of faults increases, the average end-to-end latency decreases, showing that the distribution topology does not become inefficient.
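As an illustration of what constructing an application-layer distribution network can look like, the sketch below builds a latency-weighted shortest-path tree over overlay nodes using Dijkstra's algorithm. This is a generic textbook approach, not the algorithms developed in the thesis; the site names and link latencies in the example are made up.

```python
# Illustrative sketch only: one simple way to build an application-layer
# distribution tree that favors low end-to-end latency. Node names and
# latencies are hypothetical.
import heapq

def build_distribution_tree(latency, source):
    """latency: dict mapping (a, b) -> one-way delay between directly
    connected overlay nodes. Returns each node's parent in a shortest-path
    tree rooted at the data source."""
    neighbors = {}
    for (a, b), d in latency.items():
        neighbors.setdefault(a, []).append((b, d))
        neighbors.setdefault(b, []).append((a, d))

    dist = {source: 0.0}
    parent = {source: None}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for nxt, delay in neighbors.get(node, []):
            nd = d + delay
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                parent[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    return parent

# Example: a source site feeding three hypothetical university nodes.
links = {("src", "u1"): 20, ("src", "u2"): 55, ("u1", "u2"): 15, ("u2", "u3"): 10}
print(build_distribution_tree(links, "src"))
```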
5

An Internet Based GIS Database Distribution System

Huang, Tao (11 October 2001)
No description available.
6

WINGS NETWORK ARCHITECTURE FOR THE MISSION SEGMENT DATA DISTRIBUTION

Downing, Bob; Harris, Jim; Coggins, Greg; James, Russell W. (October 2003)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The Western Aeronautical Test Range (WATR) Integrated Next Generation System (WINGS) Mission Segment provides data acquisition, processing, display, and storage in support of each project’s mission at NASA Dryden Flight Research Center (DFRC). The network architecture for the WINGS Mission Segment is responsible for distributing a variety of information from the Telemetry and Radar Acquisition and Processing System (TRAPS), which performs data acquisition and processing, to the Mission Control Centers (MCCs) for display to the user. WINGS consists of three TRAPS and four MCCs, where any TRAPS can drive any one or multiple MCCs. This paper addresses the requirements for the TRAPS/MCC network and the design solution.
7

TELEMETRY DATA DISTRIBUTION UTILIZING A MULTICAST IP NETWORK

DeLong, Brian (October 2003)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The efficient distribution of telemetry data via standard Ethernet networks has become an increasingly important part of telemetry system designs. While there are several methods and architectures to choose from, a solution based on IP multicast transmission provides for a fast and efficient method of distributing data from a single source to multiple clients. This data distribution method allows for increased scalability as data servers are no longer required to service individual client connections, and network bandwidth is minimized with multiple network clients being simultaneously serviced via a single data transmission.
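To make the one-to-many pattern concrete, here is a minimal Python sketch of IP multicast distribution: the server transmits each frame once to a group address, and any number of clients join the group to receive it. The group address, port, and frame handling are placeholder assumptions, not details from the paper.

```python
# Minimal sketch of one-to-many telemetry distribution over IP multicast.
# The group address, port, and frame contents are illustrative only.
import socket
import struct

GROUP = "239.1.2.3"   # hypothetical administratively scoped multicast group
PORT = 5005

def send_frames(frames):
    """Transmit each telemetry frame once; all subscribers receive it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    for frame in frames:
        sock.sendto(frame, (GROUP, PORT))
    sock.close()

def receive_frames():
    """Join the multicast group and yield frames as they arrive."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        frame, _addr = sock.recvfrom(65535)
        yield frame
```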
8

A WEB COMPATIBLE FILE SERVER FOR MEASUREMENT AND TELEMETRY NETWORKS

Miller, Matthew J.; Freudinger, Lawrence C. (October 2003)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / There is a gulf that separates measurement and telemetry applications from the full benefits of Internet-style communication. Whereas the Web provides ubiquitous infrastructure for the distribution of file-based “static” data, there is no general Web solution for real-time streaming data. At best, there are proprietary products that target consumer multimedia and resort to custom point-to-point data connections. This paper considers an extension of the static file paradigm to a dynamic file and introduces a streaming data solution integrated with the existing file-based infrastructure of the Web. The solution approach appears to maximize platform and application independence, leading to improved application interoperability potential for large or complex measurement and telemetry networks.
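One hedged way to picture the “dynamic file” idea is a reader that follows a growing file and treats newly appended bytes as a stream. The sketch below shows that pattern only as an analogy, not the paper's actual Web-integrated design; the polling interval and read size are arbitrary.

```python
# Analogy sketch for the "dynamic file" notion: consume a file that is
# still being written, yielding appended bytes as a stream. This is not
# the paper's implementation; path and timing parameters are assumptions.
import time

def follow(path, poll_interval_s=0.5):
    """Yield bytes appended to `path` after this call starts."""
    with open(path, "rb") as f:
        f.seek(0, 2)  # start at the current end of the file
        while True:
            chunk = f.read(65536)
            if chunk:
                yield chunk
            else:
                time.sleep(poll_interval_s)  # wait for more data to arrive
```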
9

A PC-Based Data Acquisition and Compact Disc Recording System

Bretthauer, Joy W.; Davis, Rodney A. (November 1995)
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / The Telemetry Data Distribution System (TDDS) meets the need to record, archive, and distribute sounding rocket and satellite data on a compact, user-friendly medium, such as CD-Recordable discs. The TDDS also archives telemetry data on floppy disks, nine-track tapes, and magneto-optical disc cartridges. The PC-based, semi-automated TDDS digitizes, time stamps, formats, and archives frequency modulated (FM) or pulse code modulated (PCM) telemetry data. An analog tape or a real-time signal may provide the telemetry data source. The TDDS accepts IRIG A, B, G, H, and NASA 36 analog code sources for time stamp data. The output time tag includes time, frame, and subframe status information. Telemetry data may be time stamped based upon a user-specified number of frames, subframes, or words. Once data is recorded, the TDDS performs data quality testing, formatting, and validation, and logs the results automatically. Telemetry data is quality checked to ensure a good analog source track was selected. Raw telemetry data is formatted by dividing the data into records and appending header information. The formatted telemetry data is validated by checking consecutive time tags and subframe identification counter values (if applicable) to identify data drop-outs. After validation, the TDDS archives the formatted data to any of the following media types: CD-Recordable (CD-R) disc (650 megabytes capacity), nine-track tape (180 megabytes capacity), and erasable optical disc (499 megabytes capacity). Additionally, previously archived science data may be re-formatted and archived to a different output medium.
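The validation step described above (checking consecutive time tags to identify drop-outs) can be pictured with the short sketch below. It is not the TDDS code; the record layout, field name, and tolerance are assumptions made for illustration.

```python
# Minimal sketch (not the TDDS implementation) of validating consecutive
# time tags: flag gaps that exceed the nominal frame period, which would
# suggest a data drop-out. Field names and tolerance are assumptions.
def find_dropouts(records, nominal_period_s, tolerance_s=0.001):
    """records: iterable of dicts with a 'time' field in seconds.
    Returns (index, gap) pairs where the gap between consecutive time
    tags exceeds the nominal period by more than the tolerance."""
    dropouts = []
    previous = None
    for i, rec in enumerate(records):
        t = rec["time"]
        if previous is not None:
            gap = t - previous
            if gap > nominal_period_s + tolerance_s:
                dropouts.append((i, gap))
        previous = t
    return dropouts
```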
10

Deterministic Distribution of Telemetry and Other Replicated Information

Gustin, Thomas W. (October 1994)
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / Discover how it is now possible to memory-link all man-in-the-loop and machine-in-the-loop elements, as global resources that share information at memory-access speeds, to provide a unified system paradigm that avows: "the data is there, on time, every time." Regardless of configuration, if your past, present, or future system consists of more than one computer, and it interactively mixes information sources and destinations (e.g., telemetry data streams, I/O interfaces, information processors) to achieve a highly integrated system, then the critical path to real-time success mandates a high-performance, reliable, and deterministic communications methodology. This softwareless technology is already successfully sharing information in other real-time markets and applications, and is ready for more challenging ones.
