1

Automation of Generalized Measurement Extraction from Telemetric Network Systems

Seegmiller, Ray D.; Willden, Greg C.; Araujo, Maria S.; Newton, Todd A.; Abbott, Ben A.; Malatesta, William A. (October 2012)
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California / In telemetric network systems, data extraction is often an afterthought. The data description frequently changes throughout the program, so that last-minute modifications of the data extraction approach are often required. This paper presents an alternative approach in which automation of measurement extraction is supported. The key is a formal declarative language that can be used to configure instrumentation devices as well as measurement extraction devices. The Metadata Description Language (MDL) defined by the integrated Network Enhanced Telemetry (iNET) program, augmented with a generalized measurement extraction approach, addresses this issue. This paper describes the TmNS Data Extractor Tool, as well as lessons learned from commercial systems, the iNET program, and TMATS.
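The abstract does not reproduce MDL syntax, so the following Python sketch only illustrates the general idea of driving extraction from a declarative description: a hypothetical, greatly simplified XML measurement list is parsed into an offset/format table that a generic extractor applies to incoming payloads. The element names, attributes, and sample payload are invented for illustration and are not the actual iNET MDL schema.

```python
# Illustrative sketch only: a toy, MDL-inspired measurement description.
# The element/attribute names are hypothetical, not the real iNET MDL schema.
import struct
import xml.etree.ElementTree as ET

TOY_DESCRIPTION = """
<MeasurementList>
  <Measurement name="engine_temp" offset="0" format="&gt;h"/>
  <Measurement name="fuel_flow"   offset="2" format="&gt;f"/>
</MeasurementList>
"""

def build_extraction_table(xml_text):
    """Turn the declarative description into (name, offset, struct format) rows."""
    root = ET.fromstring(xml_text)
    return [(m.get("name"), int(m.get("offset")), m.get("format"))
            for m in root.iter("Measurement")]

def extract(packet_payload, table):
    """Apply the same table to every incoming payload; no hand-written parser."""
    return {name: struct.unpack_from(fmt, packet_payload, offset)[0]
            for name, offset, fmt in table}

table = build_extraction_table(TOY_DESCRIPTION)
sample = struct.pack(">hf", 731, 12.5)   # fabricated example payload
print(extract(sample, table))            # {'engine_temp': 731, 'fuel_flow': 12.5}
```

The point mirrored here is the paper's central claim: when the data description changes, only the declarative description needs editing, while the extraction logic stays generic.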
2

Searching textual and model-based process descriptions based on a unified data format

Leopold, Henrik; van der Aa, Han; Pittke, Fabian; Raffel, Manuel; Mendling, Jan; Reijers, Hajo A. (January 2019)
Documenting business processes using process models is common practice in many organizations. However, not all process information is best captured in process models. Hence, many organizations complement these models with textual descriptions that specify additional details. The problem with this supplementary use of textual descriptions is that existing techniques for automatically searching process repositories are limited to process models. They are not capable of taking the information from textual descriptions into account and, therefore, provide incomplete search results. In this paper, we address this problem and propose a technique that is capable of searching textual as well as model-based process descriptions. It automatically extracts activity-related and behavioral information from both description types and stores it in a unified data format. An evaluation with a large Austrian bank demonstrates that the additional consideration of textual descriptions allows us to identify more relevant processes from a repository.
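As a rough illustration of the unified-format idea (not the authors' actual extraction pipeline), the toy Python sketch below stores activities from a model-based and a text-based description in the same record structure so that a single search routine can rank both; all names and data are invented.

```python
# Toy illustration of the unified-format idea, not the authors' actual pipeline:
# activities from a process model and from a textual description end up in the
# same record structure, so one search routine can rank both.
from dataclasses import dataclass, field

@dataclass
class ProcessEntry:
    name: str
    source: str                      # "model" or "text"
    activities: set = field(default_factory=set)

def search(repository, query):
    """Rank entries by overlap between query terms and activity terms."""
    terms = set(query.lower().split())
    scored = []
    for entry in repository:
        activity_terms = {w for a in entry.activities for w in a.lower().split()}
        score = len(terms & activity_terms)
        if score:
            scored.append((score, entry.name, entry.source))
    return sorted(scored, reverse=True)

repo = [
    ProcessEntry("Loan approval (BPMN model)", "model",
                 {"check credit rating", "approve loan", "reject application"}),
    ProcessEntry("Loan approval (text manual)", "text",
                 {"verify customer identity", "check credit rating", "archive contract"}),
]
print(search(repo, "credit rating check"))
```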
3

HADOOP-EDF: LARGE-SCALE DISTRIBUTED PROCESSING OF ELECTROPHYSIOLOGICAL SIGNAL DATA IN HADOOP MAPREDUCE

Wu, Yuanyuan (01 January 2019)
A rapidly growing volume of electrophysiological signals is being generated for clinical research in neurological disorders. The European Data Format (EDF) is a standard format for storing electrophysiological signals. However, existing signal analysis tools are bottlenecked on large-scale datasets by having to load large EDF files sequentially before an analysis can begin. To overcome this, we developed Hadoop-EDF, a distributed signal processing tool that loads EDF data in parallel using Hadoop MapReduce. Hadoop-EDF uses a robust data partition algorithm that makes EDF data processable in parallel. We evaluated Hadoop-EDF's scalability and performance using two datasets from the National Sleep Research Resource and experiments on Amazon Web Services clusters. On a 20-node cluster, Hadoop-EDF is 27 times and 47 times faster than sequential processing for 200 small files and 200 large files, respectively. The results demonstrate that Hadoop-EDF is well suited to, and effective at, processing large EDF files.
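Hadoop-EDF's partitioning code is not shown in the abstract, but the EDF header arithmetic that makes record-aligned splits possible is easy to sketch: the fixed-layout ASCII header yields the size of one data record, so split boundaries can be computed without scanning the file. The field offsets in the sketch below follow the published EDF specification; the splitting policy itself is an assumption for illustration.

```python
# Sketch of the header arithmetic a record-aligned EDF partitioner could rely on.
# Header field layout follows the published EDF specification; how Hadoop-EDF
# actually builds its input splits is not described in the abstract.

def edf_record_ranges(path, records_per_split=256):
    with open(path, "rb") as f:
        fixed = f.read(256)                       # fixed-size ASCII header
        header_bytes = int(fixed[184:192])        # total header size in bytes
        num_records = int(fixed[236:244])         # number of data records
        num_signals = int(fixed[252:256])
        signal_hdr = f.read(num_signals * 256)
        # samples-per-record fields follow the label/transducer/dimension/
        # min/max/prefiltering blocks of the per-signal header
        start = num_signals * (16 + 80 + 8 + 8 + 8 + 8 + 8 + 80)
        samples = [int(signal_hdr[start + i * 8 : start + (i + 1) * 8])
                   for i in range(num_signals)]
    record_size = 2 * sum(samples)                # EDF stores 2-byte integer samples
    splits = []
    for first in range(0, num_records, records_per_split):
        count = min(records_per_split, num_records - first)
        offset = header_bytes + first * record_size
        splits.append((offset, count * record_size))
    return splits                                 # (byte offset, byte length) per split
```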
4

Selected problems analysis for little LEO data communication service deployment

Lo, J. (date unknown)
No description available.
5

THE USE OF HDF IN F-22 AVIONICS TEST AND EVALUATION

Barnum, Jil (October 1996)
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California / Hierarchical Data Format (HDF) is a public-domain standard for file formats that is documented and maintained by the National Center for Supercomputing Applications. HDF is the standard adopted by the F-22 program to increase the efficiency of avionics data processing and the utility of the data. This paper will discuss how the data processing Integrated Product Team (IPT) on the F-22 program plans to use HDF for file format standardization. The history of the IPT's choice of HDF, the efficiencies gained by choosing HDF, and the ease of data transfer will be explained.
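The 1996 paper naturally predates today's tooling and shows no code; purely to illustrate the hierarchical, self-describing storage model that motivated the choice, here is a minimal sketch using HDF5 via h5py. The file name, group layout, and parameter names are invented, and the F-22 IPT's actual HDF structures are not described in this abstract.

```python
# Minimal sketch of hierarchical, self-describing storage with HDF5/h5py.
# The group layout and parameter names are invented for illustration; the
# F-22 program's actual HDF structures are not described in the abstract.
import numpy as np
import h5py

with h5py.File("flight_test_demo.h5", "w") as f:
    run = f.create_group("avionics/run_001")     # intermediate groups created implicitly
    run.attrs["aircraft"] = "demo"
    # One dataset per parameter stream keeps each measurement self-describing.
    t = np.linspace(0.0, 10.0, 1000)
    ds = run.create_dataset("bus_voltage", data=28.0 + 0.1 * np.sin(t))
    ds.attrs["units"] = "V"
    ds.attrs["sample_rate_hz"] = 100.0

with h5py.File("flight_test_demo.h5", "r") as f:
    volts = f["avionics/run_001/bus_voltage"]
    print(volts.attrs["units"], volts[:5])
```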
6

An Extensible Computing Architecture Design for Connected Autonomous Vehicle System

Hochstetler, Jacob Daniel
Autonomous vehicles have made milestone strides within the past decade. Advances up the autonomy ladder have come in lock-step with advances in machine learning, namely deep-learning algorithms and huge, open training sets. And while advances in CPUs have slowed, GPUs have edged into the previous decade's TOP500 supercomputer territory. This new class of GPUs includes novel deep-learning hardware that has essentially side-stepped Moore's law, outpacing the doubling observation by a factor of ten. While GPUs have made record progress, networks do not follow Moore's law and are restricted by several bottlenecks, from protocol-based latency lower bounds to the very laws of physics. In a way, the bottlenecks that plague modern networks gave rise to Edge computing, a key component of the Connected Autonomous Vehicle system, as the need for low latency in some domains eclipsed the need for massive processing farms. The Connected Autonomous Vehicle ecosystem is one of the most complicated environments in all of computing. Not only is the hardware scaled all the way from 16- and 32-bit microcontrollers to multi-CPU Edge nodes and multi-GPU Cloud servers, but the networking also encompasses the gamut of modern communication transports. I propose a framework for negotiating, encapsulating, and transferring data between vehicles, ensuring efficient bandwidth utilization and respecting real-time privacy levels.
7

Optimalizace distribuovaného I/O subsystému projektu k-Wave / Optimization of the Distributed I/O Subsystem of the k-Wave Project

Vysocký, Ondřej (January 2016)
This thesis deals with an effective solution for the parallel I/O of the k-Wave tool, which is designed for time-domain acoustic and ultrasound simulations. k-Wave is a supercomputer application: it runs on a Lustre file system, is implemented with MPI, and stores its data in a suitable data format (HDF5). I designed three optimization methods that fit k-Wave's needs, based on accumulation and redistribution techniques. In comparison with the native write, every optimization method led to better write speed, up to 13.6 GB/s. These methods can be used to optimize any distributed application that suffers from write-speed issues.
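A minimal sketch of the accumulate-then-aggregate idea follows, using mpi4py and h5py; the dataset name, buffer sizes, and the single-writer policy are simplifying assumptions, and the real k-Wave I/O path (parallel HDF5 on Lustre) is considerably more elaborate.

```python
# Minimal sketch of accumulation and redistribution with mpi4py + h5py.
# Dataset names, sizes, and the "gather everything to rank 0" policy are
# simplifying assumptions; the real k-Wave I/O path is far more elaborate.
# Run with, e.g.:  mpiexec -n 4 python accumulate_write.py
import numpy as np
import h5py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

steps, local_cells = 8, 1024
buffer = np.empty((steps, local_cells), dtype=np.float32)

for step in range(steps):
    # Accumulate several timesteps locally instead of writing each one immediately.
    buffer[step] = np.full(local_cells, rank + step, dtype=np.float32)

# Redistribute: collect the accumulated blocks onto one writer rank.
gathered = None
if rank == 0:
    gathered = np.empty((size, steps, local_cells), dtype=np.float32)
comm.Gather(buffer, gathered, root=0)

if rank == 0:
    with h5py.File("pressure_demo.h5", "w") as f:
        # One large contiguous write instead of many small strided ones.
        f.create_dataset("p", data=gathered.transpose(1, 0, 2).reshape(steps, -1))
```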
8

Delphin 6 Output File Specification

Vogelsang, Stefan; Nicolai, Andreas (12 April 2016)
This paper describes the file formats of the output data and geometry files generated by the Delphin program, a simulation model for hygrothermal transport in porous media. The output data format is suitable for any kind of simulation output generated by transient transport simulation models. Implementing support for the Delphin output format enables use of the advanced post-processing functionality provided by the Delphin post-processing tool and its dedicated physical analysis functionality.
9

Ontological lockdown assessment : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Information Technology at Massey University, Palmerston North, New Zealand

Steele, Aaron (January 2008)
To keep shared-access computers secure and stable, system administrators resort to locking down the computing environment to prevent intentional and unintentional damage by users. Skilled attackers are often able to break out of locked-down computing environments and intentionally misuse shared-access computers. This misuse has resulted in cases of mass identity theft and fraud, some of which have had estimated costs running into the millions. To determine whether it is possible to break out of a locked-down computing environment, an assessment method is required. Although a number of vulnerability assessment techniques exist, none of them is sufficient for assessing locked-down shared-access computers, because existing techniques focus on traditional, application-specific software vulnerabilities. Break-out path vulnerabilities (which are exploited by attackers in order to break out of locked-down environments) differ substantially from traditional vulnerabilities and, as a consequence, are not easily discovered using existing techniques. Ontologies can be thought of as a modelling technique for capturing expert knowledge about a domain of interest. The method for discovering break-out paths in locked-down computers can be considered expert knowledge in the domain of shared-access computer security. This research proposes an ontology-based assessment process for discovering break-out path vulnerabilities in locked-down shared-access computers, called the ontological lockdown assessment process. The ontological lockdown assessment process is implemented against a real-world system and successfully identifies numerous break-out path vulnerabilities.
10

Low-complexity block dividing coding method for image compression using wavelets : a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Systems Engineering at Massey University, Palmerston North, New Zealand

Zhu, Jihai (January 2007)
Image coding plays a key role in multimedia signal processing and communications. JPEG2000 is the latest image coding standard; it uses the EBCOT (Embedded Block Coding with Optimal Truncation) algorithm. EBCOT exhibits excellent compression performance, but with high complexity. The need to reduce this complexity while maintaining similar performance to EBCOT has inspired a significant amount of research activity in the image coding community. Within the development of image compression techniques based on wavelet transforms, the EZW (Embedded Zerotree Wavelet) and SPIHT (Set Partitioning in Hierarchical Trees) algorithms have played an important role. The EZW algorithm was the first breakthrough in wavelet-based image coding. The SPIHT algorithm achieves similar performance to EBCOT, but with fewer features. Another very important algorithm is SBHP (Sub-band Block Hierarchical Partitioning), which attracted significant investigation during the JPEG2000 development process. In this thesis, the history of the development of the wavelet transform is reviewed, and a discussion is presented on the implementation issues for wavelet transforms. The above-mentioned four main coding methods for image compression using wavelet transforms are studied in detail. More importantly, the factors that affect coding efficiency are identified. The main contribution of this research is the introduction of a new low-complexity coding algorithm for image compression based on wavelet transforms. The algorithm is based on block dividing coding (BDC) with an optimised packet assembly. Our extensive simulation results show that the proposed algorithm outperforms JPEG2000 in lossless coding, even though it still leaves a narrow gap in lossy coding situations.
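For readers unfamiliar with the underlying machinery, the short PyWavelets sketch below shows the generic transform, threshold, and reconstruct pipeline that all of these coders build on; it is not the proposed BDC algorithm, and the test image and threshold are arbitrary.

```python
# Toy illustration of the wavelet-compression pipeline the thesis builds on
# (transform, discard small coefficients, reconstruct). This is NOT the
# proposed block dividing coding (BDC) algorithm, just background mechanics.
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.normal(size=(256, 256)).cumsum(axis=0).cumsum(axis=1)  # smooth-ish test image

coeffs = pywt.wavedec2(image, "db2", level=3)       # multi-level 2D DWT
arr, slices = pywt.coeffs_to_array(coeffs)

threshold = np.percentile(np.abs(arr), 95)          # keep roughly the 5% largest coefficients
arr[np.abs(arr) < threshold] = 0.0

kept = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
reconstructed = pywt.waverec2(kept, "db2")

err = np.abs(reconstructed[:256, :256] - image).mean()
print(f"nonzero coefficients: {(arr != 0).mean():.1%}, mean abs error: {err:.3f}")
```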
