51

A Probabilistic Classification Algorithm With Soft Classification Output

Phillips, Rhonda D. 23 April 2009
This thesis presents a shared memory parallel version of the hybrid classification algorithm IGSCR (iterative guided spectral class rejection), a novel data reduction technique that can be used in conjunction with PIGSCR (parallel IGSCR), a noise removal method based on the maximum noise fraction (MNF), and a continuous version of IGSCR (CIGSCR) that outputs soft classifications. All of the above are either classification algorithms or preprocessing algorithms required prior to the classification of high dimensional, noisy images. PIGSCR was developed to produce fast and portable code using Fortran 95, OpenMP, and the Hierarchical Data Format version 5 (HDF5) and its accompanying data access library. The feature reduction method introduced in this thesis is based on the singular value decomposition (SVD), and experiments demonstrated that SVD-based feature reduction can lead to more accurate IGSCR classifications than PCA-based feature reduction. This thesis also describes a new algorithm that adaptively filters a remote sensing dataset based on signal-to-noise ratios (SNRs) once the maximum noise fraction has been applied; the adaptive filtering scheme improves image quality, as shown by estimated SNRs and classification accuracy improvements greater than 10%. The continuous iterative guided spectral class rejection (CIGSCR) classification method is based on IGSCR. Both CIGSCR and IGSCR use semisupervised clustering to locate clusters that are associated with classes in a classification scheme. This type of semisupervised classification method is particularly useful in remote sensing, where datasets are large, training data are difficult to acquire, and clustering eases the identification of subclasses that are adequate for training purposes. Experimental results indicate that the soft classification output by CIGSCR is reasonably accurate (compared to IGSCR), and the fundamental algorithmic changes in CIGSCR make it less sensitive to input parameters that influence iterations. / Ph. D.
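
The SVD-based feature reduction described above can be illustrated with a minimal sketch: project each pixel's spectrum onto the leading right singular vectors of the pixels-by-bands data matrix. This is a generic illustration of the idea, not the thesis's exact formulation; the function name and dimensions are ours.

```python
import numpy as np

def svd_reduce(pixels: np.ndarray, k: int) -> np.ndarray:
    """Project an (n_pixels, n_bands) matrix onto its top-k right
    singular vectors, returning an (n_pixels, k) feature matrix.
    Unlike PCA, the bands are not mean-centered first (one common
    way an SVD-based reduction differs from a PCA-based one)."""
    _, _, Vt = np.linalg.svd(pixels, full_matrices=False)
    return pixels @ Vt[:k].T

# Example: 10,000 pixels with 200 spectral bands reduced to 10 features.
img = np.random.rand(10_000, 200)
features = svd_reduce(img, k=10)
print(features.shape)   # (10000, 10)
```
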
52

Model and Data Reduction for Control, Identification and Compressed Sensing

Kramer, Boris Martin Josef 05 September 2015
This dissertation focuses on problems in design, optimization, and control of complex, large-scale dynamical systems from different viewpoints. The goal is to develop new algorithms and methods that solve real problems more efficiently, together with providing mathematical insight into the success of those methods. There are three main contributions in this dissertation. In Chapter 3, we provide a new method to solve large-scale algebraic Riccati equations (AREs), which arise in optimal control, filtering, and model reduction. We present a projection-based algorithm utilizing proper orthogonal decomposition, which is demonstrated to produce highly accurate solutions at low rank. The method is parallelizable, easy to implement for practitioners, and is a first step towards a matrix-free approach to solve AREs. Numerical examples for n >= 100,000 unknowns are presented. In Chapter 4, we develop a system identification method motivated by tangential interpolation. This addresses the challenge of fitting linear time invariant systems to input-output responses of complex dynamics where the number of inputs and outputs is relatively large. The method reduces the computational burden imposed by a full singular value decomposition by carefully choosing directions on which to project the impulse response prior to assembly of the Hankel matrix. The identification and model reduction step follows from the eigensystem realization algorithm. We present three numerical examples: a mass-spring-damper system, a heat transfer problem, and a fluid dynamics system. We obtain error bounds and stability results for this method. Chapter 5 deals with control and observation design for parameter-dependent dynamical systems. We address this by using local parametric reduced order models, which can be used online. Data available from simulations of the system at various configurations (parameters, boundary conditions) are used to extract a sparse basis to represent the dynamics (via dynamic mode decomposition). Subsequently, a new compressed sensing based classification algorithm is developed that incorporates the extracted dynamic information into the sensing basis. We show that this augmented classification basis makes the method more robust to noise and results in superior identification of the correct parameter. Numerical examples include a Navier-Stokes flow as well as a Boussinesq flow application. / Ph. D.
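
The projection-based Riccati solve of Chapter 3 can be sketched as follows, assuming an orthonormal POD-style basis V is already in hand: project the large system matrices, solve the small dense ARE, and lift the low-rank factor back. This is our minimal reading of the approach, not the dissertation's code; `projected_are` and the toy problem are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def projected_are(A, B, C, V):
    """Approximately solve A'X + XA - XBB'X + C'C = 0 via the reduced
    model (V'AV, V'B, CV); returns a low-rank factor Z with X ~= Z Z'."""
    Ar, Br, Cr = V.T @ A @ V, V.T @ B, C @ V
    Xr = solve_continuous_are(Ar, Br, Cr.T @ Cr, np.eye(Br.shape[1]))
    # Factor the small solution and lift: X ~= (V L)(V L)'
    L = np.linalg.cholesky(Xr + 1e-12 * np.eye(Xr.shape[0]))
    return V @ L

# Toy example: a stable diffusion-like A and a rank-20 basis from snapshots.
n, r = 400, 20
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
B = np.ones((n, 1))
C = np.ones((1, n))
snapshots = np.linalg.solve(np.eye(n) - 0.01 * A, np.random.rand(n, 50))
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
Z = projected_are(A, B, C, V[:, :r])
print(Z.shape)   # (400, 20): low-rank Riccati factor
```
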
53

REDUCTION AND ANALYSIS PROGRAM FOR TELEMETRY RECORDINGS (RAPTR): ANALYSIS AND DECOMMUTATION SOFTWARE FOR IRIG 106 CHAPTER 10 DATA

Kim, Jeong Min October 2005
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Solid state on-board recording is becoming a revolutionary way of recording airborne telemetry data, and the IRIG 106 Chapter 10 “Solid State On-Board Recorder Standard” provides interface documentation for solid state digital data acquisition. The Reduction and Analysis Program for Telemetry Recordings (RAPTR) is a standardized and extensible software application developed by the 96th Communications Group, Test and Analysis Division, at Eglin AFB that provides a data reduction capability for disk files in Chapter 10 format. This paper provides the system description and software architecture of RAPTR and presents the 96th Communications Group's total solution for Chapter 10 telemetry data reduction.
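
As a rough illustration of the lowest level of such decommutation software, the sketch below scans a Chapter 10 file for packet sync patterns. The 24-byte header layout (sync 0xEB25, channel ID, packet and data lengths, type fields, relative time counter, checksum) follows our reading of the standard and should be treated as an assumption; this is not RAPTR's code, and the file name is hypothetical.

```python
import struct

SYNC = 0xEB25
HEADER = struct.Struct("<HHIIBBBB6sH")   # 24-byte packet header (assumed layout)

def scan_packets(blob: bytes):
    """Yield (offset, channel_id, packet_len, data_type) for each
    plausible Chapter 10 packet found in the byte blob."""
    pos = 0
    while pos + HEADER.size <= len(blob):
        (sync, chan, pkt_len, data_len,
         ver, seq, flags, dtype, rtc, csum) = HEADER.unpack_from(blob, pos)
        if sync == SYNC and pkt_len >= HEADER.size:
            yield pos, chan, pkt_len, dtype
            pos += pkt_len            # packets are contiguous in the file
        else:
            pos += 1                  # resync: slide forward one byte

with open("example.ch10", "rb") as f:  # hypothetical recording file
    for off, chan, length, dtype in scan_packets(f.read()):
        print(f"offset={off} channel={chan} len={length} type=0x{dtype:02x}")
```
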
54

Common Airborne Processing System (CAPS) 2.0: Data Reduction Software on a Personal Computer (PC)

Hunt, Trent W. October 1997
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada / CAPS 2.0 provides a flexible, PC-based tool for meeting evolving data reduction and analysis requirements while supporting standardization of instrumentation data processing. CAPS 2.0 will accept a variety of data types, including raw instrumentation, binary, ASCII, and Internet protocol message data, and will output engineering unit data to files, static or dynamic plots, and Internet protocol message exchange. Additionally, CAPS 2.0 will input and output data in accordance with the Digital Data Standard (DDS). CAPS 2.0 will accept multiple input sources of PCM, MIL-STD-1553, or DDS data to create an output for every Output Product Description and Dictionary grouping specified for a particular Session. All of this functionality is performed on a PC within the framework of the Microsoft Windows 95/NT graphical user interface.
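
The raw-to-engineering-unit conversion at the heart of this kind of tool can be illustrated with a small sketch: a dictionary entry records where a parameter lives in a frame and the linear calibration that converts raw counts to engineering units. The schema and names below are illustrative assumptions, not the actual CAPS dictionary format.

```python
import struct
from dataclasses import dataclass

@dataclass
class ParamDef:
    name: str
    offset: int        # byte offset of the 16-bit raw word in the frame
    scale: float       # EU = raw * scale + bias
    bias: float
    units: str

def extract(frame: bytes, dictionary: list[ParamDef]) -> dict[str, float]:
    """Pull each parameter's raw word out of a frame and convert to EU."""
    out = {}
    for p in dictionary:
        (raw,) = struct.unpack_from(">H", frame, p.offset)
        out[p.name] = raw * p.scale + p.bias
    return out

pd = [ParamDef("altitude", 0, 0.5, 0.0, "ft"),
      ParamDef("airspeed", 2, 0.1, -10.0, "kt")]
frame = struct.pack(">HH", 20_000, 1_500)
print(extract(frame, pd))   # {'altitude': 10000.0, 'airspeed': 140.0}
```
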
55

CAPS: AN EGLIN RANGE STANDARD FOR PC-BASED TELEMETRY DATA REDUCTION

Thomas, Tim October 2002
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / A need exists to provide a flexible data reduction tool that minimizes software development costs and reduces analysis time for telemetry data. The Common Airborne Processing System (CAPS), developed by the Freeman Computer Sciences Center at Eglin AFB, Florida, provides a general-purpose data reduction capability for digitally recorded data on a PC. Data from virtually any kind of MIL-STD-1553 message or Pulse Code Modulation (PCM) frame can be extracted and converted to engineering units using a parameter dictionary that describes the data format. The extracted data can then be written to an ASCII or binary file with a great deal of flexibility in the output format. CAPS has become the standard for digitally recorded data reduction on a PC at Eglin. New features, such as composing derived parameters using mathematical expressions, are being added to CAPS to make it an even more productive data reduction tool. This paper provides a conceptual overview of the CAPS version 2.3 software.
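
The "derived parameters using mathematical expressions" feature mentioned above can be sketched as an expression evaluator over already-extracted engineering-unit values. The expression syntax and evaluation approach below are our assumptions, not CAPS's implementation.

```python
import math

def derive(params: dict[str, float], definitions: dict[str, str]) -> dict[str, float]:
    """Evaluate each derived-parameter expression against the extracted
    engineering-unit values; math functions are made available. Only
    trusted dictionary definitions should be evaluated this way."""
    env = {"__builtins__": {}, **vars(math), **params}
    return {name: eval(expr, env) for name, expr in definitions.items()}

eu = {"vx": 300.0, "vy": 40.0}
defs = {"ground_speed": "sqrt(vx**2 + vy**2)",
        "heading_deg": "degrees(atan2(vy, vx))"}
print(derive(eu, defs))  # {'ground_speed': 302.65..., 'heading_deg': 7.59...}
```
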
56

On-Board Data Processing and Filtering

Faber, Marc October 2015
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / One of the requirements resulting from mounting pressure on flight test schedules is the reduction of the time needed for data analysis, in pursuit of shorter test cycles. This requirement has ramifications such as the demand for recording and processing not just raw measurement data but also data converted to engineering units in real time, for optimized use of the bandwidth available for telemetry downlink, and ultimately for shortening the procedures that disseminate pre-selected recorded data among different analysis groups on the ground. A promising way to address these needs is to implement more CPU intelligence and processing power directly in the on-board flight test equipment, which provides the ability to process complex data in real time. For instance, data acquired at different hardware interfaces (which may be compliant with different standards) can be directly converted to easier-to-handle engineering units, leading to faster extraction and analysis of the actual data contents of the on-board signals and busses. Another central goal is the efficient use of the available telemetry bandwidth, and real-time data reduction via intelligent filtering is one approach to this challenging objective. The data filtering process should be performed simultaneously with an all-data-capture recording, and the user should be able to easily select the interesting data without building PCM formats on board or carrying out decommutation on the ground. This data selection should be as easy as possible for the user, and the on-board FTI devices should provide seamless and transparent data transmission, making quick data analysis viable. On-board data processing and filtering has the potential to become the main path to handling the challenge of FTI data acquisition and analysis in a more comfortable and effective way.
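
One concrete instance of the "intelligent filtering" idea is report-by-exception with a deadband: a parameter is only downlinked when it moves more than a tolerance from the last transmitted value, while the full stream still goes to the all-data on-board recording. A minimal sketch; the thresholds and names are ours, not the paper's.

```python
def deadband_filter(samples, tolerance):
    """Yield (index, value) only for samples that differ from the last
    reported value by more than the tolerance."""
    last = None
    for i, v in enumerate(samples):
        if last is None or abs(v - last) > tolerance:
            last = v
            yield i, v

stream = [20.0, 20.01, 20.02, 21.5, 21.51, 19.0]
print(list(deadband_filter(stream, tolerance=0.5)))
# [(0, 20.0), (3, 21.5), (5, 19.0)]: 3 of 6 samples use downlink bandwidth
```
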
57

Analyzing and sharing data for surface combat weapons systems

Wilhelm, Gary L. 12 1900
Approved for public release, distribution is unlimited / Test and evaluation of system performance has been a critical part of the acceptance of combat weapon systems for the Department of Defense. As combat weapon systems have become more complex, evaluation of system performance has relied more heavily on recorded test data. As part of the ongoing transformation of the Department of Defense, Commercial-Off-The-Shelf (COTS) technology is being integrated into the acquisition of combat weapon systems. An Analysis Control Board (ACB) was created in response to these factors to support the AEGIS Weapon System Program Office. The focus of this ACB was to investigate and provide potential solutions to Data Dictionary, Data Recording and Data Reduction (R2D2) issues for the AEGIS Program Manager. This thesis discusses the R2D2 ACB's history and its past, present, and future directions. Additionally, this thesis examines how the R2D2 ACB concept could be applied to the DD(X) Next Generation Destroyer program. / Civilian, United States Navy
58

Scalable Performance Assessment of Industrial Assets: A Data Mining Approach

Dagnely, Pierre 21 June 2019
Nowadays, more and more industrial assets are continuously monitored and generate vast amounts of event logs and sensor data. Data mining is the field concerned with the exploration and exploitation of these data. Although data mining has been researched for decades, event log data are still underexploited in most data mining workflows, even though they represent an asset's internal processes and could provide valuable insights into its behavior. However, exploitation of event log data is challenging, mainly because: 1) event labels are not consistent across manufacturers; 2) assets report vast amounts of data of which only a small part may be relevant; 3) textual event logs and numerical sensor data are usually processed by methods dedicated to textual data or to sensor data respectively, and methods combining both types of data are still missing; 4) industrial data are rarely labelled, i.e., there is no indication of the actual performance of the asset, which must be derived from other sources; 5) the meaning of an event may vary depending on the events sent before or after it. Concretely, this thesis is concerned with the conception and validation of an integrated data processing framework for scalable performance assessment of industrial asset portfolios. This framework is composed of several advanced methodologies facilitating exploitation of both event logs and time series sensor data: 1) an ontology model describing the photovoltaic (the validation domain) event system, allowing the integration of heterogeneous events generated by various manufacturers; 2) a novel and computationally scalable methodology enabling automatic calculation of event relevancy scores without any prior knowledge; 3) a semantically enriched multi-level pattern mining methodology enabling data exploration and hypothesis building across heterogeneous assets; 4) an advanced workflow extracting performance profiles by combining textual event logs and numerical sensor values; 5) a scalable methodology allowing rapid annotation of new asset runs with a known performance label based only on the event log data. The framework has been exhaustively validated on real-world data from PV plants, provided by our industrial partner 3E. However, the framework has been designed to be domain agnostic and can be adapted to other industrial assets reporting event logs and sensor data. / Doctorate in Engineering Sciences and Technology / info:eu-repo/semantics/nonPublished
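
As a rough illustration of contribution 2 (relevancy scoring without prior knowledge), the sketch below uses an IDF-style rarity measure over the asset portfolio as a stand-in: event codes that appear in almost every asset's log score low, rare ones score high. The thesis's actual scoring method is not reproduced here; all names are ours.

```python
import math
from collections import Counter

def relevancy_scores(logs: dict[str, list[str]]) -> dict[str, float]:
    """logs maps asset id -> list of event codes; returns a score per
    event code, where higher means rarer across the asset portfolio."""
    n_assets = len(logs)
    doc_freq = Counter()
    for events in logs.values():
        doc_freq.update(set(events))            # count assets, not occurrences
    return {e: math.log(n_assets / df) for e, df in doc_freq.items()}

logs = {"inverter-A": ["boot", "grid_loss", "boot"],
        "inverter-B": ["boot", "fan_fail"],
        "inverter-C": ["boot"]}
print(relevancy_scores(logs))
# 'boot' scores 0.0 (seen everywhere); 'grid_loss' and 'fan_fail' score log(3)
```
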
59

Data security and reliability in cloud backup systems with deduplication.

January 2012
Cloud storage is an emerging service model that enables individuals and enterprises to outsource the storage of data backups to remote cloud providers at a low cost. This thesis presents methods to ensure the data security and reliability of cloud backup systems. / In the first part of this thesis, we present FadeVersion, a secure cloud backup system that serves as a security layer on top of today's cloud storage services. FadeVersion follows the standard version-controlled backup design, which eliminates the storage of redundant data across different versions of backups. On top of this, FadeVersion applies cryptographic protection to data backups. Specifically, it enables fine-grained assured deletion, that is, cloud clients can assuredly delete particular backup versions or files on the cloud and make them permanently inaccessible to anyone, while other versions that share the common data of the deleted versions or files will remain unaffected. We implement a proof-of-concept prototype of FadeVersion and conduct empirical evaluation atop Amazon S3. We show that FadeVersion only adds minimal performance overhead over a traditional cloud backup service that does not support assured deletion. / In the second part of this thesis, we present CFTDedup, a distributed proxy system designed for providing storage efficiency via deduplication in cloud storage, while ensuring crash fault tolerance among proxies. It synchronizes deduplication metadata among proxies to provide strong consistency. It also batches metadata updates to mitigate synchronization overhead. We implement a preliminary prototype of CFTDedup and evaluate via test bed experiments its runtime performance in deduplication storage for virtual machine images. We also discuss several open issues on how to provide reliable, high-performance deduplication storage. Our CFTDedup prototype provides a platform to explore such issues. / Rahumed, Arthur. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 47-51). / Abstracts also in Chinese.
Table of contents:
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Cloud Based Backup and Assured Deletion --- p.1
Chapter 1.2 --- Crash Fault Tolerance for Backup Systems with Deduplication --- p.4
Chapter 1.3 --- Outline of Thesis --- p.6
Chapter 2 --- Background and Related Work --- p.7
Chapter 2.1 --- Deduplication --- p.7
Chapter 2.2 --- Assured Deletion --- p.7
Chapter 2.3 --- Policy Based Assured Deletion --- p.8
Chapter 2.4 --- Convergent Encryption --- p.9
Chapter 2.5 --- Cloud Based Backup Systems --- p.10
Chapter 2.6 --- Fault Tolerant Deduplication Systems --- p.10
Chapter 3 --- Design of FadeVersion --- p.12
Chapter 3.1 --- Threat Model and Assumptions for FadeVersion --- p.12
Chapter 3.2 --- Motivation --- p.13
Chapter 3.3 --- Main Idea --- p.14
Chapter 3.4 --- Version Control --- p.14
Chapter 3.5 --- Assured Deletion --- p.16
Chapter 3.6 --- Assured Deletion for Multiple Policies --- p.18
Chapter 3.7 --- Key Management --- p.19
Chapter 4 --- Implementation of FadeVersion --- p.20
Chapter 4.1 --- System Entities --- p.20
Chapter 4.2 --- Metadata Format in FadeVersion --- p.22
Chapter 5 --- Evaluation of FadeVersion --- p.24
Chapter 5.1 --- Setup --- p.24
Chapter 5.2 --- Backup/Restore Time --- p.26
Chapter 5.3 --- Storage Space --- p.28
Chapter 5.4 --- Monetary Cost --- p.29
Chapter 5.5 --- Conclusions --- p.30
Chapter 6 --- CFTDedup Design --- p.31
Chapter 6.1 --- Failure Model --- p.31
Chapter 6.2 --- System Overview --- p.32
Chapter 6.3 --- Distributed Deduplication --- p.33
Chapter 6.4 --- Crash Fault Tolerance --- p.35
Chapter 6.5 --- Implementation --- p.36
Chapter 7 --- Evaluation of CFTDedup --- p.37
Chapter 7.1 --- Setup --- p.37
Chapter 7.2 --- Experiment 1 (Archival) --- p.38
Chapter 7.3 --- Experiment 2 (Restore) --- p.39
Chapter 7.4 --- Experiment 3 (Recovery) --- p.40
Chapter 7.5 --- Summary --- p.41
Chapter 8 --- Future Work and Conclusions of CFTDedup --- p.43
Chapter 8.1 --- Future Work --- p.43
Chapter 8.2 --- Conclusions --- p.44
Chapter 9 --- Conclusion --- p.45
Bibliography --- p.47
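
The fine-grained assured deletion described in the first part of this thesis can be sketched with a key-wrapping layer: each file gets its own data key, data keys are wrapped by policy keys held off the cloud, and destroying a policy key renders every ciphertext under that policy permanently unreadable. A minimal sketch using the `cryptography` package's Fernet API; the key-manager structure is illustrative, not FadeVersion's implementation.

```python
from cryptography.fernet import Fernet

policy_keys = {}                     # policy id -> key, escrowed off the cloud

def encrypt_file(data: bytes, policy: str):
    """Return (wrapped_data_key, ciphertext) to store on the cloud."""
    pk = policy_keys.setdefault(policy, Fernet.generate_key())
    data_key = Fernet.generate_key()
    return Fernet(pk).encrypt(data_key), Fernet(data_key).encrypt(data)

def decrypt_file(wrapped_key: bytes, ciphertext: bytes, policy: str) -> bytes:
    data_key = Fernet(policy_keys[policy]).decrypt(wrapped_key)
    return Fernet(data_key).decrypt(ciphertext)

wk, ct = encrypt_file(b"backup v1", policy="2012-backups")
print(decrypt_file(wk, ct, "2012-backups"))   # b'backup v1'
del policy_keys["2012-backups"]               # assured deletion of the policy
# wk can no longer be unwrapped, so ct is permanently inaccessible.
```
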
60

Wavelet-based Data Reduction and Mining for Multiple Functional Data

Jung, Uk 12 July 2004
Advanced technology such as automatic data acquisition, management, and networking systems has created a tremendous capability for managers to access valuable production information to improve their operation quality and efficiency. Signal processing and data mining techniques are more popular than ever in many fields, including intelligent manufacturing. As data sets increase in size, their exploration, manipulation, and analysis become more complicated and resource consuming. Timely synthesized information, such as functional data, is needed for product design, process trouble-shooting, quality/efficiency improvement, and resource allocation decisions. A major obstacle in such intelligent manufacturing systems is that tools for processing the large volume of information coming from numerous stages of manufacturing operations are not available. Thus, the underlying theme of this thesis is to reduce the size of data in a mathematically rigorous framework and apply existing or new procedures to the reduced-size data for various decision-making purposes. This thesis first proposes the Wavelet-based Random-effect Model, which can generate multiple functional data signals that have wide fluctuations (between-signal variations) in the time domain. The random-effect wavelet atom position in the model has a locally focused impact, which distinguishes this model from traditional random-effect models in the biological field. For data-size reduction, in order to deal with heterogeneously selected wavelet coefficients for different single curves, this thesis introduces the newly defined Wavelet Vertical Energy metric of multiple curves and utilizes it for an efficient data reduction method. The newly proposed method selects important positions for the whole set of multiple curves by comparing each vertical energy metric against a threshold (Vertical Energy Threshold, VET), which is decided optimally based on an objective function balancing the reconstruction error against the data reduction ratio. Based on the class membership information of each signal, this thesis then proposes the Vertical Group-Wise Threshold method to increase the discriminative capability of the reduced-size data, so that the reduced data set retains salient differences between classes as much as possible. A real-life example (tonnage data) shows the proposed method is promising.
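
The vertical-energy selection can be sketched briefly: compute each curve's wavelet coefficients, pool the squared coefficients across curves position by position, and keep only the positions whose pooled energy clears the threshold. In the sketch below the VET is fixed by hand rather than optimized against the thesis's objective function, and the helper names are ours.

```python
import numpy as np
import pywt

def vertical_energy_reduce(curves: np.ndarray, wavelet="db4", vet=1.0):
    """curves: (n_curves, n_points). Returns the coefficient matrix with
    a shared support across all curves, plus the kept-position mask."""
    coeffs = [pywt.wavedec(c, wavelet) for c in curves]        # per-curve DWT
    flat = np.array([np.concatenate(cs) for cs in coeffs])     # (n_curves, n_coef)
    energy = (flat ** 2).sum(axis=0)                           # vertical energy
    keep = energy >= vet                                       # shared positions
    flat[:, ~keep] = 0.0
    return flat, keep

# Example: eight noisy sinusoids sharing the same underlying shape.
curves = (np.sin(np.linspace(0, 4 * np.pi, 256))
          + 0.1 * np.random.randn(8, 256))
reduced, keep = vertical_energy_reduce(curves, vet=0.5)
print(f"kept {keep.sum()} of {keep.size} coefficient positions")
```
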
