211

The Multi-Isotope Process Monitor: Non-destructive, Near-Real-Time Nuclear Safeguards Monitoring at a Reprocessing Facility

Orton, Christopher Robert January 2009 (has links)
No description available.
212

Anomaly Detection Using Multiscale Methods

Aradhye, Hrishikesh Balkrishna 11 October 2001 (has links)
No description available.
213

Tensorial Data Low-Rank Decomposition on Multi-dimensional Image Data Processing

Luo, Qilun 01 August 2022 (has links)
Handling large multi-dimensional datasets such as hyperspectral images and video both efficiently and effectively plays an important role in big-data processing. Recent work on tensor low-rank decomposition demonstrates the importance of capturing the tensor structure adequately, which usually yields efficacious approaches. In this dissertation, we first explore the tensor singular value decomposition (t-SVD) with nonconvex regularization for the multi-view subspace clustering (MSC) problem, and then develop two new tensor decomposition models within a Bayesian inference framework for the tensor robust principal component analysis (TRPCA) and tensor completion (TC) problems. Specifically, the following developments for multi-dimensional datasets under the mathematical tensor framework are addressed. (1) Using the t-SVD proposed by Kilmer et al. (2013), we unify Hyper-Laplacian (HL) and exclusive $\ell_{2,1}$ (L21) regularization with Tensor Log-Determinant Rank Minimization (TLD) to identify data clusters from the multiple views' inherent information. The HL regularization maintains the local geometrical structure, which allows the estimation to accommodate nonlinearities, while the mixed $\ell_{2,1}$ and $\ell_{1,2}$ regularization provides joint sparsity within clusters as well as exclusive sparsity between clusters. Furthermore, a log-determinant function is used as a tighter tensor rank approximation to discriminate the dimension of features. (2) By treating a tube as an atom of a third-order tensor and constructing a data-driven learning dictionary from the observed noisy data along the tubes of the tensor, we develop a Bayesian dictionary learning model with tensor tubal transformed factorization that identifies the underlying low-tubal-rank structure of the tensor with a data-adaptive dictionary for the TRPCA problem. With the defined page-wise operators, an efficient variational Bayesian dictionary learning algorithm is established for TRPCA that updates the posterior distributions along the third dimension simultaneously. (3) By introducing the defined matrix outer product into the tensor decomposition process, we present a new decomposition model for a third-order tensor. The fundamental idea is to decompose tensors as compactly as possible. Incorporating the framework of Bayesian probabilistic inference, the new tensor decomposition model based on the matrix outer product (BPMOP) is developed for the TC and TRPCA problems. Extensive experiments on synthetic and real-world datasets are conducted for the multi-view clustering, TC, and TRPCA problems and demonstrate the effectiveness of the proposed approaches through detailed comparison with currently available results in the literature.
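As a point of reference for the t-SVD that this work builds on, the transform-domain recipe of Kilmer et al. is short: take an FFT along the third mode, compute an ordinary matrix SVD of each frontal slice, and transform back. The sketch below is a minimal NumPy illustration of that recipe only; the function names and the random test tensor are illustrative, and none of the dissertation's regularized or Bayesian models are reproduced.

```python
import numpy as np

def t_svd(A):
    """Reduced t-SVD of a third-order tensor A (n1 x n2 x n3):
    FFT along the third mode, a matrix SVD of every frontal slice
    in the transform domain, then store the per-slice factors."""
    n1, n2, n3 = A.shape
    r = min(n1, n2)
    Ahat = np.fft.fft(A, axis=2)
    U = np.zeros((n1, r, n3), dtype=complex)
    S = np.zeros((r, r, n3), dtype=complex)
    V = np.zeros((n2, r, n3), dtype=complex)
    for k in range(n3):
        u, s, vh = np.linalg.svd(Ahat[:, :, k], full_matrices=False)
        U[:, :, k] = u
        S[:, :, k] = np.diag(s)        # singular tubes (in the Fourier domain)
        V[:, :, k] = vh.conj().T
    return U, S, V

def t_svd_reconstruct(U, S, V):
    """Multiply the factors back slice-by-slice and invert the FFT."""
    n3 = U.shape[2]
    Ahat = np.stack([U[:, :, k] @ S[:, :, k] @ V[:, :, k].conj().T
                     for k in range(n3)], axis=2)
    return np.real(np.fft.ifft(Ahat, axis=2))

A = np.random.rand(5, 4, 6)
U, S, V = t_svd(A)
print(np.allclose(t_svd_reconstruct(U, S, V), A))  # True, up to round-off
```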
214

Data Mining Algorithms for Decentralized Fault Detection and Diagnostic in Industrial Systems

Grbovic, Mihajlo January 2012 (has links)
Timely Fault Detection and Diagnosis in complex manufacturing systems is critical to ensure safe and effective operation of plant equipment. A process fault is defined as a deviation from normal process behavior, defined within the limits of safe production. The quantifiable objectives of Fault Detection include achieving low detection delay time, low false positive rate, and high detection rate. Once a fault has been detected, pinpointing the type of fault is needed for purposes of fault mitigation and returning to normal process operation. This is known as Fault Diagnosis. Data-driven Fault Detection and Diagnosis methods emerged as an attractive alternative to traditional mathematical model-based methods, especially for complex systems, due to the difficulty of describing the underlying process. A distinct feature of data-driven methods is that no a priori information about the process is necessary. Instead, it is assumed that historical data, containing process features measured at regular time intervals (e.g., power plant sensor measurements), are available for developing a fault detection/diagnosis model through generalization of the data. The goal of my research was to address the shortcomings of the existing data-driven methods and contribute to solving open problems, such as: 1) decentralized fault detection and diagnosis; 2) fault detection in the cold start setting; 3) optimizing the detection delay and dealing with noisy data annotations; and 4) developing models that can adapt to concept changes in power plant dynamics. For small-scale sensor networks, it is reasonable to assume that all measurements are available at a central location (sink) where fault predictions are made. This is known as a centralized fault detection approach. For large-scale networks, a decentralized approach is often used, where the network is decomposed into potentially overlapping blocks and each block provides local decisions that are fused at the sink. The appealing properties of the decentralized approach include fault tolerance, scalability, and reusability. When one or more blocks go offline due to maintenance of their sensors, the predictions can still be made using the remaining blocks. In addition, when the physical facility is reconfigured, either by changing its components or sensors, it can be easier to modify the part of the decentralized system impacted by the changes than to overhaul the whole centralized system. The scalability comes from reduced costs of system setup, update, communication, and decision making. Main challenges in decentralized monitoring include process decomposition and decision fusion. We proposed a decentralized model where the sensors are partitioned into small, potentially overlapping, blocks based on the Sparse Principal Component Analysis (PCA) algorithm, which preserves strong correlations among sensors, followed by training local models at each block, and fusion of decisions based on the proposed Maximum Entropy algorithm. Moreover, we introduced a novel framework for adding constraints to the Sparse PCA problem. The constraints limit the set of possible solutions by imposing additional goals to be reached through optimization along with the existing Sparse PCA goals. The experimental results on benchmark fault detection data show that Sparse PCA can utilize prior knowledge, which is not directly available in data, in order to produce desirable network partitions, with a pre-defined limit on communication cost and/or robustness. / Computer and Information Science
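To make the Sparse PCA partitioning step concrete, the sketch below groups sensors into potentially overlapping blocks according to which sparse components they load on, using scikit-learn's SparsePCA on standardized historical data. It is a minimal sketch under assumed settings (block count, the alpha sparsity penalty, the toy data); the thesis's constrained Sparse PCA formulation and the Maximum Entropy decision fusion are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.preprocessing import StandardScaler

def sensor_blocks(X, n_blocks=3, alpha=1.0):
    """Partition sensors into (possibly overlapping) monitoring blocks.

    X: (n_samples, n_sensors) historical measurements.
    Each sparse component keeps only a few non-zero loadings; the sensors
    carrying those loadings form one block, so strongly correlated sensors
    tend to end up together.
    """
    Z = StandardScaler().fit_transform(X)
    spca = SparsePCA(n_components=n_blocks, alpha=alpha, random_state=0).fit(Z)
    blocks = []
    for comp in spca.components_:                    # shape: (n_sensors,)
        blocks.append(np.flatnonzero(np.abs(comp) > 1e-8))
    return blocks

# Toy data: sensors 0-2 driven by one source, sensors 3-5 by another.
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 2))
X = np.column_stack([base[:, [0]] + 0.1 * rng.normal(size=(500, 3)),
                     base[:, [1]] + 0.1 * rng.normal(size=(500, 3))])
print(sensor_blocks(X, n_blocks=2))
```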
215

Transverse Position Reconstruction in a Liquid Argon Time Projection Chamber using Principal Component Analysis and Multi-Dimensional Fitting

Watson, Andrew William January 2017 (has links)
One of the most enduring questions in modern physics is the dark matter problem. Measurements of galactic rotation curves taken in the middle of the twentieth century suggest that there are large spherical halos of unseen matter permeating and surrounding most galaxies, stretching far beyond their visible extents. Although some of this mass discrepancy can be attributed to sources like primordial black holes or Massive Astrophysical Compact Halo Objects (MACHOs), these theories can only explain a small percentage of this "missing matter". One approach which could account for the entirety of this missing mass is the theory of Weakly Interacting Massive Particles, or "WIMPs". As their name suggests, WIMPs interact only through the weak nuclear force and gravity and are quite massive (100 GeV/c² to 1 TeV/c²). These particles have very small cross sections (≈ 10⁻³⁹ cm²) with nucleons and therefore interact only very rarely with "normal" baryonic matter. To directly detect a dark matter particle, one needs to overcome this small cross-section barrier. In many experiments, this is achieved by utilizing detectors filled with liquid noble elements, which have excellent particle identification capabilities and are very low-background, allowing potential WIMP signals to be more easily distinguished from detector noise. These experiments also often apply uniform electric fields across their liquid volumes, turning the apparatus into Time Projection Chambers or "TPCs". TPCs can accurately determine the location of an interaction in the liquid volume (often simply called an "event") along the direction of the electric field. In DarkSide-50 ("DS-50" for short), the electric field is aligned antiparallel to the z-axis of the detector, and so the depth of an event can be determined to a considerable degree of accuracy by measuring the time between the first and second scintillation signals ("S1" and "S2"), which are generated at the interaction point itself and in a small gas pocket above the liquid region, respectively. One of the lingering challenges in this experiment, however, is the determination of an event's position along the other two spatial dimensions, that is, its transverse or "xy" position. Some liquid noble element TPCs have achieved remarkably accurate event position reconstructions, typically using the relative amounts of S2 light collected by Photo-Multiplier Tubes ("PMTs") as the input data to their reconstruction algorithms. This approach has been particularly challenging in DarkSide-50, partly due to unexpected asymmetries in the detector, and partly due to the design of the detector itself. A variety of xy-Reconstruction methods ("xy methods" for short) have come and gone in DS-50, with only a few of them providing useful results. The xy method described in this dissertation is a two-step Principal Component Analysis / Multi-Dimensional Fit (PCAMDF) reconstruction. In a nutshell, this method develops a functional mapping from the 19-dimensional space of the signal received by the PMTs at the "top" (or the "anode" end) of the DarkSide-50 TPC to each of the transverse coordinates, x and y. PCAMDF is a low-level "machine learning" algorithm, and as such, needs to be "trained" with a sample of representative events; in this case, these are provided by the DarkSide geant4-based Monte Carlo, g4ds. In this work, a thorough description of the PCAMDF xy-Reconstruction method is provided along with an analysis of its performance on MC events and data.
The method is applied to several classes of data events, including coincident decays, external gamma rays from calibration sources, and both atmospheric argon "AAr" and underground argon "UAr". Discrepancies between the MC and data are explored, and fiducial volume cuts are calculated. Finally, a novel method is proposed for finding the accuracy of the PCAMDF reconstruction on data by using the asymmetry of the S2 light collected on the anode and cathode PMT arrays as a function of xy. / Physics
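As a rough, self-contained stand-in for the two-step idea (dimensionality reduction on the 19 top-PMT S2 fractions followed by a fitted map to the transverse coordinates), the sketch below chains PCA with a polynomial regression trained on simulated events. The component count, polynomial degree, and variable names are assumptions for illustration; this is not the PCAMDF implementation used in DarkSide-50.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def train_xy_map(s2_fractions, xy_true, n_pc=8, degree=3):
    """Fit a PCA + polynomial-regression map from per-PMT S2 light
    fractions to transverse position.

    s2_fractions: (n_events, 19) fraction of S2 light in each top-array PMT
    xy_true:      (n_events, 2) true (x, y) positions from simulation
    """
    model = make_pipeline(PCA(n_components=n_pc),
                          PolynomialFeatures(degree=degree),
                          LinearRegression())
    model.fit(s2_fractions, xy_true)   # LinearRegression handles the 2-D target
    return model

# Usage (names hypothetical): train on simulated events, apply to data events.
# xy_map = train_xy_map(mc_fractions, mc_xy)
# xy_pred = xy_map.predict(data_fractions)
```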
216

Multivariate Analysis Applied to Discrete Part Manufacturing

Wallace, Darryl 09 1900 (has links)
The overall focus of this thesis is the implementation of a process monitoring system in a real manufacturing environment that utilizes multivariate analysis techniques to assess the state of the process. The process in question was the medium-high volume manufacturing of discrete aluminum parts using relatively simple machining processes involving the use of two tools. This work can be broken down into three main sections.

The first section involved the modeling of temperatures and thermal expansion measurements for real-time thermal error compensation. Thermal expansion of the Z-axis was measured indirectly through measurement of the two quality parameters related to this axis with a custom gage that was designed for this part. A compensation strategy is proposed which is able to hold the variation of the parts to ±0.02 mm, where the tolerance is ±0.05 mm.

The second section involved the modeling of the process data from the parts, which included vibration, current, and temperature signals from the machine. The modeling of the process data using Principal Component Analysis (PCA), while unsuccessful in detecting minor simulated process faults, was successful in detecting a mis-loaded part during regular production. Simple control charts using Hotelling's T^2 statistic and the Squared Prediction Error are illustrated. The modeling of quality data from the process data of good parts using Projection to Latent Structures by Partial Least Squares (PLS) did not provide very accurate fits to the data; however, all of the predictions are within the tolerance specifications.

The final section discusses the implementation of a process monitoring system in both manual and automatic production environments. A method for the integration and storage of process data with the Mitutoyo software MCOSMOS and MeasurLink® is described. All of the code to perform multivariate analysis and process monitoring was written in Matlab. / Thesis / Master of Applied Science (MASc)
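For readers unfamiliar with the two control statistics mentioned above, the sketch below shows a generic PCA monitor that fits on in-control process data and flags new observations whose Hotelling's T^2 or Squared Prediction Error exceeds an empirical limit. The class name, component count, and quantile-based limits are illustrative assumptions, not the thesis's Matlab implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

class PCAMonitor:
    """T^2 / SPE monitoring of process signals, fitted on normal operation only."""

    def __init__(self, n_components=3, alpha=0.99):
        self.n_components = n_components
        self.alpha = alpha            # quantile used for the control limits

    def fit(self, X_normal):
        self.scaler = StandardScaler().fit(X_normal)
        Z = self.scaler.transform(X_normal)
        self.pca = PCA(n_components=self.n_components).fit(Z)
        t2, spe = self._stats(Z)
        # Empirical limits from training data; analytic (F / chi-square)
        # limits are another common choice.
        self.t2_limit = np.quantile(t2, self.alpha)
        self.spe_limit = np.quantile(spe, self.alpha)
        return self

    def _stats(self, Z):
        scores = self.pca.transform(Z)
        # Hotelling's T^2: scores weighted by the retained eigenvalues.
        t2 = np.sum(scores**2 / self.pca.explained_variance_, axis=1)
        # SPE (Q): squared residual outside the retained subspace.
        resid = Z - self.pca.inverse_transform(scores)
        spe = np.sum(resid**2, axis=1)
        return t2, spe

    def flag(self, X):
        t2, spe = self._stats(self.scaler.transform(X))
        return (t2 > self.t2_limit) | (spe > self.spe_limit)
```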
217

An Investigation of Unidimensional Testing Procedures under Latent Trait Theory using Principal Component Analysis

McGill, Michael T. 11 December 2009 (has links)
There are several generally accepted rules for detecting unidimensionality, but none are well tested. This simulation study investigated well-known methods, including but not limited to the Kaiser (k>1) Criterion, Percentage of Measure Validity (greater than 50%, 40%, or 20%), Ratio of Eigenvalues, and Kelley method, and compared these methods to each other and to a new method proposed by the author (the McGill method) for assessing unidimensionality. After applying principal component analysis (PCA) to the residuals of a Latent Trait Test Theory (LTTT) model, this study addressed three purposes: determining the Type I error rates associated with various criterion values for assessing unidimensionality; determining the Type II error rates and statistical power associated with various rules of thumb when assessing dimensionality; and, finally, determining whether more suitable criterion values could be established for the methods of the study by accounting for various characteristics of the measurement context. For those methods based on criterion values, new modified values are proposed. For those methods without criterion values for dimensionality decisions, criterion values are modeled and presented. The methods compared in this study were investigated using PCA on residuals from the Rasch model. The sample size, test length, ability distribution variability, and item distribution variability were varied, and the resulting Type I and Type II error rates of each method were examined. The results imply that certain conditions can cause improper diagnoses of the dimensionality of instruments. Adjusted methods are suggested to induce a more stable condition relative to the Type I and Type II error rates. The nearly ubiquitous Kaiser method was found to be biased towards signaling multidimensionality whether it exists or not. The modified version of the Kaiser method and the McGill method proposed by the author were shown to be among the best at detecting unidimensionality when it was present. In short, methods that take into account changes in variables such as sample size, test length, item variability, and person variability are better than methods that use a single, static criterion value in decision making with respect to dimensionality. / Ph. D.
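The decision rules compared in the study reduce to simple functions of the eigenvalues of the inter-item correlation matrix of the model residuals. The sketch below computes a few of them (the Kaiser count, the first-to-second eigenvalue ratio, and the share of residual variance on the first component); it assumes standardized Rasch residuals are already available and does not reproduce the study's modified or McGill criterion values.

```python
import numpy as np

def dimensionality_indices(residuals):
    """Apply simple PCA-based unidimensionality checks to Rasch residuals.

    residuals: (n_persons, n_items) standardized residual matrix from a
    fitted Rasch model (the model fit itself is not shown here).
    """
    R = np.corrcoef(residuals, rowvar=False)        # item-by-item correlations
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending eigenvalues
    return {
        "kaiser_count": int(np.sum(eigvals > 1.0)),      # components with eigenvalue > 1
        "first_eigenvalue": float(eigvals[0]),
        "ratio_1_to_2": float(eigvals[0] / eigvals[1]),  # eigenvalue-ratio rule
        "pct_first": float(eigvals[0] / eigvals.sum()),  # share of residual variance
    }
```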
218

Applications of Sensory Analysis for Water Quality Assessment

Byrd, Julia Frances 30 January 2018 (has links)
In recent years, communities that source raw water from the Dan River experienced two severe and unprecedented outbreaks of unpleasant tastes and odors in their drinking water. During both taste-and-odor (T&O) events strong 'earthy', 'musty' odors were reported, but the source was not identified. The first T&O event began in early February 2015 and coincided with an algal bloom in the Dan River. The algal bloom was thought to be the cause, but after the bloom dissipated, odors persisted until May 2015. The second T&O event, in October 2015, did not coincide with observed algal blooms. On February 2, 2014, approximately 39,000 tons of coal ash from a Duke Energy coal ash pond was spilled into the Dan River near Eden, NC. As there were no documented T&O events before the spill, there is concern that the coal ash adversely impacted water quality and biological communities in the Dan River, leading to the T&O events. In addition to the coal ash spill, years of industrial and agricultural activity in the Dan River area may have contributed to the T&O events. The purpose of this research was to elucidate the causes of the two T&O events and provide guidance to prevent future problems. Monthly water samples were collected from August 2016 to September 2017 from twelve sites along the Dan and Smith Rivers. Multivariate analyses were applied to look for underlying factors and spatial or temporal trends in the data. There were no reported T&O events during the project, but sensory analysis (Flavor Profile Analysis) characterized the earthy/musty odors present. No temporal or spatial trends of odors were observed. Seven earthy/musty odorants commonly associated with T&O events were detected. Odor intensity was mainly driven by geosmin, but no relationship between strong odors and odorants was observed. / Master of Science
219

Segmentation of the market for labeled ornamental plants by environmental preferences: A latent class analysis

D'Alessio, Nicole Marie 09 July 2015 (has links)
Labeling is a product differentiation mechanism that has increased in prevalence across many markets. This study investigated the potential for a labeling program applied to ornamental plant sales, given key ongoing issues affecting ornamental plant producers: irrigation water use and plant disease. Our research investigated how to better understand the market for plants certified as disease-free and/or produced using water conservation techniques by segmenting the market by consumers' environmental preferences. Latent class analysis was conducted using choice modeling survey results and respondent scores on the New Environmental Paradigm scale. The results show that when accounting for environmental preferences, consumers can be grouped into two market segments. Relative to each other, these segments can be characterized as price sensitive and attribute sensitive. Our research also investigated the market segments' preferences for multiple certifying authorities. The results strongly suggest that consumers in either segment do not have a preference for any particular certifying authority. / Master of Science
220

Modified Kernel Principal Component Analysis and Autoencoder Approaches to Unsupervised Anomaly Detection

Merrill, Nicholas Swede 01 June 2020 (has links)
Unsupervised anomaly detection is the task of identifying examples that differ from the normal or expected pattern without the use of labeled training data. Our research addresses shortcomings in two existing anomaly detection algorithms, Kernel Principal Component Analysis (KPCA) and Autoencoders (AE), and proposes novel solutions to improve both of their performances in the unsupervised setting. Anomaly detection has several useful applications, such as intrusion detection, fault monitoring, and vision processing. More specifically, anomaly detection can be used in autonomous driving to identify obscured signage or to monitor intersections. Kernel techniques are desirable because of their ability to model highly non-linear patterns, but they are limited in the unsupervised setting due to their sensitivity to parameter choices and the absence of a validation step. Additionally, conventional KPCA suffers from quadratic time and memory complexity in the construction of the Gram matrix and cubic time complexity in its eigendecomposition. The problem of tuning the Gaussian kernel parameter, $\sigma$, is solved using mini-batch stochastic gradient descent (SGD) optimization of a loss function that maximizes the dispersion of the kernel matrix entries. Secondly, the computational time is greatly reduced, while still maintaining high accuracy, by using an ensemble of small, 'skeleton' models and combining their scores. The performance of traditional machine learning approaches to anomaly detection plateaus as the volume and complexity of data increase. Deep anomaly detection (DAD) involves the application of multilayer artificial neural networks to identify anomalous examples. AEs are fundamental to most DAD approaches. Conventional AEs rely on the assumption that a trained network will learn to reconstruct normal examples better than anomalous ones. In practice, however, given sufficient capacity and training time, an AE will generalize to reconstruct even very rare examples. Three methods are introduced to more reliably train AEs for unsupervised anomaly detection: Cumulative Error Scoring (CES) leverages the entire history of training errors to minimize the importance of early stopping; Percentile Loss (PL) training aims to prevent anomalous examples from contributing to parameter updates; and early stopping via knee detection aims to limit the risk of overtraining. Ultimately, the two modified methods proposed in this research, Unsupervised Ensemble KPCA (UE-KPCA) and the modified training and scoring AE (MTS-AE), demonstrate improved detection performance and reliability compared to many baseline algorithms across a number of benchmark datasets. / Master of Science / Anomaly detection is the task of identifying examples that differ from the normal or expected pattern. The challenge of unsupervised anomaly detection is distinguishing normal and anomalous data without the use of labeled examples to demonstrate their differences. This thesis addresses shortcomings in two anomaly detection algorithms, Kernel Principal Component Analysis (KPCA) and Autoencoders (AE), and proposes new solutions to apply them in the unsupervised setting. Ultimately, the two modified methods, Unsupervised Ensemble KPCA (UE-KPCA) and the Modified Training and Scoring AE (MTS-AE), demonstrate improved detection performance and reliability compared to many baseline algorithms across a number of benchmark datasets.
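To illustrate the kernel-width selection idea described above (choosing $\sigma$ so that the entries of the Gaussian kernel matrix are as dispersed as possible), the sketch below scores candidate widths by the average variance of off-diagonal kernel entries over random mini-batches and keeps the best one. This is a simplified, grid-search stand-in for the mini-batch SGD optimization in the thesis, and it does not include the skeleton-model ensemble or the autoencoder modifications (CES, PL, knee-based early stopping).

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def select_sigma(X, sigmas, batch_size=256, n_batches=20, seed=0):
    """Pick the RBF width that maximizes dispersion of kernel entries.

    For each candidate sigma, average the variance of off-diagonal
    kernel-matrix entries over random mini-batches of X and return
    the sigma with the largest average dispersion.
    """
    rng = np.random.default_rng(seed)
    scores = []
    for sigma in sigmas:
        gamma = 1.0 / (2.0 * sigma**2)          # k(x, y) = exp(-gamma ||x - y||^2)
        disp = []
        for _ in range(n_batches):
            idx = rng.choice(len(X), size=min(batch_size, len(X)), replace=False)
            K = rbf_kernel(X[idx], gamma=gamma)
            off_diag = K[~np.eye(len(idx), dtype=bool)]
            disp.append(off_diag.var())
        scores.append(np.mean(disp))
    return sigmas[int(np.argmax(scores))]

# Usage: sigma = select_sigma(X_train, sigmas=[0.1, 0.5, 1.0, 2.0, 5.0])
```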
