321

Development and Optimization of Near-Infrared Spectroscopy

Hahlin, Amanda January 2023 (has links)
With the growing demand for sustainable options, existing sorting capacity is limiting the potential for fiber-to-fiber recycling. Near-infrared spectroscopy (NIRS) enables automated sorting of textiles with high accuracy because it offers a fast, accessible means of polymer identification. Despite its effectiveness, several factors still keep the process from reaching its full potential: possible disruptors may interfere with the identification of polymer identities and compositions in different ways. This thesis examines additives, treatments, and other environmental factors that may hinder fiber identification. The key results show that stains and wear-and-tear effects are the most common possible disruptors identified in pre-sorted post-consumer end-of-life textiles. Stains of ketchup, deodorant, and oil affect polymer recognition by lowering the recognized fiber content. Water-repellent coatings on a 100 % polyamide woven fabric were not detected correctly by the NIR scanner, which reported a polymer composition of >90 %. Although some investigated factors, e.g., material structures, were correctly identified by the NIR scanner, the internal deviation of the knitted polyester structure indicates that porous and loose structures can interfere with the detection of polymers. The maturity of the operating software is highly relevant to how accurate textile sorting can be.
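
As context for how NIRS-based fiber identification typically works, the sketch below shows one plausible pipeline: classifying fiber polymers from NIR reflectance spectra with PCA followed by a linear discriminant classifier. This is a minimal illustration under stated assumptions, not the thesis's method; the array shapes, class labels, and synthetic spectra are placeholders.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical data: rows are NIR reflectance spectra (e.g., 900-1700 nm
    # sampled at 256 wavelengths); labels are fiber polymers.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(300, 256))   # stand-in for real spectra
    y_train = rng.choice(["polyester", "polyamide", "cotton"], size=300)

    # PCA compresses correlated wavelength channels; LDA separates the classes.
    model = make_pipeline(StandardScaler(), PCA(n_components=10),
                          LinearDiscriminantAnalysis())
    model.fit(X_train, y_train)

    X_new = rng.normal(size=(5, 256))       # spectra from scanned garments
    print(model.predict(X_new))             # predicted fiber polymer per item

Disruptors such as stains or coatings would enter this picture as spectra that fall outside the training distribution, which is one way to frame why they lower the recognized fiber content.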
322

Functional Principal Component Analysis of Vibrational Signal Data: A Functional Data Analytics Approach for Fault Detection and Diagnosis of Internal Combustion Engines

McMahan, Justin Blake 14 December 2018 (has links)
Fault detection and diagnosis (FDD) is a critical component of operations management systems. The goal of FDD is to identify the occurrence and causes of abnormal events. While many approaches are available, data-driven approaches to FDD have proven robust and reliable. Exploiting these advantages, the present study applied functional principal component analysis (FPCA) for feature extraction in fault detection for internal combustion engines. A feature subset that explained 95% of the variance of the original vibrational sensor signal was then used in a multilayer perceptron to carry out prediction for fault diagnosis. Across the engine states studied in the present work, the proposed approach achieved an overall prediction accuracy of 99.72%. These results are encouraging because they show the feasibility of applying FPCA for feature extraction, which has not previously been discussed in the literature on fault detection and diagnosis.
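
When signals are sampled on a common time grid, FPCA reduces in practice to PCA on the discretized curves. The sketch below pairs that reduction (retaining components that explain 95% of the variance) with a multilayer perceptron, mirroring the pipeline described above; the synthetic signals and fault labels are assumptions, not the study's data.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    # Hypothetical vibration windows: 500 windows x 1024 samples, 2 engine states.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 1024))
    y = rng.integers(0, 2, size=500)   # 0 = healthy, 1 = faulty (placeholder)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # PCA(n_components=0.95) keeps as many components as needed for 95% variance,
    # a discretized stand-in for FPCA scores; the MLP performs the diagnosis step.
    clf = make_pipeline(PCA(n_components=0.95),
                        MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))
    clf.fit(X_tr, y_tr)
    print("accuracy:", clf.score(X_te, y_te))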
323

Development of statistical shape and intensity models of eroded scapulae to improve shoulder arthroplasty

Sharif Ahmadian, Azita 22 December 2021 (has links)
Reverse total shoulder arthroplasty (RTSA) is an effective treatment and a surgical alternative to conventional total shoulder arthroplasty for patients with severe rotator cuff tears and glenoid erosion. To help optimize RTSA design, it is necessary to gain insight into the geometry of glenoid erosions and consider their unique morphology across the entire bone. One of the most powerful tools to systematically quantify and visualize the variation of bone geometry throughout a population is Statistical Shape Modeling (SSM); this method assesses variation in the full shape of a bone, rather than in discrete anatomical features, which is very useful for identifying abnormalities, planning surgeries, and improving implant designs. Many scapula SSMs have recently been presented in the literature; however, each has been created using normal, healthy bones. Creating a scapula SSM derived exclusively from patients exhibiting complex glenoid bone erosions is therefore both critical and significantly challenging. In addition, several studies have quantified scapular bone properties in patients with complex glenoid erosion, but because of their discrete nature these analyses cannot be used as the basis for Finite Element Modeling (FEM). Thus, a need exists to systematically quantify the variation of bone properties in a glenoid erosion patient population using a method that captures variation across the entire bone. This can be achieved using Statistical Intensity Modeling (SIM), which can then generate scapula FEMs with realistic bone properties for evaluating orthopaedic implants. Using an SIM enables researchers to generate models with bone properties that represent a specific, known portion of the population variation, which makes the findings more generalizable. Accordingly, the main purpose of this research is to develop an SSM that mathematically quantifies, in a systematic manner, the variation in bone geometry for the complex geometry of scapulae with severe glenoid erosion, and an SIM that determines the main modes of variation in bone property distribution for use in future FEM studies. To draw meaningful statistical conclusions from the dataset, we need to compare and relate corresponding parts of the scapula. To achieve this correspondence, 3D triangulated mesh models of 61 scapulae were created from pre-operative CT scans of patients who were treated with RTSA, and a Non-Rigid (NR) registration method was then used to morph one atlas point cloud to the shapes of all other bones. However, the more complex the shape, the more difficult it is to maintain good correspondence. To overcome this challenge, we adapted and optimized an NR Iterative Closest Point (NR-ICP) method and applied it to the 61 eroded scapulae, giving each bone shape an identical mesh structure (i.e., the same number and anatomical location of points). To assess the quality of the proposed algorithm, the resulting correspondence error was evaluated by comparing the positions of ground-truth points with the corresponding point locations produced by the algorithm. The average correspondence error across all anatomical landmarks and two observers was 2.74 mm, with inter- and intra-observer reliability of ±0.31 and ±0.06 mm. Moreover, the Root-Mean-Square (RMS) and Hausdorff errors of geometric registration between the original and deformed models were 0.25±0.04 mm and 0.76±0.14 mm, respectively.
After registration, Principal Component Analysis (PCA) was applied to the deformed models as a group to describe independent modes of variation in the dataset. The robustness of the SSM was evaluated using three standard metrics: compactness, generality, and specificity. Regarding compactness, the first 9 principal modes of variation accounted for 95% of the variability, while the model's generality error and its specificity calculated over 10,000 instances were 2.6 mm and 2.99 mm, respectively. The SIM results showed that the first mode of variation accounts for overall changes in intensity across the entire bone, while the second mode represents localized changes in glenoid vault bone quality. The third mode showed changes in intensity at the posterior and inferior glenoid rim associated with posteroinferior glenoid rim erosion, which suggests avoiding fixation in this region and preferentially placing screws in the anterosuperior region of the glenoid to improve implant fixation. / Graduate
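
A statistical shape model of this kind is, at its core, PCA over the stacked coordinates of corresponded meshes. The sketch below shows that step on hypothetical data, including the compactness check (how many modes reach 95% variance); the mesh count and point count are placeholders, and real use would start from the NR-ICP-registered point clouds described above.

    import numpy as np

    # Hypothetical corresponded meshes: 61 scapulae x 5000 points x 3 coordinates.
    # Correspondence (same index = same anatomical location) is assumed to have
    # been established by the registration step.
    rng = np.random.default_rng(2)
    shapes = rng.normal(size=(61, 5000, 3))
    X = shapes.reshape(61, -1)              # one row of 15000 coords per bone

    mean_shape = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
    var = s**2 / (len(X) - 1)               # variance captured by each mode

    # Compactness: how many modes explain 95% of total shape variance?
    cum = np.cumsum(var) / var.sum()
    n_modes = int(np.searchsorted(cum, 0.95) + 1)
    print("modes for 95% variance:", n_modes)

    # Synthesize a new plausible shape: mean plus a weight along a leading mode.
    b = np.zeros(len(var))
    b[0] = 2 * np.sqrt(var[0])              # +2 SD along the first mode
    new_shape = (mean_shape + b @ Vt).reshape(5000, 3)

An SIM follows the same algebra with per-point CT intensities in place of (or alongside) the coordinates.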
324

A Correlation of Western Arctic Ocean Sedimentation during the Late Holocene with an Atmospheric Temperature Proxy Record from a Glacial Lake in the Brooks Range, Alaska

Harrison, Jeffrey Michael 22 April 2013 (has links)
No description available.
325

A Principal Component Regression Analysis for Detection of the Onset of Nocturnal Hypoglycemia in Type 1 Diabetic Patients

Zuzarte, Ian Jeromino January 2008 (has links)
No description available.
326

A Novel Semiparametric Structural Model for Electricity Forward Curves

Monteiro, Marina Dietze 23 February 2021 (has links)
Hedging against spot price volatility becomes increasingly important in deregulated power markets, so being able to model electricity forward prices is crucial in a competitive environment. Electricity differs from other commodities due to its limited storability and transportability. Furthermore, its derivatives are associated with a delivery period during which electricity is continuously delivered, which is why power forwards are often referred to as swaps. These peculiarities make the modeling of electricity contract prices a non-trivial task in which traditional models must be adapted to address the mentioned characteristics. In this context, we propose a novel semiparametric structural model that computes a continuous daily forward curve of electricity under a maximum smoothness criterion. In addition, elementary forward contracts can be represented by any parametric structure for seasonality or even for exogenous variables. Our framework acknowledges the overlapping of swaps and allows an analysis of arbitrage opportunities observed in power markets. The smooth forward curve is computed by a hierarchical optimization problem able to handle scarce data sets from low-liquidity markets. PCA results corroborate our framework's ability to explain a high percentage of the variance with only a few factors.
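
The maximum smoothness idea can be illustrated with a small quadratic program: choose daily forwards that minimize squared second differences (a discrete smoothness penalty) subject to each swap's average price being matched over its delivery period. This is a hedged sketch of the general technique, not the thesis's hierarchical formulation; the contract dates and prices are invented.

    import cvxpy as cp
    import numpy as np

    n_days = 365
    f = cp.Variable(n_days)                 # daily forward price curve

    # Hypothetical overlapping swaps: (first day, last day, observed price).
    swaps = [(0, 30, 52.0), (0, 89, 50.5), (31, 59, 49.0), (90, 179, 47.2)]

    # Each swap price equals the average of daily forwards over its delivery period.
    constraints = [cp.sum(f[a:b + 1]) / (b + 1 - a) == p for a, b, p in swaps]

    # Maximum smoothness: minimize the discrete second derivative of the curve.
    smoothness = cp.sum_squares(f[2:] - 2 * f[1:-1] + f[:-2])
    cp.Problem(cp.Minimize(smoothness), constraints).solve()

    print(np.round(f.value[:10], 2))        # first ten days of the fitted curve

Because the constraints are written on the overlapping contracts directly, any inconsistency among quoted swap prices would surface as infeasibility, which is one way to see the arbitrage analysis the abstract mentions.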
327

Assessing Crash Occurrence On Urban Freeways Using Static And Dynamic Factors By Applying A System Of Interrelated Equations

Pemmanaboina, Rajashekar 01 January 2005 (has links)
Traffic crashes have been identified as one of the main causes of death in the US, making road safety a high-priority issue that needs urgent attention. Recognizing that more, and more effective, research has to be done in this area, this thesis aims mainly at developing different statistical models related to road safety. The thesis includes three main sections: 1) overall crash frequency analysis using negative binomial models, 2) seemingly unrelated negative binomial (SUNB) models for different categories of crashes, divided by type of crash or the conditions in which they occur, and 3) safety models to determine the probability of crash occurrence, including a rainfall index estimated using a logistic regression model. The study corridor is a 36.25-mile stretch of Interstate 4 in Central Florida. For the first two sections, crashes from 1999 through 2002 were considered. Conventionally, most crash frequency analyses model all crashes together, instead of dividing them by type of crash, peaking conditions, availability of light, severity, or pavement condition; researchers have also traditionally used AADT to represent traffic volumes in their models. Both practices are examples of macroscopic crash frequency modeling. To investigate microscopic models, and to identify the significant factors related to crash occurrence, a preliminary study (the first analysis) explored the use of microscopic traffic volumes by comparing AADT/VMT with the five- to twenty-minute volumes immediately preceding the crash. The volumes just before the time of crash occurrence proved to be a better predictor of crash frequency than AADT. The results also showed that road curvature, median type, number of lanes, pavement surface type, and the presence of on/off-ramps are among the significant factors that contribute to crash occurrence. In the second analysis, crashes were divided into categories, using various roadway, geometric, and microscopic traffic variables, to identify exactly which factors relate to each. Five pairs of categories were prepared, each based on a common criterion such as type of crash: 1) multiple- and single-vehicle crashes, 2) peak and off-peak crashes, 3) dry- and wet-pavement crashes, 4) daytime and dark-hour crashes, and 5) Property Damage Only (PDO) and injury crashes. The models in each category were first estimated separately; then, to account for the correlation between the disturbance terms arising from omitted variables shared by the two models in a category, seemingly unrelated negative binomial (SUNB) regression was used to estimate the models in each category simultaneously. SUNB estimation proved advantageous for two categories: Category 1 and Category 4. Road curvature and the presence of on-ramps/off-ramps were found to be important factors related to every crash category. AADT was also significant in all models except the single-vehicle crash model. Median type and pavement surface type were among the other important factors causing crashes. The group of factors found in the model considering all crashes is thus a superset of the factors found in the individual crash categories.
The third analysis dealt with developing a logistic regression model to estimate the weather condition at a given time and location on I-4 in Central Florida, so that this information can be used in traffic safety analyses despite the lack of weather monitoring stations in the study area. To demonstrate the value of the weather information obtained from the analysis, it was used in a safety model developed by Abdel-Aty et al., 2004; including the weather information improved the safety model, yielding better prediction accuracy.
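
For readers unfamiliar with the model family, a single-equation negative binomial crash frequency model of the general kind described here can be fit in a few lines with statsmodels. The covariates and data below are invented stand-ins, and the SUNB joint estimation would require machinery beyond this sketch.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical freeway segments: crash counts plus geometric/traffic covariates.
    rng = np.random.default_rng(3)
    df = pd.DataFrame({
        "crashes":   rng.poisson(3, size=200),
        "log_aadt":  np.log(rng.uniform(2e4, 2e5, size=200)),
        "curvature": rng.uniform(0, 1, size=200),
        "has_ramp":  rng.integers(0, 2, size=200),
    })

    X = sm.add_constant(df[["log_aadt", "curvature", "has_ramp"]])
    # A negative binomial GLM handles the overdispersion typical of crash counts,
    # which is what motivates it over a plain Poisson model.
    model = sm.GLM(df["crashes"], X,
                   family=sm.families.NegativeBinomial(alpha=1.0))
    print(model.fit().summary())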
328

Selective Multivariate Applications In Forensic Science

Rinke, Caitlin 01 January 2012 (has links)
A 2009 report published by the National Research Council addressed the need for improvements in the field of forensic science. The report emphasized the need for more rigorous scientific analysis within many forensic science disciplines, and for established limitations and error rates determined through statistical analysis. This research focused on multivariate statistical techniques for the analysis of spectral data obtained for multiple forensic applications, with samples including automobile float glasses and paints, bones, metal transfers, ignitable liquids and fire debris, and organic compounds including explosives. The statistical techniques were used for two types of data analysis: classification and discrimination. Statistical methods including linear discriminant analysis and a novel soft classification method were used to classify forensic samples against a compiled library. The novel soft classification method combined three statistical steps: Principal Component Analysis (PCA), Target Factor Analysis (TFA), and Bayesian Decision Theory (BDT), providing classification based on posterior probabilities of class membership. The posterior probabilities give a statistical probability of classification that can aid a forensic analyst in reaching a conclusion. The second analytical approach applied nonparametric methods to provide the means for discrimination between samples. Nonparametric methods are performed as hypothesis tests and do not assume a normal distribution of the analytical figures of merit. The nonparametric permutation test was applied to forensic applications to determine the similarity between two samples and provide discrimination rates. Both the classification method and the discrimination method were applied to data acquired from multiple instrumental methods: Laser-Induced Breakdown Spectroscopy (LIBS), Fourier Transform Infrared Spectroscopy (FTIR), Raman spectroscopy, and Gas Chromatography-Mass Spectrometry (GC-MS). Some of these instrumental methods are currently applied to forensic applications, such as GC-MS for the analysis of ignitable liquid and fire debris samples, while others bring new instrumental methods to areas of forensic science that currently lack instrumental analysis techniques, such as LIBS for the analysis of metal transfers. The combination of instrumental techniques and multivariate statistical techniques is investigated in this research through new approaches to forensic applications, to assist in improving the field of forensic science.
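
As an illustration of the nonparametric permutation idea used for discrimination, the sketch below tests whether two sets of replicate measurements differ, using the absolute difference in group means of a figure of merit as the statistic. The data and the choice of statistic are assumptions for demonstration, not the dissertation's protocol.

    import numpy as np

    def permutation_test(a, b, n_perm=10000, seed=0):
        """Two-sample permutation test on the difference of means.

        Returns the p-value: the fraction of label shufflings whose statistic
        is at least as extreme as the observed one.
        """
        rng = np.random.default_rng(seed)
        observed = abs(a.mean() - b.mean())
        pooled = np.concatenate([a, b])
        count = 0
        for _ in range(n_perm):
            perm = rng.permutation(pooled)
            stat = abs(perm[:len(a)].mean() - perm[len(a):].mean())
            count += stat >= observed
        return (count + 1) / (n_perm + 1)

    # Hypothetical figure of merit (e.g., a peak-intensity ratio) for replicate
    # measurements of two glass fragments.
    frag1 = np.array([1.02, 0.98, 1.05, 1.01, 0.99])
    frag2 = np.array([1.10, 1.12, 1.08, 1.15, 1.09])
    print("p-value:", permutation_test(frag1, frag2))

Because the null distribution is built by shuffling rather than assumed, no normality of the figure of merit is required, which is the property the abstract highlights.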
329

Development and Application of Novel Computer Vision and Machine Learning Techniques

Depoian, Arthur Charles, II 08 1900 (has links)
This thesis proposes solutions to problems in two main areas of focus: computer vision and machine learning. Chapter 2 utilizes traditional computer vision methods, implemented in a novel manner, to successfully identify overlays contained in broadcast footage. The remaining chapters explore machine learning algorithms and apply them to big data, multi-channel image data, and ECG data. L1 and L2 principal component analysis (PCA) algorithms are implemented and tested against each other in Python, providing a baseline for future implementations. Selected algorithms from this set are then applied in conjunction with other methods to solve three distinct problems. The first problem is big data error detection, where PCA is effectively paired with statistical signal processing methods to create a weighted control algorithm. The second problem is an implementation of image fusion built to detect and remove noise from multispectral satellite imagery, which performs at a high level. The final problem examines the classification of ECG medical data: PCA is integrated into a neural network solution that incurs only a small performance degradation while requiring less than 20% of the full data size.
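
One common way to pair PCA with statistical signal processing for error detection, in the spirit of the first problem above, is to monitor each record's squared prediction error (SPE) against the PCA subspace and flag records whose residual exceeds a control limit. The sketch below shows that pattern; the data and the simple 3-sigma limit are illustrative assumptions, not the thesis's exact weighted algorithm.

    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical records: mostly well-behaved rows plus a few corrupted ones.
    rng = np.random.default_rng(4)
    X = rng.normal(size=(1000, 20))
    X[::97] += rng.normal(scale=8, size=X[::97].shape)   # inject gross errors

    pca = PCA(n_components=5).fit(X)
    X_hat = pca.inverse_transform(pca.transform(X))

    # Squared prediction error: distance from each row to the PCA subspace.
    spe = ((X - X_hat) ** 2).sum(axis=1)

    # Flag rows whose SPE exceeds a 3-sigma control limit.
    limit = spe.mean() + 3 * spe.std()
    print("flagged rows:", np.flatnonzero(spe > limit))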
330

Analysis of Transactional Data with Long Short-Term Memory Recurrent Neural Networks

Nawaz, Sabeen January 2020 (has links)
An issue authorities and banks face is fraud related to payments and transactions, where huge monetary losses occur to a party or where money laundering schemes are carried out. Previous work in the field of machine learning for fraud detection has addressed the issue as a supervised learning problem. In this thesis, we propose a model which can be used in a fraud detection system with transactions and payments that are unlabeled. The proposed model is a Long Short-Term Memory auto-encoder decoder network (LSTM-AED), which is trained and tested on transformed data. The data is transformed by reducing it to principal components and clustering it with K-means. The model is trained to reconstruct the sequence with high accuracy. Our results indicate that the LSTM-AED performs better than a random sequence-generating process in learning and reconstructing a sequence of payments. We also found that a large loss of information occurs in the pre-processing stages.
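
A minimal LSTM auto-encoder of the general shape described here can be written with Keras as below; the sequence length, feature count, and layer sizes are assumptions, and the PCA/K-means preprocessing from the abstract is represented only by the already-transformed input.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    timesteps, n_features = 30, 8   # e.g., 30 payments, 8 transformed features each

    # The encoder compresses the sequence to a single vector; the decoder tries
    # to reconstruct the full sequence from it. High reconstruction error at
    # test time can then flag anomalous (potentially fraudulent) sequences.
    model = models.Sequential([
        layers.Input(shape=(timesteps, n_features)),
        layers.LSTM(32),                          # encoder
        layers.RepeatVector(timesteps),           # bridge vector back to a sequence
        layers.LSTM(32, return_sequences=True),   # decoder
        layers.TimeDistributed(layers.Dense(n_features)),
    ])
    model.compile(optimizer="adam", loss="mse")

    # Hypothetical pre-processed sequences (already PCA-reduced and clustered).
    X = np.random.default_rng(5).normal(size=(256, timesteps, n_features))
    model.fit(X, X, epochs=3, batch_size=32, verbose=0)

    errors = ((model.predict(X, verbose=0) - X) ** 2).mean(axis=(1, 2))
    print("highest-error sequence:", int(errors.argmax()))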
