  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Financial Risk Profiling using Logistic Regression

Emfevid, Lovisa, Nyquist, Hampus January 2018 (has links)
As automation in the financial service industry continues to advance, online investment advice has emerged as an exciting new field. Vital to the accuracy of such a service is determining the individual investor's ability to bear financial risk. To do so, the statistical method of logistic regression is used. The aim of this thesis is to identify factors that are significant in determining the financial risk profile of a retail investor. In other words, the study seeks to map out the relationship between several socioeconomic and psychometric variables in order to develop a predictive model able to determine the risk profile. The analysis is based on survey data from respondents living in Sweden. The main findings are that variables such as income, consumption rate, experience of a financial bear market, and various psychometric variables are significant in determining a financial risk profile.
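The fitting step described above can be sketched in a few lines. This is a minimal illustration on synthetic data, not the thesis's actual survey or model: the predictor names (income, consumption rate, bear-market experience) are borrowed from the abstract but the values, coefficients, and sample size are invented for demonstration.

```python
# Logistic regression fit by Newton-Raphson (IRLS) on synthetic survey-like data.
# Predictor names are illustrative; coefficients are chosen, not estimated from
# the thesis's data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
income = rng.normal(0, 1, n)             # standardized income
consumption_rate = rng.normal(0, 1, n)   # standardized consumption rate
bear_market_exp = rng.integers(0, 2, n)  # has experienced a bear market (0/1)
X = np.column_stack([np.ones(n), income, consumption_rate, bear_market_exp])

# Generate binary risk-tolerance labels from a known coefficient vector,
# so the recovered fit can be sanity-checked against it.
true_beta = np.array([-0.5, 1.2, -0.8, 0.6])
p = 1 / (1 + np.exp(-X @ true_beta))
y = (rng.random(n) < p).astype(float)

# Newton-Raphson updates: beta += (X'WX)^-1 X'(y - mu)
beta = np.zeros(4)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    W = mu * (1 - mu)
    grad = X.T @ (y - mu)
    H = X.T @ (X * W[:, None])
    beta += np.linalg.solve(H, grad)

print(beta)  # estimates should lie near true_beta
```

Significance of each factor would then be judged from the Wald statistic, i.e. each coefficient divided by its standard error from the inverse Hessian.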
192

The Application of Post-hoc Correction Methods for Soft Tissue Artifact and Marker Misplacement in Youth Gait Knee Kinematics

Lawson, Kaila L 01 June 2021 (has links) (PDF)
Biomechanics research investigating the knee kinematics of youth participants is very limited. The most accurate method of measuring knee kinematics utilizes invasive procedures such as bone pins. However, various experimental techniques have improved the accuracy of gait kinematic analyses using minimally invasive methods. In this study, gait trials were conducted with two participants between the ages of 11 and 13 to obtain the knee flexion-extension (FE), adduction-abduction (AA) and internal-external (IE) rotation angles of the right knee. The objectives of this study were to (1) conduct pilot experiments with youth participants to test whether any adjustments were necessary in the experimental methods used for adult gait experiments, (2) apply a Triangular Cosserat Point Element (TCPE) analysis for Soft-Tissue Artifact (STA) correction of knee kinematics with youth participants, and (3) develop a code to conduct a Principal Component Analysis (PCA) to find the PCA-defined flexion axis and calculate knee angles with both STA and PCA-correction for youth participants. The kinematic results were analyzed for six gait trials on a participant-specific basis. The TCPE knee angle results were compared between uncorrected angles and another method of STA correction, Procrustes Solution, with a repeated measures ANOVA of the root mean square errors between each group and a post-hoc Tukey test. The PCA-corrected results were analyzed with a repeated measures ANOVA of the FE-AA correlations from a linear regression analysis between TCPE, PS, PCA-TCPE and PCA-PS angles. The results indicated that (1) youth experiments can be conducted with minor changes to experimental methods used for adult gait experiments, (2) TCPE and PS analyses did not yield statistically different knee kinematic results, and (3) PCA-correction did not reduce FE-AA correlations as predicted.
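The PCA-correction idea in objective (3) can be sketched as follows. This is a toy illustration, not the thesis's TCPE or PS pipeline: synthetic knee angles are generated in which adduction-abduction (AA) is contaminated by flexion-extension (FE) crosstalk, and re-expressing the angles along the principal axes of the FE-AA point cloud removes that correlation.

```python
# Sketch of PCA-based flexion-axis correction on synthetic knee angles.
# FE/AA waveforms and the crosstalk coefficient are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
fe = 60 * np.sin(2 * np.pi * t)              # flexion-extension, degrees
aa = 0.15 * fe + rng.normal(0, 0.5, t.size)  # AA contaminated by FE crosstalk

angles = np.column_stack([fe, aa])
centered = angles - angles.mean(axis=0)
# Principal axes of the FE-AA point cloud; the first axis is the
# PCA-defined flexion axis.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
corrected = centered @ vt.T  # angles re-expressed along PCA-defined axes

r_before = abs(np.corrcoef(fe, aa)[0, 1])
r_after = abs(np.corrcoef(corrected[:, 0], corrected[:, 1])[0, 1])
print(r_before, r_after)  # FE-AA correlation drops after PCA correction
```

In this idealized setting the correction drives the FE-AA correlation to zero by construction; the thesis's finding was that on real youth gait data the reduction was not as predicted.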
193

Wildfire Detection System Based on Principal Component Analysis and Image Processing of Remote-Sensed Video

Radjabi, Ryan F. 01 June 2016 (has links) (PDF)
Early detection and mitigation of wildfires can reduce devastating property damage, firefighting costs, pollution, and loss of life. This thesis proposes the method of Principal Component Analysis (PCA) of images in the temporal domain to identify a smoke plume in wildfires. Temporal PCA is an effective motion detector, and spatial filtering of the output Principal Component images can segment the smoke plume region. The effective use of other image processing techniques to identify smoke plumes and heat plumes are compared. The best attributes of smoke plume detectors and heat plume detectors are evaluated for combination in an improved wildfire detection system. PCA of visible blue images at an image sampling rate of 2 seconds per image effectively exploits a smoke plume signal. PCA of infrared images is the fundamental technique for exploiting a heat plume signal. A system architecture is proposed for the implementation of image processing techniques. The real-world deployment and usability are described for this system.
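The core claim, that temporal PCA acts as a motion detector, can be demonstrated on a synthetic sequence. This sketch is not the proposed system: frame size, sampling, and the "plume" are invented, and each frame is stacked as a row so that the principal components live in the spatial domain and isolate what changed over time.

```python
# Temporal PCA as a motion detector: static background cancels after mean
# subtraction, so PC energy concentrates where the synthetic plume moved.
import numpy as np

rng = np.random.default_rng(2)
h, w, n_frames = 32, 32, 10
background = rng.random((h, w))
frames = np.empty((n_frames, h * w))
for k in range(n_frames):
    img = background.copy()
    img[5:12, 5 + k : 12 + k] += 2.0  # bright "plume" drifts right each frame
    frames[k] = img.ravel()

mean_frame = frames.mean(axis=0)
X = frames - mean_frame
# PCA via SVD; principal component images are right singular vectors reshaped.
_, s, vt = np.linalg.svd(X, full_matrices=False)
pc1_image = np.abs(vt[0]).reshape(h, w)

motion_region = pc1_image[5:12, 5:22].mean()  # where the plume swept
static_region = pc1_image[20:, 20:].mean()    # untouched background
print(motion_region > static_region)
```

Spatial filtering of `pc1_image` (thresholding, morphology) would then segment the plume region, as the abstract describes.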
194

On facial age progression based on modified active appearance models with face texture

Bukar, Ali M., Ugail, Hassan, Hussain, Nosheen 09 1900 (has links)
Age progression, the reconstruction of facial appearance with a natural ageing effect, has several applications, including the search for missing people and the identification of fugitives. The majority of age progression methods reported in the literature are data-driven: they learn from training data and utilise statistical models such as 3D morphable models and active appearance models (AAM). Principal component analysis (PCA), a vital part of these models, has the unfortunate drawback of averaging out texture details; it works as a low-pass filter, so many skin deformations and minor facial details fade. Interestingly, recent work in 2D and 3D animation has shown that patches of the human face are somewhat similar when compared in isolation. Researchers have therefore proposed generating novel faces by compositing small face patches, usually drawn from large image databases. Following these ideas, we propose a novel age progression model which synthesises aged faces using a hybrid of these two techniques. First, an invertible model of age synthesis is developed using AAM and sparse partial least squares regression (sPLS). Then the texture details of the face are enhanced using the patch-based synthesis approach. Our results show that the hybrid algorithm produces both unique and realistic images, and that the identity and ageing effects of subjects are more strongly emphasised.
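The "PCA as low-pass filter" point that motivates the hybrid method can be verified numerically. This sketch uses synthetic vectors (a smooth component plus fine "texture" noise) rather than face images, and the dimensions and component count are arbitrary; it only demonstrates that truncated-PCA reconstruction smooths away high-frequency detail.

```python
# Truncated PCA reconstruction loses high-frequency "texture": the mean
# absolute first difference shrinks after projecting onto a few components.
# Data is synthetic, not face imagery.
import numpy as np

rng = np.random.default_rng(3)
n_samples, dim, n_keep = 100, 256, 5
smooth = np.sin(np.outer(rng.random(n_samples), np.arange(dim)) * 0.05)
texture = 0.3 * rng.normal(size=(n_samples, dim))  # fine detail
X = smooth + texture

mean = X.mean(axis=0)
Xc = X - mean
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
basis = vt[:n_keep]                    # leading principal components
X_rec = mean + (Xc @ basis.T) @ basis  # low-rank reconstruction

# High-frequency content measured by first differences along each vector
hf_orig = np.abs(np.diff(X, axis=1)).mean()
hf_rec = np.abs(np.diff(X_rec, axis=1)).mean()
print(hf_orig, hf_rec)  # reconstruction is smoother: hf_rec < hf_orig
```

The patch-based synthesis stage of the proposed method exists precisely to restore this lost high-frequency content.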
195

Development and Application of a Congruence-Based Knee Model in Anterior Cruciate Ligament Injured Adolescents

Warren, Claire Emily 28 November 2022 (has links)
Objective: Patient-specific musculoskeletal models have emerged as a reliable method to study how tibiofemoral joint (TFJ) morphology influences anterior cruciate ligament (ACL) injuries. However, there are no such models for adolescent populations that can be scaled to accommodate growth. To serve as the foundation for such models, the objective of this thesis was therefore to i) build a patient-specific model of natural knee motion in an ACL-injured (ACLi) adolescent sample using joint congruency and ii) attempt to reconstruct patient-specific simplified articular contacts using principal component analysis (PCA). Design: Twelve magnetic resonance images (MRI) of ACLi adolescents were segmented and used to generate spheres of simplified TFJ articulations. A congruence-based optimization algorithm was used to determine the envelope of tibiofemoral configurations that optimize joint congruency. Descriptive statistics were used to compare model outputs to existing literature. Combinations of marker trajectories and anthropometrics were used to determine the feasibility of reconstructing articular sphere simplifications using PCA. Root-mean-squared error (RMSE) was used to compare predicted sphere contacts to MRI-extracted contacts. Results: On average, the femur was slightly abducted and externally rotated with respect to the tibia, with ranges of motion (ROM) of 1.60º ± 0.66 and 7.64º ± 2.34 across 102° of flexion, respectively. The percent elongation of the posterior cruciate ligament (PCL) varied the most across participants (8.65 ± 6.2%) compared to the ACL (2.34 ± 2.1%), MCL (1.41 ± 0.5%) and LCL (1.75 ± 1.6%). The combination of femur markers and anthropometrics reconstructed simplified tibiofemoral articulations best, but not within 5 mm of RMSE.
Conclusion: Inter-subject variability in passive kinematic motion derived from patient-specific morphology highlights the need for personalized and accessible musculoskeletal models in growing populations. Furthermore, simplified distal femur morphology can be reconstructed from anthropometrics and marker positions, but proximal tibia morphology requires more information.
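The "within 5 mm of RMSE" criterion used to judge the reconstructions can be sketched concretely. This is an illustration on invented 3-D contact points, not the thesis's data: twelve hypothetical participants, with predicted contacts perturbed from the MRI-extracted ones by an error large enough that the criterion fails, mirroring the reported result.

```python
# RMSE between predicted and MRI-extracted sphere contact points (synthetic).
# Participant count matches the thesis; coordinates and error level are invented.
import numpy as np

rng = np.random.default_rng(4)
mri_contacts = rng.normal(0, 10, (12, 3))             # 12 participants, xyz in mm
predicted = mri_contacts + rng.normal(0, 6, (12, 3))  # imperfect reconstruction

# RMSE over the per-participant 3-D contact-point errors
rmse = np.sqrt(np.mean(np.sum((predicted - mri_contacts) ** 2, axis=1)))
within_5mm = rmse < 5.0
print(rmse, within_5mm)
```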
196

GRAPH-BASED ANALYSIS OF NON-RANDOM MISSING DATA PROBLEMS WITH LOW-RANK NATURE: STRUCTURED PREDICTION, MATRIX COMPLETION AND SPARSE PCA

Hanbyul Lee (17586345) 09 December 2023 (has links)
<p dir="ltr">In most theoretical studies of missing data analysis, data is typically assumed to be missing according to a specific probabilistic model. However, such an assumption may not accurately reflect real-world situations, and sometimes missingness is not purely random. In this thesis, our focus is on analyzing incomplete data matrices without relying on any probabilistic model assumptions for the missing schemes. To characterize a missing scheme deterministically, we employ a graph whose adjacency matrix is a binary matrix indicating whether each matrix entry is observed or not. Leveraging the properties of this graph, we mathematically represent the missing pattern of an incomplete data matrix and conduct a theoretical analysis of how this non-random missing pattern affects the solvability of specific problems related to incomplete data. This dissertation primarily focuses on three types of incomplete data problems characterized by their low-rank nature: structured prediction, matrix completion, and sparse PCA.</p><p dir="ltr">First, we investigate a basic structured prediction problem, which involves recovering binary node labels on a fixed undirected graph, where noisy binary observations corresponding to edges are given. Essentially, this setting parallels a simple binary rank-1 symmetric matrix completion problem, where missing entries are determined by a fixed undirected graph. Our aim is to establish the fundamental limit bounds of this problem, revealing a close association between the limits and graph properties such as connectivity.</p><p dir="ltr">Second, we move on to the general low-rank matrix completion problem. In this study, we establish provable guarantees for exact and approximate low-rank matrix completion that apply to any non-random missing pattern, by utilizing the observation graph corresponding to the missing scheme. 
We theoretically and experimentally show that the standard constrained nuclear norm minimization algorithm can successfully recover the true matrix when the observation graph is well-connected and has similar node degrees. We also verify that matrix completion is achievable with a near-optimal sample complexity rate when the observation graph has uniform node degrees and its adjacency matrix has a large spectral gap.</p><p dir="ltr">Finally, we address the sparse PCA problem, which also has approximately low-rank structure. Missing data is common in situations where sparse PCA is useful, such as single-cell RNA sequence data analysis. We propose a semidefinite relaxation of the non-convex $\ell_1$-regularized PCA problem to solve sparse PCA on incomplete data. We demonstrate that the method is particularly effective when the observation pattern has favorable properties. Our theory is substantiated through synthetic and real data analysis, showcasing the superior performance of our algorithm compared to other sparse PCA approaches, especially when the observation pattern has these favorable properties.</p>
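The graph quantities the dissertation ties to recoverability, connectivity, degree uniformity, and spectral gap, are easy to compute. This sketch contrasts a well-connected random observation graph with a disconnected two-block pattern; the graph sizes and observation probability are illustrative, and the gap here is simply the difference between the two largest-magnitude adjacency eigenvalues.

```python
# Observation patterns as graphs: a well-connected pattern has a large
# adjacency spectral gap; a disconnected (two-block) pattern does not.
import numpy as np

rng = np.random.default_rng(5)
n = 30
# Dense, near-uniform observation pattern (well-connected graph)
good = (rng.random((n, n)) < 0.6).astype(float)
good = np.triu(good, 1)
good = good + good.T
# Pathological pattern: two blocks with no observations between them
bad = np.zeros((n, n))
bad[:15, :15] = good[:15, :15]
bad[15:, 15:] = good[15:, 15:]

def spectral_gap(adj):
    """Gap between the two largest-magnitude eigenvalues of the adjacency."""
    ev = np.sort(np.abs(np.linalg.eigvalsh(adj)))[::-1]
    return ev[0] - ev[1]

print(spectral_gap(good), spectral_gap(bad))  # disconnected graph: tiny gap
```

Intuitively, with the two-block pattern no algorithm can relate the blocks' factors to each other, which is the deterministic analogue of the solvability limits the thesis studies.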
197

De-mixing Decision Representations in Rodent dmPFC to Investigate Strategy Change During Delay Discounting

Shelby M White (6615890) 31 May 2023 (has links)
<p>Preclinical rodent models were used to investigate the neural signatures of strategy change during the delay discounting decision making task. Neural signatures were assessed using advanced statistical techniques (de-mixed principal component analysis). </p>
198

Organic Petrography and Geochemistry of the Bakken Formation, Williston Basin, ND USA

Abdi, Zain 01 May 2023 (has links) (PDF)
The environmental processes and conditions controlling productivity and organic matter (OM) accumulation/preservation as well as bottom–water redox conditions in the lower black shale (LBS) and upper black shale (UBS) members of the Devonian-Mississippian (D–M) Bakken Formation were evaluated utilizing trace metal (TM) concentrations, degree of pyritization (DOPT), enrichment factors (EF) of TMs, bi–metal ratios (V/Cr, V/(V+Ni), Ni/Co, U/Th), total sulfur (ST) vs. iron (Fe), total organic carbon (TOC), carbon–sulfur–iron relationships (C–S–Fe), as well as Mo–TOC and Mo EF–U EF relationships. High-resolution (1- to 3-cm scale) chemostratigraphic records were generated for twelve drill cores, four of which closely flank the N–S-trending axis of the Nesson Anticline, proximal to the center of the Williston Basin in northwest North Dakota, USA. Furthermore, five of the twelve drill cores were selected (sample selection was based on down–core spacing and TM concentrations) for petrographic and Rock-Eval analysis to assess variations in kerogen type, quantity, quality, and thermal maturity (based on solid bitumen reflectance (%SBRo), vitrinite reflectance equivalence (%VRE), Rock–Eval Tmax–derived vitrinite reflectance (%Ro)) from immature to condensate, wet gas hydrocarbon generation windows. Degree of pyritization (DOPT) values (0.25 to 1.0) indicate that bottom waters were frequently dysoxic (> 60%) with intermittent aerobic and anoxic/euxinic conditions which is in agreement with C–S–Fe and total ST vs. Fe assessments of paleoredox conditions and sedimentological evidence. Furthermore, using published Mo–TOC relationships from modern anoxic-euxinic basins, it is estimated that renewal time of the sub-chemoclinal water mass during accumulation of the LBS and UBS approximated 10 and 30 yrs., respectively. 
Agreement is also seen between Mo/TOC and Mo EF/U EF where both suggest the Bakken shales were deposited under relatively unrestricted water mass conditions resulting in consistent renewal of TMs into the basin. However, bi–metal ratios suggest > 80% of samples were deposited under suboxic to anoxic/euxinic conditions. Trace metal concentrations for the Bakken Fm. show considerable range for Co (0–10324 ppm), Mo (0–2018 ppm), Ni (0–1574 ppm), U (0–1604 ppm), and V (0–3194 ppm), and bi–metal ratios for the Bakken Fm. are up to 5x greater than those reported for other D–M black shale formations. The Bakken black shales represent a unique sedimentary system where the EF of various TMs such as Cu (6.2–7.7), Mo (219.7–237.8), Ni (9.4–10.2), U (20.6–29.3), V (9.9–14.2), and Zn (10.4–12.2) as well as total organic carbon contents (LBS = 10.80 and UBS = 11.80 avg. wt.%) are considerably higher than other Devonian–Mississippian black shales. In this study, raw distributions of elemental concentrations combined with bivariate and principal component analysis (PCA) were used to elucidate the processes that could have contributed to the high EF of TMs in the Bakken shales. Total organic carbon shares heavier PCA component loadings (>0.445) and stronger correlation coefficients (r) with Cu, Mo, Ni, U, V, and Zn rather than with pyrite-associated (As, Co, Fe, and S) elements, suggesting that TOC played a primary role in the scavenging and accumulation of TMs in the sediments. Reducing conditions within bottom waters or sediment pore waters may have accelerated the accumulation of redox-sensitive Cu, Mo, Ni, V, and Zn introduced into the sediments via primarily an organic matter (OM) detritus host and most likely played a secondary role in the enrichment of TMs. 
The high EF of TMs observed in the Bakken shales may be the result of the frequent resupply of TMs into basin waters, enhanced primary productivity that is necessary in scavenging TMs from the water column, the presence of H2S within sediment pore or bottom waters, or possibly secondary processes associated with basin-wide fluid and hydrocarbon migration. Factors controlling TM accumulation during time of deposition (e.g., TM availability, bottom-water redox conditions, adsorption onto organic matter) and during diagenesis and catagenesis (e.g., alteration and breakdown of organic matter, movement of fluid hydrocarbons or other basinal fluids) likely contribute to the lack of agreement between redox proxies, and subsequently, the lack of applicability of bi–metal ratios (i.e., V/Cr, V/(V+Ni), Ni/Co, U/Th) in assessing bottom–water conditions for the Bakken shales. Solid bitumen (SB), a secondary organic matter formed as a residue after hydrocarbon generation (through either sufficient thermal maturation or microbial degradation) and expulsion, is primarily dispersed within the mineral matrix and increases in quantity with increasing thermal maturity. Rock-Eval II and HAWK analyzers were used to measure and estimate the hydrogen index (HI; avg. 201 mg HC/g TOC), oxygen index (OI; avg. 7 mg CO2/g TOC), S1 (free hydrocarbons; avg. 8.0 mg HC/g rock), S2 (hydrocarbons generated after cracking kerogen; avg. 24.3 mg HC/g rock), and %Ro (0.60–1.03%; estimated from Tmax). The HI and OI values are calculated from TOC as well as S2 and S3 (oxygen bonded to hydrocarbons). Plots of HI vs. Tmax (ºC) and HI vs. OI as well as the S2 vs. S3 ratio were utilized to determine the type of kerogen, the primary OM that is insoluble in organic solvents. However, these relationships are not in agreement with kerogen typing based on petrographic observations, where samples from more thermally mature cores plot as Type III (vitrinite) kerogen instead of the observed Type I/II (marine algae) kerogen. 
This is largely due to the abundant presence of SB in the more thermally mature section of the Bakken (Rock-Eval Ro = 0.83–1.03%), as SB is known to have a lower HI content than Type II kerogen. Petrographic evidence shows a greater abundance of alginite and amorphous organic matter (AOM, or bituminite) in the thermally less mature (Rock-Eval Ro = 0.60–0.83%) section of the Bakken compared to the greater abundance of dispersed SB in the more thermally mature section, where AOM is absent. Early research on the Bakken Fm. reported lower than expected vitrinite reflectance values attributed to vitrinite “suppression”. The overall lack of vitrinite and abundance of solid bitumen in these shales suggests that these early attempts likely reported solid bitumen reflectance rather than vitrinite reflectance. More recent attempts to assess the thermal maturity of the Bakken Fm. black shales have measured and converted SBRo to vitrinite reflectance equivalent (VRE). However, samples selected for SBRo by some previous workers have included heterogeneous, granular, and highly reflecting SB, which introduces error in the measurements. As such, reported reflectance values are most likely lower than they would be if smooth, homogeneous solid bitumen with no inclusions were measured. For this project, smooth and homogeneous SB was measured to produce consistent and reliable VRE values to assess the thermal maturity gradient from the Bakken Fm. basin margins to the depocenter. Blue-light fluorescence petrography was done to support thermal maturity assessments. Results from SBRo, Rock-Eval Ro, VRE, and blue-light fluorescence observations suggest that cores from the current study range from the early oil window into the condensate/wet gas window.
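The correlation screening behind the TOC-hosting argument (TOC correlating with Cu, Mo, Ni, U, V, Zn but not with pyrite-associated elements) can be sketched on synthetic data. The element relationships below are invented for illustration, not Bakken measurements: Mo is constructed to covary with TOC, while Fe is independent of it.

```python
# Correlation screening of trace metals against TOC on synthetic geochemical
# data. Values are illustrative; only the qualitative pattern matters.
import numpy as np

rng = np.random.default_rng(6)
n = 120
toc = rng.uniform(5, 20, n)           # wt.% total organic carbon
mo = 15 * toc + rng.normal(0, 20, n)  # OM-hosted redox-sensitive metal, ppm
fe = rng.uniform(1, 6, n)             # pyrite-associated, independent of TOC

r_mo = np.corrcoef(toc, mo)[0, 1]
r_fe = abs(np.corrcoef(toc, fe)[0, 1])
print(r_mo, r_fe)  # strong TOC-Mo correlation, weak TOC-Fe correlation
```

In the thesis the same logic is carried further with PCA loadings: elements that load on the same principal component as TOC are interpreted as organically scavenged.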
199

Statistical methods to identify differentially methylated regions using Illumina methylation arrays

Zheng, Yuanchao 08 February 2024 (has links)
DNA methylation is an epigenetic mechanism that usually occurs at CpG sites in the genome. Both sequencing and array-based techniques are available to detect methylation patterns. Whole-genome bisulfite sequencing is the most comprehensive but cost-prohibitive approach; array-based methods, such as Illumina methylation arrays, are an affordable alternative that assays a fixed set of genomic loci. Differentially methylated regions (DMRs) are genomic regions with specific methylation patterns across multiple CpG sites that associate with a phenotype. Methylation at nearby sites tends to be correlated, so studying sets of sites can be more powerful for detecting methylation differences, and can reduce the multiple-testing burden, compared to testing individual sites. Several statistical approaches exist for identifying DMRs, and a few prior publications compared the performance of several commonly used DMR methods. However, as far as we know, no comprehensive comparisons have been made based on genome-wide simulation studies. This dissertation provides comprehensive recommendations for DMR analysis based on genome-wide evaluations of existing DMR tools and presents the development of a novel approach to increase the power to identify DMRs with clinical value in genomic research. The second chapter presents genome-wide null simulations comparing five commonly used array-based DMR methods (Bumphunter, comb-p, DMRcate, mCSEA and coMethDMR) and identifies coMethDMR as the only approach that consistently yields appropriate Type I error control. We suggest that a genome-wide evaluation of false positive (FP) rates is critical for DMR methods. The third chapter develops a novel Principal Component Analysis based DMR method (denoted DMRPC), which demonstrates the ability to identify DMRs using genome-wide methylation arrays with well-controlled FP rates at the level of 0.05. 
Compared to coMethDMR, DMRPC is a robust and powerful novel DMR tool that can examine more genomic regions and extract signals from low-correlation regions. The fourth chapter applies the new DMR approach DMRPC in two “real-world” datasets and identifies novel DMRs that are associated with several inflammatory markers.
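The PCA-based idea can be sketched in miniature. This is a toy in the spirit of a PCA-based DMR test, not the actual DMRPC implementation: methylation at correlated CpG sites within one region is summarized by its first principal component, and that summary score is tested against a binary phenotype. All data, effect sizes, and dimensions are simulated.

```python
# Toy PCA-based region test: summarize a region's CpGs by PC1, then compare
# PC1 scores between phenotype groups. Simulated data; illustrative only.
import numpy as np

rng = np.random.default_rng(7)
n_subjects, n_cpgs = 200, 8
phenotype = rng.integers(0, 2, n_subjects).astype(float)
# Correlated CpGs in one region, shifted upward in cases
region_effect = 0.8 * phenotype[:, None]
shared = rng.normal(0, 1, (n_subjects, 1))  # drives inter-CpG correlation
meth = shared + region_effect + rng.normal(0, 0.5, (n_subjects, n_cpgs))

centered = meth - meth.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ vt[0]  # one summary score per subject for the region

# Two-sample t statistic of PC1 between phenotype groups
g0, g1 = pc1[phenotype == 0], pc1[phenotype == 1]
se = np.sqrt(g0.var(ddof=1) / g0.size + g1.var(ddof=1) / g1.size)
t_stat = (g1.mean() - g0.mean()) / se
print(abs(t_stat))  # large |t| flags the region as differentially methylated
```

Summarizing the region by one score is what cuts the multiple-testing burden relative to testing each CpG site separately.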
200

On the Performance of JPEG2000 and Principal Component Analysis in Hyperspectral Image Compression

Zhu, Wei 05 May 2007 (has links)
Because of the vast data volume of hyperspectral imagery, compression becomes a necessary process for hyperspectral data transmission, storage, and analysis. Three-dimensional discrete wavelet transform (DWT) based algorithms are particularly of interest due to their excellent rate-distortion performance. This thesis investigates several issues surrounding efficient compression using JPEG2000. Firstly, the rate-distortion performance is studied when Principal Component Analysis (PCA) replaces DWT for spectral decorrelation with the focus on the use of a subset of principal components (PCs) rather than all the PCs. Secondly, the algorithms are evaluated in terms of data analysis performance, such as anomaly detection and linear unmixing, which is directly related to the useful information preserved. Thirdly, the performance of compressing radiance and reflectance data with or without bad band removal is compared, and instructive suggestions are provided for practical applications. Finally, low-complexity PCA algorithms are presented to reduce the computational complexity and facilitate the future hardware design.
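The first issue, spectral decorrelation with a subset of PCs rather than all of them, can be sketched numerically. This toy does not reproduce JPEG2000 coding; it builds a synthetic "hyperspectral" cube from a few endmember spectra (dimensions and noise level invented) and shows that a handful of leading PCs retains nearly all spectral energy.

```python
# PCA spectral decorrelation of a synthetic hyperspectral cube, keeping only
# a subset of principal components. The spatial/JPEG2000 stage is omitted.
import numpy as np

rng = np.random.default_rng(8)
bands, pixels, n_keep = 50, 1000, 3
# Each pixel is a mixture of 3 endmember spectra plus sensor noise
endmembers = rng.random((3, bands))
abundances = rng.dirichlet(np.ones(3), size=pixels)
cube = abundances @ endmembers + 0.01 * rng.normal(size=(pixels, bands))

mean_spec = cube.mean(axis=0)
Xc = cube - mean_spec
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ vt[:n_keep].T            # decorrelated, truncated representation
recon = mean_spec + pcs @ vt[:n_keep]

# Reconstruction SNR from the truncated representation
snr_db = 10 * np.log10(np.sum(cube**2) / np.sum((cube - recon)**2))
print(round(snr_db, 1))  # a few PCs retain nearly all spectral energy
```

In the compression pipeline the thesis studies, only the `pcs` planes would be passed to the 2D JPEG2000 coder, which is where the rate-distortion tradeoff over the number of retained PCs arises.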
