201. Effiziente Datenanalyse [Efficient Data Analysis]. Hahne, Hannes; Schulze, Frank. January 2015.
The ability to analyze large volumes of data and to extract important insights from them has become a decisive competitive advantage in the modern business world. It is therefore all the more important to proceed in a way that is, above all, traceable, reproducible, and efficient.
This contribution presents script-based data analysis as one instrument for meeting these requirements.
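As a hedged illustration of the script-based approach this abstract describes, the sketch below shows a minimal, reproducible analysis pipeline in Python; the file names, column names, and the pandas library are assumptions for illustration, not part of the original work.

```python
# Minimal sketch of a script-based, reproducible analysis (assumed example data).
import pandas as pd

RAW_FILE = "sales_2015.csv"          # hypothetical input file
REPORT_FILE = "monthly_summary.csv"  # hypothetical output file

def run_analysis(raw_file: str, report_file: str) -> pd.DataFrame:
    """Load raw data, aggregate it, and write the result to disk.

    Because every step is code, the analysis can be re-run and audited at any time.
    """
    df = pd.read_csv(raw_file, parse_dates=["date"])   # assumed 'date' column
    summary = (
        df.assign(month=df["date"].dt.to_period("M"))
          .groupby("month")["revenue"]                 # assumed 'revenue' column
          .agg(["count", "sum", "mean"])
    )
    summary.to_csv(report_file)
    return summary

if __name__ == "__main__":
    print(run_analysis(RAW_FILE, REPORT_FILE))
```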
202. Persistent homology for the quantification of prostate cancer morphology in two and three-dimensional histology. Lawson, Peter. January 2020.
The current system for evaluating prostate cancer architecture is the Gleason Grade system, which divides the morphology of cancer into five distinct architectural patterns, labeled numerically in order of increasing cancer aggressiveness, and generates a score by summing the labels of the two most dominant patterns. The Gleason score is currently the most powerful prognostic predictor of patient outcomes; however, it suffers from problems in reproducibility and consistency due to high intra-observer and inter-observer variability among pathologists. In addition, the Gleason system lacks the granularity to address potentially prognostic architectural features beyond Gleason patterns. We look to persistent homology, a tool from topological data analysis, to provide a means of evaluating prostate cancer glandular architecture. The objective of this work is to demonstrate the capacity of persistent homology to capture architectural features independently of Gleason patterns in a representation suitable for unsupervised and supervised machine learning. Specifically, we compute topological representations of purely graded prostate cancer histopathology images of Gleason patterns and show that discrete representations of persistent homology are capable of clustering prostate cancer histology into architectural groups in both two-dimensional and three-dimensional histopathology. We then demonstrate the performance of persistent homology-based features in common machine learning classifiers, indicating that persistent homology not only separates distinct architectures in prostate cancer but is also predictive of prostate cancer aggressiveness. Our results indicate the ability of persistent homology to cluster histology into distinct groups whose dominant architectural patterns are consistent with the continuum of Gleason patterns. Of particular interest is the sensitivity of persistent homology in identifying specific sub-architectural groups within single Gleason patterns, suggesting that persistent homology could represent a robust quantification method for prostate cancer architecture with higher granularity than the existing semi-quantitative measures. This work develops a framework for stratifying prostate cancer aggressiveness by architectural subtype using topological representations in a supervised machine learning setting, and lays the groundwork for augmenting traditional approaches with topological features for improved diagnosis and prognosis.
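A minimal sketch of the kind of pipeline the abstract describes, assuming gland centroids have already been extracted from a histology patch as 2D points; the ripser and scikit-learn libraries and the simple diagram summary statistics are assumptions for illustration, not the representations used in the thesis.

```python
# Sketch: persistent homology features from assumed 2D gland-centroid points,
# followed by unsupervised clustering (illustrative only).
import numpy as np
from ripser import ripser              # Vietoris-Rips persistent homology
from sklearn.cluster import KMeans

def diagram_features(points: np.ndarray) -> np.ndarray:
    """Summarize the H0 and H1 persistence diagrams with a few simple statistics."""
    dgms = ripser(points, maxdim=1)["dgms"]
    feats = []
    for dgm in dgms:
        finite = dgm[np.isfinite(dgm[:, 1])]                       # drop infinite bars
        life = finite[:, 1] - finite[:, 0] if len(finite) else np.zeros(1)
        feats += [life.sum(), life.max(), life.mean(), float(len(finite))]
    return np.array(feats)

# Hypothetical input: one point cloud of gland centroids per image patch.
rng = np.random.default_rng(0)
patches = [rng.uniform(0, 1, size=(60, 2)) for _ in range(20)]
X = np.vstack([diagram_features(p) for p in patches])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)
```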
203. Bayesian Modelling Frameworks for Simultaneous Estimation, Registration, and Inference for Functions and Planar Curves. Matuk, James Arthur. January 2021.
No description available.
204. A Study of Online Auction Processes using Functional Data Analysis. Ohalete, Nzubechukwu C. 02 June 2022.
No description available.
205. Mathematical Modeling of the Osmotic Fragility of Rabbit Red Blood Cells. Orcutt, Ronald H.; Thurmond, T. Scott; Ferslew, Kenneth E. 01 January 1995.
The osmotic fragility (OF) test is used to determine the extent of red blood cell hemolysis (RBCH) produced by osmotic stress. RBCH depends upon cell volume, surface area, and the functional integrity of cell membranes. The variation of cell lysis with stress reflects underlying cell subpopulations and the cytoskeletal functionality of their membranes. OF was determined on blood from New Zealand white rabbits. The dependence of RBCH on NaCl concentration ([NaCl]) was determined spectrophotometrically by measuring the absorbance (Abs) of released hemoglobin at 545 nm. Abs data were fitted to the equation Abs = p3 · erfc(([NaCl] − p1)/p2), where p3 reflects maximum RBCH, p1 measures the [NaCl] at 50% RBCH, and p2 gives the dispersion in [NaCl] over which RBCH occurs. Parameter values for control blood were p1 = 0.4489 ± 0.0016, p2 = 0.0486 ± 0.0016, and p3 = 0.4366 ± 0.0022. Addition of indomethacin (9.6 μg/mL) produced an increased fragility in the RBCs, characterized by increased values of p1 and p2. Normalization of the data to p3 did not change the values of p1 or p2. Our equation satisfactorily describes the variation in RBCH as a function of [NaCl]. The parameters of the equation can be used to quantitatively characterize the Abs/[NaCl] relationship and to compare pharmacological, toxicological, and pathological effects on the OF of RBCs.
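A minimal sketch of fitting the erfc model described above, using synthetic absorbance data generated from the reported control parameters; scipy and the simulated data points are assumptions for illustration, not the authors' original fitting procedure.

```python
# Sketch: fit Abs = p3 * erfc(([NaCl] - p1) / p2) to (synthetic) osmotic-fragility data.
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def abs_model(nacl, p1, p2, p3):
    """Absorbance from hemoglobin release as a function of NaCl concentration."""
    return p3 * erfc((nacl - p1) / p2)

# Synthetic data around the reported control values (p1=0.4489, p2=0.0486, p3=0.4366).
rng = np.random.default_rng(1)
nacl = np.linspace(0.30, 0.60, 25)                  # assumed % NaCl range
abs_obs = abs_model(nacl, 0.4489, 0.0486, 0.4366) + rng.normal(0, 0.005, nacl.size)

popt, pcov = curve_fit(abs_model, nacl, abs_obs, p0=[0.45, 0.05, 0.4])
perr = np.sqrt(np.diag(pcov))                       # 1-sigma parameter uncertainties
for name, val, err in zip(["p1", "p2", "p3"], popt, perr):
    print(f"{name} = {val:.4f} +/- {err:.4f}")
```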
206. Secondary Qualitative Analysis in the Family Sciences. Anderson, Leslie A.; Paulus, Trena M. 01 June 2021.
Sharing and reusing data can help researchers answer new questions and approach data from different analytical perspectives. The extant literature on data sharing has focused almost exclusively on qualitative data such as interviews and focus groups. Observational and video data capturing family interactions are a common data collection method in family science research. While quantitative analytic approaches are common, observational and video data can also lend themselves well to qualitative analysis. This paper introduces a secondary data analysis approach, referred to as methodological expansion, which involves qualitatively analyzing pre-existing data that were collected for quantitative research purposes.
207. Optimisation of galaxy identification methods on large interferometric surveys. Gqaza, Themba. 14 May 2019.
The astronomical size of the spectral data cubes that will result from the SKA pathfinders' planned large HI surveys (such as LADUMA, the Fornax HI survey, DINGO, and WALLABY) necessitates fully automated three-dimensional (3D) source finding and parametrization tools. Even a fraction of a percent difference in the performance of these automated tools corresponds to a significant number of galaxies being detected or undetected, and success or failure in resolving satellites around big spirals will affect both the low-mass and the high-mass end of the HI mass function. As a result, the performance and efficiency of these automated tools are of great importance, especially in the epoch of big data. Here I present a comprehensive comparison of the performance of the fully automated source identification and parametrization software SOFIA, the visual galaxy identification method, and the semi-automated galaxy identification method. Each galaxy identification method has been applied to the same ∼35 gigabyte 3D HI data cube. The data cube results from a blind HI imaging survey conducted with the Westerbork Synthesis Radio Telescope (WSRT). The survey mapped the overdensity corresponding to the Perseus-Pisces Supercluster filament crossing the Zone of Avoidance (ZoA) at (ℓ, b) ≈ (160°, 0.5°). A total of 211 galaxies were detected using the semi-automated method by Ramatsoku et al. [2016]. In this work, I detected 194 galaxies using the visual identification method, of which 89.7% (174) have cross-matches/counterparts in the galaxy catalogue produced through the semi-automated identification method. A total of 130 detections were made using SOFIA, of which 89 were also identified by the two other methods. I used the sample of 174 visual detections with semi-automated counterparts as a testbed to calculate the reliability and completeness achieved by SOFIA. The achieved reliability is ∼0.68, whereas the completeness is ∼0.51. Further parameter fine-tuning is necessary to get a better handle on all SOFIA parameters and achieve higher reliability and completeness values.
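As a hedged illustration of how the reliability and completeness figures quoted above follow from the cross-match counts, the sketch below recomputes them from the stated totals (130 SOFIA detections, 174 reference detections, 89 in common); the function names are illustrative and are not part of SOFIA or the thesis.

```python
# Sketch: reliability and completeness from cross-matched source counts (illustrative).

def reliability(n_matched: int, n_detected: int) -> float:
    """Fraction of the finder's detections that correspond to reference sources."""
    return n_matched / n_detected

def completeness(n_matched: int, n_reference: int) -> float:
    """Fraction of the reference sample recovered by the finder."""
    return n_matched / n_reference

# Counts quoted in the abstract.
sofia_detections = 130   # sources found by SOFIA
reference_sample = 174   # visual detections with semi-automated counterparts
cross_matched = 89       # SOFIA detections also found by the other two methods

print(f"reliability  ~ {reliability(cross_matched, sofia_detections):.2f}")   # ~0.68
print(f"completeness ~ {completeness(cross_matched, reference_sample):.2f}")  # ~0.51
```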
208. Advancing the Applicability of Fast Photochemical Oxidation of Proteins to Complex Systems. Rinas, Aimee Lynn. Indiana University-Purdue University Indianapolis (IUPUI). 08 1900.
Hydroxyl radical protein footprinting coupled with mass spectrometry has become an invaluable technique for protein structural characterization. In this method, hydroxyl radicals react with solvent-exposed amino acid side chains, producing stable, covalently attached labels. Although this technique yields beneficial information, the extensive list of known oxidation products increases the complexity of identifying and quantifying them. The current methods available for quantifying the extent of oxidation either involve manual analysis steps or limit the number of searchable modifications or the size of the sequence database. This creates a bottleneck that can result in a long and arduous analysis process, which is further compounded in a complex sample. In addition to the data complexity, the peptides carrying the oxidation products of hydroxyl radical-mediated protein footprinting experiments are typically much less abundant than their unoxidized counterparts. This is inherent to the design of the experiment, as excessive oxidation may lead to undesired conformational changes or unfolding of the protein, skewing the results. Thus, as the complexity of the systems studied with this method expands, the detection and identification of these oxidized species becomes increasingly difficult given the limitations of data-dependent acquisition (DDA) and one-dimensional chromatography. The recently published in-cell FPOP method exemplifies where this field is headed: larger and more complex systems. This dissertation describes two new methodologies and one new technology for hydroxyl radical-mediated protein footprinting, expanding the applicability of the method. First is the development of a new footprinting analysis method for both peptide- and residue-level analysis, allowing for faster quantification of results. This method utilizes a customized multilevel search workflow developed for an on-market search platform in conjunction with a quantitation platform developed using a free Excel add-in, expediting the analysis process. Second is the application of multidimensional protein identification technology (MudPIT) in combination with hydroxyl radical footprinting as a method to increase the identification of quantifiable peptides in these experiments. Last is the design and implementation of a flow system for in-cell FPOP that hydrodynamically focuses the cells and, when used, yielded a 13-fold increase in oxidized proteins and a two-order-of-magnitude increase in the dynamic range of the method.
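As a hedged illustration of the kind of quantitation such an analysis workflow performs, the sketch below computes a per-peptide extent of oxidation from extracted-ion peak areas (oxidized area divided by total area); the peptide names and area values are invented placeholders, and this is a generic FPOP-style calculation rather than the dissertation's specific search or quantitation platform.

```python
# Sketch: per-peptide extent of oxidation from (hypothetical) extracted-ion peak areas.
from collections import defaultdict

# (peptide, is_oxidized, peak_area) tuples -- placeholder values for illustration.
measurements = [
    ("LVNEVTEFAK", False, 9.2e6),
    ("LVNEVTEFAK", True,  1.1e6),   # e.g. +16 Da oxidation product
    ("QTALVELVK",  False, 4.0e6),
    ("QTALVELVK",  True,  2.5e5),
]

def extent_of_oxidation(rows):
    """Return oxidized_area / (oxidized_area + unoxidized_area) for each peptide."""
    totals = defaultdict(lambda: {"ox": 0.0, "unox": 0.0})
    for peptide, is_ox, area in rows:
        totals[peptide]["ox" if is_ox else "unox"] += area
    return {p: t["ox"] / (t["ox"] + t["unox"]) for p, t in totals.items()}

for peptide, frac in extent_of_oxidation(measurements).items():
    print(f"{peptide}: {frac:.3f}")
```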
209. Diagnostic Analysis of Postural Data using Topological Data Analysis. Siegrist, Kyle W. 02 August 2019.
No description available.
210. Periodic Performance Analysis to Predict Student Success Rates. Senol, Nurettin Selcuk. 20 May 2020.
No description available.