  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

The dimensionality of the Rosenberg Self-Esteem Scale (RSES) with South African University Students

Ndima, Nombeko Lungelwa Velile January 2017 (has links)
The Rosenberg Self-Esteem Scale (RSES) has been the subject of widespread debate over the years. Initially conceptualised by Rosenberg as a unidimensional measure of global self-esteem, the scale has since been challenged by studies whose evidence suggests that it is in fact a multidimensional measure. The aim of this study was to investigate the construct validity of the RSES among South African university students. The RSES was administered to students from two South African universities located in different regions (N = 304). Principal component analysis (PCA) was used to investigate the factor structure of the RSES, and correlations were run between the RSES and the General Self-Efficacy Scale (GSES) to investigate the relationship between self-esteem and self-efficacy. The PCA yielded a single-factor structure of the RSES in the South African university student sample, and a significant positive correlation was observed between self-esteem and self-efficacy. The findings therefore supported the construct validity of the RSES within the South African university context. / Mini Dissertation (MA)--University of Pretoria, 2017. / Psychology / MA / Unrestricted
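The single-factor finding can be illustrated with a PCA-style eigenvalue check. The sketch below uses simulated item responses, not the thesis data; the sample size, number of items, and loadings are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for 10-item scale responses driven by one latent factor
n, items = 304, 10
latent = rng.standard_normal(n)
loadings = rng.uniform(0.5, 0.9, size=items)
responses = np.outer(latent, loadings) + 0.5 * rng.standard_normal((n, items))

# PCA via eigendecomposition of the inter-item correlation matrix
R = np.corrcoef(responses, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]  # sort descending

# A dominant first eigenvalue (with the rest well below it) is consistent
# with a single underlying factor
print(np.round(eigvals[:3], 2))
```

With one latent factor generating the items, the first eigenvalue dwarfs the rest; a genuinely multidimensional scale would instead show several eigenvalues above the Kaiser criterion of 1.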
132

Combining brain imaging and genetic data using fast and efficient multivariate correlation analysis

Grellmann, Claudia 10 July 2017 (has links)
Many human neurological and psychiatric disorders are substantially heritable, and there is growing interest in searching for genetic variants explaining variability in disease-induced alterations of brain anatomy and function, as measured using neuroimaging techniques. The standard analysis approach in genetic neuroimaging is the mass-univariate linear modeling approach, which is disadvantageous since it cannot account for dependencies among collinear variables and has to be corrected for multiple testing. In contrast, multivariate methods include combined information from multiple variants simultaneously in the analysis, and can therefore account for the correlation structure in both the neuroimaging and the genetic data. Partial Least Squares Analysis and Canonical Correlation Analysis are common multivariate approaches, and different variants have been established for genetic neuroimaging. However, a comprehensive comparison of these methods with respect to data characteristics and their strengths and weaknesses was missing to date. This thesis elaborately compared three multivariate techniques, Sparse Canonical Correlation Analysis (Sparse CCA), Bayesian Inter-Battery Factor Analysis (Bayesian IBFA) and Partial Least Squares Correlation (PLSC), in order to give a clear statement on which method to choose for analysis in genetic neuroimaging. It was shown that for highly collinear neuroimaging data, Bayesian IBFA could not be recommended, since additional post-processing steps were required to differentiate between causal and non-informative components. In contrast, Sparse CCA and PLSC were suitable for genetic neuroimaging data. Among the two, the use of Sparse CCA was recommended in situations with relatively low-dimensional neuroimaging and genetic data, since its predictive power was higher when data dimensionality was below 400 times the sample size. For higher dimensionalities, the predictive power of PLSC exceeded that of Sparse CCA.
Thus, for multivariate modeling of high-dimensional neuroimaging-genetics associations, a preference for the usage of PLSC was indicated. The remainder of this thesis dealt with improving the computational efficiency of multivariate statistics in genetic neuroimaging, since growth in cost- and time-efficient DNA sequencing as well as neuroimaging techniques can be expected in the coming years, which will result in excessively long computation times due to increasing data dimensionality. To accommodate this large number of variables, a new and computationally efficient statistical approach named PLSC-RP was proposed, which incorporates a method for dimensionality reduction named random projection (RP) into traditional PLSC in order to represent the originally high-dimensional data in lower-dimensional spaces. Subsequently, PLSC is used for multivariate analysis of the compressed data sets. Finally, the results are transformed back to the original spaces to enable the interpretation of the original variables. It was demonstrated that the usage of PLSC-RP reduced computation times from hours to seconds compared to its state-of-the-art counterpart PLSC. Nonetheless, the accuracy of the results was not impaired, since the results of PLSC-RP and PLSC were statistically equivalent. Furthermore, PLSC-RP could be used for integrative analysis of data sets containing high-dimensional neuroimaging data, high-dimensional genetic data or both, and was therefore shown to be independent of the statistical data type. Thus, PLSC-RP opens up a wide range of possible applications.
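The project-analyze-back-transform pipeline described above can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the thesis code: the dimensions are arbitrary toy values, and the PLS correlation step is reduced to an SVD of the cross-covariance matrix of the compressed data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples, high-dimensional "imaging" X and "genetic" Y
n, p, q, k = 50, 2000, 1500, 100
X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, q))

# Step 1: random projection to k dimensions (Johnson-Lindenstrauss style)
Rx = rng.standard_normal((p, k)) / np.sqrt(k)
Ry = rng.standard_normal((q, k)) / np.sqrt(k)
Xr, Yr = X @ Rx, Y @ Ry

# Step 2: PLS correlation on the compressed data via SVD of the
# centered cross-covariance matrix
Xc = Xr - Xr.mean(axis=0)
Yc = Yr - Yr.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)

# Step 3: map the first component's weights back to the original spaces
u_orig = Rx @ U[:, 0]   # weights over the original p imaging variables
v_orig = Ry @ Vt[0, :]  # weights over the original q genetic variables
print(u_orig.shape, v_orig.shape)
```

The computational saving comes from running the SVD on a k-by-k matrix instead of a p-by-q one; the back-projection in step 3 restores interpretability over the original variables.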
133

Development of Test Batteries for Diagnostics of Motor Laterality Manifestation - Link between Cerebellar Dominance and Hand Performance

Musálek, Martin January 2012 (has links)
The aim of this study is to contribute to the standardization of new diagnostic tools assessing the motor manifestations of laterality in adults and in children aged 8 to 10 years, both in terms of determining the theoretical concept and selecting appropriate items, and in terms of verifying structural hypotheses concerning the design of acceptable models, including the diagnostic quality of individual parts of the test battery. Moreover, this study suggests a new approach to assessing motor manifestations of laterality by means of the relationship between cerebellar dominance and hand performance. The first part of this thesis deals with the concept of laterality, its manifestations, and its meaning in non-living systems and living organisms. As a human characteristic, laterality is manifested in a variety of functional and structural asymmetries. This part also discusses ways of diagnosing motor manifestations of laterality and the issue of cerebellar dominance, including its reflection in the form of asymmetry of the physiological extinction syndrome of the upper limbs. The second part focuses on the process of the standardization study, the statistical method of structural equation modelling, and the actual design of the test battery construction. The last part of this thesis presents the results...
134

Textile Hybrids : Exploring knitted textiles by challenging properties of elasticity and flexibility through combinations with wood.

Becerra Venegas, Francisca January 2020 (has links)
Textile Hybrids explores knitted textiles by challenging properties of elasticity and flexibility through yarn composition, technical construction and combinations with wood. This study is situated in the field of textile spatial design and suggests experimental ways to explore three-dimensionality in a knitted textile by changing its properties through material synergies. The outcome is a three-piece series of modular, three-dimensional, standalone textile objects. The construction, assembly and flexibility of each piece make it possible to separate all components for reassembly, recycling or reuse, suggesting further research possibilities in more tangible contexts within textile spatial design, architecture, furniture design and product design. The study derives from an interest in exploring the different ways a textile can exist on its own in a spatial context such as the home, without solely being the material covering a load-bearing framework, e.g. a couch or a chair.
135

Automatic Generation of Descriptive Features for Predicting Vehicle Faults

Revanur, Vandan, Ayibiowu, Ayodeji January 2020 (has links)
Predictive Maintenance (PM) has been increasingly adopted in the automotive industry in recent decades, alongside conventional approaches such as Preventive Maintenance and Diagnostic/Corrective Maintenance, since it provides many advantages: it proactively estimates failure before the actual occurrence and adapts to the present status of the vehicle, in turn allowing flexible maintenance schedules for efficient repair or replacement of faulty components. PM necessitates the storage and analysis of large amounts of sensor data. This requirement can be a challenge in deploying the method on board vehicles due to the limited storage and computational power of the vehicle's hardware. Hence, this thesis seeks to obtain low-dimensional descriptive features from high-dimensional data using Representation Learning. This low-dimensional representation is used for predicting vehicle faults, specifically Turbocharger-related failures. Since all the data utilized in this thesis was based on the Logged Vehicle Data (LVD), large populations of trucks could be evaluated without requiring additional measuring devices and facilities. A gradual-degradation methodology is considered for describing vehicle condition, which allows modeling the malfunction/failure as a continuous process rather than a discrete flip from a healthy to an unhealthy state. This approach eliminates the challenge of data imbalance between healthy and unhealthy samples. Two important hypotheses are presented. Firstly, Parallel Stacked Classical Autoencoders would produce better representations compared to individual Autoencoders. Secondly, employing Learned Embeddings on Categorical Variables would improve the performance of the dimensionality reduction. Based on these hypotheses, a model architecture is proposed and developed on the LVD. The model is shown to achieve good performance, close to the previous state-of-the-art research.
This thesis, finally, illustrates the potential to apply parallel stacked architectures with Learned Embeddings for the Categorical features, and a combination of feature selection and extraction for numerical features, to predict the Remaining Useful Life (RUL) of a vehicle, in the context of the Turbocharger. A performance improvement of 21.68% with respect to the Mean Absolute Error (MAE) loss with an 80.42% reduction in the size of data was observed.
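The two architectural ideas above (parallel per-group autoencoders, plus learned embeddings for categorical variables) can be sketched as follows. This is an illustrative reconstruction, not the thesis model: the autoencoders are reduced to their linear (PCA-equivalent) form, a randomly initialised lookup table stands in for a trained embedding, and all names and dimensions are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def linear_autoencoder_codes(X, code_dim):
    """Codes of an optimal linear autoencoder, obtained via SVD (PCA)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:code_dim].T

# Toy logged-vehicle-style data: two numerical sensor groups plus one
# categorical column (all simulated)
n = 200
group_a = rng.standard_normal((n, 40))     # e.g. engine-related readouts
group_b = rng.standard_normal((n, 30))     # e.g. turbo-related readouts
vehicle_type = rng.integers(0, 5, size=n)  # categorical variable, 5 levels

# Learned-embedding stand-in: a lookup table mapping category -> 3-d vector
embedding_table = rng.standard_normal((5, 3))
cat_embedded = embedding_table[vehicle_type]

# Parallel (rather than one monolithic) autoencoders: compress each feature
# group separately, then concatenate the codes with the embedded categoricals
codes = np.hstack([
    linear_autoencoder_codes(group_a, 5),
    linear_autoencoder_codes(group_b, 5),
    cat_embedded,
])
print(codes.shape)
```

The resulting low-dimensional code matrix is what a downstream RUL regressor would consume in place of the raw high-dimensional sensor data.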
136

Development and Evaluation of Dimensionally Adaptive Techniques for Improving Computational Efficiency of Radiative Heat Transfer Calculations in Cylindrical Combustors

Williams, Todd Andrew 22 June 2020 (has links)
Computational time to model radiative heat transfer in a cylindrical Pressurized Oxy-Coal (POC) combustor was reduced by incorporating the multi-dimensional characteristics of the combustion field. The Discrete Transfer Method (DTM) and the Discrete Ordinates Method (DOM) were modified to work with a computational mesh that transitions from 3D cells to axisymmetric and then 1D cells, also known as a dimensionally adaptive mesh. For the DTM, three methods were developed for selecting so-called transdimensional rays: the Single Unweighted Ray (SUR) technique, the Multiple Unweighted Ray (MUR) technique, and the Single Weighted Ray (SWR) technique. For the DOM, averaging methods for handling radiative intensity at dimensional boundaries were developed. Limitations of both solvers with adaptive meshes were identified by comparison with fully 3D results. For the DTM, the primary limit was numerical error associated with view factor calculations. For the DOM, the treatment of dimensional boundaries led to step changes that created numerical oscillations, the severity of which was lessened by both increased angular resolution and increased optical thickness. Performance of dimensionally adaptive radiation calculations, uncoupled from any other physical calculation, was evaluated with a series of sensitivity studies, including sensitivity to spatial and angular resolution, dimensional boundary placement, and reactor scaling. Runtime was most impacted by dimensional boundary placement. For the upstream case, which had 3D cells over 40% of the reactor length, the speedups versus the fully 3D calculations were 743%, 18%, 220%, and 76% for the SUR, MUR, SWR, and DOM calculations, respectively. The downstream case, which had 3D cells over the first 60% of the reactor length, had speedups of 209%, 3%, 109%, and 37%, respectively.
For the DTM, accuracy was most sensitive to optical thickness, with the average percent difference in incident heat flux for SUR, MUR, and SWR calculations versus fully 3D calculations being 0.93%, 0.86%, and 1.18%, respectively, for a reactor half the size of the baseline case. The case with four times the reactor size had average percent differences of 0.28%, 0.41%, and 0.39% for the SUR, MUR, and SWR, respectively. Accuracy of the DOM was comparatively insensitive to the different changes studied. Performance of dimensionally adaptive radiation calculations coupled with thermochemistry was also investigated for both pilot and industrial scale systems. For pilot scale systems, flux and temperature differences from either solver were less than 5% and 6%, respectively, with speedups being between 200% - 600%. For industrial systems, temperature differences as high as 15% - 20% and flux differences as high as 50% - 75% were seen. In the case of the DTM, these differences between fully 3D and adaptive results come from a combination of high property gradients and comparatively few rays being drawn and could therefore be improved, at the cost of additional computation time, by using a more sophisticated ray selection method. For the DOM, these issues stem from poor performance of the 1D portion of the solver and could therefore be improved by using a more sophisticated equation to model the radiative transfer in the 1D region.
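The core of a discrete-ordinates-style solver is marching the radiative transfer equation along discrete directions. A minimal 1-D illustration for a single ordinate in a uniform absorbing-emitting (non-scattering) medium follows; this is a generic textbook toy under assumed coefficients, not the thesis solver:

```python
import numpy as np

# March dI/ds = kappa * (I_b - I) through a uniform isothermal slab
# for one ordinate direction, using an explicit upwind step.
kappa = 2.0                     # absorption coefficient [1/m] (assumed)
I_b = 1.0                       # blackbody intensity of the medium (assumed)
n_cells, length = 100, 1.0
ds = length / n_cells

I = 0.0                         # cold, non-emitting inlet boundary
for _ in range(n_cells):
    I += ds * kappa * (I_b - I)

# Analytic solution for this slab: I = I_b * (1 - exp(-kappa * L))
print(I, 1.0 - np.exp(-kappa * length))
```

In the dimensionally adaptive setting, the subtlety the thesis addresses is how such marched intensities are averaged or redistributed where the mesh drops from 3D to axisymmetric to 1D cells.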
137

Improving Support-vector machines with Hyperplane folding

Söyseth, Carl, Ekelund, Gustav January 2019 (has links)
Background. Hyperplane folding was introduced by Lars Lundberg et al. Hyperplane folding increased the margin while suffering from a flaw, referred to as over-rotation in this thesis. The aim of this thesis is to introduce a new technique that would not over-rotate data points. This novel technique is referred to as Rubber Band folding in the thesis. The following research questions are addressed: 1) Does Rubber Band folding increase classification accuracy? 2) Does Rubber Band folding increase the margin? 3) How does Rubber Band folding affect execution time? Rubber Band folding was implemented and its results were compared to Hyperplane folding and the Support-vector machine. This comparison was done by applying stratified ten-fold cross-validation on four data sets for research questions 1 & 2. Four folds were applied for both Hyperplane folding and Rubber Band folding, as more folds can lead to over-fitting, while research question 3 used 15 folds in order to see trends not affected by over-fitting. One BMI data set was artificially made for the initial Hyperplane folding paper. Another data set labeled patients with or without a liver disorder. Another data set predicted whether patients have benign or malign cancer cells. Finally, a data set predicted whether a hepatitis patient is alive within five years. Results. Rubber Band folding achieved a higher classification accuracy than Hyperplane folding in all data sets. Rubber Band folding increased the classification accuracy in the BMI and cancer data sets, while its accuracy decreased in the liver and hepatitis data sets. Hyperplane folding's accuracy decreased in all data sets. Both Rubber Band folding and Hyperplane folding increase the margin for all data sets tested. Rubber Band folding achieved a margin higher than Hyperplane folding's in the BMI and liver data sets.
Execution time, for both the classification of data points and the training of the classifier, increases linearly per fold. Rubber Band folding has slower growth in classification time compared to Hyperplane folding. Rubber Band folding can increase the classification accuracy; in which exact cases is unknown, but it is believed to be when the data is non-linearly separable. Rubber Band folding increases the margin. Compared to Hyperplane folding, Rubber Band folding can in some cases achieve a higher increase in margin, while in other cases Hyperplane folding achieves a higher margin. Both Hyperplane folding and Rubber Band folding increase training time and classification time linearly. The difference between Hyperplane folding and Rubber Band folding in training time was negligible, while Rubber Band folding's increase in classification time was lower. This was attributed to Rubber Band folding rotating fewer points after 15 folds.
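Both folding techniques are judged by the margin they achieve. As a point of reference, the geometric margin of a given separating hyperplane can be computed directly; the sketch below is a generic definition with made-up toy data, not the thesis implementation:

```python
import numpy as np

def geometric_margin(w, b, X, y):
    """Smallest signed distance from the points to the hyperplane w.x + b = 0.

    Positive for a correctly separating hyperplane; larger means a wider
    margin, which is what folding techniques try to increase.
    """
    distances = y * (X @ w + b) / np.linalg.norm(w)
    return distances.min()

# Toy 2-D linearly separable data, labels in {-1, +1}
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = np.array([1.0, 1.0]), 0.0
print(geometric_margin(w, b, X, y))  # ≈ 2.828 (2 * sqrt(2))
```

A folding step rotates a subset of points about the hyperplane so that this minimum distance grows; the over-rotation flaw occurs when a point is rotated past the position that maximises it.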
138

Dimensionality Reduction in Healthcare Data Analysis on Cloud Platform

Ray, Sujan January 2020 (has links)
No description available.
139

Reduced and coded sensing methods for x-ray based security

Sun, Zachary Z. 05 November 2016 (has links)
Current x-ray technologies provide security personnel with non-invasive sub-surface imaging and contraband detection in various portal screening applications such as checked and carry-on baggage as well as cargo. Computed tomography (CT) scanners generate detailed 3D imagery in checked bags; however, these scanners often require significant power, cost, and space. These tomography machines are impractical for many applications where space and power are often limited such as checkpoint areas. Reducing the amount of data acquired would help reduce the physical demands of these systems. Unfortunately this leads to the formation of artifacts in various applications, thus presenting significant challenges in reconstruction and classification. As a result, the goal is to maintain a certain level of image quality but reduce the amount of data gathered. For the security domain this would allow for faster and cheaper screening in existing systems or allow for previously infeasible screening options due to other operational constraints. While our focus is predominantly on security applications, many of the techniques can be extended to other fields such as the medical domain where a reduction of dose can allow for safer and more frequent examinations. This dissertation aims to advance data reduction algorithms for security motivated x-ray imaging in three main areas: (i) development of a sensing aware dimensionality reduction framework, (ii) creation of linear motion tomographic method of object scanning and associated reconstruction algorithms for carry-on baggage screening, and (iii) the application of coded aperture techniques to improve and extend imaging performance of nuclear resonance fluorescence in cargo screening. The sensing aware dimensionality reduction framework extends existing dimensionality reduction methods to include knowledge of an underlying sensing mechanism of a latent variable. 
This method provides an improved classification rate over classical methods in both a synthetic case and a popular face classification dataset. The linear tomographic method is based on non-rotational scanning of baggage moved by a conveyor belt, and can thus be simpler, smaller, and more reliable than existing rotational tomography systems, at the expense of more challenging image formation problems that require special model-based methods. The reconstructions from this approach are comparable to those of existing tomographic systems. Finally, our coded-aperture extension of existing nuclear resonance fluorescence cargo scanning provides improved observation signal-to-noise ratios. We analyze, discuss, and demonstrate the strengths and challenges of using coded-aperture techniques in this application and provide guidance on regimes where these methods can yield gains over conventional methods.
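The coded-aperture principle behind that extension can be shown in one dimension: the detector records the scene convolved with the open-slit pattern, and correlating with a balanced version of the code recovers the scene. The 5-element code and point source below are hypothetical toy values for illustration, unrelated to the dissertation's hardware:

```python
import numpy as np

code = np.array([1.0, 0.0, 1.0, 1.0, 0.0])   # aperture: 1 = open, 0 = opaque
scene = np.zeros(5)
scene[2] = 1.0                                # single point source at index 2

# Forward model: circular convolution of scene with the aperture code (via FFT)
measurement = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(code)))

# Decoding: circular correlation with the balanced code 2*code - 1, whose
# correlation with the code approximates a delta function
decode = 2 * code - 1
recon = np.real(np.fft.ifft(np.fft.fft(measurement) * np.conj(np.fft.fft(decode))))
print(np.round(recon, 6))  # peak of 3.0 at index 2, the true source position
```

Because many slits are open at once, the coded measurement collects far more photons than a single pinhole while the decoding step restores localisation, which is the source of the signal-to-noise gain.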
140

(Ultra-)High Dimensional Partially Linear Single Index Models for Quantile Regression

Zhang, Yuankun 30 October 2018 (has links)
No description available.
