301

The Association Between Risk Taking And Personality

Anic, Gabriella 11 April 2007 (has links)
The aim of this study was to examine the association between personality and risk taking in a sample of 461 older adults from the Charlotte County Healthy Aging Study (CCHAS). The personality factors of openness to experience, extraversion, neuroticism, agreeableness and conscientiousness were measured with the NEO Five Factor Inventory. Risk taking was measured with an 8-item questionnaire and a single-item question that assessed subjects' participation in sensation-seeking behaviors. Spearman correlation coefficients, hierarchical linear regression and hierarchical logistic regression were used to assess the associations. Consistent with past research, high scores on openness to experience (β = 0.16, P < .0001) and low scores on neuroticism (β = -0.14, P < .01) and agreeableness (β = -0.16, P < .01) were associated with the total score of the 8-item risk-taking questionnaire. The single-item risk question was also associated with openness [OR = 1.09; 95% CI: 1.05-1.13], neuroticism [OR = 0.94; 95% CI: 0.90-0.97] and agreeableness [OR = 0.95; 95% CI: 0.92-0.99]. After stratifying by gender, only openness remained significantly associated with risk taking. Interaction terms between gender and the personality factors were added to the models to test whether gender was an effect modifier. Although personality differences existed between men and women, none of the interaction terms were statistically significant.
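
A minimal sketch of the single-item logistic analysis described above, using statsmodels on simulated stand-in data; the CCHAS variables and exact model specification are not reproduced here, and all column names are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 461
df = pd.DataFrame({
    "neo_openness": rng.normal(25, 6, n),
    "neo_neuroticism": rng.normal(18, 7, n),
    "neo_agreeableness": rng.normal(33, 5, n),
})
# Simulate a binary sensation-seeking response, for illustration only
logit = -2 + 0.09 * (df["neo_openness"] - 25)
df["risk_item"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["neo_openness", "neo_neuroticism", "neo_agreeableness"]])
fit = sm.Logit(df["risk_item"], X).fit(disp=0)

# Odds ratios with 95% confidence intervals, the form reported in the abstract
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```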
302

SIMD Algorithms for Single Link and Complete Link Pattern Clustering

Arumugavelu, Shankar 08 March 2007 (has links)
Clustering techniques play an important role in exploratory pattern analysis, unsupervised pattern recognition and image segmentation applications. Clustering algorithms are computationally intensive in nature. This thesis proposes new parallel algorithms for Single Link and Complete Link hierarchical clustering. The parallel algorithms have been mapped onto a SIMD machine model with a linear interconnection network. The model consists of a linear array of N (the number of patterns to be clustered) processing elements (PEs) interfaced to a host machine; the interconnection network provides inter-PE and PE-to-host/host-to-PE communication. For single link clustering, each PE maintains a sorted list of its first log N nearest neighbors and the host maintains a heap of the root elements of all the PEs. The determination of the smallest entry in the distance matrix and the update of the distance matrix are achieved in O(log N) time. In the case of complete link clustering, each PE maintains a heap of the inter-pattern distances. This significantly reduces the computation time for determining the smallest entry in the distance matrix during each iteration from O(N²) to O(N), as the root element in each PE gives its nearest neighbor. The proposed algorithms are faster and simpler than previously known algorithms for hierarchical clustering. For clustering a data set with N patterns, using N PEs, the computation time of the single link clustering algorithm is shown to be O(N log N) and the time complexity of the complete link clustering algorithm is shown to be O(N²). The parallel algorithms have been verified through simulations on the Intel iPSC/2 parallel machine.
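
A toy serial emulation of the data layout described above for the complete-link case may clarify the O(N) root scan: each simulated PE holds a min-heap of its inter-pattern distances, and the host finds the smallest entry of the distance matrix by examining only the N heap roots. The merge-and-update logic of the actual algorithm is omitted:

```python
import heapq
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 2))  # N = 8 patterns, one per simulated PE
N = len(X)

# Each PE builds a min-heap of (distance, other_pattern) pairs
pe_heaps = []
for i in range(N):
    h = [(float(np.linalg.norm(X[i] - X[j])), j) for j in range(N) if j != i]
    heapq.heapify(h)
    pe_heaps.append(h)

# Host step: the root of each PE's heap is that PE's nearest neighbor, so the
# global minimum of the distance matrix is the minimum over the N roots.
i = min(range(N), key=lambda k: pe_heaps[k][0][0])
d, j = pe_heaps[i][0]
print(f"closest pair: patterns {i} and {j}, distance {d:.3f}")
```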
303

Bayesian Hierarchical Meta-Analysis of Asymptomatic Ebola Seroprevalence

Brody-Moore, Peter 01 January 2019 (has links)
The continued study of asymptomatic Ebolavirus infection is necessary to develop a more complete understanding of Ebola transmission dynamics. This paper conducts a meta-analysis of eight studies that measure seroprevalence (the proportion of subjects that test positive for anti-Ebolavirus antibodies in their blood) in subjects with household exposure or known case-contact with Ebola but who have shown no symptoms. In our two random-effects Bayesian hierarchical models, we find estimated seroprevalences of 8.76% and 9.72%, significantly higher than the 3.3% found by a previous meta-analysis of these eight studies. We also produce a variation of this meta-analysis that excludes two of the eight studies; in this model, we find an estimated seroprevalence of 4.4%, much lower than in our first two Bayesian hierarchical models. We believe a random-effects model more accurately reflects the heterogeneity between studies, and thus that asymptomatic Ebola infection is more seroprevalent than previously believed among subjects with household exposure or known case-contact. However, a strong conclusion cannot be reached on the seriousness of asymptomatic Ebola without an international testing standard and more data collection using this adopted standard.
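
A minimal sketch of a random-effects Bayesian hierarchical model of the kind described, written in PyMC with hypothetical per-study counts; the eight studies' actual data and the thesis's exact priors are not reproduced here:

```python
import numpy as np
import pymc as pm

y = np.array([2, 5, 1, 8, 3, 4, 6, 2])          # hypothetical seropositives
n = np.array([40, 60, 25, 90, 50, 45, 70, 30])  # hypothetical sample sizes

with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 1.5)       # mean log-odds of seroprevalence
    tau = pm.HalfNormal("tau", 1.0)      # between-study heterogeneity
    theta = pm.Normal("theta", mu, tau, shape=len(y))  # study-level log-odds
    pm.Binomial("y", n=n, p=pm.math.invlogit(theta), observed=y)
    pm.Deterministic("pooled_prev", pm.math.invlogit(mu))
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

print(idata.posterior["pooled_prev"].mean())
```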
304

ACCOUNTING FOR MATCHING UNCERTAINTY IN PHOTOGRAPHIC IDENTIFICATION STUDIES OF WILD ANIMALS

Ellis, Amanda R. 01 January 2018 (has links)
I consider statistical modelling of data gathered by photographic identification in mark-recapture studies and propose a new method that incorporates the inherent uncertainty of photographic identification into the estimation of abundance, survival and recruitment. A hierarchical model is proposed that accepts, as data, scores assigned to pairs of photographs by pattern recognition algorithms and allows for uncertainty in matching photographs based on these scores. The new models incorporate latent capture histories that are treated as unknown random variables informed by the data, in contrast to past models, in which the capture histories are fixed. The methods properly account for uncertainty in the matching process and avoid the need for researchers to confirm matches visually, which may be a time-consuming and error-prone process. Through simulation and application to data from a photographic identification study of whale sharks, I show that the proposed method produces estimates similar to those obtained when the true matching nature of the photographic pairs is known. I then extend the method to incorporate auxiliary information to predetermine matches and non-matches between pairs of photographs in order to reduce computation time when fitting the model. Additionally, methods previously applied to record linkage problems in survey statistics are borrowed to predetermine matches and non-matches based on scores that are deemed extreme. I fit the new models in the Bayesian paradigm via Markov chain Monte Carlo using custom code that is available by request.
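
A sketch of the score-thresholding idea borrowed from record linkage: pairs whose scores are extreme are predetermined as matches or non-matches, and only the ambiguous middle band retains latent match indicators for the MCMC. The scores and cutoffs here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
scores = rng.normal(0, 1, 1000)             # hypothetical pairwise match scores
lo, hi = np.quantile(scores, [0.05, 0.95])  # "extreme" cutoffs (assumed values)

non_match = scores <= lo            # predetermined non-matches
match = scores >= hi                # predetermined matches
latent = ~(match | non_match)       # left uncertain, sampled in the MCMC

print(f"fixed matches: {match.sum()}, fixed non-matches: {non_match.sum()}, "
      f"latent pairs for MCMC: {latent.sum()}")
```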
305

Prediction of Hierarchical Classification of Transposable Elements Using Machine Learning Techniques

Panta, Manisha 05 August 2019 (has links)
Transposable Elements (TEs), or jumping genes, are DNA sequences that have an intrinsic capability to move within a host genome from one genomic location to another. Studies show that the presence of a TE within or adjacent to a functional gene may alter its expression. TEs can also increase the rate of mutation and can even promote gross genetic rearrangements. Proper classification of identified jumping genes is therefore important for understanding their genetic and evolutionary effects. While computational methods have been developed that perform either binary or multi-label classification of TEs, few studies have focused on their hierarchical classification, and the existing methods have limited accuracy. In this study, we examine the performance of a variety of machine learning (ML) methods and propose a robust augmented stacking-based ML method, ClassifyTE, for the hierarchical classification of TEs with high accuracy.
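
A minimal sketch of a stacking-based classifier of the general kind ClassifyTE builds on, using scikit-learn; the TE feature extraction and the per-node application over the TE hierarchy are omitted, and the base learners are illustrative choices, not those of the thesis:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in feature matrix; real inputs would be sequence-derived features
X, y = make_classification(n_samples=500, n_features=40, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners feed out-of-fold predictions to a meta-learner
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))
```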
306

Agregace závislých rizik / Aggregation of dependent risks

Asipenka, Anna January 2019 (has links)
In this thesis we are interested in the calculation of economic capital for the total loss, which is the sum of partial dependent losses whose dependence structure is described by Archimedean and hierarchical Archimedean copulas. First, the concept of economic capital and the ways of aggregating it are introduced. Then the basic definitions and properties of copulas are listed, as well as the dependence measures. After that we work with the definition and properties of Archimedean copulas and their simulation, and we mention the most popular families of Archimedean copulas. Next, hierarchical Archimedean copulas are defined, together with an algorithm for sampling them. Finally, we present methods for estimating the parameters of copulas and a recursive algorithm for estimating the hierarchical Archimedean copula structure. In the last chapter we perform simulation studies of selected models using hierarchical Archimedean copulas.
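
As an illustration of Archimedean copula sampling, a sketch of the Marshall-Olkin algorithm for the Clayton family, whose generator is ψ(t) = (1 + t)^(-1/θ): draw a frailty V ~ Gamma(1/θ, 1), then set U_i = ψ(E_i/V) for independent standard exponentials E_i. The thesis's hierarchical sampling algorithm is not reproduced here.

```python
import numpy as np
from scipy.stats import kendalltau

def sample_clayton(n, d, theta, seed=0):
    """Sample n points from a d-dimensional Clayton copula (theta > 0)."""
    rng = np.random.default_rng(seed)
    V = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n, 1))  # frailty variable
    E = rng.exponential(size=(n, d))
    return (1.0 + E / V) ** (-1.0 / theta)  # generator applied componentwise

U = sample_clayton(n=10000, d=2, theta=2.0)
# Empirical Kendall's tau should be near theta / (theta + 2) = 0.5
tau, _ = kendalltau(U[:, 0], U[:, 1])
print(tau)
```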
307

Using collateral information in the estimation of sub-scores --- a fully Bayesian approach

Tao, Shuqin 01 July 2009 (has links)
Educators and administrators often use sub-scores derived from state accountability assessments to diagnose learning and instruction and to inform curriculum planning. However, observed sub-scores have several psychometric limitations, two of which were the focus of the present study: (1) limited reliability due to short test lengths, and (2) little distinct information in the sub-scores of most existing assessments. The present study evaluated the extent to which these limitations might be overcome by incorporating collateral information into sub-score estimation. The three sources of collateral information under investigation were (1) information from other sub-scores, (2) the schools that students attended, and (3) school-level scores on the same test taken by previous cohorts of students in each school. Kelley's and Shin's methods were implemented in a fully Bayesian framework and adapted to incorporate differing levels of collateral information. Results were evaluated in light of three comparison criteria: signal-to-noise ratio, standard error of estimate, and sub-score separation index. The data came from state accountability assessments. Consistent with the literature, using information from other sub-scores produced sub-scores with enhanced precision but reduced profile variability. This finding suggests that using collateral information internal to the test can enhance sub-score reliability, but at the expense of the distinctness of each individual sub-score. Using information indicating the schools that students attended led to a small gain in sub-score precision without losing sub-score distinctness. Furthermore, such information was found to have the potential to improve sub-score validity by addressing Simpson's paradox when sub-score correlations were not invariant across schools. Using previous-year school-level sub-score information was found to have the potential to enhance both precision and distinctness for school-level sub-scores, although not for student-level sub-scores. School-level sub-scores were found to exhibit satisfactory psychometric properties and thus have value in evaluating school curricular effectiveness. Issues concerning the validity, interpretability, and suitability of using such collateral information are discussed in the context of state accountability assessments.
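
For concreteness, Kelley's classical regressed-score estimate, the simplest of the methods adapted in the study, shrinks an observed sub-score toward the group mean in proportion to its reliability; a sketch with hypothetical numbers:

```python
def kelley_estimate(observed, reliability, group_mean):
    """True-score estimate: rho * x + (1 - rho) * group mean."""
    return reliability * observed + (1.0 - reliability) * group_mean

# A short, low-reliability sub-score is shrunk strongly toward the mean
print(kelley_estimate(observed=18.0, reliability=0.55, group_mean=12.0))  # 15.3
print(kelley_estimate(observed=18.0, reliability=0.90, group_mean=12.0))  # 17.4
```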
308

Renormalization group and phase transitions in spin, gauge, and QCD like theories

Liu, Yuzhi 01 July 2013 (has links)
In this thesis, we study several different renormalization group (RG) methods, including the conventional Wilson renormalization group, the Monte Carlo renormalization group (MCRG), the exact renormalization group (ERG, sometimes called the functional RG), and the tensor renormalization group (TRG). We use the two-dimensional nearest-neighbor Ising model to introduce many conventional yet important concepts. We then generalize the model to Dyson's hierarchical model (HM), which has rich phase properties depending on the strength of the interaction. The partition function zeros (Fisher zeros) of the HM in the complex temperature plane are calculated and their connection with the complex RG flows is discussed. The two-lattice matching method is used both to construct the complex RG flows and to calculate the discrete β functions. The motivation for calculating the discrete β functions for various HMs is to test the matching method and to show how physically relevant fixed points emerge from the complex domain. We notice that the critical exponents calculated from the HM depend on the blocking parameter b, which motivated us to analyze the connection between the discrete and continuous RG transformations. We demonstrate numerical calculations of the ERG equations, discuss the relation between the Litim and Wilson-Polchinski equations, and examine the effect of the cut-off functions in the ERG calculation. We then apply methods developed in the spin models to more complicated and more physically relevant lattice gauge theories and lattice quantum chromodynamics (QCD) like theories. The finite size scaling (FSS) technique is used to analyze the Binder cumulant of the SU(2) lattice gauge model. We calculate the critical exponents ν and ω of the model and show that it is in the same universality class as the three-dimensional Ising model. Motivated by walking technicolor theory, we study strongly coupled gauge theories with conformal or near-conformal properties. We compare the distribution of Fisher zeros for lattice gauge models with four and twelve light fermion flavors, and briefly discuss the scaling of the zeros and its connection with the infrared fixed point (IRFP) and the mass anomalous dimension. Conventional numerical simulations suffer from critical slowing down in the critical region, which prevents one from simulating large systems. In order to reach the continuum limit in lattice gauge theories, one needs either large volumes or clever extrapolations. The TRG is a new computational method that can handle exponentially large systems and works well even in the critical region. We formulate the TRG blocking procedure for the two-dimensional O(2) (or XY) and O(3) spin models and discuss possible applications and generalizations of the method to other spin and lattice gauge models. The thesis begins with an introduction to and historical background of the RG in general.
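
A sketch of the Fisher-zero idea on the simplest possible system: the partition function of a short periodic 1D Ising chain is written, by brute-force enumeration, as a polynomial in u = exp(-2βJ), and its complex zeros are found numerically. The thesis does this (with far more capable methods) for the hierarchical model and lattice gauge theories:

```python
import itertools
import numpy as np

L = 8  # periodic chain of 8 spins
# Count configurations by the number k of unsatisfied bonds, so that
# Z = exp(beta*J*L) * sum_k c_k * u^k with u = exp(-2*beta*J).
coeffs = np.zeros(L + 1)
for spins in itertools.product([-1, 1], repeat=L):
    k = sum(spins[i] != spins[(i + 1) % L] for i in range(L))
    coeffs[k] += 1

zeros = np.roots(coeffs[::-1])  # np.roots expects highest degree first
print("Fisher zeros in the complex u-plane:")
print(np.sort_complex(zeros))
```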
309

Object Recognition in Videos Utilizing Hierarchical and Temporal Objectness with Deep Neural Networks

Peng, Liang 01 May 2017 (has links)
This dissertation develops a novel system for object recognition in videos. The input to the system is a set of unconstrained videos containing a known set of objects; the output is the location and category of each object in each frame across all videos. Initially, a shot boundary detection algorithm is applied to the videos to divide them into multiple sequences separated by the identified shot boundaries. Since each of these sequences still contains moderate content variation, we further use a cost-optimization-based key frame extraction method to select key frames in each sequence and use these key frames to divide the videos into shorter sub-sequences with little content variation. Next, we learn object proposals on the first frame of each sub-sequence. Building upon state-of-the-art object detection algorithms, we develop a tree-based hierarchical model to improve object detection. Using the learned object proposals as the initial object positions in the first frame of each sub-sequence, we apply the SPOT tracker to track the object proposals and re-rank them using the proposed temporal objectness, obtaining object proposal tubes by removing unlikely objects. Finally, we employ a deep Convolutional Neural Network (CNN) to classify these tubes. Experiments show that the proposed system significantly improves the object detection rate of the learned proposals compared with several state-of-the-art object detectors. Due to this improvement in object detection, the proposed system also achieves higher mean average precision at the proposal classification stage than the state-of-the-art methods.
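
A sketch of the first stage described above: a simple shot-boundary detector that flags frames where the color-histogram distance to the previous frame spikes. The frames are synthesized here for self-containment (in practice they would be decoded from video, e.g. with OpenCV), and the threshold is an assumed parameter, not one from the dissertation:

```python
import numpy as np

def frame_hist(frame, bins=16):
    """Normalized intensity histogram of one frame."""
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def shot_boundaries(frames, threshold=0.5):
    cuts = []
    prev = frame_hist(frames[0])
    for t in range(1, len(frames)):
        cur = frame_hist(frames[t])
        if np.abs(cur - prev).sum() > threshold:  # L1 histogram distance
            cuts.append(t)
        prev = cur
    return cuts

rng = np.random.default_rng(3)
# Two synthetic "shots": ten dark frames, then ten bright frames
frames = ([rng.integers(0, 100, (48, 64)) for _ in range(10)]
          + [rng.integers(150, 256, (48, 64)) for _ in range(10)])
print("detected shot boundaries at frames:", shot_boundaries(frames))
```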
310

A Monte Carlo Study: The Consequences of the Misspecification of the Level-1 Error Structure

Petit-Bois, Merlande 01 January 2014 (has links)
Single-case interventions allow for the repeated measurement of a case or participant across multiple time points, to assess the treatment's effect on one specific case or participant. The basic interrupted time series design includes two phases: baseline and treatment. Raudenbush and Bryk (2002) demonstrated that a meta-analysis of large group designs can be seen as a special case of multi-level analysis with participants (level one) nested within studies (level two). Raw data from a set of single-case design studies have a similar structure. Van den Noortgate and Onghena (2003) illustrated the use of a two-level model to analyze data in primary single-case studies. In 2008, Van den Noortgate and Onghena further proposed that if raw data from several single-case designs are used in a meta-analysis, scores can vary at each of three levels: over occasions (level one), across participants from the same study (level two), and across studies (level three). The multi-level approach allows for a large degree of flexibility in modeling the data (Goldstein & Yang, 2000; Hox & de Leeuw, 1997). Researchers can make various methodological decisions when specifying the model to approximate the data. Those decisions are critical, since parameters can be biased if the statistical model is not correctly specified. The first of these decisions is how to model the level-one error structure: is it correlated or uncorrelated? Recently, investigation of Van den Noortgate and Onghena's (2008) three-level meta-analytic model has increased and shown promising results (Owens & Ferron, 2011; Ugille, Moeyaert, Beretvas, Ferron, & Van den Noortgate, 2012). These studies have shown that the fixed effects tend to be unbiased while the variance components have been problematic across a range of conditions. Based on a thorough literature review, no one has examined the model in relation to the use of fit indices or log-likelihood tests to select an appropriate level-one error structure. The purpose of the study was two-fold: (1) to determine the extent to which various fit indices can correctly identify the level-one covariance structure; and (2) to investigate the effect of various forms of misspecification of the level-one error structure when using a three-level meta-analytic single-case model. This study used Monte Carlo simulation methods to address these research questions. Multiple design, data, and analysis factors were manipulated in a 2×2×2×2×2×5×7 factorial design. Seven experimental variables were manipulated: (1) the number of primary studies per meta-analysis (10 and 30); (2) the number of participants per primary study (4 and 8); (3) the series length per participant (10 and 20); (4) the variances of the error terms (most of the variance at level one: [σ² = 1; Σ_u = 0.5, 0.05, 0.5, 0.05; Σ_v = 0.5, 0.05, 0.5, 0.05] versus most of the variance at the upper levels: [σ² = 1; Σ_u = 2, 0.2, 2, 0.2; Σ_v = 2, 0.2, 2, 0.2]); (5) the levels of the fixed effects (0 and 2, corresponding to the shift in level; 0 and 0.2, corresponding to the shift in slope); (6) the covariance structure used for data generation (ID, AR(1), and ARMA(1,1)); and (7) the form of model specification (i.e., ID, AR(1), ARMA(1,1), and the error structure selected by AIC, AICC, BIC, and the LRT).

The results of this study found that the fixed effects tended to be mostly unbiased; however, the variance components were extremely biased under particular design factors. The study also concluded that the use of fit indices to select the correct level-one structure was appropriate for certain error structures, with the accuracy of the fit indices tending to increase for the simpler level-one error structures. There are multiple implications for the applied single-case researcher, the meta-analyst, and the methodologist. Future research includes investigating different estimation methods, such as Bayesian approaches, to improve the estimates of the variance components, and coupling multiple violations of the error structures, such as non-normality at levels two and three.
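
A sketch of one cell of such a simulation, collapsed to a single series purely to illustrate the selection step: generate data with AR(1) level-one errors, fit the competing error structures, and let AIC pick the winner:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
n, phi = 200, 0.6
e = np.zeros(n)
for t in range(1, n):              # AR(1) level-one errors
    e[t] = phi * e[t - 1] + rng.normal()
y = 2.0 + e                        # constant mean plus autocorrelated noise

candidates = {"ID": (0, 0, 0), "AR(1)": (1, 0, 0), "ARMA(1,1)": (1, 0, 1)}
aic = {name: ARIMA(y, order=o).fit().aic for name, o in candidates.items()}
print(min(aic, key=aic.get), aic)  # AR(1) should usually be selected
```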
