221

Healthy ageing and binding features in working memory : measurement issues and potential boundary conditions

Rhodes, Stephen January 2016 (has links)
Accurate memory for an object or event requires that multiple diverse features are bound together and retained as an integrated representation. There is overwhelming evidence that healthy ageing is accompanied by an associative deficit, in that older adults struggle to remember relations between items above any deficit exhibited in remembering the items themselves. However, the effect of age on the ability to bind features within novel objects (for example, their colour and shape) and retain correct conjunctions over brief intervals is less clear. The relatively small body of work that exists on this topic to date has suggested no additional working memory impairment for conjunctions of features beyond a general age-related impairment in the ability to temporarily retain features. This is in stark contrast to the feature binding deficit observed in the early stages of Alzheimer’s disease. Nevertheless, there have been reports of age-related feature binding deficits in working memory under specific circumstances. Thus, a major focus of the present work was to assess these potential boundary conditions. The change detection paradigm was used throughout this work to examine age differences in visual working memory. Despite the popularity of this task, important issues regarding the way in which working memory is probed have been left unaddressed. Chapter 2 reports three experiments with younger adults comparing two methods of testing recognition memory for features or conjunctions. Contrary to an influential study in the field, it appears that processing multiple items at test does not differentially impact on participants’ ability to detect binding changes. Chapters 3, 4, and 5 report a series of experiments motivated by previous findings of specific age-related feature binding deficits. 
These experiments, improving on previous methodology where possible, demonstrate that increasing the amount of time for which items can be studied (Chapter 3) or mixing feature-conjunction changes in trial-blocks with more salient changes to individual features (Chapters 4 and 5) does not differentially impact on healthy older adults’ ability to detect binding changes. Rather, the argument is made that specific procedural aspects of previous work led to the appearance of deficits that do not generalise. Chapter 5 also addresses the suggestion that healthy ageing specifically affects the retention of item-location conjunctions. The existing evidence for this claim is reviewed and found wanting, and new data are presented that provide evidence against it. To follow up on the absence of a deficit for simple feature conjunctions, Chapter 6 contrasts two theoretically distinct binding mechanisms: one for features intrinsic to an object and another for extrinsic, contextual features. Preliminary evidence is reported that the cost associated with retaining pairings of features is specifically pronounced for older adults when the features are extrinsic to each other. In an attempt to separate out the contributions of working memory capacity and lapses of attention to age differences in overall task performance, Chapter 7 reports the results of an exploratory analysis using processing models developed in Chapter 2. Analysis of two data sets from Chapters 4 and 5 demonstrates that lapses of attention make an important contribution to differences in change detection performance. Chapter 8 returns to the issue of measurement in assessing the evidence for specific age-related deficits. Simulations demonstrate that the choice of outcome measure can greatly affect conclusions regarding age-group by condition interactions, suggesting that some previous findings of such interactions in the literature may have been more apparent than real. 
In closing, the General Discussion relates the present work to current theory regarding feature binding in visual working memory and to the wider literature on binding deficits in healthy and pathological ageing.
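The measurement point raised in Chapter 8 can be illustrated with a small sketch. The two formulas below are standard change-detection capacity estimates; the assumption that measures like these are among those compared is mine, not stated in the abstract.

```python
def cowan_k(hits, false_alarms, set_size):
    """Capacity estimate commonly used for single-probe displays (Cowan's K)."""
    return set_size * (hits - false_alarms)

def pashler_k(hits, false_alarms, set_size):
    """Capacity estimate commonly used for whole-display tests (Pashler's K)."""
    return set_size * (hits - false_alarms) / (1.0 - false_alarms)

# Identical hit and false-alarm rates yield different capacity estimates,
# so an age-group by condition interaction can appear under one measure
# and vanish under another.
h, fa, n = 0.80, 0.20, 6
print(cowan_k(h, fa, n))    # about 3.6
print(pashler_k(h, fa, n))  # about 4.5
```

The gap between the two estimates grows with the false-alarm rate, which is one way a choice of outcome measure can manufacture an apparent interaction.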
222

Using functional annotation to characterize genome-wide association results

Fisher, Virginia Applegate 11 December 2018 (has links)
Genome-wide association studies (GWAS) have successfully identified thousands of variants robustly associated with hundreds of complex traits, but the biological mechanisms driving these results remain elusive. Functional annotation, describing the roles of known genes and regulatory elements, provides additional information about associated variants. This dissertation explores the potential of these annotations to explain the biology behind observed GWAS results. The first project develops a random-effects approach to genetic fine mapping of trait-associated loci. Functional annotation and estimates of the enrichment of genetic effects in each annotation category are integrated with linkage disequilibrium (LD) within each locus and GWAS summary statistics to prioritize variants with plausible functionality. Applications of this method to simulated and real data show good performance in a wider range of scenarios relative to previous approaches. The second project focuses on the estimation of enrichment by annotation categories. I derive the distribution of GWAS summary statistics as a function of annotations and LD structure and perform maximum likelihood estimation of enrichment coefficients in two simulated scenarios. The resulting estimates are less variable than previous methods, but the asymptotic theory of standard errors is often not applicable due to non-convexity of the likelihood function. In the third project, I investigate the problem of selecting an optimal set of tissue-specific annotations with greatest relevance to a trait of interest. I consider three selection criteria defined in terms of the mutual information between functional annotations and GWAS summary statistics. 
These algorithms correctly identify enriched categories in simulated data, but in the application to a GWAS of BMI the penalty for redundant features outweighs the modest relationships with the outcome, yielding empty selected feature sets; this reflects the weaker overall association and the high similarity between tissue-specific regulatory features. All three projects require little in the way of prior hypotheses regarding the mechanism of genetic effects. These data-driven approaches have the potential to illuminate unanticipated biological relationships, but they are also limited by the high dimensionality of the data relative to the moderate strength of the signals under investigation. These approaches advance the set of tools available to researchers to draw biological insights from GWAS results.
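As a small illustration of the mutual-information criterion used in the third project (a generic sketch; the thesis's actual selection algorithms are not specified here), the empirical mutual information between a binary annotation track and a binarised association indicator can be computed directly:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), count in pxy.items():
        p_xy = count / n
        mi += p_xy * log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy data: the annotation perfectly predicts which variants are "significant",
# giving the maximum of 1 bit for a balanced binary variable.
annot = [1, 1, 1, 0, 0, 0]
sig   = [1, 1, 1, 0, 0, 0]
print(mutual_information(annot, sig))  # 1.0
```

A selection criterion of this kind rewards annotations informative about the summary statistics, while a redundancy penalty (as in the BMI application above) subtracts information already carried by annotations selected earlier.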
223

Mining Data with Feature Interactions

January 2018 (has links)
Models using feature interactions have been applied successfully in many areas such as biomedical analysis and recommender systems. The popularity of using feature interactions mainly lies in the facts that (1) they are able to capture nonlinearity in the data, in contrast with linear effects, and (2) they enjoy great interpretability. In this thesis, I propose a series of formulations using feature interactions for real-world problems and develop efficient algorithms for solving them. Specifically, I first propose to directly solve the non-convex formulation of the weak hierarchical Lasso, which imposes weak hierarchy on individual features and interactions but can only be approximately solved by a convex relaxation in existing studies. I further propose to use the non-convex weak hierarchical Lasso formulation for hypothesis testing on the interaction features with hierarchical assumptions. Secondly, I propose a type of bi-linear model that takes advantage of interactions of features for drug discovery problems where specific drug-drug pairs or drug-disease pairs are of interest. These models are learned by maximizing the number of positive data pairs that rank above the average score of unlabeled data pairs. Then I generalize the method to the case of using the top-ranked unlabeled data pairs for representative construction and derive an efficient algorithm for the extended formulation. Last but not least, motivated by a special form of bi-linear models, I propose a framework that enables simultaneously subgrouping data points and building specific models on the subgroups for learning on massive and heterogeneous datasets. Experiments on synthetic and real datasets are conducted to demonstrate the effectiveness or efficiency of the proposed methods. (Doctoral Dissertation, Computer Science, 2018)
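To make the weak-hierarchy idea concrete: an interaction coefficient may be nonzero only if at least one of its parent main effects is. The snippet below is a minimal, hypothetical illustration of that constraint applied as a post-hoc filter; it is not the thesis's non-convex optimisation algorithm.

```python
def enforce_weak_hierarchy(beta, theta, tol=1e-12):
    """Zero out any interaction theta[j][k] whose parent main effects
    beta[j] and beta[k] are both (numerically) zero."""
    p = len(beta)
    out = [row[:] for row in theta]
    for j in range(p):
        for k in range(p):
            if abs(beta[j]) <= tol and abs(beta[k]) <= tol:
                out[j][k] = 0.0  # no active parent: drop the interaction
    return out

beta = [1.5, 0.0, 0.0]            # only feature 0 has a main effect
theta = [[0.0, 0.4, 0.2],
         [0.4, 0.0, 0.7],         # theta[1][2] has no active parent
         [0.2, 0.7, 0.0]]
print(enforce_weak_hierarchy(beta, theta))
```

Interactions involving feature 0 survive because its main effect is active, while the 1-2 interaction is removed; a strong-hierarchy variant would instead require both parents to be active.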
224

Visualizing temporality in music: music perception – feature extraction

Hamidi Ghalehjegh, Nima 01 August 2017 (has links)
Recently, there have been efforts to design more efficient ways to internalize music by applying the disciplines of cognition, psychology, temporality, aesthetics, and philosophy. Bringing together the fields of art and science, computational techniques can also be applied to musical analysis. Although a wide range of research projects have been conducted, the automation of music analysis remains an emerging field. Importantly, patterns are revealed by using automated tools to analyze core musical elements created from melodies, harmonies, and rhythms, high-level features that are perceivable by the human ear. For music to be captured and successfully analyzed by a computer, however, one needs to extract certain information found in the lower-level features of amplitude, frequency, and duration. Moreover, while the identification of harmonic progressions, melodic contour, musical patterns, and pitch quantification are crucial factors in traditional music analysis, these alone are not sufficient. Visual representations are useful tools that reflect the form and structure of non-conventional musical repertoire. Because I regard the fluidity of music and visual shape as strongly interactive, the ultimate goal of this thesis is to construct a practical tool that prepares the visual material used for musical composition. By utilizing concepts of time, computation, and composition, this tool effectively integrates computer science, signal processing, and music perception. This is achieved by presenting two concepts, one abstract and one mathematical, that provide materials leading to the original composition. To extract the desired visualization, I propose a fully automated tool for musical analysis that is grounded in both the mid-level elements of loudness, density, and range, and the low-level features of frequency and duration. 
As evidenced by my sinfonietta, Equilibrium, this tool, capable of rapidly analyzing a variety of musical examples such as instrumental repertoire, electro-acoustic music, improvisation and folk music, is highly beneficial to my proposed compositional procedure.
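As a rough illustration of the low-level/mid-level distinction above, the sketch below computes a frame-wise RMS amplitude (a simple stand-in for a loudness feature) and a pitch-range feature in pure Python. The function names and framing parameters are my own, not those of the thesis's tool.

```python
import math

def rms_loudness(signal, frame_size=1024):
    """Frame-wise RMS amplitude: a simple mid-level loudness proxy
    computed from the low-level amplitude samples."""
    frames = [signal[i:i + frame_size]
              for i in range(0, len(signal) - frame_size + 1, frame_size)]
    return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames]

def pitch_range(freqs):
    """Range of detected fundamental frequencies: a simple register feature."""
    return max(freqs) - min(freqs)

# A full-amplitude 440 Hz sine sampled at 8 kHz: the RMS of a sine wave
# is close to 1/sqrt(2) ~ 0.707 in every frame.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(2048)]
print(rms_loudness(tone, 1024))
```

Features of this kind, tracked over time, are exactly the sort of material that can drive the visual shapes the thesis proposes as compositional input.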
225

TOWARD A TWO-STAGE MODEL OF FREE CATEGORIZATION

Smith, Gregory J 01 September 2015 (has links)
This research examines how comparison of objects underlies free categorization, an essential component of human cognition. Previous results using our binomial labeling task have shown that classification probabilities are affected in a graded manner as a function of similarity, i.e., the number of features shared by two objects. In a similarity rating task, people also rated objects sharing more features as more similar. However, the effect of matching features was approximately linear in the similarity task, but superadditive (exponential) in the labeling task. We hypothesize that this difference is due to the fact that people must select specific objects to compare prior to deciding whether to put them in the same category in the labeling task, while they were given specific pairs to compare in the rating task. Thus, the number of features shared by two objects could affect both stages (selection and comparison) in the labeling task, which might explain their superadditive effect, whereas it affected only the latter comparison stage in the similarity rating task. In this experiment, participants saw visual displays consisting of 16 objects from three novel superordinate artificial categories, and were asked to generate binomial (letter-number) labels for each object to indicate their superordinate and subordinate category membership. Only one object could be viewed at a time, and these objects could be viewed in any order. This made it possible to record which objects people examine when labeling a given object, which in turn permits separate assessment of stage 1 (selection) versus stage 2 (comparison/decision). Our primary objective in this experiment was to determine whether the increase in category labeling probabilities as a function of level of match (similarity) can be explained by increased sampling alone (stage 1 model), an increased perception of similarity following sampling (stage 2 model), or some combination (mixed model). 
The results were consistent with earlier studies in showing that the number of matching discrete features shared by two objects affected the probability of same-category label assignment. However, there was no effect of the level of match on the probability of visiting the first matching object while labeling the second. This suggests that the labeling effect is not due to differences in the likelihood of comparing matching objects (stage 1) as a function of the level of match. Thus, the present data provide support for a stage 2 only model, in which the evaluation of similarity is the primary component underlying the level of match effect on free categorization.
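The two-stage logic can be made concrete with a toy model (the parameter values here are entirely hypothetical): if both the selection and comparison stages grew roughly linearly with the number of matching features m, their product would grow superadditively — which is why separating the stages matters, even though the data above ultimately favoured a stage-2-only account.

```python
def p_select(m, base=0.1, slope=0.2):
    """Stage 1: probability of sampling the matching object for comparison."""
    return min(1.0, base + slope * m)

def p_compare(m, base=0.1, slope=0.2):
    """Stage 2: probability of judging the pair similar enough to co-label."""
    return min(1.0, base + slope * m)

def p_same_label(m):
    """Mixed model: both stages must succeed, so probabilities multiply."""
    return p_select(m) * p_compare(m)

# Two linear stages compose into a quadratic (superadditive) labeling curve.
for m in range(4):
    print(m, round(p_same_label(m), 3))
```

Under a stage-2-only model, p_select would be flat in m and the labeling curve would inherit only the shape of p_compare.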
226

Multistructure segmentation of multimodal brain images using artificial neural networks

Kim, Eun Young 01 December 2009 (has links)
A method for simultaneously segmenting multiple anatomical brain structures from multi-modal MR images has been developed. An artificial neural network (ANN) was trained from a set of feature vectors created by a combination of high-resolution registration methods, atlas-based spatial probability distributions, and a training set of 16 expert-traced data sets. The feature vectors were adapted to increase the performance of ANN segmentation: 1) a modified spatial location for structural symmetry of the human brain, 2) neighbors along the priors' descent for directional consistency, and 3) candidate vectors based on the priors for the segmentation of multiple structures. The trained neural network was then applied to 8 data sets, and the results were compared with expertly traced structures for validation purposes. On several reliability metrics, including relative overlap, similarity index, and intraclass correlation against the manual traces, the ANN-generated segmentations performed similarly to or better than previously developed methods. The ANN provides between-subject consistency and, compared with manual labor, a time efficiency that allows it to be used for very large studies.
227

Feature Screening of Ultrahigh Dimensional Feature Spaces With Applications in Interaction Screening

Reese, Randall D. 01 August 2018 (has links)
Data for which the number of predictors exponentially exceeds the number of observations is becoming increasingly prevalent in fields such as bioinformatics, medical imaging, computer vision, and social network analysis. One of the leading questions statisticians must answer when confronted with such “big data” is how to reduce a set of exponentially many predictors down to a set of a mere few predictors which have a truly causative effect on the response being modelled. This process is often referred to as feature screening. In this work we propose three new methods for feature screening. The first method we propose (TC-SIS) is specifically intended for use with data having both categorical response and predictors. The second method we propose (JCIS) is meant for feature screening for interactions between predictors. JCIS is rare among interaction screening methods in that it does not require first finding a set of causative main effects before screening for interactive effects. Our final method (GenCorr) is intended for use with data having a multivariate response. GenCorr is the only method for multivariate screening which can screen for both causative main effects and causative interactions. Each of these aforementioned methods will be shown to possess both theoretical robustness as well as empirical agility.
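As a generic illustration of marginal feature screening (in the spirit of sure independence screening; this is not the TC-SIS, JCIS, or GenCorr procedure itself), one can rank predictors by the absolute value of their marginal correlation with the response and retain only the top few:

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def screen(features, y, keep):
    """Keep the indices of the `keep` features with the largest
    absolute marginal correlation with the response y."""
    scores = [(abs(pearson(col, y)), j) for j, col in enumerate(features)]
    scores.sort(reverse=True)
    return sorted(j for _, j in scores[:keep])

# Feature 0 drives the response; features 1 and 2 are noise.
X = [[1, 2, 3, 4, 5],
     [5, 1, 4, 2, 3],
     [2, 2, 1, 3, 2]]
y = [2.1, 3.9, 6.2, 8.0, 9.9]
print(screen(X, y, keep=1))  # [0]
```

Interaction screening, as in JCIS, would instead score candidate pairs of features jointly, which is what lets it recover interactions whose main effects are individually weak.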
228

Scavenger: A Junk Mail Classification Program

Malkhare, Rohan V 20 January 2003 (has links)
The problem of junk mail, also called spam, has reached epic proportions and various efforts are underway to fight spam. Junk mail classification using machine learning techniques is a key method to fight spam. We have devised a machine learning algorithm where features are created from individual sentences in the subject and body of a message by forming all possible word-pairings from a sentence. Weights are assigned to the features based on the strength of their predictive capabilities for spam/legitimate determination. The predictive capabilities are estimated by the frequency of occurrence of the feature in spam/legitimate collections as well as by application of heuristic rules. During classification, total spam and legitimate evidence in the message is obtained by summing up the weights of extracted features of each class, and the message is classified into whichever class accumulates the greater sum. We compared the algorithm against the popular naïve Bayes algorithm (in [8]) and found that its performance exceeded that of the naïve Bayes algorithm both in terms of catching spam and in reducing false positives.
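The classification rule described above can be sketched as follows. The weight values are toy numbers: the thesis estimates them from frequencies in spam/legitimate collections plus heuristic rules, which this sketch does not reproduce.

```python
from itertools import combinations

def pair_features(sentence):
    """All unordered word pairings from one sentence, as in the abstract."""
    words = sorted(set(sentence.lower().split()))
    return set(combinations(words, 2))

def classify(message, spam_weights, ham_weights):
    """Sum each class's feature weights over all sentences;
    the class with the larger total evidence wins."""
    spam = ham = 0.0
    for sentence in message.split('.'):
        for feature in pair_features(sentence):
            spam += spam_weights.get(feature, 0.0)
            ham += ham_weights.get(feature, 0.0)
    return 'spam' if spam > ham else 'legitimate'

# Toy weight tables keyed by alphabetically ordered word pairs.
spam_w = {('free', 'winner'): 2.0, ('click', 'now'): 1.5}
ham_w = {('meeting', 'tomorrow'): 2.0}
print(classify("Free winner. Click now", spam_w, ham_w))  # spam
```

Pairing words within a sentence, rather than using single words, is what lets the feature capture short-range context such as "free" occurring near "winner".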
229

Off-line signature verification

Larkins, Robert L. January 2009 (has links)
In today’s society signatures are the most accepted form of identity verification. However, they have the unfortunate side-effect of being easily abused by those who would feign the identification or intent of an individual. This thesis implements and tests current approaches to off-line signature verification with the goal of determining the most beneficial techniques that are available. This investigation also introduces novel techniques that are shown to significantly boost the achieved classification accuracy for both person-dependent (one-class training) and person-independent (two-class training) signature verification learning strategies. The findings presented in this thesis show that many common techniques do not always give any significant advantage, and in some cases they actually detract from the classification accuracy. Using the techniques that are proven to be most beneficial, an effective approach to signature verification is constructed, which achieves classification accuracies of approximately 90% and 91% on the standard CEDAR and GPDS signature datasets, respectively. These results are significantly better than the majority of results that have been previously published. Additionally, this approach is shown to remain relatively stable when a minimal number of training signatures are used, demonstrating feasibility for real-world situations.
230

Integrated feature, neighbourhood, and model optimization for personalised modelling and knowledge discovery

Liang, Wen January 2009 (has links)
“Machine learning is the process of discovering and interpreting meaningful information, such as new correlations, patterns and trends by sifting through large amounts of data stored in repositories, using pattern recognition technologies as well as statistical and mathematical techniques” (Larose, 2005). From my understanding, machine learning is a process of using different analysis techniques to observe previously unknown, potentially meaningful information, and discover strong patterns and relationships from a large dataset. Professor Kasabov (2007b) classified computational models into three categories (i.e. global, local, and personalised), which have been widely used in the areas of data analysis and decision support in general, and in the areas of medicine and bioinformatics in particular. Most recently, the concept of personalised modelling has been widely applied to various disciplines such as personalised medicine and personalised drug design for known diseases (e.g. cancer, diabetes, brain disease, etc.) as well as for other modelling problems in ecology, business, finance, crime prevention, and so on. The philosophy behind the personalised modelling approach is that every person is different from others, thus he/she will benefit from having a personalised model and treatment. However, personalised modelling is not without issues, such as defining the correct number of neighbours or defining an appropriate number of features. As a result, the principal goal of this research is to study and address these issues and to create a novel framework and system for personalised modelling. The framework would allow users to select and optimise the most important features and nearest neighbours for a new input sample in relation to a certain problem, based on a weighted variable distance measure, in order to obtain more precise prognostic accuracy and personalised knowledge, when compared with global modelling and local modelling approaches.
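The neighbourhood-selection idea behind personalised modelling can be sketched with a weighted variable distance measure (a generic illustration, not the thesis's optimisation framework): changing the feature weights changes which training samples count as a new sample's nearest neighbours.

```python
import math

def weighted_distance(a, b, w):
    """Weighted Euclidean distance; w encodes per-feature importance."""
    return math.sqrt(sum(wi * (ai - bi) ** 2
                         for ai, bi, wi in zip(a, b, w)))

def personalised_neighbourhood(sample, data, w, k):
    """Indices of the k training samples closest to `sample` under weights w."""
    ranked = sorted(range(len(data)),
                    key=lambda i: weighted_distance(sample, data[i], w))
    return ranked[:k]

data = [[1.0, 10.0],   # close on both features
        [1.1, 0.0],    # close on feature 0 only
        [5.0, 10.1]]   # close on feature 1 only
sample = [1.0, 10.0]
# Equal weights vs. ignoring feature 1 select different neighbourhoods.
print(personalised_neighbourhood(sample, data, [1.0, 1.0], k=2))
print(personalised_neighbourhood(sample, data, [1.0, 0.0], k=2))
```

In a full personalised-modelling system, both the weights and the neighbourhood size k would themselves be optimised per input sample, which is the integration the thesis proposes.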
