  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Because I love playing my instrument : Young musicians' internalised motivation and self-regulated practising behaviour

Renwick, James Michael, English, Media, & Performing Arts, Faculty of Arts & Social Sciences, UNSW January 2008 (has links)
Self-regulated learning theory explains how it is not only the amount of time musicians spend practising that affects achievement, but also the nature of the strategies employed. Because practice is self-directed, motivational effects on its efficiency are especially salient. One construct that has received little attention in relation to practising is self-determination theory, which interprets motivation as lying along a continuum of perceived autonomy. This mixed-methods study investigated links between motivational beliefs and self-regulated practising behaviour through a two-phase design. In Phase One, 677 music examination candidates aged 8-19 completed a questionnaire consisting of items addressing practising behaviour and perceived musical competence; in addition, the Self-Regulation Questionnaire (SRQ; Ryan & Connell, 1989) was adapted to explore intrinsic-extrinsic motives for learning an instrument. Factor analysis of the SRQ revealed five dimensions with partial correspondence to earlier research: internal, external, social, shame-related, and exam-related motives. Three practice behaviour factors consistent with self-regulated learning theory emerged: effort management, monitoring, and strategy use. Results of structural equation modelling showed that internal motivation accounted best for variance in these three types of practising behaviour, with a small added effect from competence beliefs and exam-related motivation. Phase Two consisted of observational case studies of four of the questionnaire participants preparing for their subsequent annual examination. Adolescent, intermediate-level musicians were recorded while practising at home; immediately afterwards, they watched the videotape and verbalised any recollected thoughts. The procedure concluded with a semi-structured interview and debriefing. 
The videotapes were analysed with The Observer Video-Pro and combined with verbal data; emerging themes were then compared with issues arising from the interviews. The observational aspect of the case studies largely confirmed the importance of three cyclical self-regulatory processes emerging from Phase One: (a) effort management and motivational self-regulation, (b) the role of self-monitoring of accuracy, and (c) the use of corrective strategies, such as structured repetition, task simplification, and vocalisation. The mixture of quantitative and qualitative methods used in the study has uncovered a rich body of information that begins to clarify the complex motivational and behavioural nature of young people practising a musical instrument.
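The factor-analytic step in Phase One can be illustrated with a small sketch. This is not the study's instrument or data: the two latent motives, the six-item structure, and the loadings below are invented to mimic the shape of the analysis, here using scikit-learn's `FactorAnalysis`.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 677 questionnaire respondents with two latent motives
# (say, "internal" and "exam-related"); each item loads on one motive.
n = 677
internal = rng.normal(size=n)
exam = rng.normal(size=n)
noise = lambda: rng.normal(scale=0.5, size=n)
items = np.column_stack([
    internal + noise(), internal + noise(), internal + noise(),  # internal items
    exam + noise(), exam + noise(), exam + noise(),              # exam-related items
])

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(items)   # per-respondent factor scores
loadings = fa.components_          # (2 factors x 6 items)
print(np.round(np.abs(loadings), 2))
```

With clean simulated structure the recovered loadings separate the two item groups; real questionnaire data, as the abstract notes, yields a messier multi-factor solution.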
102

The development of enhanced information retrieval strategies in undergraduates through the application of learning theory: an experimental study

Macpherson, Karen, n/a January 2002 (has links)
In this thesis, teaching and learning issues involved in end-user information retrieval from electronic databases are examined. A two-stage model of the information retrieval process, based on information processing theory, is proposed; and a framework for the teaching of information literacy is developed. The efficacy of cognitive psychology as a theoretical framework that enhances the understanding of a number of information retrieval issues is discussed. These issues include: teaching strategies that can assist the development of conceptual knowledge of the information retrieval process; individual differences affecting information retrieval performance, particularly problem-solving ability; and expert and novice differences in search performance. The researcher investigated the impact of concept-based instruction on the development of information retrieval skills through a two-stage experimental study conducted with undergraduate students at the University of Canberra, Australia. Phase 1 was conducted with 254 first-year undergraduates in 1997, with a 40-minute concept-based teaching module as the independent variable. A number of research questions were proposed: 1. Will type of instruction influence acquisition of knowledge of electronic database searching? 2. Will type of instruction influence information retrieval effectiveness? 3. Are problem-solving ability and information retrieval effectiveness related? 4. Are problem-solving ability and cognitive maturity related? 5. Are there any differences in the search behaviour of more effective and less effective searchers? Subjects completed a pre-test that measured knowledge of electronic databases and problem-solving ability, and a post-test that measured changes in these abilities. Subjects in the experimental treatment were taught the 40-minute concept-based module, which incorporated teaching strategies grounded in learning theory. 
The strategies included: the use of analogy; modelling; and the introduction of complexity. The aims of the module were to foster the development of a realistic concept of the information retrieval process, and to provide a problem-solving heuristic to guide subjects in their search strategy formulation. All subjects completed two post-tests: a survey that measured knowledge of search terminology and strategies, and an information retrieval assignment that measured effectiveness of search design and execution. Results suggested that using a concept-based approach is significantly more effective than using a traditional, skills-demonstration approach in the teaching of information retrieval, both in terms of increasing knowledge of the search process and in terms of improving search outcomes. Further, results suggested that search strategy formulation is significantly correlated with electronic database knowledge and problem-solving ability, and that problem-solving ability and level of cognitive maturity may be related. Results supported the two-stage model of the information retrieval process suggested by the researcher as one possible construct of the thinking processes underlying information retrieval. These findings led to the implementation of Phase 2 of the research in 1999. Subjects were 68 second-year undergraduate students at the University of Canberra. In this phase, concept-based teaching techniques were used to develop four modules covering a range of information literacy skills, including: critical thinking; information retrieval strategies; evaluation of sources; and determining relevance of articles. Results confirmed that subjects taught by methods based on learning theory paradigms (the experimental treatment group) were better able to design effective searches than subjects who did not receive such instruction (the control treatment group). 
Further, results suggested that these teaching methods encouraged experimental group subjects to locate material from more credible sources than did control group subjects. These findings are of particular significance, given the increasing use of the unregulated internet environment as an information source. Taking into account the literature reviewed and the results of Phases 1 and 2, a model of the information retrieval process is proposed. Finally, recognising the central importance of the acquisition of information literacy to student success at university, and to productive membership of the information society, a detailed framework for the teaching of information literacy in higher education is suggested.
103

Spectrum Sharing in Self-Configuring Decentralised Networks: A Game-Theoretic Approach (original title: Le Partage du Spectre dans les Réseaux Décentralisés Auto-Configurables : Une approche par la Théorie des Jeux)

Perlaza, Samir 08 July 2011 (has links) (PDF)
The work in this thesis falls within the theme of signal processing for distributed communication networks, where the network is distributed in the sense of decision-making. In this setting, the central problem we studied is the following: how should a terminal with access to several communication channels autonomously split its transmit power across those channels, and adapt that split over time as communication conditions vary? This is the problem of adaptive, distributed resource allocation. We developed four lines of work, each leading to original answers to this problem; the strong connections between them are explained in the thesis. The first line of work was opportunistic interference alignment. A reference scenario is one where two transmitter-receiver pairs communicate while interfering with each other (on the same band, at the same time, in the same place), all four terminals are equipped with multiple antennas, and one transmitter is constrained to cause no (or little) interference to the other (the so-called MIMO interference channel). We designed a multi-antenna transmission technique that exploits the following key observation, never exploited before: even when a transmitter is selfish with respect to its individual performance, it leaves spatial resources (in a signal space that we identified) vacant for the other transmitter. The throughput gain over the best existing algorithms was quantified using random matrix theory and Monte Carlo simulations. These results are particularly relevant to cognitive radio in dense environments. Second, we assumed that all transmitters in a network are free to use their resources selfishly. 
Here the resources are frequency channels and the individual performance metric is throughput. This problem can be modelled as a game whose players are the transmitters. One of our contributions was to show that this game is a potential game, which is fundamental for the convergence of distributed algorithms and the existence of Nash equilibria. We also demonstrated a Braess-type paradox: enlarging a player's optimisation space can degrade both individual and global performance. This has an immediate practical consequence: it can be beneficial to restrict the number of usable frequency channels in a distributed interference network. In the previous game, we observed that distributed resource-allocation algorithms (typically reinforcement-learning algorithms) require a large number of iterations to converge to a stable state such as a Nash equilibrium. We therefore proposed a new solution concept for a game, the satisfaction equilibrium: players do not change their action, even if it does not maximise their payoff, provided a minimum performance level is attained. We developed a methodology for studying this solution (existence, uniqueness, convergence, ...). Another contribution was a set of learning algorithms that converge to this solution in finite (and generically short) time. Extensive numerical results in scenarios specified by Orange confirmed the relevance of this new approach. The fourth line of work was the design of new learning algorithms that converge to solutions such as logit equilibria, epsilon-equilibria, or Nash equilibria. 
Our contribution was to show how to modify existing algorithms so that they avoid cycling and converge to an equilibrium preselected at the start of the dynamics. A key idea was to introduce a learning dynamic for the performance metric, coupled with the main dynamic that governs the evolution of the probability distribution over a player's possible actions. The informational assumptions of this work are entirely realistic for practical terminals. A possible route is shown for improving the efficiency of the convergence points, which remains an open problem in this field.
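The channel-selection setting of the second line of work can be sketched as a toy potential game. The two-player setup, rate model, and SNR value below are illustrative assumptions, not the thesis's system model; the point is that best-response dynamics settles, in finitely many steps, on a Nash equilibrium where the players separate in frequency.

```python
import math

# Two transmitters each choose one of two frequency channels.
# Illustrative rate model: log2(1 + SNR / (1 + interference)),
# with full interference only when both share a channel.
SNR = 10.0

def rate(my_ch, other_ch):
    interference = SNR if my_ch == other_ch else 0.0
    return math.log2(1 + SNR / (1 + interference))

def best_response(other_ch):
    return max((0, 1), key=lambda ch: rate(ch, other_ch))

# Best-response dynamics: in a potential game this converges to a
# Nash equilibrium rather than cycling.
a, b = 0, 0  # both start on channel 0
for _ in range(10):
    a = best_response(b)
    b = best_response(a)
print(a, b)  # the players end up on different channels
```

At the fixed point neither player can improve its rate unilaterally, which is exactly the Nash condition the abstract refers to.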
104

Neural Networks

Jordan, Michael I., Bishop, Christopher M. 13 March 1996 (has links)
We present an overview of current research on artificial neural networks, emphasizing a statistical perspective. We view neural networks as parameterized graphs that make probabilistic assumptions about data, and view learning algorithms as methods for finding parameter values that look probable in the light of the data. We discuss basic issues in representation and learning, and treat some of the practical issues that arise in fitting networks to data. We also discuss links between neural networks and the general formalism of graphical models.
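The statistical view of neural networks as parameterized probabilistic models can be made concrete with the smallest case: logistic regression as a one-layer network, trained by maximum likelihood via gradient ascent. The data, weights, and learning rate below are arbitrary illustrations, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# A one-layer "network" models p(y=1|x) = sigmoid(w.x);
# learning = finding w that makes the observed labels probable.
X = rng.normal(size=(200, 2))
true_w = np.array([2.0, -1.0])
y = (1 / (1 + np.exp(-(X @ true_w))) > rng.uniform(size=200)).astype(float)

w = np.zeros(2)
lr = 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))     # predicted probabilities
    w += lr * X.T @ (y - p) / len(y)   # gradient of the log-likelihood

acc = np.mean((1 / (1 + np.exp(-(X @ w))) > 0.5) == (y == 1))
print(acc)
```

Deeper networks generalize this picture: more parameters, the same principle of fitting a probabilistic model to data.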
105

A Note on Support Vector Machines Degeneracy

Rifkin, Ryan, Pontil, Massimiliano, Verri, Alessandro 11 August 1999 (has links)
When training Support Vector Machines (SVMs) over non-separable data sets, one sets the threshold $b$ using any dual cost coefficient that is strictly between the bounds of $0$ and $C$. We show that there exist SVM training problems with dual optimal solutions with all coefficients at bounds, but that all such problems are degenerate in the sense that the "optimal separating hyperplane" is given by $\mathbf{w} = \mathbf{0}$, and the resulting (degenerate) SVM will classify all future points identically (to the class that supplies more training data). We also derive necessary and sufficient conditions on the input data for this to occur. Finally, we show that an SVM training problem can always be made degenerate by the addition of a single data point belonging to a certain unbounded polyhedron, which we characterize in terms of its extreme points and rays.
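An extreme instance of this degeneracy is easy to reproduce. The construction below is an assumed illustration (five copies of the same input with disagreeing labels, fitted with scikit-learn's `SVC`), not the paper's own example: the fitted weight vector is zero and every future point is assigned to the majority class.

```python
import numpy as np
from sklearn.svm import SVC

# Degenerate problem: every training input is the same point, but the
# labels disagree (3 positive, 2 negative). The balance constraint
# sum(alpha_i * y_i) = 0 then forces w = sum(alpha_i * y_i * x_i) = 0.
X = np.ones((5, 1))
y = np.array([1, 1, 1, -1, -1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)

print(clf.coef_)                             # w = 0
preds = clf.predict([[0.0], [1.0], [100.0]])
print(preds)                                 # every point gets the same class
```

The decision function collapses to the constant intercept, so classification depends only on which class supplied more data, as the abstract states.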
106

Perspective Transformation: Analyzing the Outcomes of International Education

Tacey, Krista Diane August 2011 (has links)
The purpose of this dissertation was to analyze the impact of international experiential education on life choices, specifically those related to career and educational goals. This was accomplished through two main phases of research. In the first phase, a web-based survey was used to explore the question of whether international experiential education did, in fact, impact life choices. Responses from this initial phase were used to identify a purposive sample of eight respondents with whom telephone interviews were conducted in the second phase of the study. The goal of the interviews was to determine, for those who indicated that their life choices had been impacted by the abroad experience, when and why it had happened. The evaluation was done by applying Mezirow’s transformative learning theory to the analysis. The self-reported responses indicated that there was an impact on life choices related to educational and career goals in almost 80 percent of the 74 survey respondents. These data were used as the foundation for the second phase of the study, which examined the catalysts for, and the process of, transformation through the lens of transformative learning theory. Almost all respondents indicated that the international experience had transformed their perspectives on their identity and purpose in life. Seven out of eight respondents discussed how they had gained an understanding of the fact that where one is born defines his or her perspective. One’s sociocultural environment defines who one is and how he or she sees the world. The international experience allows a person to see themselves through the eyes of others. While the timing and specifics of the catalysts varied, each of these seven had gone through the phases of transformation--disorienting dilemma, critical reflection, changed frame of reference--with some relation to the abroad experience.
107

Fundamental Limitations of Semi-Supervised Learning

Lu, Tyler (Tian) 30 April 2009 (has links)
The emergence of a new paradigm in machine learning known as semi-supervised learning (SSL) has seen benefits to many applications where labeled data is expensive to obtain. However, unlike supervised learning (SL), which enjoys a rich and deep theoretical foundation, semi-supervised learning, which uses additional unlabeled data for training, still remains a theoretical mystery lacking a sound fundamental understanding. The purpose of this research thesis is to take a first step towards bridging this theory-practice gap. We focus on investigating the inherent limitations of the benefits SSL can provide over SL. We develop a framework under which one can analyze the potential benefits, as measured by the sample complexity of SSL. Our framework is utopian in the sense that a SSL algorithm trains on a labeled sample and an unlabeled distribution, as opposed to an unlabeled sample in the usual SSL model. Thus, any lower bound on the sample complexity of SSL in this model implies lower bounds in the usual model. Roughly, our conclusion is that unless the learner is absolutely certain there is some non-trivial relationship between labels and the unlabeled distribution (``SSL type assumption''), SSL cannot provide significant advantages over SL. Technically speaking, we show that the sample complexity of SSL is no more than a constant factor better than SL for any unlabeled distribution, under a no-prior-knowledge setting (i.e. without SSL type assumptions). We prove that for the class of thresholds in the realizable setting the sample complexity of SL is at most twice that of SSL. Also, we prove that in the agnostic setting for the classes of thresholds and union of intervals the sample complexity of SL is at most a constant factor larger than that of SSL. We conjecture this to be a general phenomenon applying to any hypothesis class. We also discuss issues regarding SSL type assumptions, and in particular the popular cluster assumption. 
We give examples that show even in the most accommodating circumstances, learning under the cluster assumption can be hazardous and lead to prediction performance much worse than simply ignoring the unlabeled data and doing supervised learning. We conclude with a look into future research directions that build on our investigation.
108

Contributions to Unsupervised and Semi-Supervised Learning

Pal, David 21 May 2009 (has links)
This thesis studies two problems in theoretical machine learning. The first part of the thesis investigates the statistical stability of clustering algorithms. In the second part, we study the relative advantage of having unlabeled data in classification problems. Clustering stability was proposed and used as a model selection method in clustering tasks. The main idea of the method is that from a given data set two independent samples are taken. Each sample individually is clustered with the same clustering algorithm, with the same setting of its parameters. If the two resulting clusterings turn out to be close in some metric, it is concluded that the clustering algorithm and the setting of its parameters match the data set, and that clusterings obtained are meaningful. We study asymptotic properties of this method for certain types of cost minimizing clustering algorithms and relate their asymptotic stability to the number of optimal solutions of the underlying optimization problem. In classification problems, it is often expensive to obtain labeled data, but on the other hand, unlabeled data are often plentiful and cheap. We study how the access to unlabeled data can decrease the amount of labeled data needed in the worst-case sense. We propose an extension of the probably approximately correct (PAC) model in which this question can be naturally studied. We show that for certain basic tasks the access to unlabeled data might, at best, halve the amount of labeled data needed.
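The two-sample stability method can be sketched as follows. The data, the choice of k-means, and the agreement metric (adjusted Rand index on a common reference sample) are illustrative assumptions, not the thesis's formal setup.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(3)

# Stability-based model selection (sketch): draw two independent samples,
# cluster each with the same algorithm and parameters, and compare the
# induced clusterings on common reference points. High agreement suggests
# the parameter choice (here k=2) matches the data.
def sample(n):
    return np.vstack([rng.normal(0, 0.3, (n, 2)),    # cluster near (0, 0)
                      rng.normal(5, 0.3, (n, 2))])   # cluster near (5, 5)

km1 = KMeans(n_clusters=2, n_init=10, random_state=0).fit(sample(100))
km2 = KMeans(n_clusters=2, n_init=10, random_state=0).fit(sample(100))

ref = sample(200)  # common reference set
stability = adjusted_rand_score(km1.predict(ref), km2.predict(ref))
print(stability)   # close to 1 for a well-matched k
```

The thesis's asymptotic analysis concerns precisely when such agreement scores are informative, relating stability to the number of optimal solutions of the underlying cost-minimization problem.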
109

Exposure to Gambling-Related Media and its Relation to Gambling Expectancies and Behaviors

Valentine, Leanne 17 July 2008 (has links)
Today’s youth have been exposed to more gambling-related media than previous generations, and they have grown up in an era in which states not only sanction but also run and promote gambling enterprises. Social Learning Theory proposes that one can develop new attitudes or expectancies about a specific behavior by watching others engage in that behavior, and that the media is one avenue through which one can develop new expectancies (Bandura, 2001). In addition, the Theory of Reasoned Action proposes that one’s behaviors are influenced directly by both subjective norms and attitudes (Fishbein and Ajzen, 1975). A mixed methods explanatory design was used to test a modified version of the Theory of Reasoned Action in which subjective norms and gambling-related media were hypothesized to have an effect on gambling behaviors directly and indirectly through both positive and negative expectancies. Structural Equation Modeling was used to test the hypotheses, and semi-standardized interviews were used to help explain the results of the quantitative analyses and provide a richer and more accurate interpretation of the data. The hypothesized model was partially supported: the model was a good fit with the female college student data, accounting for 27.8% of variance in female student gambling behaviors, and it fit the male college student data reasonably well, accounting for 35.2% of variance in male student gambling behaviors. Results indicated that perceived subjective norms were more important for female college students. Results also indicated that exposure to gambling-related media has a direct positive association with both male and female college student gambling behaviors, and that exposure to gambling-related media has an indirect, positive association with male college student behaviors through positive expectancies. However, exposure to gambling-related media is not associated with positive expectancies about gambling for female college students. 
Data from the qualitative interviews supported the findings from the quantitative analyses and provided some clues about the progression from non-problematic to problematic behaviors, which may inform future research in this area.
