  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

An exploratory study of teachers’ use of mathematical knowledge for teaching to support mathematical argumentation in middle-grades classrooms

Kim, Hee-Joon 30 January 2012 (has links)
Mathematical argumentation is fundamental to doing mathematics and developing new knowledge. Working from the view that mathematical argumentation is also integral to teaching and learning mathematics, this study investigated teachers’ use of mathematical knowledge for teaching (MKT) to support student participation in mathematical argumentation. Classroom observations were made of three case-study teachers’ implementation of a three-day curriculum unit on mathematical argumentation and supplemented with paper-and-pencil assessments of teachers’ MKT. Teaching moves, or teachers’ actions directed toward supporting argumentation, were identified as a unit of discourse in which MKT-in-action appeared. Teachers’ MKT showed up in three types of teaching moves: Revoicing by Reformulation, Responding to Student Difficulties, and Pressing for Generalization in Defining. MKT that was evident in these moves included knowledge of core information in argument, heuristic methods, and formulation of mathematical definition through and in argumentation. The findings highlight that supporting mathematical argumentation requires teachers to have a sophisticated understanding of the subject matter as well as of how concepts develop through argumentation. The findings are limited in that they treat MKT as a single factor in what are complex teaching practices. The study has implications for teacher learning and MKT assessments.
192

Discourse Comprehension and Informational Masking: The Effect of Age, Semantic Content, and Acoustic Similarity

Lu, Zihui 10 January 2014 (has links)
It is often difficult for people to understand speech when there are other ongoing conversations in the background. This dissertation investigates how different background maskers interfere with our ability to comprehend speech and why older listeners have more difficulty than younger listeners in these tasks. An ecologically valid approach was applied: instead of words or short sentences, participants were presented with two fairly lengthy lectures simultaneously, and their task was to listen to the target lecture and ignore the competing one. Afterwards, they answered questions about the target lecture. Experiment 1 found that both normal-hearing and hearing-impaired older adults performed more poorly than younger adults when everyone was tested in identical listening situations. However, when the listening situation was individually adjusted to compensate for age-related differences in the ability to recognize individual words in noise, the age-related difference in comprehension disappeared. Experiment 2 compared the masking effects of a single-talker competing lecture to a babble of 12 voices; the signal-to-noise ratio (SNR) was manipulated so that the masker was either similar in volume to the target or much louder. The results showed that the competing speech was much more distracting than babble. Moreover, increasing the masker level negatively affected speech comprehension only when the masker was babble; when it was a single-talker lecture, performance plateaued as the SNR decreased from -2 to -12 dB. Experiment 3 compared the effects of semantic content and acoustic similarity on speech comprehension by comparing a normal speech masker with a time-reversed one (to examine the effect of semantic content) and a normal speech masker with 8-band vocoded speech (to examine the effect of acoustic similarity).
The results showed that both semantic content and acoustic similarity contributed to informational masking, but the latter seemed to play a bigger role than the former. Together, the results indicated that older adults’ speech comprehension difficulties with maskers were mainly due to declines in their hearing capacities rather than their cognitive functions. The acoustic similarity between the target and competing speech may be the main reason for informational masking, with semantic interference playing a secondary role.
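The SNR manipulation in Experiment 2 can be illustrated with a short sketch: to present a masker at a given SNR relative to a target, the masker is scaled so that the ratio of average powers matches the desired level in dB. This is a generic illustration using noise stand-ins for the recordings, not the study's actual stimulus-preparation code:

```python
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Scale `masker` so the target-to-masker power ratio equals `snr_db`,
    then return the mixture (illustrative sketch only)."""
    p_target = np.mean(target ** 2)   # average power of the target
    p_masker = np.mean(masker ** 2)   # average power of the masker
    # Desired masker power is p_target / 10**(snr_db / 10)
    scale = np.sqrt(p_target / (p_masker * 10 ** (snr_db / 10)))
    return target + scale * masker

# White-noise stand-ins for the target lecture and the masker
rng = np.random.default_rng(0)
target = rng.standard_normal(16000)
masker = rng.standard_normal(16000)
mixture = mix_at_snr(target, masker, snr_db=-12)  # masker 12 dB above target
```

An SNR of -12 dB here means the masker is 12 dB more intense than the target, matching the most adverse condition described in Experiment 2.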
194

Multi-Regional Analysis of Contact Maps for Protein Structure Prediction

Ahmed, Hazem Radwan A. 24 April 2009 (has links)
1D protein sequences, 2D contact maps and 3D structures are three different representational levels of detail for proteins. Predicting protein 3D structures from their 1D sequences remains one of the complex challenges of bioinformatics. The "Divide and Conquer" principle is applied in our research to handle this challenge, by dividing it into two separate yet dependent subproblems, using a Case-Based Reasoning (CBR) approach. Firstly, 2D contact maps are predicted from their 1D protein sequences; secondly, 3D protein structures are then predicted from their predicted 2D contact maps. We focus on the problem of identifying common substructural patterns of protein contact maps, which could potentially be used as building blocks for a bottom-up approach for protein structure prediction. We further demonstrate how to improve identifying these patterns by combining both protein sequence and structural information. We assess the consistency and the efficiency of identifying common substructural patterns by conducting statistical analyses on several subsets of the experimental results with different sequence and structural information. / Thesis (Master, Computing), Queen's University, 2009-04-23.
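The 2D contact-map representation referred to above is conventionally obtained from a known 3D structure by thresholding pairwise inter-residue distances (a Cα–Cα cutoff of about 8 Å is a common convention; the thesis may use a different one). A minimal sketch with toy coordinates:

```python
import numpy as np

def contact_map(ca_coords, threshold=8.0):
    """Binary contact map from an (n, 3) array of C-alpha coordinates:
    residues i and j are 'in contact' if their Euclidean distance is below
    `threshold` angstroms. The 8 A cutoff is a common convention, assumed
    here for illustration."""
    diff = ca_coords[:, None, :] - ca_coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise distance matrix
    return (dist < threshold).astype(int)

# Toy example: four residues on a line, 5 A apart
coords = np.array([[0.0, 0, 0], [5.0, 0, 0], [10.0, 0, 0], [15.0, 0, 0]])
cmap = contact_map(coords)
# Adjacent residues (5 A apart) are in contact; residues two apart (10 A) are not
```

On real data the coordinates would come from a PDB file; the toy example only shows the thresholding step that turns a 3D structure into the 2D representation.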
195

Similarity analysis of industrial alarm flood data

Ahmed, Kabir Unknown Date
No description available.
196

Near Images: A Tolerance Based Approach to Image Similarity and its Robustness to Noise and Lightening

Shahfar, Shabnam 27 September 2011 (has links)
This thesis presents a tolerance near set approach to detecting similarity between digital images. Two images are considered as sets of perceptual objects, and a tolerance relation defines the nearness between objects. Two perceptual objects resemble each other if the difference between their descriptions is smaller than a tolerable level of error. Existing tolerance near set approaches to image similarity consider both images in a single tolerance space and compare the sizes of tolerance classes. This approach is shown to be sensitive to noise and distortions. In this thesis, a new tolerance-based method is proposed that considers each image in a separate tolerance space and defines similarity based on differences between histograms of the sizes of tolerance classes. The main advantage of the proposed method is its lower sensitivity to distortions such as added noise, darkening, or brightening. This advantage is demonstrated here through a set of experiments.
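The tolerance relation described above can be sketched for scalar object descriptions: two objects are tolerably near when their feature values differ by at most ε. The sketch below uses per-object tolerance neighborhoods rather than maximal tolerance classes, and hypothetical feature values, so it illustrates the idea rather than the thesis's implementation:

```python
def tolerance_neighborhoods(values, eps):
    """For each object (a scalar feature value), collect the indices of
    objects whose description differs by at most eps -- its tolerance
    neighborhood (a simplification of maximal tolerance classes)."""
    return [[j for j, v in enumerate(values) if abs(v - values[i]) <= eps]
            for i in range(len(values))]

def size_histogram(neighborhoods, n_bins):
    """Histogram of neighborhood sizes; the proposed method compares such
    size histograms between two images, one tolerance space per image."""
    hist = [0] * (n_bins + 1)
    for nb in neighborhoods:
        hist[len(nb)] += 1
    return hist

# Hypothetical per-object feature values for one image
features = [0.10, 0.12, 0.50, 0.52, 0.90]
nbs = tolerance_neighborhoods(features, eps=0.05)
hist = size_histogram(nbs, n_bins=len(features))
```

Computing one such histogram per image and taking a distance between the two histograms yields a similarity score in the spirit of the proposed method.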
197

The extended empirical likelihood

Wu, Fan 04 May 2015 (has links)
The empirical likelihood method introduced by Owen (1988, 1990) is a powerful nonparametric method for statistical inference. It has been one of the most researched methods in statistics in the last twenty-five years and remains a very active area of research today. There is now a large body of literature on the empirical likelihood method which covers its applications in many areas of statistics (Owen, 2001). One important problem affecting the empirical likelihood method is its poor accuracy, especially for small-sample and/or high-dimensional applications. The poor accuracy can be alleviated by using high-order empirical likelihood methods such as the Bartlett-corrected empirical likelihood, but it cannot be completely resolved by high-order asymptotic methods alone. Since the work of Tsao (2004), the impact of the convex hull constraint in the formulation of the empirical likelihood on finite-sample accuracy has been better understood, and methods have been developed to break this constraint in order to improve the accuracy. Three important methods in this direction are [1] the penalized empirical likelihood of Bartolucci (2007) and Lahiri and Mukhopadhyay (2012), [2] the adjusted empirical likelihood of Chen, Variyath and Abraham (2008), Emerson and Owen (2009), Liu and Chen (2010) and Chen and Huang (2012), and [3] the extended empirical likelihood of Tsao (2013) and Tsao and Wu (2013). The latter is particularly attractive in that it retains not only the asymptotic properties of the original empirical likelihood but also its important geometric characteristics. In this thesis, we generalize the extended empirical likelihood of Tsao and Wu (2013) to handle inferences in two large classes of one-sample and two-sample problems. In Chapter 2, we generalize the extended empirical likelihood to handle inference for the large class of parameters defined by one-sample estimating equations, which includes the mean as a special case.
In Chapters 3 and 4, we generalize the extended empirical likelihood to handle two-sample problems; in Chapter 3, we study the extended empirical likelihood for the difference between two p-dimensional means; in Chapter 4, we consider the extended empirical likelihood for the difference between two p-dimensional parameters defined by estimating equations. In all cases, we give both the first- and second-order extended empirical likelihood methods and compare these methods with existing methods. Technically, the two-sample mean problem in Chapter 3 is a special case of the general two-sample problem in Chapter 4. We single out the mean case to form Chapter 3 not only because it is a standalone published work, but also because it naturally leads up to the more difficult two-sample estimating equations problem in Chapter 4. We note that Chapter 2 is the published paper Tsao and Wu (2014); Chapter 3 is the published paper Wu and Tsao (2014). To comply with the University of Victoria policy regarding the use of published work for theses and in accordance with copyright agreements between authors and journal publishers, details of these published works are acknowledged at the beginning of their respective chapters. Chapter 4 is another joint paper, Tsao and Wu (2015), which has been submitted for publication.
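For reference, the convex hull constraint discussed above arises from the standard formulation of the empirical likelihood ratio for a mean (as in Owen, 1988), which for observations $X_1, \dots, X_n$ is

```latex
R(\mu) \;=\; \max\left\{ \prod_{i=1}^{n} n w_i \;:\; \sum_{i=1}^{n} w_i X_i = \mu,\ \; w_i \ge 0,\ \; \sum_{i=1}^{n} w_i = 1 \right\}.
```

When $\mu$ lies outside the convex hull of $X_1, \dots, X_n$, no such weights exist and the log-likelihood ratio is undefined; the penalized, adjusted, and extended methods cited above are different ways of relaxing or transforming this constraint.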
198

DEVELOPMENTAL FMRI STUDY: FACE AND OBJECT RECOGNITION

Gathers, Ann D. 01 January 2005 (has links)
Visual processing, though seemingly automatic, is complex. Typical humans process objects and faces routinely. Yet, when a disease or disorder disrupts face and object recognition, the effects are profound. Because of its importance and complexity, visual processing has been the subject of many adult functional imaging studies. However, relatively little is known about the development of the neural organization and underlying cognitive mechanisms of face and object recognition. The current project used functional magnetic resonance imaging (fMRI) to identify maturational changes in the neural substrates of face and object recognition in 5-8 year olds, 9-11 year olds, and adults. A passive face and object viewing task revealed cortical shifts in the face-responsive loci of the ventral processing stream (VPS), an inferior occipito-temporal region known to function in higher visual processing. Older children and adults recruited more anterior regions of the ventral processing stream than younger children. To investigate the potential cognitive basis for these developmental changes, researchers implemented a shape-matching task with parametric variations of shape overlap, structural similarity (SS), in stimulus pairs. VPS regions sensitive to high SS emerged in older children and adults. Younger children recruited no structurally-sensitive regions in the VPS. Two right hemisphere VPS regions were sensitive to maturational changes in SS. A comparison of face-responsive regions from the passive viewing task and the VPS SS regions did not reveal overlap. Though SS drives organization of the VPS, it did not explain the cortical shifts in the neural substrates for face processing. In addition to VPS regions, results indicated additional maturational SS changes in frontal, parietal, and cerebellar regions.
Based on these findings, further analyses were conducted to quantify and qualify maturational changes in face and object processing throughout the brain. Results indicated developmental changes in activation extent, signal magnitude, and lateralization of face and object recognition networks. Collectively, this project supports a developmental change in visual processing between 5-8 years and 9-11 years of age. Chapters Four through Six provide an in-depth discussion of the implications of these findings.
199

Likhetsteorin : En teoretisk utveckling av Collins interaktionsritualer / Similarity Theory : A Theoretical Development of Collins’ Interaction Rituals

Aronson, Olov January 2014 (has links)
The subject of the thesis originates in the discovery of a pattern in Collins’ theory of interaction rituals. The pattern indicates that several important ideas and concepts in Collins’ theory can be conceived as individuals’ experiences of similarity. Based upon the discovery of this pattern, the aim of the thesis is to develop the basic features of a new theory that explains interaction rituals by centering upon individuals’ experiences of similarity. In order to accomplish the aim, the essay discusses four aspects of interaction rituals.
In connection to the first aspect, an explanation is presented that reveals why individuals who experience similarity give each other support in their interaction rituals. Discussing the second aspect, the concept of effervescence of similarity is presented, which explains why experiences of similarity are crucial for generating emotional energy in interaction rituals. When developing the third aspect, a complete description of similarity theory is presented. Similarity theory explains interaction rituals by referring to individuals’ experiences of similarity. Finally, in the discussion of the fourth aspect, similarity theory is tested against empirical research. The concluding sections argue for ways in which similarity theory may contribute considerably to the understanding of interaction rituals. Also, suggestions for further research are presented.
200

Evaluating Text Segmentation

Fournier, Christopher 24 April 2013 (has links)
This thesis investigates the evaluation of automatic and manual text segmentation. Text segmentation is the process of placing boundaries within text to create segments according to some task-dependent criterion. An example of text segmentation is topical segmentation, which aims to segment a text according to the subjective definition of what constitutes a topic. A number of automatic segmenters have been created to perform this task, and the question that this thesis answers is how to select the best automatic segmenter for such a task. This requires choosing an appropriate segmentation evaluation metric, confirming the reliability of a manual solution, and then finally employing an evaluation methodology that can select the automatic segmenter that best approximates human performance. A variety of comparison methods and metrics exist for comparing segmentations (e.g., WindowDiff, Pk), and all save a few are able to award partial credit for nearly missing a boundary. Those comparison methods that can award partial credit unfortunately lack consistency, symmetry, intuition, and a host of other desirable qualities. This work proposes a new comparison method named boundary similarity (B) which is based upon a new minimal boundary edit distance to compare two segmentations. Near misses are frequent, even among manual segmenters (as is exemplified by the low inter-coder agreement reported by many segmentation studies). This work adapts some inter-coder agreement coefficients to award partial credit for near misses using the new metric proposed herein, B. The methodologies employed by many works introducing automatic segmenters evaluate them simply in terms of a comparison of their output to one manual segmentation of a text, and often only by presenting nothing other than a series of mean performance values (along with no standard deviation, standard error, or little if any statistical hypothesis testing).
This work asserts that one segmentation of a text cannot constitute a “true” segmentation; specifically, one manual segmentation is simply one sample of the population of all possible segmentations of a text and of that subset of desirable segmentations. This work further asserts that the adapted inter-coder agreement statistics proposed herein should be used to determine the reproducibility and reliability of a coding scheme and set of manual codings, and then statistical hypothesis testing using the specific comparison methods and methodologies demonstrated herein should be used to select the best automatic segmenter. This work proposes new segmentation evaluation metrics, adapted inter-coder agreement coefficients, and methodologies. Most importantly, this work experimentally compares the state-of-the-art comparison methods to those proposed herein upon artificial data that simulates a variety of scenarios and chooses the best one (B). The ability of adapted inter-coder agreement coefficients, based upon B, to discern between various levels of agreement in artificial and natural data sets is then demonstrated. Finally, a contextual evaluation of three automatic segmenters is performed using the state-of-the-art comparison methods and B, using the methodology proposed herein, to demonstrate the benefits and versatility of B as opposed to its counterparts.
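For context, WindowDiff, one of the window-based comparison methods named above, slides a fixed-size window over the text and counts windows in which the reference and hypothesis disagree on the number of boundaries they contain. A minimal sketch over 0/1 boundary strings, following the standard formulation and shown only as an illustration (not the thesis's own implementation):

```python
def window_diff(ref, hyp, k=None):
    """WindowDiff: `ref` and `hyp` are 0/1 lists where 1 marks a boundary
    after position i. Slides a window of size k and counts positions where
    the two segmentations disagree on the number of boundaries inside it."""
    n = len(ref)
    if k is None:
        # Conventional choice: half the average reference segment length
        k = max(1, round(n / (2 * (sum(ref) + 1))))
    disagreements = sum(
        1 for i in range(n - k)
        if sum(ref[i:i + k]) != sum(hyp[i:i + k])
    )
    return disagreements / (n - k)

ref = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
hyp = [0, 1, 0, 0, 0, 0, 1, 0, 0, 0]   # near miss on the first boundary
```

A score of 0 indicates perfect agreement and higher scores indicate more disagreement; the criticisms summarized above (and the proposed boundary similarity, B) target exactly how such window-based penalties behave around near misses.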
