31

Developing a national frame of reference on student achievement by weighting student records from a state assessment

Tudor, Joshua 01 May 2015 (has links)
A fundamental issue in educational measurement is what frame of reference to use when interpreting students’ performance on an assessment. One frame of reference that is often used to enhance interpretations of test scores is a normative one, which adds meaning to test score interpretations by indicating the rank of an individual’s score within a distribution of test scores of a well-defined reference group. One of the most commonly used frames of reference on student achievement provided by test publishers of large-scale assessments is national norms, whereby students’ test scores are referenced to a distribution of scores of a nationally representative sample. A national probability sample can fail to fully represent the population because of student and school nonparticipation. In practice, this is remedied by weighting the sample so that it better represents the intended reference population. The focus of this study was the extent to which weighting grade 4 and grade 8 student records that are not fully representative of the nation can recover the distributions of reading and math scores in a national probability sample. Data from a statewide testing program were used to create six grade 4 and six grade 8 datasets, each varying in its degree of representativeness of the nation, as well as in the proximity of its reading and math distributions to those of a national sample. The six datasets created for each grade were separately weighted to different population totals in two different weighting conditions using four different bivariate stratification designs. The weighted distributions were then smoothed and compared to smoothed distributions of the national sample in terms of descriptive statistics, maximum absolute differences between the relative cumulative frequency distributions, and chi-square effect sizes. The impact of using percentile ranks developed from the state data was also investigated. By and large, the smoothed distributions of the weighted datasets recovered the national distribution in each content area, grade, and weighting condition. Weighting the datasets to the nation was effective in making the state test score distributions more similar to the national distributions. Moreover, the stratification design that defined weighting cells by the joint distribution of median household income and ethnic composition of the school consistently produced desirable results for the six datasets used in each grade. Log-linear smoothing using a polynomial of degree 4 was effective in making the weighted distributions even more similar to those of the national sample. Investigation of the impact of using the percentile ranks derived from the state datasets revealed that, for the distributions most similar to the national distributions, classifying student performance by the raw scores associated with the same percentile rank in each dataset produced a high percentage of agreement. The utility of having a national frame of reference on student achievement, and the efficacy of estimating such a frame of reference from existing data, are also discussed.
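The core mechanic described, weighting records so that cell counts match national totals, is standard poststratification. A minimal sketch, using hypothetical cell definitions, scores, and national totals rather than the study's actual stratification data:

```python
# Poststratification sketch (illustrative, not the study's implementation):
# each state record is weighted so its stratification cell's weighted count
# matches a hypothetical national total for that cell.
import pandas as pd

# Hypothetical state sample: one row per student record.
state = pd.DataFrame({
    "income_band": ["low", "low", "mid", "mid", "high", "high", "mid", "low"],
    "minority_pct": ["<25%", ">=25%", "<25%", ">=25%", "<25%", ">=25%", "<25%", "<25%"],
    "math_score": [210, 198, 225, 204, 240, 231, 219, 201],
})

# Hypothetical national totals for the same income x ethnicity cells.
national_totals = {
    ("low", "<25%"): 400_000, ("low", ">=25%"): 350_000,
    ("mid", "<25%"): 500_000, ("mid", ">=25%"): 300_000,
    ("high", "<25%"): 250_000, ("high", ">=25%"): 150_000,
}

cells = state.groupby(["income_band", "minority_pct"]).size()
# Each record's weight is the national cell total divided by the state cell count.
state["weight"] = state.apply(
    lambda r: national_totals[(r.income_band, r.minority_pct)]
              / cells[(r.income_band, r.minority_pct)],
    axis=1,
)

# Weighted percentile rank of a raw score in the reweighted distribution.
def percentile_rank(score, df):
    below = df.loc[df.math_score < score, "weight"].sum()
    return 100 * below / df.weight.sum()

print(percentile_rank(220, state))
```

The same weighted distribution would then feed the smoothing and comparison steps the abstract describes.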
32

Comparing Two Perspectives for Understanding Decisions from Description and Experience

Kauffman, Sandra S. 21 March 2014 (has links)
When trying to make sense of uncertain situations, we might rely on summary information from a description, or on information gathered through personal experience. Two approaches attempt to explain how we make risky decisions using descriptive or experiential information: the cognitive explanation from the description-experience gap literature, and the emotion-based explanation from the somatic marker hypothesis (SMH). This dissertation brings these two approaches together to better understand how we make risky decisions. Four options were presented, differing in advantageousness and riskiness. Cognitive penetrability, that is, how easy or difficult it was to consciously comprehend the reward structure, was manipulated by displaying either single outcomes or multiple, diverse outcomes per trial. Within the description or experience task, participants were randomly assigned to the more or less penetrable version of an all-gain or all-loss set of options. How often the riskier or advantageous options were chosen served as the measures of risky and advantageous decision making. Regardless of penetrability, risk preferences were generally, though not completely, as predicted by the SMH. Instead, the primary effect of cognitive penetrability was on advantageous decision making. Furthermore, description was found to be more cognitively penetrable than experience. Overall, the results suggest that clarification is needed regarding how somatic markers are formed in the loss versus the gain domain, and that future research should consider the difference in penetrability between description and experience when trying to explain preferences across the two decision formats.
33

Constraint Weighting Local Search for Constraint Satisfaction

Thornton, John Richard January 2000 (has links)
One of the challenges for the constraint satisfaction community has been to develop an automated approach to solving Constraint Satisfaction Problems (CSPs) rather than creating specific algorithms for specific problems. Much of this work has concentrated on the development and improvement of general purpose backtracking techniques. However, the success of relatively simple local search techniques on larger satisfiability problems [Selman et al. 1992] and on CSPs such as the n-queens [Minton et al. 1992] has caused interest in applying local search to constraint satisfaction. In this thesis we look at the usefulness of constraint weighting as a local search technique for constraint satisfaction. The work is based on the clause weighting ideas of Selman and Kautz [1993] and Morris [1993] and applies, evaluates and extends these ideas from the satisfiability domain to the more general domain of CSPs. Specifically, the contributions of the thesis are:

1. The introduction of a local search taxonomy. We examine the better known local search techniques and recognise four basic strategies: restart, randomness, memory and weighting.
2. The extension of the CSP modelling framework. In order to represent and efficiently solve more realistic problems, we extend the CSP modelling framework to include array-based domains and array-based domain use constraints.
3. The empirical evaluation of constraint weighting. We compare the performance of three constraint weighting strategies on a range of CSP and satisfiability problems, and against several other local search techniques. We find that no one technique dominates in all problem domains.
4. The characterisation of constraint weighting performance. Based on our empirical study, we identify the weighting behaviours and problem features that favour constraint weighting. We conclude that weighting does better on structured problems, where the algorithm can recognise a harder sub-group of constraints.
5. The extension of constraint weighting. We introduce an efficient arc weighting algorithm that additionally weights connections between constraints that are simultaneously violated at a local minimum. This algorithm is empirically shown to outperform standard constraint weighting on a range of CSPs and within a general constraint solving system. We also look at combining constraint weighting with other local search heuristics, and find that these hybrid techniques can do well on problems where the parent algorithms are evenly matched.
6. The application of constraint weighting to over-constrained domains. Our empirical work suggests constraint weighting does well on problems with distinctions between constraint groups. This led us to investigate solving real-world over-constrained problems with hard and soft constraint groups, and to introduce two dynamic constraint weighting heuristics that maintain a distinction between hard and soft constraint groups while still adding weights to violated constraints in a local minimum. In an empirical study, the dynamic schemes are shown to outperform other fixed weighting and non-weighting systems on a range of real-world problems. In addition, the performance of weighting is shown to degrade less severely when soft constraints are added to the system, suggesting constraint weighting is especially applicable to realistic problems with hard and soft constraints.
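For background, the clause/constraint weighting idea of Selman and Kautz [1993] and Morris [1993] can be sketched in a few lines: hill-climb on the weighted count of violated constraints, and at a local minimum increase the weights of the currently violated constraints so the cost surface changes under the search. A toy sketch under those assumptions, not any of the thesis's specific algorithms:

```python
# Constraint-weighting local search sketch: greedily take the move that most
# reduces the weighted violation count; at a local minimum, bump the weights
# of violated constraints to escape it.
import random

def solve(variables, domains, constraints, max_steps=10_000, seed=0):
    """constraints: list of (scope, predicate) pairs; predicate takes the
    full assignment dict and returns True when satisfied."""
    rng = random.Random(seed)
    assign = {v: rng.choice(domains[v]) for v in variables}
    weights = [1] * len(constraints)

    def cost(a):
        return sum(w for w, (_, pred) in zip(weights, constraints) if not pred(a))

    for _ in range(max_steps):
        current = cost(assign)
        if current == 0:
            return assign
        best, best_cost = None, current
        for v in variables:
            for val in domains[v]:
                if val == assign[v]:
                    continue
                trial = dict(assign); trial[v] = val
                c = cost(trial)
                if c < best_cost:
                    best, best_cost = (v, val), c
        if best is None:  # local minimum: reweight violated constraints
            for i, (_, pred) in enumerate(constraints):
                if not pred(assign):
                    weights[i] += 1
        else:
            assign[best[0]] = best[1]
    return None

# Toy use: 2-color a 3-node path so adjacent nodes differ.
vars_ = ["a", "b", "c"]
doms = {v: [0, 1] for v in vars_}
cons = [(("a", "b"), lambda s: s["a"] != s["b"]),
        (("b", "c"), lambda s: s["b"] != s["c"])]
print(solve(vars_, doms, cons))
```

The reweighting step is what distinguishes this family from plain min-conflicts: frequently violated constraints accumulate weight and end up satisfied early in later descents.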
34

Sensory Integration During Goal Directed Reaches: The Effects of Manipulating Target Availability

Khanafer, Sajida 19 October 2012 (has links)
When using visual and proprioceptive information to plan a reach, it has been proposed that the brain combines these cues to estimate the object’s and/or limb’s location. Specifically, according to the maximum-likelihood estimation (MLE) model, more reliable sensory inputs are assigned a greater weight (Ernst & Banks, 2002). In this research we examined whether the brain can adjust which sensory cue it weights the most. Specifically, we asked whether the brain changes how it weights sensory information when the availability of a visual cue is manipulated. Twenty-four healthy subjects reached to visual (V), proprioceptive (P), or visual + proprioceptive (VP) targets under different visual delay conditions (e.g. on V and VP trials, the visual target was available for the entire reach, was removed at the go-signal, or was removed 1, 2 or 5 seconds before the go-signal). Subjects completed 5 blocks of trials, with 90 trials per block. For 12 subjects the visual delay was kept consistent within a block of trials, while for the other 12 subjects different visual delays were intermixed within a block. To establish which sensory cue subjects weighted the most, we compared endpoint positions achieved on V and P reaches with those on VP reaches. Results indicated that all subjects weighted sensory cues in accordance with the MLE model across all delay conditions, and that these weights were similar regardless of the visual delay. Moreover, while errors increased with longer visual delays, there was no change in reaching variance. Thus, manipulating the visual environment was not enough to change subjects’ weighting strategy.
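The MLE model referenced here (Ernst & Banks, 2002) has a closed form: each cue is weighted by its relative reliability, i.e. its inverse variance. A minimal sketch of that rule, with illustrative numbers:

```python
# MLE cue combination: the combined estimate weights each cue by its
# reliability (inverse variance); the combined variance never exceeds
# the variance of either single cue.
def mle_combine(x_vis, var_vis, x_prop, var_prop):
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)
    w_prop = 1 - w_vis
    estimate = w_vis * x_vis + w_prop * x_prop
    combined_var = (var_vis * var_prop) / (var_vis + var_prop)
    return estimate, combined_var

# If vision is twice as reliable (half the variance), it gets 2/3 of the weight.
print(mle_combine(x_vis=10.0, var_vis=1.0, x_prop=12.0, var_prop=2.0))
```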
35

Omkonstruktion av friskluftsventil / Redesign of an air vent

Basic, Eldar, Fransson, Johan January 2009 (has links)
Through the design and manufacture of prototypes, and with the help of sound and flow measurements on these together with the standard model, we succeeded in improving the air flow through the vent Al-db 450, produced by Fresh AB in Gemla, Sweden. For the sound measurements, a test rig was designed and manufactured.
36

Efficient case-based reasoning through feature weighting, and its application in protein crystallography

Gopal, Kreshna 02 June 2009 (has links)
Data preprocessing is critical for machine learning, data mining, and pattern recognition. In particular, selecting relevant and non-redundant features in high-dimensional data is important to efficiently construct models that accurately describe the data. In this work, I present SLIDER, an algorithm that weights features to reflect their relevance in determining similarity between instances. Accurate weighting of features improves the similarity measure, which is useful in learning algorithms like nearest neighbor and case-based reasoning. SLIDER performs a greedy search for optimum weights in an exponentially large space of weight vectors. Exhaustive search being intractable, the algorithm reduces the search space by focusing on pivotal weights at which representative instances are equidistant to truly similar and different instances in Euclidean space. SLIDER then evaluates those weights heuristically, based on their effectiveness in properly ranking pre-determined matches of a set of cases relative to mismatches. I analytically show that by choosing feature weights that minimize the mean rank of matches relative to mismatches, the separation between the distributions of Euclidean distances for matches and mismatches is increased. This leads to a better distance metric, and consequently increases the probability of retrieving true matches from a database. I also discuss how SLIDER is used to improve the efficiency and effectiveness of case retrieval in a case-based reasoning system that automatically interprets electron density maps to determine the three-dimensional structures of proteins. Electron density patterns for regions in a protein are represented by numerical features, which are used in a distance metric to efficiently retrieve matching patterns from a large database. These pre-selected cases are then evaluated by more expensive methods to identify truly good matches; this strategy speeds up the retrieval of matching density regions, thereby enabling fast and accurate protein model-building. This two-phase case retrieval approach is potentially useful in many case-based reasoning systems, especially those with computationally expensive case matching and large case libraries.
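The retrieval machinery described, a feature-weighted distance evaluated by how well it ranks known matches ahead of mismatches, can be sketched briefly. The helper names and toy data below are hypothetical, not SLIDER's implementation:

```python
# Feature-weighted retrieval sketch: a weighted Euclidean distance, plus a
# score for a candidate weight vector based on the mean rank of known matches
# (lower mean rank = better weights), which a greedy search could minimize.
import numpy as np

def weighted_distance(x, y, w):
    return np.sqrt(np.sum(w * (x - y) ** 2))

def mean_match_rank(weights, queries, library, match_ids):
    """For each query, rank all library cases by weighted distance and
    record the rank of the query's true match."""
    ranks = []
    for q, m in zip(queries, match_ids):
        d = [weighted_distance(q, case, weights) for case in library]
        order = np.argsort(d)
        ranks.append(int(np.where(order == m)[0][0]) + 1)
    return float(np.mean(ranks))

rng = np.random.default_rng(0)
library = rng.normal(size=(50, 4))                          # 50 cases, 4 features
queries = library[:5] + rng.normal(scale=0.1, size=(5, 4))  # noisy near-duplicates
match_ids = list(range(5))                                  # known true matches
uniform = np.ones(4)
print(mean_match_rank(uniform, queries, library, match_ids))
```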
37

An Ontology-based Hybrid Recommendation System Using Semantic Similarity Measure And Feature Weighting

Ceylan, Ugur 01 September 2011 (has links) (PDF)
The task of a recommendation system is to recommend items that are relevant to the preferences of users. The two main approaches in recommendation systems are collaborative filtering and content-based filtering. Collaborative filtering systems suffer from some major problems, such as sparsity, scalability, and the new-item and new-user problems. In this thesis, a hybrid recommendation system based on the content-boosted collaborative filtering approach is proposed in order to overcome the sparsity and new-item problems of collaborative filtering. The content-based part of the proposed approach exploits semantic similarities between items, based on a priori defined ontology-based metadata in the movie domain, and feature weights derived from content-based user models. Recommendations are generated using the semantic similarities between items together with collaborative-based user models. The results of the evaluation phase show that the proposed approach improves the quality of recommendations.
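The content-boosted idea, densifying the sparse rating matrix from item similarities before collaborative prediction, can be illustrated with a toy sketch; the similarity values below merely stand in for the thesis's ontology-based semantic similarity measure:

```python
# Content-boosted collaborative filtering sketch: missing ratings are imputed
# from semantically similar items the user has rated, which addresses the
# sparsity and new-item problems before any collaborative step runs.
import numpy as np

ratings = np.array([            # users x items, 0 = unrated
    [5, 0, 4],
    [4, 2, 0],
    [0, 1, 2],
], dtype=float)

item_sim = np.array([           # assumed ontology-derived item similarities
    [1.0, 0.2, 0.9],
    [0.2, 1.0, 0.3],
    [0.9, 0.3, 1.0],
])

def impute(R, S):
    """Fill each missing rating with a similarity-weighted average of the
    user's observed ratings (the content-based 'boost')."""
    filled = R.copy()
    for u in range(R.shape[0]):
        rated = R[u] > 0
        for i in np.where(~rated)[0]:
            w = S[i, rated]
            filled[u, i] = (w @ R[u, rated]) / w.sum()
    return filled

print(impute(ratings, item_sim))
```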
39

Analysis of Dependently Truncated Sample Using Inverse Probability Weighted Estimator

Liu, Yang 01 August 2011 (has links)
Many statistical methods for truncated data rely on the assumption that the failure and truncation times are independent, which can be unrealistic in applications. The study cohorts obtained from bone marrow transplant (BMT) registry data are commonly recognized as truncated samples, in which the time-to-failure is truncated by the transplant time. There is clinical evidence that a longer transplant waiting time predicts worse survivorship. Therefore, it is reasonable to assume dependence between the transplant and failure times. To better analyze BMT registry data, we utilize a Cox analysis in which the transplant time is both a truncation variable and a predictor of the time-to-failure. An inverse-probability-weighted (IPW) estimator is proposed to estimate the distribution of transplant time. The usefulness of the IPW approach is demonstrated through a simulation study and a real application.
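For intuition, the IPW principle in its generic form: each sampled observation is weighted by the inverse of its inclusion probability, which removes selection bias. A toy sketch of that principle only, not the thesis's estimator for dependently truncated data (which works within the Cox framework described above):

```python
# Generic inverse-probability weighting: units enter the sample with unequal
# probabilities; weighting each observation by 1 / P(inclusion) recovers the
# population mean that the naive sample mean gets wrong.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=100_000)   # population variable
p_incl = np.clip(x / x.max(), 0.05, 1.0)       # larger values oversampled
sampled = rng.random(x.size) < p_incl

naive = x[sampled].mean()                      # biased upward by the selection
ipw = np.average(x[sampled], weights=1 / p_incl[sampled])

print(f"true mean  {x.mean():.3f}")
print(f"naive mean {naive:.3f}")
print(f"IPW mean   {ipw:.3f}")
```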
40

CLUSTER-BASED TERM WEIGHTING AND DOCUMENT RANKING MODELS

Murugesan, Keerthiram 01 January 2011 (has links)
A term weighting scheme measures the importance of a term in a collection. A document ranking model uses these term weights to find the rank or score of a document in the collection. We present a series of cluster-based term weighting and document ranking models based on the TF-IDF and Okapi BM25 models. These models update inter-cluster and intra-cluster frequency components based on the generated clusters, and use these components, in addition to the standard term and document frequency components, to weight the importance of a term. In this thesis, we show how these models outperform the TF-IDF and Okapi BM25 models in document clustering and ranking.
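One plausible instantiation of the idea is sketched below: standard TF-IDF multiplied by an inverse cluster frequency, so terms concentrated in few clusters score higher. The exact inter-cluster and intra-cluster components are defined in the thesis; the formula here is an assumption for illustration only:

```python
# Cluster-based term weighting sketch: TF-IDF extended with an inter-cluster
# component, so a term appearing in fewer clusters is treated as more
# discriminative than one spread across the whole collection.
import math
from collections import Counter

docs = [["apple", "fruit"], ["apple", "pie"], ["goal", "match"], ["match", "team"]]
clusters = [0, 0, 1, 1]   # assumed cluster assignment per document

N = len(docs)
K = len(set(clusters))
df = Counter(t for d in docs for t in set(d))  # document frequency per term
# number of distinct clusters whose documents contain the term
cluster_df = {t: len({c for d, c in zip(docs, clusters) if t in d}) for t in df}

def weight(term, doc):
    tf = doc.count(term)
    idf = math.log(N / df[term])
    icf = math.log(1 + K / cluster_df[term])   # inter-cluster frequency component
    return tf * idf * icf

# "apple" occurs in 2 docs but only 1 cluster, so its icf boosts its weight.
print(round(weight("apple", docs[0]), 3))
```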
