Schizotypy's shape: structure, items, and dependability

Stringer, Deborah Michele 01 December 2012 (has links)
Dimensional models of schizotypy and associated traits have taken on current relevance in the DSM-5 (http://www.dsm5.org) proposal for personality disorder (PD), which includes a personality trait initially conceptualized as a five-facet schizotypy domain and then simplified into a three-facet psychoticism domain that has yet to be evaluated extensively. In this study, I (1) reviewed the literature to develop hypotheses about the content and boundaries of the schizotypy domain, and (2) measured this content in a mixed sample of students and patients, with 657 usable protocols at Time 1 (193 Notre Dame students, 301 University of Iowa students, 163 outpatients) and 263 usable protocols at Time 2 (74 Notre Dame students, 76 University of Iowa students, 113 outpatients), at least 1.5 weeks later. I then (3) evaluated confirmatory models, including the DSM-5 schizotypy and psychoticism facet models and other empirically grounded models, and (4) used the best-fitting confirmatory model, a four-factor structure, to provide item pools for classically constructing scales of four schizotypy facets: Unusual Perceptions, Unusual Beliefs, Dissociation Proneness, and Cognitive and Communicative Peculiarity. Additionally, (5) I used item response theory (IRT)-based analyses to evaluate items in these facet scales, both in terms of the level of schizotypy they best measure and the strength of their relations to the schizotypy construct. I also (6) examined the short-term test-retest reliability of the schizotypy scales, as well as that of the established measures used in this study; new and existing measures were comparably stable.
Finally, (7) I evaluated schizotypy's convergent and discriminant validity in relation to three other types of traits: (a) those correlated with the domain (e.g., Obsessive-Compulsive Disorder [OCD] and non-delusional mistrust), (b) other higher-level traits (i.e., measures of the three-factor and five-factor models of higher-order personality/temperament), and (c) familially related traits (e.g., social anxiety). Overall, the schizotypy facet measures appeared to assess moderate amounts of variance unexplained by the established measures of personality, temperament, and psychopathology included in this study. The implications of adding a schizotypy trait to the overall personality trait taxonomy are discussed.
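The IRT quantities described in step (5) can be sketched with a two-parameter logistic item response function; this is a generic illustration, not the study's analysis code, and the parameter values are hypothetical:

```python
import math

def item_response_prob(theta, a, b):
    """Two-parameter logistic (2PL) item response function: probability
    of endorsing an item at latent trait level theta, given
    discrimination a and difficulty (location) b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A respondent at theta equal to the item's location endorses it with
# probability 0.5; higher discrimination makes the curve steeper there.
at_location = item_response_prob(1.0, a=2.0, b=1.0)
sharp = item_response_prob(2.0, a=2.0, b=1.0)
blunt = item_response_prob(2.0, a=0.5, b=1.0)
```

In the abstract's terms, b indicates the level of schizotypy an item best measures, and a the strength of its relation to the construct.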
Implementation of the Apriori algorithm for effective item set mining in VigiBase™ : Project report in Teknisk Fysik 15 hp

Olofsson, Niklas January 2010 (has links)
No description available.
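No abstract accompanies this record, but the Apriori algorithm named in the title can be sketched generically; the adverse-event term sets below are hypothetical and not drawn from VigiBase:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classic Apriori: a k-itemset can be frequent only if every one of
    its (k-1)-subsets is frequent. Returns {itemset: support_count}."""
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        items = set().union(*frequent)
        # Candidate generation with subset pruning.
        candidates = {frozenset(c) for c in combinations(sorted(items), k)
                      if all(frozenset(s) in result
                             for s in combinations(sorted(c), k - 1))}
        counts = {c: sum(1 for t in transactions if c <= set(t))
                  for c in candidates}
        frequent = {s: c for s, c in counts.items() if c >= min_support}
        result.update(frequent)
        k += 1
    return result

# Hypothetical adverse-event reports (not VigiBase data).
reports = [
    {"nausea", "rash"},
    {"nausea", "headache"},
    {"nausea", "rash", "headache"},
    {"rash"},
]
frequent_sets = apriori(reports, min_support=2)
```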
A Comparison of Adjacent Categories and Cumulative DSF Effect Estimators

Gattamorta, Karina Alvarez 18 December 2009 (has links)
The study of measurement invariance in polytomous items that targets individual score levels is known as differential step functioning (DSF; Penfield, 2007, 2008). DSF methods provide specific information describing the manifestation of the invariance effect within particular score levels and therefore serve a diagnostic role in identifying the individual score levels involved in the item's invariance effect. The analysis of DSF requires the creation of a set of dichotomizations of the item response variable. There are two primary approaches for creating the set of dichotomizations to conduct a DSF analysis. The first approach, known as the adjacent categories approach, is consistent with the dichotomization scheme underlying the generalized partial credit model (GPCM; Muraki, 1992) and considers each pair of adjacent score levels while treating the other score levels as missing. The second approach, known as the cumulative approach, is consistent with the dichotomization scheme underlying the graded response model (GRM; Samejima, 1997) and includes data from every score level in each dichotomization. To date, there is limited research on how the cumulative and adjacent categories approaches compare within the context of DSF, particularly as applied to a real data set. The understanding of how the interpretation and practical outcomes may vary given these two approaches is also limited. The current study addressed these two issues. This study evaluated the results of a DSF analysis using both the adjacent categories and cumulative dichotomization schemes in order to determine if the two approaches yield similar results and interpretations of DSF. These approaches were applied to data from a polytomously scored alternate assessment administered to children with significant cognitive disabilities. The results of the DSF analyses revealed that the two approaches generally led to consistent results, particularly in the case where DSF effects were negligible. 
For steps where significant DSF was present, the two approaches generally guided analysts to the same location within the item. However, several aspects of the results raised questions about the use of the adjacent categories dichotomization scheme. First, the adjacent categories method appeared to lack independence, since large DSF effects at one step were often paired with large DSF effects in the opposite direction at the previous step. Additionally, when a substantial DSF effect existed, it was more likely to be significant under the cumulative approach than under the adjacent categories approach, likely because the cumulative approach's smaller standard errors give it greater stability. In sum, the results indicate that the cumulative approach is preferable to the adjacent categories approach when conducting a DSF analysis.
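The two dichotomization schemes can be sketched as follows; a minimal illustration for a four-level item, not the study's code:

```python
def adjacent_categories(scores, step):
    """GPCM-style dichotomization for step k: only responses at levels
    k-1 and k are kept (coded 0/1); all other levels become missing."""
    return [None if s not in (step - 1, step) else int(s == step)
            for s in scores]

def cumulative(scores, step):
    """GRM-style dichotomization for step k: every response is kept,
    coded 1 if the score is at or above level k, else 0."""
    return [int(s >= step) for s in scores]

scores = [0, 1, 2, 3, 2]          # one polytomous item, five examinees
adj = adjacent_categories(scores, step=2)   # [None, 0, 1, None, 1]
cum = cumulative(scores, step=2)            # [0, 0, 1, 1, 1]
```

Because the cumulative scheme retains every response in each dichotomization, it yields larger effective sample sizes and hence the smaller standard errors noted above.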
Carry-Over Facilitation for Non-Familiar Trials in Item-Recognition

Engström, Lisa January 2010 (has links)
Two aspects of cognitive control were investigated using the item-recognition task and the verb generation task. The item-recognition task had two conditions, high and low interference. The verb generation task was manipulated in three ways, for different levels of interference and time interval. The intention was to investigate one aspect of the item-recognition task in greater depth, comparing response times for different trial types across conditions, and to investigate a fatigue effect between the item-recognition and verb generation tasks. Thirty-two participants were tested on two occasions, in a within-subjects design. Results for the verb generation task revealed effects of level of interference and time interval, although the manipulations themselves produced no difference. Results for the item-recognition task revealed effects of condition and trial type, as well as an interaction between them: non-familiar trials in the high interference condition yielded faster response times than the same kind of trials in the low interference condition. This result extends those of previous studies, revealing details of the differences between trial types and demonstrating a carry-over facilitation effect.
Representations and Parameterizations of Combinatorial Auctions

Loker, David Ryan January 2007 (has links)
Combinatorial auctions (CAs) are an important mechanism for allocating multiple items while allowing agents to specify preferences over bundles of items. In order to communicate these preferences, agents submit bids, which consist of one or more items and a value indicating the agent’s preference for these items. The process of determining the allocation of items is known as the winner determination problem (WDP). WDP for CAs is known to be NP-complete in the general case. We consider two distinct graph representations of a CA: the bid graph and the item graph. In a bid graph, vertices represent bids, and two vertices are adjacent if and only if the bids share items in common. In an item graph, each vertex represents a unique item, and any bid submitted by any agent must induce a connected subgraph of the item graph. We introduce a new definition of combinatorial auction equivalence by declaring two CAs equivalent if and only if their bid graphs are isomorphic. Parameterized complexity theory can be used to further distinguish between NP-hard problems. To make use of parameterized complexity theory in the investigation of a problem, we aim to find one or more parameters describing some aspect of the problem such that, once these parameters are fixed, either the problem is still hard (fixed-parameter intractable) or it can be solved in polynomial time (fixed-parameter tractable). We analyze WDP using bid graphs within the formal scope of parameterized complexity theory. This approach has not previously been used to analyze WDP for CAs, although it has been used to solve set packing, which is related to WDP for CAs and is discussed in detail. We investigate several parameterizations of WDP; some are shown to be fixed-parameter intractable, while others are fixed-parameter tractable. We also analyze WDP when the graph class of the bid graph is restricted.
We also discuss relationships between item graphs and bid graphs. Although both graphs can represent the same problem, there is little previous work analyzing direct relationships between them. Our discussion on these relationships begins with a result by Conitzer et al. [7], which focuses on the item graph representation and its treewidth, a property of a graph that measures how close the graph is to a tree. From a result by Gavril, if an item graph has treewidth one, then the bid graph must be chordal [16]. To apply the other direction of Gavril’s theorem, we use our new definition of CA equivalence. With this new definition, Gavril’s result shows that if a bid graph of a CA is chordal, then we can construct an item graph that has treewidth one for some equivalent CA.
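The bid-graph construction described above can be sketched as follows, using a hypothetical three-bid auction (a minimal illustration, not the thesis's implementation):

```python
from itertools import combinations

def bid_graph(bids):
    """Build the bid graph: one vertex per bid, with an edge between two
    bids if and only if they share at least one item."""
    edges = set()
    for i, j in combinations(range(len(bids)), 2):
        if bids[i]["items"] & bids[j]["items"]:
            edges.add((i, j))
    return edges

# Hypothetical CA with three bids over items {x, y, z}.
bids = [
    {"items": {"x", "y"}, "value": 5},
    {"items": {"y", "z"}, "value": 4},
    {"items": {"z"}, "value": 3},
]
edges = bid_graph(bids)
# Bids 0 and 2 share no items, so {0, 2} is an independent set and a
# feasible allocation; this is why WDP on the bid graph corresponds to
# (weighted) independent set / set packing.
```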
Application and evaluation of UF and RO membrane

Su, Huan-Shen 30 June 2011 (has links)
The quality of raw water sources is now strongly affected by the typhoons and rainstorms brought by climate change, compounded by deforestation, soil and rock debris flows, and poor soil and water conservation on hillsides. Many researchers have therefore turned to membrane technology to remove pollutants such as suspended solids, algae, heavy metals, and toxic organics. This work studies the performance of advanced water treatment processes using UF and LPRO membranes at a treatment plant (denoted plant A). During the study period, we analyzed water-quality parameters such as turbidity, TOC, and hardness, together with operating parameters, to investigate the efficiency of UF and LPRO. Results showed that plant A's conventional treatment processes did not effectively remove TOC, Fe ions, Mn ions, or hardness from the raw water, but removal efficiency exceeded 80% with the subsequent UF/RO treatment. When plant A was operated under good control with proper chemical cleaning, the service life of the UF/RO system was extended.
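The removal efficiencies reported above follow the standard percent-removal formula; a minimal sketch with hypothetical concentrations, not the plant A measurements:

```python
def removal_efficiency(influent, effluent):
    """Percent removal of a water-quality parameter across a treatment
    stage: (C_in - C_out) / C_in * 100."""
    return (influent - effluent) / influent * 100.0

# Hypothetical TOC concentrations in mg/L before and after UF/RO.
toc_removal = removal_efficiency(influent=5.0, effluent=0.8)  # 84.0 %
```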
Assessing Invariance of Factor Structures and Polytomous Item Response Model Parameter Estimates

Reyes, Jennifer McGee December 2010 (has links)
The purpose of the present study was to examine the invariance of the factor structure and item response model parameter estimates obtained from a set of 27 items selected from the 2002 and 2003 forms of Your First College Year (YFCY). The first major research question was: How similar/invariant are the factor structures obtained from two datasets (i.e., identical items, different people)? It was addressed in two parts: (1) exploring factor structures using the YFCY02 dataset; and (2) assessing factorial invariance using the YFCY02 and YFCY03 datasets. After using exploratory and confirmatory factor analysis for ordered data, a four-factor model using 20 items was selected based on acceptable model fit for the YFCY02 and YFCY03 datasets. The four factors (constructs) obtained from the final model were: Overall Satisfaction, Social Agency, Social Self Concept, and Academic Skills. To assess factorial invariance, partial and full factorial invariance were examined. The four-factor model fit both datasets equally well, meeting the criteria for partial and full measurement invariance. The second major research question was: How similar/invariant are person and item parameter estimates obtained from two different datasets (i.e., identical items, different people) for the homogeneous graded response model (GRM; Samejima, 1969) and the partial credit model (Masters, 1982)? To evaluate measurement invariance using IRT methods, the item discrimination and item difficulty parameters obtained from the GRM need to be equivalent across datasets. The correlation between the YFCY02 and YFCY03 GRM item discrimination (slope) parameters was 0.828; the correlation between the item difficulty (location) parameters was 0.716. The correlations and scatter plots indicated that the item discrimination parameter estimates were more invariant than the item difficulty parameter estimates across the YFCY02 and YFCY03 datasets.
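Comparing parameter estimates across administrations, as above, amounts to correlating the two estimate vectors; a minimal sketch with hypothetical discrimination values, not the YFCY estimates:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length vectors of
    parameter estimates."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical discrimination (slope) estimates from two forms.
a_2002 = [0.9, 1.4, 2.1, 1.1, 1.8]
a_2003 = [1.0, 1.3, 2.0, 1.2, 1.9]
r = pearson_r(a_2002, a_2003)   # close to 1 -> near-invariant estimates
```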
A Randomness Based Analysis on the Data Size Needed for Removing Deceptive Patterns

IBARAKI, Toshihide, BOROS, Endre, YAGIURA, Mutsunori, HARAGUCHI, Kazuya 01 March 2008 (has links)
No description available.