1 |
Ranked set sampling for binary and ordered categorical variables with applications in health survey data. Chen, Haiying, 29 September 2004.
No description available.
|
2 |
Frequency Judgments and Recognition: Additional Evidence for Task Differences. Fisher, Serena Lynn, 27 October 2004.
Four linked experiments were run to examine the relationship between frequency judgment and recognition discrimination tasks. The purpose of these studies was to contrast the common-path model and the recursive reminding hypothesis as explanations for the underlying principles that drive these tasks. Item-attribute variables (printed frequency, connectivity, and set size) and an episodic variable (study frequency) were manipulated. Memory for recent episodes was evaluated using recognition and judgment-of-frequency (JOF) tasks. Although all of the variables except set size had significant effects in both tasks, an analysis of effect sizes revealed differences between the tasks: the item-attribute variables had larger effects in recognition than in JOF, and study frequency had a larger effect in JOF than in recognition. The reliability of these differences was established statistically by a repeated-measures analysis of the correlations between each subject's mean performance and the variables. Although the effect-size pattern is consistent with the reminding hypothesis, the effects of connectivity and printed frequency in the JOF task are not, since these variables index familiarity. This finding indicates that familiarity must be involved in making frequency judgments, which makes the reminding hypothesis inadequate on its own: it does not account for the item-attribute variables' contribution to familiarity and its subsequent effect on frequency estimates. Therefore, a dual-process approach, one that incorporates both reminding and recollection at test in the JOF task and also explains the influence of an underlying construct such as familiarity on both tasks, may be the most appropriate explanation of frequency estimation results.
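To make the analysis concrete, here is a minimal sketch of a per-subject correlation comparison of the kind described. All data are simulated, and the use of Pearson correlations and a paired t-test is an assumption; the abstract does not specify the exact procedure.

```python
import numpy as np
from scipy import stats

# Simulated per-subject data: rows = subjects, columns = levels of a
# manipulated variable (e.g., study frequency), one matrix per task.
rng = np.random.default_rng(0)
n_subjects, n_levels = 30, 4
levels = np.arange(1, n_levels + 1)
jof = rng.normal(levels * 0.8, 1.0, size=(n_subjects, n_levels))
recognition = rng.normal(levels * 0.3, 1.0, size=(n_subjects, n_levels))

def per_subject_r(scores, levels):
    """Correlation between each subject's mean scores and the variable's levels."""
    return np.array([stats.pearsonr(levels, row)[0] for row in scores])

r_jof = per_subject_r(jof, levels)
r_rec = per_subject_r(recognition, levels)

# Paired comparison of the correlations across tasks: a larger mean r
# in JOF than in recognition would mirror the reported study-frequency
# pattern.
t, p = stats.ttest_rel(r_jof, r_rec)
print(f"mean r (JOF) = {r_jof.mean():.2f}, mean r (recognition) = {r_rec.mean():.2f}")
print(f"paired t = {t:.2f}, p = {p:.4f}")
```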
|
3 |
Spectral Pattern Recognition by a Two-Layer Perceptron: Effects of Training Set Size. Fischer, Manfred M.; Staufer-Steinnocher, Petra, 10 1900.
Pattern recognition in urban areas is one of the most challenging issues in classifying satellite remote sensing data. Parametric pixel-by-pixel classification algorithms tend to perform poorly in this context because urban areas comprise a complex spatial assemblage of disparate land cover types, including built structures, numerous vegetation types, bare soil, and water bodies. Thus, there is a need for more powerful spectral pattern recognition techniques that utilize pixel-by-pixel spectral information as the basis for automated urban land cover detection. This paper adopts the multi-layer perceptron classifier suggested and implemented in [5]. The objective of this study is to analyse the performance and stability of this classifier, trained and tested for supervised classification (8 a priori given land use classes) of a Landsat-5 TM image (270 x 360 pixels) of the city of Vienna and its northern surroundings, while varying the training data set in the single-training-site case. Performance is measured in terms of total classification, map user's, and map producer's accuracies. In addition, the stability with respect to initial parameter conditions, classification error matrices, and error curves are analysed in some detail. (authors' abstract) / Series: Discussion Papers of the Institute for Economic Geography and GIScience
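As a rough illustration of the workflow, the following sketch trains a single-hidden-layer perceptron on pixel spectra and derives total, producer's, and user's accuracies from the error matrix. It uses scikit-learn's MLPClassifier as a stand-in for the classifier of [5], with random placeholder data in place of the Vienna scene; the band count and hidden layer size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

# Random placeholder data standing in for the Landsat-5 TM scene:
# 6 spectral bands per pixel (an assumption) and 8 a priori land use
# classes, as in the abstract. Real work would use labeled pixels.
rng = np.random.default_rng(42)
n_train, n_test, n_bands, n_classes = 1200, 400, 6, 8
X_train = rng.random((n_train, n_bands))
y_train = rng.integers(0, n_classes, n_train)
X_test = rng.random((n_test, n_bands))
y_test = rng.integers(0, n_classes, n_test)

# A single-hidden-layer perceptron for pixel-by-pixel spectral
# classification; the hidden layer size is arbitrary here.
clf = MLPClassifier(hidden_layer_sizes=(14,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# Classification error matrix (rows = reference, columns = predicted)
# and the three accuracy measures named in the abstract.
cm = confusion_matrix(y_test, pred, labels=range(n_classes))
total_acc = np.trace(cm) / cm.sum()
producers = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)  # per-class recall
users = np.diag(cm) / np.maximum(cm.sum(axis=0), 1)      # per-class precision
print(f"total classification accuracy: {total_acc:.3f}")
print("map producer's accuracy per class:", np.round(producers, 3))
print("map user's accuracy per class:", np.round(users, 3))
```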
|
4 |
Sequential Encoding in Visual Working Memory: In the Absence of Structure, Recency Determines Performance. Durbin, Jeffery, 29 October 2019.
Most prior investigations of visual working memory (VWM) presented the to-be-remembered items simultaneously in a static configuration (e.g., Luck & Vogel, 1997). However, in everyday situations, such as driving on a busy multilane highway, items (e.g., cars) are presented sequentially and must be retained to support later actions (e.g., knowing whether it is safe to change lanes). In a simultaneous presentation, the relative positions of items are apparent, but in a sequential presentation relative positions must be inferred in relation to the background structure (e.g., highway lane markings). To examine sequential encoding in VWM, we developed a novel task in which dots were presented slowly, one at a time, each dot appearing in one of six boxes (Experiment 1) or in invisible boxes within a visible encompassing outer frame (Experiment 2). Experiment 1 found strong recency effects for judgments of the color of dots at the end of the sequence, but not for judgments of their location. In contrast, without dividing lines, Experiment 2 found strong recency effects for both color and location judgments. These results held for accuracy, reaction time, and an integrated measure of speed and accuracy. We hypothesize that background structure allows the updating of VWM, slotting each new item into that structure to provide a new configuration that retains both old and new items, whereas in the absence of structure VWM suffers from severe retroactive interference.
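As an illustration of how such data might be scored, the sketch below computes accuracy, reaction time, and an integrated speed-accuracy measure by serial position. The abstract does not name its integrated measure; the inverse efficiency score (mean correct RT divided by accuracy) used here is one common choice, and the data are simulated with a built-in recency effect.

```python
import numpy as np

# Simulated trial records: serial position of the probed dot (0 = first
# of six), response correctness, and reaction time in seconds.
rng = np.random.default_rng(1)
n_trials = 600
position = rng.integers(0, 6, n_trials)
# Build in a recency effect: later positions are more accurate and faster.
accuracy = rng.random(n_trials) < (0.55 + 0.06 * position)
rt = rng.normal(1.2 - 0.05 * position, 0.15)

for pos in range(6):
    mask = position == pos
    acc = accuracy[mask].mean()
    mean_rt = rt[mask & accuracy].mean()  # correct trials only
    ies = mean_rt / acc  # inverse efficiency: lower = better
    print(f"position {pos}: acc={acc:.2f}, RT={mean_rt:.2f}s, IES={ies:.2f}")
```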
|
5 |
The impact of stimulus set size on efficiency of sight words training. Guo, Junchen, 13 August 2024.
Reading skills are widely recognized as fundamental abilities, crucial not only for academic success but also for participation in social activities and navigating interpersonal challenges. In the early formation of reading abilities, mastery of sight words is instrumental in enhancing reading proficiency, particularly for individuals lacking foundational reading skills. In educational practice, flashcard intervention is a widely used instructional approach. Kodak et al. (2020) first introduced the concept of stimulus set size and its impact on the efficiency of skill acquisition interventions, examining differences in training efficiency among children with autism spectrum disorder (ASD) under four distinct stimulus set size conditions; their findings suggested that larger stimulus set sizes tend to correlate with higher training efficiency. Expanding upon Kodak et al.'s research, the present study moves this investigation into general education settings, focusing on children's learning of sight words. By comparing the number of training trials, training time, and successful recognition rates among three participants across four stimulus set size conditions, the study assesses the influence of stimulus set size on the efficiency of sight word training. The results indicate a positive correlation between larger stimulus set sizes and higher training efficiency. Nevertheless, the homogeneity of the three participants may limit the generalizability of these findings. Future research should sample more diverse cohorts so that the conclusions can be applied more broadly.
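A minimal sketch of how training efficiency might be compared across set-size conditions follows. The set sizes, trial counts, and times are invented placeholders, not the study's data; expressing efficiency as words mastered per trial and per minute is one plausible operationalization, not the study's stated metric.

```python
# Hypothetical session records for one learner: for each stimulus set
# size condition, the trials and minutes needed to master that set.
sessions = {
    2: {"words_mastered": 2, "trials": 60, "minutes": 14.0},
    4: {"words_mastered": 4, "trials": 96, "minutes": 23.0},
    8: {"words_mastered": 8, "trials": 152, "minutes": 40.0},
    16: {"words_mastered": 16, "trials": 256, "minutes": 72.0},
}

# Efficiency as words mastered per training trial and per minute: if
# larger sets are more efficient, these ratios should rise with set size.
for set_size, s in sorted(sessions.items()):
    per_trial = s["words_mastered"] / s["trials"]
    per_minute = s["words_mastered"] / s["minutes"]
    print(f"set size {set_size:2d}: {per_trial:.3f} words/trial, "
          f"{per_minute:.3f} words/min")
```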
|
6 |
Power Efficient Last Level Cache for Chip Multiprocessors. Mandke, Aparna, January 2013.
The number of processor cores and the size of on-chip caches have been increasing on chip multiprocessors (CMPs). As a result, the leakage power dissipated in the on-chip cache has become very significant. We explore various techniques to switch off over-allocated cache so as to reduce the leakage power it consumes. A large cache offers non-uniform access latency to the different cores on a CMP; such a cache is called a Non-Uniform Cache Architecture (NUCA). Past studies have explored techniques to reduce leakage power for uniform-access-latency caches with a single application executing on a uniprocessor. Our power-optimized cache ideas are applicable to any memory technology and architecture for which the difference in leakage power between the on and off states of an on-chip cache bank is significant.
Switching off the last-level shared cache on a CMP is a challenging problem because of concurrently executing threads/processes and the large, dispersed NUCA cache. Hence, to determine the cache requirement on a CMP, we first propose a new, highly accurate method to estimate the working set size of an application, which we call the tagged working set size estimation (TWSS) method. This method has a negligible hardware storage overhead of 0.1% of the cache size. We demonstrate the use of TWSS by adaptively adjusting cache associativity. Our adaptive-associativity cache is scalable with respect to the number of cores on a CMP. It uses information available locally in a tile on a tiled CMP and thus avoids network accesses, unlike other commonly used heuristics such as average memory access latency and cache miss ratio. Our implementation gives 25% and 19% higher EDP (energy-delay product) savings than those obtained with the average memory access latency and cache miss ratio heuristics on a static NUCA platform (SNUCA), respectively.
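As a toy illustration of the idea (not the thesis's hardware mechanism, which the abstract does not detail), the sketch below estimates a working set by counting distinct cache-line tags in an access interval and sizes the active associativity accordingly; the cache geometry is assumed.

```python
# Toy software model of working-set-size estimation by tag counting, in
# the spirit of TWSS; the actual 0.1%-overhead hardware structure is not
# described in the abstract. Cache geometry below is an assumption.
LINE_BYTES = 64   # cache line size
SETS = 1024       # sets per L2 slice
MAX_WAYS = 16     # maximum associativity

def estimate_active_ways(addresses):
    """Count distinct cache-line tags touched in an interval and return
    how many ways per set would be needed to hold that working set."""
    distinct_lines = {addr // LINE_BYTES for addr in addresses}
    ways_needed = -(-len(distinct_lines) // SETS)  # ceiling division
    return min(max(ways_needed, 1), MAX_WAYS)

# Usage: feed one sampling interval's accesses, then power-gate the
# unused ways until the next interval.
trace = [i * LINE_BYTES for i in range(3000)] * 4  # 3000 lines, reused 4x
ways = estimate_active_ways(trace)
print(f"enable {ways}/{MAX_WAYS} ways for this interval")
```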
Cache misses increase with reduced cache associativity. Hence, we also propose to map some of the L2 slices onto the remaining L2 slices and switch off the mapped slices. An L2 slice comprises all the L2 banks in a tile. We call this technique the remap policy. Some applications execute with fewer threads than the available cores. In such applications, L2 slices that are farther from those threads are switched off and remapped onto L2 slices located nearer to them. By using nearer L2 slices via the remap policy, some applications show improved execution time in addition to reduced leakage power consumption in NUCA caches.
To estimate the maximum possible gains obtainable with the remap policy, we statically determine a near-optimal remap configuration using a genetic algorithm. We formulate this as an energy-delay product minimization problem. Our dynamic remap policy implementation gives energy-delay savings within 5%, on average, of those obtained with the near-optimal static configuration.
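The static search can be pictured as follows: a small genetic algorithm over remap configurations, minimizing a placeholder EDP cost. The cost model, population size, and operators are illustrative assumptions, not the thesis's simulator-driven formulation.

```python
import random

# Toy genetic algorithm over remap configurations. A configuration maps
# each of N_SLICES L2 slices to a target slice; slices that are not the
# target of any mapping stay switched off.
random.seed(0)
N_SLICES = 8
POP, GENS = 20, 50

def edp(config):
    on = set(config)  # slices that actually hold data
    energy = 1.0 + 0.5 * len(on) / N_SLICES              # leakage: more slices on
    delay = 1.0 + 0.3 * (N_SLICES - len(on)) / N_SLICES  # pressure: fewer slices on
    return energy * delay  # placeholder EDP model

def mutate(config):
    c = list(config)
    c[random.randrange(N_SLICES)] = random.randrange(N_SLICES)
    return tuple(c)

def crossover(a, b):
    cut = random.randrange(1, N_SLICES)
    return a[:cut] + b[cut:]

pop = [tuple(random.randrange(N_SLICES) for _ in range(N_SLICES))
       for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=edp)  # fittest (lowest EDP) first
    survivors = pop[: POP // 2]
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP - len(survivors))]
    pop = survivors + children

best = min(pop, key=edp)
print("near-optimal remap configuration:", best, "EDP:", round(edp(best), 3))
```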
The energy-delay product can also be minimized by improving execution time, which depends mainly on the static and dynamic NUCA access policies (SNUCA and DNUCA). The suitability of a cache access policy depends on the data sharing properties of a multi-threaded application. Hence, we propose three indices to quantify the data sharing properties of an application, and we use them to predict which access policy, SNUCA or DNUCA, is more suitable for that application.
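For illustration, the sketch below computes three plausible sharing indices from a toy access trace and applies a simple threshold rule to pick a policy. The index definitions and the threshold are assumptions; the abstract does not specify the thesis's indices.

```python
from collections import defaultdict

# Illustrative sharing indices from a (core_id, block_address) trace:
# (1) fraction of blocks touched by more than one core, (2) mean number
# of sharers per block, (3) fraction of accesses that hit shared blocks.
trace = [
    (0, 0x100), (1, 0x100), (0, 0x140), (2, 0x180),
    (3, 0x180), (1, 0x180), (0, 0x1c0), (0, 0x1c0),
]

sharers = defaultdict(set)
for core, block in trace:
    sharers[block].add(core)

shared = {b for b, s in sharers.items() if len(s) > 1}
idx1 = len(shared) / len(sharers)
idx2 = sum(len(s) for s in sharers.values()) / len(sharers)
idx3 = sum(1 for _, b in trace if b in shared) / len(trace)
print(f"shared-block fraction={idx1:.2f}, mean sharers={idx2:.2f}, "
      f"shared-access fraction={idx3:.2f}")

# A simple decision rule: mostly-private data favors DNUCA's block
# migration toward the using core, while heavy sharing favors SNUCA's
# stable placement. The 0.5 threshold is arbitrary.
policy = "DNUCA" if idx3 < 0.5 else "SNUCA"
print("predicted policy:", policy)
```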
|