  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Concurrent Online Testing for Many Core Systems-on-Chips

Lee, Jason Daniel 2010 December (has links)
Shrinking transistor sizes have introduced new challenges and opportunities for system-on-chip (SoC) design and reliability. Smaller transistors are more susceptible to early lifetime failure and electronic wear-out, greatly reducing their reliable lifetimes. However, smaller transistors will also allow SoCs to contain hundreds of processing cores and other infrastructure components with the potential for increased reliability through massive structural redundancy. Concurrent online testing (COLT) can provide sufficient reliability and availability to systems with this redundancy. COLT manages the process of testing a subset of processing cores while the rest of the system remains operational. This can be considered a temporary, graceful degradation of system performance that increases reliability while maintaining availability. In this dissertation, techniques to assist COLT are proposed and analyzed. The techniques described in this dissertation focus on two major aspects of COLT feasibility: recovery time and test delivery costs. To reduce the time between failure and recovery, and thereby increase system availability, an anomaly-based test triggering unit (ATTU) is proposed to initiate COLT when anomalous network behavior is detected. Previous COLT techniques have relied on initiating tests periodically. However, the testing period is typically derived from a device's mean time between failures (MTBF), and calculating MTBF is exceedingly difficult and imprecise. To address the test delivery costs associated with COLT, a distributed test vector storage (DTVS) technique is proposed to eliminate the dependency of test delivery costs on core location. Previous COLT techniques have relied on a single location to store test vectors, and it has been demonstrated that centralized storage of tests scales poorly as the number of cores per SoC grows.
Assuming that the SoC organizes its processing cores with a regular topology, DTVS uses an interleaving technique to optimally distribute the test vectors across the entire chip. DTVS is analyzed both empirically and analytically, and a testing protocol using DTVS is described. COLT is only feasible if the applications running concurrently are largely unaffected. The effect of COLT on application execution time is also measured in this dissertation, and an application-aware COLT protocol is proposed and analyzed. Application interference is greatly reduced through this technique.
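The interleaved placement that DTVS relies on can be sketched as follows. This is an illustrative reconstruction, not the dissertation's actual algorithm: the 2D-mesh topology, the round-robin policy, and all names are assumptions.

```python
def interleave_vectors(vectors, rows, cols):
    """Assign test vectors to the cores of a rows x cols mesh in
    round-robin (interleaved) order, so vectors are spread evenly and
    no single node becomes a test-delivery hotspot."""
    cores = [(r, c) for r in range(rows) for c in range(cols)]
    placement = {core: [] for core in cores}
    for i, vec in enumerate(vectors):
        placement[cores[i % len(cores)]].append(vec)
    return placement

# 8 test vectors over a 2x2 mesh: each core ends up storing 2 vectors
layout = interleave_vectors(list(range(8)), 2, 2)
```

Under this policy, a core needing vectors it does not hold can fetch them from nearby cores rather than a single central store, which is the scaling property the abstract attributes to DTVS.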
2

Comparative analysis of online testing systems

Astrauskas, Aurelijus 22 May 2006 (has links)
This work compares two online testing systems, EDU Campus and TestTool4. The first uses various types of standard test questions, with an emphasis on math-based subjects, and offers some unique features. The second takes a graphical, model-based approach to knowledge testing.
3

Characteristics of item response time for standardized achievement assessments

Wang, Min 01 May 2017 (has links)
Response time (RT) data provide unique insight into both items and examinees regarding speededness and time demand, and should be incorporated into test development practice. To allow test developers to utilize RT information, item RT needs to be summarized into point estimates (PEs) that can be understood by content specialists and saved into the item pool. The recent expansion of online testing in K-12 achievement assessments brings opportunities and challenges for measurement experts to investigate and utilize RT information in a context different from that of the majority of the literature, which focuses on licensing and certification tests, graduate admission tests, and other applications that incorporate computer-adaptive testing. Using empirical data from four tests in two grade levels of a K-12 standardized achievement assessment, this study explored the empirical distributions of item RT and their fit to five probability distributions, the characteristics of four item RT PEs, and the relationships between item RT PEs and eight item attributes. Across tests and grades, the empirical distributions of item RT presented widely variable shapes and did not fit any of the five proposed probability distributions; the 90th quantile showed an important capability for capturing and avoiding speededness issues; and the associations between item RT PEs and item attributes proved to be mixed. The generally idiosyncratic findings of this study call for a different perspective and approach to exploring RT data, and for more empirical studies to inform test development practice in K-12 standardized achievement assessment.
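As a rough illustration of what summarizing item RT into point estimates involves, the sketch below computes a mean, a median, and a 90th quantile for one item's response times. The interpolation rule and the sample values are illustrative, not taken from the study.

```python
from statistics import mean, median

def rt_point_estimates(times):
    """Summarize one item's response times (seconds) into candidate
    point estimates. The 90th quantile, computed here with simple
    linear interpolation, is the kind of upper-tail summary the study
    found useful for flagging speededness."""
    xs = sorted(times)

    def quantile(p):
        k = p * (len(xs) - 1)          # fractional rank
        lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
        return xs[lo] + (k - lo) * (xs[hi] - xs[lo])

    return {"mean": mean(xs), "median": median(xs), "q90": quantile(0.90)}

# one heavy-tailed item: a single very slow examinee pulls the mean up,
# while the 90th quantile tracks the upper tail directly
est = rt_point_estimates([20, 25, 30, 35, 40, 45, 50, 55, 60, 120])
```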
4

A novel online functional testing methodology based on a fully distributed continuous monitoring approach applied to communicating systems

Alvarez Aldana, José Alfredo 28 September 2018 (has links)
MANETs represent a significant area of network research due to the many opportunities derived from their inherent problems and applications. The most recurrent problems are mobility, availability, and limited resources. A well-known interest in networks, and therefore in MANETs, is to monitor properties of the network and its nodes. The constraints of MANETs can have a significant impact on monitoring efforts: mobility and availability can yield incomplete monitoring results. The properties usually monitored are simple ones, e.g., average CPU consumption, average bandwidth, and so on. Moreover, the evolution of networks has led to an increasing need to examine more complex, dependent, and intertwined behaviors. The literature indicates that the accuracy of monitored values, and therefore of the approaches, is unreliable and difficult to achieve due to the dynamic properties of MANETs. We therefore propose decentralized and distributed monitoring architectures that rely on multiple points of observation. The decentralized approach combines gossip and hierarchical algorithms to provide effective monitoring. 
Through extensive experimentation, we concluded that although we were able to achieve excellent performance, network fragmentation still has a harsh impact on the methodology. To improve our technique, we proposed a distributed approach, relying on a stronger bedrock, to enhance overall efficiency and accuracy. It provides a consensus mechanism that aggregates the results reported by many nodes into a more meaningful and accurate result. We support our proposal with numerous mathematical definitions that model local results for a single node and global results for the network. Our experiments were evaluated with an emulator built in-house that relies on Amazon Web Services, NS-3, Docker, and GoLang, with varying numbers of nodes, network size, network density, node speed, mobility algorithms, and timeouts. Through this emulator, we were able to analyze multiple aspects of the approaches by providing repeatable, documented, and accessible testbeds. We obtained promising results for both approaches, especially for the distributed approach regarding the accuracy of the monitored values.
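The consensus-style aggregation described above can be pictured with a toy push-pull gossip averaging loop: each round, a random pair of nodes averages their local estimates, and all estimates converge toward the network-wide mean. This is a generic textbook sketch, not the thesis's actual protocol; node count, round count, and seed are assumptions.

```python
import random

def gossip_average(values, rounds=50, seed=0):
    """Push-pull gossip averaging: each round a random pair of nodes
    replaces both local estimates with their average. The sum (and so
    the mean) is preserved exactly, while the spread shrinks toward 0."""
    rng = random.Random(seed)
    est = list(values)
    for _ in range(rounds):
        i, j = rng.sample(range(len(est)), 2)  # pick two distinct nodes
        avg = (est[i] + est[j]) / 2
        est[i] = est[j] = avg
    return est

# four nodes each holding a local CPU-load reading; after enough rounds
# every node's estimate approaches the true network mean of 25.0
vals = [10.0, 20.0, 30.0, 40.0]
result = gossip_average(vals, rounds=200)
```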
5

Online Management of Resilient and Power Efficient Multicore Processors

Rodrigues, Rance 01 September 2013 (has links)
The semiconductor industry has been driven by Moore's law for almost half a century. Miniaturization of device size has allowed more transistors to be packed into a smaller area while the improved transistor performance has resulted in a significant increase in frequency. Increased density of devices and rising frequency led, unfortunately, to a power density problem which became an obstacle to further integration. The processor industry responded to this problem by lowering processor frequency and integrating multiple processor cores on a die, choosing to focus on Thread Level Parallelism (TLP) for performance instead of traditional Instruction Level Parallelism (ILP). While continued scaling of devices has provided unprecedented integration, it has also unfortunately led to a few serious problems: The first problem is that of increasing rates of system failures due to soft errors and aging defects. Soft errors are caused by ionizing radiations that originate from radioactive contaminants or secondary release of charged particles from cosmic neutrons. Ionizing radiations may charge/discharge a storage node causing bit flips which may result in a system failure. In this dissertation, we propose solutions for online detection of such errors in microprocessors. A small and functionally limited core called the Sentry Core (SC) is added to the multicore. It monitors operation of the functional cores in the multicore and, whenever deemed necessary, opportunistically initiates Dual Modular Redundancy (DMR) to test the operation of the cores in the multicore. This scheme thus allows detection of potential core failures at a small hardware overhead. In addition to detection of soft errors, this solution is also capable of detecting errors introduced by device aging that result in failure of operation. The solution is further extended to verify cache coherence transactions. A second problem we address in this dissertation relates to power concerns.
While the multicore solution addresses the power density problem, overall power dissipation is still limited by packaging and cooling technologies. This limits the number of cores that can be integrated for a given package specification. One way to improve performance within this constraint is to reduce power dissipation of individual cores without sacrificing system performance. Prior solutions toward this objective involve Dynamic Voltage and Frequency Scaling (DVFS) and the use of sleep states; both take advantage of coarse-grained variation in demand for computation. In this dissertation, we propose techniques to maximize performance-per-watt of multicores at a fine-grained time scale. We propose multiple alternative architectures to attain this goal. One such architecture we explore is the Asymmetric Multicore Processor (AMP). AMPs have been shown to outperform symmetric multicores in both performance and performance-per-watt for a fixed resource and power budget. However, the effectiveness of these architectures depends on accurate thread-to-core scheduling. To address this problem, we propose online thread scheduling solutions that respond to the changing computational requirements of the threads. Another solution we consider is for Symmetric Multicore Processors (SMPs). Here we target sharing of large and underutilized resources between pairs of cores. While such architectures have been explored in the past, the evaluations were incomplete. Due to sharing, the shared resource sometimes becomes a bottleneck, resulting in significant performance loss. To mitigate such loss, we propose Dynamic Voltage and Frequency Boosting (DVFB) of the shared resources. This solution is found to significantly mitigate performance loss in times of contention. We also explore in this dissertation performance-per-watt improvement of individual cores in a multicore.
This is based on dynamic reconfiguration of individual cores to run them alternately in out-of-order (OOO) and in-order (InO) modes, adapting dynamically to workload characteristics. This solution is found to significantly improve power efficiency without compromising overall performance. Thus, in this dissertation we propose solutions for several important problems to facilitate continued scaling of processors. Specifically, we address challenges in the area of reliability of computation and propose low-power design solutions to address power constraints.
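The Sentry Core's opportunistic Dual Modular Redundancy check can be pictured with a toy comparison harness: the same inputs are run on two cores and any mismatch flags a potential soft error or aging fault. The function names and the injected fault below are illustrative, not the dissertation's implementation.

```python
def dmr_check(core_a, core_b, test_inputs):
    """Dual Modular Redundancy as a sentry core might apply it: run
    identical test inputs on two cores and report any input whose
    outputs disagree, indicating a fault in one of the pair."""
    mismatches = [x for x in test_inputs if core_a(x) != core_b(x)]
    return {"passed": not mismatches, "mismatches": mismatches}

# model cores as functions; the "faulty" core misbehaves on input 3,
# standing in for a bit flip or an aging-induced timing failure
healthy = lambda x: x * x
faulty = lambda x: x * x if x != 3 else 0

report = dmr_check(healthy, faulty, range(5))
```

Note that DMR alone only detects disagreement; deciding which core is faulty requires a third reference (e.g., a rerun on a known-good core), which is one reason the scheme is described as a trigger for further testing rather than a full diagnosis.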
6

Computerized adaptive testing and its use in psychodiagnostics

Dlouhá, Jana January 2014 (has links)
The theoretical part of the paper focuses on computerized adaptive testing (CAT) and item response theory (IRT). Also included is a chapter comparing IRT with the commonly used classical test theory (CTT). There is also a brief mention of computerized and online testing, as these types of administration differ in many aspects from conventional paper-and-pencil tests. The goal of this paper was to evaluate the individual ways of eEPI test administration and to compare them with eEPQ tests and self-evaluation. In the practical part the items of the extraversion scale of the Eysenck Personality Inventory (eEPI) were calibrated using a group of 124 respondents. The acquired data were subsequently used to carry out a simulation of computerized adaptive testing, which clearly demonstrated the benefits of this type of testing in comparison to the classical test form. These results were compared with the results of real CAT test administration using the original sample and a new group of respondents (Np=69, Nn=68). The results were highly correlated with the results of the simulated test. Moreover, to verify the validity of the computerized adaptive version of eEOD, the respondents' results in this test were compared with the results in the eEPQ test and in a short self-assessment scale. Finally,...
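The core CAT step, choosing the next item by maximum Fisher information at the current ability estimate, can be sketched under an IRT model as follows. The 2PL parameterization and the item parameters here are generic illustrations, not the eEPI calibration results.

```python
import math

def info_2pl(a, b, theta):
    """Fisher information of a 2PL item (discrimination a, difficulty b)
    at ability theta: I(theta) = a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def pick_next_item(items, theta, used):
    """CAT item selection: among unused items, choose the one with
    maximum information at the current ability estimate."""
    candidates = [(i, info_2pl(a, b, theta))
                  for i, (a, b) in enumerate(items) if i not in used]
    return max(candidates, key=lambda t: t[1])[0]

# illustrative (a, b) parameters: with equal discriminations, an
# examinee at theta = 0 is given the item with difficulty nearest 0
items = [(1.0, -2.0), (1.0, 0.1), (1.0, 2.0)]
next_item = pick_next_item(items, theta=0.0, used=set())
```

After each response, theta would be re-estimated (e.g., by maximum likelihood) and the loop repeated, which is why a CAT reaches a given precision with far fewer items than a fixed-form test.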
7

The Revised Test Anxiety-Online-Short Form Scale: Bifactor Modeling

Soyturk, Ilker 06 August 2021 (has links)
No description available.
8

Cross-fertilizing formal approaches for protocol conformance and performance testing

Che, Xiaoping 26 June 2014 (has links)
While today's communications are essential and a huge set of services is available online, computer networks continue to grow and novel communication protocols are continuously being defined and developed. De facto, protocol standards are required to allow different systems to interwork. Though these standards can be formally verified, developers may still produce errors leading to faulty implementations, which is why implementations must be strictly tested. However, most current testing approaches require stimulation of the implementation under test (IUT); if the system cannot be accessed or interrupted, the IUT cannot be tested. Besides, most existing works are based on formal models, and very few study the formalization of performance requirements. To solve these issues, we propose a novel logic-based testing approach to test protocol conformance and performance passively. In our approach, conformance and performance requirements can be accurately formalized using a Horn-logic based syntax and semantics. These formalized requirements are then tested against millions of messages collected from real communicating environments. The satisfactory results returned from the experiments demonstrate the functionality and efficiency of our approach. To satisfy the increasing need for real-time distributed testing, we also propose a distributed testing framework and an online testing framework, and we deployed both successfully in a real small-scale environment, obtaining promising preliminary results. Applying our approach to billions of messages and optimizing the algorithms remain future work.
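A Horn-style requirement checked passively over a captured message trace might look like the following toy sketch: "if a message satisfies a premise, then some later message within a deadline must satisfy a conclusion." The message format and the INVITE/200 OK example are assumptions for illustration, not the thesis's actual syntax.

```python
def holds(trace, premise, conclusion, max_delay):
    """Passively check a Horn-style timed requirement over a trace:
    every message satisfying `premise` must be followed, within
    `max_delay` seconds, by a message satisfying `conclusion`.
    The trace is only observed; the IUT is never stimulated."""
    for i, msg in enumerate(trace):
        if premise(msg):
            ok = any(conclusion(m) and m["t"] - msg["t"] <= max_delay
                     for m in trace[i + 1:])
            if not ok:
                return False
    return True

# a small SIP-like trace: the second INVITE is never answered
trace = [
    {"t": 0.0, "type": "INVITE"},
    {"t": 0.2, "type": "200 OK"},
    {"t": 5.0, "type": "INVITE"},
]

# requirement: every INVITE must be answered by a 200 OK within 1 second
verdict = holds(trace,
                lambda m: m["type"] == "INVITE",
                lambda m: m["type"] == "200 OK",
                max_delay=1.0)
```

Because the check only consumes a recorded trace, it scales to the "millions of collected messages" setting the abstract describes, unlike active approaches that must drive the IUT.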
9

A Framework for Evaluating an Introductory Statistics Programme at the University of the Western Cape.

Makapela, Nomawabo. January 2009 (has links)
There have been calls from both government and the private sector for Higher Education institutions to introduce programmes that produce employable graduates while at the same time contributing to the growing economy of the country by addressing the skills shortage. Transformation and intervention committees have since been introduced to follow the extent to which these challenges are being addressed (DOE, 1996, 1997; Luescher and Symes, 2003; Forbes, 2007). Among the issues needing urgent attention were the skills shortage and the underperformance of students, particularly university-entering students (Daniels, 2007; De Klerk, 2006; Cooper, 2001). Research, particularly in the South African context, has revealed that contributing to the underperformance of university-entering students and the shortage of skills are: the legacy of apartheid (which forced certain racial groups to focus on selected areas such as teaching and nursing), the schooling system (leaving university-entering students to struggle), and the gap between home language and academic language. Barrell (1998) places stress on language as a contributing factor in student performance. Although not much research has been done on the skills shortage, most areas with a skills shortage require Mathematics on either a minimum or a comprehensive scale. Students with a strong Mathematics background have proved to perform better than students with a limited or no Mathematics background in Grade 12 (Hahn, 1988; Conners, McCown & Roskos-Ewoldsen, 1998; Nolan, 2002). The department of Statistics offers an Introductory Statistics (IS) course at first-year level. Resources available to enhance student learning include a problem-solving component with web-based tutorials, and students attend lectures three hours per week.
The course material and all necessary information regarding the course, including teach-yourself problems and useful web sites and links, are stored in the Knowledge Environment for Web-based Learning (KEWL). Despite all the available information, students were not performing well and were not interested in the course. The department regards statistical numeracy as a life skill; its desire is to break down the fear of Statistics and to bring about a change in students' mindsets. The study was part of a contribution to ensuring that the department has the best first-year Statistics students in the Western Cape, achieving a success rate comparable to the national norm.
