About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The realist evaluation of educational technology

King, Melanie R. N. January 2017
PURPOSE. This thesis considers how best to address the challenges faced by educators, institutions and funding bodies trying not only to develop and implement educational technology successfully, but also to understand and evidence what works (and what does not) and why. The aim of the research was to find and validate an evaluation method that provided usable and useful evidence. APPROACH. A range of evaluations was undertaken to elicit the strengths and weaknesses of different approaches, augmented by drawing upon the experiences and outcomes published by others. The issues were analysed and the significance of the problem established: premature timing, unsuitable models, rapid change, complex implementation chains, inconsistent terminology, ideology and marketisation. A tailored realist evaluation framework was proposed as an alternative method and tested by evaluating an institutional lecture capture (LC) initiative. FINDINGS. The theory-driven realist approach provided a level of abstraction that helped gather evidence about wider influences and theories of the potential future impact of the LC programme and its linked policy. It proved valuable in generating real and practical recommendations for the institution, including what more could be done to improve uptake and support embedding in teaching and learning, from practice, policy and technological points of view. It identified some unanticipated disadvantages of LC as well as determining how and when it was most effective. PRACTICAL IMPLICATIONS. A Realist Evaluation of Technology Initiative (RETI) framework has been produced as a tool to aid rapid adoption of the approach. Recommendations for future research and seven guiding principles have been proposed to encourage the formation of a community of realist evaluative researchers in educational technology. ORIGINALITY/VALUE.
The rigorous application of a tailored realist evaluation framework (RETI) for educational technology (including the development of two Domain Reference Models) is the primary contribution to new knowledge. This research is significant because it has the potential to enable the synthesis of evaluation findings across the sector, building an evidence base of what works, for whom, in which contexts and why, ultimately helping policy-makers and practitioners make better-informed decisions about investment in education.
2

Model checking CSPZ: Techniques to overcome state explosion

MOTA, Alexandre Cabral January 2001
Cabral Mota, Alexandre; Cezar Alves Sampaio, Augusto. Model checking CSPZ: Techniques to overcome state explosion. Doctoral thesis (Tese de Doutorado), Programa de Pós-Graduação em Ciência da Computação, Universidade Federal de Pernambuco, Recife, 2001. Funded by the Conselho Nacional de Desenvolvimento Científico e Tecnológico.
3

Large-scale computer implementations and systemic organizational change

Cogan, Richard Brian 06 June 2003
No description available.
4

Side Channel Leakage Exploitation, Mitigation and Detection of Emerging Cryptosystems

Chen, Cong 26 March 2018
With the emerging computing technologies and applications of the past decades, cryptography faces tremendous challenges in its role of guarding our digital world. The advent of quantum computers may end the dominance of RSA and other public key algorithms based on the hardness of factorization and the discrete logarithm. To protect the Internet in the post-quantum era, great effort has been dedicated to the design of RSA substitutes. One of them is the family of code-based McEliece public key schemes, which are immune to quantum attacks. Meanwhile, new infrastructures like the Internet of Things bring enormous benefits but, due to their resource-constrained nature, require compact yet reliable cryptographic solutions. Motivated by this, many lightweight cryptographic algorithms have been introduced. Nevertheless, side channel attacks remain a practical threat to implementations of these new algorithms if no countermeasures are employed. Over the past decades two major categories of side channel countermeasures, namely masking and hiding, have been studied to mitigate such attacks. As a masking countermeasure, Threshold Implementation has become popular in recent years. It provides provable side channel resistance for hardware-based cryptosystems, but it also incurs significant overheads that need further optimization for constrained applications. Masking, especially higher-order masking, requires a low signal-to-noise ratio to be effective, which can be achieved by applying hiding countermeasures. Several tools have been introduced to evaluate the side channel resistance of countermeasures. Due to its simplicity, TVLA is being accepted by academia and industry as a one-size-fits-all leakage detection methodology that can be used by non-experts. However, its effectiveness can be degraded by environmental factors such as temperature variations.
Thus, a robust and simple evaluation method is desired. In this dissertation, we first show how differential power analysis can efficiently exploit the power consumption of a McEliece implementation to recover the private key. We then apply a Threshold Implementation scheme to protect against the proposed attack; to the best of our knowledge, this is the first application of Threshold Implementation to a public key cryptosystem. Next, we investigate reducing the number of shares in Threshold Implementation so as to bring down its overhead for constrained applications. Our study shows that Threshold Implementation with only two shares reduces the overheads and still provides reliable first-order resistance, but it also exhibits strong second-order leakage. We also propose a hiding countermeasure, a balanced encoding scheme based on the idea of Dual-Rail Pre-charge logic in hardware. We show that it effectively mitigates the leakage and can be combined with masking schemes to achieve better resistance. Finally, we compare the paired t-test with Welch's t-test used in the original TVLA and show its robustness against environmental noise. We also find that using a moving average when computing the t-statistics detects higher-order leakage faster.
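At its core, the TVLA methodology mentioned above is a Welch's t-test computed per sample point between a fixed-input and a random-input set of power traces, flagging points where |t| exceeds a threshold (conventionally 4.5). The following is a minimal illustrative sketch on synthetic traces; the trace length, noise level, and leak position are invented for the example:

```python
import numpy as np

def welch_t(fixed, random):
    """Welch's t-statistic per sample point between two trace sets.

    fixed, random: 2-D arrays with one power trace per row."""
    m1, m2 = fixed.mean(axis=0), random.mean(axis=0)
    v1, v2 = fixed.var(axis=0, ddof=1), random.var(axis=0, ddof=1)
    n1, n2 = fixed.shape[0], random.shape[0]
    return (m1 - m2) / np.sqrt(v1 / n1 + v2 / n2)

rng = np.random.default_rng(0)
fixed = rng.normal(0.0, 1.0, size=(5000, 100))
fixed[:, 40] += 0.3                      # injected data-dependent leak at sample 40
random_set = rng.normal(0.0, 1.0, size=(5000, 100))

t = welch_t(fixed, random_set)
leaky = np.flatnonzero(np.abs(t) > 4.5)  # TVLA's conventional threshold
```

With enough traces even a small data-dependent bias rises far above the threshold, which is also why a slow environmental drift between the two trace sets can trip the same test and produce false positives.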
5

Image processing and forward propagation using binary representations, and robust audio analysis using deep learning

Pedersoli, Fabrizio 15 March 2019
The work presented in this thesis covers three main topics: document segmentation and classification into text and score, efficient computation with binary representations, and deep learning architectures for polyphonic music transcription and classification. In the case of musical documents, an important problem is separating text from musical score by detecting the corresponding bounding boxes. A new algorithm is proposed for pixel-wise classification of digital documents into musical score and text, based on a bag-of-visual-words approach and random forest classification. A robust technique for identifying bounding boxes of text and music score from the pixel-wise classification is also proposed. For efficient processing of learned models, we turn our attention to binary representations. When dealing with binary data, the use of bit-packing and bit-wise computation can reduce computational time and memory requirements considerably. Efficiency is a key factor when processing large-scale datasets and in industrial applications. We propose a bit-packed representation for binary images that encodes both pixels and square neighborhoods, and design SPmat, an optimized framework for binary image processing, around it. Bit-packing and bit-wise computation can also be used for efficient forward propagation in deep neural networks. Quantized deep neural networks have recently been proposed with the goal of improving computational time performance and memory requirements while maintaining classification performance as far as possible. A particular type of quantized neural network is the binary neural network, in which the weights and activations are constrained to $-1$ and $+1$. In this thesis, we describe and evaluate Espresso, a novel optimized framework for fast inference of binary neural networks that takes advantage of bit-packing and bit-wise computations.
Espresso is self-contained, written in C/CUDA, and provides optimized implementations of all the building blocks needed to perform forward propagation. Following their recent success, we further investigate deep neural networks. They have achieved state-of-the-art results and outperformed traditional machine learning methods in many applications such as computer vision, speech recognition, and machine translation. However, in the case of music information retrieval (MIR) and audio analysis, shallow neural networks are commonly used, and the effectiveness of deep and very deep architectures for MIR and audio tasks has not been explored in detail. It is also not clear what the best input representation is for a particular task. We therefore investigate deep neural networks for the following audio analysis tasks: polyphonic music transcription, musical genre classification, and urban sound classification. We analyze the performance of common classification network architectures using different input representations, paying specific attention to residual networks. We also evaluate the robustness of these models on degraded audio using different combinations of training/testing data. Through experimental evaluation we show that residual networks provide consistent performance improvements when analyzing degraded audio across different representations and tasks. Finally, we present a convolutional architecture based on U-Net that can improve the polyphonic music transcription performance of different baseline transcription networks.
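The bit-packed inference trick that frameworks like Espresso exploit can be sketched in a few lines: pack each ±1 vector into a machine word, then one XNOR plus one popcount replaces n multiply-accumulates. This is an illustrative sketch of the general technique, not Espresso's actual implementation:

```python
def pack_bits(v):
    """Pack a ±1 vector into an integer, bit i set iff v[i] == +1."""
    w = 0
    for i, x in enumerate(v):
        if x == 1:
            w |= 1 << i
    return w

def binary_dot(a_bits, b_bits, n):
    """Dot product of two ±1 vectors of length n from their packed forms.

    XNOR counts matching bit positions; each match contributes +1 and
    each mismatch -1, so dot = matches - (n - matches) = 2*matches - n."""
    mask = (1 << n) - 1
    matches = bin(~(a_bits ^ b_bits) & mask).count("1")
    return 2 * matches - n

a = [1, -1, 1, 1, -1, -1, 1, -1]
b = [1, 1, -1, 1, -1, 1, 1, 1]
assert binary_dot(pack_bits(a), pack_bits(b), len(a)) == sum(x * y for x, y in zip(a, b))
```

In an optimized C/CUDA implementation the same idea operates on 32- or 64-bit words with hardware popcount instructions, which is where the large speed and memory gains come from.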
6

K-12 Educational Technology Implementations: A Delphi Study

VanDykGibson, Jennie L. 01 January 2016
The use of educational technologies is a key component of education reform. In its current national technology plan, Future Ready Learning: Reimagining the Role of Technology in Education, the U.S. Department of Education asserts that educational technologies can transform student learning. Successful integration of educational technology could increase student achievement and transform the setting to bring about positive social change. The purpose of this study was to provide a group of expert panelists an opportunity to identify strategies and guidelines to create an effective educational technology plan. Data were gathered using a modified Delphi technique from 7 teachers, 8 administrators, and 7 policymakers. All had expertise in educational technologies and experience with past state technology implementations, and all used a Delphi instrument to rate statements from current research. Their recommendations confirmed the importance of each stage of Rogers' 5 stages of the innovation-decision process; the panelists also reached consensus about the role of the state and its responsibility to provide support and guidance to districts and schools when implementing educational technology plans. The results showed that an individualized approach to implementation of an educational technology innovation, rather than an organizational approach, may improve the rate of diffusion and adoption of educational technology innovations in this state's K-12 public schools. This shift in how implementations are managed could produce a more efficient and effective way to integrate educational technology innovations in U.S. K-12 schools.
7

Implementation of adaptive digital FIR and reprogrammable mixed-signal filters using distributed arithmetic

Huang, Walter 12 November 2009
When computational resources are limited, especially multipliers, distributed arithmetic (DA) is used in lieu of the typical multiplier-based filtering structures. However, DA is not well suited to adaptive applications; the bottleneck is updating the memory table. Several attempts have been made to accelerate the memory update, but at the expense of additional memory usage and of convergence speed. To develop an adaptive DA filter with an uncompromised convergence rate, the memory table must be fully updated. In this research, an efficient method for fully updating a DA memory table is proposed. The proposed update method exploits the temporal locality of the stored data and subexpression sharing; it reduces the computational workload and requires no additional memory resources. DA using the proposed update method is called conjugate distributed arithmetic. Filters can also be constructed from analog components. Often, for lower precision computations, analog circuits use less power and less chip area than their digital counterparts. However, digital components are often used because of their ease of reprogrammability. Achieving such reprogrammability in analog is possible, but at the expense of additional chip area. A reprogrammable mixed-signal DA finite impulse response (FIR) filter is proposed to address the issues with reprogrammable analog FIR filters, namely constructing compact reprogrammable filtering structures, non-symmetric and imprecise filter coefficients, inconsistent sampling of the input data, and input sample data corruption. These issues are successfully addressed using distributed arithmetic, digital registers, and epots. Also proposed is a mixed-signal DA second-order section (SOS), which is used as the building block for higher order infinite impulse response filters.
The issues with an analog SOS filter are similar to those of an analog FIR filter: the lack of a compact reprogrammable filtering structure, imprecise filter coefficients, inconsistent sampling of the data, and corruption of the data samples. These issues are successfully addressed using distributed arithmetic and digital registers.
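The core idea of distributed arithmetic, trading multipliers for a precomputed table of coefficient partial sums indexed by input bit-slices, can be sketched as follows. This is a simplified, unsigned-input illustration of plain DA, not the adaptive update scheme proposed in the thesis:

```python
def da_table(coeffs):
    """Precompute the sum of every subset of coefficients, one entry
    per possible input bit pattern (2**len(coeffs) entries)."""
    n = len(coeffs)
    return [sum(c for j, c in enumerate(coeffs) if (k >> j) & 1)
            for k in range(1 << n)]

def da_fir(samples, coeffs, bits=8):
    """FIR inner product via distributed arithmetic.

    Processes one input bit-plane per iteration; the table lookup
    replaces all multiplications with shift-and-accumulate steps."""
    table = da_table(coeffs)
    acc = 0
    for b in range(bits):
        idx = 0
        for j, x in enumerate(samples):
            idx |= ((x >> b) & 1) << j   # bit b of each sample forms the address
        acc += table[idx] << b           # accumulate this bit-plane's partial sum
    return acc

coeffs = [3, -1, 4, 2]
samples = [10, 7, 255, 0]                # unsigned 8-bit inputs
assert da_fir(samples, coeffs) == sum(x * c for x, c in zip(samples, coeffs))
```

The table has one entry per subset of coefficients, which is why adapting the coefficients forces a table update; this is exactly the bottleneck the thesis's conjugate DA method targets.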
8

A comparison of circuit implementations from a security perspective

Sundström, Timmy January 2005
In the late 90's, research showed that all circuit implementations were susceptible to power analysis and that this analysis could be used to extract secret information. Further research to counteract this new threat by adding countermeasures or modifying the underlying algorithm only seemed to slow down the attack. There was no objective analysis of how different circuit implementations leak information and by what magnitude. This thesis presents such an objective comparison of five different logic styles. The comparison results are based on transistor-level simulations and show that it is possible to implement circuits in a more secure and easier way than has previously been suggested.
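The kind of power analysis referred to here can be illustrated with a toy difference-of-means DPA on simulated Hamming-weight traces. The substitution box, leakage model, and noise level below are all invented for the sketch; a real attack would use measured traces of an actual cipher:

```python
import numpy as np

rng = np.random.default_rng(1)
SBOX = rng.permutation(256)              # stand-in substitution box, not a real cipher's

def hw(x):
    """Hamming weight of a byte."""
    return bin(int(x)).count("1")

secret = 0x3C
plaintexts = rng.integers(0, 256, size=3000)
# Simulated power model: Hamming weight of the S-box output plus Gaussian noise.
traces = np.array([hw(SBOX[p ^ secret]) for p in plaintexts]) \
         + rng.normal(0.0, 0.5, size=3000)

def dpa_guess(plaintexts, traces):
    """Difference-of-means DPA: for each key guess, split the traces on a
    predicted output bit; the correct guess maximizes the mean gap."""
    def score(k):
        sel = np.array([SBOX[p ^ k] & 1 for p in plaintexts], dtype=bool)
        return abs(traces[sel].mean() - traces[~sel].mean())
    return max(range(256), key=score)

assert dpa_guess(plaintexts, traces) == secret
```

Countermeasures and logic-style changes of the kind compared in this thesis aim to shrink the data-dependent component of the power consumption so that this mean gap disappears into the noise.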
9

Optimized Composition of Parallel Components on a Linux Cluster

Al-Trad, Anas January 2012
We develop a novel framework for optimized composition of explicitly parallel software components with different implementation variants, given the problem size, data distribution scheme and processor group size on a Linux cluster. We consider two approaches (two cases of the framework). In the first approach, dispatch tables are built using measurement data obtained offline by executions for some (sample) points in the ranges of the context properties. Inter-/extrapolation is then used to do the actual variant selection for a given execution context at run-time. In the second approach, a cost function for each component variant is provided by the component writer for variant selection. These cost functions can internally look up measurement tables, built either offline or at deployment time, for computation- and communication-specific primitives. In both approaches, the call to an explicitly parallel software component (with different implementation variants) is made via a dispatcher instead of calling a variant directly. As a case study, we apply both approaches to a parallel component for matrix multiplication with multiple implementation variants, implemented using the Message Passing Interface (MPI). The results show the reduction in execution time for the optimally composed applications compared to applications with hard-coded composition. In addition, the results compare estimated and measured times for each variant using different data distributions, processor group sizes and problem sizes.
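The first approach, dispatch tables built from offline sample measurements with inter-/extrapolation at run-time, can be sketched as follows. The variant names, sample sizes, and timings are hypothetical, and a real dispatcher would also key on data distribution and processor group size:

```python
import bisect

# Hypothetical offline measurements: time (ms) per variant at sampled problem sizes.
SAMPLE_SIZES = [64, 256, 1024, 4096]
MEASURED = {
    "naive":   [0.1, 1.8, 120.0, 9000.0],
    "blocked": [0.3, 1.5,  60.0, 3500.0],
}

def interp(sizes, times, n):
    """Linear inter-/extrapolation of running time at problem size n."""
    i = min(max(bisect.bisect_left(sizes, n), 1), len(sizes) - 1)
    x0, x1, y0, y1 = sizes[i - 1], sizes[i], times[i - 1], times[i]
    return y0 + (y1 - y0) * (n - x0) / (x1 - x0)

def dispatch(n):
    """Select the variant with the smallest predicted time for size n."""
    return min(MEASURED, key=lambda v: interp(SAMPLE_SIZES, MEASURED[v], n))

assert dispatch(64) == "naive"       # small problems favor the simple variant
assert dispatch(2048) == "blocked"   # large problems favor the blocked variant
```

The second approach replaces `interp` with a component-writer-supplied cost function, but the dispatcher's variant selection by predicted minimum cost stays the same.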
