271
Variation Aware Placement for Efficient Key Generation using Physically Unclonable Functions in Reconfigurable Systems
Vyas, Shrikant S, 07 November 2016 (has links)
With the importance of data security at its peak today, many reconfigurable systems are used to provide security. This protection is often provided by FPGA-based encrypt/decrypt cores secured with secret keys. Physical unclonable functions (PUFs) use random manufacturing variations to generate outputs that can be used in keys. These outputs are specific to a chip and can be used to create device-tied secret keys. Due to reliability issues, key generation with PUFs typically requires error correction techniques, which can result in substantial hardware costs. Thus, the total cost of an $n$-bit key far exceeds just the cost of producing $n$ bits of PUF output. To tackle this problem, we propose the use of variation aware intra-FPGA PUF placement to reduce the area cost of PUF-based keys on FPGAs. We show that placing PUF instances according to the random variations of each chip instance reduces the bit error rate of the PUFs and the overall resources required to generate the key. Our approach has been demonstrated on a Xilinx Zynq-7000 programmable SoC using FPGA-specific PUFs with code-offset error correction based on BCH codes. The approach is applicable to any PUF-based system implemented in reconfigurable logic. To evaluate our approach, we first analyze the key metrics of a PUF: reliability and uniqueness. Reliability is related to bit error rate, an important parameter with respect to error correction. In order to generate reliable results from the PUFs, a total of four ZedBoards containing FPGAs are used in our approach. We quantify the effectiveness of our approach by implementing the same key generation scheme using variation-aware and default placement, and show the resources saved by our approach.
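The code-offset construction used above can be sketched in a few lines. The thesis uses BCH codes; in this illustrative sketch a 5x repetition code with majority-vote decoding stands in, and the function names, repetition factor, and key size are assumptions made for the example only.

```python
import secrets

N = 5  # repetition factor (illustrative; the thesis uses BCH codes instead)

def enroll(puf_bits, key_bits):
    # Helper data = PUF response XOR codeword(key). The helper data is public;
    # it reveals nothing about the key as long as the PUF response is unpredictable.
    codeword = [b for b in key_bits for _ in range(N)]
    return [p ^ c for p, c in zip(puf_bits, codeword)]

def reconstruct(noisy_puf_bits, helper):
    # XOR with the helper data recovers a noisy codeword; majority vote
    # per block corrects up to 2 flipped bits out of every 5.
    noisy_cw = [p ^ h for p, h in zip(noisy_puf_bits, helper)]
    key = []
    for i in range(0, len(noisy_cw), N):
        block = noisy_cw[i:i + N]
        key.append(1 if sum(block) > N // 2 else 0)
    return key

key = [1, 0, 1, 1]
puf = [secrets.randbelow(2) for _ in range(len(key) * N)]
helper = enroll(puf, key)

noisy = puf[:]   # a later re-evaluation of the PUF with two flipped bits
noisy[0] ^= 1
noisy[7] ^= 1
assert reconstruct(noisy, helper) == key
```

The trade-off the thesis targets is visible even here: lower PUF bit error rate (via variation-aware placement) permits a weaker, cheaper code for the same key reliability.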
272
Stabilisation exponentielle des systèmes quantiques soumis à des mesures non destructives en temps continu / Exponential stabilization of quantum systems subject to non-demolition measurements in continuous time
Cardona Sanchez, Gerardo, 30 October 2019 (has links)
Dans cette thèse, nous développons des méthodes de contrôle pour stabiliser des systèmes quantiques en temps continu sous mesures quantiques non-destructives. En boucle ouverte, ces systèmes convergent vers un état propre de l'opérateur de mesure, mais l'état résultant est aléatoire. Le rôle du contrôle est de préparer un état prescrit avec une probabilité de un. Le nouvel élément pour atteindre cet objectif est l'utilisation d'un mouvement brownien pour piloter les actions de contrôle. En utilisant la théorie stochastique de Lyapunov, nous montrons la stabilité exponentielle globale du système en boucle fermée. Nous explorons aussi la synthèse du contrôle pour stabiliser un code correcteur d'erreurs quantiques en temps continu. Un autre sujet d'intérêt est l'implémentation de contrôles efficacement calculables dans un contexte expérimental. Dans cette direction, nous proposons l'utilisation de lois de contrôle et de filtres réduits qui ne calculent que les caractéristiques classiques du système, correspondant à la base propre de l'opérateur de mesure. La formulation de ces filtres est importante pour aborder les problèmes de scalabilité du filtre posés par l'avancement des technologies quantiques. / In this thesis, we develop control methods to stabilize quantum systems in continuous time subject to quantum non-demolition measurements. In open loop, such quantum systems converge towards a random eigenstate of the measurement operator. The role of feedback is to prepare a prescribed eigenstate with unit probability. The novel element to achieve this is the introduction of an exogenous Brownian motion to drive the control actions. Using standard stochastic Lyapunov techniques, we show global exponential stability of the closed-loop dynamics. We also explore the design of the control layer for a quantum error correction scheme in continuous time. Another theme of interest is the implementation of efficiently computable control laws in experimental settings. In this direction, we propose the use of control laws and reduced-order filters that only track classical characteristics of the system, corresponding to the populations on the measurement eigenbasis. The formulation of these reduced filters is important for addressing the scalability issues of the filter posed by the advancement of quantum technologies.
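The population-only ("reduced filter") viewpoint can be illustrated with a toy simulation. The scalar SDE below is a standard textbook form for the population of one measurement eigenstate under a continuous QND measurement, with an assumed normalization; it is a sketch, not the thesis's exact model.

```python
import math
import random

def simulate(p0, gamma=1.0, dt=4e-3, steps=5000, seed=0):
    # Population p of one measurement eigenstate under continuous QND
    # measurement (assumed normalization): dp = 2*sqrt(gamma)*p*(1-p)*dW.
    # p is a bounded martingale, so it collapses to 0 or 1 -- a classical
    # quantity tracked without propagating a full density matrix.
    rng = random.Random(seed)
    p = p0
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        p += 2.0 * math.sqrt(gamma) * p * (1.0 - p) * dw
        p = min(max(p, 0.0), 1.0)  # clip numerical overshoot at the boundary
    return p

# In open loop the limiting eigenstate is random; by the martingale property,
# the fraction of runs absorbed at 1 should be close to the initial population.
finals = [simulate(0.3, seed=s) for s in range(100)]
frac_one = sum(f > 0.5 for f in finals) / len(finals)
print(frac_one)
```

The role of the feedback studied in the thesis is precisely to break this randomness and steer the collapse to a prescribed eigenstate with probability one.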
273
Vart är kronan på väg? : Utmaningen med växelkursprognoser - en jämförelse av prognosmodeller
Dahlberg, Magnus; Gombrii, Anders, January 2021 (has links)
Riksbanken har under de senaste åren blivit kritiserad för sina bristande prognoser av svenska valutakurser. I denna uppsats undersöks om slumpvandring (RW) är den mest framgångsrika prognosmodellen eller om alternativa ekonometriska prognosmodeller (AR, VAR och VECM) kan estimera framtida växelkurser mer korrekt på kort sikt, ett kvartal fram, och medellång sikt, fyra kvartal fram. I dessa prognosmodeller behandlas fem svenska makroekonomiska variabler som endogena: KPI, BNP, arbetslöshet, 3 månaders statsobligationer (T-bonds), samt en exogen variabel, amerikansk BNP. Den data som används är kvartalsdata från första kvartalet 1993 till andra kvartalet 2020 för respektive variabel. Resultaten från studien visar att RW är mer träffsäker än de multivariata modellerna (VAR och VECM) på både kort och medellång sikt. Residualerna utvärderas genom att titta på rotmedelkvadratfelet (RMSE) från respektive prognos. / In recent years, the Riksbank has been criticized for its underperforming forecasts of Swedish exchange rates. This thesis examines whether the random walk (RW) is the most successful model for forecasting the exchange rate (SEK/USD), or whether alternative econometric forecasting models (AR, VAR and VECM) can estimate future exchange rates more accurately in the short term (one quarter ahead) and the medium term (four quarters ahead). In these forecast models, five Swedish macroeconomic variables are treated as endogenous: CPI, GDP, unemployment, three-month Treasury bonds (T-bonds), and an exogenous variable, US GDP. The data used are quarterly data from the first quarter of 1993 to the second quarter of 2020 for each variable. Results from the study show that RW is more accurate than the multivariate models (VAR and VECM) in both the short and medium term. The residuals are evaluated using the root mean square error (RMSE) of each forecast.
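The RMSE comparison between a naive random walk and a fitted AR(1) can be sketched on synthetic data. Everything below (the generated series, windows, and seeds) is invented for illustration and is not the Riksbank series the thesis uses.

```python
import math
import random

# Generate a true random walk, fit an AR(1) by least squares on a training
# window, and compare one-step-ahead forecasts by RMSE on a hold-out window.
rng = random.Random(0)
y = [0.0]
for _ in range(500):
    y.append(y[-1] + rng.gauss(0.0, 1.0))

train, test = y[:400], y[400:]

# OLS fit of y_t = a + b*y_{t-1} on the training window.
x, t = train[:-1], train[1:]
mx, mt = sum(x) / len(x), sum(t) / len(t)
b = sum((xi - mx) * (ti - mt) for xi, ti in zip(x, t)) / sum((xi - mx) ** 2 for xi in x)
a = mt - b * mx

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# RW forecast: tomorrow equals today. AR(1) forecast: a + b*today.
rw_err = [test[i + 1] - test[i] for i in range(len(test) - 1)]
ar_err = [test[i + 1] - (a + b * test[i]) for i in range(len(test) - 1)]
print(rmse(rw_err), rmse(ar_err))  # on a true random walk the two are close
```

When the underlying series really is a random walk, the fitted AR(1) can only match the RW forecast up to estimation noise, which is one intuition for why RW benchmarks are so hard to beat on exchange rates.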
274
Estimation of sorghum supply elasticity in South Africa
Mojapelo, Motsipiri Calvin, January 2019 (has links)
Thesis (M.Sc. Agriculture (Agricultural Economics)) -- University of Limpopo, 2019 / Studies have indicated that the area planted to sorghum in South Africa has been decreasing over the past decades. As a result, the country imports large quantities of grain sorghum. This study was undertaken because of the variability of sorghum production in South Africa. The objectives of this study were to estimate the elasticity of sorghum production with respect to changes in price and non-price factors, and to estimate the short-run and long-run price elasticity of sorghum supply. The study used time series data spanning from 1998 to 2016, obtained from the Abstracts of Agricultural Statistics and verified by South African Grain Information Services. A Vector Error Correction Model (VECM) was employed to address both objectives. A number of diagnostic tests were performed to ensure that the study did not produce spurious regression results.
This study estimated sorghum supply elasticity using two dependent variables, the area and yield response functions, as models one and two respectively. The results showed that the area response function was a robust model, as most of its variables were significant, responsive and elastic. The price of maize, a crop competing with sorghum, negatively influenced area allocation; the remaining variables positively influenced area allocation in the long run. In this model, all variables were statistically significant at the 10% and 1% levels in the short and long run respectively.
In the yield function, most of the variables were insignificant, unresponsive and inelastic; this model was therefore not found to be robust and was not adopted. Thus, it was concluded that sorghum output in South Africa is relatively insensitive to changes in price and non-price factors.
The findings further indicated that the error correction term was -1.55 for the area response function and -1.30 for the yield response function, indicating that both models were able to revert to equilibrium. It was therefore concluded that the area response function was the more robust of the two, and that sorghum production was more responsive through area allocation than through yield.
Based on the findings, the study recommends that, among other ways to enhance sorghum output, producers use improved varieties or hybrids, as this would encourage the allocation of more land to sorghum production following a price change.
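Why error correction terms of -1.55 and -1.30 still imply reversion to equilibrium can be seen in a minimal sketch. This is a simplified one-variable error correction recursion, not the study's VECM: a deviation d from equilibrium evolves as d_{t+1} = (1 + alpha) * d_t, which converges (with overshooting oscillation) whenever -2 < alpha < 0.

```python
def deviations(alpha, d0=1.0, steps=6):
    # Simplified ECM adjustment: each period, a fraction alpha of the
    # current deviation from equilibrium is corrected.
    out = [d0]
    for _ in range(steps):
        out.append((1 + alpha) * out[-1])
    return out

for alpha in (-1.55, -1.30):
    path = deviations(alpha)
    # The deviation shrinks in magnitude even though its sign alternates.
    assert abs(path[-1]) < abs(path[0])
    print(alpha, [round(d, 3) for d in path])
```

With alpha = -1.55 the deviation is multiplied by -0.55 each period, and with alpha = -1.30 by -0.30, so both paths oscillate toward zero, which matches the study's conclusion that the models revert to equilibrium.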
275
Grammatical Error Correction for Learners of Swedish as a Second Language
Nyberg, Martina, January 2022 (has links)
Grammatical Error Correction (GEC) refers to the task of automatically correcting errors in written text, typically text written by learners of a second language. The work in this thesis implements and evaluates two methods for Grammatical Error Correction in Swedish. In addition, the proposed methods are compared to an existing rule-based system. Previous research on GEC for Swedish is limited and has not yet utilized the potential of neural networks. The first method is based on a neural machine translation approach, training a Transformer model to translate erroneous text into a corrected version. A parallel dataset containing artificially generated errors is created to train the model. The second method utilizes a Swedish version of the pre-trained language model BERT to estimate the likelihood of potential corrections in an erroneous text. Employing the SweLL gold corpus, which consists of essays written by learners of Swedish, the proposed methods are evaluated using GLEU and through a manual evaluation based on the types of errors and corresponding corrections found in the essays. The results show that the two methods correct approximately the same number of errors, while differing in which error types they handle best. Specifically, the translation approach covers a wider range of error types and is superior for syntactic and punctuation errors. In contrast, the language model approach yields consistently higher recall and outperforms the translation approach on lexical and morphological errors. To improve the results, future work could investigate the effect of increased model size and training data, as well as the potential of combining the two methods.
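The language-model ranking idea behind the second method can be sketched with a toy count-based model standing in for the pretrained Swedish BERT. The tiny English corpus and the candidate corrections below are invented purely for illustration.

```python
import math
from collections import Counter

# Toy "language model": smoothed bigram counts over a tiny corpus.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat sat on a mat",
]
uni = Counter()
bi = Counter()
for s in corpus:
    ws = s.split()
    uni.update(ws)
    bi.update(zip(ws, ws[1:]))

def score(sentence):
    # Add-one smoothed bigram log-probability; a real system would use
    # masked-LM scores from BERT instead of these counts.
    words = sentence.split()
    total = 0.0
    for w1, w2 in zip(words, words[1:]):
        total += math.log((bi[(w1, w2)] + 1) / (uni[w1] + len(uni)))
    return total

# Rank candidate corrections of an erroneous sentence by model likelihood.
candidates = ["the cat sat on the mat", "the cat sit on the mat"]
best = max(candidates, key=score)
print(best)
```

The grammatical candidate wins because its bigrams are attested in the corpus, which is the same selection principle the BERT-based method applies with a far stronger likelihood estimate.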
276
Anhang zur Dissertation: Zur Rolle von Fehlerkorrekturen im L2-Schreiberwerb / Appendix to the dissertation: On the role of error correction in L2 writing acquisition
Klemm, Albrecht, 12 January 2018 (has links)
Schriftliche Fehlerkorrekturen sind fester Bestandteil des fremdsprachlichen Unterrichts. Sie sollen Fremdsprachenlernende dazu anregen, über ihre Fehler und deren Ursachen zu reflektieren und dadurch fehlerhafte Lernhypothesen über die Zielsprache zu revidieren. Obwohl in einschlägigen didaktischen Abhandlungen insbesondere indirekten Korrekturen mit Korrekturzeichen dieses Potential zugeschrieben wird, liegen für den Bereich Deutsch als Fremd- und Zweitsprache (DaF/DaZ) kaum empirische Studien zum Thema vor.
Ausgehend von dieser Forschungslücke wird in der vorliegenden Studie der Frage nachgegangen, welches erwerbsfördernde Potential schriftliche Grammatikkorrekturen bei DaF-Lernenden auf Mittelstufenniveau entfalten können. Basierend auf den Ergebnissen einer detaillierten Schreibprozessanalyse wird im Rahmen von Einzelfallanalysen aufgezeigt, welche Faktoren den Nutzen der Fehlerkorrekturen beeinflussen. Anschließend werden konkrete Vorschläge unterbreitet, wie im Schreibunterricht ausgehend vom individuellen Erwerbsstand der DaF-Lernenden Feedback gegeben werden kann. / Written error correction is an integral part of foreign-language teaching. It is intended to prompt language learners to reflect on their errors and their causes and thereby to revise faulty hypotheses about the target language. Although the relevant didactic literature attributes this potential especially to indirect corrections using correction codes, hardly any empirical studies on the topic exist for German as a foreign and second language (DaF/DaZ). Starting from this research gap, the present study investigates what acquisition-promoting potential written grammar corrections can develop for intermediate-level learners of German as a foreign language. Based on the results of a detailed analysis of the writing process, individual case studies show which factors influence the usefulness of error corrections. Finally, concrete suggestions are made as to how feedback can be given in writing instruction based on the individual stage of acquisition of the DaF learners.
277
Physical Information Theoretic Bounds on Energy Costs for Error Correction
Ganesh, Natesh, 01 January 2011 (links) (PDF)
With diminishing returns in the performance scaling of traditional transistor devices, there is a growing need to understand and improve potential replacement technologies. Sufficient reliability has not been established in these devices, and additional redundancy through fault tolerance and error correction codes is necessary. This additional redundancy comes at a price in energy and area, and it is of utmost importance to determine this energy cost and relate it to the increased reliability offered by error correction codes. In this thesis, we have determined a lower bound for the energy dissipation associated with error correction using a linear (n,k) block code. The bound obtained is implementation-independent, is derived from fundamental considerations, and allows for quantum effects in the channel and decoder. We have also developed information-theoretic efficacy measures that quantify the performance of the error correction and relate it to the corresponding energy cost.
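A Landauer-style baseline gives the flavor of such a bound. This is a common textbook calculation, not the thesis's exact implementation-independent bound: decoding an (n,k) code erases the information in the error pattern, so at least kT*ln(2) joules per erased bit must be dissipated; the channel parameters below are assumptions for the example.

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0          # room temperature, K

def h2(p):
    # Binary entropy in bits.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def min_energy(n, p, T=T):
    # Entropy of an i.i.d. error pattern over n bits with bit error rate p,
    # converted to a Landauer minimum dissipation of kT*ln(2) per bit erased.
    erased_bits = n * h2(p)
    return erased_bits * kB * T * math.log(2)

e = min_energy(7, 0.01)  # e.g. a length-7 code at 1% bit error rate
print(e)                 # on the order of 1e-21 J at room temperature
```

The point of the calculation is the one the thesis makes rigorously: the energy floor grows with the redundancy and with the channel noise, so reliability bought through coding is never thermodynamically free.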
278
The Effects of Manageable Corrective Feedback on ESL Writing Accuracy
Hartshorn, K James, 18 July 2008 (links) (PDF)
The purpose of this study was to test the effect of one approach to writing pedagogy on second-language (L2) writing accuracy. This study used two groups of L2 writers who were learning English as a second language: a control group (n = 19) who were taught with traditional process writing methods and a treatment group (n = 28) who were taught with an innovative approach to L2 writing pedagogy. The methodology for the treatment group was designed to improve L2 writing accuracy by raising the linguistic awareness of the learners through error correction. Central to the instructional methodology were four essential characteristics of error correction: feedback that was manageable, meaningful, timely, and constant. Core components of the treatment included having students write a 10-minute composition each day, and having teachers provide students with coded feedback on their daily writing, help students to use a variety of resources to track their progress, and encourage students to apply what they learned in subsequent writing. Fourteen repeated-measures tests using a mixed-model ANOVA suggest that the treatment improved mechanical accuracy, lexical accuracy, and certain categories of grammatical accuracy. Though the treatment had a negligible effect on rhetorical competence and writing fluency, findings suggest a small to moderate effect favoring the control group in the development of writing complexity. These findings seem to contradict claims from researchers such as Truscott (2007) who have maintained that error correction is not helpful for improving the grammatical accuracy of L2 writing. The positive results of this study are largely attributed to the innovative methodology for teaching and learning L2 writing, which emphasizes linguistic accuracy rather than restricting instruction and learning to other dimensions of writing such as rhetorical competence. The limitations and pedagogical implications of this study are also examined.
279
Modelling, analysis and experimentation of a simple feedback scheme for error correction control
Flärdh, Oscar, January 2007 (has links)
Data networks are an important part of an increasing number of applications with real-time and reliability requirements. To meet these demands, a variety of approaches have been proposed. Forward error correction, which adds redundancy to the communicated data, is one of them. However, the redundancy occupies communication bandwidth, so it is desirable to control the amount of redundancy in order to achieve high reliability without adding excessive communication delay. The main contribution of the thesis is to formulate the problem of adjusting the redundancy in a control framework, which enables the dynamic properties of error correction control to be analyzed using control theory. The trade-off between application quality and resource usage is captured by introducing an optimal control problem, and its dependence on knowledge of the network state at the transmission side is discussed. An error correction controller that optimizes the amount of redundancy without relying on network state information is presented. This is achieved by utilizing an extremum seeking control algorithm to optimize the cost function. Models of varying complexity of the resulting feedback system are presented and analyzed, and conditions for convergence are given. Multiple-input describing function analysis is used to examine periodic solutions. The results are illustrated through computer simulations and experiments on a wireless sensor network. / QC 20101105
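Perturbation-based extremum seeking, the mechanism the controller above relies on, can be sketched as follows. The cost function, dither amplitude, and gains are illustrative assumptions, not the thesis's tuning: the loop probes an unknown cost with a sinusoidal dither, demodulates the measured cost to estimate the local gradient, and descends on that estimate without any model of the network.

```python
import math

def cost(r):
    # Unknown to the controller: the redundancy/quality trade-off, with an
    # optimum at r = 0.4 chosen purely for illustration.
    return (r - 0.4) ** 2

def extremum_seek(r0=0.9, a=0.05, w=20 * math.pi, k=0.5, dt=1e-3, T=50.0):
    # Probe with dither a*sin(wt); multiplying the measured cost by sin(wt)
    # and averaging recovers the gradient of cost at r (classic ES loop).
    r, t = r0, 0.0
    while t < T:
        j = cost(r + a * math.sin(w * t))
        grad_est = (2.0 / a) * j * math.sin(w * t)
        r -= k * grad_est * dt
        t += dt
    return r

r_final = extremum_seek()
print(round(r_final, 2))  # settles near the optimum at 0.4
```

The averaged dynamics of this loop are approximately dr/dt = -k * cost'(r), which is what makes the scheme amenable to the convergence and describing-function analysis described in the abstract.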
280
Coherence protection by random coding
Brion, E.; Akulin, V.M.; Dumer, I.; Harel, Gil; Kurizki, G., January 2005 (has links)
We show that the multidimensional Zeno effect combined with non-holonomic control allows one to efficiently protect quantum systems from decoherence by a method similar to classical random coding. The method is applicable to arbitrary error-inducing Hamiltonians and general quantum systems. The quantum encoding approaches the Hamming upper bound as the dimension increases. The applicability of the method is demonstrated with a seven-qubit toy computer.
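The classical sphere-packing (Hamming) bound referenced above is easy to compute; this is the classical benchmark only, as the paper's quantum setting differs. With n bits and up to t correctable errors, at most 2**n divided by the volume of a radius-t Hamming ball codewords can fit.

```python
from math import comb, log2

def hamming_bound(n, t):
    # Sphere-packing bound: each codeword claims a disjoint Hamming ball of
    # radius t, so the balls cannot exceed the 2**n points of the space.
    return 2 ** n // sum(comb(n, i) for i in range(t + 1))

# The perfect [7,4] Hamming code meets the bound exactly:
assert hamming_bound(7, 1) == 16
print(log2(hamming_bound(7, 1)))  # 4.0 information bits
```

Codes whose rate approaches this bound waste almost no redundancy, which is the sense in which the paper's encoding becomes efficient as the dimension grows.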