431 |
L2 English spelling error analysis : An investigation of English spelling errors made by Swedish senior high school students / Felstavningsanalys i engelska som andraspråk : En undersökning av stavfel i engelska gjorda av svenska gymnasieelever. Kusuran, Amir. January 2017 (has links)
Proper spelling is important for efficient communication between people with different first languages in the 21st century. While Swedish functions as an intranational language within Sweden, it sees little to no use outside of Scandinavia. English fills the role of a second language that all Swedish students must learn, yet more focus appears to be given to grammar than to spelling. Spelling is important, and knowing the kinds of spelling errors Swedish learners of English tend to make can help educators improve the spelling proficiency of their students. The aim of this study is to investigate the spelling errors made by senior high school students in Sweden by analyzing a collection of essays written by students and gathered in the Uppsala Learner English Corpus (ULEC). The results of this study show that spelling proficiency nearly doubled for students in their third year of senior high school compared to their first year, yet the distribution of spelling errors remained the same. Additionally, some particular sounds that appear to be especially problematic for Swedish spellers were identified, such as /ə/, /l/, /s/ and /k/. / Correct spelling is important for effective communication between people with different mother tongues in the twenty-first century. While Swedish functions as a language between people within Sweden, it sees little to no use outside Scandinavia. English fills the role of a second language that all Swedish pupils must learn, yet more focus is placed on grammar than on spelling. Spelling is important, and knowing which types of spelling errors Swedish pupils tend to make in English can help teachers improve their pupils' spelling proficiency. The aim of this study is to investigate the English misspellings of Swedish senior high school students by analysing a collection of essays written by students and gathered in the Uppsala Learner English Corpus (ULEC). The results of this study show that the students' spelling proficiency had nearly doubled by their third year of senior high school compared with their first, but that the distribution of spelling errors remained the same. In addition, certain sounds were identified that appear to be particularly problematic for Swedish pupils to spell, such as /ə/, /l/, /s/ and /k/.
|
432 |
Surface Conductance of Five Different Crops Based on 10 Years of Eddy-Covariance Measurements. Spank, Uwe; Köstner, Barbara; Moderow, Uta; Grünwald, Thomas; Bernhofer, Christian. 16 January 2017 (has links)
The Penman-Monteith (PM) equation is a state-of-the-art modelling approach to simulate evapotranspiration (ET) at site and local scale. However, its practical application is often restricted by the availability and quality of required parameters. One of these parameters is the canopy conductance. Long-term measurements of evapotranspiration by the eddy-covariance method provide an improved data basis to determine this parameter by inverse modelling. Because this approach may also include evaporation from the soil, not only the ‘actual’ canopy conductance but the whole surface conductance (gc) is addressed. Two full cycles of crop rotation with five different crop types (winter barley, winter rapeseed, winter wheat, silage maize, and spring barley) have been continuously monitored for 10 years. These data form the basis for this study. As estimates of gc are obtained on the basis of measurements, we investigated the impact of measurement uncertainties on the obtained values of gc. Two aspects were inspected in more detail. Firstly, the effect of the energy balance closure gap (EBCG) on the obtained values of gc was analysed. Secondly, the common hydrological practice of using vegetation height (hc) to determine the period of highest plant activity (i.e., times with maximum gc concerning CO2-exchange and transpiration) was critically reviewed. The results showed that hc and gc only agree at the beginning of the growing season and increasingly differ during the rest of the growing season. Thus, the utilisation of hc as a proxy to assess maximum gc (gc,max) can lead to inaccurate estimates of gc,max, which in turn can cause serious shortcomings in simulated ET. The light use efficiency (LUE) is superior to hc as a proxy to determine periods with maximum gc. Based on this proxy, crop-specific estimates of gc,max could be determined for the first (and the second) cycle of crop rotation: winter barley, 19.2 mm s−1 (16.0 mm s−1); winter rapeseed, 12.3 mm s−1 (13.1 mm s−1); winter wheat, 16.5 mm s−1 (11.2 mm s−1); silage maize, 7.4 mm s−1 (8.5 mm s−1); and spring barley, 7.0 mm s−1 (6.2 mm s−1).
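The inverse-modelling step described above can be illustrated with a short calculation. The sketch below, a simplified illustration rather than the authors' processing chain, rearranges the Penman-Monteith equation to obtain surface conductance from a measured latent heat flux; the constant air density, the FAO-type saturation-curve formula, the function name, and the example input values are assumptions made here for illustration only.

```python
import numpy as np

def surface_conductance(LE, Rn, G, Ta, VPD, ga, pressure=101.3,
                        rho_air=1.2, cp=1004.0):
    """Invert the Penman-Monteith equation for bulk surface conductance.

    LE       latent heat flux from eddy covariance, W m-2
    Rn, G    net radiation and soil heat flux, W m-2
    Ta       air temperature, deg C
    VPD      vapour pressure deficit, kPa
    ga       aerodynamic conductance, m s-1
    pressure air pressure, kPa
    Returns surface conductance gc in mm s-1.
    """
    lam = 2.45e6                                    # latent heat of vaporisation, J kg-1
    gamma = cp * pressure / (0.622 * lam)           # psychrometric constant, kPa K-1
    es = 0.6108 * np.exp(17.27 * Ta / (Ta + 237.3)) # saturation vapour pressure, kPa
    delta = 4098.0 * es / (Ta + 237.3) ** 2         # slope of saturation curve, kPa K-1

    ra = 1.0 / ga                                   # aerodynamic resistance, s m-1
    # Rearranged Penman-Monteith: solve for the surface resistance rs (s m-1)
    rs = (ra * (delta * (Rn - G) - LE * (delta + gamma))
          + rho_air * cp * VPD) / (gamma * LE)
    return 1000.0 / rs                              # surface conductance, mm s-1

# Example midday value over a crop (illustrative numbers only): about 13-14 mm s-1
print(surface_conductance(LE=300.0, Rn=500.0, G=50.0, Ta=20.0, VPD=1.2, ga=0.05))
```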
|
433 |
Análisis del español de los estudiantes francófonos de ELE : el caso del sistema preposicional. Maloof Avendaño, César Enrique. 12 1900 (has links)
This research falls within the field of applied linguistics in language learning and focuses on the use of prepositions by French-speaking learners of Spanish as a foreign language (ELE). More specifically, the work aimed to characterize the use of the prepositional system at different stages of learning and to inquire into the processes underlying the documented uses of prepositions. The raw material for this research consists of a corpus of written productions composed by participants whose communicative competence corresponds to level A1, A2, B1 or B2 on the scale proposed by the Common European Framework of Reference for Languages. All participants had French as their mother tongue or dominant language. The data obtained from the corpus were approached from the perspective of the paradigm known as performance analysis. This work describes both normative and non-normative uses of prepositions with lexical content (spatial, temporal or notional) and of those with a primarily grammatical value. The results obtained suggest that the latter type of preposition poses a greater challenge for learners, an effect that tends to persist even at more advanced levels. Regarding the processes underlying the uses of prepositions, it was observed how transfer from the L1 facilitates the normative use of certain prepositions when the prepositional marks and notions expressed in French and in Spanish converge. In contrast, when the prepositional marks in the two languages coincide but the values they express do not, an increase in inappropriate uses of the prepositions was identified, due in part to a process of negative transfer or interference from the L1. As we will see in the course of this work, this process of interference or negative transfer from the L1 often interacts with interference from another L2 (English) and with intralinguistic processes, such as overgeneralization of the rules of the target language (Spanish). Finally, this thesis points out that the linguistic processes mentioned above are reinforced by a pedagogical process, in other words, by the way the teaching material used by the participants approaches the teaching of the Spanish prepositional system. Keywords: Spanish as a foreign language (ELE), applied linguistics, interlanguage, interlanguage analysis, error analysis, prepositions, French-speaking learners. / This thesis, within the field of applied linguistics, focuses on the usage of prepositions by French-speaking learners of Spanish as a foreign language (ELE). Particularly, this study aims to characterize the use of the Spanish prepositional system throughout different stages of learning and to shed light on the processes that underlie the observed phenomena. The source material for this research came from a corpus composed of texts written by four groups of participants at levels A1 through B2, as proposed by the Common European Framework of Reference for Languages scale. All participants' native or dominant language was French. The data obtained through the corpus were approached from the perspective of the L2 language research paradigm known as performance analysis.
This study describes both normative and non-normative uses of prepositions of lexical content (spatial, temporal or notional), as well as those that carry primarily grammatical value. The results obtained suggest that the latter type of prepositions posed a greater challenge for the learners, which proved to be an area of difficulty that tended to persist, even at more advanced levels. With regard to the processes underlying the usage of prepositions, our findings support the idea that language transfer from the participants' L1 facilitated the appropriate use of certain prepositions in those cases in which the prepositional marks and the notions they express converged in French and Spanish. In contrast, an increase in inappropriate uses of the prepositions was identified when the prepositional marks in both languages coincided, but not the values they expressed. In part, this was due to a process of negative transfer from the students' L1. It also became apparent that this process of interference from the L1 often interacted with interference from another L2 (English) and with intralinguistic processes, such as the overgeneralization of rules pertaining to the target language (Spanish). Last but not least, this research also found evidence that the aforementioned linguistic mechanisms were reinforced by instruction, that is to say, by the way in which the textbook used by the students approached the teaching of the Spanish prepositional system. Keywords: Spanish as a Foreign Language (ELE), Applied linguistics, Interlanguage, Interlanguage Analysis, Performance Analysis, Error Analysis, Prepositions, French-speaking learners. / The present research, framed within applied linguistics in language learning, focuses on the use of prepositions by a group of French-speaking students of Spanish as a foreign language (ELE). Specifically, the work set out to characterize the use of the prepositional system at different stages of learning and to investigate the processes underlying the documented prepositional uses. The raw material for carrying out this research consists of a corpus of written productions by participants with communicative competence levels A1, A2, B1 and B2 on the scale proposed by the Common European Framework of Reference for Languages. All of the participants had French as their mother tongue or dominant language. The data obtained through the corpus were approached from the perspective of the paradigm known as performance analysis. This work describes both normative and non-normative uses of prepositions with lexical content (spatial, temporal or notional) and of those that carry a primarily grammatical value. The results obtained suggest that the latter type of preposition poses a greater challenge for learners and shows a tendency to persist at more advanced levels. Regarding the processes underlying prepositional uses, it was observed how transfer from the L1 facilitated the use of certain prepositions in accordance with the norm in those cases in which the prepositional marks and the notions they express converge in French and Spanish.
In contrast, when the prepositional marks in both languages coincide but the values they express do not, an increase in inappropriate uses of the prepositions was identified, due in part to a process of negative transfer or interference from the L1. As we will see over the course of the work, this process of interference or negative transfer from the L1 often interacts with interference from another L2 (English) and with intralinguistic processes, such as overgeneralization of rules of the target language (Spanish). Finally, the thesis highlights that the aforementioned linguistic processes are reinforced through a process of instruction, in other words, by the way the teaching of the Spanish prepositional system is approached in the teaching material used by the participants. Keywords: Spanish as a foreign language (ELE), applied linguistics, interlanguage, interlanguage analysis, performance analysis, error analysis, prepositions, French-speaking learners.
|
434 |
Design, Analysis, and Applications of Approximate Arithmetic Modules. Ullah, Salim. 06 April 2022 (has links)
From the initial computing machines, Colossus of 1943 and ENIAC of 1945, to modern high-performance data centers and the Internet of Things (IoT), four design goals, i.e., high performance, energy efficiency, resource utilization, and ease of programmability, have remained a beacon of development for the computing industry. During this period, the computing industry has exploited the advantages of technology scaling and microarchitectural enhancements to achieve these goals. However, with the end of Dennard scaling, these techniques offer diminishing energy and performance advantages. Therefore, it is necessary to explore alternative techniques for satisfying the computational and energy requirements of modern applications. Towards this end, one promising technique is analyzing and relaxing the strict notion of correctness in various layers of the computation stack. Most modern applications across the computing spectrum, from data centers to IoT devices, interact with and analyze real-world data and take decisions accordingly. These applications are broadly classified as Recognition, Mining, and Synthesis (RMS). Instead of producing a single golden answer, these applications produce several feasible answers. They possess an inherent error resilience to the inexactness of the processed data and the corresponding operations. Utilizing this inherent error resilience, the paradigm of Approximate Computing relaxes the strict notion of computation correctness to realize high-performance and energy-efficient systems with acceptable quality outputs.
Prior works on circuit-level approximations have mainly focused on Application-Specific Integrated Circuits (ASICs). However, ASIC-based solutions suffer from long time-to-market and high-cost development cycles. These limitations of ASICs can be overcome by utilizing the reconfigurable nature of Field Programmable Gate Arrays (FPGAs). However, due to architectural differences between ASICs and FPGAs, applying ASIC-based approximation techniques to FPGA-based systems does not result in proportional performance and energy gains. Therefore, to exploit the principles of approximate computing in FPGA-based hardware accelerators for error-resilient applications, FPGA-optimized approximation techniques are required. Further, most state-of-the-art approximate arithmetic operators do not have a generic approximation methodology to implement new approximate designs for an application's changing accuracy and performance requirements. These works also lack a methodology where a machine learning model can be used to correlate an approximate operator with its impact on the output quality of an application. This thesis focuses on these research challenges by designing and exploring FPGA-optimized, logic-based approximate arithmetic operators. As multiplication is one of the most computationally complex and most frequently used arithmetic operations in modern applications, such as Artificial Neural Networks (ANNs), we have therefore considered it for most of the proposed approximation techniques in this thesis.
The primary focus of the work is to provide a framework for generating FPGA-optimized approximate arithmetic operators and efficient techniques to explore approximate operators for implementing hardware accelerators for error-resilient applications.
Towards this end, we first present various designs of resource-optimized, high-performance, and energy-efficient accurate multipliers. Although modern FPGAs host high-performance DSP blocks to perform multiplication and other arithmetic operations, our analysis and results show that the complementary approach of logic-based, resource-efficient, high-performance multipliers is necessary for implementing high-performance accelerators. Due to the differences in the type of data processed by various applications, the thesis presents individual designs for unsigned, signed, and constant multipliers. Compared to the multiplier IPs provided by the FPGA synthesis tool, our proposed designs provide significant performance gains. We then explore the designed accurate multipliers and provide a library of approximate unsigned/signed multipliers. The proposed approximations target reductions in the total utilized resources, critical path delay, and energy consumption of the multipliers. We have explored various statistical error metrics to characterize the approximation-induced accuracy degradation of the approximate multipliers. We have also utilized the designed multipliers in various error-resilient applications to evaluate their impact on applications' output quality and performance.
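As a concrete illustration of the statistical error characterization mentioned above, the sketch below exhaustively compares a toy approximate 8 × 8 multiplier against exact multiplication. The truncation-based approximation and the metric definitions (error rate, mean/maximum absolute error, mean relative error distance) are generic textbook choices assumed here for illustration; they are not necessarily the specific designs or metrics of the thesis.

```python
import itertools
import numpy as np

def approx_mul(a, b, cut=4):
    """Toy approximation: drop the 'cut' least-significant result bits."""
    return ((a * b) >> cut) << cut     # placeholder for a real approximate circuit model

def error_metrics(bits=8, cut=4):
    """Exhaustive error characterization over all 2^(2*bits) input pairs."""
    errs, rel_errs, wrong = [], [], 0
    for a, b in itertools.product(range(2 ** bits), repeat=2):
        exact = a * b
        e = abs(exact - approx_mul(a, b, cut))
        errs.append(e)
        rel_errs.append(e / exact if exact else 0.0)
        wrong += (e != 0)
    n = len(errs)
    return {
        "error_rate": wrong / n,                     # fraction of inputs with any error
        "mean_abs_error": float(np.mean(errs)),      # mean error distance
        "max_abs_error": int(np.max(errs)),          # worst-case error
        "mean_rel_error": float(np.mean(rel_errs)),  # mean relative error distance (MRED)
    }

print(error_metrics(bits=8, cut=4))
```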
Based on our analysis of the designed approximate multipliers, we identify the need for a framework to design application-specific approximate arithmetic operators. An application-specific approximate arithmetic operator is intended to implement only the logic needed to satisfy the application's overall output accuracy and performance constraints.
Towards this end, we present a generic design methodology for implementing FPGA-based application-specific approximate arithmetic operators from their accurate implementations according to the applications' accuracy and performance requirements. In this regard, we utilize various machine learning models to identify feasible approximate arithmetic configurations for various applications. We also utilize different machine learning models and optimization techniques to efficiently explore the large design space of individual operators and their utilization in various applications. In this thesis, we have used the proposed methodology to design approximate adders and multipliers.
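A minimal sketch of such a design-space exploration loop is given below. It randomly samples configurations of a hypothetical approximate operator, scores each with two stand-in objectives (an error proxy and a resource proxy), and keeps the Pareto-optimal set. The configuration encoding, the objective functions, and the plain random-search strategy are simplifications assumed for illustration, standing in for the Bayesian-optimization, evolutionary, and machine-learning-guided methods discussed in the thesis.

```python
import random

def evaluate(config):
    """Stand-in objectives for one operator configuration (a bit-mask of
    disabled partial-product columns); lower is better for both."""
    error_proxy = sum(2 ** i for i, d in enumerate(config) if d)   # crude error weight
    resource_proxy = len(config) - sum(config)                     # remaining logic
    return error_proxy, resource_proxy

def pareto_front(points):
    """Keep configurations not dominated in both objectives."""
    front = []
    for c, (e, r) in points:
        dominated = any(e2 <= e and r2 <= r and (e2, r2) != (e, r)
                        for _, (e2, r2) in points)
        if not dominated:
            front.append((c, (e, r)))
    return front

def random_search(n_bits=8, samples=200, seed=0):
    random.seed(seed)
    evaluated = [(cfg, evaluate(cfg)) for cfg in
                 (tuple(random.randint(0, 1) for _ in range(n_bits))
                  for _ in range(samples))]
    return pareto_front(evaluated)

for config, (err, res) in sorted(random_search(), key=lambda x: x[1]):
    print(config, "error proxy:", err, "resource proxy:", res)
```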
This thesis also explores other layers of the computation stack (cross-layer) for possible approximations to satisfy an application's accuracy and performance requirements. Towards this end, we first present a low bit-width and highly accurate quantization scheme for pre-trained Deep Neural Networks (DNNs). The proposed quantization scheme does not require re-training (fine-tuning the parameters) after quantization. We also present a resource-efficient FPGA-based multiplier that utilizes our proposed quantization scheme. Finally, we present a framework to allow the intelligent exploration and highly accurate identification of the feasible design points in the large design space enabled by cross-layer approximations. The proposed framework utilizes a novel Polynomial Regression (PR)-based method to model approximate arithmetic operators. The PR-based representation enables machine learning models to better correlate an approximate operator's coefficients with their impact on an application's output quality. (A brief, hypothetical sketch of such a regression surrogate follows the table of contents below.)
1. Introduction
1.1 Inherent Error Resilience of Applications
1.2 Approximate Computing Paradigm
1.2.1 Software Layer Approximation
1.2.2 Architecture Layer Approximation
1.2.3 Circuit Layer Approximation
1.3 Problem Statement
1.4 Focus of the Thesis
1.5 Key Contributions and Thesis Overview
2. Preliminaries
2.1 Xilinx FPGA Slice Structure
2.2 Multiplication Algorithms
2.2.1 Baugh-Wooley’s Multiplication Algorithm
2.2.2 Booth’s Multiplication Algorithm
2.2.3 Sign Extension for Booth’s Multiplier
2.3 Statistical Error Metrics
2.4 Design Space Exploration and Optimization Techniques
2.4.1 Genetic Algorithm
2.4.2 Bayesian Optimization
2.5 Artificial Neural Networks
3. Accurate Multipliers
3.1 Introduction
3.2 Related Work
3.3 Unsigned Multiplier Architecture
3.4 Motivation for Signed Multipliers
3.5 Baugh-Wooley’s Multiplier
3.6 Booth’s Algorithm-based Signed Multipliers
3.6.1 Booth-Mult Design
3.6.2 Booth-Opt Design
3.6.3 Booth-Par Design
3.7 Constant Multipliers
3.8 Results and Discussion
3.8.1 Experimental Setup and Tool Flow
3.8.2 Performance comparison of the proposed accurate unsigned multiplier
3.8.3 Performance comparison of the proposed accurate signed multiplier with the state-of-the-art accurate multipliers
3.8.4 Performance comparison of the proposed constant multiplier with the state-of-the-art accurate multipliers
3.9 Conclusion
4. Approximate Multipliers
4.1 Introduction
4.2 Related Work
4.3 Unsigned Approximate Multipliers
4.3.1 Approximate 4 × 4 Multiplier (Approx-1)
4.3.2 Approximate 4 × 4 Multiplier (Approx-2)
4.3.3 Approximate 4 × 4 Multiplier (Approx-3)
4.4 Designing Higher Order Approximate Unsigned Multipliers
4.4.1 Accurate Adders for Implementing 8 × 8 Approximate Multipliers from 4 × 4 Approximate Multipliers
4.4.2 Approximate Adders for Implementing Higher-order Approximate Multipliers
4.5 Approximate Signed Multipliers (Booth-Approx)
4.6 Results and Discussion
4.6.1 Experimental Setup and Tool Flow
4.6.2 Evaluation of the Proposed Approximate Unsigned Multipliers
4.6.3 Evaluation of the Proposed Approximate Signed Multiplier
4.7 Conclusion
5. Designing Application-specific Approximate Operators
5.1 Introduction
5.2 Related Work
5.3 Modeling Approximate Arithmetic Operators
5.3.1 Accurate Multiplier Design
5.3.2 Approximation Methodology
5.3.3 Approximate Adders
5.4 DSE for FPGA-based Approximate Operators Synthesis
5.4.1 DSE using Bayesian Optimization
5.4.2 MOEA-based Optimization
5.4.3 Machine Learning Models for DSE
5.5 Results and Discussion
5.5.1 Experimental Setup and Tool Flow
5.5.2 Accuracy-Performance Analysis of Approximate Adders
5.5.3 Accuracy-Performance Analysis of Approximate Multipliers
5.5.4 AppAxO MBO
5.5.5 ML Modeling
5.5.6 DSE using ML Models
5.5.7 Proposed Approximate Operators
5.6 Conclusion
6. Quantization of Pre-trained Deep Neural Networks
6.1 Introduction
6.2 Related Work
6.2.1 Commonly Used Quantization Techniques
6.3 Proposed Quantization Techniques
6.3.1 L2L: Log_2_Lead Quantization
6.3.2 ALigN: Adaptive Log_2_Lead Quantization
6.3.3 Quantitative Analysis of the Proposed Quantization Schemes
6.3.4 Proposed Quantization Technique-based Multiplier
6.4 Results and Discussion
6.4.1 Experimental Setup and Tool Flow
6.4.2 Image Classification
6.4.3 Semantic Segmentation
6.4.4 Hardware Implementation Results
6.5 Conclusion
7. A Framework for Cross-layer Approximations
7.1 Introduction
7.2 Related Work
7.3 Error-analysis of approximate arithmetic units
7.3.1 Application Independent Error-analysis of Approximate Multipliers
7.3.2 Application Specific Error Analysis
7.4 Accelerator Performance Estimation
7.5 DSE Methodology
7.6 Results and Discussion
7.6.1 Experimental Setup and Tool Flow
7.6.2 Behavioral Analysis
7.6.3 Accelerator Performance Estimation
7.6.4 DSE Performance
7.7 Conclusion
8. Conclusions and Future Work
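Referring back to the cross-layer framework described before the table of contents, the sketch below fits a degree-2 polynomial regression surrogate that maps operator-level error statistics to application-level output quality. The feature choice, the synthetic data, and the scikit-learn pipeline are assumptions for illustration only; they are not the thesis's specific PR formulation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical training data: each row describes one approximate-operator
# configuration by a few error statistics; the target is the measured
# application-level quality (e.g., classification accuracy). Values are synthetic.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 3))                      # e.g., [error rate, MRED, max error]
y = 0.95 - 0.3 * X[:, 1] - 0.1 * X[:, 0] ** 2 + rng.normal(0, 0.005, 40)

# Degree-2 polynomial regression as a surrogate from operator error behaviour
# to application output quality.
surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surrogate.fit(X, y)

candidate = np.array([[0.2, 0.05, 0.6]])                     # a new, unseen configuration
print("predicted output quality:", surrogate.predict(candidate)[0])
```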
|
435 |
Plant error compensation and jerk control for adaptive cruise control systems. Meadows, Alexander David. 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Some problems of complex systems are internal to the system, whereas other problems exist peripherally; two such problems will be explored in this thesis. First is the issue of excessive jerk from instantaneous velocity-demand changes produced by an adaptive cruise control system. Calculations will be demonstrated and an example control solution will be proposed in Chapter 3. Second is the issue of a non-perfect plant, called an uncertain or corrupted plant. In initial control analysis, adaptive cruise control systems are assumed to have a perfect plant; that is to say, the plant always behaves as commanded. In reality, this is seldom the case. Plant corruption may come from a variation in performance through use or misuse, or from noise or imperfections in the sensor signal data. A model for plant corruption is introduced and methods for analysis and compensation are explored in Chapter 4. To facilitate analysis, Chapter 2 discusses the concept of system identification, an order-reduction tool which is employed herein. Adaptive cruise control systems are also discussed, with special emphasis on the situations most likely to employ jerk limitation.
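To make the jerk problem concrete, the sketch below shapes a step change in velocity demand through a simple acceleration- and jerk-limited rate limiter, so the commanded velocity ramps smoothly instead of jumping. The limit values, the sample time, and the rate-limiter structure are illustrative assumptions, not the controller developed in the thesis.

```python
import math

def jerk_limited_profile(v_target, v0=0.0, a_max=2.0, j_max=1.0, dt=0.01, t_end=15.0):
    """Track a step in velocity demand while limiting acceleration (m/s^2) and jerk (m/s^3)."""
    v, a = v0, 0.0
    profile = []
    for step in range(int(t_end / dt)):
        err = v_target - v
        # Largest acceleration from which we can still ramp down to zero (at the
        # jerk limit) exactly when the velocity error reaches zero.
        a_brake = math.sqrt(2.0 * j_max * abs(err))
        a_des = max(-a_max, min(a_max, err / dt))           # acceleration demand
        a_des = max(-a_brake, min(a_brake, a_des))          # anticipate the ramp-down
        da = max(-j_max * dt, min(j_max * dt, a_des - a))   # enforce the jerk limit
        a += da
        v += a * dt
        profile.append((step * dt, v, a))
    return profile

# A 0 -> 10 m/s step in demand now produces a smooth S-shaped velocity command.
for t, v, a in jerk_limited_profile(10.0)[::200]:
    print(f"t={t:5.2f} s  v={v:6.3f} m/s  a={a:5.3f} m/s^2")
```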
|
436 |
On efficient a posteriori error analysis for variational inequalities. Köhler, Karoline Sophie. 14 November 2016 (has links)
Efficient and reliable a posteriori error estimates are a key ingredient for the efficient numerical computation of solutions to variational inequalities by the finite element method. The present work investigates reliable and efficient error estimates for arbitrary finite element methods and three variational inequalities, namely the obstacle problem, the Signorini problem and the Bingham problem in two space dimensions. The error estimates depend on the Lagrange multiplier belonging to the problem, which represents a connection between the variational inequality and the associated linear problem. Efficiency and reliability are shown with respect to a total error, and the error estimates require minimal regularity. The approximation of the exact solution satisfies the Dirichlet boundary conditions, and the approximation of the Lagrange multiplier is non-positive in the case of the obstacle and Signorini problems and has absolute value at most 1 for the Bingham problem. This general approach makes it possible to include inexact discrete solutions, which occur in the context of these inequalities. From the point of view of applications, efficiency and reliability with respect to the error of the primal variable in the energy norm are of great interest. Such estimates depend on the choice of an efficient discrete Lagrange multiplier. In the case of the obstacle and Signorini problems, positive examples are derived for three finite element methods: the conforming Courant method, the non-conforming Crouzeix-Raviart method, and the mixed Raviart-Thomas method of lowest order. Partial results are available in the case of the Bingham problem. Numerical experiments highlight the theoretical results and show efficiency and reliability. The numerical tests suggest that the adaptive algorithm resulting from the estimates converges with optimal convergence rate. / Efficient and reliable a posteriori error estimates are a key ingredient for the efficient numerical computation of solutions to variational inequalities by the finite element method. This thesis studies such reliable and efficient error estimates for arbitrary finite element methods and three representative variational inequalities, namely the obstacle problem, the Signorini problem, and the Bingham problem in two space dimensions. The error estimates rely on a problem-connected Lagrange multiplier, which provides a connection between the variational inequality and the corresponding linear problem. Reliability and efficiency are shown with respect to some total error under minimal regularity assumptions. The approximation to the exact solution satisfies the Dirichlet boundary conditions, and an approximation of the Lagrange multiplier is non-positive in the case of the obstacle and Signorini problems and has an absolute value smaller than 1 for the Bingham flow problem. These general assumptions allow for reliable and efficient a posteriori error analysis even in the presence of inexact solves, which naturally occur in the context of variational inequalities. From the point of view of applications, reliability and efficiency with respect to the error of the primal variable in the energy norm are of great interest. Such estimates depend on the efficient design of a discrete Lagrange multiplier.
Affirmative examples of discrete Lagrange multipliers are presented for the obstacle and Signorini problems and three different first-order finite element methods, namely the conforming Courant, the non-conforming Crouzeix-Raviart, and the mixed Raviart-Thomas FEM. Partial results exist for the Bingham flow problem. Numerical experiments highlight the theoretical results and show efficiency and reliability. The numerical tests suggest that the resulting adaptive algorithms converge with optimal convergence rates.
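For readers unfamiliar with the Lagrange-multiplier view used above, the following is a brief, standard textbook sketch for the obstacle problem; the generic notation, the homogeneous Dirichlet data, and the choice of the Dirichlet energy bilinear form are assumptions for illustration and not necessarily the exact setting of the thesis.

```latex
% Obstacle problem: minimise the energy over the convex set of admissible
% functions lying above an obstacle \chi, with
% a(u,v) = \int_\Omega \nabla u \cdot \nabla v \, dx and F(v) = \int_\Omega f v \, dx.
\[
  u \in K := \{ v \in H^1_0(\Omega) : v \ge \chi \ \text{a.e.} \},
  \qquad
  a(u, v - u) \ge F(v - u) \quad \text{for all } v \in K .
\]
% The Lagrange multiplier links the inequality to a linear problem:
\[
  \Lambda := F - a(u, \cdot) \in H^{-1}(\Omega),
  \qquad
  \Lambda \le 0,
  \qquad
  \langle \Lambda, \, u - \chi \rangle = 0 .
\]
% A discrete, non-positive multiplier \Lambda_h built from the finite element
% solution plays the analogous role in reliable and efficient a posteriori estimates.
```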
|
437 |
Analysis and design of efficient passive components for the millimeter-wave and THz bands. Berenguer Verdú, Antonio José. 29 June 2017 (has links)
This thesis tackles issues of particular interest regarding analysis and design of passive components at the mm-wave and Terahertz (THz) bands. Innovative analysis techniques and modeling of complex structures, design procedures, and practical implementation of advanced passive devices are presented.
The first part of the thesis is dedicated to THz passive components. THz technology currently suffers from the lack of suitable waveguiding structures, since both metals and dielectrics are lossy at THz frequencies. This implies that neither the conventional closed metallic structures used at microwave frequencies nor the dielectric waveguides used in the optical regime are adequate solutions. Among a variety of new proposals, the Single Wire Waveguide (SWW) stands out due to its low attenuation and dispersion. However, this surface waveguide is difficult to excite and radiates strongly at bends. A Dielectric-Coated Single Wire Waveguide (DCSWW) can be used to alleviate these problems, but the advantages of the SWW are lost and new problems arise.
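The claim that metals become problematic at THz frequencies can be illustrated with a back-of-the-envelope calculation. The sketch below evaluates the classical skin depth and surface resistance of copper from 10 GHz to 1 THz; the bulk DC conductivity value and the simple good-conductor model (no Drude dispersion or surface roughness) are simplifying assumptions made here for illustration.

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability, H/m
SIGMA_CU = 5.8e7              # bulk DC conductivity of copper, S/m

def skin_depth(freq_hz, sigma=SIGMA_CU):
    """Classical skin depth of a good conductor, in metres."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * MU0 * sigma))

def surface_resistance(freq_hz, sigma=SIGMA_CU):
    """Surface resistance Rs = 1 / (sigma * skin depth), in ohms per square."""
    return 1.0 / (sigma * skin_depth(freq_hz, sigma))

# Conductor loss grows roughly with sqrt(frequency): Rs rises by an order of
# magnitude between 10 GHz and 1 THz while the skin depth shrinks to tens of nm.
for f in (1e10, 1e11, 1e12):   # 10 GHz, 100 GHz, 1 THz
    print(f"{f/1e9:6.0f} GHz: delta = {skin_depth(f)*1e9:6.1f} nm, "
          f"Rs = {surface_resistance(f)*1e3:6.1f} mOhm/sq")
```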
Until now, the literature has not provided a proper solution to radiation at bends, and a rigorous characterization of these waveguides has been lacking. This thesis provides, for the first time, a complete modal analysis of both waveguides, appropriate for THz frequencies. This analysis is later applied to solve the problem of radiation at bends. Several structures and design procedures to alleviate radiation losses are presented and experimentally validated.
The second part of the thesis is dedicated to mm-wave passive components. When implementing passive components to operate at such small, millimetric wavelengths, ensuring proper metallic contact and alignment between parts is challenging. In addition, dielectric absorption becomes significant at mm-wave frequencies. Consequently, conventional hollow metallic waveguides and planar transmission lines present high attenuation, so new topologies are being considered. Gap Waveguides (GWs), based on a periodic structure that introduces an Electromagnetic Bandgap effect, are very suitable since they do not require metallic contacts and avoid dielectric losses.
However, although GWs have great potential, several issues prevent GW technology from becoming consolidated and universally used. On the one hand, the topological complexity of GWs complicates the design process, since full-wave simulations are time-costly and there is a lack of appropriate analysis methods and suitable synthesis procedures. On the other hand, the benefits of using GWs instead of conventional structures need to be more clearly evidenced with high-performance GW components and proper comparisons with conventional structures. This thesis introduces several efficient analysis methods, models, and synthesis techniques that will allow engineers without a significant background in GWs to straightforwardly implement GW devices. In addition, several high-performance narrow-band filters operating at Ka-band and V-band, as well as a rigorous comparison with rectangular waveguide topology, are presented. / This thesis addresses current problems in the analysis and design of passive components in the millimeter-wave and Terahertz (THz) bands. New analysis techniques and modelling of complex structures, design procedures, and practical implementation of advanced passive devices are presented.
The first part of the thesis is dedicated to THz passive components. Suitable waveguides for THz are currently not available because both metals and dielectrics introduce large losses. Consequently, it is not adequate to scale the closed metallic structures used at microwaves, nor the dielectric waveguides used at optical frequencies. Among a large number of recent proposals, the Single Wire Waveguide (SWW) stands out for its low attenuation and nearly zero dispersion. However, as a surface waveguide, the SWW is difficult to excite and radiates at bends. The use of a dielectric coating, creating the Dielectric-Coated Single Wire Waveguide (DCSWW), alleviates these drawbacks, but the previous advantages are lost and new problems appear.
To date, no adequate solutions have been found for the radiation at bends of the SWW. Moreover, a rigorous characterization of both waveguides is missing. This thesis presents, for the first time, a complete modal analysis of the SWW and DCSWW, appropriate for the THz band. This analysis is later applied to avoid the problem of radiation at bends. Several structures and design procedures are presented and experimentally validated.
The second part of the thesis covers mm-wave passive components. Currently, these components suffer a significant degradation of their response because it is difficult to ensure adequate metallic contact and alignment for operation at such small wavelengths. In addition, dielectric absorption increases notably at these frequencies. Consequently, both hollow metallic waveguides and conventional planar transmission lines present high attenuation, making it necessary to consider alternative topologies. Gap Waveguides (GWs), based on a periodic structure that introduces an Electromagnetic Bandgap effect, are very suitable since they do not require contact between metallic parts and avoid dielectric losses.
Nevertheless, despite the potential of GWs, several barriers prevent the consolidation and universal use of this technology. On the one hand, the complex topology of GWs complicates the design process, given that full-wave simulations are very time-consuming and appropriate analysis and design methods do not currently exist. On the other hand, the benefit of using GWs needs to be demonstrated through high-performance GW devices and adequate comparisons with conventional structures. This thesis presents several efficient analysis methods, models, and design techniques that will allow the synthesis of GW devices without requiring deep knowledge of this technology. Likewise, several high-performance narrow-band filters operating in the Ka and V bands are presented, as well as a rigorous comparison with the rectangular waveguide. / This thesis addresses current problems relating to the analysis and design of passive components in the millimeter-wave and Terahertz bands. New analysis techniques and modelling of complex structures, design procedures, and practical implementation of advanced passive devices are presented.
The first part of the thesis focuses on THz passive components. Suitable waveguides for THz are currently not available because both metals and dielectrics introduce large losses. Consequently, it is not adequate to scale the closed metallic structures used at microwaves, nor the dielectric waveguides used at optical frequencies. Among a large number of recent proposals, the Single Wire Waveguide (SWW) stands out for its low attenuation and nearly zero dispersion. However, as a surface waveguide, the SWW is difficult to excite and radiates at bends. The use of a dielectric coating, creating the Dielectric-Coated Single Wire Waveguide (DCSWW), alleviates these drawbacks, but the previous advantages are lost and new problems appear.
To date, no adequate solutions have been found for the radiation at bends of the SWW. Moreover, a rigorous characterization of both waveguides is missing. This thesis presents, for the first time, a complete modal analysis of the SWW and DCSWW, appropriate for the THz band. This analysis is later applied to avoid the problem of radiation at bends. Several structures and design procedures are presented and experimentally validated.
The second part of the thesis focuses on mm-wave passive components. Currently, these components suffer a significant degradation of their response because it is difficult to ensure adequate metallic contact and alignment for operation at such small wavelengths. In addition, dielectric absorption increases notably at these frequencies. Consequently, both hollow metallic waveguides and conventional planar transmission lines present high attenuation, making it necessary to consider alternative topologies. Gap Waveguides (GWs), based on a periodic structure that introduces an Electromagnetic Bandgap effect, are very suitable since they do not require contact between metallic parts and avoid dielectric losses.
Nevertheless, despite the potential of GWs, several barriers prevent the consolidation and universal use of this technology. On the one hand, the complex topology of GWs complicates the design process, given that full-wave simulations are very time-consuming and appropriate analysis and design methods do not currently exist. On the other hand, the benefit of using GWs needs to be demonstrated through high-performance GW devices and adequate comparisons with conventional structures. This thesis presents several efficient analysis methods, models, and design techniques that will allow the synthesis of GW devices without requiring deep knowledge of this technology. Likewise, several high-performance narrow-band filters operating in the Ka and V bands are presented, as well as a rigorous comparison with the rectangular waveguide. / Berenguer Verdú, AJ. (2017). Analysis and design of efficient passive components for the millimeter-wave and THz bands [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/84004
|
438 |
An analytical perspective on language learning in adult basic education and training programmes. Vaccarino, Franco Angelo. 01 1900 (has links)
The Directorate of Adult Education and Training of the national Department of Education views Adult Basic Education and Training (ABET) not merely as literacy, but as the general conceptual foundation towards lifelong learning and development. This includes knowledge, skills, and attitudes which are needed for social, economic and political participation and transformation. These skills will assist learners in becoming more active participants in their communities and their workplaces, and will contribute towards the development of South Africa.
This study aims to examine whether ABET programmes prepare learners to acquire the language which is needed to achieve this objective. It falls within one of the eight learning areas defined by the National Qualifications Framework (NQF), namely the language, literacy and communication learning area. In order to research the effectiveness of learning within this area, it is important to analyse the interaction which takes place within a classroom; the type of questions both educators and learners ask; the type of errors learners make in the classroom; and how the educators treat these errors. What is also of paramount importance is whether the language skills learnt in the classroom are transferred to outside the classroom.
To examine this, various authors' views on classroom interaction, questions, errors, treatment of errors, and evaluating the effectiveness of learning are presented. Instruments were designed to analyse these aspects within an ABET programme, and include:
• the framework used to undertake the classroom interaction analysis,
• the instrument used to explore the type of questions educators and learners ask in the classroom,
• how an error analysis is used to identify typical learners' errors which occur frequently,
• the methodology used to uncover how educators treat their learners' errors, and
• the various stakeholders' questionnaires which were used to ascertain the effectiveness of learning at an ABET Centre.
The research findings are presented and interpreted in order to provide recommendations for the development of language learning and teaching within the ABET field. The findings also gave rise to recommendations for classroom practices for ABET educators, and particularly the need for educator training and development. Recommendations for curriculum designers of ABET materials are also presented. / Educational Studies / D. Ed. (Philosophy of Education)
|
439 |
Die fehleranalytische Relevanz der prädominanten Spracherwerbshypothesen / Untersuchung des Fehlererklärungspotentials der Kontrastiv-, der Identitäts- und der Interlanguagehypothese auf Grundlage einer Analyse linguistischer Fehlleistungen deutscher Muttersprachler beim Erwerb des Englischen / The error analytical applicability of the predominant language acquisition hypotheses / Comparative examination of the error explanation potential of the contrastive, identity and interlanguage hypotheses based on the analysis of linguistic errors made by native speakers of German when acquiring the English language. Achten, Michael. 24 July 2006 (links)
No description available.
|