191

A comparative analysis of stylistic devices in Shakespeare’s plays, Julius Caesar and Macbeth, and their Xitsonga translations

Baloyi, Mafemani Joseph 06 1900
The study adopts the theory of Descriptive Translation Studies to undertake a comparative analysis of stylistic devices in Shakespeare’s two plays, Julius Caesar and Macbeth, and their Xitsonga translations. It contextualises its research aim and objectives after outlining a sequential account of theory development in the discipline of translation, and arrives at suitable tools for data collection and analysis. Through textual observation and reading notes, the current study argues that researchers and scholars in the discipline converge on the dire need for translation strategies, but diverge in how these strategies are classified and applied in translating and translation. This study maintains that translation strategies should be grouped into explicitation, normalisation and simplification, with specific translation procedures assigned to each. The study demonstrates that the explicitation and normalisation strategies are best suited to dealing with translation constraints at the microtextual level. The sampled excerpts from both plays were examined within an analytical framework based on subjective sameness within Skopos theory. The current study acknowledges that there is no single way of translating a play from one culture to another. It also acknowledges that there appears to be no way the translator can escape the influence of the source text, as an inherent cultural feature that makes it unique. With no sure way of managing stylistic devices as translation constraints, translation as a problem-solving process requires creativity, mastery of the language and style of the author of the source text, and a power drive characterised by the aspects of interlingual psychological balance of power and knowledge power.
These aspects will help the translator to manage any translation brief(s) better and to arrive at a product that is accessible, accurate and acceptable to the target readership. They will also ensure that the translator maintains a balance between the two languages in contact, in order to guard against the domination of one language over the other. The current study concludes that Skopos theory is most influential in anticipating the context of the target readership, a factor that can introduce high risk when assessing the communicability conditions of the translated message. Conversely, when dealing with stylistic devices by employing literal translation as a procedure for simplification, the translator aims only at simplifying the language and making it accessible for the sake of ‘accessibility’, and the result remains a product with communicative inadequacies. The study also concludes by maintaining that translation is not only transcoding, but an activity that calls for the translator’s creativity in identifying and analysing the constraints encountered and deciding on the corresponding translation strategies. / African Languages / D. Litt. et Phil. (African Languages)
192

Pravděpodobnostní neuronové sítě pro speciální úlohy v elektromagnetismu / Probabilistic Neural Networks for Special Tasks in Electromagnetics

Koudelka, Vlastimil January 2014
This thesis deals with behavioural modelling techniques for special tasks in electromagnetics that can be formulated as problems of approximation, classification, probability density estimation or combinatorial optimisation. The investigated methods touch on two fundamental problems of machine learning and combinatorial optimisation: the bias-variance dilemma and NP computational complexity. A Boltzmann machine is proposed in the thesis for the simplification of complex impedance networks. A Bayesian approach to machine learning is adapted for the regularisation of the Parzen window, with the aim of creating a general criterion for regularising probabilistic and regression neural networks.
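The Parzen-window estimator whose regularisation the thesis addresses can be sketched as follows. This is a minimal one-dimensional sketch with a Gaussian window, both illustrative assumptions rather than details taken from the thesis; the window width h is the quantity a regularisation criterion would have to select.

```python
import numpy as np

def parzen_density(x, samples, h):
    """One-dimensional Parzen-window density estimate at point x.

    The Gaussian window is an illustrative choice; h is the window
    width whose selection is the regularisation problem the thesis
    tackles with a Bayesian criterion.
    """
    u = (x - np.asarray(samples, dtype=float)) / h
    kernels = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    # average the kernel contributions and rescale by the window width
    return kernels.mean() / h
```

A small h fits the samples closely but produces a spiky, high-variance estimate; a large h over-smooths. That trade-off is exactly the bias-variance dilemma named in the abstract.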
193

Progressive Meshes / Progressive Meshes

Valachová, Michaela January 2012
This thesis introduces progressive meshes, a representation of graphical data, and its fields of application. The main part of the work is the mathematical representation of progressive meshes and the simplification algorithm that leads to this representation. Examples of changes in the progressive-mesh representation are also part of the thesis. The result is an application that implements the calculation of the progressive-mesh model representation.
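The simplification algorithm the thesis describes is built from repeated edge collapses, each paired with a refinement record (a vertex split) that allows the coarse mesh to be progressively refined back to the original. A minimal sketch, with hypothetical data structures not taken from the thesis:

```python
from dataclasses import dataclass

@dataclass
class VertexSplit:
    """Refinement record that undoes one edge collapse."""
    parent: int      # surviving vertex
    child: int       # vertex removed by the collapse
    position: tuple  # stored position of the removed vertex

def collapse_edge(vertices, faces, u, v):
    """Collapse edge (u, v) into u and return the refinement record.

    vertices: dict mapping vertex id -> position;
    faces: list of vertex-index triples.
    """
    record = VertexSplit(parent=u, child=v, position=vertices.pop(v))
    new_faces = []
    for f in faces:
        g = tuple(u if w == v else w for w in f)
        if len(set(g)) == 3:  # drop triangles degenerated by the collapse
            new_faces.append(g)
    return new_faces, record
```

Replaying the stored VertexSplit records in reverse order reconstructs ever finer approximations, which is what makes the representation "progressive".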
194

Bikubische Interpolation - Didaktische Potenzen des mathematischen Gegenstandes / Bicubic Interpolation - The Didactic Potential of a Mathematical Topic

Kamprath, Neidhart 24 June 2013
The talk shows how a didactically grounded teaching example can be derived from a mathematical-technical subject, and presents the possibilities for its use in the classroom. Interpolation plays an important role in digital image processing, where it serves as the computational method for image resizing. Interpolation is an approximation method in which, for example, a function is computed for points with known coordinates such that it passes through all of these points. Arbitrary intermediate values can then be computed with this function. The number of data points determines the number of polynomial terms required; polynomials are frequently used because of their mathematical properties. Solving the task leads, via a system of linear equations, to the determination of the polynomial's coefficients. The first part of the talk gives an exemplary presentation of bicubic interpolation and its realisation in MathCAD. It is shown how, for an image enlargement, an interpolated intermediate value for a new pixel is computed from the concrete grey values of a digital image. The MathCAD vocabulary used is given and the necessary didactic simplifications are described. The second part explains the use of the topic as a subject of instruction at upper secondary level (Sekundarstufe II) in its interplay between digital image processing, mathematics and computer science (using MathCAD), points out possibilities for thematic interconnection, and highlights the didactic potential.
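The computation sketched in the talk, deriving a new pixel's grey value from its 4x4 neighbourhood, can be illustrated outside MathCAD as well. The following sketch uses Keys's cubic convolution kernel with a = -0.5 and clamps indices at the image border; both are common conventions assumed here, not details taken from the talk:

```python
import numpy as np

def cubic_kernel(t, a=-0.5):
    """Keys cubic convolution kernel (assumed kernel choice)."""
    t = abs(t)
    if t < 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def bicubic_sample(img, x, y):
    """Interpolate the grey value of img at fractional position (x, y).

    Combines the 4x4 neighbourhood of pixels around (x, y) with
    kernel weights; border indices are clamped.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    value = 0.0
    for j in range(-1, 3):
        for i in range(-1, 3):
            xi = min(max(x0 + i, 0), img.shape[1] - 1)
            yj = min(max(y0 + j, 0), img.shape[0] - 1)
            weight = cubic_kernel(x - (x0 + i)) * cubic_kernel(y - (y0 + j))
            value += img[yj, xi] * weight
    return value
```

For a two-fold enlargement, each new pixel would be sampled at half-integer positions of the source image; at integer positions the kernel reproduces the original pixel value exactly, which makes it an interpolation rather than a mere approximation.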
195

Exploring Automatic Synonym Generation for Lexical Simplification of Swedish Electronic Health Records

Jänich, Anna January 2023
Electronic health records (EHRs) are used in Sweden's healthcare systems to store patients' medical information. Patients in Sweden have the right to access and read their health records. Unfortunately, the language used in EHRs is very complex and presents a challenge for readers who lack medical knowledge. Simplifying the language used in EHRs could facilitate the transfer of information between medical staff and patients. This project investigates the possibility of generating Swedish medical synonyms automatically. These synonyms are intended to be used in future systems for lexical simplification that can enhance the readability of Swedish EHRs and simplify medical terminology. Current publicly available Swedish corpora that provide synonyms for medical terminology are insufficient in size to be utilized in a system for lexical simplification. To overcome the obstacle of insufficient corpora, machine learning models are trained to generate synonyms and terms that convey medical concepts in a more understandable way. With the purpose of establishing a foundation for analyzing complex medical terms, a simple mechanism for Complex Word Identification (CWI) is implemented. The mechanism relies on matching strings and substrings from a pre-existing corpus containing hand-curated medical terms in Swedish. To find a suitable strategy for generating medical synonyms automatically, seven different machine learning models are queried for synonym suggestions for 50 complex sample terms. To explore the effect of different input data, we trained our models on different datasets with varying sizes. Three of the seven models are based on BERT and four of them are based on Word2Vec. For each model, results for the 50 complex sample terms are generated and raters with medical knowledge are asked to assess whether the automatically generated suggestions could be considered synonyms. 
The results vary between the different models and seem to be connected to the amount and quality of the data they have been trained on. Furthermore, the raters involved in judging the synonyms exhibit great disagreement, revealing the complexity and subjectivity of the task to find suitable and widely accepted medical synonyms. The method and models applied in this project do not succeed in creating a stable source of suitable synonyms. The chosen BERT approach based on Masked Language Modelling cannot reliably generate suitable synonyms due to the limitation of generating one term per synonym suggestion only. The Word2Vec models demonstrate some weaknesses due to the lack of context consideration. Despite the fact that the current performance of our models in generating automatic synonym suggestions is not entirely satisfactory, we have observed a promising number of accurate suggestions. This gives us reason to believe that with enhanced training and a larger amount of input data consisting of Swedish medical text, the models could be improved and eventually effectively applied.
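The string- and substring-matching mechanism for Complex Word Identification described above can be sketched roughly as follows. The function name, the token handling, and the substring-length threshold are illustrative assumptions; `medical_terms` stands in for the hand-curated Swedish term corpus used in the project.

```python
def identify_complex_words(text, medical_terms):
    """Flag tokens that match, or contain, a known medical term.

    A hypothetical sketch of the CWI step: exact match against the
    term list, plus a substring check so that compounds containing
    a term are also flagged (Swedish compounds written as one word).
    """
    terms = {t.lower() for t in medical_terms}
    flagged = []
    for token in text.lower().split():
        word = token.strip('.,;:!?()')
        # length threshold avoids spurious matches on very short terms
        if word in terms or any(t in word for t in terms if len(t) > 4):
            flagged.append(word)
    return flagged
```

In a full pipeline, each flagged word would then be passed to the trained BERT or Word2Vec models to retrieve candidate synonyms.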
196

Examination of A Priori Simulation Process Estimation on Structural Analysis Case

Matthew R Spinazzola (14221838) 07 December 2022
In the field of engineering analysis and simulation, part simplification is often used to reduce the computational time and requirements of finite element solvers. Reducing the complexity of the model through simplification introduces error into the analysis, the amount of which depends on the engineering scenario, the CAD model, and the method of simplification. Expert analysts use their experience and understanding to mitigate this error through intelligent selection of the simplification method; however, there is no formalised system of selection. Artificial intelligence, specifically machine learning algorithms, has been explored as a way of capturing and automating this informal knowledge. One existing method that found success explored only computational fluid dynamics simulations, without validating the method on other kinds of engineering analysis cases. This study attempts to validate this a priori method in a new situation and to compare the results directly between studies. To accomplish this, a new CAD assembly model database of over 300 simplified and non-simplified examples was generated. The models were then subjected to a structural analysis simulation, from which analysis data could be generated and stored. Finally, a regression neural network was used to create machine learning models that predict analysis result errors. This study examines how minimal a neural network architecture can be while still making predictions with accuracy comparable to that of the previous studies.
197

Prepositional Errors in Swedish Upper Secondary School Students’ English Written Production

Billingfors, Caroline January 2024
The aim of the study is to find out to what extent Swedish learners of English, in the first year of upper secondary school, make prepositional errors in their written production, and to what extent these errors can be attributed to negative transfer, overgeneralization and simplification by conducting an Error Analysis. A comparison between gender and type of program, academic and vocational, is made to find out in which type of program most errors appear and if there is any difference in terms of gender.  The data is annotated from the Swedish Learner English Corpus (SLEC), which consists of argumentative essays written by Swedish learners of English, and it consists of 24 randomly selected texts based on the variables binary gender, type of program, Swedish as their L1, school year, and English course. All the texts selected are written by students in the first year of upper secondary school studying the course English 5. The results of the study reveal that Swedish learners of English struggle with prepositional usage. In total, 649 prepositions were identified in the 24 texts. Out of these, 72 (11.09%) were used incorrectly. The most frequently used prepositions involved in these errors are of, for, in, to, and with. Most errors appear when prepositional phrases function as post-modifiers in noun phrases. Substitution is, by far, the most common type of error found, meaning that the students replace the correct preposition with an incorrect one. The results thus show that the students seem to be aware that a preposition should be used although they fail to choose the correct one. Female students make more prepositional errors than male students; similarly, students attending vocational programs make more prepositional errors than students attending academic programs. Most errors are cases of overgeneralizations, followed by negative transfer from Swedish, and simplification. 
However, many of the errors can still be attributed to negative transfer. This suggests that although Swedish and English are similar languages, a similarity that could be expected to produce positive transfer, this does not seem to fully apply to prepositions.
198

An investigation into the solving of polynomial equations and the implications for secondary school mathematics

Maharaj, Aneshkumar 06 1900
This study investigates the possibilities and implications for the teaching of the solving of polynomial equations. It is historically directed and also focusses on the working procedures in algebra which target the cognitive and affective domains. The teaching implications of the development of representational styles of equations and their solving procedures are noted. Since concepts in algebra can be conceived of as processes or objects, cognitive obstacles arise, for example a limited view of the equal sign, which results in learning and reasoning problems. The roles of sense-making, visual imagery, mental schemata and networks in promoting meaningful understanding are scrutinised. Questions and problems are formulated to promote the processes associated with the solving of polynomial equations, and the solving procedures used by a group of college students are analysed. A teaching model/method, which targets the cognitive and affective domains, is presented. / Mathematics Education / M.A. (Mathematics Education)
199

Spectral Simplification in Scalar and Dipolar Coupled Spins Using Multiple Quantum NMR: Developments of Novel Methodologies

Baishya, Bikash 05 1900
Spin-selective MQ-SQ correlation has been demonstrated either by selective pulses in homo-nuclear spin systems in isotropic and weakly orienting chiral media, or by non-selective pulses in hetero-nuclear spin systems in strongly aligned media. As a consequence of the spin-selective correlation, the coherence transfer pathway from MQ to SQ is spin-state selective. This two-dimensional approach enables the passive (remote) couplings to be used to break a complex one-dimensional spectrum into many sub-spectra. Each sub-spectrum contains fewer transitions and hence fewer (active) couplings. The role of the passive couplings is to displace the sub-spectra, and measurement of the displacements, taking their relative tilt into account, provides the magnitudes of the passive couplings along with their relative signs. The further possibility of a spin-state-selective MQ-SQ resolved experiment to determine very small remote couplings, otherwise buried within the linewidth of a one-dimensional spectrum, has been demonstrated. The resolution of the multiple quantum spectrum in the indirect dimension has also been exploited to separate the sub-spectra. The technique renders the analysis of complex spectra in isotropic systems much simpler. The potential of the technique has also been demonstrated for the discrimination of optical enantiomers and the derivation of residual dipolar couplings from very complicated spectra. The second-order spectra obtained in strongly aligned media restrict selective excitation; however, in hetero-nuclear spin systems the non-selective pulses on protons do not interact with the hetero-nuclear spins. Thus the weakly coupled part of a strongly coupled spectrum has been exploited for simplifying the second-order spectrum and thereby its analysis. In this way several methodologies derived from spin-selective correlation have been demonstrated.
An enantiopure spectrum has been recorded from a mixture of R and S enantiomers by a novel pulse scheme called the Double Quantum Selective Refocusing Experiment, using the dipolar coupled methyl protons in weakly orienting media. The selective excitation of double quantum coherence reduces the three-spin system to a two-spin system, and the remote couplings, which otherwise lead to broadening, are refocused. Since the sum of the passive couplings differs between the enantiomers, resolution in the DQ dimension is enhanced, and thereby their discrimination. Finally, several decoupling schemes have been compared in the indirect dimension of the HSQC experiment to resolve 13C satellite spectra otherwise buried within the linewidth, for increased confidence in determining hetero-nuclear framework information.
200

Probabilistic Sequence Models with Speech and Language Applications

Henter, Gustav Eje January 2013
Series data, sequences of measured values, are ubiquitous. Whenever observations are made along a path in space or time, a data sequence results. To comprehend nature and shape it to our will, or to make informed decisions based on what we know, we need methods to make sense of such data. Of particular interest are probabilistic descriptions, which enable us to represent uncertainty and random variation inherent to the world around us. This thesis presents and expands upon some tools for creating probabilistic models of sequences, with an eye towards applications involving speech and language. Modelling speech and language is not only of use for creating listening, reading, talking, and writing machines---for instance allowing human-friendly interfaces to future computational intelligences and smart devices of today---but probabilistic models may also ultimately tell us something about ourselves and the world we occupy. The central theme of the thesis is the creation of new or improved models more appropriate for our intended applications, by weakening limiting and questionable assumptions made by standard modelling techniques. One contribution of this thesis examines causal-state splitting reconstruction (CSSR), an algorithm for learning discrete-valued sequence models whose states are minimal sufficient statistics for prediction. Unlike many traditional techniques, CSSR does not require the number of process states to be specified a priori, but builds a pattern vocabulary from data alone, making it applicable for language acquisition and the identification of stochastic grammars. A paper in the thesis shows that CSSR handles noise and errors expected in natural data poorly, but that the learner can be extended in a simple manner to yield more robust and stable results also in the presence of corruptions. Even when the complexities of language are put aside, challenges remain. 
The seemingly simple task of accurately describing human speech signals, so that natural synthetic speech can be generated, has proved difficult, as humans are highly attuned to what speech should sound like. Two papers in the thesis therefore study nonparametric techniques suitable for improved acoustic modelling of speech for synthesis applications. Each of the two papers targets a known-incorrect assumption of established methods, based on the hypothesis that nonparametric techniques can better represent and recreate essential characteristics of natural speech. In the first paper of the pair, Gaussian process dynamical models (GPDMs), nonlinear, continuous state-space dynamical models based on Gaussian processes, are shown to better replicate voiced speech, without traditional dynamical features or assumptions that cepstral parameters follow linear autoregressive processes. Additional dimensions of the state-space are able to represent other salient signal aspects such as prosodic variation. The second paper, meanwhile, introduces KDE-HMMs, asymptotically-consistent Markov models for continuous-valued data based on kernel density estimation, that additionally have been extended with a fixed-cardinality discrete hidden state. This construction is shown to provide improved probabilistic descriptions of nonlinear time series, compared to reference models from different paradigms. The hidden state can be used to control process output, making KDE-HMMs compelling as a probabilistic alternative to hybrid speech-synthesis approaches. A final paper of the thesis discusses how models can be improved even when one is restricted to a fundamentally imperfect model class. Minimum entropy rate simplification (MERS), an information-theoretic scheme for postprocessing models for generative applications involving both speech and text, is introduced. MERS reduces the entropy rate of a model while remaining as close as possible to the starting model. 
This is shown to produce simplified models that concentrate on the most common and characteristic behaviours, and provides a continuum of simplifications between the original model and zero-entropy, completely predictable output. As the tails of fitted distributions may be inflated by noise or empirical variability that a model has failed to capture, MERS's ability to concentrate on high-probability output is also demonstrated to be useful for denoising models trained on disturbed data. / QC 20131128 / ACORNS: Acquisition of Communication and Recognition Skills / LISTA – The Listening Talker
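For a single categorical distribution, the spirit of MERS, lowering entropy while staying close to the original model, can be illustrated with simple temperature scaling. This is an illustrative stand-in for intuition only, not the thesis's actual entropy-rate optimisation for sequence models:

```python
import numpy as np

def sharpen(p, tau):
    """Lower the entropy of a categorical distribution p by
    temperature scaling: tau < 1 concentrates probability mass on
    the most likely outcomes, tau = 1 leaves p unchanged, and
    tau -> 0 approaches the zero-entropy argmax distribution.

    A hypothetical stand-in for the MERS idea of a continuum of
    simplifications between the original model and fully
    predictable output.
    """
    q = np.asarray(p, dtype=float) ** (1.0 / tau)
    return q / q.sum()
```

As tau decreases, low-probability tail outcomes, which may largely reflect noise, are suppressed first, mirroring the denoising behaviour described above.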
