  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
851

A comparative analysis of stylistic devices in Shakespeare’s plays, Julius Caesar and Macbeth, and their Xitsonga translations

Baloyi, Mafemani Joseph 06 1900 (has links)
The study adopts the theory of Descriptive Translation Studies to undertake a comparative analysis of stylistic devices in Shakespeare’s two plays, Julius Caesar and Macbeth, and their Xitsonga translations. It contextualises its research aim and objectives after outlining a sequential account of theory development in the discipline of translation, and arrives at suitable tools for data collection and analysis. Through textual observation and reading notes, the current study argues that researchers and scholars in the discipline converge on the dire need for translation strategies, but diverge in how they classify and apply them in translating and translation. This study maintains that translation strategies should be grouped into explicitation, normalisation and simplification, with each assigned specific translation procedures. The study demonstrates that the explicitation and normalisation strategies are best suited to dealing with translation constraints at a microtextual level. Sampled excerpts from both plays were examined through an analytical framework based on subjective sameness within Skopos theory. The current study acknowledges that there is no single way of translating a play from one culture to another. It also acknowledges that there appears to be no way the translator can escape the influence of the source text, an inherent cultural feature that makes it unique. With no sure way of managing stylistic devices as translation constraints, translation as a problem-solving process requires creativity, a demonstrated mastery of the language and style of the author of the source text, and a power drive characterised by aspects of interlingual psychological balance of power and knowledge power.
These aspects will help the translator to manage any translation brief better and arrive at a product that is accessible, accurate and acceptable to the target readership. They will also ensure that the translator maintains a balance between the two languages in contact, in order to guard against the domination of one language over the other. The current study concludes that Skopos theory has a larger influence in anticipating the context of the target readership, a factor that can introduce high risk when assessing the communicability conditions for the translated message. Conversely, when dealing with stylistic devices and employing literal translation as a procedure for simplification, the translator aims only at simplifying the language and making it accessible for the sake of ‘accessibility’, and the result remains a product with communicative inadequacies. The study also concludes by maintaining that translation is not only transcoding, but an activity that calls for the translator’s creativity in order to identify and analyse the constraints encountered and decide on the corresponding translation strategies. / African Languages / D. Litt. et Phil. (African Languages)
852

Automatická verifikace v procesu soubežného návrhu hardware a software / Automated Verification in HW/SW Co-design

Charvát, Lukáš Unknown Date (has links)
The subject of this dissertation is the design of new hardware verification techniques optimised for use in the HW/SW co-design process. In this type of development, hardware and software are developed in parallel in order to speed up the development of new systems. Current microprocessor design tools built on this development style usually allow developers to verify their designs using various simulation techniques and/or so-called functional verification. A common drawback of these approaches is that they focus only on finding bugs; the resulting product may therefore still contain undiscovered non-trivial defects. For this reason, the deployment of formal methods has become increasingly desirable in recent years. Unlike the bug-hunting approaches above, formal verification aims to deliver a rigorous proof that the given system really satisfies the required properties. Although considerable progress has been made in this area in recent years, current formal approaches are far from being able to fully automatically check all relevant properties of a verified design without significant, and often costly, human involvement in the verification process. This thesis addresses the problem of automating the verification process by focusing on verification techniques in which less emphasis is deliberately placed on precision and generality, at the price of achieving full automation (e.g. by excluding the need to manually create environment models). The thesis also focuses on the efficiency of the proposed techniques and their ability to provide continuous feedback on the verification process (e.g. by reporting the current state of coverage). Particular attention is paid to the development of formal methods for checking the equivalence of microprocessor designs at different levels of abstraction. These designs may differ in the way program instructions are processed internally, but from the external point of view (given, for example, by the contents of the registers visible to the programmer) their behaviour when executing the same input program must be identical. Besides these topics, the thesis also deals with the design of methods for verifying the correctness of mechanisms that prevent data and control hazards within an instruction pipeline. All methods described in this thesis have been implemented in the form of several tools, and applying these tools to the verification of non-trivial processor designs has yielded promising experimental results.
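The equivalence criterion for processor designs at two abstraction levels, as described above, can be conveyed with a toy sketch. This is a hypothetical illustration, not the tool from the thesis: both models implement a two-instruction ISA ("li" = load immediate, "add" = register add), differ in internal bookkeeping, and are declared equivalent when the same program leaves the programmer-visible register file in the same state.

```python
# Hypothetical sketch: equivalence of two processor models means identical
# programmer-visible register state after executing the same program.

def run_instruction_accurate(program, nregs=4):
    regs = [0] * nregs
    for op, dst, src in program:
        regs[dst] = src if op == "li" else regs[dst] + regs[src]
    return regs

def run_microarch_model(program, nregs=4):
    # Stand-in for a lower-level model: same ISA semantics, but results pass
    # through an explicit writeback queue before being committed.
    regs = [0] * nregs
    writeback = []
    for op, dst, src in program:
        writeback.append((dst, src if op == "li" else regs[dst] + regs[src]))
        dst_c, val_c = writeback.pop(0)  # commit before the next instruction
        regs[dst_c] = val_c
    return regs

def equivalent(program):
    """True iff both models agree on the final architectural state."""
    return run_instruction_accurate(program) == run_microarch_model(program)

program = [("li", 0, 5), ("li", 1, 7), ("add", 0, 1)]
```

A real equivalence checker proves this for all programs symbolically rather than testing one concrete run.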
853

Redukce nedeterministických konečných automatů / Reduction of the Nondeterministic Finite Automata

Procházka, Lukáš January 2011 (has links)
The nondeterministic finite automaton is an important tool used to process strings in many areas of programming. Reducing its size is important for increasing the efficiency of programs; however, this problem is computationally hard, so new techniques need to be sought. This work describes the basics of finite automata and then introduces some methods for their reduction. Usable reduction algorithms are described in greater detail, then implemented and tested, and the test results are evaluated.
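As a minimal illustration of what "reduction" means here (this is a naive sketch, not one of the algorithms from the thesis): an NFA can be shrunk by merging pairs of states that have the same acceptance status and identical outgoing transitions. Practical reductions, e.g. those based on simulation relations, are far more powerful.

```python
# Naive NFA reduction: merge states with identical behaviour.
# transitions: {state: {symbol: frozenset of target states}}

def merge_identical_states(transitions, accepting):
    changed = True
    while changed:
        changed = False
        states = list(transitions)
        for i, p in enumerate(states):
            for q in states[i + 1:]:
                same_acceptance = (p in accepting) == (q in accepting)
                if same_acceptance and transitions[p] == transitions[q]:
                    # q is redundant: redirect every edge targeting q to p
                    for state in transitions:
                        for symbol, targets in transitions[state].items():
                            if q in targets:
                                transitions[state][symbol] = (targets - {q}) | {p}
                    del transitions[q]
                    accepting.discard(q)
                    changed = True
                    break
            if changed:
                break
    return transitions, accepting

# States 1 and 2 behave identically, so this 4-state NFA shrinks to 3 states.
nfa = {0: {"a": frozenset({1, 2})}, 1: {"a": frozenset({3})},
       2: {"a": frozenset({3})}, 3: {}}
reduced, acc = merge_identical_states(nfa, {3})
```

The hardness mentioned in the abstract shows up as soon as one asks for a *minimal* equivalent NFA, which this greedy merging does not guarantee.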
854

Explicit Calculations of Siu’s Effective Termination of Kohn’s Algorithm and the Hachtroudi-Chern-Moser Tensors in CR Geometry / Calculs explicites pour la terminaison effective de l'algorithme de Kohn d'après Siu, et tenseurs de Hachtroudi-Chern-Moser en géométrie CR

Foo, Wei Guo 14 March 2018 (has links)
La première partie présente des calculs explicites de terminaison effective de l'algorithme de Kohn proposée par Siu. Dans la deuxième partie, nous étudions la géométrie des hypersurfaces réelles dans Cⁿ, et nous calculons des invariants explicites avec la méthode d'équivalences de Cartan pour déterminer les lieux CR-ombilics. / The first part of the thesis consists of calculations around Siu's effective termination of Kohn's algorithm. The second part of the thesis studies the CR real hypersurfaces in complex spaces and calculates various explicit invariants using Cartan's equivalence method to study CR-umbilical points.
855

Talets och undertextens olika nyanser : En undersökning av strykningar utav modalitet och uttryck för värderingar i adaptionen från tal till undertext / The different nuances of speech and subtitles : An analysis of omissions regarding modality and expressions of valuation in the shift between speech and subtitles

Adolfsson, Linnea January 2020 (has links)
The communication regarding the new coronavirus raises questions about availability and plain language. In Sweden, approximately 1.5 million people need this communication in written form due to, inter alia, hearing impairment. Although subtitles are considered one of the most read genres today, subtitling has received little attention in Swedish linguistic research, especially when it comes to intralingual subtitles. The communication carried by subtitles is, however, condensed and can lead to information loss. In Sweden, SVT as a public service channel has special requirements to maintain good quality in subtitles, yet omissions are a prerequisite; they must, however, never lead to the loss of important information. Within Systemic-Functional Linguistics, modality is regarded as a set of linguistic tools for shifting communication in different degrees and directions; a similar shift in degree can occur in expressions of valuation and opinion. Modality and valuation are therefore interesting and important to study in a well-known TV show that communicates, informs and debates about a societal crisis like the coronavirus. This paper examines modality and expressions of valuation in the adaptation from speech to subtitles in the Swedish news programme Agenda (SVT) reporting on the coronavirus in Sweden and the world. Using the subtitling-shift model created by Sahlin (2001), I examine the omissions of modality and expressions of valuation. The results show that the communication in the subtitles has a weaker emphasis on conflicting opinions, is more objectively constructed, and shifts in nuance.
856

Sur quelques problèmes de reconstruction en imagerie MA-TIRF et en optimisation parcimonieuse par relaxation continue exacte de critères pénalisés en norme-l0 / On some reconstruction problems in MA-TIRF imaging and in sparse optimization using continuous exact relaxation of l0-penalized criteria

Soubies, Emmanuel 14 October 2016 (has links)
Cette thèse s'intéresse à deux problèmes rencontrés en traitement du signal et des images. Le premier concerne la reconstruction 3D de structures biologiques à partir d'acquisitions multi-angles en microscopie par réflexion totale interne (MA-TIRF). Dans ce contexte, nous proposons de résoudre le problème inverse avec une approche variationnelle et étudions l'effet de la régularisation. Une batterie d'expériences, simples à mettre en oeuvre, sont ensuite proposées pour étalonner le système et valider le modèle utilisé. La méthode proposée s'est montrée être en mesure de reconstruire avec précision un échantillon phantom de géométrie connue sur une épaisseur de 400 nm, de co-localiser deux molécules fluorescentes marquant les mêmes structures biologiques et d'observer des phénomènes biologiques connus, le tout avec une résolution axiale de l'ordre de 20 nm. La deuxième partie de cette thèse considère plus précisément la régularisation l0 et la minimisation du critère moindres carrés pénalisé (l2-l0) dans le contexte des relaxations continues exactes de cette fonctionnelle. Nous proposons dans un premier temps la pénalité CEL0 (Continuous Exact l0) résultant en une relaxation de la fonctionnelle l2-l0 préservant ses minimiseurs globaux et pour laquelle de tout minimiseur local on peut définir un minimiseur local de l2-l0 par un simple seuillage. Par ailleurs, nous montrons que cette relaxation élimine des minimiseurs locaux de la fonctionnelle initiale. La minimisation de cette fonctionnelle avec des algorithmes d'optimisation non-convexe est ensuite utilisée pour différentes applications montrant l'intérêt de la minimisation de la relaxation par rapport à une minimisation directe du critère l2-l0. Enfin, une vue unifiée des pénalités continues de la littérature est proposée dans ce contexte de reformulation exacte du problème. / This thesis is devoted to two problems encountered in signal and image processing. 
The first one concerns the 3D reconstruction of biological structures from multi-angle total internal reflection fluorescence microscopy (MA-TIRF). Within this context, we propose to tackle the inverse problem by using a variational approach and we analyze the effect of the regularization. A set of simple experiments is then proposed to both calibrate the system and validate the used model. The proposed method has been shown to be able to reconstruct precisely a phantom sample of known geometry on a 400 nm depth layer, to co-localize two fluorescent molecules used to mark the same biological structures, and also to observe known biological phenomena, everything with an axial resolution of 20 nm. The second part of this thesis considers more precisely the l0 regularization and the minimization of the penalized least squares criterion (l2-l0) within the context of exact continuous relaxations of this functional. Firstly, we propose the Continuous Exact l0 (CEL0) penalty, leading to a relaxation of the l2-l0 functional which preserves its global minimizers and for which, from each local minimizer, we can define a local minimizer of l2-l0 by a simple thresholding. Moreover, we show that this relaxed functional eliminates some local minimizers of the initial functional. The minimization of this functional with nonsmooth nonconvex algorithms is then used on various applications, showing the interest of minimizing the relaxation in contrast to a direct minimization of the l2-l0 criterion. Finally, we propose a unified view of continuous penalties of the literature within this exact problem reformulation framework.
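For reference, the CEL0 penalty mentioned above has a closed form. The following is written from the published formulation by Soubies, Blanc-Féraud and Aubert, not quoted from the thesis itself, so the notation (a_i the i-th column of the observation matrix A, λ the l0 weight, N the signal length) is an assumption:

```latex
\Phi_{\mathrm{CEL0}}(u) \;=\; \sum_{i=1}^{N} \left(
  \lambda \;-\; \frac{\lVert a_i\rVert^2}{2}
  \left( \lvert u_i\rvert - \frac{\sqrt{2\lambda}}{\lVert a_i\rVert} \right)^{2}
  \mathbb{1}_{\left\{ \lvert u_i\rvert \le \sqrt{2\lambda}/\lVert a_i\rVert \right\}}
\right)
```

Minimizing the relaxed criterion, the least-squares data term plus this penalty, then preserves the global minimizers of the original l2-l0 problem, as the abstract states.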
857

Sebekontrola žáků jako pedagogické téma: problematika měření pomocí sebehodnoticích dotazníků / Student self-control as an educational topic: the issue of measurement using self-report questionnaires

Papajoanu, Ondřej January 2019 (has links)
Self-control is a key non-cognitive skill, frequently measured using self-report questionnaires containing items with rating scales. Such data, however, can be distorted by differences in scale usage among respondents, which may lead to erroneous conclusions when comparing different groups of respondents. The aim of the thesis is to analyze the differences in self-control among students from different Czech upper-secondary schools based on their (unadjusted) self-reports and on self-reports adjusted for differences in scale usage using the anchoring vignette method. The empirical part of the thesis comprises two studies. In the first (pilot) study, we translate a scale to measure self-control, create anchoring vignettes and focus on verifying the method's basic assumptions using data from questionnaires and interviews with students. In the second (main) study, we compare the (un)adjusted self-reports of self-control and the assessment of the vignettes between students from selected upper-secondary schools of different types in Prague (N = 312). We found differences in evaluation standards between students from different types of schools. Differences in scale usage among respondents indeed represent a real threat when comparing student self-reports of self-control....
858

Metoda ukotvujících vinět a její využití pro zvyšování komparability sebehodnocení vědomostí a dovedností v oblasti ICT / The Anchoring Vignette Method and its use for increasing the comparability of self-assessments of ICT knowledge and skills

Hrabák, Jan January 2020 (has links)
This dissertation deals with the use of the Anchoring Vignette Method in educational research carried out to establish the level of information and communication technology (ICT) knowledge and skills, with a focus on Czech upper-secondary school students. The theoretical part describes the curricular documents that define ICT knowledge and skills, which in the Czech Republic means mainly the Framework Educational Programmes; attention is also paid to the international document DigComp. The Anchoring Vignette Method, with a focus on its nonparametric approach, is described in detail too. The author also provides an overview of the available Czech and foreign literature on ICT knowledge and skills research - including the International Computer and Information Literacy Study (ICILS, 2013 and 2018) - and on the use of the Anchoring Vignette Method in educational research. The empirical part describes the steps taken in formulating anchoring vignettes, an integral part of the Anchoring Vignette Method, in accordance with the curricular documents of the Czech Republic, and the steps taken in formulating anchoring vignettes on the basis of the international document DigComp. The verification of the fulfilment of the Anchoring Vignette Method assumptions (vignette...
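The nonparametric approach mentioned in the two abstracts above can be sketched briefly. This is a hedged illustration following King et al.'s nonparametric "C" recoding, on which such work typically builds, not code from either thesis: a respondent's self-rating y is re-expressed relative to that same respondent's ratings z of shared vignettes, which absorbs individual differences in scale usage. The sketch assumes the respondent rates the vignettes in strictly increasing order; ties among vignettes and order violations need the fuller treatment.

```python
# Nonparametric anchoring-vignette recoding (sketch after King et al.).

def recode_against_vignettes(y, z):
    """Map self-rating y onto the scale 1..2k+1 given k vignette ratings z."""
    c = 1
    for zj in z:
        if y < zj:
            return c          # strictly below vignette j
        if y == zj:
            return c + 1      # tied with vignette j
        c += 2                # above vignette j: move on to the next interval
    return c

# Two respondents give the same raw self-rating 3 but anchor differently,
# so their recoded positions differ:
lenient = recode_against_vignettes(3, [2, 4])  # above vignette 1, below 2
strict = recode_against_vignettes(3, [3, 5])   # tied with vignette 1
```

After recoding, the lenient rater ends up higher than the strict rater even though both ticked the same box, which is exactly the scale-usage correction the abstracts describe.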
859

Digitalisering som verktyg för likvärdighet? : En studie om rektorers resonemang om den digitala satsningen utifrån begreppet likvärdighet.

Bellman, Angelica, Yoder, Sara January 2020 (has links)
Syftet med studien är att beskriva och problematisera hur rektorer verksamma inom år f-6 resonerar om satsningen på digitalisering utifrån begreppet likvärdighet. Åtta rektorer på olika skolor är intervjuade. Resultatet visade att rektorerna på olika sätt bedriver skolutveckling inom området digitalisering för att uppnå likvärdighet inom verksamheten. I resultatet framkommer dock skillnader i hur långt verksamheterna kommit både inom skolor och mellan skolor utifrån digitalisering, digital kompetens och likvärdighet. Rektorerna lyfter olika lokala orsaker till att verksamheternas digitala implementering kommit olika långt. En slutsats som går att dra efter resultatet är vilka komplexa förutsättningar en rektor har att arbeta efter för att uppnå likvärdighet genom digitalisering. Rektorerna lyder under ett dualistiskt ledarskap, både från kommun och stat. Rektorerna är således klämda mellan yttre och inre ramar som påverkar verksamheten. I studiens resultat kommer detta till uttryck genom att samtliga rektorer upplever att de måste välja och prioritera olika skolutvecklingsområden. Det kan leda till att det uppstår en ojämlikhet inom området digitalisering. / The purpose of the study is to describe and problematize how principals working in grades F-6 reason about the digitalization initiative in terms of the concept of equity. Eight principals at different schools from the same municipality have been interviewed. The results show that the principals use various methods to conduct school development in the field of digitalization in order to achieve equity within their schools. In particular, the results show differences in how far the development of digitalization, digital skills and equity has progressed, both within schools and between schools. The principals point to different local reasons why their schools' digital implementation has progressed to different degrees. One conclusion that can be drawn from the results concerns the complex conditions a principal must work under to achieve equity through digitalization. 
The principals are subject to a dual leadership from the municipality and the state, and are thus squeezed between external and internal frames that affect their schools. This is expressed in the results in that all principals feel they must choose and prioritize among different school development areas, which can lead to inequalities in the field of digitalization.
860

Complexity and Conflict: Modeling Contests with Exogenous and Endogenous Noise

Richard Mickelsen (12476793) 28 April 2022 (has links)
Contest outcomes often involve some mix of skill and chance. In three essays, I vary the sources of noise and show how player actions either influence, or are influenced by, noise. I begin with a classic multi-battle contest, the Colonel Blotto game. Due to his disadvantage in resources, the weak player in this contest stochastically distributes resources to a subset of battlefields while neglecting all others in an attempt to achieve a positive payoff. In contrast, the strong player evenly distributes his resources in order to defend all battlefields, while randomly assigning extra resources to some. Because the weak player benefits from randomizing over larger numbers of battlefields, a strong player has an incentive to decrease the range over which the weak player can randomize. When battlefields are exogenously partitioned into subsets, or "fronts", he is able to do this by decentralizing his forces to each front in a stage prior to the distribution of forces to battlefields and actual conflict. These allocations are permanent, and each subset of battlefields effectively becomes its own independent Blotto subgame. I show that there exist parameter regions in which the strong player's unique equilibrium payoffs with decentralization are strictly higher than the unique equilibrium payoffs without decentralization.

In my second paper, I show how sources of exogenous noise, what Clausewitz referred to as the "fog of war", obscure developments on the battlefield from the view of a military leader, while individual inexperience and lack of expertise in a particular situation influence his decision-making. I model both forms of uncertainty using the decentralized Colonel Blotto game from the first chapter. 
To do so, I first test the robustness of allocation-stage subgame perfect equilibria by changing the contest success function to a lottery, then I find the players' quantal response equilibria (QRE) to show how individual decision-making is impacted by bounded rationality and noisy best responses, represented by a range of psi values in the logit QRE. I find that player actions rely significantly less on decentralization strategies under the lottery CSF compared to the case of the all-pay auction, owing mainly to the increased exogenous noise. Moreover, agent QRE and heterogeneous QRE approximate subgame perfect equilibria for high values of psi in the case of an all-pay auction, but under the lottery CSF, QRE is largely unresponsive to changes in psi due to the increase in exogenous noise.

Finally, I examine a potential method for introducing noise into the all-pay auction (APA) contest success function (CSF) utilized in the Colonel Blotto games of the first two chapters. Many contests are fundamentally structured as APAs, yet there is a tendency in the empirical literature to utilize a lottery CSF when stochastic outcomes are possible or the tractability of pure strategy equilibria is desired. However, previous literature has shown that using a lottery CSF sacrifices multiple distinguishing characteristics of the APA, such as the mixed strategy equilibria described by Baye, Kovenock, and de Vries (1996), the exclusion principle of Baye, Kovenock, and de Vries (1993), and the caps-on-lobbying principle of Che and Gale (1998). I overcome this by formulating an APA that incorporates noise yet retains the defining characteristics of an auction, by forming a convex combination of the APA and the fair lottery with a risk parameter lambda. I prove that the equilibria hold by following the proofs of Baye et al. (1996, 1993) and Che and Gale (1998), and I show the new CSF satisfies the axioms of Skaperdas (1996). 
While player and auctioneer actions, payments, and revenues in the noisy APA adhere closely to those of the APA for low levels of noise, the effect of discounted expected payoffs results in lower aggregate payments and payoffs when noise is high. Finally, I show the noisy APA is noise-equivalent to the lottery CSF only when lambda = 0, i.e., the fair lottery.
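The convex-combination CSF described in the third essay can be sketched in a few lines. The function and parameter names below are mine, not the essay's notation beyond lambda: the winner-take-all all-pay auction (ties split evenly) is blended with the fair lottery by a weight lam in [0, 1], so lam = 1 recovers the deterministic APA and lam = 0 degenerates to the fair lottery.

```python
# Noisy all-pay auction CSF as a convex combination of APA and fair lottery.

def noisy_apa_win_prob(x1, x2, lam):
    """Probability that player 1 wins given bids x1, x2 and weight lam."""
    if x1 > x2:
        apa = 1.0
    elif x1 < x2:
        apa = 0.0
    else:
        apa = 0.5  # ties split evenly, as in the standard APA
    return lam * apa + (1.0 - lam) * 0.5
```

For any lam the two players' win probabilities still sum to one, and for any lam > 0 outbidding the opponent strictly raises the chance of winning, which is what preserves the auction-like character the essay relies on.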
