851.
Hierarchical clustering using equivalence test : application on automatic segmentation of dynamic contrast enhanced image sequence / Clustering hiérarchique en utilisant le test d'équivalence : application à la segmentation automatique des séries dynamiques de perfusion
Liu, Fuchen, 11 July 2017
Dynamic contrast-enhanced (DCE) imaging allows non-invasive access to tissue micro-vascularization. It appears as a promising tool for building imaging biomarkers for diagnosis, prognosis, or monitoring of anti-angiogenic cancer treatment. However, quantitative analysis of DCE image sequences suffers from a low signal-to-noise ratio (SNR). The SNR may be improved by averaging functional information over large regions of interest, which, however, need to be functionally homogeneous. To this end, we propose a novel method for the automatic segmentation of DCE image sequences into functionally homogeneous regions, called DCE-HiSET. At the core of the proposed method, HiSET (Hierarchical Segmentation using Equivalence Test) clusters functional features or signals (e.g., indexed by time) observed discretely and with noise on a finite metric space regarded as a landscape, assuming independent Gaussian noise of known constant level. HiSET is a hierarchical clustering algorithm that uses the p-value of a multiple equivalence test as its dissimilarity measure and consists of two steps. The first exploits the spatial neighborhood structure to preserve the local properties of the metric space; the second recovers spatially disconnected homogeneous structures at a larger, global scale. Given an expected homogeneity discrepancy $\delta$ for the multiple equivalence test, both steps stop automatically through a control of the type I error, providing an adaptive choice of the number of clusters. The parameter $\delta$ thus acts as the tuning parameter controlling the size and complexity of the segmentation. Theoretically, we prove that, if the landscape is functionally piecewise constant with well-separated functional features, HiSET retrieves the exact partition with high probability when the number of observation times is large enough. For DCE image sequences, the assumptions on which HiSET relies are obtained through a model of the observed intensities and a variance stabilization that depends on one additional parameter $a$ and is justified a posteriori. DCE-HiSET is therefore the combination of this DCE imaging model with our statistical core, HiSET. On synthetic 2D DCE image sequences, DCE-HiSET outperformed state-of-the-art clustering-based methods. As a clinical application, we proposed a strategy to refine a region of interest (ROI) roughly delineated by a clinician on a DCE image sequence, in order to improve the precision of ROI borders and the robustness of ROI-based analysis while decreasing delineation time. The automatic refinement strategy relies on a DCE-HiSET segmentation followed by a series of erosion and dilation operations. Its robustness and efficiency were verified by comparing the classification of 99 ovarian tumors, based on their associated DCE-MR image sequences, against biopsy histopathology used as the benchmark. Finally, DCE-HiSET was adapted to the segmentation of 3D DCE image sequences through two strategies that treat the neighborhood structure across slices differently.
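To make the merge criterion concrete, here is a minimal Python sketch of equivalence-test-driven agglomeration. It is not the thesis implementation: it drops the two-step local/global scheme and the multiple-testing control described above, assumes the noise level sigma is known, treats $\delta$ as the total L2 discrepancy over the T observation times, and uses the least-favorable boundary of the composite null, which yields a noncentral chi-square reference distribution.

```python
import numpy as np
from scipy.stats import ncx2

def equivalence_pvalue(sum_a, n_a, sum_b, n_b, sigma, delta):
    # One-sided equivalence test between two cluster mean curves:
    # H0: ||f_A - f_B|| >= delta  vs  H1: ||f_A - f_B|| < delta.
    # Small p-values are evidence FOR merging.
    mean_a, mean_b = sum_a / n_a, sum_b / n_b
    scale = sigma**2 * (1.0 / n_a + 1.0 / n_b)
    stat = np.sum((mean_a - mean_b) ** 2) / scale
    ncp = delta**2 / scale                # least-favorable point of H0
    return ncx2.cdf(stat, df=len(mean_a), nc=ncp)

def equivalence_clustering(signals, sigma, delta, alpha=0.05):
    # Greedy agglomeration: merge the pair with the smallest equivalence
    # p-value while that p-value stays below alpha (no spatial step here).
    clusters = [(s.astype(float), 1, [i]) for i, s in enumerate(signals)]
    while len(clusters) > 1:
        p, i, j = min(((equivalence_pvalue(a[0], a[1], b[0], b[1], sigma, delta), i, j)
                       for i, a in enumerate(clusters)
                       for j, b in enumerate(clusters) if j > i),
                      key=lambda t: t[0])
        if p > alpha:                     # no pair is provably delta-equivalent
            break
        sj, nj, mj = clusters.pop(j)      # j > i, so index i stays valid
        si, ni, mi = clusters[i]
        clusters[i] = (si + sj, ni + nj, mi + mj)
    return [m for _, _, m in clusters]

# Example: two groups of five noisy signals around levels 0 and 1 (T = 100);
# delta = 5 corresponds to a mean per-time-point gap of 0.5.
rng = np.random.default_rng(0)
sig = np.vstack([rng.normal(m, 0.3, size=(5, 100)) for m in (0.0, 1.0)])
print(equivalence_clustering(sig, sigma=0.3, delta=5.0))  # expected: two clusters
```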
This PhD thesis was supported by a CIFRE contract of the ANRT (Association Nationale de la Recherche et de la Technologie) with the French company INTRASENSE, which designs, develops, and markets medical imaging visualization and analysis solutions, including Myrian®. DCE-HiSET has been integrated into Myrian® and tested to be fully functional.
852.
A comparative analysis of stylistic devices in Shakespeare's plays, Julius Caesar and Macbeth, and their Xitsonga translations
Baloyi, Mafemani Joseph, 06 1900
The study adopts the theory of Descriptive Translation Studies to undertake a comparative analysis of stylistic devices in Shakespeare's two plays, Julius Caesar and Macbeth, and their Xitsonga translations. It contextualises its research aim and objectives after outlining a sequential account of theory development in the discipline of translation, and arrives at suitable tools for data collection and analysis. Through textual observation and reading notes, the current study argues that researchers and scholars in the discipline converge on the dire need for translation strategies, but diverge in their classification and particular application in translating and translation. This study maintains that translation strategies should be grouped into explicitation, normalisation and simplification, with each assigned specific translation procedures. The study demonstrates that the explicitation and normalisation strategies are best suited to dealing with translation constraints at a microtextual level.
The sampled excerpts from both plays were examined within an analytical framework based on subjective sameness within Skopos theory. The current study acknowledges that there is no single way of translating a play from one culture to another. It also acknowledges that there appears to be no way the translator can escape the influence of the source text, as an inherent cultural feature that makes it unique. With no sure way of managing stylistic devices as translation constraints, translation as a problem-solving process requires creativity, a demonstrated mastery of the language and style of the source-text author, as well as a power drive characterised by interlingual psychological balance of power and knowledge power. These aspects help the translator to manage any translation brief better and to arrive at a product that is accessible, accurate and acceptable to the target readership. They also ensure that the translator maintains a balance between the two languages in contact, in order to guard against the domination of one language over the other.
The current study concludes that Skopos theory has a larger influence in anticipating the context of the target readership, a factor that can introduce high risk when assessing the communicability conditions for the translated message. Conversely, when the translator deals with stylistic devices by employing literal translation as a procedure of simplification, the language is merely simplified for the sake of 'accessibility', and the product retains communicative inadequacies. The study also concludes that translation is not only transcoding, but an activity that calls for the translator's creativity in identifying and analysing the constraints encountered and deciding on the corresponding translation strategies. / African Languages / D. Litt. et Phil. (African Languages)
853.
Automatická verifikace v procesu souběžného návrhu hardware a software / Automated Verification in HW/SW Co-design
Charvát, Lukáš, Unknown Date
The subject of this dissertation is the design of new hardware verification techniques optimized for use in HW/SW co-design. In this development style, hardware and software are developed in parallel in order to speed up the development of new systems. Current microprocessor design tools built on this style usually allow developers to check their design using various simulation techniques and/or so-called functional verification. A common drawback of these approaches is that they focus only on finding bugs; the resulting product may thus still contain undiscovered non-trivial defects. For this reason, the deployment of formal methods has become increasingly desirable in recent years. In contrast to the bug-hunting approaches above, formal verification aims to deliver a rigorous proof that the given system indeed satisfies the required properties. Although considerable progress has been achieved in this area, current formal approaches are far from being able to fully automatically check all relevant properties of a verified design without significant and often costly human involvement in the verification process. This thesis addresses the problem of automating the verification process by focusing on verification techniques that deliberately place less emphasis on precision and generality in exchange for full automation (e.g., by removing the need to manually create environment models). The thesis further focuses on the efficiency of the proposed techniques and on their ability to provide continuous feedback about the verification process (e.g., by reporting the current state of coverage). Special attention is paid to the development of formal methods that check the equivalence of microprocessor designs at different levels of abstraction. Such designs may differ in the way program instructions are processed internally, but from the outside perspective (given, e.g., by the contents of the registers visible to the programmer) their behavior when executing the same input program must be identical. Besides these topics, the thesis also deals with the design of methods for verifying the correctness of mechanisms that prevent data and control hazards within an instruction pipeline. All methods described in this thesis have been implemented in the form of several tools, and promising experimental results have been achieved by applying these tools to the verification of non-trivial processor designs.
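As a toy illustration of the kind of cross-abstraction equivalence query such methods discharge, the following sketch (not taken from the thesis; the use of the z3 SMT solver and the 16-bit split are assumptions for the example) checks that an ISA-level addition and a lower-level implementation agree on the programmer-visible result for all inputs.

```python
from z3 import BitVec, BitVecVal, If, LShR, Solver, sat

def spec_add(r1, r2):
    # Architectural (ISA-level) view: plain 32-bit addition.
    return r1 + r2

def impl_add(r1, r2):
    # Lower-level view: two 16-bit halves added with an explicit carry.
    lo = (r1 & 0xFFFF) + (r2 & 0xFFFF)
    carry = If(lo > BitVecVal(0xFFFF, 32), BitVecVal(1, 32), BitVecVal(0, 32))
    hi = (LShR(r1, 16) & 0xFFFF) + (LShR(r2, 16) & 0xFFFF) + carry
    return ((hi & 0xFFFF) << 16) | (lo & 0xFFFF)

r1, r2 = BitVec('r1', 32), BitVec('r2', 32)
s = Solver()
s.add(spec_add(r1, r2) != impl_add(r1, r2))  # look for a distinguishing input
if s.check() == sat:
    print("counterexample:", s.model())       # the two views differ
else:
    print("equivalent on all 2**64 input pairs")
```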
854.
High-frequency statistics for Gaussian processes from a Le Cam perspective
Holtz, Sebastian, 04 March 2020
This work studies inference on scaling parameters of a conditionally Gaussian process under discrete noisy observations in a high-frequency regime. Our aim is to find an asymptotic characterisation of efficient estimation in a general Gaussian framework.
For a parametric basic-case model, a Hájek-Le Cam convolution theorem is derived, yielding an exact asymptotic lower bound for estimators. Matching upper bounds are constructed, and the importance of the theorem is illustrated by various examples of interest such as the (fractional) Brownian motion, the Ornstein-Uhlenbeck process and integrated processes. The derivation of the efficiency result is based on asymptotic equivalences and can be employed for several generalisations of the parametric basic-case model.
As one such extension, we consider estimation of the quadratic covariation of a continuous martingale from noisy asynchronous observations, a fundamental estimation problem in econometrics. For this model, a semi-parametric convolution theorem is obtained which generalises existing results in terms of multidimensionality, asynchronicity and assumptions.
Based on the preceding derivations, we develop statistical tests on the Hurst parameter of a fractional Brownian motion. A score test and a likelihood-ratio-type test are implemented and analysed, and first empirical impressions are given.
855.
Redukce nedeterministických konečných automatů / Reduction of the Nondeterministic Finite Automata
Procházka, Lukáš, January 2011
The nondeterministic finite automaton is an important tool used for processing strings in many different areas of programming. Reducing its size is important for increasing program efficiency. However, this problem is computationally hard, so new techniques need to be explored. This work describes the basics of finite automata and then introduces some methods for their reduction. Usable reduction algorithms are described in greater detail, implemented and tested, and the test results are evaluated.
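As a taste of the simplest such reduction (the thesis evaluates stronger ones, e.g. simulation-based state merging), the following sketch trims states that are unreachable from the initial states or cannot reach a final state; this never changes the accepted language. The NFA encoding used here is an assumption for the example.

```python
from collections import deque

def trim_nfa(delta, initial, final):
    # delta: dict mapping (state, symbol) -> set of successor states;
    # initial, final: sets of states. Removes 'useless' states: those
    # unreachable from an initial state or unable to reach a final state.
    reach = set(initial)                  # forward reachability
    todo = deque(reach)
    while todo:
        q = todo.popleft()
        for (p, _), succs in delta.items():
            if p == q:
                for r in succs - reach:
                    reach.add(r)
                    todo.append(r)
    live = set(final) & reach             # backward (co-)reachability
    todo = deque(live)
    while todo:
        q = todo.popleft()
        for (p, _), succs in delta.items():
            if p in reach and p not in live and q in succs:
                live.add(p)
                todo.append(p)
    new_delta = {(p, a): succs & live
                 for (p, a), succs in delta.items()
                 if p in live and succs & live}
    return live, new_delta, set(initial) & live, set(final) & live

# 'd' is unreachable and 'e' is dead; both are removed:
delta = {('a', '0'): {'b', 'e'}, ('b', '1'): {'c'}, ('d', '0'): {'c'}}
print(trim_nfa(delta, initial={'a'}, final={'c'}))
# -> ({'a', 'b', 'c'}, {('a', '0'): {'b'}, ('b', '1'): {'c'}}, {'a'}, {'c'})
```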
856.
Explicit Calculations of Siu’s Effective Termination of Kohn’s Algorithm and the Hachtroudi-Chern-Moser Tensors in CR Geometry / Calculs explicites pour la terminaison effective de l'algorithme de Kohn d'après Siu, et tenseurs de Hachtroudi-Chern-Moser en géométrie CR
Foo, Wei Guo, 14 March 2018
The first part of the thesis presents explicit calculations around Siu's effective termination of Kohn's algorithm. The second part studies real hypersurfaces in complex spaces and computes various explicit invariants using Cartan's equivalence method in order to determine the CR-umbilical loci.
857.
Talets och undertextens olika nyanser : En undersökning av strykningar utav modalitet och uttryck för värderingar i adaptionen från tal till undertext / The different nuances of speech and subtitles : An analysis of omissions regarding modality and expressions of valuation in the shift between speech and subtitles
Adolfsson, Linnea, January 2020
Communication regarding the new coronavirus raises questions about accessibility and plain language. In Sweden, approximately 1.5 million people need this communication in written form due to, inter alia, hearing impairment. Although subtitles are considered one of the most widely read genres today, they have received little attention in Swedish linguistic research, especially intralingual subtitles. Communication through subtitles is constrained, however, and can lead to information loss. In Sweden, SVT, as a public service broadcaster, has special requirements to maintain good quality in subtitles, yet omissions are a prerequisite; they must, however, never cause the loss of important information. Within Systemic Functional Linguistics, modality is regarded as a set of linguistic tools for shifting communication in different degrees and directions. A similar shift in degree can occur in expressions of valuation and opinion. Modality and valuation are therefore interesting and important to study in a well-known TV show that communicates, informs and debates about a societal crisis such as the coronavirus. This paper examines modality and expressions of valuation in the adaptation from speech to subtitles in the Swedish news program Agenda (SVT) reporting on the coronavirus in Sweden and the world. Using the subtitling-shift model created by Sahlin (2001), I examine omissions of modality and expressions of valuation. The results show that the communication in subtitles places weaker emphasis on conflicting opinions, is more objectively constructed, and shifts in nuance.
858.
Sur quelques problèmes de reconstruction en imagerie MA-TIRF et en optimisation parcimonieuse par relaxation continue exacte de critères pénalisés en norme-l0 / On some reconstruction problems in MA-TIRF imaging and in sparse optimization using continuous exact relaxation of l0-penalized criteria
Soubies, Emmanuel, 14 October 2016
This thesis is devoted to two problems encountered in signal and image processing. The first concerns the 3D reconstruction of biological structures from multi-angle total internal reflection fluorescence microscopy (MA-TIRF). Within this context, we propose to tackle the inverse problem with a variational approach and analyze the effect of the regularization. A set of simple experiments is then proposed to both calibrate the system and validate the model used. The proposed method was shown to reconstruct precisely a phantom sample of known geometry over a 400 nm depth layer, to co-localize two fluorescent molecules used to mark the same biological structures, and to observe known biological phenomena, all with an axial resolution on the order of 20 nm. The second part of this thesis considers more precisely the l0 regularization and the minimization of the penalized least-squares criterion (l2-l0) within the context of exact continuous relaxations of this functional. We first propose the Continuous Exact l0 (CEL0) penalty, leading to a relaxation of the l2-l0 functional which preserves its global minimizers and for which, from each local minimizer, a local minimizer of l2-l0 can be defined by simple thresholding. Moreover, we show that this relaxation eliminates some local minimizers of the initial functional. The minimization of this functional with nonsmooth nonconvex algorithms is then used in various applications, showing the interest of minimizing the relaxation in contrast to a direct minimization of the l2-l0 criterion. Finally, we propose a unified view of the continuous penalties of the literature within this exact-reformulation framework.
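For reference, the criterion and its relaxation discussed above can be written out. The formulas below follow the CEL0 publication associated with this thesis (Soubies, Blanc-Féraud and Aubert, SIAM Journal on Imaging Sciences, 2015) as best recalled here, and should be checked against it; A is the observation matrix with columns a_i and lambda > 0 the regularization weight.

```latex
% The l2-l0 criterion for data d:
\[
  G_{\ell_0}(x) \;=\; \tfrac{1}{2}\,\lVert Ax - d\rVert_2^2 \;+\; \lambda\,\lVert x\rVert_0 ,
\]
% and the CEL0 penalty replacing the term lambda * ||x||_0:
\[
  \Phi_{\mathrm{CEL0}}(x) \;=\; \sum_{i=1}^{N} \lambda
  \;-\; \frac{\lVert a_i\rVert^2}{2}
  \left( \lvert x_i\rvert - \frac{\sqrt{2\lambda}}{\lVert a_i\rVert} \right)^{2}
  \mathbf{1}_{\left\{ \lvert x_i\rvert \le \sqrt{2\lambda}/\lVert a_i\rVert \right\}} .
\]
% This relaxation preserves the global minimizers of G_{l0}, and every
% local minimizer of the relaxation maps to a local minimizer of G_{l0}
% by simple thresholding.
```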
859.
Sebekontrola žáků jako pedagogické téma: problematika měření pomocí sebehodnoticích dotazníků / Student self-control as an educational topic: the issue of measurement using self-report questionnaires
Papajoanu, Ondřej, January 2019
Self-control is a key non-cognitive skill, frequently measured using self-report questionnaires with rating-scale items. Such data, however, can be hindered by differences in scale usage among respondents, which may lead to erroneous conclusions when comparing different groups of respondents. The aim of the thesis is to analyze the differences in self-control among students from different Czech upper-secondary schools based on their unadjusted self-reports and on self-reports adjusted for differences in scale usage using the anchoring vignette method. The empirical part of the thesis comprises two studies. In the first (pilot) study, we translate a scale measuring self-control, create anchoring vignettes, and verify the method's basic assumptions using data from questionnaires and interviews with students. In the second (main) study, we compare the unadjusted and adjusted self-reports of self-control and the assessments of the vignettes between students from selected upper-secondary schools of different types in Prague (N = 312). We found differences in evaluation standards between students from different types of schools. Differences in scale usage among respondents thus represent a real threat when comparing student self-reports of self-control...
860.
Metoda ukotvujících vinět a její využití pro zvyšování komparability sebehodnocení vědomostí a dovedností v oblasti ICT / The Anchoring Vignette Method and its use for increasing the comparability of self-assessments of ICT knowledge and skills
Hrabák, Jan, January 2020
This dissertation deals with the use of the Anchoring Vignette Method in educational research carried out to establish the level of information and communication technology (ICT) knowledge and skills, with a focus on Czech upper-secondary school students. The theoretical part describes the curricular documents that define ICT knowledge and skills; in the Czech Republic these are mainly the Framework Educational Programmes. Attention is also paid to the international DigComp framework. The Anchoring Vignette Method, with a focus on its nonparametric variant, is also described in detail. The author further provides an overview of the available Czech and foreign literature on ICT knowledge and skills research, including the International Computer and Information Literacy Study (ICILS, 2013 and 2018), and on the use of the Anchoring Vignette Method in educational research. The empirical part describes the steps taken in formulating anchoring vignettes, an integral part of the Anchoring Vignette Method, in accordance with the curricular documents of the Czech Republic, and the steps taken in formulating anchoring vignettes on the basis of the international DigComp framework. The verification of the fulfilment of the Anchoring Vignette Method assumptions (vignette...
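The nonparametric recoding at the heart of the method (King et al., 2004) is easy to state in code. The sketch below is a hypothetical illustration, not code from the dissertation, and handles only respondents who order the vignettes consistently; real data require interval-valued codes for ties and order violations.

```python
def anchor_recode(self_rating, vignette_ratings):
    # Nonparametric anchoring-vignette recode (after King et al., 2004).
    # vignette_ratings: the respondent's ratings of J vignettes, listed
    # from the lowest to the highest intended level. Returns a value on
    # the common 1..2J+1 scale, relative to the respondent's own anchors.
    C = 1
    for z in vignette_ratings:
        if self_rating < z:
            return C        # strictly below this vignette
        if self_rating == z:
            return C + 1    # tied with this vignette
        C += 2
    return C                # above all vignettes -> 2J+1

# Two respondents with the same raw self-rating (3 on a 1-5 scale) but
# different scale usage, each rating J = 2 hypothetical vignettes:
print(anchor_recode(3, [2, 4]))  # -> 3: between the two vignettes
print(anchor_recode(3, [1, 2]))  # -> 5: above both vignettes
```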