1

Bringing the history of fashion up-to-date; towards a model for temporal adaptation in translation.

Svanberg, Kerstin January 2012 (has links)
In cultural adaptation, the translator stands on solid theoretical ground; scholars have elaborated strategies that are helpful to this effect. However, there is little research, if any, to rely upon in the matter of temporal adaptation. The aim of this paper is to fill that gap. The primary data used in this translational study consists of an English source text published in 2008 and the resulting target text, translated into Swedish in 2012. Hence, for the target text to function in its time, there was a four-year gap to fill with accurate and relevant data, in a style that would not deviate from the author’s original intentions; the target text needed to be temporally adapted. In what follows, I suggest a set of strategies for temporal adaptation. The model is elaborated with strategies for cultural adaptation as a starting point and is based upon the measures taken to relocate the target text to 2012. The suggested strategies are time bridging, updating, adjustment and omission. These four strategies make up the model that I put forward to bridge the theoretical gap that seems to prevail in the matter of temporal adaptation. However, considering that the data used in this study was relatively limited, testing the wider applicability of the strategies may be the scope of future studies.
2

Automated Configuration of Time-Critical Multi-Configuration AUTOSAR Systems

Chandmare, Kunal 28 September 2017 (has links) (PDF)
The vision of automated driving demands a highly available system, especially for safety-critical functionalities. In automated driving, where the driver is not required to be part of the control loop, the system needs to remain operational even after the failure of a critical component, until the driver regains control of the vehicle. In pursuit of such fail-operational behavior, the developed design process, which relies on software redundancy in contrast to a conventional dedicated backup, requires the support of an automatic configurator for scheduling-relevant parameters to ensure the real-time behavior of the system. Multiple implementation methods are introduced to provide an automatic service that also considers task criticality before assigning tasks to processors. In addition, a generic method is developed to automatically generate adaptation plans for an existing monitoring and reconfiguration service, so that it can handle environments in which faults occur.
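The criticality-aware task assignment mentioned above can be sketched as a simple greedy heuristic. Everything below — the task names, the numeric criticality levels, and the least-loaded-core policy — is an illustrative assumption, not the thesis's actual configurator:

```python
def assign_tasks(tasks, n_cores):
    """Assign (name, criticality, wcet) tasks to cores: higher-criticality
    tasks are placed first, each on the currently least-loaded core."""
    loads = [0.0] * n_cores
    placement = {}
    # Place tasks in decreasing criticality, breaking ties by longer WCET.
    for name, crit, wcet in sorted(tasks, key=lambda t: (-t[1], -t[2])):
        core = loads.index(min(loads))  # pick the least-loaded core
        placement[name] = core
        loads[core] += wcet
    return placement, loads

# Hypothetical task set: (name, criticality level, worst-case execution time).
tasks = [("brake", 4, 2.0), ("lane", 3, 1.5), ("infotain", 1, 3.0), ("radar", 4, 1.0)]
placement, loads = assign_tasks(tasks, 2)
```

A real AUTOSAR configurator would of course also check schedulability and timing constraints; this sketch only shows the criticality-first ordering idea.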
3

Techniques for Efficient Execution of Large-Scale Scientific Workflows in Distributed Environments

Kalayci, Selim 14 November 2014 (has links)
Scientific exploration demands heavy usage of computational resources for large-scale and deep analysis in many different fields. The complexity or the sheer scale of a computational study can sometimes be encapsulated in the form of a workflow that is made up of numerous dependent components. Due to its decomposable and parallelizable nature, different components of a scientific workflow may be mapped over a distributed resource infrastructure to reduce time to results. However, the resource infrastructure may be heterogeneous, dynamic, and under diverse administrative control. Workflow management tools are utilized to help manage the various aspects of the lifecycle of such complex applications. One particular and fundamental aspect that has to be handled as smoothly and efficiently as possible is the run-time coordination of workflow activities (i.e., workflow orchestration). Our efforts in this study are focused on improving the workflow orchestration process in such dynamic and distributed resource environments. We tackle three main aspects of this process and provide contributions in each of them. Our first contribution involves increasing scalability and site autonomy in situations where the mapped components of a workflow span several heterogeneous administrative domains. We devise and implement a generic decentralization framework for the orchestration of workflows under such conditions. Our second contribution addresses the issues that arise due to the dynamic nature of such environments. We provide generic adaptation mechanisms that are highly transparent and also substantially less intrusive with respect to the rest of the workflow in execution. Our third contribution improves the efficiency of orchestration of large-scale parameter-sweep workflows. By exploiting their specific characteristics, we provide generic optimization patterns that are applicable to most instances of such workflows.
We also discuss implementation issues and details that arise as we provide our contributions in each situation.
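At its core, run-time coordination of dependent workflow components means releasing each task only once all of its prerequisites have finished. A minimal, centralized sketch using Kahn's topological sort is shown below; the thesis's decentralized framework is far more elaborate, and the task names here are hypothetical:

```python
from collections import deque

def topological_order(deps):
    """deps: task -> set of prerequisite tasks. Return an execution order
    in which every task runs after all its prerequisites (Kahn's algorithm)."""
    indeg = {t: len(p) for t, p in deps.items()}
    children = {t: [] for t in deps}
    for t, ps in deps.items():
        for p in ps:
            children[p].append(t)
    ready = deque(sorted(t for t, d in indeg.items() if d == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in children[t]:       # releasing t may unblock its children
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    if len(order) != len(deps):
        raise ValueError("workflow graph has a cycle")
    return order

# Hypothetical five-step analysis workflow.
wf = {"fetch": set(), "clean": {"fetch"}, "analyze": {"clean"},
      "plot": {"analyze"}, "report": {"analyze", "plot"}}
order = topological_order(wf)
```

In a distributed setting, the interesting part is precisely that no single site holds `deps` in full — which is what motivates the decentralization framework the abstract describes.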
5

Deep Learning Approaches to Low-level Vision Problems

Liu, Huan January 2022 (has links)
Recent years have witnessed tremendous success in using deep learning approaches to handle low-level vision problems. Most deep learning based methods address a low-level vision problem by training a neural network to approximate the mapping from the inputs to the desired ground truths. However, directly learning this mapping is usually difficult and cannot achieve ideal performance. Besides, under the setting of unsupervised learning, the general deep learning approach cannot be used. In this thesis, we investigate and address several problems in low-level vision using the proposed approaches. To learn a better mapping from the existing data, an indirect domain shift mechanism is proposed to add explicit constraints inside the neural network for single image dehazing. This allows the neural network to be optimized across several identified neighbours, resulting in better performance. Despite the success of the proposed approaches in learning an improved mapping from the inputs to the targets, three problems of unsupervised learning are also investigated. For unsupervised monocular depth estimation, a teacher-student network is introduced to strategically integrate the benefits of both supervised and unsupervised learning. The teacher network is formed by learning under the binocular depth estimation setting, and the student network is constructed as the primary network for monocular depth estimation. Observing that the performance of the teacher network is far better than that of the student network, a knowledge distillation approach is proposed to help improve the mapping learned by the student. For single image dehazing, the current network cannot handle different types of haze patterns as it is trained on a particular dataset. The problem is formulated as a multi-domain dehazing problem. To address this issue, a test-time training approach is proposed that leverages a helper network to assist the dehazing network in adapting to a particular domain using self-supervision. In a lossy compression system, the target distribution can be different from that of the source, and ground truths are not available for reference. Thus, the objective is to transform the source to the target under a rate constraint, which generalizes optimal transport. To address this problem, theoretical analyses of the trade-off between the compression rate and the minimal achievable distortion are carried out for the cases with and without common randomness. A deep learning approach is also developed using our theoretical results for addressing super-resolution and denoising tasks. Extensive experiments and analyses have been conducted to demonstrate the effectiveness of the proposed deep learning based methods in handling problems in low-level vision. / Thesis / Doctor of Philosophy (PhD)
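The teacher-student distillation step can be illustrated with the standard temperature-softened KL objective. This is a generic Hinton-style sketch, not the thesis's exact formulation, and the logits below are invented:

```python
import math

def softmax(zs, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    m = max(z / T for z in zs)                    # subtract max for stability
    exps = [math.exp(z / T - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) between temperature-softened distributions,
    the standard knowledge-distillation objective."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical two-class logits: the student is pulled toward the teacher.
loss = distill_loss([1.0, 0.0], [2.0, -1.0], T=2.0)
```

The loss is zero exactly when the student reproduces the teacher's softened distribution, which is what makes it usable as a training signal when ground-truth depth is unavailable.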
6

Label-Efficient Visual Understanding with Consistency Constraints

Zou, Yuliang 24 May 2022 (has links)
Modern deep neural networks are proficient at solving various visual recognition and understanding tasks, as long as a sufficiently large labeled dataset is available during training. However, progress on these visual tasks is limited by the number of manual annotations. On the other hand, it is usually time-consuming and error-prone to annotate visual data, making human labeling hard to scale for many visual tasks. Fortunately, it is easy to collect large-scale, diverse unlabeled visual data from the Internet. We can also acquire a large amount of synthetic visual data with annotations from game engines effortlessly. In this dissertation, we explore how to utilize unlabeled data and synthetic labeled data for various visual tasks, aiming to replace or reduce the direct supervision from manual annotations. The key idea is to encourage deep neural networks to produce consistent predictions across different transformations (e.g., geometric, temporal, photometric, etc.). We organize the dissertation as follows. In Part I, we propose using consistency over different geometric formulations and a cycle consistency over time to tackle low-level scene geometry perception tasks in a self-supervised learning setting. In Part II, we tackle high-level semantic understanding tasks in a semi-supervised learning setting, with the constraint that different augmented views of the same visual input maintain consistent semantic information. In Part III, we tackle the cross-domain image segmentation problem. By encouraging an adaptive segmentation model to output consistent results for a diverse set of strongly-augmented synthetic data, the model learns to perform test-time adaptation on unseen target domains with one single forward pass, without model training or optimization at inference time. / Doctor of Philosophy / Recently, deep learning has emerged as one of the most powerful tools for solving various visual understanding tasks. However, the development of deep learning methods is significantly limited by the amount of manually labeled data. On the other hand, it is usually time-consuming and error-prone to annotate visual data, making the human labeling process not easily scalable. Fortunately, it is easy to collect large-scale, diverse raw visual data from the Internet (e.g., search engines, YouTube, Instagram, etc.). We can also acquire a large amount of synthetic visual data with annotations from game engines effortlessly. In this dissertation, we explore how we can utilize raw visual data and synthetic data for various visual tasks, aiming to replace or reduce the direct supervision from manual annotations. The key idea behind this is to encourage deep neural networks to produce consistent predictions for the same visual input across different transformations (e.g., geometric, temporal, photometric, etc.). We organize the dissertation as follows. In Part I, we propose using consistency over different geometric formulations and a forward-backward cycle consistency over time to tackle low-level scene geometry perception tasks, using unlabeled visual data only. In Part II, we tackle high-level semantic understanding tasks using a small amount of labeled data and a large amount of unlabeled data jointly, with the constraint that different augmented views of the same visual input maintain consistent semantic information. In Part III, we tackle the cross-domain image segmentation problem. By encouraging an adaptive segmentation model to output consistent results for a diverse set of strongly-augmented synthetic data, the model learns to perform test-time adaptation on unseen target domains.
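The consistency constraint across augmented views can be sketched as a simple loss that penalizes disagreement between two predictions for the same input. The class distributions below are hypothetical, and the dissertation's actual objectives involve richer geometric and temporal consistency terms:

```python
def mse_consistency(p1, p2):
    """Mean squared difference between two predicted class distributions,
    a common consistency loss between augmented views of one input."""
    return sum((a - b) ** 2 for a, b in zip(p1, p2)) / len(p1)

# Hypothetical softmax outputs for two differently augmented views.
view_a = [0.7, 0.2, 0.1]
view_b = [0.6, 0.3, 0.1]
loss = mse_consistency(view_a, view_b)
```

Minimizing this loss on unlabeled data pushes the network toward transformation-invariant predictions, which is the supervision-free training signal the abstract describes.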
7

Modélisation de l’engagement et de la charge mentale de travail dans les Systèmes Tutoriels Intelligents

Chaouachi, Maher 09 1900 (has links)
Les récents avancements en sciences cognitives, psychologie et neurosciences, ont démontré que les émotions et les processus cognitifs sont intimement reliés. Ce constat a donné lieu à une nouvelle génération de Systèmes Tutoriels Intelligents (STI) dont la logique d’adaptation repose sur une considération de la dimension émotionnelle et affective de l’apprenant. Ces systèmes, connus sous le nom de Systèmes Tutoriels Émotionnellement Intelligents (STEI), cherchent à se doter des facultés des tuteurs humains dans leurs capacités à détecter, comprendre et s’adapter intuitivement en fonction de l’état émotionnel des apprenants. Toutefois, en dépit du nombre important de travaux portant sur la modélisation émotionnelle, les différents résultats empiriques ont démontré que les STEI actuels n’arrivent pas à avoir un impact significatif sur les performances et les réactions émotionnelles des apprenants. Ces limites sont principalement dues à la complexité du concept émotionnel qui rend sa modélisation difficile et son interprétation ambiguë. Dans cette thèse, nous proposons d’augmenter les STEI des indicateurs d’états mentaux d’engagement et de charge mentale de travail. Ces états mentaux ont l’avantage d’englober à la fois une dimension affective et cognitive. Pour cela, nous allons, dans une première partie, présenter une approche de modélisation de ces indicateurs à partir des données de l’activité cérébrale des apprenants. Dans une seconde partie, nous allons intégrer ces modèles dans un STEI capable d’adapter en temps réel le processus d’apprentissage en fonction de ces indicateurs. / Recent advances in cognitive science, psychology and neuroscience have shown that emotions and cognitive processes are closely intertwined. This fact has given rise to a new generation of Intelligent Tutoring Systems (ITS) whose adaptive logic is based on the consideration of the learner’s emotional and affective dimension. 
These systems, known as Emotionally Intelligent Tutoring Systems (EITS), seek to acquire human tutors’ ability to detect, understand and intuitively adapt to the learners’ emotional state. However, despite the large body of work on emotional modeling, several empirical results have shown that current EITS fail to have a significant impact on learners’ performance and emotional reactions. These limitations are mainly due to the complexity of emotion as a concept, which makes its modeling difficult and its interpretation ambiguous. In this thesis, we propose to augment EITS with indicators of the mental states of engagement and mental workload. These mental states have the advantage of encompassing both affective and cognitive dimensions. To this end, we first present an approach to modeling these indicators from learners’ brain activity data. We then integrate these models into an EITS able to adapt the learning process in real time according to these indicators.
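As a rough illustration of a mental-state indicator derived from brain activity, the classic EEG engagement index of Pope et al. (beta power divided by alpha plus theta power) can be computed from band powers. This index is only one standard indicator, not the thesis's actual models, and the band-power values below are invented:

```python
def engagement_index(theta, alpha, beta):
    """Classic EEG engagement index beta / (alpha + theta); higher
    values are commonly read as higher task engagement."""
    return beta / (alpha + theta)

# Hypothetical band powers at two moments of a tutoring session:
# strong alpha/theta (relaxed) vs. strong beta (focused).
relaxed = engagement_index(theta=6.0, alpha=8.0, beta=3.5)
focused = engagement_index(theta=4.0, alpha=5.0, beta=6.0)
```

A tutoring system in the spirit of the thesis would track such indicators over time and adjust exercise difficulty or feedback when engagement drops or workload spikes.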
8

Adaptation anisotrope précise en espace et temps et méthodes d’éléments finis stabilisées pour la résolution de problèmes de mécanique des fluides instationnaires / Space-Time accurate anisotropic adaptation and stabilized finite element methods for the resolution of unsteady CFD problems

El Jannoun, Ghina 22 September 2014 (has links)
Aujourd'hui, avec l'amélioration des puissances de calcul informatique, la simulation numérique est devenue un outil essentiel pour la prédiction des phénomènes physiques et l'optimisation des procédés industriels. La modélisation de ces phénomènes pose des difficultés scientifiques car leur résolution implique des temps de calcul très longs malgré l'utilisation d'importantes ressources informatiques. Dans cette thèse, on s'intéresse à la résolution de problèmes complexes couplant écoulements et transferts thermiques. Les problèmes physiques étant fortement anisotropes, il est nécessaire d'avoir un maillage avec une résolution très élevée pour obtenir un bon niveau de précision. Cela implique de longs temps de calcul. Ainsi, il faut trouver un compromis entre précision et efficacité. Le développement de méthodes d'adaptation en temps et en espace est motivé par la volonté de traiter des applications réelles et de limiter les inconvénients inhérents aux méthodes de résolution non adaptatives en termes de précision et d'efficacité. La résolution de problèmes multi-échelles instationnaires sur un maillage uniforme avec un nombre de degrés de liberté limité est souvent incapable de capturer les petites échelles, nécessite des temps de calcul longs et peut aboutir à des résultats incorrects. Ces difficultés ont motivé le développement de méthodes de raffinement local offrant une meilleure précision aux endroits adéquats. L'adaptation en temps et en espace peut donc être considérée comme une composante essentielle de ces méthodes. L'approche choisie dans cette thèse consiste en l'utilisation de méthodes éléments finis stabilisées et le développement d'outils d'adaptation espace-temps pour améliorer la précision et l'efficacité des simulations numériques. Le développement de la méthode adaptative est basé sur un estimateur d'erreur sur les arêtes du maillage afin de localiser les régions du domaine de calcul présentant de forts gradients ainsi que les couches limites.
Ensuite, une métrique décrivant la taille de maille en chaque noeud dans les différentes directions est calculée. Afin d'améliorer l'efficacité des calculs, la construction de cette métrique prend en compte un nombre fixe de noeuds et aboutit à une répartition et une orientation optimales des éléments du maillage. Cette approche est étendue à une formulation espace-temps où les maillages et les pas de temps optimaux sont prédits sur des intervalles de temps en vue de contrôler l'erreur d'interpolation sur le domaine de calcul. / Nowadays, with the increase in computational power, numerical modeling has become an intrinsic tool for predicting physical phenomena and developing engineering designs. The modeling of these phenomena poses scientific complexities whose resolution requires considerable computational resources and long-lasting calculations. In this thesis, we are interested in the resolution of complex, long-time and large-scale heat transfer and fluid flow problems. When the physical phenomena exhibit sharp anisotropic features, a good level of accuracy requires a high mesh resolution, hindering the efficiency of the simulation. Therefore, a compromise between accuracy and efficiency must be adopted. The development of space and time adaptation techniques was motivated by the desire to handle realistic configurations and to limit the shortcomings of traditional non-adaptive resolutions in terms of solution accuracy and computational efficiency. Indeed, the resolution of unsteady problems with multi-scale features on a prescribed uniform mesh with a limited number of degrees of freedom often fails to capture the fine-scale physical features, incurs excessive computational cost and may produce incorrect results. These difficulties have prompted investigations into generating meshes with local refinement where higher resolution is needed. Space and time adaptation can thus be regarded as essential ingredients of such methods. The approach followed in this work consists in applying stabilized finite element methods and developing space and time adaptive tools to enhance the accuracy and efficiency of the numerical simulations. The derivation process starts with an edge-based error estimation for locating the regions of the computational domain presenting sharp gradients and inner and boundary layers. This is followed by the construction of nodal metric tensors that prescribe, at each node of the spatial mesh, mesh sizes and the directions along which these sizes are to be imposed. In order to improve the efficiency of the computations, this construction takes into account a fixed number of nodes and generates an optimal distribution and orientation of the mesh elements. The approach is extended to a space-time adaptation framework, whereby optimal meshes and time-step sizes for slabs of time are constructed with a view to controlling the global interpolation error over the computational domain.
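The step from an edge-based error estimate to prescribed mesh sizes can be illustrated in an isotropic, one-dimensional simplification: with P1 interpolation error scaling as h², equidistributing a target tolerance gives h_new = h·√(tol/err). The full method instead builds anisotropic nodal metric tensors under a fixed node budget; the sizes and errors below are illustrative:

```python
def adapted_sizes(h, err, tol):
    """Given current edge sizes h[i] and estimated interpolation errors
    err[i], return target sizes that equidistribute the tolerance,
    using the P1 scaling err ~ h**2, i.e. h_new = h * sqrt(tol / err)."""
    return [hi * (tol / ei) ** 0.5 for hi, ei in zip(h, err)]

# Three edges of equal size: over-refined error on the right,
# under-resolved error on the left.
h = [0.1, 0.1, 0.1]
err = [4e-4, 1e-4, 2.5e-5]
new_h = adapted_sizes(h, err, tol=1e-4)
```

Edges whose error exceeds the tolerance are shrunk and edges whose error is below it are coarsened, which is exactly the precision/efficiency trade-off the abstract describes, here without the directional (metric) information.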
