  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Individual behaviours when facing health risk and their aggregate impacts on the society / Les choix individuels en face du risque de santé et leur impact agrégé sur l’ensemble de l'économie

Shang, Ze Zhong 03 May 2019 (has links)
During the past two decades, health has played an increasingly important role in the economy: on the one hand, we observe a significant improvement in average lifespan across the globe; on the other hand, health expenditures are also rising sharply, straining the public health systems of many countries. In this dissertation, we begin with these two stylized facts but show that there is more to the story: although the average level of health has improved significantly, health-related inequalities are still reported and actually tend to increase. Moreover, countries with large health expenditures generally perform poorly in terms of health-system efficiency. In short, the improvement in human health appears to benefit mostly those who can pay the bills. To identify the causes of this phenomenon and propose practical remedies, we take two approaches. First, a deterministic and more theoretical approach: we build a model based on Grossman's health capital model and examine how agents behave when their health fluctuates. Second, a stochastic and more practical approach: we use a Markov chain to simulate real health risk and examine the decisions of individuals of different socio-economic status (SES) under this risk. We then aggregate these decisions to assess the impact generated on the whole economy, and finally examine how these decisions are affected by public policies.
62

Diferenciace výuky anglického jazyka na prvním stupni základní školy / Differentiation in English language teaching at the elementary school

Juránková, Martina January 2019 (has links)
This diploma thesis, 'Differentiation in English language teaching at the elementary school', deals with employing differentiated instruction in a fifth-grade class at an elementary school in order to make learning more effective by balancing the needs of the stronger and the weaker students. The theoretical part focuses on learning and effective learning with an emphasis on English language teaching, mixed-ability classes, learner differences, and differentiated instruction. The final chapter of this part proposes a set of example activities and ideas that might be used in mixed-ability classes. The practical part uses action research to determine whether differentiated instruction was employed successfully; the outcomes of the research are discussed at the end of this part. The expectation is that once the research is finished, the situation in the class will improve, since all students will feel adequately engaged in the lessons.
63

Automatic Parallelization for Heterogeneous Embedded Systems / Parallélisation automatique pour systèmes hétérogènes embarqués

Diarra, Rokiatou 25 November 2019 (has links)
Recent years have seen an increase in heterogeneous architectures combining multi-core CPUs with accelerators such as GPUs, FPGAs, and the Intel Xeon Phi. GPUs can achieve significant performance for certain categories of applications. Nevertheless, achieving this performance with low-level APIs such as CUDA and OpenCL requires rewriting the sequential code, a good knowledge of the GPU architecture, and complex optimizations that are sometimes not portable. Directive-based programming models (e.g., OpenACC, OpenMP), on the other hand, offer a high-level abstraction of the underlying hardware, simplifying code maintenance and improving productivity: users accelerate their sequential codes on the GPU simply by inserting directives. OpenACC/OpenMP compilers then have the daunting task of applying the necessary optimizations from the user-provided directives and generating code that exploits the GPU architecture efficiently. Although these compilers are mature and can apply some optimizations automatically, the generated code may not achieve the expected speedup, because the compiler lacks a full view of the whole application. There is therefore generally a significant performance gap between codes accelerated with OpenACC/OpenMP and those hand-optimized with CUDA/OpenCL. To help programmers efficiently accelerate their legacy sequential codes on GPUs with directive-based models, and to broaden the impact of OpenMP/OpenACC in both academia and industry, this dissertation addresses several research issues. We investigated the OpenACC and OpenMP programming models and proposed an effective methodology for parallelizing applications with directive-based approaches. Our application-porting experience revealed that simply inserting OpenMP/OpenACC offloading directives to inform the compiler that a particular code region must be compiled for GPU execution is not enough: offloading directives must be combined with loop-parallelization constructs. Although current compilers are mature and perform several optimizations, the user can supply additional information through the clauses of these constructs to obtain better-optimized code. We also highlight the challenge of choosing a good loop schedule: the schedule chosen by default by the compiler may not produce the best performance, so the user has to try different schedules manually. We demonstrate that the OpenMP and OpenACC programming models can achieve good performance with less programming effort, but OpenMP/OpenACC compilers quickly reach their limits when the offloaded region is compute- or memory-bound and contains several nested loops; in such cases, low-level languages must be used. Finally, we discuss the pointer-aliasing problem in GPU codes and propose two static analysis tools that automatically insert type qualifiers and perform scalar replacement at source level.
64

An Analytical Nodal Discrete Ordinates Solution to the Transport Equation in Cartesian Geometry

Rocheleau, Joshua 07 October 2020 (has links)
No description available.
65

Spectroscopic Studies and Reaction Mechanisms of Small Molecule Oxidation over Metal Oxide-Supported Catalysts

Sapienza, Nicholas Severino 02 January 2024 (has links)
Chemical warfare agents are a toxic class of compounds that are incredibly harmful to human health. Methods of detoxification and decontamination exist, but they either pose logistical transport problems or rely on technologies that address liquid threats rather than vapors. One promising method of detoxification is the oxidation of these compounds into less harmful species. The relatively large size and chemical complexity of modern chemical warfare agents, however, preclude a straightforward analysis of the transformations that take place on novel decontaminating materials, and a fundamental understanding of the reaction mechanisms occurring on these surfaces is required before improved materials can be developed. To this end, the oxidation of three smaller, simpler organic molecules was studied over a variety of materials in order to build up a chemical understanding of the systems involved. The photoepoxidation of propene into propene oxide was observed to occur readily over a dual titania-silica catalyst developed in-house by atomic layer deposition, as was the subsequent photoinduced degradation of the propene oxide produced. Next, the oxidation of CO was studied over a Pt/TiO2 catalyst in the presence of humidity. The addition of water was shown to enable an alternative, low-energy pathway that closely followed the water-gas shift but ended with the production of stable surface-bound formates; gaseous oxygen was found to subsequently oxidize these formates into the full oxidation product, CO2. The oxidation of methanol was then studied over the same Pt/TiO2 catalyst, and it was discovered that the water produced when methanol initially adsorbs to the catalyst surface is responsible for unlocking the oxidative capacity of the material.
Finally, a custom packed-bed reactor was designed and built that enables unique experimental capabilities not yet available in commercial systems and will be used in the future to directly test the oxidative capabilities of novel materials for chemical warfare agent destruction. / Doctor of Philosophy / The chemical interactions and reactions that occur between gases and surfaces are incredibly important for a multitude of technologies employed by governments, militaries, and citizens alike. The precise way in which gases interact with a material of interest determines whether that material can be used as a catalyst. Much like an automobile catalytic converter, which does not have to be replaced each time the vehicle is started, a catalyst can be used repeatedly without loss of function. Catalysts are unique in that they allow chemical reactions to proceed through alternative, lower-energy pathways that are more likely to occur under mild environmental conditions. To understand the chemical reactions that occur on a catalyst, a combination of specialized spectroscopic methods was used to track the precise chemical bonds formed or broken during reaction. A few model chemical reactions are explored in this work, ranging from the conversion of carbon monoxide into CO2 to the oxidation of methanol, a small alcohol commonly found in fuel cells. The experimental techniques employed herein allowed precise chemical mechanisms to be tracked, and the information gained will be useful for the design of next-generation materials in future research.
66

Rhéologie de suspensions hétérogènes concentrées : applications aux bols alimentaires et aux jus gastriques d'aliments solides. / Rheology of concentrated heterogeneous suspensions : applications to food bolus and gastric juices from solid meal

Patarin, Jeremy 12 December 2014 (has links)
The challenge of this work is the rheological characterization of concentrated suspensions of viscoplastic objects suspended in a viscoplastic matrix. Three structural constraints are faced: the heterogeneity of the particles, the size of the particles relative to the size of the system studied, and the proximity between the consistency of the suspending phase and that of the particles. In the food context, biological constraints are added: the short lifetime of the bolus, in vivo sampling, and the need to preserve temperature and humidity conditions. All these constraints lead to a specific rheometry, at the edge of continuum mechanics, aiming to measure relevant rheological properties. To carry out the rheological characterization quickly and as close as possible to where the samples are generated, two original devices were designed in this work. Applied to food boluses from cheese, rheology governs flows in the mouth and drives the creation of exchange surfaces during chewing. Through these interfaces, aromas and tastants are released and transported to the sensory receptors. The results show the relationship between the yield stress of the bolus and the release of aromas of different hydrophobicities, depending on the firmness and fat content of the cheese. In particular, the fluid suspending phase appears to play a major role in the creation and persistence of the exchange surfaces. Applied to gastric juices from bread, the question is whether changes in carbohydrate content can affect the rheology of the gastric juice, modify gastric-emptying kinetics, and reduce the glycemic excursion. The results show that adding amylose increases viscosity at long digestion times, once the rheology is no longer governed by the jamming of water-swollen bread particles. However, amylose does not appear to affect emptying kinetics, supporting the view that the outflow from the stomach is regulated by the caloric load of the meal.
67

A crise do Sistema Interamericano de Direitos Humanos: dinâmicas na posição brasileira sobre o caso / The Inter-american Human Rights System crisis: dynamics on the Brazilian position on the case

Araujo, Rodrigo de Souza 18 May 2017 (has links)
This work reviews and analyzes the various accounts of the so-called "Inter-American Human Rights System crisis" and the Brazilian position on it. The perspectives of the many agents involved reveal their conceptions of themselves, of the other actors, and of the crisis itself, as well as the changes in their strategies, allowing a glimpse of the political dynamics at work in so-called "high-profile cases". A brief history of the relations between Brazil and the IAHRS is presented, along with the cases that preceded the crisis. Agents from the Brazilian government, civil society, and the Inter-American Commission were interviewed in order to provide a broad overview of the case and its multiple perspectives. The text discusses these perspectives and the effects of the crisis on each of the actors analyzed, and proposes deconstructing the concept of the monolithic State for a better understanding of its relations with other institutions.
68

Ein generisches Abbildungsmodell für Stereokamerasysteme / A generic imaging model for stereo camera systems

Luber, Andreas 19 January 2015 (has links)
The application of perspective camera systems in photogrammetry and computer vision is state of the art. In recent years, however, non-perspective and especially omnidirectional camera systems, which offer a much wider field of view, have increasingly been used in close-range photogrammetry. In general, the perspective camera model, i.e., the pinhole model, cannot be applied to non-perspective camera systems, although several camera models for different omnidirectional systems have been proposed in the literature. Using different types of cameras in a heterogeneous stereo system can be an advantageous combination: the strengths of the individual systems, e.g., field of view and resolution, yield an enhanced overall system. If these different kinds of cameras can be described by a unified camera model, the whole calibration process is simplified, and errors caused by choosing or applying the wrong modeling or calibration method are avoided; a generic approach also helps when the specific camera model cannot be given in advance. Furthermore, simple stereo reconstruction becomes possible when combining, for example, a fisheye and a perspective camera. In this work, camera models for perspective, wide-angle, and omnidirectional camera systems were evaluated, and a generic camera model was introduced that fully substitutes for the specific models examined. The crucial initialization of the model's parameters is performed with a new generic method that is independent of the particular camera system. The accuracy of this generic calibration approach was validated by calibrating a dozen real camera systems, with residual errors well within the subpixel range. Finally, by extending the classical concept of epipolar geometry with the generic imaging model, it was shown that a unified method of modeling, parameter approximation, and calibration of interior and exterior orientation can be applied to heterogeneous stereo systems to derive precise 3D object data.
69

Informacijos valdymo metodų analizė ir sprendimas informacijos paieškai naudojant ontologijas / Analysis of information control methods and solution to information search using ontologies

Nekroševičius, Marijonas 04 March 2009 (has links)
The main problem in heterogeneous database integration is data incompatibility between the different databases. XML is well suited to data exchange between distributed data sources, as it is independent of the DBMS, the platform, and the hardware; to use it, XML documents corresponding to the databases must be created. This work proposes the use of ontologies, as a definition of the subject domain, for information retrieval from heterogeneous databases. Such a method makes it possible to optimize the search for the required information and to avoid redundancy in query results.
70

Parallel Hardware- and Software Threads in a Dynamically Reconfigurable System on a Programmable Chip

Rößler, Marko 06 December 2013 (has links) (PDF)
Today's embedded systems depend on the availability of hybrid platforms that contain heterogeneous computing resources such as programmable processor units (CPUs or DSPs) and highly specialized hardware cores. These platforms have been scaled down to integrated embedded systems-on-chip, and modern platform FPGAs enhance such systems with the flexibility of runtime-configurable silicon. One of the major advantages that arises is the ability to use hardware (HW) and software (SW) resources in a time-shared manner, and thus to assign computing resources dynamically, based on decisions taken at runtime.
