  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
821

Contribution to modeling and optimization of home healthcare / Contribution à la modélisation et l'optimisation d’hospitalisation à domicile

Bashir, Bushra 15 November 2013 (has links)
(French abstract unavailable.) A healthcare network, or health system, consists of all the organizations, actions and people involved in promoting, restoring or maintaining people's health. Health care systems in many developed countries face increasing costs, mainly because of the changing age distribution of the population, with more elderly people in need of support. Rising healthcare costs have created new alternatives to traditional hospitalization, one of which is Home Health Care (HHC). Home health care, or domiciliary care, is the provision of health care and assistance to people in their own homes, according to a formal assessment of their needs. HHC has attained a specific place in the healthcare network, and HHC programs have now been implemented successfully in many countries. The purpose of HHC is to provide the care and support patients need to live independently in their own homes; it is delivered primarily through personal visits by healthcare workers to patients' homes, where care is provided according to patients' needs.

In this thesis we consider different aspects of planning problems for home health care services. Efficient use of resources is essential in continuous healthcare services, and to meet the increased demand for HHC, operations research specialists can play an important role by solving the various combinatorial optimization problems arising in HHC. These problems can be strategic, tactical or operational with respect to the planning horizon. Strategic problems are those that help attain long-term goals or objectives, e.g. a higher level of quality for HHC patients and efficient use of resources; these strategic objectives are achieved through tactical (medium-term) planning and operational (short-term) planning. The main purpose of this thesis is to identify these potential optimization problems and to solve them via recent metaheuristics. HHC is an alternative to traditional hospitalization and has gained a significant share in the organization of healthcare in developed countries. Changing age demographics, recent developments in technology and the increasing demand for healthcare services are the major reasons for this rapid growth. Some studies present HHC as a tool to reduce the cost of care, a major preoccupation in developed countries; others show that it improves patient satisfaction without increasing resources.

Home health care, i.e. visiting and nursing patients in their homes, is a flourishing part of the medical industry: the number of companies has grown considerably in both the public and private sectors, and their staffing needs have expanded accordingly. These companies face the problem of assigning geographically dispersed patients to home healthcare workers and preparing daily schedules for these workers. The challenge is to combine aspects of vehicle routing and staff rostering, both well-known NP-hard combinatorial optimization problems, meaning that the computational time required to find a solution grows exponentially with problem size. The home healthcare worker scheduling problem is difficult to solve optimally because of the large number of constraints, which are of two types: hard constraints, the restrictions that must be fulfilled for a schedule to be applicable, and soft constraints, preferences that improve the quality of the schedules. (...)
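The hard/soft-constraint structure described in this abstract can be illustrated with a small sketch. The code below is not from the thesis: the patients, caregivers, penalty weights and the simple local search are illustrative assumptions. It counts hard-constraint violations (skill match, visit windows inside the caregiver's shift) separately from a soft cost that mixes travel distance (the routing aspect) and workload balance (the rostering aspect).

```python
import math
import random

# Hypothetical toy data: patients with coordinates, required skill and time window;
# caregivers with a skill set and a shift. All names and values are illustrative.
PATIENTS = {
    "p1": {"xy": (2, 3), "skill": "nurse", "window": (8, 12)},
    "p2": {"xy": (5, 1), "skill": "aide",  "window": (9, 17)},
    "p3": {"xy": (1, 7), "skill": "nurse", "window": (13, 17)},
}
CAREGIVERS = {
    "c1": {"skills": {"nurse"}, "shift": (8, 16)},
    "c2": {"skills": {"aide", "nurse"}, "shift": (9, 18)},
}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def evaluate(assignment):
    """Return (hard_violations, soft_cost) for a patient -> caregiver assignment."""
    hard, soft = 0, 0.0
    for c, info in CAREGIVERS.items():
        visits = [p for p, cg in assignment.items() if cg == c]
        # Hard constraints: skill match and visit window inside the caregiver's shift.
        for p in visits:
            if PATIENTS[p]["skill"] not in info["skills"]:
                hard += 1
            lo, hi = PATIENTS[p]["window"]
            if lo < info["shift"][0] or hi > info["shift"][1]:
                hard += 1
        # Soft cost: travel distance of a simple route over the visits (routing aspect).
        route = sorted(visits, key=lambda p: PATIENTS[p]["window"][0])
        xy = [(0, 0)] + [PATIENTS[p]["xy"] for p in route]  # depot at the origin
        soft += sum(dist(xy[i], xy[i + 1]) for i in range(len(xy) - 1))
        # Soft cost: workload-balance penalty (rostering aspect).
        soft += abs(len(visits) - len(PATIENTS) / len(CAREGIVERS))
    return hard, soft

def local_search(iterations=500, seed=0):
    rng = random.Random(seed)
    best = {p: rng.choice(list(CAREGIVERS)) for p in PATIENTS}
    best_score = evaluate(best)
    for _ in range(iterations):
        cand = dict(best)
        cand[rng.choice(list(PATIENTS))] = rng.choice(list(CAREGIVERS))
        score = evaluate(cand)  # tuple comparison: hard violations first, then soft cost
        if score < best_score:
            best, best_score = cand, score
    return best, best_score

if __name__ == "__main__":
    print(local_search())
```

A metaheuristic such as simulated annealing would replace the purely greedy acceptance rule in `local_search` with a temperature-dependent one.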
822

Modélisation, caractérisation et optimisation des procédés de traitements thermiques pour la formation d’absorbeurs CIGS / Modelling, characterization and optimization of annealing processes in CIGS absorber manufacturing

Oliva, Florian 04 April 2014 (has links)
Solar energy promises to be a major actor in future energy production. Although silicon-based solar cells remain the main product, their fabrication is energy-intensive and requires a heavy cover glass for protection, which limits their development. For several years, commercial interest has been shifting towards thin-film cells, whose main advantages are short manufacturing time, suitability for large-scale production, low fabrication cost and weight savings. A wide variety of materials can be used in thin-film technology, but chalcopyrites such as Cu(In,Ga)Se2 are among the most promising. The most common method for chalcopyrite formation is co-evaporation, but this process is expensive and poorly suited to large-scale production because of its high-vacuum requirements. The alternative described in this work is a two-step technology based on the sequential electrodeposition of a metallic precursor followed by a rapid reactive annealing. To reach its full potential, however, this technology requires a better understanding of the Ga incorporation mechanism and of the selenization/sulfurization step. This work therefore focuses first on the formation mechanisms, through the study of several kinds of precursor. This knowledge is then used to explain and optimize innovative annealing processes, by observing the impact of the most critical process parameters using designs of experiments (DOE). A link between process parameters and the properties of the thin films is obtained from electrical, structural and diffusion characterization of the devices. Finally, we propose hypotheses to explain the observed phenomena as well as improvements to meet the challenges of this process.
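A minimal sketch of the design-of-experiments idea mentioned above, under stated assumptions: the factor names (ramp rate, plateau temperature, Se partial pressure) and the synthetic response are hypothetical, not taken from the thesis. It builds a 2-level full-factorial design and estimates main effects to flag the most influential parameters.

```python
import itertools
import random

# Hypothetical annealing factors coded at two levels (-1 / +1).
factors = {"ramp_rate": (-1, +1), "plateau_temp": (-1, +1), "se_partial_pressure": (-1, +1)}

def simulated_response(run, rng):
    # Stand-in for a measured film property; purely synthetic for illustration.
    return 2.0 * run["plateau_temp"] + 0.5 * run["se_partial_pressure"] + rng.gauss(0, 0.1)

rng = random.Random(1)
# Full-factorial design: every combination of factor levels.
design = [dict(zip(factors, levels)) for levels in itertools.product(*factors.values())]
responses = [simulated_response(run, rng) for run in design]

# Main effect of each factor = mean(response at +1) - mean(response at -1).
for name in factors:
    hi = [y for run, y in zip(design, responses) if run[name] == +1]
    lo = [y for run, y in zip(design, responses) if run[name] == -1]
    print(f"{name}: main effect {sum(hi) / len(hi) - sum(lo) / len(lo):+.2f}")
```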
823

INFLUENCE OF IRRADIATION AND LASER WELDING ON DEFORMATION MECHANISMS IN AUSTENITIC STAINLESS STEELS

Keyou Mao (6848774) 02 August 2019 (has links)
This dissertation describes recent advancements in micromechanical testing that inform how deformation mechanisms in austenitic stainless steels (SS) are affected by the presence of irradiation-induced defects. Austenitic SS is one of the most widely used structural alloys in nuclear energy systems, but the role of irradiation in its underlying mechanisms of mechanical deformation remains poorly understood. Recent advances in microscale mechanical testing in a scanning electron microscope (SEM), coupled with site-specific transmission electron microscopy (TEM), now enable us to determine deformation mechanisms precisely as a function of plastic strain and grain orientation.

We focus on AISI 304L SSs irradiated in EBR-II to ~1-28 displacements per atom (dpa) at ~415 °C, containing ~0.2-8 atomic parts per million (appm) He and exhibiting ~0.2-2.8% swelling. A portion of the specimen is laser welded in a hot cell; the laser-weld heat-affected zone (HAZ) is studied and considered to have undergone post-irradiation annealing (PIA). An archival, virgin specimen is also studied as a control. We conduct nanoindentation, then prepare TEM lamellae from the indent plastic zone. In the 3 appm He condition, TEM investigation reveals nucleation of deformation-induced α' martensite in the irradiated specimen and metastable ε martensite in the PIA specimen. Meanwhile, the unirradiated control specimen exhibits evidence only of dislocation slip and twinning; this is unsurprising given that alternative deformation mechanisms such as twinning and martensitic transformation are typically observed only near cryogenic temperatures in austenitic SS. The surface area of irradiation-produced cavities contributes enough free energy to accommodate the martensitic transformation. The lower population of cavities in the PIA material enables metastable ε martensite formation, while the higher cavity number density in the irradiated material causes direct α' martensite formation. In the 0.2 appm He condition, SEM-based micropillar compression tests confirm the nanoindentation results. A deformation transition map with corresponding criteria is proposed for tailoring the plasticity of irradiated steels. Irradiation damage could thus enable fundamental, mechanistic studies of deformation mechanisms that are typically accessible only at extremely low temperatures.
824

Élaboration d'une méthode tomographique de reconstruction 3D en vélocimétrie par image de particules basée sur les processus ponctuels marqués / Elaboration of 3D reconstruction tomographic method in particle image velocimetry based on marked point Process

Ben Salah, Riadh 03 September 2015 (has links)
The research work carried out in this thesis fits within the development of optical measurement techniques for fluid mechanics, and is particularly concerned with reconstructing 3D particle volumes in order to infer particle displacements. This volumetric measurement technique, called Tomo-PIV, appeared in 2006 and has been the subject of many works aimed at enhancing the reconstruction, which is one of the most important steps of the technique. The methods proposed in the literature do not necessarily take into account the particular form of the objects to be reconstructed and are not sufficiently robust to deal with noisy images. To address these challenges, we propose a tomographic reconstruction method, called IOD-PVRMPP, based on marked point processes. Our method solves the problem in a parsimonious way, facilitates the introduction of prior knowledge and avoids the memory problems inherent to voxel-based approaches. The reconstruction of a 3D particle set is obtained by minimizing an energy function, which defines the marked point process. To this end, we use a simulated annealing algorithm based on the Reversible Jump Markov Chain Monte Carlo (RJMCMC) method. To speed up the convergence of the simulated annealing, we develop an initialization method that provides an initial distribution of 3D particles based on the detection of 2D particles located in the projection images. Finally, the method is applied to fluid flows that are either simulated or produced experimentally in a free-surface turbulent channel. The results, and the comparison of this method with classical ones, show the great interest of this parsimonious approach.
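The abstract relies on simulated annealing to minimize an energy over particle configurations. The generic skeleton below shows the Metropolis acceptance rule and geometric cooling that such an algorithm uses; it is a plain sketch, not the IOD-PVRMPP method, and the toy 1-D energy and proposal move are assumptions for illustration only.

```python
import math
import random

def simulated_annealing(energy, propose, state, t0=1.0, t_min=1e-3,
                        alpha=0.95, steps_per_t=100, seed=0):
    """Generic simulated-annealing skeleton: geometric cooling, Metropolis acceptance."""
    rng = random.Random(seed)
    e = energy(state)
    best, best_e = state, e
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            cand = propose(state, rng)
            ce = energy(cand)
            # Accept downhill moves always, uphill moves with probability exp(-dE/T).
            if ce <= e or rng.random() < math.exp(-(ce - e) / t):
                state, e = cand, ce
                if e < best_e:
                    best, best_e = state, e
        t *= alpha
    return best, best_e

# Toy usage: minimize a 1-D energy with many local minima (illustrative only).
energy = lambda x: (x - 2.0) ** 2 + math.sin(8.0 * x)
propose = lambda x, rng: x + rng.gauss(0, 0.3)
print(simulated_annealing(energy, propose, state=10.0))
```

In the setting of the thesis, the proposal step would instead be a reversible-jump move that adds, removes or perturbs 3D particles of the marked point process.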
825

Cellules photovoltaïques organiques souples à grande surface [Large-area flexible organic photovoltaic cells]

Bailly, Loïc 03 September 2010 (has links)
To obtain an approach in which the industrial aspect of the project is supported by the academic knowledge and analytical capacities of the research community, this work on large-area flexible organic solar cells begins by describing photovoltaic energy in general. The ins and outs of its development are detailed, as well as the technologies involved. The organic semiconductors, the physical mechanisms involved in photovoltaic electricity production, the electrical quantities associated with organic solar cells and the various cell structures are then presented, together with the devices realized within the framework of this work. The various thin-film deposition techniques, both those suited to mass production and those suited to smaller-scale production, are presented, accompanied by a survey, intended to be exhaustive, of publications reporting the use of these printing techniques to create organic photovoltaic devices. A comparison of these techniques is carried out to determine the relevant means of production, and a complete bibliographical study of large-area organic solar cells is presented. The cells and modules produced with the pilot heliogravure coating process are then presented, followed by the work performed with another process, the doctor blade. Finally, the problem of drying and annealing thin layers deposited continuously is raised, and microwave treatment is proposed as a possible solution.
826

Évolutions microstructurales et comportement en fluage à haute température d'un acier inoxydable austénitique / Microstructural evolutions and creep behaviour at high temperature of an austenitic stainless steel

Mateus Freire, Lucie 20 March 2018 (has links)
The ASTRID project aims at designing a fast-reactor prototype for the fourth generation (Gen-IV) of nuclear power plants. The material to be used for fuel cladding is a cold-worked austenitic stainless steel stabilized with titanium (15Cr-15Ni Ti type) and optimized in minor elements. This material was developed to limit recovery and irradiation-induced swelling and to improve microstructural stability and mechanical properties under normal operating conditions. In incidental situations (irradiation temperature > 650°C), and in particular in the event of a loss of primary coolant, the cladding might rapidly reach temperatures up to 950°C, where its stability could be affected and its creep behaviour is poorly known. The present work aims at improving knowledge and understanding of the microstructural evolution and creep behaviour of this steel, in the unirradiated state, in this temperature range (650°C-950°C).

Microstructural characterizations of thermally aged samples were performed in order to study the effect of temperature on metallurgical evolutions (precipitation, recovery and recrystallization). A phenomenological model including recovery and recrystallization processes was set up to reproduce hardness measurements as a function of ageing time and temperature. Isothermal creep tests up to 950°C over a wide range of stress levels allowed investigation of viscoplastic flow, microstructural evolution under stress, and damage/failure processes. In order to evaluate the effect of high-temperature loading, the microstructural characteristics of stress-free, thermally aged samples were compared with post-mortem examinations of the creep specimens. At 650°C and 750°C the stress exponent is higher than 7; the main deformation mechanism during creep is power-law creep, consistent with the results found in the literature. Beyond 850°C, the increase in dislocation mobility promotes recovery and recrystallization, so a competition takes place between work hardening due to viscoplastic deformation and softening due to dynamic recovery. At 950°C, viscoplastic flow is strongly affected by recrystallization during the creep test, especially in the tertiary stage. The comparison between the microstructures of crept specimens and of stress-free, thermally aged samples leads to the conclusion that the recrystallization kinetics are accelerated by the application of a mechanical load. As for the fracture behaviour, creep tests in air at the lower temperatures (650°C-750°C) led to predominantly ductile fracture, although some intergranular zones were observed on the fracture surfaces. Creep tests under high vacuum at the higher temperatures (850°C-950°C) led to high fracture elongation, with a reduction of area of up to 100%.
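For reference, the power-law (Norton) creep relation below is the standard form behind the stress exponent quoted in the abstract; it is textbook material, not an equation reproduced from the thesis:

```latex
\dot{\varepsilon}_{ss} = A\,\sigma^{n}\,\exp\!\left(-\frac{Q}{RT}\right)
```

where the steady-state creep rate depends on the applied stress σ through the stress exponent n (here n > 7 at 650°C-750°C), Q is an activation energy, R the gas constant and T the absolute temperature.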
827

Metaheuristic based peer rewiring for semantic overlay networks / Métaheuristique pour la configuration dynamique de réseaux pair-à-pair dans le context des réseaux logiques sémantiques

Yang, Yulian 28 March 2014 (has links)
A peer-to-peer (P2P) platform is considered for collaborative Information Retrieval (IR). Each peer hosts a collection of text documents whose subjects reflect its owner's interests. Without a global indexing mechanism, peers index their documents locally and provide the query-answering service themselves. A decentralized protocol is designed that enables the peers to collaboratively forward queries from the initiator to the peers holding relevant documents. Semantic Overlay Networks (SONs), in which peers with semantically similar resources are clustered, are one of the state-of-the-art solutions: IR is performed efficiently by forwarding queries to the relevant peer clusters in an informed way.

SONs are built and maintained mainly via peer rewiring. Specifically, each peer periodically sends walkers into its neighborhood. The walkers travel along peer connections, aiming to discover peers more similar to the initiator that can replace its less similar neighbors. The P2P network then gradually evolves from a random overlay network to a SON. Random and greedy walks can be applied individually, or integrated into peer rewiring as a fixed strategy throughout the network's evolution. However, the evolution of the network topology may affect their performance. For example, when peers are randomly connected with each other, a random walk performs better than a greedy walk at exploring similar peers; but as peer clusters gradually emerge in the network, a walker can discover more similar peers by following a greedy strategy. This thesis proposes an evolving walking strategy based on Simulated Annealing (SA), which shifts from a random walk to a greedy walk as the network evolves. According to the simulation results, the SA-based strategy outperforms current approaches, both in the efficiency of building a SON and in the effectiveness of the subsequent IR.

This thesis contains several advancements with respect to the state of the art in this field. First, we identify a generic peer rewiring pattern and formalize it as a three-step procedure. Our technique provides a consistent framework for peer rewiring while allowing enough flexibility for users/designers to specify its properties. Second, we formalize SON construction as a combinatorial optimization problem, with peer rewiring as its decentralized local search solution. Based on this model, we propose a novel SA-based approach to peer rewiring. Our approach is validated via an extensive experimental study of the effect of network rewiring on (1) SON building and (2) IR in SONs.
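One simple way to realize the evolving random-to-greedy walking strategy described above (not necessarily the scheme used in the thesis) is Boltzmann, i.e. softmax, selection of the next hop over neighbor similarities, with a temperature that cools over rewiring rounds. The peer names and similarity scores below are made-up assumptions.

```python
import math
import random

def next_hop(neighbors, similarity, temperature, rng):
    """Boltzmann-weighted hop selection: a very high temperature approximates a random
    walk, while a temperature near zero approximates a greedy walk towards the most
    similar neighbor."""
    weights = [math.exp(similarity[n] / max(temperature, 1e-9)) for n in neighbors]
    return rng.choices(neighbors, weights=weights, k=1)[0]

rng = random.Random(42)
neighbors = ["peerA", "peerB", "peerC"]
similarity = {"peerA": 0.2, "peerB": 0.9, "peerC": 0.5}  # similarity to the walker's initiator

# Cooling schedule over successive rewiring rounds: choices shift from random to greedy.
for t in (10.0, 1.0, 0.05):
    picks = [next_hop(neighbors, similarity, t, rng) for _ in range(1000)]
    print(t, {n: picks.count(n) for n in neighbors})
```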
828

Films minces de copolymères à blocs pour la réalisation de gabarits à porosité contrôlée / Block copolymer thin film for polymer templates with controlled porosity

Nguyen, Thi Hoa 18 December 2012 (has links)
Polymer masks with controlled porosity are obtained from self-organized block copolymer thin films of polystyrene-b-polylactide (PS-PLA) and polystyrene-b-poly(methyl methacrylate) (PS-PMMA). The morphology of these films must be reorganized by solvent-vapor annealing (THF or DCE) in order to obtain cylinders perpendicular to the surface, throughout the volume, arranged in a hexagonal lattice over long distances. The selective removal of the minority domains then leads to porous PS thin films. New characterization techniques (AFM image analysis, SEM analysis of silica replicas of the porous system, ellipsoporosimetry) were developed to evaluate the influence of numerous parameters (nature of the substrate, film thickness, deposition method, nature of the solvent, duration and method of solvent-vapor exposure, ...) on the morphology of the films (surface, substrate/film interface, volume) and on the kinetics of reorganization. Blends of A-B block copolymers with C homopolymers (PS-PMMA/PLA, PS-PLA/PMMA, PS-PLA/PEO, PS-PEO/PLA, PS-PEO/PMMA, PS-PMMA/PEO) are also studied. In some cases, the addition of homopolymers improves the reorganization of the block copolymer films; it also increases the characteristic sizes of the system. The homopolymer C locates at the center of the minority B domains if χA-C > χB-C and χA-C > χA-B. For example, in the case of PS-PMMA/PLA blends, cylinders perpendicular to the surface and hexagonally arranged are observed after exposure to THF vapor, with the PLA homopolymer domains incorporated at the center of the PMMA domains.
829

Um sistema computacional para apoio a projetos de redes de comunicação em sistemas centralizados de medição de consumo e tarifação de energia elétrica : desenvolvimento e implementação através de uma abordagem metaheurística [A computational system to support the design of communication networks for centralized electricity metering and billing systems: development and implementation through a metaheuristic approach]

Martins, Eduardo Augusto 25 March 2013 (has links)
Centralized energy-metering systems are currently a choice for automating networks and ensuring the operation of the complex system of electricity distribution utilities. These systems now form part of so-called smart grids for the generation, transmission and distribution of energy, equipped with data-communication devices that enable distributed systems based on the exchange of information between devices, forming a new, highly complex network for data interconnection and control. In Brazil, novel work is being carried out on this type of network: the use of an advanced metering infrastructure, based on centralized metering systems deployed in areas of large urban concentration, with the goal of reducing or even eliminating commercial losses in power distribution, is emerging as a novelty in the application of smart-grid electricity distribution in the country.

This work describes the development of a computational solution, based on the metaheuristic search method known as Simulated Annealing, to support the design of optimized advanced-metering-infrastructure networks that use centralized metering equipment in an electricity distribution network, guaranteeing maximum network coverage, serving all customers geographically located in the project area and minimizing system installation costs. In pursuit of this goal, the work presents the reduction of the optimization problem to the Covering Problem, the construction of the mathematical model, the computational implementation, and applications in the form of simulations and results. The proposed methodology, applied to the computational system as a simulation tool, allows a very fast and dynamic analysis of the network topologies to be designed in communication-network projects that use centralized metering systems. The system allows the simulation settings and the network characteristics to be modified according to the constraints imposed by the geographical area under study. The strategy yields good results in forming communication-network topologies for centralized metering systems and in optimizing the use of equipment, reducing network installation costs.
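The reduction to a covering problem mentioned above can be illustrated with a toy instance. The sketch below is not the thesis implementation: the candidate sites, coverage sets, costs and penalty weight are assumptions. Simulated annealing searches over subsets of candidate concentrator sites, trading installation cost against a penalty for uncovered customers.

```python
import math
import random

# Toy covering instance: candidate concentrator sites and the customers each can reach.
CUSTOMERS = set(range(12))
SITES = {
    "s1": {0, 1, 2, 3}, "s2": {2, 3, 4, 5}, "s3": {5, 6, 7},
    "s4": {7, 8, 9},    "s5": {9, 10, 11},  "s6": {0, 4, 8, 11},
}
COST = {s: 1.0 for s in SITES}  # uniform installation cost per site
PENALTY = 10.0                  # cost of leaving one customer uncovered

def cost(selection):
    covered = set().union(*(SITES[s] for s in selection)) if selection else set()
    return sum(COST[s] for s in selection) + PENALTY * len(CUSTOMERS - covered)

def anneal(t0=5.0, alpha=0.95, iters=2000, seed=3):
    rng = random.Random(seed)
    state = set(rng.sample(sorted(SITES), k=3))
    best, best_c, c, t = set(state), cost(state), cost(state), t0
    for _ in range(iters):
        cand = set(state)
        cand.symmetric_difference_update({rng.choice(sorted(SITES))})  # flip one site in/out
        cc = cost(cand)
        # Metropolis acceptance: always take improvements, sometimes take worse moves.
        if cc <= c or rng.random() < math.exp(-(cc - c) / t):
            state, c = cand, cc
            if c < best_c:
                best, best_c = set(state), c
        t = max(t * alpha, 1e-3)
    return best, best_c

print(anneal())  # e.g. a small set of sites covering all customers
```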
830

Plans prédictifs à taille fixe et séquentiels pour le krigeage / Fixed-size and sequential designs for kriging

Abtini, Mona 30 August 2018 (has links)
In recent years, computer simulation models have been increasingly used to study complex phenomena. Such studies usually rely on very large, sophisticated simulation codes that are very expensive in computing time, and exploiting these codes becomes a problem when the objective requires a significant number of evaluations. In practice, the code is replaced by a global approximation model, often called a metamodel, most commonly a Gaussian process (kriging) fitted to a design of experiments, i.e. to observations of the model output obtained from a small number of simulations. Space-filling designs, whose points are spread evenly over the entire feasible input region, are the most widely used designs. This thesis consists of two parts, both focused on the construction of designs of experiments adapted to kriging, one of the most popular metamodels.

Part I considers the construction of fixed-size space-filling designs adapted to kriging prediction. It begins by studying the effect of the Latin Hypercube constraint (the type of design most used in practice with kriging) on maximin-optimal designs. This study shows that when the design has a small number of points, adding the Latin Hypercube constraint is beneficial because it mitigates the drawbacks of maximin-optimal configurations (the majority of points lying at the boundary of the input space). Following this study, a uniformity criterion called radial discrepancy is proposed in order to measure the uniformity of the design points according to their distance to the boundary of the input space. We then show that the minimax-optimal design is the design closest to the IMSE design (the design adapted to prediction by kriging) but is also very costly to evaluate, and we introduce a proxy for the minimax-optimal design based on maximin-optimal designs. Finally, we present a well-tuned implementation of the simulated annealing algorithm for finding maximin-optimal designs; the aim here is to minimize the probability of the simulated annealing falling into a local optimum.

The second part of the thesis concerns a slightly different problem. If XN is a space-filling design of N points, there is no guarantee that a sub-design of n points (1 ≤ n ≤ N) is space-filling; in practice, however, the simulations may have to be stopped before the full design has been run. This part is therefore dedicated to developing sequential planning methods that build a set of space-filling designs Xn, for every n between 1 and N, all adapted to kriging prediction. We introduce a method to generate nested designs (each one included in the next) based on information criteria, in particular the Mutual Information criterion, which measures the reduction in prediction uncertainty at every point of the domain between before and after observing the response at the design points. This approach ensures the quality of the designs obtained for all values of n, 1 ≤ n ≤ N. A key difficulty is the cost of computing the criterion, and hence of generating designs in high dimension. To address this issue, an efficient implementation has been proposed that computes the determinant of the covariance matrices by partitioning them into blocks, which significantly reduces the computational cost of MI-sequential designs.
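A minimal sketch of simulated annealing for maximin designs, as referenced in the abstract: the cooling schedule, swap move and parameter values below are illustrative assumptions, not the tuned procedure developed in the thesis. The move swaps two values within one column of a Latin Hypercube, so the Latin Hypercube constraint discussed above is preserved while the smallest pairwise distance (the maximin criterion) is pushed up.

```python
import math
import random

def min_dist(design):
    # Maximin criterion: the smallest pairwise Euclidean distance in the design.
    return min(math.dist(a, b) for i, a in enumerate(design) for b in design[i + 1:])

def maximin_lhs(n=10, dim=2, iters=3000, t0=0.1, alpha=0.999, seed=0):
    rng = random.Random(seed)
    # Start from a random Latin Hypercube: each column is a permutation of n levels.
    cols = [rng.sample(range(n), n) for _ in range(dim)]
    design = [[(cols[d][i] + 0.5) / n for d in range(dim)] for i in range(n)]
    crit, t = min_dist(design), t0
    for _ in range(iters):
        d = rng.randrange(dim)
        i, j = rng.sample(range(n), 2)
        design[i][d], design[j][d] = design[j][d], design[i][d]  # LH-preserving swap
        new = min_dist(design)
        # Maximize the criterion: accept improvements, occasionally accept degradations.
        if new >= crit or rng.random() < math.exp((new - crit) / t):
            crit = new
        else:
            design[i][d], design[j][d] = design[j][d], design[i][d]  # undo the swap
        t *= alpha
    return design, crit

design, crit = maximin_lhs()
print(f"maximin criterion (smallest pairwise distance): {crit:.3f}")
```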
