11 |
Caractérisation structurale et suivi du vieillissement par diffusion X aux petits angles d'un polymère époxyde : Contribution à l'étude des propriétés électriques / Structural characterisation and ageing monitoring of an epoxy polymer by small-angle X-ray scattering: contribution to the study of electrical properties. Rekik, Hazar 27 April 2009 (has links) (PDF)
This work takes an innovative approach combining complementary physicochemical, electrical and structural characterisation techniques. Carried out on an epoxy polymer, these studies aim primarily at understanding the dielectric phenomena and properties associated with space charges, and at monitoring their evolution over time or under externally applied stresses. Impedance spectroscopy and thermally stimulated depolarisation current (TSDC) measurements revealed dipolar and interfacial relaxation processes. The origin of these phenomena could be explained from the physicochemical and structural analyses: first, X-ray fluorescence measurements revealed the presence of two types of impurities, capable of creating more or less deep energy states in the band gap; second, X-ray reflectometry measurements revealed several ordered structures within an amorphous matrix. This structural heterogeneity explains the mechanisms of trapping and accumulation of space charges at interfaces. Likewise, since local order favours charge mobility, these results give a first explanation for the relatively high electrical conductivity of the polymer, as determined from current-voltage measurements. Accelerated ageing studies were also carried out. The successive annealing treatments created charges trapped in increasingly deep energy levels, which translated into a decrease in the amount of charge relaxing by thermal activation and a decrease in the electrical conductivity of the samples.
These changes in electrical properties were correlated with the structural changes occurring within the polymer, whose main manifestation is the progressive disappearance of the ordered structures. This loss of local order was also observed in the absence of thermal stress (natural ageing), where the surface and bulk behaviour of the samples was shown to differ.
|
12 |
Algorithmic aspects of connectivity, allocation and design problems. Chakrabarty, Deeparnab 23 May 2008 (has links)
Most combinatorial optimization problems are NP-hard, which implies that, under well-believed complexity assumptions, there exist no polynomial-time algorithms to solve them. To cope with NP-hardness, approximation algorithms, which return solutions close to the optimum, have become a rich field of study. One successful method for designing approximation algorithms has been to model the optimization problem as an integer program and then use its polynomial-time-solvable linear programming relaxation in the design and analysis of such algorithms. This technique is called the LP-based technique.
In this thesis, we study the algorithmic aspects of three classes of combinatorial optimization problems, using LP-based techniques as our main tool.
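As a concrete illustration of the LP-based technique (a generic textbook example, not code from the thesis), the sketch below applies the classic LP-rounding 2-approximation for Vertex Cover. To stay dependency-free it exploits the known half-integrality of the Vertex Cover LP (Nemhauser-Trotter) and brute-forces the relaxation over {0, 1/2, 1} on a toy graph instead of calling an LP solver.

```python
from itertools import product

def lp_vertex_cover_rounded(vertices, edges):
    """2-approximate Vertex Cover via its LP relaxation.

    The Vertex Cover LP (min sum x_v  s.t.  x_u + x_v >= 1, 0 <= x <= 1)
    always has a half-integral optimum, so on a toy instance we can find
    it by brute force over {0, 1/2, 1}^n.  Rounding up every x_v >= 1/2
    yields a cover of size at most twice the LP optimum, hence at most
    twice OPT.  Vertices must be labelled 0..n-1 (used as indices).
    """
    best_x, best_val = None, float("inf")
    for x in product((0.0, 0.5, 1.0), repeat=len(vertices)):
        val = sum(x)
        if val < best_val and all(x[u] + x[v] >= 1 for u, v in edges):
            best_x, best_val = x, val
    cover = {v for v, xv in zip(vertices, best_x) if xv >= 0.5}
    return cover, best_val

# 5-cycle: LP optimum is 2.5 (all variables at 1/2), integral OPT is 3.
verts = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
cover, lp_opt = lp_vertex_cover_rounded(verts, edges)
```

On the 5-cycle the LP optimum is 2.5 while the integral optimum is 3, so this toy instance also exhibits an integrality gap of 6/5.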
Connectivity Problems:
We study the Steiner tree problem and devise new linear programming relaxations for it. We show an equivalence of our relaxation with the well-studied bidirected cut relaxation for the Steiner tree problem. Furthermore, for the class of graphs called quasi-bipartite graphs, we improve the best known upper bound on the integrality gap from 3/2 to 4/3. Algorithmically, we obtain fast and simple approximation algorithms for the Steiner tree problem on quasi-bipartite graphs.
Allocation Problems:
We study the budgeted allocation problem of allocating a set of indivisible items to a set of agents who bid on them but have a hard budget constraint beyond which they are unwilling to pay. This problem is a special case of submodular welfare maximization. We use a natural LP relaxation for the problem and improve the best known approximation factor from ~0.632 to 3/4. We also improve the inapproximability factor of the problem to 15/16 and use our techniques to show inapproximability results for many other allocation problems.
We also study online allocation problems, where the set of items is unknown and the items appear one at a time. Under some necessary assumptions, we provide online algorithms for many problems that attain the (almost) optimal competitive ratio. Both these works have applications in the area of budgeted auctions, the most famous of which are the sponsored search auctions hosted by search engines on the Internet.
Design Problems:
We formally define and study design problems, which ask how the weights of an input instance can be designed so that the minimum (or maximum) of a certain function of the input is maximized (respectively, minimized). We show that if the function can be approximated to any factor alpha, then the optimum design can be approximated to the same factor.
We also show that (max-min) design problems are dual to packing problems. We use the framework developed by our study of design problems to obtain results about fractionally packing Steiner trees in a "black-box" fashion. Finally, we study integral packing of spanning trees and provide an alternate proof of a theorem of Nash-Williams and Tutte about packing spanning trees.
|
13 |
Network Based Approaches for Clustering and Location Decisions. Verma, Anurag 2012 August 1900 (has links)
The objective of this dissertation is to study commonly occurring location and clustering problems on graphs. The dissertation is presented as a collection of results in topics including finding maximum cliques in large graphs, graph clustering in large scale graphs, determining location of facilities for pre-positioning emergency relief supplies, and selecting nodes to form a virtual backbone in a wireless sensor network.
To begin with, a new clique relaxation called a k-community is defined as a connected subgraph in which the endpoints of every edge have at least k common neighbors within the subgraph. It is used to develop scale-reduction techniques for obtaining the maximum clique in very large scale real-life networks. Analytically, the technique has been shown to be very effective on power-law random graphs. Experimental results on real-life graph instances (collaboration networks, P2P networks, social networks, etc.) show our procedure to be much more effective than a regular k-core peeling approach.
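The k-community definition above suggests a simple peeling computation, analogous to k-core peeling but driven by the common neighbours of edge endpoints. The following is a hypothetical minimal re-implementation of that idea, not the authors' scale-reduction code:

```python
def k_community_peel(edges, k):
    """Repeatedly delete edges whose endpoints have fewer than k common
    neighbours among the remaining edges; the connected components of
    what survives are the k-communities.  Returns the surviving vertices
    (those that keep at least one edge)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            for v in list(adj[u]):
                if len(adj[u] & adj[v]) < k:   # common neighbours of (u, v)
                    adj[u].discard(v)
                    adj[v].discard(u)
                    changed = True
    return {u for u in adj if adj[u]}

# A 4-clique with a pendant vertex: for k = 2 the pendant edge (3, 4)
# has no common neighbours and is peeled, while the clique survives.
clique_plus_tail = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
survivors = k_community_peel(clique_plus_tail, k=2)
```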
Next, a general-purpose network clustering algorithm based on the clique relaxation concept of k-community is presented. A salient feature of this approach is that it does not use any prior information about the structure of the network. By defining a cluster as a k-community, the proposed algorithm aims to provide a clustering of a network into k-communities with varying values of k. Even though the algorithm is not designed to optimize any particular performance measure, the computational results suggest that it performs well on a number of criteria that are used in the literature to evaluate the quality of a clustering.
The third topic deals with choosing the locations of disaster response facilities for the storage of emergency supplies, which is critical to the quality of service provided in a large-scale emergency such as an earthquake. In the existing literature, large-scale emergency facility location models have either assumed that disaster response facilities will always be functioning and available when required, or that the functioning of a facility is independent of a particular disaster scenario. In this paper, new location models are presented that explicitly take into consideration the stochastic nature of the impact a disaster can have on the disaster response facilities and on the population centers in surrounding areas. A comparison, via a case study, of the results obtained using our models with those from models available in the literature suggests that the locations proposed by the models in this paper significantly reduce the expected cost of transporting supplies when the damage a disaster causes to the disaster response facilities and nearby areas is taken into account.
Lastly, a distributed approximate algorithm for forming the communication backbone in wireless sensor networks is presented. Some of the most popular routing protocols for wireless sensor networks require a virtual backbone for efficient communication between the sensors. Connected Dominating Sets (CDS) have been studied as a method of choosing nodes to be in the backbone. The traditional approach is to assume that the transmission range of each node is given and then minimize the number of nodes in the CDS representing the backbone. A recently introduced alternative strategy is based on the concept of k-bottleneck connected dominating set (k-BCDS), which, given a positive integer k, minimizes the transmission range of the nodes that ensures a CDS of size k exists in the network. This paper provides a 6-approximate distributed algorithm for the k-BCDS problem. The results of empirical evaluation of the proposed algorithm are also included.
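For intuition about virtual backbones (a generic centralised greedy sketch, not the 6-approximate distributed k-BCDS algorithm of the paper), a connected dominating set can be grown from a high-degree node by repeatedly adding the neighbour of the current set that newly dominates the most nodes:

```python
def greedy_cds(adj):
    """Greedy connected dominating set on a connected graph given as an
    adjacency dict.  Start from a maximum-degree node; while some node is
    undominated, add the neighbour of the current set that newly
    dominates the most nodes.  Connectivity of the CDS is preserved
    because we only ever add neighbours of nodes already in it."""
    nodes = set(adj)
    start = max(nodes, key=lambda v: len(adj[v]))
    cds = {start}
    dominated = {start} | adj[start]
    while dominated != nodes:
        frontier = {v for c in cds for v in adj[c]} - cds
        best = max(frontier, key=lambda v: len((adj[v] | {v}) - dominated))
        cds.add(best)
        dominated |= adj[best] | {best}
    return cds

# Path 0-1-2-3-4: the backbone must contain the interior nodes 1, 2, 3.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
backbone = greedy_cds(path)
```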
|
14 |
Contributions à la résolution globale de problèmes bilinéaires appliqués à l'industrie porcine / Contribution to the global resolution of bilinear problems applied to the swine industry. Joannopoulos, Emilie 27 April 2018 (has links)
Today, feed represents more than 70% of the production cost in the growing-finishing pig industry, and in the current economic context it is important to reduce it. The feeding system currently in use is based on phases and is expressed as a linear model. The more recently introduced feeding system based on feed blends is represented by a bilinear model. We introduce here a new formulation that combines the traditional phase-based and blend-based feeding systems: the hybrid method. We show that it can reduce the feed cost by more than 5%. The main part of this manuscript concerns the global optimization of the bilinear, non-convex problem modelling the blend feeding system. Bilinearity appears in both the objective function and the constraints. This problem can have several local minima, but we seek a global one. It is equivalent to a pooling problem, and we prove that it is strongly NP-hard. After presenting first results, we state the conjecture that every local minimum is a global minimum for this bilinear problem applied to the pig industry, and we prove it on a small example. Since our problem cannot be solved by global solvers because of its size, we apply approaches such as penalization of the bilinear terms, discretization, and Lagrangian and convex relaxations. All these approaches support our conjecture. We also study the robustness of the models to variations in ingredient prices, and carry out a multicriteria study yielding numerical results that considerably reduce phosphorus and nitrogen excretion, another important issue.
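To give a flavour of the bilinear structure (a hypothetical toy instance, not the thesis model), suppose we choose both the composition z of a single feed mix and the amount t fed; the nutrient requirement then couples t and z multiplicatively. Discretising z, one of the approaches mentioned above, reduces each subproblem to a trivial one-variable linear problem:

```python
def min_cost_blend(aA, aB, cA, cB, R, steps=100):
    """Toy bilinear blend: choose the mix composition z (fraction of
    ingredient A, nutrient densities aA/aB, unit costs cA/cB) and the
    amount t, with t * nutrient(z) >= R, minimising t * cost(z).  Both
    the objective and the constraint contain the product of decisions t
    and z, so the joint problem is bilinear; for each fixed z it
    collapses to t = R / nutrient(z), and we grid-search z
    (the discretisation approach).  Returns (cost, z, t)."""
    best = None
    for i in range(steps + 1):
        z = i / steps
        nutrient = z * aA + (1 - z) * aB
        if nutrient <= 0:
            continue
        t = R / nutrient                       # cheapest feasible amount for this z
        cost = t * (z * cA + (1 - z) * cB)
        if best is None or cost < best[0]:
            best = (cost, z, t)
    return best

# Ingredient B has the better cost-per-nutrient ratio (1/4 < 3/10),
# so the global optimum is the pure-B mix, z = 0, at cost 100/4 = 25.
cost, z, t = min_cost_blend(aA=10.0, aB=4.0, cA=3.0, cB=1.0, R=100.0)
```

The grid search recovers the global optimum directly; on realistic instances with many mixes, phases and nutrient constraints, each fixed grid point yields a full LP rather than a closed-form solution.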
|
15 |
Understanding the Dynamics of Short-Range Electron Transfer Reactions in Biological Systems. Lu, Yangyi, Lu January 2018 (has links)
No description available.
|
16 |
Integrality Gaps for Strong Linear Programming and Semidefinite Programming Relaxations. Georgiou, Konstantinos 17 February 2011 (has links)
The inapproximability of NP-hard combinatorial optimization problems lies at the heart of theoretical computer science. A negative result can be either conditional, where the starting point is a complexity assumption, or unconditional, where the inapproximability holds for a restricted model of computation.
Algorithms based on Linear Programming (LP) and Semidefinite Programming (SDP) relaxations are among the most prominent models of computation. The related and common measure of efficiency is the integrality gap, which sets the limitations of the associated algorithmic schemes. A number of systematic procedures, known as lift-and-project systems, have been proposed to improve the integrality gap of standard relaxations. These systems build strong hierarchies of either LP relaxations, such as the Lovasz-Schrijver (LS) and the Sherali-Adams (SA) systems, or SDP relaxations, such as the Lovasz-Schrijver SDP (LS+), the Sherali-Adams SDP (SA+) and the Lasserre (La) systems.
In this thesis we prove integrality gap lower bounds for the aforementioned lift-and-project systems and for a number of combinatorial optimization problems, whose inapproximability is yet unresolved. Given that lift-and-project systems produce relaxations that have given the best algorithms known for a series of combinatorial problems, the lower bounds can be read as strong evidence of the inapproximability of the corresponding optimization problems. From the results found in the thesis we highlight the following:
For every epsilon>0, the level-Omega(sqrt(log n/ log log n)) LS+ relaxation of the Vertex Cover polytope has integrality gap 2-epsilon.
The integrality gap of the standard SDP for Vertex Cover remains 2-o(1) even if all hypermetric inequalities are added to the relaxation. The resulting relaxations are incomparable to the SDP relaxations derived by the LS+ system. Finally, the addition of all ell1 inequalities eliminates all solutions not in the integral hull.
For every epsilon>0, the level-Omega(sqrt(log n/ log log n)) SA relaxation of Vertex Cover has integrality gap 2-epsilon. The integrality gap remains tight even for superconstant-level SA+ relaxations.
We prove a tight lower bound for the number of tightenings that the SA system needs in order to prove the Pigeonhole Principle. We also prove sublinear and linear rank bounds for the La and SA systems respectively for the Tseitin tautology.
Linear levels of the SA+ system treat highly unsatisfiable instances of fixed predicate-P constraint satisfaction problems over q-ary alphabets as fully satisfiable, when the satisfying assignments of the predicates P can be equipped with a balanced and pairwise independent distribution.
We study the performance of the Lasserre system on the cut polytope. When the input is the complete graph on 2d+1 vertices, we show that the integrality gap is at least 1+1/(4d(d+1)) for the level-d SDP relaxation.
|
18 |
Microporous Membranes Derived using Crystallisation Induced Phase Separation in PVDF/PMMA (Polyvinylidene Fluoride/ Polymethyl Methacrylate) Blends in Presence of Multiwalled Carbon Nanotubes. Sharma, Maya January 2017 (has links) (PDF)
Segmental chain dynamics in polymer blends is a very important topic, not only from a fundamental point of view but also for technological applications. Because of the difficulties in commercializing new polymers, industry has turned increasingly towards blending of polymers to optimise their end-use (mechanical, rheological) properties. The design of tailor-made materials would be enormously facilitated by an understanding of the blending phenomena at a molecular level. The key question to address is: how is the dynamics of each component of the blend modified by blending? This thesis systematically studies the effect of multiwalled carbon nanotubes on the chain dynamics, demixing temperature, structural properties and evolution of morphology in a classical miscible polymer blend system (PVDF/PMMA).
The thesis comprises six chapters. Chapter 1 is an introductory chapter that outlines the fundamentals of polymer blends, crystallisation in polymer blends and the basics of dielectric spectroscopy. One of the rationales of this work is to systematically study whether phase separation in these blends can be used as a tool to develop membranes for water purification; this chapter therefore also gives an overview of the reported studies of ultrafiltration membrane fabrication and of the factors affecting membrane morphology and flux. In Chapter 2, the materials and the experimental procedures used to carry out the experiments are discussed.
Chapter 3 discusses the effect of concentration of PMMA and amine functionalized multiwalled carbon nanotubes (MWNTs) on the crystallisation induced phase separation using FTIR, XRD, POM and shear rheology. Electron microscopy and selective etching confirmed the localisation of MWNTs in the PVDF phase of the blends. Blends with MWNTs facilitated in heterogeneous nucleation manifesting in an increase in crystallisation temperature. The crystallisation induced phase separation in PVDF/PMMA blends was observed to influence the interconnected network of MWNTs in the blends.
Chapter 4 discusses the effect of the concentration of PMMA and of MWNTs on the miscibility; the segmental relaxations were probed in situ by DSC and dielectric relaxation spectroscopy (DRS). The dynamic heterogeneity in the blends, manifested by the presence of an extra relaxation at a higher frequency at or below the crystallisation-induced phase separation temperature, was also discussed. We found that PVDF/PMMA blends (PVDF ≥ 80 wt%) exhibit three distinct relaxations: αc, corresponding to crystalline PVDF; αβ, the segmental relaxation of PMMA; and αm, associated with the amorphous miscible phase, whereas all relaxations overlap and constitute a single broad relaxation in PVDF/PMMA blends (PVDF ≤ 70 wt%). This confirms that there is a certain composition width in this blend within which three distinct relaxations can be traced. This could be due to several factors: the width of the crystal-amorphous interphase in the crystal lamellae, the crystal size and the morphology are all strongly contingent on the concentration of PMMA. The relaxations are not very distinct in the presence of MWNTs because defective spherulites shift the relaxations towards higher frequency.
Chapter 5 has attempted to tune the microporous morphology of PVDF membranes using crystallisation induced phase separation in PVDF/PMMA blends. As PVDF/PMMA is a melt-miscible blend, the samples were allowed to crystallise and the amorphous PMMA phase, which isolates in the interlamellar or inter-spherulitic regions in the blends, was etched out to generate
microporous structures. The pore sizes can be tuned by varying the PMMA concentration in the blends: 60/40 PVDF/PMMA blends showed larger pores than 90/10 blends. We further modified the PVDF membranes by sputtering silver on the surface; bacterial cell viability was distinctly suppressed (99%) on the silver-sputtered membranes. ICP analysis suggests that the slow release of Ag+ ions from the sputtered membrane surface helped develop the antibacterial surface. Our findings open new avenues in designing water filtration membranes and also help in understanding crystallisation kinetics for tuning pore size in membranes.
Chapter 6 summarises the important results of this work. MWNTs act as a heterogeneous nucleating agent and interact specifically with PVDF, thereby influencing the dynamics of the PVDF chains. MWNTs can also restrict amorphous segmental mobility and can influence intermolecular cooperativity and coupling. Crystallisation-induced phase separation in the various blends can result in various crystalline morphologies depending on the PVDF concentration. By selectively etching PMMA from the phase-separated blends, a microporous morphology can be generated.
|
19 |
Alignement moléculaire : caractérisation et application à la mesure de thermalisation ultra-rapide et au contrôle de génération d'harmoniques / Molecular alignment: characterisation and application to the measurement of ultra-fast thermalization and to the control of harmonic generation. Houzet, Julien 16 December 2013 (has links)
The topic of this thesis is molecular alignment, an important subject that opens the way toward much finer control of many phenomena. We have developed a new technique for measuring molecular alignment along one axis that preserves the sign of the alignment. Like the other measurement techniques developed by the team, it is based on measuring the refractive-index variation induced by the molecular alignment. The technique developed next also permits the measurement of molecular alignment while being an application of it, since it here enables third-harmonic generation. In the last study, molecular alignment is used to show that it provides the resolution necessary for studying the thermalization of an excited molecular sample.
|
20 |
Sur quelques problèmes de reconstruction en imagerie MA-TIRF et en optimisation parcimonieuse par relaxation continue exacte de critères pénalisés en norme-l0 / On some reconstruction problems in MA-TIRF imaging and in sparse optimization using continuous exact relaxation of l0-penalized criteria. Soubies, Emmanuel 14 October 2016 (has links)
This thesis is devoted to two problems encountered in signal and image processing. The first concerns the 3D reconstruction of biological structures from multi-angle total internal reflection fluorescence microscopy (MA-TIRF). In this context, we propose to tackle the inverse problem with a variational approach and analyze the effect of the regularization. A set of simple experiments is then proposed to both calibrate the system and validate the model used. The proposed method has been shown to be able to reconstruct precisely a phantom sample of known geometry over a 400 nm depth layer, to co-localize two fluorescent molecules used to mark the same biological structures, and to observe known biological phenomena, all with an axial resolution on the order of 20 nm. The second part of this thesis considers more precisely the l0 regularization and the minimization of the penalized least-squares criterion (l2-l0) in the context of exact continuous relaxations of this functional. We first propose the Continuous Exact l0 (CEL0) penalty, leading to a relaxation of the l2-l0 functional that preserves its global minimizers and for which, from each local minimizer, a local minimizer of l2-l0 can be defined by simple thresholding. Moreover, we show that this relaxed functional eliminates some local minimizers of the initial functional. The minimization of this functional with nonsmooth nonconvex algorithms is then used in various applications, showing the interest of minimizing the relaxation rather than directly minimizing the l2-l0 criterion. Finally, we propose a unified view of the continuous penalties of the literature within this exact-reformulation framework.
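As a minimal illustration of the l2-l0 criterion itself (a generic hard-thresholding sketch under the simplifying assumption of an identity design, not the CEL0 machinery of the thesis), the penalised least-squares problem min over x of (1/2)*||y - x||^2 + lam*||x||_0 decouples coordinate-wise and is solved exactly by hard thresholding at sqrt(2*lam):

```python
import math

def hard_threshold(y, lam):
    """Exact minimiser of (1/2)*||y - x||^2 + lam*||x||_0 for an
    identity design: each coordinate is kept iff keeping it (penalty
    cost lam, zero residual) is cheaper than zeroing it (residual cost
    y_i^2 / 2), i.e. iff |y_i| > sqrt(2*lam)."""
    thr = math.sqrt(2.0 * lam)
    return [yi if abs(yi) > thr else 0.0 for yi in y]

y = [3.0, 0.5, -2.0, 0.1]
x = hard_threshold(y, lam=0.5)   # threshold sqrt(2*0.5) = 1.0
```

For a general (non-orthogonal) design matrix the problem no longer decouples and local minimizers proliferate, which is precisely the situation the exact continuous relaxations above are designed to handle.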
|