191

Source-channel coding for wireless networks

Wernersson, Niklas January 2006
The aim of source coding is to represent information as accurately as possible using as few bits as possible; to do so, redundancy in the source needs to be removed. The aim of channel coding is in some sense the opposite, namely to introduce redundancy that can be exploited to protect the information when it is transmitted over a nonideal channel. Combining these two techniques leads to the area of joint source-channel coding, which in general makes it possible to achieve better performance when designing a communication system than when source and channel codes are designed separately. In this thesis two particular areas in joint source-channel coding are studied: multiple description coding (MDC) and soft decoding.

Two new MDC schemes are proposed and investigated. The first is based on sorting a frame of samples and transmitting, as side information/redundancy, an index that describes the resulting permutation. If some of the transmitted descriptors are lost during transmission, this side information (if received) can be used to estimate the lost descriptors from the received ones. The second scheme uses permutation codes to produce different descriptions of a block of source data; these descriptions can be used jointly to estimate the original source data. Finally, multiple description coding using pairwise correlating transforms, as introduced by Wang et al., is also studied, and a modification of its quantization step is proposed which yields a performance gain.

A well-known result in joint source-channel coding is that the performance of a communication system can be improved by soft decoding of the channel output, at the cost of higher decoding complexity. An alternative is to quantize the soft information and store pre-calculated soft decision values in a lookup table. In this thesis we propose new methods for quantizing soft channel information, to be used in conjunction with soft-decision source decoding. The issue of how best to construct finite-bandwidth representations of soft information is also studied.
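The first scheme's use of a sort permutation as side information can be illustrated with a small sketch. This is a hypothetical, simplified illustration, not the thesis's exact construction: the decoder knows each sample's rank within the frame, so a lost descriptor can be estimated from the received order statistics.

```python
def encode(frame):
    """Return the frame plus side information: each sample's rank in the frame."""
    order = sorted(range(len(frame)), key=lambda i: frame[i])
    ranks = [0] * len(frame)
    for r, i in enumerate(order):
        ranks[i] = r
    return frame, ranks

def decode(received, ranks):
    """Estimate lost samples (None entries) from their known ranks.

    A lost sample's rank tells the decoder between which received order
    statistics it must lie, so it is estimated by their midpoint.
    """
    present = sorted(v for v in received if v is not None)
    est = list(received)
    for i, v in enumerate(received):
        if v is None:
            r = ranks[i]
            # number of received samples whose rank is smaller
            smaller = sum(1 for j, w in enumerate(received)
                          if w is not None and ranks[j] < r)
            lo = present[smaller - 1] if smaller > 0 else present[0]
            hi = present[smaller] if smaller < len(present) else present[-1]
            est[i] = (lo + hi) / 2
    return est
```

With the frame [4.0, 1.0, 3.0, 2.0] and the third descriptor lost, the received neighbors in sorted order bracket the missing value, and the midpoint recovers it exactly in this toy case.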
192

The Impact of the COVID-19 Lockdown on the Urban Air Quality: A Machine Learning Approach.

Bobba, Srinivas January 2021
SARS-CoV-2, the virus responsible for the current COVID-19 pandemic, was first reported in Wuhan, China, on 31 December 2019. Since then, a set of rapid and strict countermeasures has been taken to prevent its propagation around the world. While most researchers who studied the effect of the COVID-19 lockdown on air quality concluded that pollution was reduced, the most reliable methods for quantifying that reduction are still debated. In this study, we analyzed how COVID-19 lockdown procedures impacted air quality in selected cities around the world: New Delhi, Diepkloof, Wuhan, and London. The results show that the air quality index (AQI) improved by 43% in New Delhi, 18% in Wuhan, 15% in Diepkloof, and 12% in London during the initial lockdown (19 March 2020 to 31 May 2020) compared with the corresponding four-year pre-lockdown period. Furthermore, the concentrations of four main pollutants, i.e., NO2, CO, SO2, and PM2.5, were analyzed before and during the lockdown in India. The quantified drop in pollution is supported by statistical tests, namely the ANOVA test and the permutation test. Overall, decreases of 58%, 61%, 18%, and 55% are observed in NO2, CO, SO2, and PM2.5 concentrations, respectively. To check whether changes in the weather played a role in the reduction of pollution levels, we analyzed how weather factors correlate with pollutants using a correlation matrix. Finally, machine learning regression models incorporating weather data are constructed to assess the lockdown's impact on air quality in India. Gradient boosting performed well in predicting the drop in PM2.5 concentration for individual cities in India, and the feature-importance rankings of the regression models agree with the factors correlated with PM2.5. This study concludes that the COVID-19 lockdown had a significant effect on the natural environment and on air quality improvement.
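The permutation test used above to support the pollution drop can be sketched in a few lines. This is a generic two-sample version for a difference in means; the thesis's exact test statistic and data are not reproduced here.

```python
import random

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sided permutation test for a difference in means.

    Repeatedly shuffles the pooled samples and counts how often a random
    relabeling produces a mean difference at least as extreme as observed.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            count += 1
    # add-one correction keeps the p-value strictly positive
    return (count + 1) / (n_perm + 1)
```

Two clearly separated samples yield a p-value near 1/(n_perm+1), while identical samples yield a p-value of 1.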
193

Permutation-Invariant Semantic Parsing

Samuel, David January 2021
Deep learning has been successfully applied to semantic graph parsing in recent years. However, to the best of our knowledge, all graph-based parsers depend on a strong assumption about the ordering of graph nodes. This work explores a permutation-invariant approach to sentence-to-graph semantic parsing. We present a versatile, cross-framework, and language-independent architecture for universal modeling of semantic structures. To empirically validate our method, we participated in the CoNLL 2020 shared task, Cross-Framework Meaning Representation Parsing (MRP 2020), which evaluated the competing systems on five different frameworks (AMR, DRG, EDS, PTG, and UCCA) across four languages. Our parsing system, called PERIN, was one of the winners of this shared task. Thus, we believe that permutation invariance is a promising new direction in the field of semantic parsing.
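The core idea of permutation invariance — scoring predicted graph nodes against gold nodes independently of their order — can be sketched with a brute-force matching loss. This is only an illustration of the principle, not PERIN's actual (relaxed, efficiently solved) matching.

```python
from itertools import permutations

def matching_loss(pred, gold, cost):
    """Permutation-invariant loss: the minimum total node cost over all
    alignments of predicted nodes with gold nodes.

    Brute force over alignments; real parsers use the Hungarian algorithm
    instead of enumerating all permutations.
    """
    n = len(pred)
    assert n == len(gold)
    return min(sum(cost(pred[i], gold[p[i]]) for i in range(n))
               for p in permutations(range(n)))

# 0/1 cost on node labels: the loss then ignores the order of predictions.
label_cost = lambda a, b: 0 if a == b else 1
```

With this loss, any reordering of the predicted nodes scores the same, which is exactly the property an order-free graph parser needs.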
194

Differential properties of permutations and application to symmetric cryptography

Suder, Valentin 05 November 2014
The work carried out in this thesis lies at the interface of discrete mathematics, finite field theory, and symmetric cryptography. In block ciphers, as well as in hash functions, S-boxes are the small non-linear functions that form the indispensable confusion layer. In the first part of this document, we are interested in the design of bijective S-boxes with the best resistance to differential attacks. We study the compositional inverses of the so-called Almost Perfect Nonlinear power functions. Then, we extensively study a class of sparse permutation polynomials with low differential uniformity. Finally, we build functions over finite fields from their discrete derivatives. In the second part, we carry out an automatic study of a certain class of differential attacks: impossible differential cryptanalysis. This chosen-plaintext attack has been shown to be very efficient against iterative block ciphers. It exploits the knowledge of a differential that occurs with probability zero in order to discard candidate keys. However, this cryptanalysis is very technical, and many flaws have been discovered, invalidating a number of attacks published in the past. Our goal is to formalize, improve, and automate the evaluation of the complexities of such an attack, in order to unify and optimize the results one can obtain. We also propose new techniques that aim at reducing the necessary data and time complexities. We finally demonstrate the efficiency of our approach by providing some of the best impossible differential cryptanalyses against the Feistel-oriented block ciphers CLEFIA, Camellia, LBlock, and Simon.
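The differential criterion for S-boxes can be made concrete by computing the differential uniformity, the maximum entry of the difference distribution table. The Gold power map x³ over GF(2³) used below is a standard textbook example of an almost perfect nonlinear (APN) permutation, not one of the thesis's constructions.

```python
def gf8_mul(a, b):
    """Multiplication in GF(2^3) modulo the irreducible x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:          # reduce when degree reaches 3
            a ^= 0b1011
        b >>= 1
    return r

def gf8_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf8_mul(r, a)
    return r

def differential_uniformity(sbox):
    """Max over (da != 0, db) of #{x : S(x ^ da) ^ S(x) == db}."""
    n = len(sbox)
    worst = 0
    for da in range(1, n):
        for db in range(n):
            count = sum(1 for x in range(n) if sbox[x ^ da] ^ sbox[x] == db)
            worst = max(worst, count)
    return worst

# x^3 is a permutation of GF(2^3) (gcd(3, 7) = 1) and is APN there.
sbox = [gf8_pow(x, 3) for x in range(8)]
```

A differential uniformity of 2 is the best possible over a binary field (APN); a linear map such as the identity scores the worst possible value, the field size.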
195

Addressing nonlinear systems with information-theoretical techniques

Castelluzzo, Michele 07 July 2023
The study of experimental recordings of dynamical systems often consists in the analysis of the signals produced by those systems. Time series analysis comprises a wide range of methodologies ultimately aimed at characterizing the signals and, eventually, gaining insight into the underlying processes that govern the evolution of the system. A standard way to tackle this issue is spectral analysis, which uses Fourier or Laplace transforms to convert time-domain data into a more useful frequency space. These analytical methods highlight periodic patterns in the signal and reveal essential characteristics of linear systems. Most experimental signals, however, exhibit strange and apparently unpredictable behavior, which requires more sophisticated analytical tools to gain insight into the nature of the underlying processes generating them. This is the case when nonlinearity enters the dynamics of a system. Nonlinearity gives rise to unexpected and fascinating behavior, among which is the emergence of deterministic chaos. In recent decades, chaos theory has become a thriving field of research for its potential to explain complex and seemingly inexplicable natural phenomena. The peculiarity of chaotic systems is that, despite being governed by deterministic principles, their evolution shows unpredictable behavior and a lack of regularity. These characteristics make standard techniques, like spectral analysis, ineffective for studying such systems. Furthermore, their irregular behavior gives the appearance of being governed by stochastic processes, all the more so for experimental signals, which are inevitably affected by noise. Nonlinear time series analysis comprises a set of methods that aim to overcome the strange and irregular evolution of these systems by measuring characteristic invariant quantities that describe the nature of the underlying dynamics.

Among those quantities, the most notable are possibly the Lyapunov exponents, which quantify the unpredictability of the system, and measures of dimension, like the correlation dimension, which unravel the peculiar geometry of a chaotic system's state space. These are ultimately analytical techniques: they can often be estimated exactly for simulated systems, where the differential equations governing the system's evolution are known, but can prove difficult or even impossible to compute on experimental recordings. A different approach to signal analysis is provided by information theory. Despite being initially developed in the context of communication theory, in the seminal 1948 work of Claude Shannon, information theory has since become a multidisciplinary field, finding applications in biology and neuroscience as well as in the social sciences and economics. From the physical point of view, the most phenomenal contribution of Shannon's work was the discovery that entropy is a measure of information: computing the entropy of a sequence, or a signal, answers the question of how much information is contained in the sequence. Alternatively, considering the source, i.e. the system that generates the sequence, entropy estimates how much information the source is able to produce. Information theory comprises a set of techniques which can be applied to the study of, among others, dynamical systems, offering a framework complementary to standard signal analysis techniques. The concept of entropy, however, was not new in physics: it had first been defined in the deeply physical context of heat exchange in nineteenth-century thermodynamics. Half a century later, in the context of statistical mechanics, Boltzmann revealed the probabilistic nature of entropy, expressing it in terms of statistical properties of the particles' motion in a thermodynamic system.

This made a first link between entropy and the dynamical evolution of a system. In the years following Shannon's work, the concept of entropy was further developed, by Von Neumann and Kolmogorov among others, into a tool for computer science and complexity theory. In Kolmogorov's work in particular, information theory and entropy are revisited from an algorithmic perspective: given an input sequence and a universal Turing machine, Kolmogorov found that the length of the shortest set of instructions, i.e. the program, that enables the machine to compute the input sequence is related to the sequence's entropy. This definition of the complexity of a sequence already hints at the difference between random and deterministic signals: a truly random sequence requires as many instructions for the machine as the size of the input sequence itself, since there is no option other than programming the machine to copy the sequence point by point. A sequence generated by a deterministic system, on the other hand, simply requires knowing the rules governing its evolution, for example the equations of motion in the case of a dynamical system. It is therefore through the work of Kolmogorov, and independently Sinai, that entropy came to be applied directly to the study of dynamical systems and, in particular, deterministic chaos. The so-called Kolmogorov-Sinai entropy is in fact a well-established measure of how complex and unpredictable a dynamical system can be, based on the analysis of trajectories in its state space. In recent decades, the use of information theory in signal analysis has led to the elaboration of many entropy-based measures, such as sample entropy, transfer entropy, mutual information, and permutation entropy, among others.

These quantities make it possible to characterize not only single dynamical systems, but also the correlations between systems, and even more complex interactions like synchronization and chaos transfer. The wide spectrum of applications of these methods, as well as the need for theoretical studies to provide them with a sound mathematical background, keeps information theory a thriving topic of research. In this thesis, I approach the use of information theory on dynamical systems starting from fundamental issues, such as estimating the uncertainty of Shannon entropy measures on a sequence of data in the case of an underlying memoryless stochastic process. This result, besides giving insight into sensitive and still-unsolved aspects of entropy-based measures, provides a relation between the maximum uncertainty of Shannon entropy estimates and the size of the available sequences, thus serving as a practical rule for experiment design. Furthermore, I investigate the relation between entropy and characteristic quantities of nonlinear time series analysis, namely Lyapunov exponents, with examples of this analysis on recordings of a nonlinear chaotic system. Finally, I discuss other entropy-based measures, among them mutual information, and how they compare to analytical techniques aimed at characterizing nonlinear correlations between experimental recordings. In particular, the complementarity between information-theoretical tools and analytical ones is shown on experimental data from the field of neuroscience, namely magnetoencephalography and electroencephalography recordings, as well as meteorological data.
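Of the entropy-based measures mentioned, permutation entropy is simple enough to sketch in a few lines. This is a minimal version of the standard Bandt-Pompe estimator, assuming no ties within a window.

```python
from math import log2, factorial

def permutation_entropy(series, order=3):
    """Normalized permutation entropy.

    Counts the ordinal patterns (rank orderings) of all length-`order`
    windows, computes their Shannon entropy, and divides by log2(order!)
    so the result lies in [0, 1].
    """
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum(c / total * log2(c / total) for c in counts.values())
    return h / log2(factorial(order))
```

A monotone series shows a single ordinal pattern and thus zero entropy, while an irregular series spreads probability across patterns and scores closer to 1.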
196

Computational intelligence in production scheduling with additional resources

Alfaro Fernández, Pedro 26 October 2023
This Doctoral Thesis addresses the permutation flowshop problem with additional renewable resources, a more realistic version of the classic permutation flowshop problem that is widely studied in the literature. Including resources helps bring the academic-scientific world closer to the real world of industry. A complete bibliographic review has been carried out, not limited to flowshop problems but also covering similar scheduling problems that consider resources. In this review, no articles were found in the literature for the specific problem studied in this thesis. Therefore, the main contribution of this Doctoral Thesis is the first study of this problem and the proposal and adaptation of methods for its resolution. Initially, the problem is modeled as a mixed integer linear program (MILP). Given the complexity of the problem, the MILP is only capable of solving very small instances. It is therefore necessary to adapt, design, and implement constructive heuristics and metaheuristics to obtain good solutions in reasonable computation time. To evaluate the effectiveness and efficiency of the proposed methods, problem instances are generated starting from the sets most used in the literature for the permutation flowshop. These instances are used both to calibrate the different methods and to evaluate their performance through massive computational experiments. The experiments show that the proposed heuristics are simple methods that achieve feasible solutions very quickly. To improve the solutions obtained with the heuristics and facilitate movement to other solution spaces, three metaheuristics are proposed: a method based on iterated local search (ILS), an iterated greedy method (IG), and a hybrid genetic algorithm (HGA). All of them use the most effective of the proposed heuristics as initial solution or solutions. The metaheuristics obtain the best solutions within reasonable computation times, even for the largest instances. All the methods have been implemented within the FACOP platform (Framework for Applied Combinatorial Optimization Problems). This platform can incorporate new optimization algorithms for operational research problems related to organizational decision-making and is designed to address real cases in companies. Incorporating all the methodologies proposed in this Doctoral Thesis into this platform brings the academic world closer to the business world. / Alfaro Fernández, P. (2023). Inteligencia computacional en la programación de la producción con recursos adicionales [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/198891
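The objective all of these heuristics and metaheuristics evaluate — the makespan of a permutation schedule — follows a simple recurrence. The sketch below evaluates it for the classic variant only, leaving out the additional-resource constraints that are the subject of the thesis.

```python
def makespan(sequence, p):
    """Completion time of the last job on the last machine.

    p[j][m] is the processing time of job j on machine m; every machine
    processes the jobs in the same order (a permutation schedule).
    """
    machines = len(p[0])
    prev = [0] * machines            # completion times of the previous job
    for job in sequence:
        cur = [0] * machines
        t = 0
        for m in range(machines):
            # a job starts on machine m once the machine is free (prev[m])
            # and the job has finished on machine m-1 (t)
            t = max(t, prev[m]) + p[job][m]
            cur[m] = t
        prev = cur
    return prev[-1]
```

A constructive heuristic or metaheuristic then searches the space of permutations for a sequence minimizing this value; already on a 2-job, 2-machine instance the two orderings give different makespans.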
197

Pattern posets: enumerative, algebraic and algorithmic issues

Cervetti, Matteo 22 March 2021
The study of patterns in combinatorial structures has grown over the past few decades into one of the most active trends of research in combinatorics. Historically, the study of permutations constrained not to contain subsequences ordered in various prescribed ways was motivated by the problem of sorting permutations with certain devices. However, the richness of this notion became especially evident from its plentiful appearances in several very different disciplines, such as pure mathematics, mathematical physics, computer science, biology, and many others. In recent decades, similar notions of patterns have been considered on discrete structures other than permutations, such as integer sequences, lattice paths, graphs, matchings, and set partitions. In the first part of this talk I will introduce the general framework of pattern posets and some classical problems about patterns. In the second part I will present some enumerative results obtained in my PhD thesis about patterns in permutations, lattice paths, and matchings. In particular, I will describe a generating tree with a single label for permutations avoiding the vincular pattern 1-32-4, a finite-automaton approach to enumerating lattice excursions avoiding a single pattern, and some results about matchings avoiding juxtapositions and liftings of patterns.
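The notion of classical pattern containment can be stated concretely: a permutation contains a pattern if some subsequence is order-isomorphic to it. A brute-force sketch (fine for small sizes, and not one of the thesis's algorithms) also reproduces the well-known fact that permutations avoiding any single pattern of length three are counted by the Catalan numbers.

```python
from itertools import combinations, permutations

def contains(perm, pattern):
    """True if some subsequence of `perm` is order-isomorphic to `pattern`.

    Two sequences of distinct values are order-isomorphic exactly when
    sorting their index sets by value gives the same tuple.
    """
    k = len(pattern)
    shape = tuple(sorted(range(k), key=lambda i: pattern[i]))
    for idx in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idx]
        if tuple(sorted(range(k), key=lambda i: sub[i])) == shape:
            return True
    return False

def count_avoiders(n, pattern):
    """Number of permutations of {1..n} with no occurrence of `pattern`."""
    return sum(1 for p in permutations(range(1, n + 1))
               if not contains(p, pattern))
```

For the pattern 132, the counts 14 and 42 at n = 4 and n = 5 are the Catalan numbers C_4 and C_5.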
198

Development of Differential Connectivity Graphs for the Characterization of Brain Regions Involved in Epilepsy

Amini, Ladan 21 December 2010
Pharmacoresistant patients are candidates for epilepsy surgery. The aim of this surgery is to remove the seizure onset zones (SOZs) without creating new neurological deficits. To localize the SOZs, one of the best approaches is to analyze intracerebral electroencephalograms (iEEG). However, recording seizures, which are rare and critical events, is difficult, unlike recording interictal epileptic discharges (IEDs), which are generally frequent and harmless. Predicting the SOZs by estimating the regions at the origin of the IEDs is therefore a very attractive alternative, and whether the estimated IED regions can be used to predict the SOZs has been at the heart of several studies. Despite interesting results, the question remains open, notably because of the limited reliability of the results provided by these methods. The objective of this thesis is to propose a robust method for estimating the regions at the origin of the IEDs (denoted LIED) from intracerebral iEEG recordings. The key point of this new method is the construction of a differential connectivity graph (DCG), which keeps only the nodes (electrodes) whose iEEG signals change significantly between the presence and absence of IEDs. In fact, several DCGs are built, each characteristic of one scale of a wavelet transform. The statistical reliability of the DCGs is ensured by permutation tests. The next step is to measure the quantity of information emitted by each node, and to assign each connection (edge) of the graph an orientation indicating the transfer of information from the source node to the target node. To this end, we introduce a new measure named Local Information (LI), which we compare with classical graph measures, and which robustly identifies the source nodes of the graph at each scale. The LIED regions are finally estimated by a multi-objective optimization method (of Pareto type, little used in the signal and image processing community) built on the LI values of the DCGs in the different frequency bands. The proposed method was validated on five epileptic patients who underwent resective surgery and were declared seizure-free. The estimated LIED regions were compared with the SOZs detected visually by the epileptologist and those detected automatically by a method using stimulation designed to provoke seizures. The comparison reveals congruent results between the SOZs and the estimated LIED regions. This approach thus provides LIEDs that should be valuable indications for the presurgical evaluation of epilepsy surgery.
199

Development of statistical potentials for the in silico study of proteins and analysis of alternative structuring

Dehouck, Yves 20 May 2005
The work presented in this thesis concerns the computational study of the relationships between the sequence of a protein and its three-dimensional structure(s). The unravelling of these relationships has many applications in different domains and is probably one of the most fascinating issues in molecular biology. The first part of our work is devoted to the development of statistical potentials derived from databases of known protein structures. These potentials make it possible to define a limited number of energy functions that embody the complex ensemble of interactions governing protein folding and stability (including some entropic contributions), and they can easily be adapted to simplified representations of protein structures. However, their physical meaning remains unclear, since their derivation requires several hypotheses and approximations whose impact is far from clearly understood. We studied some of the limitations of these potentials: their dependence on the size of the proteins included in the database, the non-additivity of the different potential terms, and the often-neglected importance of the specific protein environment felt by each residue. 
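Potentials of this kind are typically obtained through an inverse-Boltzmann relation, which converts frequencies observed in the structure database into effective energies. The abstract does not spell out the exact formulation used in the thesis, so the following is only a minimal sketch of the standard construction for a residue-pair distance potential; the `distance_potential` helper and all counts are illustrative assumptions:

```python
import math

KT = 0.593  # kcal/mol at ~298 K, a common illustrative choice

def distance_potential(pair_counts, all_counts):
    """Inverse-Boltzmann distance potential.

    pair_counts[(a, b)][k]: observed count of residue pair (a, b) in
    distance bin k; all_counts[k]: counts summed over all residue pairs.
    Returns E[(a, b)][k] = -kT * ln(f_obs / f_ref), where f_obs is the
    bin frequency for pair (a, b) and f_ref the reference frequency
    over all pairs.
    """
    total_all = sum(all_counts.values())
    energies = {}
    for pair, bins in pair_counts.items():
        total_pair = sum(bins.values())
        e = {}
        for k, n in bins.items():
            f_obs = n / total_pair          # pair-specific bin frequency
            f_ref = all_counts[k] / total_all  # database-wide reference
            e[k] = -KT * math.log(f_obs / f_ref)
        energies[pair] = e
    return energies
```

A distance bin in which a pair is observed more often than the database average gets a negative (favourable) energy, and one observed less often gets a positive energy; the size-dependent corrective functions discussed above would adjust `f_ref` per protein size class.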
Our results show that residue-based distance potentials are affected by the size of the database proteins, that this effect can be quite strong and is specific to each residue pair, and that it seems to result mostly from the inhomogeneous partition of hydrophobic and hydrophilic residues between the surface and the core of proteins. On the basis of these observations, we defined a set of corrective functions to take protein size into account while deriving the potentials. Furthermore, we developed a general procedure for deriving potentials and coupling terms, and used it to create an energy function that describes the correlations between several sequence and structure descriptors (the nature of each residue, the conformation of its main chain, its solvent accessibility, and the distances that separate it from other residues, both in space and along the sequence). This energy function shows strongly improved predictive power compared with the original potentials and with other potentials described in the literature. The second part describes the application of different programs, based on statistical potentials, to the study of proteins that adopt alternative structures. Domain swapping involves the exchange of a structural element between identical proteins and leads to the generation of an oligomeric unit. We showed that the presence of "structural weaknesses", regions that are not optimal with respect to the folding mechanisms or to the stability of the native structure, seems to be intimately linked with the swapping mechanism. In addition, cation-π interactions were frequently detected in some key locations and might also play an important role. Finally, we designed a set of mutations that are likely to affect the swapping propensities of different proteins. The experimental study of these mutations should make it possible to validate, or refine, our hypotheses concerning the importance of structural weaknesses and cation-π interactions. 
We also analysed another protein that undergoes large conformational changes: α1-antitrypsin. In this case, the structural modifications are necessary for the proper execution of the biological activity but, under certain circumstances, can lead to the formation of insoluble polymers and the development of diseases. With the aim of reaching a better understanding of the mechanisms responsible for this polymerisation, we sought to rationally design mutant proteins that display a controlled polymerisation propensity. An experimental study of these mutants was conducted by the group of Prof. S.P. Bottomley and remarkably confirmed our predictions.
200

New lower bounds for the branch-and-bound method for solving permutation flowshop problems

Tomazella, Caio Paziani 15 May 2019 (has links)
In an industrial context, production scheduling aims at allocating resources to operations so as to increase the efficiency of the manufacturing process. This task can be modelled in the form of scheduling problems, which are solved by minimizing a pre-determined performance criterion. Exact methods allow the optimal solution to be found, which can be applied directly in the manufacturing shop or used to validate heuristic and metaheuristic methods. However, the literature shows that both exact approaches, mixed-integer linear programming (MILP) models and branch-and-bound, are restricted to small-sized scheduling problems. 
The aim of this project is to propose new lower-bound formulations for the branch-and-bound method applied to permutation flowshop problems, in order to improve its efficiency and applicability. The proposed bounds are tested on permutation flowshop problems with sequence-dependent setup times, using total flow time and total tardiness as performance criteria. The applicability of each lower bound is evaluated through the number of nodes explored and the computational time required by the branch-and-bound to solve problem instances of different sizes.
