271 |
L'intelligence en essaim sous l'angle des systèmes complexes : étude d'un système multi-agent réactif à base d'itérations logistiques couplées / Swarm intelligence and complex systems: study of a reactive multi-agent system based on iterated logistic maps. Charrier, Rodolphe, 08 December 2009 (has links)
Swarm intelligence is now a field of distributed artificial intelligence in its own right. The questions it raises nevertheless touch many other scientific domains; in particular, the concept of a swarm fits naturally within the science of complex systems. This PhD thesis presents the design, characteristics, and applications of a novel model dedicated to the swarm intelligence field: the logistic multi-agent system (LMAS). The LMAS has its roots in complex-system modeling: it derives from coupled logistic map lattices, whose computational model has been adapted to the influence-reaction scheme of multi-agent systems. The model rests on principles shared with other disciplines, such as synchronization and parametric control, which are placed at the heart of the system's self-organization and adaptation mechanisms. The field-based environment is the other fundamental feature of the LMAS: it carries the agents' indirect interactions and acts as a data structure for the whole system. The work described in this thesis is applied mainly to simulation and combinatorial optimization. The interest and originality of the LMAS for swarm intelligence lie in the generic nature of its theoretical framework, which handles with a single model phenomena usually treated as distinct in the literature: flocking on the one hand and pheromone-based, ant-like stigmergy on the other. The model thus answers both the need to explain the mechanisms at work and the need to synthesize the algorithms that generate them.
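As an illustration of the dynamical substrate the LMAS is built on, a globally coupled logistic map lattice can be sketched in a few lines; the parameter values below are illustrative, not taken from the thesis.

```python
def logistic(a, x):
    """One logistic-map iteration x -> a * x * (1 - x)."""
    return a * x * (1.0 - x)

def step(states, a=3.8, eps=0.2):
    """Synchronous globally coupled update: each unit mixes its own
    logistic image with the mean field of all units (coupling eps)."""
    mean_field = sum(logistic(a, x) for x in states) / len(states)
    return [(1.0 - eps) * logistic(a, x) + eps * mean_field for x in states]

states = [0.1, 0.3, 0.5, 0.7]
for _ in range(200):
    states = step(states)   # trajectories stay inside [0, 1]
```

Varying the coupling strength and the map parameter is what drives synchronization and parametric control in such lattices.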
|
272 |
Modélisation stochastique de l'expression des gènes et inférence de réseaux de régulation / From stochastic modelling of gene expression to inference of regulatory networks. Herbach, Ulysse, 27 September 2018 (has links)
Gene expression in a cell has long been observable only through quantities averaged over cell populations. The recent development of single-cell transcriptomics now makes it possible to measure mRNA and protein levels in individual cells: it turns out that even in an isogenic population, cell-to-cell variability can be substantial. In particular, an averaged description is clearly insufficient to study cell differentiation, that is, the way stem cells commit to specialized fates. In this thesis, we are interested in the emergence of such cell decision-making from underlying gene regulatory networks, which we would like to infer from data. The starting point is the construction of a stochastic gene network model able to reproduce the observations from physical arguments. Genes are then described as an interacting particle system that happens to be a piecewise-deterministic Markov process, and our aim is to derive a tractable statistical model from its stationary distribution. We present two approaches: the first is a field approximation popular in physics, for which we obtain a concentration result; the second is based on a particular case that can be solved explicitly, which leads to a hidden Markov random field with interesting properties.
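A minimal sketch, with assumed rate constants, of the kind of piecewise-deterministic Markov process described above: a promoter jumps randomly between an "off" and an "on" state, while between jumps the mRNA level follows a deterministic flow that can be integrated exactly.

```python
import math
import random

def simulate_pdmp(t_end, kon=1.0, koff=2.0, s=10.0, d=1.0, seed=0):
    """One gene as a PDMP: random promoter switching (rates kon/koff),
    deterministic mRNA dynamics dm/dt = s*active - d*m between jumps."""
    rng = random.Random(seed)
    t, m, active = 0.0, 0.0, False
    while t < t_end:
        rate = koff if active else kon
        dt = min(rng.expovariate(rate), t_end - t)
        prod = s if active else 0.0
        # exact solution of dm/dt = prod - d*m over an interval of length dt
        m = m * math.exp(-d * dt) + (prod / d) * (1.0 - math.exp(-d * dt))
        t += dt
        active = not active
    return m

final_mrna = simulate_pdmp(50.0)
```

Sampling many such trajectories approximates the stationary distribution from which the thesis derives its statistical model; the two-state telegraph structure is a common simplification, not necessarily the exact network used in the thesis.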
|
273 |
Modélisation d’actifs industriels pour l’optimisation robuste de stratégies de maintenance / Modelling of industrial assets in view of robust maintenance optimization. Demgne, Jeanne Ady, 16 October 2015 (has links)
This work proposes new methods for assessing risk indicators associated with an investment plan, with a view to robust maintenance optimization for a fleet of components. Quantifying these indicators requires rigorous modeling of the stochastic evolution of the lifetimes of components subject to maintenance. To this end, we propose using piecewise-deterministic Markov processes, which are commonly used in dynamic reliability to model components interacting with their environment. The indicators used to compare candidate maintenance strategies are derived from the net present value (NPV). The NPV is the difference between the cumulated discounted cash flows of a reference strategy and those of a candidate maintenance strategy. From a probabilistic point of view, the NPV is the difference of two dependent random variables, which considerably complicates its study. In this thesis, quasi-Monte Carlo methods are used as alternatives to the Monte Carlo method for quantifying the probability distribution of the NPV. These methods are first applied to illustrative examples, then adapted to the evaluation of maintenance strategies for two systems of components of an electric power station. Coupling these methods with a genetic algorithm made it possible to optimize an investment plan.
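To give a flavor of the quasi-Monte Carlo idea invoked above, here is a van der Corput low-discrepancy sequence used to estimate a simple expectation; the NPV setting itself is not reproduced, and the integrand below is purely illustrative.

```python
def van_der_corput(n, base=2):
    """n-th point of the van der Corput low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, r = divmod(n, base)
        q += r / denom
    return q

def qmc_mean(f, n):
    """Quasi-Monte Carlo estimate of E[f(U)] for U uniform on [0, 1]."""
    return sum(f(van_der_corput(i)) for i in range(1, n + 1)) / n

est = qmc_mean(lambda u: u * u, 1024)   # true value is 1/3
```

Because the points fill [0, 1] far more evenly than pseudo-random draws, the error typically decays faster than the Monte Carlo rate, which is the motivation for using such sequences on the NPV distribution.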
|
274 |
Funções generalizadas, modelos de crescimento contínuos e discretos e caminhadas estocásticas em meios desordenados / Generalized functions, discrete and continuous growth models and stochastic walks on disordered media. Gonzalez, Rodrigo Silva, 06 July 2011 (has links)
This work is split into two parts. In the first, we present the generalized logarithm and exponential functions. From them a wide variety of other generalized functions can be obtained, allowing a single formulation of the oscillatory, exponential, and power-law behaviors that characterize the main physical phenomena. We also show that it is possible to generalize the stretched-exponential probability density function (pdf) and, from it, a wide range of other pdfs that characterize complex systems in physics. The generalized logarithm and exponential functions are also useful for unifying several continuous growth models into a single formulation: the generalized Tsoularis-Wallace growth model. The same can be done for discrete growth models, yielding the generalized Ricker model as the most general case. Closing the first part, we show that the generalized Gaussian pdf (a particular case of the generalized stretched exponential) is a solution of the nonlinear diffusion equation that characterizes the deterministic tourist walk. The second part presents the tourist walk in its two original versions: deterministic (DTW) and stochastic (STW). The first is a partially self-avoiding walk, characterized by a memory μ, over a disordered multidimensional medium formed by N points. In a one-dimensional environment it exhibits a transition between local and global exploration at a well-defined memory value μ1 = log2 N. In the stochastic version (of which the DTW is a particular case), the movement dynamics is governed by the memory μ and by a temperature T, which ultimately sets the displacement probabilities. Like the DTW, the STW also exhibits a transition between exploration regimes, characterized by a critical memory and temperature and by the age Np of the walk (an aging effect). Given the difficulty of treating the STW analytically, we introduce the modified stochastic tourist walk (MSTW), in which the parameter T instead represents the maximum range of a single step. This modification makes the walk analytically tractable: a general analytical expression for the transition is obtained as a function of the parameters μ, T, and Np. These results were validated by numerical experiments.
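A minimal sketch of the deterministic tourist walk on a one-dimensional medium: at each step the walker jumps to the nearest point not visited during the last μ steps. The point set below is illustrative; with μ = 1 the walker immediately falls into the characteristic two-point cycle.

```python
def tourist_walk(points, mu, start=0, max_steps=1000):
    """Deterministic tourist walk with memory mu: always move to the
    nearest point that was not visited in the last mu steps.
    Returns the sequence of visited point indices."""
    path = [start]
    for _ in range(max_steps):
        cur = points[path[-1]]
        forbidden = set(path[-mu:]) if mu > 0 else set()
        candidates = [i for i in range(len(points)) if i not in forbidden]
        if not candidates:
            break
        path.append(min(candidates, key=lambda i: abs(points[i] - cur)))
    return path

path = tourist_walk([0.0, 1.0, 2.5, 3.0], mu=1, max_steps=6)
```

Increasing μ lengthens the self-avoiding window, which is what produces the local-to-global exploration transition studied in the thesis.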
|
275 |
Análise de textura em imagens baseado em medidas de complexidade / Image texture analysis based on complexity measures. Condori, Rayner Harold Montes, 30 November 2015 (has links)
Texture analysis is one of the most basic and popular research areas in computer vision. It is also important in many other disciplines, such as the medical and biological sciences; detecting unhealthy tissue in lung magnetic resonance images, for example, is a common texture analysis task. We propose a novel method for texture characterization based on complexity measures, namely the Hurst exponent, the Lyapunov exponent, and the Lempel-Ziv complexity. These measures are applied to samples taken from images in the frequency domain. Three sampling methods are proposed: radial sampling, circular sampling, and sampling by partially self-avoiding deterministic walks (CDPA sampling). Each sampling method produces one feature vector per complexity measure, containing a set of descriptors that characterize the processed image. Each image is thus represented by nine feature vectors (three complexity measures times three sampling methods), which are compared on texture classification tasks. Finally, we concatenate the Lempel-Ziv feature vectors from the circular and radial samplings with descriptors obtained through traditional texture analysis techniques: local binary patterns (LBP), Gabor wavelets (GW), gray-level co-occurrence matrices (GLCM), and partially self-avoiding deterministic walks on graphs (CDPAg). This approach was tested on three datasets (Brodatz, USPtex, and UIUC), each with its own well-known challenges. The success rates of all traditional methods increased with the addition of relatively few Lempel-Ziv descriptors; in the LBP case, for example, the rate went from 84.25% to 89.09% with only five additional descriptors. In fact, concatenating just five Lempel-Ziv descriptors is enough to raise the success rate of every traditional method studied, whereas concatenating too many (more than 40, say) generally brings no further improvement. Given the similar results obtained on all three image databases, we conclude that the proposed method can be used to increase success rates in other texture classification tasks. Finally, the CDPA sampling also yields significant results, which can be improved in future work.
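The Lempel-Ziv descriptors named above rest on the classic LZ76 complexity, which counts the phrases of an exhaustive left-to-right parsing of a symbol sequence; a minimal sketch (on strings, whereas the thesis applies it to sampled image signals):

```python
def lempel_ziv_complexity(s):
    """LZ76 complexity: number of phrases in the exhaustive left-to-right
    parsing, each phrase being extended while it still occurs earlier
    in the sequence (overlap allowed)."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1           # one new phrase found
        i += l
    return c
```

Regular sequences such as "aaaa" parse into few phrases (a | aaa), while irregular sequences parse into many, which is what makes the count a texture descriptor.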
|
276 |
Analyse et traitement de grandeurs électriques pour la détection et le diagnostic de défauts mécaniques dans les entraînements asynchrones. Application à la surveillance des roulements à billes / Analysis and processing of electrical quantities for the detection and diagnosis of mechanical faults in asynchronous drives. Application to ball bearing monitoring. Trajin, Baptiste, 01 December 2009 (has links)
Electric drives based on asynchronous machines are widely used in industrial applications because of their low cost, high performance, and robustness. However, degraded operating modes may appear during the machine's lifetime, and one of the main causes of failure is ball bearing defects. To improve the availability and reliability of the drives, condition monitoring schemes can be implemented to enable predictive maintenance. This PhD thesis deals with the detection and diagnosis of mechanical faults, in particular rolling bearing defects, in asynchronous machines. Bearings are traditionally monitored through vibration analysis, but this approach is often expensive because of the required measurement chain. An alternative approach, based on the analysis and processing of stator currents, is therefore proposed. The study rests on the existence and characterization of the effects of load torque oscillations on the supply currents. A detection scheme is then introduced to detect several types of bearing faults. Moreover, mechanical variables such as rotating speed and torque are reconstructed to provide an indication of the presence of bearing defects. In addition, a diagnosis of stator current modulations is proposed, in steady state and in transient state, whatever the ratio between the carrier and modulation frequencies. The methods studied are the Hilbert transform, the Concordia transform, the instantaneous amplitude and frequency, and the Wigner-Ville distribution.
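The bearing fault types mentioned above each produce a characteristic mechanical frequency that shows up as a modulation of the stator currents. The standard kinematic formulas for the outer- and inner-race ball-pass frequencies are sketched below; the abstract does not give them explicitly, and the bearing geometry values in the test are illustrative.

```python
import math

def bearing_fault_frequencies(fr, n_balls, ball_d, pitch_d, contact_deg=0.0):
    """Standard kinematic fault frequencies of a rolling bearing for a
    shaft rotating at fr (Hz): ball-pass frequency on the outer race
    (BPFO) and on the inner race (BPFI)."""
    ratio = (ball_d / pitch_d) * math.cos(math.radians(contact_deg))
    bpfo = 0.5 * n_balls * fr * (1.0 - ratio)
    bpfi = 0.5 * n_balls * fr * (1.0 + ratio)
    return bpfo, bpfi
```

A fault signature is then searched for as sidebands around the supply frequency at these characteristic frequencies; note the identity BPFO + BPFI = n_balls * fr, a handy sanity check.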
|
277 |
Koncepce optimalizace lůžkového fondu kraje Vysočina / The Concept of Optimalization of Hospital Bed Fund in the Vysočina Region. Mlčák, Jan, January 2011
The thesis concentrates on theoretical and practical issues regarding the hospital bed fund. The theoretical part describes the reasons that lead to optimizing the hospital bed fund and the methods used to calculate a suitable number of hospital beds. The practical part proposes a concept for optimizing the hospital bed fund in the Vysočina region, calculating the minimum number of hospital beds needed using deterministic methods based on empirical formulas. This concept is then compared with the existing concept created by the General Health Insurance Company of the Czech Republic (in Czech: Vseobecna Zdravotni Pojistovna).
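The abstract does not reproduce the empirical formulas used; as an assumption, one classic deterministic bed-sizing formula of the kind referred to divides annual patient-days by the bed-days available at a target occupancy rate:

```python
def beds_needed(admissions_per_year, avg_length_of_stay_days, target_occupancy):
    """Classic deterministic sizing formula: annual patient-days divided
    by available bed-days per year at the target occupancy rate."""
    patient_days = admissions_per_year * avg_length_of_stay_days
    return patient_days / (365.0 * target_occupancy)

# illustrative inputs: 10 000 admissions/year, 6-day average stay, 85 % occupancy
required = beds_needed(10_000, 6.0, 0.85)
```

The thesis's own formulas may differ; this sketch only shows the general shape of such deterministic bed-fund calculations.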
|
278 |
Scalable Trajectory Approach for ensuring deterministic guarantees in large networks / Passage à l'échelle de l'approche par trajectoire dans de larges réseaux. Medlej, Sara, 26 September 2013 (has links)
In critical real-time systems, such as avionics networks or the nuclear sector, any faulty behavior may endanger lives; system verification and validation are therefore essential before deployment, and safety authorities require deterministic guarantees. In this thesis we are interested in temporal guarantees: in particular, we need to prove that the end-to-end response time of every flow present in the network is bounded. This subject has been addressed for many years and several approaches have been developed. After a brief comparison of the existing approaches, the Trajectory Approach appeared to be a good candidate because of the tightness of the bound it offers; this method uses results established by scheduling theory to derive an upper bound. In practice, an overestimated bound can lead to the network being denied certification, so a first part of the work consists in identifying the sources of pessimism of the adopted approach; under FIFO scheduling, the terms that add pessimism to the computed bound are identified. Like the other methods, however, the Trajectory Approach suffers from a scalability problem: it must be applied to networks with hundreds of switches and thousands of flows, and it must deliver results within an acceptable time frame. Analysis shows that the computational complexity is due to a recursive and iterative process; as the number of flows and switches grows, the total runtime required to compute the upper bound of every flow in the network under study grows rapidly. Building on the Trajectory Approach, we therefore propose to compute an upper bound in a reduced time frame and without significant loss of precision: the Scalable Trajectory Approach. A tool was developed to compare the results of the two approaches; applied to a network of ten switches carrying a thousand flows, simulation results show that the total runtime needed to compute all bounds dropped from several days to about ten seconds.
|
279 |
Complexidade descritiva das lógicas de ordem superior com menor ponto fixo e análise de expressividade de algumas lógicas modais / Descriptive complexity of higher-order logics with least fixed point and expressiveness analysis of some modal logics. Cibele Matos Freire, 13 August 2010
In descriptive complexity, we investigate the use of logics to characterize classes of computational problems through the lens of complexity. Since 1974, when Fagin proved that NP is captured by existential second-order logic, considered the first result of the area, other relations between logics and complexity classes have been established. The best-known results usually involve first-order logic and its extensions, and complexity classes of polynomial time or space. For example, first-order logic extended with the least fixed-point operator captures the class P, and second-order logic extended with the transitive closure operator captures the class PSPACE. In this dissertation, we first analyze the expressive power of some modal logics with respect to the decision problem REACH and show that it can be expressed with the temporal logics CTL and CTL*. We also analyze the combined use of higher-order logics with the least fixed-point operator and obtain as a result that each level of this hierarchy captures the corresponding level of the deterministic exponential-time hierarchy. As a corollary, we prove that the HOi(LFP) hierarchy does not collapse for i ≥ 2, that is, HOi(LFP) ⊊ HOi+1(LFP).
|
280 |
Simulation des matériaux magnétiques à base Cobalt par Dynamique Moléculaire Magnétique / Simulation of cobalt-based materials using Magnetic Molecular Dynamics. Beaujouan, David, 07 November 2012 (has links)
The magnetic properties of materials are strongly connected to their crystallographic structure. We develop an atomistic model of magnetization dynamics that accounts for this magnetoelasticity. Although the approach applies to magnetic materials at finite temperature in general, this study focuses on a single element, cobalt. In this effective model, atoms are described by three classical vectors (position, momentum, and spin) and interact via an ad hoc magneto-mechanical potential. We first consider atomistic spin dynamics, which allows us to write the evolution equations of an atomic spin system in which positions and momenta are frozen. A spin temperature can nevertheless be defined, providing a natural connection with a thermal bath. Showing the limits of the stochastic approach, we develop a new deterministic formulation for controlling the temperature of a spin system. In a second step, we develop and analyze the geometric integrators needed to couple molecular dynamics with this atomistic spin dynamics in time. The coupling of the spins to the lattice is provided by a magnetic potential that depends on the atomic positions. The novelty of this potential lies in how the magnetic anisotropy, a manifestation of spin-orbit coupling, is parameterized: an extended pair model of the anisotropy reproduces the experimental magnetostriction constants of hcp-Co. Considering a canonical system in which pressure and temperature are controlled simultaneously, we recover the spin-reorientation transition peculiar to cobalt near 695 K. We conclude by studying the superparamagnetic magnetization reversals of Co nanodots, which allows this spin-lattice coupling to be compared with recent measurements.
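As a toy illustration of the geometric-integrator idea invoked above (not the thesis's scheme), a single classical spin precessing in a fixed field can be advanced by exact rotations about the field axis, so the spin norm is conserved by construction:

```python
import math

def precess(s, b=(0.0, 0.0, 1.0), dt=0.01, steps=628):
    """Damping-free precession ds/dt ~ s x b, advanced by a Rodrigues
    rotation of the classical spin s about the fixed field axis by
    |b|*dt per step. Exact rotations conserve |s|, the defining
    property of a geometric (norm-preserving) integrator."""
    bx, by, bz = b
    bn = math.sqrt(bx * bx + by * by + bz * bz)
    ux, uy, uz = bx / bn, by / bn, bz / bn         # unit field axis
    c, si = math.cos(bn * dt), math.sin(bn * dt)
    for _ in range(steps):
        sx, sy, sz = s
        dot = ux * sx + uy * sy + uz * sz
        cx = uy * sz - uz * sy                      # u x s
        cy = uz * sx - ux * sz
        cz = ux * sy - uy * sx
        s = (sx * c + cx * si + ux * dot * (1 - c),
             sy * c + cy * si + uy * dot * (1 - c),
             sz * c + cz * si + uz * dot * (1 - c))
    return s

spin = precess((1.0, 0.0, 0.0))   # 628 steps of 0.01 rad: almost one full turn
```

In the coupled spin-lattice setting of the thesis, the field itself depends on the atomic positions, which is precisely what makes dedicated geometric integrators necessary.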
|