
Mathematical programming approaches to pricing problems

Violin, Alessia, 18 December 2014
There are many real cases where a company needs to determine the price of its products so as to maximise its revenue or profit. To do so, the company must consider customers' reactions to these prices, as they may refuse to buy a given product or service if its price is too high. This is commonly known in the literature as a pricing problem. This class of problems, which is typically bilevel, was first studied in the 1990s and is NP-hard, although polynomial algorithms exist for some particular cases. Many questions remain open on this subject.

The aim of this thesis is to investigate the mathematical properties of pricing problems, in order to find structural properties, formulations and solution methods that are as efficient as possible. In particular, we focus on pricing problems over a network. In this framework, an authority owns a subset of arcs and imposes tolls on them in an attempt to maximise its revenue, while users travel on the network seeking their minimum-cost paths.

First, we provide a detailed review of the state of the art on bilevel pricing problems. Then, we consider a particular case where the authority uses a unit-toll scheme on its subset of arcs, imposing either the same toll on all of them or a toll proportional to a parameter particular to each arc (for instance, a per-kilometre toll). We show that if the tolls are all equal the problem is polynomial, whereas with proportional tolls it is pseudo-polynomial. We then address a robust approach that takes uncertainty on the parameters into account, and solve some polynomial cases of the pricing problem in which uncertainty is represented by intervals.

Finally, we focus on another particular case where the toll arcs are connected so that they constitute a path, as occurs on highways. We develop a Dantzig-Wolfe reformulation and present a Branch-and-Cut-and-Price algorithm to solve it. Several improvements are proposed, both for the column generation algorithm used to solve the linear relaxation and for the branching part used to find integer solutions. Numerical results are presented to highlight the efficiency of the proposed strategies. This problem is proved to be APX-hard, and a theoretical comparison between our model and another from the literature is carried out.

Doctorat en Sciences
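To make the uniform-toll case concrete, here is a minimal sketch (a toy instance with invented costs, and a simple candidate scan rather than the polynomial algorithm of the thesis): the authority charges the same toll T on every arc it owns, the user follows a cheapest path, and the revenue is T times the number of tolled arcs on that path.

```python
import heapq

def cheapest_path(nodes, arcs, s, t, toll):
    """Dijkstra; arcs maps (u, v) -> (base_cost, is_tolled).
    Returns the cost of a cheapest s-t path and its number of tolled arcs."""
    dist = {u: float("inf") for u in nodes}
    ntolled = {u: 0 for u in nodes}
    dist[s] = 0.0
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for (a, b), (c, tolled) in arcs.items():
            if a != u:
                continue
            nd = d + c + (toll if tolled else 0.0)
            if nd < dist[b]:
                dist[b] = nd
                ntolled[b] = ntolled[u] + (1 if tolled else 0)
                heapq.heappush(heap, (nd, b))
    return dist[t], ntolled[t]

nodes = ["s", "a", "t"]
arcs = {("s", "a"): (1.0, False),
        ("a", "t"): (1.0, True),    # the authority's tolled shortcut
        ("s", "t"): (5.0, False)}   # toll-free alternative
# Revenue rises with T until the user defects to the toll-free path.
for T in [1.0, 2.0, 2.5, 3.5]:
    _, k = cheapest_path(nodes, arcs, "s", "t", T)
    print(f"toll {T}: revenue {T * k}")
```

The bilevel structure is visible even here: the leader's revenue is discontinuous in T, dropping to zero the moment the follower's cheapest path changes.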

Machine learning strategies for multi-step-ahead time series forecasting

Ben Taieb, Souhaib, 08 October 2014
How much electricity is going to be consumed over the next 24 hours? What will the temperature be for the next three days? What will the sales of a certain product be for the next few months? Answering these questions often requires forecasting several future observations from a given sequence of historical observations, called a time series.

Historically, time series forecasting has been mainly studied in econometrics and statistics. In the last two decades, machine learning, a field concerned with the development of algorithms that can automatically learn from data, has become one of the most active areas of predictive modeling research. This success is largely due to the superior performance of machine learning prediction algorithms in applications as diverse as natural language processing, speech recognition and spam detection. However, there has been very little research at the intersection of time series forecasting and machine learning.

The goal of this dissertation is to narrow this gap by addressing the problem of multi-step-ahead time series forecasting from the perspective of machine learning. To that end, we propose a series of forecasting strategies based on machine learning algorithms.

Multi-step-ahead forecasts can be produced recursively, by iterating a one-step-ahead model, or directly, using a specific model for each horizon. As a first contribution, we conduct an in-depth study comparing recursive and direct forecasts generated with different learning algorithms for different data generating processes. More precisely, we decompose the multi-step mean squared forecast errors into their bias and variance components, and analyze their behavior over the forecast horizon for different time series lengths. The results and observations made in this study then guide the development of new forecasting strategies.

In particular, we find that choosing between recursive and direct forecasts is not an easy task, since it involves a trade-off between bias and estimation variance that depends on many interacting factors, including the learning model, the underlying data generating process, the time series length and the forecast horizon. As a second contribution, we develop multi-stage forecasting strategies that do not treat the recursive and direct strategies as competitors, but seek to combine their best properties. More precisely, the multi-stage strategies generate recursive linear forecasts, and then adjust these forecasts by modeling the multi-step forecast residuals with direct nonlinear models at each horizon, called rectification models. We propose a first multi-stage strategy, which we call the rectify strategy, that estimates the rectification models using a nearest neighbors model. However, because recursive linear forecasts often need only small adjustments with real-world time series, we also consider a second multi-stage strategy, called the boost strategy, that estimates the rectification models using gradient boosting algorithms based on so-called weak learners.

Generating multi-step forecasts with a different model at each horizon provides large modeling flexibility. However, selecting these models independently can lead to irregularities in the forecasts, which can increase the forecast variance. The problem is exacerbated with nonlinear machine learning models estimated from short time series. To address this issue, and as a third contribution, we introduce and analyze multi-horizon forecasting strategies that exploit the information contained in other horizons when learning the model for each horizon. In particular, to select the lag order and the hyperparameters of each model, multi-horizon strategies minimize forecast errors over multiple horizons rather than just the horizon of interest.

We compare all the proposed strategies with both the recursive and direct strategies. We first conduct a bias and variance study, and then evaluate the different strategies on real-world time series from two past forecasting competitions. For the rectify strategy, in addition to avoiding the choice between recursive and direct forecasts, the results show that its performance is better than, or at least close to, that of the best of the recursive and direct forecasts in different settings. For the multi-horizon strategies, the results emphasize the decrease in variance compared to single-horizon strategies, especially with linear or weakly nonlinear data generating processes. Overall, we find that the accuracy of multi-step-ahead forecasts based on machine learning algorithms can be significantly improved if an appropriate forecasting strategy is used to select the model parameters and to generate the forecasts.

Lastly, as a fourth contribution, we participated in the Load Forecasting track of the Global Energy Forecasting Competition 2012. The competition involved a hierarchical load forecasting problem in which we were required to backcast and forecast hourly loads for a US utility with twenty geographical zones. Our team, TinTin, ranked fifth out of 105 participating teams and was awarded an IEEE Power & Energy Society award.

Doctorat en sciences, Spécialisation Informatique
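The two baseline strategies are easy to sketch. The following minimal example (synthetic data, plain linear models and parameter choices of our own, not the dissertation's code) contrasts recursive and direct forecasts over a five-step horizon:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
y = np.zeros(300)
for t in range(1, 300):                     # simple AR(1) data generating process
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.1)

LAGS, H = 3, 5                              # lag order and forecast horizon

def embed(series, lags):
    """Lagged design matrix: row j = (y_j, ..., y_{j+lags-1}), target y_{j+lags}."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

X, target = embed(y, LAGS)

# Recursive: one one-step model, iterated H times on its own predictions.
one_step = LinearRegression().fit(X, target)
window = list(y[-LAGS:])
rec = []
for _ in range(H):
    pred = one_step.predict(np.array(window[-LAGS:])[None, :])[0]
    rec.append(pred)
    window.append(pred)

# Direct: a separate model per horizon h, trained to predict y[t + h].
direct = []
for h in range(H):
    Xh, yh = X[:len(X) - h], target[h:]
    model_h = LinearRegression().fit(Xh, yh)
    direct.append(model_h.predict(np.array(y[-LAGS:])[None, :])[0])

print("recursive:", np.round(rec, 3))
print("direct:   ", np.round(direct, 3))
```

The recursive forecasts feed their own errors forward (bias), while the direct models each see fewer effective training targets per horizon (variance) — the trade-off the dissertation decomposes.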

Pension and health insurance: phase-type modeling

Govorun, Maria, 26 August 2013
Phase-type models have long been used in several scientific fields to describe systems that can be characterized by different states. They are well known in queueing theory, in economics and in insurance.

This thesis focuses on various applications of phase-type models in insurance and shows their advantages. The model of Lin and Liu (2007) is of particular interest, because it describes the ageing process of the human body: an individual's lifetime follows a phase-type distribution whose states represent health states. The fact that the model connects an individual's health states to his or her age makes it very useful in insurance.

The main results of the thesis are new models and methods in pension and health insurance that rely on the phase-type assumption for an individual's lifetime.

In pension insurance, the goal is to estimate the profitability of a pension fund. To this end, we build a profit-test model, which requires modelling several characteristics. We describe the evolution of the fund's members by adapting the ageing model to multiple causes of decrement. Estimating future profits requires determining the contribution level for each health state, as well as the seniority and initial health state of each member. This allows us to obtain the distribution of future profits and to develop methods for assessing longevity risk and the risk of market changes. Moreover, we assume that decreasing mortality rates affect future profits more through retirees than through active members; therefore, to evaluate the impact of health changes on profitability, we model the profits coming from retirees separately.

In health insurance, we use the phase-type model to compute the distribution of the present value of future health-care costs. We develop recursive algorithms that evaluate this distribution over a short period, using continuous-time fluid models, and over an individual's whole lifetime, using discrete-time models. The three discrete-time models correspond to different assumptions on the costs: in the first model, health-care costs are independent and identically distributed and do not depend on the individual's ageing; in the other two, the costs depend on his or her health state.

Doctorat en Sciences
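To give a flavour of the phase-type machinery, here is a minimal sketch (the three health states and all rates are illustrative inventions): if alpha is the initial distribution over the transient health states and T the sub-intensity matrix, survival to age t is S(t) = alpha · exp(Tt) · 1.

```python
import numpy as np
from scipy.linalg import expm

# Sub-intensity matrix over three "health states"; off-diagonal entries are
# worsening rates, and each row's deficit (row sum below zero) is the exit
# rate to the absorbing "death" state. All numbers are made up.
T = np.array([[-0.02,  0.015, 0.0],
              [ 0.0,  -0.10,  0.07],
              [ 0.0,   0.0,  -0.30]])
alpha = np.array([1.0, 0.0, 0.0])        # everyone starts in the best state

for age in [10, 30, 50, 70]:
    survival = alpha @ expm(T * age) @ np.ones(3)
    print(f"P(lifetime > {age}) = {survival:.3f}")
```

Because the current phase is a health state, quantities such as contributions or health-care costs can be made state-dependent, which is exactly what the thesis exploits.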

Preserving the separation of concerns while composing aspects with reflective AOP

Marot, Antoine, 07 October 2011
Aspect-oriented programming (AOP) is a programming paradigm for localizing and modularizing the concerns that tend to be tangled and scattered across traditional programming modules, such as functions or classes. Such concerns are known as crosscutting concerns, and aspect-oriented languages propose to encapsulate them in modules called aspects. Because each crosscutting concern implemented in an aspect is separated from the other concerns, AOP improves the reusability, readability, and maintainability of code.

While it improves separation of concerns, AOP suffers from well-known composition issues. Aspects developed in isolation may interact with each other in ways that were not expected by the programmers, and therefore lead to a program that does not meet its requirements. Without appropriate tools, undesired aspect interactions must be identified by reading code, in order to gain global knowledge of the program and understand where and how aspects interact. Then, if the aspect language does not offer the needed support, these interactions must be resolved by invasively changing the code of the conflicting aspects to make them work together. Neither of these solutions is acceptable, since global knowledge as well as invasive, composition-specific modifications are exactly what separation of concerns seeks to avoid.

In this dissertation we show that the existing approaches to composing aspects are not entirely satisfying either with respect to separation of concerns. These approaches either rely on global knowledge and invasive modifications, which is problematic, or lack genericity and/or expressivity, which means that code reading and code modification may still be required for the aspect interactions they cannot handle.

To properly detect and resolve aspect interactions we propose a novel approach that is based on AOP itself. Since aspect composition is a concern that, by definition, crosscuts the aspects, it makes sense to expect that a technique designed to improve the separation of crosscutting concerns, such as AOP, is well suited to the task. The resulting mechanism is based on reflection principles and is called reflective AOP.

The main difference between "regular" AOP and reflective AOP lies in the parts of the system they address. While traditional AOP aims at modularizing the concerns that crosscut the base system, reflective AOP offers the possibility of handling the concerns that crosscut the aspects themselves. This is achieved by incorporating new kinds of joinpoints, pointcuts and advice into the aspect language. These new elements, which form what we call a meta joinpoint model, are dedicated to the aspect level and enable programmers to reason about and act upon the semantics of aspects at runtime. As validated on numerous examples of aspect composition, a well-designed and principled meta joinpoint model makes it possible to deal with both the detection and the resolution of composition issues in a way that preserves the separation of concerns principle. These examples are illustrated using Phase, our prototype reflective AOP language.

Doctorat en Sciences
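To fix ideas, here is a minimal sketch in Python (our illustration of the composition problem and of a meta level, not the Phase language): two aspects advise the same join point, and a small meta level exposes the composition so that a rule can reorder the aspects without invasively editing either of them.

```python
registry = {}  # join point name -> list of (aspect name, advice function)

def advice(joinpoint, aspect):
    def register(fn):
        registry.setdefault(joinpoint, []).append((aspect, fn))
        return fn
    return register

@advice("withdraw", aspect="logging")
def log_advice(proceed, amount):
    print(f"[log] withdraw {amount}")
    return proceed(amount)

@advice("withdraw", aspect="security")
def security_advice(proceed, amount):
    if amount > 100:
        raise PermissionError("amount too large")
    return proceed(amount)

def weave(joinpoint, base):
    # Meta level: the composition concern can inspect and reorder the advice
    # on this join point without touching either aspect (here: security first).
    chain = sorted(registry.get(joinpoint, []),
                   key=lambda pair: pair[0] != "security")
    def call(amount, i=0):
        if i == len(chain):
            return base(amount)
        _, adv = chain[i]
        return adv(lambda a: call(a, i + 1), amount)
    return call

withdraw = weave("withdraw", lambda amount: f"dispensed {amount}")
print(withdraw(50))   # the security check now runs before logging
```

The ordering rule lives in the weaver, not in the aspects: this is the kind of composition-specific knowledge that a meta joinpoint model keeps out of the conflicting modules.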

New algorithms and data structures for the emptiness problem of alternating automata

Maquet, Nicolas, 03 March 2011
This work studies new algorithms and data structures that are useful in the context of program verification. As computers have become more and more ubiquitous in our modern societies, an increasingly large number of computer-based systems are considered safety-critical. Such systems are characterized by the fact that a failure or a bug (a computer error, in computing jargon) could potentially cause severe damage, whether in loss of life, environmental harm, or economic cost. For safety-critical systems, the industrial software engineering community increasingly calls for techniques that provide some formal assurance that a given piece of software is correct.

One of the most successful program verification techniques is model checking, in which programs are typically abstracted by a finite-state machine. After this abstraction step, properties (typically in the form of a temporal logic formula) can be checked against the finite-state abstraction with the help of automated tools. Alternating automata play an important role in this context, since many temporal logics on words and trees can be efficiently translated into such automata. This allows model checking to be reduced to automata-theoretic questions and is called the automata-theoretic approach to model checking. In this work, we provide three novel approaches for the analysis (emptiness checking) of alternating automata over finite and infinite words. First, we build on the successful framework of antichains to devise new algorithms for LTL satisfiability and model checking using alternating automata. These algorithms combine antichains with reduced ordered binary decision diagrams (ROBDDs) in order to handle the exponentially large alphabets of the automata generated by the LTL translation. Second, we develop new abstraction and refinement algorithms for alternating automata, which combine the use of antichains with abstract interpretation in order to handle ever larger instances of alternating automata. Finally, we define a new symbolic data structure, coined lattice-valued binary decision diagrams (LVBDDs), that is particularly well suited to encoding the transition functions of alternating automata over symbolic alphabets. All of these contributions are supported by empirical evaluations that confirm the practical usefulness of our approaches.

Doctorat en Sciences
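The antichain idea at the heart of the first contribution can be sketched compactly (a generic sketch of the data structure, not the thesis algorithms): sets of automaton states are ordered by inclusion, and an upward-closed family is represented by its minimal elements only, which keeps fixpoint iterations small.

```python
class Antichain:
    """Minimal elements of an upward-closed family of state sets."""

    def __init__(self):
        self.minimal = []          # pairwise incomparable frozensets

    def insert(self, s):
        s = frozenset(s)
        if any(m <= s for m in self.minimal):
            return False           # s is already covered, nothing to add
        # drop every stored set that s subsumes, then keep s
        self.minimal = [m for m in self.minimal if not s <= m]
        self.minimal.append(s)
        return True

    def covers(self, s):
        return any(m <= frozenset(s) for m in self.minimal)

ac = Antichain()
ac.insert({1, 2, 3})
ac.insert({1, 2})                  # subsumes {1, 2, 3}, which gets dropped
ac.insert({4})
print(ac.minimal)                  # [frozenset({1, 2}), frozenset({4})]
print(ac.covers({1, 2, 5}))        # True: {1, 2} lies below it
```

In emptiness checking, storing only minimal sets prunes the exponentially many subsumed successors that a naive subset construction would explore.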

Supervisory control of infinite state systems under partial observation

Kalyon, Gabriel, 26 November 2010
A discrete event system is a system whose state space is given by a discrete set and whose state transition mechanism is event-driven, i.e., its state evolution depends only on the occurrence of discrete events over time. These systems are used in many fields of application (telecommunication networks, aeronautics, aerospace, and so on). The validity of these systems is therefore an important issue, and supervisory control methods can be used to ensure it. These methods consist in imposing a given specification on a system by means of a controller that runs in parallel with the original system and restricts its behavior. In this thesis, we develop supervisory control methods where the system can have an infinite state space and the controller has only a partial observation of the system, which implies that the controller must define its control policy from an imperfect knowledge of the system. Unfortunately, this problem is generally undecidable. To overcome this negative result, we use abstract interpretation techniques, which ensure the termination of our algorithms at the price of overapproximating some computations. The aim of this thesis is to provide as complete a contribution to this topic as possible. Hence, we consider increasingly realistic problems. More precisely, we start by considering a centralized framework (i.e., the system is controlled by a single controller) and by synthesizing memoryless controllers (i.e., controllers that define their control policy from the current observation received from the system). Next, to obtain better solutions, we consider the synthesis of controllers that record part or all of the execution of the system and use this information to define the control policy. Unfortunately, these methods cannot be used to control an interesting class of systems: distributed systems. We therefore define methods to control distributed systems with synchronous communications (decentralized and modular methods) and with asynchronous communications (a distributed method). Moreover, we have implemented some of our algorithms to experimentally evaluate the quality of the synthesized controllers.

Doctorat en Sciences
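For intuition about what a supervisory controller computes, here is a minimal sketch of the classical safety fixpoint in the finite-state, fully observed case (the thesis addresses the much harder infinite-state, partially observed setting): a state is kept only if no uncontrollable transition can leave the safe set, while controllable transitions that leave it are simply disabled.

```python
states = {0, 1, 2, 3}
bad = {3}
# (source, event, controllable) -> target; all numbers are a toy example
trans = {(0, "a", True): 1, (0, "b", True): 2,
         (1, "u", False): 3,            # uncontrollable step into the bad state
         (2, "c", True): 3, (2, "d", False): 0}

safe = states - bad
while True:
    # drop states with an uncontrollable transition leaving the safe set
    unsafe = {s for (s, e, c), t in trans.items()
              if s in safe and not c and t not in safe}
    if not unsafe:
        break
    safe -= unsafe

# the controller permits a controllable event iff it stays inside the safe set
control = {(s, e): (t in safe) for (s, e, c), t in trans.items()
           if s in safe and c}
print("safe states:", safe)            # {0, 2}: state 1 can slip into the bad state
print("allowed controllable moves:", control)
```

With an infinite state space this fixpoint need not terminate, which is where the abstract interpretation machinery of the thesis comes in.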

Novel measures on directed graphs and applications to large-scale within-network classification

Mantrach, Amin, 25 October 2010
In recent years, networks have become a major data source in fields ranging from the social sciences to the mathematical and physical sciences, and the size of available networks has grown substantially. This has brought a number of new challenges, such as the need for precise and intuitive measures to characterize and analyze large-scale networks in a reasonable time.

The first part of this thesis introduces a novel measure between two nodes of a weighted directed graph: the sum-over-paths covariance. It has a clear and intuitive interpretation: two nodes are considered highly correlated if they often co-occur on the same, preferably short, paths. This measure depends on a probability distribution over the (usually infinite) countable set of paths through the graph, obtained by minimizing the total expected cost between all pairs of nodes while fixing the total relative entropy spread in the graph. The entropy parameter biases the probability distribution over a wide spectrum, from natural random walks (where all paths are equiprobable) to walks biased towards shortest paths. The measure is then applied to semi-supervised classification problems on medium-size networks and compared to state-of-the-art techniques.

The second part introduces three novel algorithms for within-network classification in large-scale networks, i.e., classification of nodes in partially labeled graphs. The algorithms have a computing time that is linear in the number of edges, classes and steps, and hence can be applied to large-scale networks. They obtained competitive results in comparison with state-of-the-art techniques on the large-scale U.S. patent citation network and on eight other data sets. Furthermore, during the thesis, we collected a novel benchmark data set, the U.S. patent citation network, which is now available to the community for benchmarking purposes.

The final part of the thesis concerns the combination of a citation graph with information on its nodes. We show that citation-based data provide better classification results than content-based data. We also show empirically that combining both sources of information (content-based and citation-based) should be considered when facing a text categorization problem. For instance, when classifying journal papers, extracting an external citation graph beforehand may considerably boost performance. However, in another context, when directly classifying the nodes of a citation network, node features will not necessarily improve the results.

The theory, algorithms and applications presented in this thesis provide interesting perspectives in various fields.

Doctorat en Sciences
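One standard computation in this sum-over-paths family can be sketched concretely (the toy graph, notation and simplifications are ours): weight every path by a Boltzmann factor exp(-theta × cost) on top of a natural random walk, and sum these weights over all paths with a fundamental matrix Z = (I - W)^(-1), where W multiplies the reference transition probabilities by the Boltzmann factors.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],          # adjacency of a small directed (acyclic) graph
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
C = np.array([[0, 1, 4, 0],          # arc costs (0 where there is no arc)
              [0, 0, 1, 4],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)

rowsum = A.sum(axis=1, keepdims=True)
P_ref = np.divide(A, rowsum, out=np.zeros_like(A), where=rowsum > 0)

for theta in [0.1, 1.0, 10.0]:
    W = P_ref * np.exp(-theta * C)    # zero off-arc, Boltzmann-weighted on arcs
    Z = np.linalg.inv(np.eye(4) - W)  # Z[i, j] sums exp(-theta*cost) over i->j paths
    print(f"theta={theta:5}: -log Z[0,3] / theta = {-np.log(Z[0, 3]) / theta:.3f}")
# As theta grows, this quantity tends to the cheapest 0->3 cost (3 here),
# illustrating the continuum between random exploration and shortest paths.
```

The entropy parameter of the thesis plays the role of theta here: it interpolates between the natural walk and the shortest-path regime.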

Energy-aware real-time scheduling in embedded multiprocessor systems

Nélis, Vincent, 18 October 2010
Nowadays, computer systems are everywhere. From simple portable devices such as watches and MP3 players to large stationary installations that control nuclear power plants, computer systems are now present in all aspects of our modern, everyday life. In only about 70 years, they have completely changed our way of life, and they have reached such a degree of sophistication that they will soon be capable of driving our cars and cleaning our houses without any human intervention. As computer systems gain in responsibilities, it becomes essential that they provide both safety and reliability. Indeed, a failure in systems such as the anti-lock braking system (ABS) in cars could threaten human lives and generate catastrophic and irreversible consequences. Hence, for many years, researchers have addressed the emerging problems of system safety and reliability that accompany this rapid evolution.

This thesis provides a general overview of embedded real-time computer systems, a particular kind of computer system whose number grows daily. We provide the reader with some preliminary knowledge and a good understanding of the concepts that underlie this emerging technology. We focus especially on the theoretical problems related to real-time computing and briefly summarize the main solutions, together with their advantages and drawbacks. This brings the reader through all the conceptual layers constituting a computer system, from the software level (the logical part), which specifies both the system behavior and requirements, to the hardware level (the physical part), which actually performs the expected computations and reacts to the environment. Along the way, we introduce the theoretical models that allow researchers to carry out the analyses ensuring that all the system requirements are fulfilled. Finally, we address the energy consumption problem in embedded systems: we describe the various factors of power dissipation in modern technologies and introduce different solutions to reduce this consumption.

This thesis focuses on a specific kind of computer system called an "embedded real-time system". A system is said to be "embedded" when it is developed to serve a precise purpose. A mobile phone is a perfect example of an embedded system, since all its functionalities are rigorously defined before it is even designed. By contrast, a personal computer is generally not considered an embedded system, as its designers do not know in advance what it will be used for. Many of these embedded systems are subject to strong timing constraints, which further distinguishes them from general-purpose computers. For example, when a car driver brakes suddenly, the on-board computer triggers the ABS application, and it is essential that this application be handled within a short deadline. In other words, the ABS functionality must be treated with higher priority than the vehicle's other functionalities. Such embedded systems are said to be "real-time", owing to these notions of time and of priorities among applications. The problem posed by real-time systems is the following: how can one determine, at every moment, an execution order of the different functionalities such that all of them complete entirely within their deadlines? With the recent advent of multiprocessor systems, this problem has become considerably more complex, since the system must now determine which functionality executes at which moment on which processor so that all timing constraints are met. Finally, these embedded real-time multiprocessor systems quickly ran into an energy consumption problem: their demand for performance (and hence for energy) has grown much faster than the capacity of the batteries that power them. This problem is currently faced by many systems, such as mobile phones. The objective of this thesis is to examine the various components of such embedded systems and to propose solutions to reduce their energy consumption.

Doctorat en Sciences
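To make the scheduling and energy trade-off concrete, here is a minimal sketch of two textbook facts (standard results, not the algorithms of the thesis): the EDF utilization test for a single processor, and the roughly cubic power law that makes slowing the processor down pay off.

```python
# Periodic tasks given as (C_i, T_i): worst-case execution time at full speed
# and period (= deadline). EDF schedules them on one processor iff the
# utilization sum(C_i / T_i) is at most 1. Task parameters are invented.
tasks = [(2.0, 10.0), (3.0, 15.0), (1.0, 20.0)]

U = sum(c / t for c, t in tasks)
assert U <= 1.0, "not EDF-schedulable on one processor"

# Dynamic voltage scaling: run at the lowest normalized frequency that keeps
# the scaled utilization at 1. Dynamic power grows roughly as f^3 while
# execution time grows as 1/f, so dynamic energy scales roughly as f^2.
f = U
energy_ratio = f ** 3 * (1.0 / f)      # power x time, relative to full speed
print(f"utilization U = {U:.2f}")
print(f"run at {f:.0%} speed -> dynamic energy ratio {energy_ratio:.2f}")
```

With the toy task set above, the processor can run at 45% speed and spend roughly a fifth of the dynamic energy while still meeting every deadline.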

Toward a brain-like memory with recurrent neural networks

Salihoglu, Utku, 12 November 2009
For the last twenty years, several assumptions have been expressed in the fields of information processing, neurophysiology and the cognitive sciences. First, neural networks and their dynamical behavior in terms of attractors are the natural way adopted by the brain to encode information: any information item to be stored in a neural network should be coded in one way or another in one of the dynamical attractors of the brain, and retrieved by stimulating the network so as to trap its dynamics in the desired item's basin of attraction. The second view shared by neural network researchers is to base the learning of the synaptic matrix on a local Hebbian mechanism. The third assumption is the presence of chaos and the benefit gained from it. Chaos, although very simply produced, inherently possesses an infinite number of cyclic regimes that can be exploited for coding information. Moreover, the network randomly and spontaneously wanders around these unstable regimes, thus rapidly proposing alternative responses to external stimuli and easily switching from one of these potential attractors to another in response to any incoming stimulus. Finally, since their introduction sixty years ago, cell assemblies have proved to be a powerful paradigm for brain information processing. After their introduction in artificial intelligence, cell assemblies became commonly used in computational neuroscience as a neural substrate for content-addressable memories.

Based on these assumptions, this thesis provides a computational neural network model of a brain-like memory. It first shows experimentally that the more information is stored in robust cyclic attractors, the more chaos appears as a regime in the background, erratically itinerating among brief appearances of these attractors. Chaos does not appear to be the cause, but the consequence, of the learning. However, it is a helpful consequence that widens the network's encoding capacity. To learn the information to be stored, two supervised iterative Hebbian learning algorithms are proposed. One leaves the semantics of the attractors to be associated with the incoming data unprescribed, while the other defines it a priori. Both algorithms show good results, although the first is more robust and has a greater storage capacity. Building on these promising results, a biologically plausible alternative to these algorithms is proposed, using cell assemblies as the substrate for information. Even though cell assemblies are not new, the mechanisms underlying their formation are poorly understood and, so far, no biologically plausible algorithm can explain how external stimuli can be stored online in cell assemblies. This thesis provides such a solution, combining fast Hebbian/anti-Hebbian learning of the network's recurrent connections for the creation of new cell assemblies with a slower feedback signal that stabilizes the cell assemblies by learning the feed-forward input connections. This last mechanism is inspired by the retroaxonal hypothesis.

Doctorat en Sciences
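For a concrete, if far simpler, instance of storing attractors with a local Hebbian rule, here is a classical Hopfield-style sketch (our illustration; the thesis works with a chaotic recurrent model, not a Hopfield network): patterns are stored as fixed-point attractors and retrieved by letting the dynamics fall into the right basin of attraction.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
patterns = rng.choice([-1, 1], size=(3, N))   # three random +/-1 patterns

W = sum(np.outer(p, p) for p in patterns) / N  # local Hebbian weight rule
np.fill_diagonal(W, 0.0)                       # no self-connections

cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)   # corrupt 10 of the 64 bits
cue[flip] *= -1

state = cue.astype(float)
for _ in range(10):                            # synchronous sign updates
    state = np.sign(W @ state)
    state[state == 0] = 1

print("overlap with stored pattern:", int(state @ patterns[0]), "/", N)
```

A corrupted cue lands inside the stored pattern's basin of attraction and the dynamics restore it, which is the attractor-based coding the first assumption describes.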
70

Gaussian graphical model selection for gene regulatory network reverse engineering and function prediction

Kontos, Kevin, 02 July 2009
One of the most important and challenging "knowledge extraction" tasks in bioinformatics is the reverse engineering of gene regulatory networks (GRNs) from DNA microarray gene expression data. Indeed, as a result of the development of high-throughput data-collection techniques, biology is experiencing a data flood that is pushing biologists toward a new view of biology, systems biology, which aims at a system-level understanding of biological systems.

Unfortunately, even for small model organisms such as the yeast Saccharomyces cerevisiae, the number p of genes is much larger than the number n of expression data samples. The dimensionality issue induced by this "small n, large p" data setting renders standard statistical learning methods inadequate. Restricting the complexity of the models makes it possible to deal with this serious impediment: by introducing (a priori undesirable) bias in the model selection procedure, one reduces the variance of the selected model, thereby increasing its accuracy.

Gaussian graphical models (GGMs) have proven to be a very powerful formalism for inferring GRNs from expression data. Unfortunately, standard GGM selection techniques cannot be used in the "small n, large p" data setting. One way to overcome this issue is to resort to regularization. In particular, shrinkage estimators of the covariance matrix (required to infer GGMs) have proven to be very effective. Our first contribution is a new shrinkage estimator that improves upon existing ones through the use of a Monte Carlo (parametric bootstrap) procedure.

Another approach to GGM selection in the "small n, large p" data setting consists in reverse engineering limited-order partial correlation graphs (q-partial correlation graphs) to approximate GGMs. Our second contribution is an inference algorithm, the q-nested procedure, which builds a sequence of nested q-partial correlation graphs, taking advantage of the topology of the smaller-order graphs to infer the higher-order ones. This significantly speeds up the inference of such graphs and avoids problems related to multiple testing. Consequently, we are able to consider higher-order graphs, thereby increasing the accuracy of the inferred graphs.

Another important challenge in bioinformatics is the prediction of gene function. An example of such a prediction task is the identification of genes that are targets of the nitrogen catabolite repression (NCR) selection mechanism in the yeast Saccharomyces cerevisiae; the study of model organisms such as Saccharomyces cerevisiae is indispensable for the understanding of more complex organisms. Our third contribution extends the standard two-class classification approach by enriching the set of variables and comparing several feature selection techniques and classification algorithms.

Finally, our fourth contribution formulates the prediction of NCR target genes as a network inference task. We use GGM selection to infer multivariate dependencies between genes and, starting from a set of genes known to be sensitive to NCR, classify the remaining genes. We hence avoid problems related to the choice of a negative training set and take advantage of the robustness of GGM selection techniques in the "small n, large p" data setting.

Doctorat en Sciences
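To illustrate the role of shrinkage in the "small n, large p" setting, here is a minimal sketch using the standard Ledoit-Wolf estimator (a stand-in for, not an implementation of, the Monte Carlo estimator contributed by the thesis): shrink the sample covariance toward a well-conditioned target so it can be inverted, then read candidate GGM edges off the partial correlations.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
n, p = 20, 50                        # far fewer samples than "genes"
X = rng.normal(size=(n, p))          # stand-in for expression profiles

lw = LedoitWolf().fit(X)
K = np.linalg.inv(lw.covariance_)    # shrinkage makes this inverse exist
d = np.sqrt(np.diag(K))
partial_corr = -K / np.outer(d, d)   # rho_ij = -K_ij / sqrt(K_ii * K_jj)
np.fill_diagonal(partial_corr, 1.0)

# Candidate edges: pairs with a large partial correlation (threshold is ad hoc)
edges = np.argwhere(np.triu(np.abs(partial_corr) > 0.25, k=1))
print(f"shrinkage intensity: {lw.shrinkage_:.2f}")
print(f"{len(edges)} candidate edges among {p * (p - 1) // 2} pairs")
```

Without shrinkage, the p × p sample covariance of 20 samples would be singular and the precision matrix, hence the GGM, simply undefined; the bias introduced by the target is the price paid for a usable estimate.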
