91

Study of modularity in molecular, morphological and linguistic evolution using network methods

Pathmanathan, Jananan 23 October 2017 (has links)
Molecular evolution proceeds not only by divergence from a common ancestor, but also by combining parts from evolving objects of different origins, through processes that are called introgressive.
Lateral gene transfers are probably the most well-known of these processes, but introgression has been shown to happen at various levels of biological organization. As a result, most biological evolving objects (genes, genomes, communities) can be composed of parts from different phylogenetic origins and can be described as composites. Such modular evolution is inadequately modeled by trees, since composite objects are not merely the result of divergence from a common ancestor. Networks, on the other hand, are much better suited to handling modularity, and graph theory can be used to search networks for patterns characteristic of such reticulate evolution. During this PhD, I developed a piece of software, CompositeSearch, that can efficiently detect composite genes in massive sequence datasets comprising up to millions of sequences. This algorithm was used to identify and quantify the abundance of composite genes in polluted soil environments and in prokaryotic plasmids. These studies show that important biological novelties and adaptations can result from processes acting at subgenic levels. Moreover, networks provide a framework that goes well beyond the boundaries of molecular evolution, and I have applied them to other evolving entities, such as animal morphology (trait networks) and languages (word networks). In both cases, modularity appears to be a major evolutionary outcome, following rules that remain to be investigated.
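The similarity-network approach summarized above can be illustrated with a toy sketch (not the CompositeSearch implementation itself): in a gene-similarity graph, a candidate composite gene is one whose neighbours fall apart into disconnected families once the gene itself is removed. The graph and all gene names below are invented for illustration.

```python
from collections import deque

def components_of_neighbors(adj, node):
    """Count the connected components (in the graph minus `node`)
    that contain at least one neighbor of `node`."""
    remaining = {v: [u for u in nbrs if u != node]
                 for v, nbrs in adj.items() if v != node}
    seen, comps = set(), 0
    for start in adj[node]:
        if start in seen:
            continue
        comps += 1  # a neighbor in a not-yet-visited component
        queue = deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            for u in remaining[v]:
                if u not in seen:
                    seen.add(u)
                    queue.append(u)
    return comps

def composite_candidates(adj):
    """Nodes that bridge otherwise disconnected neighborhoods."""
    return [v for v in adj if components_of_neighbors(adj, v) > 1]

# Toy similarity graph: gene "AB" shares similarity with family {A1, A2}
# and with family {B1, B2}, which are unrelated to each other.
adj = {
    "A1": ["A2", "AB"], "A2": ["A1", "AB"],
    "B1": ["B2", "AB"], "B2": ["B1", "AB"],
    "AB": ["A1", "A2", "B1", "B2"],
}
print(composite_candidates(adj))  # -> ['AB']
```

Real similarity graphs come from all-against-all sequence comparison and need component checks at the level of alignment coordinates, which is where the actual algorithm differs from this sketch.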
92

Improving Scalability of Evolutionary Robotics with Reformulation

Bernatskiy, Anton 01 January 2018 (has links)
Creating systems that can operate autonomously in complex environments is a challenge for contemporary engineering techniques. Automatic design methods offer a promising alternative, but so far they have not been able to produce agents that outperform manual designs. One such method is evolutionary robotics. It has been shown to be a robust and versatile tool for designing robots to perform simple tasks, but more challenging tasks at present remain out of the method's reach. In this thesis I discuss and attack some of the problems underlying the scalability issues associated with the method. I present a new technique for evolving modular networks. I show that the performance of modularity-biased evolution depends heavily on the morphology of the robot's body, and I present a new method for co-evolving morphology and modular control. To be able to reason about the new technique I develop a reformulation framework: a general way to describe and reason about metaoptimization approaches. Within this framework I describe a new heuristic for developing metaoptimization approaches that is based on the technique for co-evolving morphology and modularity. I validate the framework by applying it to the practical task of zero-g autonomous assembly of structures with a fleet of small robots. Although this work focuses on evolutionary robotics, the methods and approaches developed within it can be applied to optimization problems in any domain.
93

Reengineering Object Oriented Software Systems for Better Maintainability

Zellagui, Soumia 05 July 2019 (has links)
Legacy software systems often represent significant investments for the companies that develop them with the intention of using them for a long period of time.
The quality of these systems can degrade over time due to the complex changes incorporated into them. In order to deal with these systems when their quality degradation exceeds a critical threshold, a number of strategies can be used. These strategies can be summarized as: 1) discarding the system and developing another one from scratch, 2) carrying on the (massive) maintenance of the system despite its cost, or 3) reengineering the system. Replacement and massive maintenance are not suitable solutions when cost and time are to be taken into account, since they require considerable effort and staff to complete the system in a moderate time. In this thesis, we are interested in the reengineering solution. In general, software reengineering includes all activities following delivery to the user that improve the software system's quality. This quality is often characterized by a set of quality attributes. We propose three contributions to improve specific quality attributes, namely maintainability, understandability and modularity. In order to improve maintainability, we propose to migrate object-oriented legacy software systems into equivalent component-based ones. Contrary to existing approaches that consider a component descriptor as a cluster of classes, each class in the legacy system is migrated into a component descriptor. In order to improve understandability, we propose an approach for recovering runtime architecture models of object-oriented legacy systems and managing the complexity of the resulting models. The models recovered by our approach have the following distinguishing features: nodes are labeled with lifespans and empirical probabilities of existence, which enable 1) visualization at a chosen level of detail and 2) the collapsing/expanding of objects to hide/show their internal structure. In order to improve the modularity of object-oriented software systems, we propose an approach for identifying modules and services in the source code. In this approach, we take the composite structure to be the main structure of the system, which must be retained during the modularization process: a component and its composites must be in the same module. Existing modularization works that share this vision assume that the composition relationships between the elements of the source code are already available, which is not always the case. In our approach, module identification starts with a step of runtime architecture model recovery. These models are exploited to identify composition relationships between the elements of the source code. Once these relationships have been identified, a composition-conservative genetic algorithm is applied to the system to identify modules. Lastly, the services provided by the modules are identified using the runtime architecture models of the analyzed software system. Several experiments and case studies have been performed to show the feasibility of our proposals and the gains in maintainability, understandability and modularity of the software systems studied.
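The composition-conservative constraint described above, that a component and its composites must end up in the same module, can be sketched independently of the genetic algorithm itself. The following hypothetical example enforces only that grouping invariant with a small union-find; it is not the thesis's method, and all class names are invented.

```python
def find(parent, x):
    # Path-halving find for a union-find structure.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def modules_from_composition(classes, composition_edges):
    """Group classes so that every (whole, part) composition pair
    lands in the same module: the invariant that a
    composition-conservative modularization must preserve."""
    parent = {c: c for c in classes}
    for whole, part in composition_edges:
        ra, rb = find(parent, whole), find(parent, part)
        if ra != rb:
            parent[rb] = ra  # merge the two groups
    groups = {}
    for c in classes:
        groups.setdefault(find(parent, c), set()).add(c)
    return sorted(sorted(g) for g in groups.values())

# Invented example: Car is composed of Engine and Wheel,
# Logger is composed of Config; Car and Logger are unrelated.
classes = ["Car", "Engine", "Wheel", "Logger", "Config"]
edges = [("Car", "Engine"), ("Car", "Wheel"), ("Logger", "Config")]
print(modules_from_composition(classes, edges))
# -> [['Car', 'Engine', 'Wheel'], ['Config', 'Logger']]
```

A genetic algorithm would then search over module assignments that respect these groups while optimizing cohesion and coupling.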
94

Mean Value Modelling of a Diesel Engine with Turbo Compound

Flärdh, Oscar, Gustafson, Manne January 2003 (has links)
Over the last years, emission and on-board diagnostics legislation for heavy-duty trucks has become more and more strict. An accurate engine model that can execute in the engine control system enables both better diagnosis and lowered emissions through better control strategies.

The objective of this thesis is to extend an existing mean value diesel engine model to include turbo compound. The model should be physical, accurate and modular, and it should be possible to execute it in real time. The calibration procedure should be systematic, with some degree of automation.

Four different turbo compound models were evaluated, and two were selected for further evaluation by integration with the existing model. The extended model proved quite insensitive to small errors in the compound turbine speed; hence, the small differences in accuracy between the tested models did not affect the other output signals significantly. The extended models had better accuracy and could be executed with a longer step length than the existing model, despite the added complexity. For example, the mean error of the intake manifold pressure in mixed driving was approximately 3.0%, compared to 5.8% for the existing model. The reasons for the improvements are probably the good performance of the added submodels and the systematic, partly automated calibration procedure, including optimization.
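The 3.0% vs. 5.8% comparison above presumably rests on a mean relative error between measured and modelled signals. A minimal sketch of such a metric, with invented pressure samples (the thesis's exact error definition may differ):

```python
def mean_relative_error(measured, modelled):
    """Mean absolute relative error, in percent, between a measured
    signal and a model's prediction of it."""
    assert measured and len(measured) == len(modelled)
    return 100.0 * sum(abs(y - m) / abs(m)
                       for m, y in zip(measured, modelled)) / len(measured)

# Hypothetical intake manifold pressures (kPa): measurements vs. a model.
measured = [120.0, 150.0, 180.0, 210.0]
model_a  = [123.0, 146.0, 184.0, 205.0]   # illustrative values only
print(round(mean_relative_error(measured, model_a), 2))  # -> 2.44
```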
95

The Crossroads Of Knowledge And Financialization

Satik, Erdogdu 01 February 2013 (has links) (PDF)
This thesis questions the connection between knowledge and finance and advances an account that links the two in a two-fold way. The first level departs from what separates two opposing views, or alternative explanations, of the value of knowledge. What separates them is the source and essence of the extra profits in information goods or commodities, such as digital media content and software, which feature increasing returns to scale owing to their peculiar cost structure of a high fixed cost and a very low, constant marginal cost. In light of the near-decomposability/modularity hypothesis, the extra profits in information commodities should arise from 'information hiding,' which is intrinsic to nearly-decomposable systems or modular architectures because they are built on an ignorance of the parts with regard to the other parts and the whole of the system. Such (hidden) design information that gives rise to parts or modules creates, at the same time, future paths of action or (real) options, according to the real-options perspective. When the two perspectives are combined, knowledge production, as distinct from subsequent knowledge-commodity production, basically becomes an option-creation process. It then becomes possible to argue that the concurrence of knowledge and finance is no coincidence at all, because the logics of accumulation are almost identical; this is the second level of the two-fold account attempted in this study. The main contribution of this thesis is to build an account that links financialization to knowledge via the notion of modularity. Such an account sees financialization as a reflection and consequence of a value-driven permanent-innovation economy developed under the 'IT paradigm' in order to exploit a surplus peculiar and intrinsic to the modular structure that makes 'information hiding' an integral part of such architectures, since they are by definition built on an ignorance of the parts with regard to the other parts and the whole of the system.
98

Parallel Connecting New Product Development Process: The Case Study of Bicycle Industry in Taiwan

Chang, Yung-Chi 28 July 2004 (has links)
This is a case study of Taiwan's bicycle industry. From the viewpoint of international standards, we explore the integration of the new product development process in Taiwan's bicycle industry. We find that Taiwan's assemblers and component suppliers are connected in parallel, interacting with foreign buyers simultaneously, and that all the R&D services offered by each member are ultimately integrated under the instructions of the foreign buyers. We describe this cooperation mode as the 'Parallel Connecting New Product Development Process'. We argue that this new cooperation mode outperforms the traditional sequential staging model, represented as the 'vertical connecting' cooperation mode, in innovation flexibility and speed because of its communication efficiency and convenience for the OEM buyers. In this thesis we describe the new product development interactions among the foreign buyers, the component suppliers and the assemblers, and we discuss the competitive advantages and the causes of this new cooperation mode. We also discuss its R&D management implications for small and medium-sized enterprises in Taiwan, and argue that it brings them a new management implication that differs from the main argument in the literature on strategic flexibility.
99

Intelligence without hesitation

Thieme, Mikael January 2002 (has links)
This thesis aims to evaluate four artificial neural network architectures, each of which implements the sensory-motor mapping in an embodied, situated and autonomous agent set up to reach a goal area in one of six systematically varied T-maze environments. In order to reach the goal, the agent has to turn either left or right at each junction in the environment, depending on the placement of previously encountered light sources. The evaluation is broken down into (i) measuring the reliability of the agents' capacity to repeatedly reach the goal area, (ii) analyzing how the agents work, and (iii) comparing the results to related work on the problem.

Each T-maze constitutes an instance of a broad class of problems known as delayed response tasks, which are characterized by a significant (and typically varying) delay between a stimulus and the corresponding appropriate response. This thesis expands this notion to include, besides simple tasks, repeated and multiple delayed response tasks. In repeated tasks, the agent faces several stimulus-delay-response sequences one after another. In multiple tasks, the agent faces several stimuli before the delay and the corresponding appropriate responses. Even if simple at an abstract level, these tasks raise some of the fundamental issues within cognitive science and artificial intelligence, such as whether an internal objective world model is necessary and/or suitable for achieving the appropriate behavior. For such reasons, these problems also constitute an interesting base for evaluating alternative ideas within these fields.

The work leads to several interesting insights. Firstly, purely reactive controllers (as represented by a feed-forward network) may be sufficient, in interaction with the environment, to solve both simple and repeated delayed response tasks. Secondly, an extended sequential cascaded network that selectively replaces its own sensory-motor mapping achieves significantly better performance than the other networks. This indicates that selective replacement of the sensory-motor mapping may be more powerful than both modulation (as represented by a simple recurrent network) and replacement at each step (as represented by a standard sequential cascaded network). Thirdly, this thesis demonstrates that even reactive controllers may contribute to behavior which, from an observer's point of view, may seem to require an internal rational capacity, i.e. the ability to represent and explore alternatives internally.
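The standard sequential cascaded network mentioned above can be sketched in a few lines: one (context) network outputs a flat weight vector that is reshaped into the weight matrix of the sensory-motor (function) network, so the mapping is regenerated at every step. This toy version drives the context network directly from the sensor reading and uses arbitrary, untrained weights; it only illustrates the wiring, not the thesis's trained networks or its extended, selectively replacing variant.

```python
import math

def act(xs, weights):
    # Single-layer mapping with tanh units: one weight row per output.
    return [math.tanh(sum(w * x for w, x in zip(row, xs))) for row in weights]

class SequentialCascadedNet:
    """Toy sequential cascaded network: the context network produces
    the function network's weights anew at each step."""
    def __init__(self, context_weights, n_out, n_in):
        self.context_weights = context_weights  # rows: n_out * n_in
        self.n_out, self.n_in = n_out, n_in

    def step(self, sensors):
        # Context net computes a flat weight vector...
        flat = act(sensors, self.context_weights)
        # ...which is reshaped into the function net's weight matrix.
        func_w = [flat[i * self.n_in:(i + 1) * self.n_in]
                  for i in range(self.n_out)]
        # The function net then maps the sensors to motor outputs.
        return act(sensors, func_w)

# Arbitrary illustrative weights: 2 sensors, 2 motor outputs.
net = SequentialCascadedNet(
    context_weights=[[0.5, -0.2], [0.1, 0.9], [-0.3, 0.4], [0.8, 0.0]],
    n_out=2, n_in=2)
left_motor, right_motor = net.step([1.0, 0.0])
print(left_motor, right_motor)
```

A simple recurrent network would instead feed a hidden state back as extra input (modulation), which is the contrast the thesis draws.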
100

Epidemic dynamics in heterogeneous populations

Hladish, Thomas Joseph 13 November 2012 (has links)
Epidemiological models traditionally make the assumption that populations are homogeneous. By relaxing that assumption, models often become more complicated, but also better representations of the real world. Here we describe new computational tools for studying heterogeneous populations, and we examine the consequences of two particular types of heterogeneity: that people are not all equally likely to interact, and that people are not all equally likely to become infected if exposed to a pathogen. Contact network epidemiology provides a robust and flexible paradigm for thinking about heterogeneous populations. Despite extensive mathematical and algorithmic methods, however, we lack a programming framework for working with epidemiological contact networks and for the simulation of disease transmission through such networks. We present EpiFire, a C++ applications programming interface and graphical user interface, which includes a fast and efficient library for generating, analyzing and manipulating networks. EpiFire also provides a variety of traditional and network-based epidemic simulations. Heterogeneous population structure may cause multi-wave epidemics, but urban populations are generally assumed to be too well mixed to have such structure. Multi-wave epidemics are not predicted by simple models, and are particularly problematic for public health officials deploying limited resources. Using a unique empirical interaction network for 103,000 people in Montreal, Canada, we show that large, urban populations may feature sufficient community structure to drive multi-wave dynamics, and that highly connected individuals may play an important role in whether communities are synchronized. Finally, we show that heterogeneous immunity is an important determinant of influenza epidemic size. While many epidemic models assume a homogeneously susceptible population and describe dynamics for one season, the trans-seasonal dynamics of partially immunizing diseases likely play a critical role in determining both future epidemic size and pathogen evolution. We present a multi-season network model of a population exposed to a pathogen conferring partial cross-immunity that decays over time. We fit the model to 25 years of influenza-like illness epidemic data from France using a novel Bayesian technique. Using conservative priors, we estimate important epidemiological quantities that are consistent with empirical studies.
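EpiFire itself is a C++ library, but the kind of network-based epidemic simulation described above can be sketched in a few lines of Python: a discrete-time SIR process on an explicit contact graph, where heterogeneity enters through the degree distribution. The toy population below (a hub plus a sparse ring) and all parameters are illustrative only.

```python
import random

def sir_on_network(adj, p_transmit, seed_node, rng):
    """Discrete-time SIR on a contact network: each infectious node
    tries to infect each susceptible neighbor once per step with
    probability p_transmit, then recovers. Returns the final size
    (number of individuals ever infected)."""
    state = {v: "S" for v in adj}
    state[seed_node] = "I"
    while any(s == "I" for s in state.values()):
        infectious = [v for v, s in state.items() if s == "I"]
        for v in infectious:
            for u in adj[v]:
                if state[u] == "S" and rng.random() < p_transmit:
                    state[u] = "I"  # infectious from the next step on
            state[v] = "R"  # recover after one infectious step
    return sum(1 for s in state.values() if s == "R")

# Heterogeneous toy contacts: a sparse ring of 50 people plus one hub
# (node 0) in contact with everyone, so degrees are far from uniform.
n = 50
adj = {i: sorted({(i - 1) % n, (i + 1) % n, 0} - {i}) for i in range(1, n)}
adj[0] = list(range(1, n))  # the hub
final_size = sir_on_network(adj, p_transmit=0.3, seed_node=0,
                            rng=random.Random(42))
print(final_size)
```

Seeding the epidemic at the hub versus a ring node is a quick way to see how an individual's connectivity shapes outbreak size in such a model.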
