  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Constant power-continuously variable transmission (CP-CVT) : optimisation and simulation

Bell, Colin Alexander January 2011 (has links)
A novel continuously variable transmission has previously been designed that is capable of addressing a number of concerns within the automotive industry, such as reduced emissions. At the commencement of this research the design was in the early stages of development, and little attempt had previously been made to optimise it against specific measurable targets. This thesis utilises and modifies several design approaches to take the design from the concept stage to a usable product. Several optimisation techniques are adapted and created to analyse the CVT from both a design and a tribological perspective. A specially designed optimisation algorithm has been created that is capable of quickly improving each critical component dimension in parallel to fulfil multiple objectives; this algorithm can easily be adapted for alternative applications and objectives. The validity of the optimised design is demonstrated through a simulation tool created to model the behaviour of the CVT in a real automotive environment using multiple fundamental theories and models, including tire friction and traction behaviour. This simulation tool is capable of predicting transmission and vehicular behaviour, and demonstrates very good correlation with real-world data. A design critique is then performed that assesses the current state of the CVT design and looks to address some of the concerns found through the various methods used. A specific prototype design is also presented, based on the optimisation techniques developed, although the actual creation of a prototype is not presented here. Additional complementary research examines the accuracy of the tire friction models through the use of a specially designed tire friction test rig.
Furthermore, a monitoring system is proposed for this particular CVT design (and similar designs) that is capable of continuously checking the contact film thickness between adjacent elements, to ensure that there is sufficient lubricant to avoid metal-on-metal contact. The system, which is based on capacitance measurement, requires knowledge of the behaviour of the lubricant's permittivity at elevated pressure. This behaviour is studied through the use of a specially designed experimental test rig.
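The capacitance-based monitoring idea can be illustrated with a parallel-plate sketch. The contact area, relative permittivity and capacitance values below are hypothetical, chosen only to show how a measured capacitance inverts to a film-thickness estimate; the thesis itself studies how the lubricant's permittivity varies with pressure, which this simple sketch ignores.

```python
# Parallel-plate sketch of capacitance-based film-thickness monitoring:
# treating the lubricated contact as a capacitor, C = eps0 * eps_r * A / h,
# so a measured capacitance C can be inverted to a film thickness h.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def film_thickness(capacitance_f, eps_r, area_m2):
    """Invert the parallel-plate relation to estimate film thickness (m)."""
    return EPS0 * eps_r * area_m2 / capacitance_f

def sufficient_film(capacitance_f, eps_r, area_m2, min_thickness_m):
    """Flag whether the estimated film exceeds a safe minimum thickness."""
    return film_thickness(capacitance_f, eps_r, area_m2) >= min_thickness_m

# Hypothetical contact: 1 mm^2 area, eps_r = 2.2 (order of magnitude for a
# mineral oil), measured capacitance 50 pF.
h = film_thickness(50e-12, 2.2, 1e-6)
print(f"estimated film thickness: {h * 1e6:.3f} um")
```

In a real rig the permittivity `eps_r` would itself be a function of contact pressure, which is exactly the behaviour the experimental test rig in the abstract is designed to characterise.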
102

Quality of Service optimisation framework for Next Generation Networks

Weber, Frank Gerd January 2012 (has links)
Within recent years, the concept of Next Generation Networks (NGN) has become widely accepted within the telecommunication area, in parallel with the migration of telecommunication networks from traditional circuit-switched technologies such as ISDN (Integrated Services Digital Network) towards packet-switched NGN. In this context, SIP (Session Initiation Protocol), originally developed for Internet use only, has emerged as the major signalling protocol for multimedia sessions in IP (Internet Protocol) based NGN. One of the traditional limitations of IP when faced with the challenges of real-time communications is the lack of quality support at the network layer. In line with NGN specification work, international standardisation bodies have defined a sophisticated QoS (Quality of Service) architecture for NGN, controlling IP transport resources and conventional IP QoS mechanisms through centralised higher-layer network elements via cross-layer signalling. Being able to centrally control QoS conditions for any media session in NGN without the imperative of a cross-layer approach would result in a feasible and less complex NGN architecture. In particular, the demand for additional network elements would be reduced, lowering system and operational costs in both service and transport infrastructure. This thesis proposes a novel framework for QoS optimisation for media sessions in SIP-based NGN without the need for cross-layer signalling. One key contribution of the framework is its approach of identifying and logically grouping media sessions that encounter similar QoS conditions, which is performed by applying pattern recognition and clustering techniques. Based on this novel methodology, the framework provides functions and mechanisms for comprehensive resource-saving QoS estimation, adaptation of QoS conditions, and support of Call Admission Control.
The framework can be integrated with any SIP-and-IP-based real-time communication infrastructure, since it does not require access to any particular QoS control or monitoring functionality provided within the IP transport network. The proposed framework concept has been deployed and validated in a prototypical simulation environment. Simulation results show MOS (Mean Opinion Score) improvement rates between 53 and 66 percent without any active control of transport network resources. Overall, the proposed framework constitutes an effective concept for centrally controlled QoS optimisation in NGN without the need for cross-layer signalling. As such, whether run stand-alone or combined with conventional QoS control mechanisms, the framework provides a comprehensive basis for both reducing complexity and mitigating the issues associated with QoS provision in NGN.
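The session-grouping step rests on clustering sessions that experience similar QoS conditions. As an illustration only (the abstract does not specify which clustering algorithm or which metrics the framework uses), a plain k-means over hypothetical (delay, jitter, loss) measurement vectors separates sessions into groups with similar conditions:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: group sessions whose QoS measurements lie close together."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each session to the nearest centroid (squared Euclidean)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # recompute centroids; keep the old one if a cluster emptied out
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical per-session measurements: (delay ms, jitter ms, loss %).
sessions = [(30, 2, 0.1), (32, 3, 0.2), (31, 2, 0.1),        # good conditions
            (180, 25, 2.5), (190, 30, 3.0), (175, 28, 2.8)]  # degraded
centroids, clusters = kmeans(sessions, k=2)
```

Once sessions are grouped this way, a QoS estimate for one session in a cluster can stand in for the others, which is the resource-saving idea the abstract describes.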
103

Hybrid optimisation algorithms for two-objective design of water distribution systems

Wang, Qi January 2014 (has links)
Multi-objective design or extended design of Water Distribution Systems (WDSs) has received increasing attention in recent years. It is of particular interest for obtaining the trade-offs between cost and hydraulic benefit to support the decision-making process. The design problem is usually formulated as a multi-objective optimisation problem featuring a huge search space associated with a great number of constraints. Multi-objective evolutionary algorithms (MOEAs) are popular tools for addressing this kind of problem because they are capable of approximating the Pareto-optimal front effectively in a single run. However, these methods are subject to the “No Free Lunch” theorem (Wolpert and Macready 1997): there is no guarantee that they can perform well on a wide range of cases. To overcome this drawback, many hybrid optimisation methods have been proposed to take advantage of multiple search mechanisms which can synergistically facilitate optimisation. In this thesis, a novel hybrid algorithm, called the Genetically Adaptive Leaping Algorithm for approXimation and diversitY (GALAXY), is proposed. It is a dedicated optimiser for solving the discrete two-objective design or extended design of WDSs, minimising the total cost and maximising the network resilience, which is a surrogate indicator of hydraulic benefit. GALAXY is developed using the general framework of MOEAs with substantial improvements and modifications tailored to WDS design. It features a generational framework, a hybrid use of the traditional Pareto-dominance and epsilon-dominance concepts, an integer coding scheme, and six search operators organised in a high-level teamwork hybrid paradigm. In addition, several important strategies are implemented within GALAXY, including the genetically adaptive strategy, the global information sharing strategy, the duplicates handling strategy and the hybrid replacement strategy.
One great advantage of GALAXY over other state-of-the-art MOEAs lies in the fact that it eliminates all the individual parameters of its search operators, thus providing an effective and efficient tool to researchers and practitioners alike for dealing with real-world cases. To verify the capability of GALAXY, an archive of benchmark WDS design problems collected from the literature is first established, ranging from small to large cases. GALAXY has been applied to solve these benchmark design problems, and its achievements in terms of both ultimate and dynamic performance are compared with those obtained by two state-of-the-art hybrid algorithms and two baseline MOEAs. GALAXY generally outperforms these MOEAs according to various numerical indicators and a graphical comparison tool. For the largest problem considered in this thesis, GALAXY does not perform as well as its competitors due to the limited computational budget in terms of the number of function evaluations. All the algorithms have also been applied to solve the challenging Anytown rehabilitation problem, which considers both the design and operation of a system from the extended period simulation perspective. The performance of each algorithm is sensitive to the quality of the initial population and the random seed used. GALAXY and the Pareto-dominance based MOEAs are superior to the epsilon-dominance based methods, with GALAXY and the Pareto-dominance based approaches performing comparably. Finally, a summary of the thesis is provided, relevant conclusions are drawn, and recommendations for future research are made.
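The Pareto-dominance and epsilon-dominance concepts that GALAXY hybridises can be sketched with the standard definitions for minimisation problems. The objective vectors and `eps` values below are hypothetical (cost, 1 - resilience) pairs for imaginary WDS designs, and GALAXY's actual use of these concepts is more involved than this sketch:

```python
def dominates(a, b):
    """Standard Pareto dominance (minimisation): a is no worse than b in
    every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def eps_dominates(a, b, eps):
    """Additive epsilon-dominance (minimisation): a, relaxed by eps in every
    objective, is still no worse than b in every objective."""
    return all(x - e <= y for x, e, y in zip(a, eps, b))

# Hypothetical (cost, 1 - resilience) vectors for two candidate designs:
assert dominates((100.0, 0.30), (120.0, 0.35))
assert not dominates((100.0, 0.40), (120.0, 0.35))
# With eps = (25, 0.1), the first design eps-dominates the second even
# though the second is slightly better in the second objective:
assert eps_dominates((100.0, 0.40), (120.0, 0.35), eps=(25.0, 0.1))
```

Epsilon-dominance coarsens the front into boxes of size `eps`, which bounds archive growth and enforces diversity; combining it with plain Pareto-dominance, as GALAXY does, trades off that coarsening against precision on the front.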
104

Tools and Algorithms for Coping with Uncertainty in Application Scheduling on Distributed Platforms

Canon, Louis-claude 18 October 2010 (has links)
This thesis revisits traditional scheduling problems in computational environments and considers the addition of uncertainty to the models. We adopt a wide definition of uncertainty that encompasses the intrinsic stochastic nature of some phenomena (e.g., processor failures that follow a Poisson distribution) and the imperfection of model characteristics (e.g., inaccuracy of the costs in a model due to a bias in measurements).
We also consider uncertainties that stem from indeterminations, such as user behaviours that are uncontrolled although deterministic. Scheduling, in its general form, is the operation that assigns requests to resources in some specific way. In distributed environments, we are concerned with a workload (i.e., a set of tasks) that needs to be executed on a computational platform (i.e., a set of processors). Our objective is therefore to specify how tasks are mapped onto processors. Produced schedules can be evaluated through many different metrics (e.g., processing time of the workload, resource usage), and finding an optimal schedule relative to some metric constitutes a challenging issue. Probabilistic tools and multi-objective optimisation techniques are first proposed for tackling the new metrics that arise from uncertainty. In a second part, we study several uncertainty-related criteria, such as the robustness (stability in the presence of input variations) or the reliability (probability of success) of a schedule.
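The robustness criterion above (stability in the presence of input variations) can be illustrated with a minimal Monte Carlo sketch: sample random task durations, compute the makespan of a fixed task-to-processor mapping, and use the spread of the samples as a robustness indicator. The task data, the normal perturbation model and the two mappings are hypothetical, not taken from the thesis:

```python
import random
import statistics

def makespan(assignment, durations):
    """Makespan of an independent-task schedule: each task runs on its
    assigned processor; the makespan is the heaviest processor load."""
    load = {}
    for task, proc in enumerate(assignment):
        load[proc] = load.get(proc, 0.0) + durations[task]
    return max(load.values())

def evaluate(assignment, mean_durations, sigma, n=10000, seed=1):
    """Monte Carlo estimate of the expected makespan and of its spread
    (a simple robustness indicator) under perturbed task durations."""
    rnd = random.Random(seed)
    samples = []
    for _ in range(n):
        durations = [max(0.0, rnd.gauss(m, sigma)) for m in mean_durations]
        samples.append(makespan(assignment, durations))
    return statistics.mean(samples), statistics.stdev(samples)

# Four hypothetical tasks mapped onto two processors.
mean_dur = [4.0, 3.0, 2.0, 5.0]
balanced = [0, 0, 1, 1]     # loads 7 / 7
unbalanced = [0, 0, 0, 1]   # loads 9 / 5
m_b, s_b = evaluate(balanced, mean_dur, sigma=0.5)
m_u, s_u = evaluate(unbalanced, mean_dur, sigma=0.5)
```

Comparing mappings on the pair (expected makespan, spread) rather than on a single deterministic makespan is the kind of multi-criteria, probabilistic evaluation the thesis develops in far greater depth.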
105

Contribution to the development of integrated maintenance strategies involving subcontracting

Ayed, Souheil 13 December 2011 (has links)
The general framework of this thesis is the management of maintenance integrated with production, taking into account the constraint of subcontracting. Our research specifically addresses the economic management of production support from one or more subcontractors that differ in their availability and unit production cost. The economic study consists in minimising a total cost including production, inventory and maintenance. The study is conducted along two axes. In the first, we consider a constant demand over an infinite time horizon.
An analytical study is conducted to determine the maintenance policy to adopt and the choice between several subcontractors. In the second axis, we consider a random demand to be satisfied over a finite time horizon. This demand must be met at a required service level by calling on subcontracting, while assuming that the failure rate of the main machine varies with usage and time. The objective is to propose an optimal maintenance and production plan that satisfies the service level and takes the machine's degradation into account while minimising production, inventory and maintenance costs. The analytical models developed along both axes are validated by numerical examples and interpreted through sensitivity studies.
106

No optimisation without representation : a knowledge based systems view of evolutionary/neighbourhood search optimisation

Tuson, Andrew Laurence January 1999 (has links)
In recent years, research into ‘neighbourhood search’ optimisation techniques such as simulated annealing, tabu search, and evolutionary algorithms has increased apace, resulting in a number of useful heuristic solution procedures for real-world and research combinatorial and function optimisation problems. Unfortunately, their selection and design remains a somewhat ad hoc procedure and very much an art. Needless to say, this shortcoming presents real difficulties for the future development and deployment of these methods. This thesis presents work aimed at resolving this issue of principled optimiser design. Driven by the needs of both the end-user and designer, and their knowledge of the problem domain and the search dynamics of these techniques, a semi-formal, structured, design methodology that makes full use of the available knowledge will be proposed, justified, and evaluated. This methodology is centred around a Knowledge Based System (KBS) view of neighbourhood search with a number of well-defined knowledge sources that relate to specific hypotheses about the problem domain. This viewpoint is complemented by a number of design heuristics that suggest a structured series of hillclimbing experiments which allow these results to be empirically evaluated and then transferred to other optimisation techniques if desired. First of all, this thesis reviews the techniques under consideration. The case for the exploitation of problem-specific knowledge in optimiser design is then made. Optimiser knowledge is shown to be derived from either the problem domain theory, or the optimiser search dynamics theory. From this, it will be argued that the design process should be primarily driven by the problem domain theory knowledge as this makes best use of the available knowledge and results in a system whose behaviour is more likely to be justifiable to the end-user. 
The encoding and neighbourhood operators are shown to embody the main source of problem domain knowledge, and it will be shown how forma analysis can be used to formalise the hypotheses about the problem domain that they represent. Therefore it should be possible for the designer to experimentally evaluate hypotheses about the problem domain. To this end, proposed design heuristics that allow the transfer of results across optimisers based on a common hillclimbing class, and that can be used to inform the choice of evolutionary algorithm recombination operators, will be justified. In fact, the above approach bears some similarity to that of KBS design. Additional knowledge sources and roles will therefore be described and discussed, and it will be shown how forma analysis again plays a key part in their formalisation. Design heuristics for many of these knowledge sources will then be proposed and justified. This methodology will be evaluated by testing the validity of the proposed design heuristics in the context of two sequencing case studies. The first case study is a well-studied problem from operational research, the flowshop sequencing problem, which will provide a thorough test of many of the design heuristics proposed here. Also, an idle-time move preference heuristic will be proposed and demonstrated on both directed mutation and candidate list methods. The second case study applies the above methodology to design a prototype system for resource redistribution in the developing world, a problem that can be modelled as a very large transportation problem with non-linear constraints and objective function.
The system, combining neighbourhood search with a constructive algorithm which reformulates the problem to one of sequencing, was able to produce feasible shipment plans for problems derived from data from the World Health Organisation’s TB programme in China that are much larger than those problems tackled by the current ‘state-of-the-art’ for transportation problems.
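The hillclimbing experiments at the heart of the methodology can be sketched on the flowshop sequencing case study: a first-improvement hillclimber over an adjacent-swap neighbourhood, minimising makespan. The three-job, two-machine instance below is hypothetical, and the thesis's operators and forma-analysis machinery are far richer than this minimal sketch:

```python
def flowshop_makespan(seq, proc):
    """Makespan of a permutation flowshop: proc[j][m] is the processing
    time of job j on machine m; jobs visit the machines in order."""
    n_machines = len(proc[0])
    finish = [0.0] * n_machines
    for j in seq:
        for m in range(n_machines):
            # a job starts on machine m when both the machine and the
            # job's previous operation are free
            start = max(finish[m], finish[m - 1] if m else 0.0)
            finish[m] = start + proc[j][m]
    return finish[-1]

def hillclimb(seq, proc):
    """First-improvement hillclimbing with an adjacent-swap neighbourhood."""
    best = flowshop_makespan(seq, proc)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
            cost = flowshop_makespan(seq, proc)
            if cost < best:
                best, improved = cost, True
            else:
                seq[i], seq[i + 1] = seq[i + 1], seq[i]  # revert the swap
    return seq, best

# Hypothetical instance: proc[j][m] is job j's time on machine m.
proc = [[5, 1], [1, 5], [2, 3]]
order, cost = hillclimb([0, 1, 2], proc)
```

Swapping the neighbourhood operator or the move-acceptance rule here is exactly the kind of controlled experiment the design heuristics in the thesis are meant to structure, with results transferring to other optimisers in the same hillclimbing class.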
107

Optimizing competitive economic decisions in a business game

Dufourny, Sylvain 13 October 2017 (has links)
Digital technologies are becoming increasingly popular in teaching and learning. New educational practices are also revolutionising training standards; the "gamification" of curricula, for example, has become a current trend, allowing learners to be exercised differently through games.
Business management simulations, also known as business games, fall within this context. They place learners at the head of virtual companies and simulate a competitive market. The deployment of this practice nevertheless encounters operational difficulties: the size of the group, the training of the facilitator, and so on. It is in this context that we envisage the implementation of autonomous agents to accompany or compete with the learners. To do this, we first propose a model of a company based on mixed-integer linear programs, allowing optimisation of the companies' internal departments (production, distribution, finance). Second, we introduce a local search heuristic that generates efficient solutions in a given economic and competitive environment. Third, following a knowledge-extraction phase, we propose the definition and construction of anticipation trees that predict the competitive decisions of the protagonists involved, and thus allow the quality of the constructed solutions to be estimated. To validate the proposed approaches, we compared them with the real behaviour of players and evaluated the contribution of exploiting this knowledge. Finally, we propose a framework generalising the method to other business games.
108

Convex optimization-based resource scheduling for multi-user wireless systems

Zarakovitis, Charilaos C. January 2011 (has links)
No description available.
109

Methodology and algorithms for multi-level, multi-objective optimisation of complex systems

Moussouni-Messad, Fouzia 08 July 2009 (has links) (PDF)
Designing an electrical system is a highly complex task drawing on expertise in several fields. In a competitive context where technological advance is a decisive factor, industry seeks to reduce study times and to make the solutions found more reliable through a rigorous methodological approach that yields a system-wide optimal solution. It is therefore necessary to build models and to develop optimisation methods compatible with these concerns. Indeed, optimising subsystems in isolation, without taking their interactions into account, does not produce an optimal system. The more complex the system, the harder the task and the longer the development time, since it is difficult for the designer to grasp the system in its entirety. Component design must therefore be integrated into a global, systemic approach that accounts both for a component's specificities and for its relations with the system that uses it. Analytical Target Cascading is a multi-level optimisation method for complex systems. This hierarchical approach decomposes a complex system into subsystems, down to the component level, whose design can be handled by classical optimisation algorithms. The optimal solution is then found by a coordination technique that ensures the consistency of all the subsystems. The first part is devoted to the optimisation of electrical components. Multi-level optimisation of complex systems is studied in the second part, with an electric drivetrain chosen as the example.
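The coordination idea behind Analytical Target Cascading (a top level steering subsystem optimisations via targets until their responses agree) can be illustrated on a deliberately tiny toy problem. The quadratic objectives, single shared variable and simple target update below are hypothetical stand-ins for the method's full target/response hierarchy:

```python
def atc_toy(a, weight, iters=300):
    """Toy two-level coordination in the spirit of Analytical Target
    Cascading: subsystem i minimises (x - a[i])^2 + weight*(x - t)^2 over
    the shared variable x (closed form for a quadratic), and the top level
    updates the target t to the mean of the subsystem responses."""
    t = 0.0
    for _ in range(iters):
        # each subsystem's best response given the current target t
        responses = [(ai + weight * t) / (1.0 + weight) for ai in a]
        # top level pulls the target toward the responses
        t = sum(responses) / len(responses)
    return t, responses

# Two hypothetical subsystems preferring x = 2 and x = 6 respectively;
# coordination drives the target to the compromise value 4, and a larger
# weight pulls the two subsystem responses closer together (tighter
# consistency between levels).
t, xs = atc_toy([2.0, 6.0], weight=10.0)
```

In a real ATC deployment each "subsystem" is itself an optimisation problem (here, a component design), and consistency between levels is enforced by penalty or augmented-Lagrangian terms rather than this simple averaging; the sketch only shows the iterate-until-consistent structure.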
110

Decomposition/coordination algorithms in stochastic optimisation

Culioli, Jean-Christophe 28 September 1987 (has links) (PDF)
No abstract available.
