31

Quantum Algorithms Using Nuclear Magnetic Resonance Quantum Information Processor

Mitra, Avik 10 1900 (has links)
The present work, briefly described below, consists of the implementation of several quantum algorithms on an NMR quantum information processor. Game theory gives us mathematical tools to analyze situations of conflict between two or more players who take decisions that influence their welfare. Classical game theory has been applied to various fields such as market strategy, communication theory, biological processes and foreign policy. It is interesting to study the behaviour of games when the players share certain quantum correlations such as entanglement. Various games have been studied under the quantum regime with the hope of obtaining some insight into designing new quantum algorithms. Chapter 2 presents the NMR implementation of three such algorithms. The experimental NMR implementations given in this chapter are: (i) the three-qubit ‘Dilemma’ game with corrupt sources. The Dilemma game deals with the situation where three players have to choose between going/not going to a bar with a seating capacity of two. It is seen that the players have a higher payoff if they share quantum correlations. However, the payoff falls rapidly with increasing corruption in the source qubits. Here we report the experimental NMR implementation of the quantum version of the Dilemma game with and without corruption in the source qubits. (ii) The two-qubit ‘Ulam’s game’. This is a two-player game where one player has to find out the binary number thought of by the other player. This problem can be solved with one query if quantum resources are used. This game has been implemented in a two-qubit system on an NMR quantum information processor. (iii) The two-qubit ‘Battle of Sexes’ game. This game deals with a situation where two players have conflicting choices but a deep desire to be together. This leads to a dilemma in the classical case. Quantum mechanically this dilemma is resolved and a unique solution emerges. The NMR implementation of the quantum version of this game is also given in this chapter. The quantum adiabatic algorithm is a method of solving computational problems by evolving the ground state of a slowly varying Hamiltonian towards the required output state. In some cases, such as the adiabatic versions of Grover’s search algorithm and the Deutsch-Jozsa algorithm, applying global adiabatic evolution yields a complexity similar to that of the corresponding classical algorithms. However, if one uses local adiabatic evolution, the complexity is of the order of √N (where N = 2^n) [37, 38]. In Chapter 3, the NMR implementations of (i) the Deutsch-Jozsa algorithm and (ii) Grover’s search algorithm using local adiabatic evolution are presented. In an adiabatic algorithm, the system is first prepared in the equal superposition of all possible states, which is the ground state of the beginning Hamiltonian. The solution is encoded in the ground state of the final Hamiltonian. The system is evolved under a linear combination of the beginning and final Hamiltonians. During each step of the evolution the interpolating Hamiltonian slowly changes from the beginning to the final Hamiltonian, thus evolving the ground state of the beginning Hamiltonian towards the ground state of the final Hamiltonian. At the end of the evolution the system is in the ground state of the final Hamiltonian, which encodes the solution. The final Hamiltonian, for each of the two cases of the adiabatic algorithm described in this chapter, is constructed according to the problem definition.
Adiabatic algorithms have been proved to be equivalent to standard quantum algorithms with respect to complexity [39]. NMR implementation of adiabatic algorithms in homonuclear spin systems faces problems due to decoherence and complicated pulse sequences. Decoherence destroys the answer by causing the final state to evolve into a mixed state, and in homonuclear systems there is substantial evolution under the internal Hamiltonian during the application of the soft pulses, which prevents the initial state from converging to the solution state. The resolution of these issues is necessary before one can proceed to the implementation of an adiabatic algorithm in a larger system. Chapter 4 demonstrates that by using ‘strongly modulated pulses’ for the creation of the interpolating Hamiltonian, one can circumvent both problems and thus successfully implement the adiabatic SAT algorithm in a homonuclear three-qubit system. Strongly modulated pulses (SMPs) are computer-optimized pulses whose generation incorporates the evolution under the internal Hamiltonian of the system and the RF inhomogeneities associated with the probe. This results in precise implementation of unitary operators by these pulses. This work also demonstrates that strongly modulated pulses greatly reduce the time taken for the implementation of the algorithm, can overcome problems associated with decoherence, and are likely to be the modality of choice in future NMR implementations of quantum information processing. A quantum search algorithm involving a large number of qubits is highly sensitive to errors in the physical implementation of the unitary operators. This can put an upper limit on the size of the database that can be practically searched. The lack of robustness of the quantum search algorithm for a large number of qubits arises from the fact that stringent ‘phase-matching’ conditions are imposed on the algorithm. To overcome this problem, a modified operator for the search algorithm has been suggested by Tulsi [40]. He has theoretically shown that even when there are errors in the implementation of the unitary operators, the search algorithm with his modified operator converges to the target state, while the original Grover’s algorithm fails. Chapter 5 presents the experimental NMR implementation of the modified search algorithm with errors and its comparison with the original Grover’s search algorithm. We experimentally validate the theoretical prediction made by Tulsi that the introduction of compensatory Walsh-Hadamard and phase-flip operations refocuses the errors. Experimental quantum information processing is at a nascent stage and it would be too early to predict its future. The excitement around this topic is still very much alive and many options are being explored to enhance the hardware and software know-how. This thesis endeavors in this direction and probes the experimental feasibility of quantum algorithms on an NMR quantum information processor.
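As an illustration of the adiabatic interpolation described in this abstract, the following minimal sketch (not taken from the thesis) simulates a small adiabatic Grover search with a simple linear schedule; the number of qubits, marked item, evolution time and discretisation are all illustrative assumptions.

```python
# Minimal sketch of adiabatic search: the ground state of
# H(s) = (1 - s) * H_B + s * H_F is tracked as s goes from 0 to 1.
import numpy as np
from scipy.linalg import expm

n_qubits = 3
N = 2 ** n_qubits
marked = 5                       # hypothetical marked item

psi0 = np.ones(N) / np.sqrt(N)   # equal superposition: ground state of H_B
H_B = np.eye(N) - np.outer(psi0, psi0)
H_F = np.eye(N)
H_F[marked, marked] = 0.0        # ground state of H_F encodes the solution

T, steps = 50.0, 2000            # total evolution time and discretisation (illustrative)
dt = T / steps
psi = psi0.astype(complex)
for k in range(steps):
    s = (k + 0.5) / steps        # simple linear (global) schedule s(t) = t / T
    H = (1 - s) * H_B + s * H_F
    psi = expm(-1j * H * dt) @ psi

print("P(marked) =", abs(psi[marked]) ** 2)  # close to 1 for sufficiently slow evolution
```

A local adiabatic schedule, as used in the thesis, would replace the linear ramp with a rate adapted to the instantaneous energy gap.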
32

SUSTAINABLE LIFETIME VALUE CREATION THROUGH INNOVATIVE PRODUCT DESIGN: A PRODUCT ASSURANCE MODEL

Seevers, K. Daniel 01 January 2014 (has links)
In the field of product development, many organizations struggle to create a value proposition that can overcome the headwinds of technology change, regulatory requirements, and intense competition, in an effort to satisfy the long-term goals of sustainability. Today, organizations are realizing that they have lost portfolio value due to poor reliability, early product retirement, and abandoned design platforms. Beyond Lean and Green Manufacturing, shareholder value can be enhanced by taking a broader perspective and integrating sustainability innovation elements into product designs in order to improve the delivery process and extend the life of product platforms. This research is divided into two parts that lead to closing the loop towards Sustainable Value Creation in product development. The first part presents a framework for achieving Sustainable Lifetime Value through a toolset that bridges the gap between financial success and sustainable product design. Focus is placed on the analysis of the sustainable value proposition between producers, consumers, society, and the environment, and on the half-life of product platforms. The Half-Life Return Model is presented, designed to provide feedback to producers in the pursuit of improving the return on investment for the primary stakeholders. The second part applies the driving aspects of the framework with the development of an Adaptive Genetic Search Algorithm. The algorithm is designed to improve fault detection and mitigation during the product delivery process. A computer simulation is used to study the effectiveness of the primary aspects introduced in the search algorithm, in an attempt to improve the reliability growth of the system during the development life cycle. The results of the analysis draw attention to the sensitivity of the driving aspects identified in the product development life cycle, which affect the long-term goals of sustainable product development. With the use of the techniques identified in this research, cost-effective test case generation can be improved without a major degradation in the diversity of the search patterns required to ensure a high level of fault detection. This in turn can lead to improvements in the driving aspects of the Half-Life Return Model, and ultimately to the goal of designing sustainable products and processes.
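As a rough illustration of an adaptive genetic search over test cases of the kind described above, here is a minimal sketch; the bit-string encoding, the stand-in fitness function, and the stall-based mutation-rate adaptation are assumptions, not the thesis' actual model.

```python
# Hedged sketch of an adaptive genetic search for test-case generation.
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 16, 30, 40

def fitness(test_case):
    # Stand-in for "faults detected" by a test case (hypothetical objective).
    return sum(test_case)

def crossover(a, b):
    cut = random.randint(1, GENOME_LEN - 1)
    return a[:cut] + b[cut:]

def mutate(genome, rate):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
mutation_rate, best_prev = 0.05, 0
for gen in range(GENERATIONS):
    scored = sorted(population, key=fitness, reverse=True)
    best = fitness(scored[0])
    # Adaptive step: raise the mutation rate when progress stalls, lower it otherwise.
    mutation_rate = min(0.3, mutation_rate * 1.5) if best <= best_prev else max(0.01, mutation_rate * 0.7)
    best_prev = best
    parents = scored[: POP_SIZE // 2]                 # keep the better half (elitism)
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)), mutation_rate)
        for _ in range(POP_SIZE - len(parents))
    ]
print("best fitness:", best_prev, "final mutation rate:", round(mutation_rate, 3))
```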
33

Finding the optimal speed profile for an electric vehicle using a search algorithm

Medin, Jonas January 2018 (has links)
This master's thesis presents a method for finding the optimal speed profile for a dynamic system in the shape of an electric vehicle, over any topography, using a search algorithm. The search algorithm is capable of considering all the speed choices in a discretely presented topography in order to find the most energy-efficient one. How well the calculations made by the search algorithm represent reality depends on the speed and topography resolution and on the vehicle energy model. With the correct settings, up to 18.4% of the energy can be saved for a given topography compared to driving at the lowest allowed constant speed. The speed ranges between 85 and 95 km/h, but the method presented is capable of handling any set of speed options, even if the resolution varies from point to point on the road. How to use this method and its properties is explained in detail using text and step-by-step figures of how the search algorithm iterates. A comparison between allowing regenerative braking and not allowing it is shown in the results. It is clear that the energy-saving potential is greatest when no regenerative braking is allowed.
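To make the kind of search the abstract describes concrete, here is a minimal sketch (not the thesis implementation) that exhaustively evaluates discrete speed choices over a discretised road via dynamic programming; the vehicle parameters, road grades, and starting-speed assumption are all illustrative.

```python
# Hedged sketch of searching for an energy-optimal speed profile on a discretised road.
MASS, G, CRR, CDA, RHO, SEG_LEN = 20000.0, 9.81, 0.006, 5.0, 1.2, 500.0   # illustrative
speeds_kmh = [85, 90, 95]                    # allowed speed choices per segment
grades = [0.00, 0.02, -0.01, 0.03, -0.02]    # hypothetical per-segment road grades
ALLOW_REGEN = False

def edge_energy(v_prev_kmh, v_kmh, grade):
    v0, v1 = v_prev_kmh / 3.6, v_kmh / 3.6
    resist = MASS * G * (CRR + grade) + 0.5 * RHO * CDA * v1 * v1
    energy = resist * SEG_LEN + 0.5 * MASS * (v1 * v1 - v0 * v0)   # joules
    return energy if (energy > 0.0 or ALLOW_REGEN) else 0.0

# Dynamic programming over (segment, speed) states; cost[v] is the cheapest energy
# to finish the current segment at speed v. Assumes the vehicle enters at 85 km/h.
cost = {v: edge_energy(speeds_kmh[0], v, grades[0]) for v in speeds_kmh}
for grade in grades[1:]:
    cost = {v: min(cost[u] + edge_energy(u, v, grade) for u in speeds_kmh)
            for v in speeds_kmh}

best_kwh = min(cost.values()) / 3.6e6
const_kwh = sum(edge_energy(85, 85, g) for g in grades) / 3.6e6
print(f"optimal {best_kwh:.3f} kWh vs constant 85 km/h {const_kwh:.3f} kWh")
```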
34

Planning problem decomposition using landmarks

Vernhes, Simon 12 December 2014 (has links)
Algorithms allowing the on-the-fly computation of efficient strategies for solving a heterogeneous set of problems have always been one of the greatest challenges faced by research in Artificial Intelligence. To this end, classical planning provides a system with reasoning capacities, in order to help it interact with its environment autonomously. Given a description of the current state of the world, the actions the system is able to perform, and the goal it is supposed to reach, a planner can compute an action sequence yielding a state satisfying the predefined goal. The planning problem is in general intractable (PSPACE-hard); however, some properties of the problems can be automatically extracted, allowing the design of efficient solvers. Firstly, we have developed the Landmark-based Meta Best-First Search (LMBFS) algorithm. Unlike state-of-the-art planners, usually based on state-space heuristic search, LMBFS revives landmark-based planning problem decomposition. A landmark is a fluent that must be true at some point in every solution plan. The LMBFS algorithm splits the global problem into a set of subproblems and tries to find a global solution using the solutions found for these subproblems. Secondly, we have adapted classical planning techniques to enhance the performance of our base algorithm, making LMBFS a competitive planner. Finally, we have tested and compared these methods.
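A heavily simplified sketch of the landmark-based decomposition idea follows; the toy domain, the stand-in sub-planner, and the brute-force enumeration of landmark orderings are assumptions only, since LMBFS itself drives the decomposition with a best-first meta-search.

```python
# Hedged sketch: split the global planning problem at landmarks and chain sub-plans.
from itertools import permutations

def sub_plan(state, landmark):
    """Stand-in for a base planner: returns (plan, new_state) achieving `landmark`."""
    # Hypothetical domain: states are sets of fluents, actions simply add one fluent.
    return ([f"achieve({landmark})"], state | {landmark})

def landmark_meta_search(initial_state, landmarks, goal):
    best_plan = None
    # Meta-search over landmark orderings; LMBFS explores these with best-first
    # search rather than brute force, but the decomposition principle is the same.
    for ordering in permutations(landmarks):
        state, plan = set(initial_state), []
        for lm in ordering:
            sub, state = sub_plan(state, lm)
            plan += sub
        if goal <= state and (best_plan is None or len(plan) < len(best_plan)):
            best_plan = plan
    return best_plan

plan = landmark_meta_search({"at_start"}, {"have_key", "door_open"}, {"door_open"})
print(plan)
```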
35

Performance of alternative option pricing models during spikes in the FTSE 100 volatility index: Empirical evidence from FTSE 100 index options

Rehnby, Nicklas January 2017 (has links)
Derivatives play a large and significant role in the financial markets today, and the popularity of options has increased. This has also increased the demand for a suitable option pricing model, since the ground-breaking model developed by Black & Scholes (1973) has poor pricing performance. Practitioners and academics have over the years developed different models under the assumption of non-constant volatility, without reaching any conclusion regarding which model is more suitable to use. This thesis examines four different models. The first model is the Practitioner Black & Scholes model proposed by Christoffersen & Jacobs (2004b). The second model is Heston's (1993) continuous-time stochastic volatility model; a modification of this model, the Strike Vector Computation suggested by Kilin (2011), is also included. The last model is the Heston & Nandi (2000) Generalized Autoregressive Conditional Heteroscedasticity type discrete model. The models are evaluated from a practical point of view, with the goal of finding the model with the best pricing performance and the most practical usage. The models' robustness is also tested to see how they perform out-of-sample in high and low implied-volatility markets, respectively. All the models are affected in the robustness test; their out-of-sample performance is negatively affected by a high implied-volatility market. The results show that both stochastic volatility models have superior performance in the in-sample and out-of-sample analyses. The Generalized Autoregressive Conditional Heteroscedasticity type discrete model shows surprisingly poor results in both the in-sample and out-of-sample analyses. The results indicate that option data should be used instead of historical return data to estimate the models' parameters. This thesis also provides an insight into why overnight-index-swap (OIS) rates should be used instead of LIBOR rates as a proxy for the risk-free rate.
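For reference, here is a minimal sketch of the benchmark Black & Scholes (1973) European call price the alternative models are measured against; the index level, strike, maturity, rate, and volatility below are purely illustrative.

```python
# Minimal sketch of the Black & Scholes (1973) European call price (benchmark model).
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S, K, T, r, sigma):
    """European call price under constant volatility sigma and risk-free rate r."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Example: FTSE-100-like level, 3-month at-the-money call, OIS-style rate as the risk-free proxy.
print(round(bs_call(S=7400, K=7400, T=0.25, r=0.005, sigma=0.15), 2))
```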
36

Theoretical investigation of electronic properties of atomic clusters in their free forms and adsorbed on functionalized graphene support

Li, Rui 11 October 2016 (has links)
A sub-nanometre-sized metal cluster consists of only several to tens of atoms. Due to its small size and quantum effects, it can have electronic, optical, magnetic and catalytic properties that differ markedly from the corresponding bulk behaviour. From an experimental point of view, it is still a big challenge to realize size-controlled synthesis of (sub)nanoclusters. From a theoretical point of view, benefiting from the development of faster high-performance computational resources, more efficient electronic-structure modelling software and more reliable global search methods for the determination of the most stable structures, the chemical and physical properties of clusters can be determined more accurately. Since size-controlled synthesis of (sub)nanoclusters remains experimentally challenging, theoretical studies can provide detailed information on their geometric structure, electronic structure, and adsorption and reaction properties. The example chosen for this study is inspired by the fuel cell, in which Platinum (Pt) is the most commonly used precious-metal catalyst for the production of energy by the oxidation of dihydrogen. Graphene, a recently discovered two-dimensional carbon network, has several special properties, such as low weight, high strength, high surface area and high electrical conductivity. With these properties and their novel combinations, graphene is a promising candidate for use as a catalyst support. The first part of this study is devoted to the search for doping elements which both enhance the adsorption capacity of Pt clusters on the surface and prevent their migration. The aim is to propose a substrate which can avoid the problems of cluster agglomeration, dissolution and detachment that reduce the performance of the catalyst. Dopings of the surface that have already been realized experimentally, such as Nitrogen, Boron and N-B patch substitution of Carbon atoms, with or without introducing vacancies in the pristine graphene, are studied. The second part corresponds to the implementation of new features into the code GSAM (Global Search Algorithm of Minima) developed in our laboratory, which permit the search for the most stable structures of molecular clusters adsorbed on a substrate, such as the [H2-Ptn-doped Graphene] systems considered here. The third part evaluates the reliability of the global search method used, as well as of the DFT and empirical (GUPTA) potential energy surfaces, through a comparison with results from the literature on Pt clusters. The fourth part consists of the structural investigation of the [H2-Ptn] and [H2-Ptn-doped Graphene] systems for cluster sizes from n=6 to n=20. The variation with cluster size of the adsorption energy of H2 on the free and supported Ptn clusters, and of the adsorption energy of the (H2+Ptn) system on the surface, is discussed.
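As a rough illustration of what a global-minimum structure search such as GSAM does, the sketch below relaxes random starting geometries of a small cluster and keeps the lowest energy found; the Lennard-Jones pair potential is a stand-in assumption for the DFT/GUPTA surfaces used in the thesis, and the cluster size and restart count are arbitrary.

```python
# Hedged sketch of a global-minimum structure search: random restarts + local relaxation.
import numpy as np
from scipy.optimize import minimize

N_ATOMS, N_RESTARTS = 6, 20
rng = np.random.default_rng(0)

def lj_energy(flat_coords):
    # Lennard-Jones pair potential as a stand-in for the true energy surface.
    xyz = flat_coords.reshape(N_ATOMS, 3)
    d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)
    d = d[np.triu_indices(N_ATOMS, k=1)]
    return np.sum(4.0 * (d ** -12 - d ** -6))

best = np.inf
for _ in range(N_RESTARTS):
    start = rng.uniform(-1.5, 1.5, size=3 * N_ATOMS)      # random initial geometry
    result = minimize(lj_energy, start, method="L-BFGS-B")  # local relaxation
    best = min(best, result.fun)

print("lowest energy found (LJ units):", round(best, 3))
# The known global minimum for 6 LJ atoms (an octahedron) is about -12.71.
```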
37

Novel Pattern Recognition Techniques for Improved Target Detection in Hyperspectral Imagery

Sakla, Wesam Adel 2009 December 1900 (has links)
A fundamental challenge in target detection in hyperspectral imagery is spectral variability. In target detection applications, we are provided with a pure target signature; we do not have a collection of samples that characterize the spectral variability of the target. Another problem is that the performance of stochastic detection algorithms such as the spectral matched filter can be detrimentally affected by the assumptions of multivariate normality of the data, which are often violated in practical situations. We address the challenge of lack of training samples by creating two models to characterize the spectral variability of the target class: the first model makes no assumptions regarding inter-band correlation, while the second model uses a first-order Markov-based scheme to exploit correlation between bands. Using these models, we present two techniques for meeting these challenges: the kernel-based support vector data description (SVDD) and the spectral fringe-adjusted joint transform correlation (SFJTC). We have developed an algorithm that uses the kernel-based SVDD for use in full-pixel target detection scenarios. We have addressed optimization of the SVDD kernel-width parameter using the golden-section search algorithm for unconstrained optimization. We investigated a proper number of signatures N to generate for the SVDD target class and found that only a small number of training samples is required relative to the dimensionality (number of bands). We have extended decision-level fusion techniques using the majority vote rule for the purpose of alleviating the problem of selecting a proper value of s² for either of our target variability models. We have shown that heavy spectral variability may cause SFJTC-based detection to suffer and have addressed this by developing an algorithm that selects an optimal combination of the discrete wavelet transform (DWT) coefficients of the signatures for use as features for detection. For most scenarios, our results show that our SVDD-based detection scheme provides low false positive rates while maintaining higher true positive rates than popular stochastic detection algorithms. Our results also show that our SFJTC-based detection scheme using the DWT coefficients can yield significant detection improvement compared to use of SFJTC with the original signatures and traditional stochastic and deterministic algorithms.
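Since the abstract relies on golden-section search to tune the SVDD kernel width, a minimal sketch of that one-dimensional method follows; the cross-validation-style objective is an illustrative stand-in, not the thesis' actual error function.

```python
# Hedged sketch of golden-section search for a 1-D kernel-width parameter.
from math import sqrt

def golden_section_search(f, a, b, tol=1e-5):
    """Minimize a unimodal function f on the interval [a, b]."""
    inv_phi = (sqrt(5) - 1) / 2                 # 1/phi ~ 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                         # minimum lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                         # minimum lies in [c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Stand-in objective: a validation-error curve with a single minimum (hypothetical).
objective = lambda sigma: (sigma - 1.3) ** 2 + 0.05
print(round(golden_section_search(objective, 0.01, 10.0), 4))   # ~ 1.3
```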
38

Design And Optimization Of A Mixed Flow Compressor Impeller Using Robust Design Methods

Cevik, Mert 01 September 2009 (has links) (PDF)
This study focuses on developing an individual design methodology for a centrifugal impeller and on generating a mixed-flow impeller for a small turbojet engine using this methodology. The structure of the methodology is based on the design, modeling, and optimization processes, which are operated sequentially. The design process consists of engine design and compressor design codes operated together with a commercial design code. Design of Experiment methods and an in-house Neural Network code are used for the modeling phase. The optimization is based on an in-house code built around the multidirectional search algorithm. The optimization problem is constructed using the in-house parametric design codes of the engine and the compressor. The goal of the optimization problem is to reach an optimum design which gives the best possible combination of thrust and fuel consumption for a small turbojet engine. The final combination of design parameters obtained from the optimization study is used to generate the final design with the commercial design code. In the last part of the thesis, a comparison between the final design and a standard radial-flow impeller is made to clarify the benefit of the study. The results show that a mixed-flow compressor design is superior to a standard radial-flow compressor in a small turbojet application.
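As an illustration of the multidirectional (simplex) search underlying the in-house optimizer, a minimal sketch follows; the two-variable stand-in objective and the starting simplex are assumptions, not the thesis' thrust/fuel-consumption formulation.

```python
# Hedged sketch of a Torczon-style multidirectional search on a simplex.
import numpy as np

def multidirectional_search(f, simplex, iters=100):
    simplex = np.array(simplex, dtype=float)
    for _ in range(iters):
        simplex = simplex[np.argsort([f(v) for v in simplex])]   # best vertex first
        best = simplex[0]
        reflected = 2 * best - simplex[1:]                       # rotate others through best
        if min(f(v) for v in reflected) < f(best):
            expanded = 3 * best - 2 * simplex[1:]                # try a larger step
            candidate = expanded if min(f(v) for v in expanded) < min(f(v) for v in reflected) else reflected
        else:
            candidate = 0.5 * (best + simplex[1:])               # contract toward the best vertex
        simplex[1:] = candidate
    return simplex[np.argmin([f(v) for v in simplex])]

# Stand-in objective: distance from a hypothetical target design point.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
start = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
print(multidirectional_search(f, start))                         # ~ [1, -2]
```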
39

Optimum Design Of Reinforced Concrete Plane Frames Using Harmony Search Algorithm

Akin, Alper 01 August 2010 (has links) (PDF)
In this thesis, an optimum design algorithm is presented for reinforced concrete special moment frames. The objective function is taken as the total cost of the reinforced concrete frame, which includes the cost of concrete, formwork and reinforcing steel bars. The cost of any component is inclusive of material, fabrication and labor. The design variables in beams are selected as the width and depth of the beams in each span, and the diameter and number of longitudinal reinforcement bars along the span and at the supports. In columns, the width and depth of the column section and the number and diameter of bars in the x and y directions are selected as design variables. A column-section database is prepared which includes the width and height of the column section and the diameter and number of reinforcing bars in the section. This database is used by the design algorithm to select appropriate sections for the columns of the frame under consideration. The design constraints are implemented from ACI 318-05, which covers flexural and shear strength, serviceability, the minimum and maximum steel percentages for flexural and shear reinforcement, the spacing requirements for the reinforcing bars, and the upper and lower bound requirements for the concrete sections. The optimum design problem formulated according to ACI 318-05 provisions with the design variables mentioned above turns out to be a combinatorial optimization problem. The solution of the design problem is obtained using the harmony search (HS) algorithm, one of the recent additions to the meta-heuristic optimization techniques widely used for solving combinatorial optimization problems. The HS algorithm is quite simple, has few parameters to initialize, and consists of simple steps, which makes it easy to implement. A number of design examples are presented to demonstrate the efficiency and robustness of the optimum design algorithm developed.
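To illustrate the simple steps of the harmony search algorithm mentioned above, here is a minimal sketch on a continuous stand-in objective; the parameter values and the objective are illustrative, not the reinforced-concrete cost formulation of the thesis.

```python
# Hedged sketch of the basic harmony search (HS) loop.
import random

HMS, HMCR, PAR, BW, ITERS = 10, 0.9, 0.3, 0.05, 2000   # memory size, rates, bandwidth
LB, UB, DIM = -5.0, 5.0, 3

def cost(x):                                            # stand-in objective (sphere function)
    return sum(v * v for v in x)

# Initialize the harmony memory with random solution vectors.
memory = [[random.uniform(LB, UB) for _ in range(DIM)] for _ in range(HMS)]
for _ in range(ITERS):
    new = []
    for d in range(DIM):
        if random.random() < HMCR:                      # take a value from memory...
            v = random.choice(memory)[d]
            if random.random() < PAR:                   # ...and possibly pitch-adjust it
                v += random.uniform(-BW, BW)
        else:                                           # otherwise improvise randomly
            v = random.uniform(LB, UB)
        new.append(min(UB, max(LB, v)))
    worst = max(range(HMS), key=lambda i: cost(memory[i]))
    if cost(new) < cost(memory[worst]):                 # replace the worst harmony
        memory[worst] = new

best = min(memory, key=cost)
print("best solution:", [round(v, 3) for v in best], "cost:", round(cost(best), 5))
```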
40

Strategic behavior analysis in electricity markets

Son, You Seok 14 May 2015 (has links)
Strategic behaviors in electricity markets are analyzed. Three related topics are investigated. The first topic is research on Nash equilibrium (NE) search algorithms for complex non-cooperative games in electricity markets with transmission constraints. Hybrid co-evolutionary programming is suggested and simulated for complex examples. The second topic is an analysis of the competing pricing mechanisms of uniform and pay-as-bid pricing in an electricity market. We prove that for a two-player static game the Nash equilibrium under pay-as-bid pricing will yield less total revenue in expectation than under uniform pricing when demand is inelastic. The third topic addresses a market-power mitigation issue in the current Texas electricity market by limiting Transmission Congestion Right (TCR) ownership. The strategic coordination of inter-zonal scheduling and balancing-market manipulation is analyzed. A market power measurement algorithm useful for determining the proper level of TCR ownership limitation is suggested.
