501

Development of simulation tools, control strategies, and a hybrid vehicle prototype

Pei, Dekun 14 November 2012 (has links)
This thesis (1) reports the development of simulation tools and control strategies for optimizing hybrid electric vehicle (HEV) energy management, and (2) reports the design and testing of a hydraulic hybrid school bus (HHB) prototype. A hybrid vehicle is one that combines two or more energy sources for vehicle propulsion. Hybrid electric vehicles have become popular in the consumer market due to their greatly improved fuel economy over conventional vehicles. The control strategy of an HEV has a paramount effect on its fuel economy. In this thesis, backward-looking and forward-looking simulations of three HEV architectures (parallel, power-split, and 2-mode power-split) are developed. The Equivalent Cost Minimization Strategy (ECMS), which weights electrical power as an equivalent fuel usage, is then studied in great detail and improvements are suggested. Specifically, the robustness of an ECMS controller is improved by linking the equivalence factor to dynamic programming and then further tailoring its functional form. High-fidelity vehicle simulations over multiple drive cycles are performed to measure the improved performance of the new ECMS controller and to show its potential for online application. While HEVs are prominent in the consumer market and studied extensively in the current literature, hydraulic hybrid vehicles (HHVs) exist only as heavy utility vehicle prototypes. The second half of this thesis reports the design, construction, and testing of a hydraulic hybrid school bus prototype. Design considerations, simulation results, and preliminary testing results are reported, indicating the strong potential of hydraulic hybrids to improve fuel economy in the school bus vehicle segment.
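The ECMS idea summarized above can be made concrete with a minimal sketch: at each instant, the controller picks the power split minimizing fuel use plus equivalence-factor-weighted battery use. The signal names, candidate-set representation, and LHV value below are illustrative assumptions, not the thesis's implementation.

```python
def ecms_cost(fuel_rate_g_per_s, battery_power_w, s, lhv_j_per_g=42_600.0):
    """Instantaneous ECMS cost: fuel mass flow plus battery power expressed
    as an equivalent fuel mass flow via the equivalence factor s."""
    return fuel_rate_g_per_s + s * battery_power_w / lhv_j_per_g

def choose_split(candidates, s):
    """candidates: iterable of (fuel_rate_g_per_s, battery_power_w) pairs,
    one per feasible engine/motor split meeting the driver's power demand."""
    return min(candidates, key=lambda c: ecms_cost(c[0], c[1], s))
```

The whole design question the abstract raises is how to pick and adapt s: too low and the battery is drained, too high and fuel is wasted, which is why the thesis ties the equivalence factor to dynamic programming results.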
502

Multigrid methods for two-player zero-sum stochastic games with infinite horizon

Detournay, Sylvie 25 September 2012 (has links) (PDF)
In this thesis, we propose algorithms and present numerical results for solving two-player zero-sum repeated stochastic games with large state spaces. In particular, we consider the class of games with complete information and infinite horizon, distinguishing the discounted-payoff case from the mean-payoff case. Our algorithms, implemented in C, are mainly based on policy iteration and multigrid methods. They are applied either to dynamic programming equations arising from two-player games with finite state space, or to discretizations of Isaacs-type equations associated with stochastic differential games. In the first part of the thesis, we propose an algorithm that combines policy iteration for discounted games with algebraic multigrid methods for solving the linear systems. We present numerical results for Isaacs equations and variational inequalities. We also present a policy iteration algorithm with grid refinement in the style of the FMG method; examples on variational inequalities show that this algorithm significantly reduces their solution time. For mean-payoff games, we propose a policy iteration algorithm for two-player games with finite state and action spaces in the general multichain case (that is, without any irreducibility assumption on the Markov chains induced by the players' strategies). This algorithm uses an idea developed in Cochet-Terrasson and Gaubert (2006) and is based on the notion of a nonlinear spectral projector of the dynamic programming operators of one-player games (which are monotone and convex). We show that the sequence of values and relative values satisfies a lexicographic monotonicity property, which implies that the algorithm terminates in finite time. We present numerical results for discrete games arising from a variant of Richman games and for pursuit games. Finally, we present new algebraic multigrid algorithms for solving particular singular linear systems, such as those that arise in the policy iteration algorithm for two-player zero-sum stochastic games with mean payoff described above. We also introduce a new method for computing invariant measures of irreducible Markov chains based on a stochastic control approach, and present an algorithm that combines Howard's policy iteration with algebraic multigrid iterations for singular linear systems.
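To see where the linear systems attacked by multigrid come from, here is a minimal sketch of the policy-iteration-plus-linear-solve structure in its one-player (Markov decision process) discounted special case. The thesis extends this scheme to two-player games and replaces the dense solver below by algebraic multigrid; this toy version only illustrates the structure.

```python
import numpy as np

def policy_iteration(P, r, gamma=0.95):
    """P[s][a]: next-state distribution (length-n array); r[s][a]: reward."""
    n = len(P)
    pol = [0] * n
    while True:
        # Policy evaluation: solve the linear system (I - gamma*P_pol) v = r_pol.
        P_pol = np.array([P[s][pol[s]] for s in range(n)])
        r_pol = np.array([r[s][pol[s]] for s in range(n)])
        v = np.linalg.solve(np.eye(n) - gamma * P_pol, r_pol)
        # Policy improvement: greedy actions with respect to the evaluated values.
        new_pol = [max(range(len(P[s])),
                       key=lambda a: r[s][a] + gamma * P[s][a] @ v)
                   for s in range(n)]
        if new_pol == pol:
            return v, pol
        pol = new_pol
```

Each outer iteration requires one solve of an n-by-n linear system; for the large state spaces the thesis targets, that inner solve dominates the cost, which motivates the algebraic multigrid machinery.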
503

Generalized unit commitment by the radar multiplier method

Beltran Royo, César 09 July 2001 (has links)
This operations research thesis should be situated in the field of the power generation industry. The general objective of this work is to efficiently solve the Generalized Unit Commitment (GUC) problem by means of specialized software. The GUC problem generalizes the Unit Commitment (UC) problem by simultaneously solving the associated Optimal Power Flow (OPF) problem. There are many approaches to solving the UC and OPF problems separately, but approaches to solving them jointly, i.e. to solving the GUC problem, are quite scarce. One of these GUC solving approaches is due to professors Batut and Renaud, whose methodology has been taken as a starting point for the methodology presented herein. This thesis report is structured as follows. Chapter 1 describes the state of the art of the UC and GUC problems. The formulations of the classical short-term power planning problems related to the GUC problem, namely the economic dispatch problem, the OPF problem, and the UC problem, are reviewed. Special attention is paid to the UC literature and to the traditional methods for solving the UC problem. In chapter 2 we extend the OPF model developed by professors Heredia and Nabona to obtain our GUC model. The variables used and the modelling of the thermal, hydraulic, and transmission systems are introduced, as is the objective function. Chapter 3 deals with the Variable Duplication (VD) method, which is used to decompose the GUC problem as an alternative to the Classical Lagrangian Relaxation (CLR) method. Furthermore, in chapter 3 the dual bounds provided by the VD method and by the CLR method are theoretically compared. Throughout chapters 4, 5, and 6 our solution methodology, the Radar Multiplier (RM) method, is designed and tested. Three independent matters are studied: first, the auxiliary problem principle method, used by Batut and Renaud to treat the inseparable augmented Lagrangian, is compared with the block coordinate descent method from both theoretical and practical points of view. Second, the Radar Subgradient (RS) method, a new Lagrange multiplier updating method, is proposed and computationally compared with the classical subgradient method. And third, we study the local character of the optimizers computed by the Augmented Lagrangian Relaxation (ALR) method when solving the GUC problem. A heuristic to improve the local ALR optimizers is designed and tested. Chapter 7 is devoted to our computational implementation of the RM method, the MACH code. First, the design of MACH is reviewed briefly and then its performance is tested by solving real-life large-scale UC and GUC instances. Solutions computed using our VD formulation of the GUC problem are only partially primal feasible, since they do not necessarily fulfill the spinning reserve constraints. In chapter 8 we study how to modify this GUC formulation with the aim of obtaining fully primal feasible solutions. A successful test based on a simple UC problem is reported. The conclusions, contributions of the thesis, and proposed further research can be found in chapter 9.
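The classical-subgradient baseline against which the Radar Subgradient method is compared can be sketched as follows: relax the system-wide demand constraint with a multiplier, let each unit solve its own commitment subproblem, and update the multiplier by a textbook subgradient step. The single-period unit model, data, and step-size rule below are invented for illustration; the thesis's RS/RM updates refine precisely this multiplier-update step.

```python
def unit_best_response(lmbda, p_min, p_max, cost_per_mw, commit_cost):
    """Minimize commitment plus generation cost minus lambda-weighted output."""
    # If committed, produce p_max whenever lambda exceeds marginal cost.
    p = p_max if lmbda >= cost_per_mw else p_min
    profit_on = (lmbda - cost_per_mw) * p - commit_cost
    return (p, True) if profit_on > 0 else (0.0, False)

def subgradient_dual(units, demand, steps=200):
    """Relax 'total output >= demand'; the subgradient is the unmet demand."""
    lmbda = 0.0
    for k in range(1, steps + 1):
        dispatch = [unit_best_response(lmbda, *u) for u in units]
        total = sum(p for p, _ in dispatch)
        lmbda = max(0.0, lmbda + (demand - total) / k)  # diminishing step
    return lmbda, dispatch

# Three units: (p_min, p_max, marginal cost, commitment cost).
units = [(50, 200, 20.0, 500.0), (30, 150, 25.0, 300.0), (20, 100, 40.0, 100.0)]
print(subgradient_dual(units, demand=320.0))
```

As the abstract notes for the VD formulation, dual solutions of this kind are typically only partially primal feasible; recovering feasibility is a separate step.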
504

Demand-Driven Type Inference with Subgoal Pruning

Spoon, Steven Alexander 29 August 2005 (has links)
Highly dynamic languages like Smalltalk do not have much static type information immediately available before the program runs. Static types can still be inferred by analysis tools, but historically, such analysis is only effective on smaller programs of at most a few tens of thousands of lines of code. This dissertation presents a new type inference algorithm, DDP, that is effective on larger programs with hundreds of thousands of lines of code. The approach of the algorithm borrows from the field of knowledge-based systems: it is a demand-driven algorithm that sometimes prunes subgoals. The algorithm is formally described, proven correct, and implemented. Experimental results show that the inferred types are usefully precise. A complete program understanding application, Chuck, has been developed that uses DDP type inferences. This work contributes the DDP algorithm itself, the most thorough semantics of Smalltalk to date, a new general approach for analysis algorithms, and experimental analysis of DDP including determination of useful parameter settings. It also contributes an implementation of DDP, a general analysis framework for Smalltalk, and a complete end-user application that uses DDP.
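A schematic illustration of the two ideas the abstract combines — answering a type query by recursively posing subgoals, and pruning subgoals when a budget is exhausted by returning a safely conservative answer — is sketched below. This is a toy model under invented assumptions (a flat variable-assignment "program"), not Spoon's actual DDP algorithm.

```python
TOP = frozenset({"Object"})  # conservative "any type" answer

def infer(var, assignments, budget):
    """assignments: var -> list of right-hand sides, each either a concrete
    type name (from a literal) or another variable it was assigned from."""
    if budget[0] <= 0:
        return TOP                     # prune: give up precision, stay sound
    budget[0] -= 1
    types = set()
    for rhs in assignments.get(var, []):
        if rhs in assignments:         # rhs is itself a variable: a subgoal
            types |= infer(rhs, assignments, budget)
        else:
            types.add(rhs)             # rhs is a literal's concrete type
    return frozenset(types) or TOP

# 'x := 1. y := x. z := y' style toy program:
prog = {"x": ["SmallInteger"], "y": ["x"], "z": ["y"]}
print(infer("z", prog, budget=[10]))   # frozenset({'SmallInteger'})
print(infer("z", prog, budget=[1]))    # pruned: frozenset({'Object'})
```

The point of pruning is the trade the dissertation measures: a pruned answer is less precise but always sound, which is what lets the analysis scale to hundreds of thousands of lines.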
505

Vector-Valued Markov Games

Piskuric, Mojca 16 April 2001 (has links) (PDF)
The subject of this thesis is vector-valued Markov games. Chapter 1 presents the idea that led to the development of the theory of general stochastic games. The work of Lloyd S. Shapley is outlined, and the most important authors and bibliography are stated, together with the motivation behind the research of vector-valued game-theoretic problems. Chapter 2 develops a rigorous mathematical model of vector-valued N-person Markov games. The corresponding definitions are stated, and the notation, as well as the notion of a strategy, is explained in detail. On the basis of these definitions, a probability measure is constructed in an appropriate probability space, which controls the stochastic game process. Furthermore, as in all models of stochastic control, a payoff is specified, in our case the expected discounted payoff. The principles of vector optimization are stated in Chapter 3, and the concept of optimality with respect to a convex cone is developed. This leads to the generalization of Nash equilibria from scalar- to vector-valued games, the so-called D-equilibria. Examples are provided to show that this definition really is a generalization of the existing definitions for scalar-valued games. For a given convex cone D, necessary and sufficient conditions are found for when a strategy is a D-equilibrium. Furthermore, it is shown that a D-equilibrium in stationary strategies exists, as one would expect from the known results of the theory of scalar-valued stochastic games. The main result of this chapter is a generalization of an existing result for 2-person vector-valued Markov games to N-person Markov games, namely that a D-equilibrium of an N-person Markov game is a subgradient of specially constructed support functions of the original payoff functions. To be able to develop solution procedures in the simplest case, the 2-person zero-sum case, Chapter 4 introduces the Denardo dynamic programming formalism. In the space of all p-dimensional functions we define a dynamic programming operator H_π to describe the solutions of Markov games. The first of the two main results in this chapter is the following: the expected overall payoff to player 1, f(π), for a fixed stationary strategy π, is the fixed point of the operator H_π. The second theorem then shows that this result is exactly the vector-valued generalization of the famous Shapley result. These theorems are fundamental for the subsequent development of two algorithms, successive approximations and the Hoffman-Karp algorithm. A numerical example for both algorithms is presented. Chapter 4 finishes with a discussion of other significant results and an outline of further research. The Appendix presents the main results from general game theory, most of which are used in developing both the theoretic and algorithmic parts of this thesis.
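For orientation, here is the scalar zero-sum special case that the chapter generalizes: Shapley's successive approximations, with the stage game at each state solved as a linear program. The data layout is an illustrative assumption; the thesis's vector-valued setting replaces this scalar fixed point with D-equilibria.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of the zero-sum matrix game A (payoff to the maximizing row
    player), via the standard LP formulation."""
    m, n = A.shape
    c = np.zeros(m + 1); c[-1] = -1.0             # variables: (x, v); min -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v <= sum_i A[i,j] x_i, all j
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=np.ones(1),
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[-1]

def shapley_iteration(r, P, gamma=0.9, tol=1e-8):
    """Successive approximations: r[s] is the payoff matrix at state s;
    P[s][i][j] is the next-state distribution for action pair (i, j)."""
    n = len(r)
    v = np.zeros(n)
    while True:
        v_new = np.array([matrix_game_value(
            r[s] + gamma * np.tensordot(P[s], v, axes=([2], [0])))
            for s in range(n)])
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
```

Because the operator is a gamma-contraction, this iteration converges from any starting point, which is what makes the fixed-point characterization quoted above algorithmically useful.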
506

Detection of crack-like indications in digital radiography by global optimisation of a probabilistic estimation function

Alekseychuk, Oleksandr 01 August 2006 (has links) (PDF)
A new algorithm for the detection of longitudinal crack-like indications in radiographic images is developed in this work. Conventional local detection techniques give unsatisfactory results for this task due to the low signal-to-noise ratio (SNR ~ 1) of crack-like indications in radiographic images. Using global features of crack-like indications provides the necessary noise resistance, but it entails prohibitive computational complexity of detection and difficulties in a formal description of the indication shape. Conventionally, the excessive computational complexity of the solution is reduced by heuristics. The heuristics used are selected on a trial-and-error basis, are problem dependent, and do not guarantee the optimal solution. A distinctive feature of the algorithm developed here is that it does not follow this path. Instead, a global characteristic of a crack-like indication (the estimation function) is used, whose maximum in the space of all possible positions, lengths, and shapes can be found exactly, i.e. without any heuristics. The proposed estimation function is defined as the sum of a posteriori information gains about the hypothesis of indication presence at each point along the whole hypothetical indication. The gain in information about the hypothesis of indication presence results from the analysis of the underlying image in the local area. Such an estimation function is theoretically justified and exhibits the desired behaviour on changing signals. The developed algorithm is implemented in the C++ programming language and tested on synthetic as well as real images. It delivers good results (a high correct-detection rate at a given false-alarm rate) that are comparable to the performance of trained human inspectors.
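A drastically simplified sketch of the exact global search described above: if each pixel carries a local information-gain score for the crack hypothesis, a longitudinal indication corresponds to the 8-connected left-to-right path maximizing the summed score, and that maximum is found exactly by dynamic programming, with no heuristics. The thesis's estimation function and search space are richer; the per-pixel scores here are stand-ins.

```python
import numpy as np

def best_path(score):
    """score: (rows, cols) array of per-pixel information gains.
    Returns the maximal summed gain and the row index per column of the
    8-connected left-to-right path achieving it."""
    rows, cols = score.shape
    acc = score.astype(float).copy()          # best total gain ending at (r, c)
    back = np.zeros((rows, cols), dtype=int)  # predecessor rows for backtracking
    for c in range(1, cols):
        for r in range(rows):
            cand = [(acc[rr, c - 1], rr)
                    for rr in (r - 1, r, r + 1) if 0 <= rr < rows]
            best_val, best_r = max(cand)
            acc[r, c] = score[r, c] + best_val
            back[r, c] = best_r
    end_r = int(np.argmax(acc[:, -1]))
    value = float(acc[end_r, -1])
    path, r = [end_r], end_r
    for c in range(cols - 1, 0, -1):
        r = int(back[r, c])
        path.append(r)
    return value, path[::-1]
```

The cost of this exact search is linear in the number of pixels times the branching factor, which is what makes the no-heuristics approach computationally feasible.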
507

Applications of Differential Games in Marketing Channels: Cooperative Advertising

余俊慶, Yu, Chung-Ching Unknown Date (has links)
With rapid economic development, competition and product diversification force firms to consider marketing strategy in addition to price. Theoretical models of interaction among marketing channel members have evolved from static models to dynamic differential-game models of inter-firm interaction. The prior literature reaches two conclusions: first, channel cooperation is Pareto optimal; second, when the channel cannot cooperate, establishing a cooperation mechanism yields a Pareto improvement in the equilibrium outcome. However, the literature has not examined whether channel cooperation remains Pareto optimal once it is placed in a cooperative advertising model. This study therefore adopts the cooperative advertising model setup of Jørgensen et al. (2003), incorporates channel cooperation into the model, compares the equilibrium marketing strategies and firm profits under four scenarios (channel cooperation, a far-sighted retailer, a myopic retailer, and cooperative advertising), and examines whether channel cooperation is still Pareto optimal within the cooperative advertising model.
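For readers unfamiliar with this model class, a generic Nerlove-Arrow-type goodwill specification of the kind used in the cooperative advertising literature is sketched below; all symbols are illustrative and the exact Jørgensen et al. (2003) formulation may differ.

```latex
% Brand goodwill driven by retailer (a_R) and manufacturer (a_M) advertising:
\dot{G}(t) = \lambda\, a_R(t) + \mu\, a_M(t) - \delta\, G(t), \qquad G(0) = G_0.
% Cooperative advertising: the manufacturer reimburses a share
% \phi(t) \in [0,1] of the retailer's advertising cost. The four scenarios
% compared in the thesis differ in which discounted profit functionals over
% the infinite horizon are maximized jointly (cooperation) or in equilibrium
% (far-sighted retailer, myopic retailer, cooperative advertising).
```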
508

OVSF Code Assignment Based QoS in WCDMA

林淑瑩 Unknown Date (has links)
WCDMA is a wideband Direct-Sequence Code Division Multiple Access (DS-CDMA) system that uses Orthogonal Variable Spreading Factor (OVSF) codes to support diverse data transmission rates. OVSF codes can provide variable bit rates and quality of service (QoS) guarantees to meet the requirements of different multimedia applications. In this research, we group traffic and assign matching OVSF codes to handle the resource allocation problem, so that every user receives a satisfactory quality of service. To provide differentiated service classes while meeting QoS requirements and avoiding bandwidth waste, we propose a dynamic group allocation method: from all requested services, suitable traffic is selected in order of service priority and placed in the same group. The system assigns one OVSF code to each group, and transmission within a group is based on a time-sharing mechanism, which supports diverse data rates, reduces code blocking, and raises system bandwidth utilization. Simulations show that the proposed method provides diverse data transmission rates, effectively reduces the code blocking rate, increases bandwidth and system utilization, and meets QoS requirements.
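The structural constraint behind "code blocking" can be sketched directly: OVSF codes form a binary tree (the spreading factor doubles per level), and a code may be assigned only if none of its ancestors or descendants is already assigned, which keeps concurrently used codes orthogonal. The class below is a minimal sketch under standard heap indexing (node 1 is the root with SF 1, node k has children 2k and 2k+1), not the thesis's grouping algorithm.

```python
class OVSFTree:
    def __init__(self, max_sf=256):
        self.max_sf = max_sf
        self.used = set()

    @staticmethod
    def _is_descendant(u, node):
        while u > node:       # walk u up toward the root
            u //= 2
        return u == node

    def _blocked(self, node):
        n = node
        while n >= 1:          # any assigned ancestor (or the node itself)?
            if n in self.used:
                return True
            n //= 2
        # any assigned descendant?
        return any(self._is_descendant(u, node) for u in self.used)

    def assign(self, sf):
        """Assign any free code of spreading factor sf; None if all blocked."""
        for node in range(sf, 2 * sf):   # level 'sf' occupies nodes [sf, 2*sf)
            if not self._blocked(node):
                self.used.add(node)
                return node
        return None

tree = OVSFTree()
print(tree.assign(4))   # e.g. node 4: SF 4, roughly rate R/4
print(tree.assign(8))   # a node neither above nor below node 4
```

Code blocking is exactly the situation where enough capacity remains in aggregate but every code of the requested SF is blocked by scattered assignments, which is what the proposed grouping and time-sharing scheme mitigates.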
509

Automated Planning and Scheduling for Industrial Construction Processes

Hu, Di Unknown Date
No description available.
510

Graph Theory and Dynamic Programming Framework for Automated Segmentation of Ophthalmic Imaging Biomarkers

Chiu, Stephanie Ja-Yi January 2014 (has links)
Accurate quantification of anatomical and pathological structures in the eye is crucial for the study and diagnosis of potentially blinding diseases. Earlier and faster detection of ophthalmic imaging biomarkers also leads to optimal treatment and improved vision recovery. While modern optical imaging technologies such as optical coherence tomography (OCT) and adaptive optics (AO) have facilitated in vivo visualization of the eye at the cellular scale, the massive influx of data generated by these systems is often too large to be fully analyzed by ophthalmic experts without extensive time or resources. Furthermore, manual evaluation of images is inherently subjective and prone to human error.

This dissertation describes the development and validation of a framework called graph theory and dynamic programming (GTDP) to automatically detect and quantify ophthalmic imaging biomarkers. The GTDP framework was validated as an accurate technique for segmenting retinal layers on OCT images. The framework was then extended through the development of the quasi-polar transform to segment closed-contour structures, including photoreceptors on AO scanning laser ophthalmoscopy images and retinal pigment epithelial cells on confocal microscopy images.

The GTDP framework was next applied in a clinical setting with pathologic images that are often lower in quality. Algorithms were developed to delineate morphological structures on OCT indicative of diseases such as age-related macular degeneration (AMD) and diabetic macular edema (DME). The AMD algorithm was shown to be robust to poor image quality and was capable of segmenting both drusen and geographic atrophy. To account for the complex manifestations of DME, a novel kernel regression-based classification framework was developed to identify retinal layers and fluid-filled regions as a guide for GTDP segmentation.

The development of fast and accurate segmentation algorithms based on the GTDP framework has significantly reduced the time and resources necessary to conduct large-scale, multi-center clinical trials. This is one step closer towards the long-term goal of improving vision outcomes for ocular disease patients through personalized therapy. / Dissertation
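The core GTDP step for layer segmentation, as commonly described for OCT, can be sketched as follows: treat pixels as graph nodes, weight edges so that likely layer boundaries are cheap (here, high vertical gradient means low cost), pad two zero-weight columns so the cut may start and end at any row, and take the shortest left-to-right path. The weight formula, connectivity, and normalization below are illustrative assumptions, not the dissertation's exact choices.

```python
import heapq
import numpy as np

def segment_layer_boundary(image):
    """Return one boundary row per image column via a shortest graph cut."""
    g = np.abs(np.gradient(image.astype(float), axis=0))
    cost = 1.0 - g / (np.ptp(g) + 1e-12) + 1e-5   # strong gradient -> cheap
    z = np.zeros((cost.shape[0], 1))
    cost = np.hstack([z, cost, z])                # zero-weight start/end columns
    rows, cols = cost.shape
    dist = np.full((rows, cols), np.inf)
    prev = {}
    dist[:, 0] = 0.0
    heap = [(0.0, r, 0) for r in range(rows)]
    heapq.heapify(heap)
    while heap:                                   # Dijkstra on the pixel graph
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue
        for dr, dc in ((-1, 1), (0, 1), (1, 1), (-1, 0), (1, 0)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(heap, (dist[nr, nc], nr, nc))
    node = (int(np.argmin(dist[:, -1])), cols - 1)
    path = []
    while node in prev:                           # backtrack to the start column
        path.append(node)
        node = prev[node]
    boundary = {c - 1: r for r, c in path if 0 < c < cols - 1}
    return [boundary[c] for c in sorted(boundary)]
```

The zero-weight padding is the trick that removes any need for manual endpoint initialization, one of the automation points the dissertation emphasizes.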