231

Development and Implementation of Stop and Go Operating Strategies in a Test Vehicle

Johansson, Ann-Catrin January 2005 (has links)
The department REI/EP at DaimlerChrysler Research and Technology and the Laboratory for Efficient Energy Systems at Trier University of Applied Science are developing control functions and fuel-optimal strategies for low-speed conditions. The goal of this thesis project was to further develop the fuel-optimal operating strategies and implement them in a test vehicle equipped with a dSPACE environment. This was accomplished by generating optimal reference signals using dynamic programming. Optimal, in this case, means signals that result in low fuel consumption, comfortable driving, and a proper distance to the preceding vehicle. These reference signals for velocity and distance are used by a model predictive control (MPC) controller to control the car. In every situation a suitable reference path is chosen, depending on the velocities of both vehicles and the distance between them. The controller was able to follow another vehicle properly: the distance was kept, the driving was pleasant, and it also appears possible to save fuel. When some deviations in distance to the preceding car are accepted, a fuel reduction of 8% compared to the car in front can be achieved.
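A minimal sketch of the dynamic-programming step is given below, assuming a made-up fuel/comfort cost model and discretized velocities; it omits the distance-keeping term and the MPC tracking layer, and is not the DaimlerChrysler implementation described in the abstract.

```python
# Illustrative only: generate a reference velocity profile by backward DP over
# a velocity grid, using a crude fuel proxy and an assumed comfort limit.
import numpy as np

def reference_profile(v0, horizon=20, dt=0.5):
    v_grid = np.linspace(0.0, 15.0, 31)            # candidate velocities [m/s]
    n = len(v_grid)
    cost_to_go = np.zeros(n)                       # terminal cost
    policies = []
    for _ in range(horizon):                       # backward recursion
        stage = np.full((n, n), np.inf)
        for i, v in enumerate(v_grid):
            for j, v_next in enumerate(v_grid):
                accel = (v_next - v) / dt
                if abs(accel) > 2.0:               # assumed comfort limit [m/s^2]
                    continue
                fuel = 0.05 * v_next + 0.2 * max(accel, 0.0) ** 2   # crude proxy
                comfort = 0.1 * accel ** 2
                stage[i, j] = fuel + comfort + cost_to_go[j]
        policies.append(np.argmin(stage, axis=1))
        cost_to_go = np.min(stage, axis=1)
    # roll the optimal policy forward from the initial velocity
    idx = int(np.argmin(np.abs(v_grid - v0)))
    profile = []
    for policy in reversed(policies):              # policies were built backwards
        idx = int(policy[idx])
        profile.append(float(v_grid[idx]))
    return profile

print(reference_profile(v0=5.0)[:5])
```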
232

The Use of Positioning Systems for Look-Ahead Control in Vehicles / Användning av positioneringssystem för prediktiv reglering av fordon

Gustafsson, Niklas January 2006 (has links)
The use of positioning systems in vehicles is a research-intensive field. In the first part of this thesis, a survey of patent documents maps how positioning systems can support adaptive cruise control, gear-changing systems, and engine control, revealing a growing number of new applications. Many ideas are presented, explained, and evaluated. Furthermore, a new method for selective catalytic reduction (SCR) control using a positioning system is introduced. It is concluded that look-ahead control, where the vehicle's position in relation to the upcoming road section is utilized, could give better fuel efficiency, lower emissions, and less brake, transmission, and engine wear. In the second part of this thesis, a real-time test platform for predictive speed control algorithms has been developed and tested in a real truck. Previously such algorithms could only be simulated. In this thesis an algorithm which utilizes model predictive control (MPC) and dynamic programming (DP) has been implemented and evaluated. An initial comparative fuel test shows a reduction in fuel consumption when the MPC algorithm is used.
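The following sketch illustrates only the dynamic-programming part of look-ahead speed planning over a road profile with known slopes; the resistance and fuel terms are invented placeholders, and neither the MPC layer nor the thesis's truck model is represented.

```python
# Illustrative only: choose a speed per road segment by backward DP, trading a
# toy fuel cost against travel time, given assumed slope data.
import math

def lookahead_speeds(slopes, v_min=15.0, v_max=25.0, n_speeds=11, seg_len=100.0):
    speeds = [v_min + k * (v_max - v_min) / (n_speeds - 1) for k in range(n_speeds)]
    cost = [0.0] * n_speeds                        # cost-to-go beyond the horizon
    plan = []
    for slope in reversed(slopes):                 # backward over segments
        new_cost, choice = [], []
        for v in speeds:
            best, best_j = math.inf, 0
            for j, v_next in enumerate(speeds):
                a = (v_next**2 - v**2) / (2 * seg_len)           # accel over segment
                force = 1000.0 * (a + 9.81 * slope) + 0.5 * v_next**2  # toy resistance
                fuel = max(force, 0.0) * seg_len * 1e-4          # no fuel when braking
                time_pen = 0.5 * seg_len / v_next                # time/fuel trade-off
                c = fuel + time_pen + cost[j]
                if c < best:
                    best, best_j = c, j
            new_cost.append(best)
            choice.append(best_j)
        cost = new_cost
        plan.append(choice)
    plan.reverse()                                 # forward order
    idx, profile = n_speeds // 2, []
    for choice in plan:                            # roll the plan forward
        idx = choice[idx]
        profile.append(speeds[idx])
    return profile

print(lookahead_speeds([0.0, 0.02, 0.03, -0.01, -0.03, 0.0]))
```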
233

Neural networks, stochastic dynamic programming and a heuristic for valuing flexible manufacturing systems

Feurstein, Markus, Natter, Martin January 1998 (has links) (PDF)
We compare the use of stochastic dynamic programming (SDP), neural networks, and a simple approximation rule for calculating the real option value of a flexible production system. While SDP yields the best solution to the problem, it is computationally prohibitive for larger settings. We test two approximations of the value function and show that the results are comparable to those obtained via SDP. These methods have the advantages of high computational performance and of placing no restrictions on the type of process used. Our approach is not only useful for supporting large investment decisions; it can also be applied to routine decisions such as determining the production program when stochastic profit margins occur. (author's abstract) / Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
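As a hedged illustration of the SDP idea (with invented numbers, not the paper's production model), the backward induction below values the option to idle a plant when the per-period profit margin follows a two-state Markov chain:

```python
# Illustrative only: value of flexibility = value with an idle option minus the
# value of a plant that must always produce.
import numpy as np

margins = np.array([-1.0, 3.0])                  # per-period profit in low/high state
P = np.array([[0.7, 0.3],                        # Markov transition matrix
              [0.4, 0.6]])
beta, T = 0.95, 12                               # discount factor, horizon

V_flex = np.zeros(2)                             # plant that may idle each period
V_rigid = np.zeros(2)                            # plant that must always produce
for _ in range(T):                               # backward induction
    cont = beta * P @ V_flex
    V_flex = np.maximum(margins + cont, cont)    # produce vs. idle
    V_rigid = margins + beta * P @ V_rigid

print("value of flexibility per state:", V_flex - V_rigid)
```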
234

Subseries Join and Compression of Time Series Data Based on Non-uniform Segmentation

Lin, Yi January 2008 (has links)
A time series is composed of a sequence of data items that are measured at uniform intervals. Many application areas generate or manipulate time series, including finance, medicine, digital audio, and motion capture. Efficiently searching a large time series database is still a challenging problem, especially when partial or subseries matches are needed. This thesis proposes a new definition of subseries join, a symmetric generalization of subseries matching, which finds similar subseries in two or more time series datasets. A solution is proposed to compute the subseries join based on a hierarchical feature representation, generated by an anisotropic diffusion scale-space analysis and a non-uniform segmentation method. Each segment is represented by a minimal polynomial envelope in a reduced-dimensionality space. Based on this hierarchical feature representation, all features in a dataset are indexed in an R-tree, and candidate matching features of two datasets are found by an R-tree join operation. Given the candidate matching features, a dynamic programming algorithm is developed to compute the final subseries join. To improve storage efficiency, a hierarchical compression scheme is proposed to compress the features: the minimal polynomial envelope representation is transformed into a Bezier spline envelope representation, the control points of each Bezier spline are hierarchically differenced, and arithmetic coding is used to compress these differences. To empirically evaluate their effectiveness, the proposed subseries join and compression techniques are tested on various publicly available datasets. A large motion capture database is also used to verify the techniques in a real-world application. The experiments show that the proposed subseries join technique tolerates noise and local scaling better than previous work, and the proposed compression technique achieves about 85% higher compression rates than previous work at the same distortion error.
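For intuition only, the toy dynamic program below finds the best-scoring pair of matching subseries by local alignment over raw samples; the thesis instead works on hierarchical features and R-tree joins, so this is a simplified stand-in, not its algorithm.

```python
# Illustrative only: Smith-Waterman-style local alignment of two numeric series.
def best_matching_subseries(x, y, gap=1.0, tol=0.5):
    n, m = len(x), len(y)
    # score[i][j]: best score of a match ending at x[i-1], y[j-1] (0 = restart)
    score = [[0.0] * (m + 1) for _ in range(n + 1)]
    best, best_pos = 0.0, (0, 0)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sim = tol - abs(x[i - 1] - y[j - 1])          # > 0 when samples are close
            s = max(0.0,
                    score[i - 1][j - 1] + sim,            # extend the match
                    score[i - 1][j] - gap,                # skip a sample of x
                    score[i][j - 1] - gap)                # skip a sample of y
            score[i][j] = s
            if s > best:
                best, best_pos = s, (i, j)
    return best, best_pos                                 # score and end positions

x = [0, 1, 2, 3, 2, 1, 0, 0]
y = [5, 5, 1, 2, 3, 2, 5]
print(best_matching_subseries(x, y))
```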
235

The application of the in-tree knapsack problem to routing prefix caches

Nicholson, Patrick 24 April 2009 (has links)
Modern routers use specialized hardware, such as Ternary Content Addressable Memory (TCAM), to solve the Longest Prefix Matching Problem (LPMP) quickly. Because TCAM is a non-standard, inherently parallel type of memory, there are concerns about its cost and power consumption. This problem is exacerbated by the growth in routing tables, which demands ever larger TCAMs. To reduce the size of the TCAMs in a distributed forwarding environment, a batch caching model is proposed and analyzed. In this model, the problem of determining which routing prefixes to store in the TCAMs reduces to the In-tree Knapsack Problem (ITKP) for unit-weight vertices. Several algorithms are analyzed for solving the ITKP, both in the general case and when the problem is restricted to unit-weight vertices. Additionally, a variant problem is proposed and analyzed, which exploits the caching model to provide better solutions. The thesis concludes with a discussion of open problems and future experimental work.
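A sketch of a tree-knapsack dynamic program for unit-weight vertices is shown below, under one common formulation in which a selected vertex requires its parent to be selected; the exact ITKP variant and caching constraints used in the thesis may differ, so this is illustrative only.

```python
# Illustrative only: pick at most k vertices of a rooted tree (closed under
# parents) maximizing total profit, all weights equal to one.
def tree_knapsack(children, profit, root, k):
    NEG = float("-inf")

    def solve(v):
        # best[c]: max profit in v's subtree using at most c vertices,
        # given that v itself is selected
        best = [NEG] * (k + 1)
        for c in range(1, k + 1):
            best[c] = profit[v]
        for ch in children.get(v, []):
            child = solve(ch)
            merged = best[:]
            for used in range(1, k + 1):
                if best[used] == NEG:
                    continue
                for extra in range(1, k + 1 - used):
                    if child[extra] == NEG:
                        continue
                    cand = best[used] + child[extra]
                    if cand > merged[used + extra]:
                        merged[used + extra] = cand
            best = merged
        return best

    # the empty selection (profit 0) is always feasible
    return max([0.0] + [x for x in solve(root) if x != NEG])

# toy prefix tree: vertex 0 is the root, unit weights, cache budget k = 3
children = {0: [1, 2], 1: [3, 4]}
profit = {0: 1.0, 1: 5.0, 2: 2.0, 3: 4.0, 4: 1.0}
print(tree_knapsack(children, profit, root=0, k=3))   # -> 10.0 (vertices 0, 1, 3)
```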
236

A Study on Architecture, Algorithms, and Applications of Approximate Dynamic Programming Based Approach to Optimal Control

Lee, Jong Min 12 July 2004 (has links)
This thesis develops approximate dynamic programming (ADP) strategies suitable for process control problems, aimed at overcoming the limitations of MPC: the potentially exorbitant on-line computational requirement and the inability to consider the future interplay between uncertainty and estimation in the optimal control calculation. The suggested approach solves the DP only for the state points visited by closed-loop simulations with judiciously chosen control policies. This helps combat the well-known "curse of dimensionality" of traditional DP while allowing the user to derive an improved control policy from the initial ones. The critical issue in the suggested method is the proper choice and design of the function approximator. A local averager with a penalty term is proposed to guarantee a stably learned control policy as well as acceptable on-line performance. The thesis also demonstrates the versatility of the proposed ADP strategy on difficult process control problems. First, a stochastic adaptive control problem is presented; here an ADP-based control policy shows an "active" probing property that reduces uncertainties, leading to better control performance. The second example is a dual-mode controller, a supervisory scheme that actively prevents the progression of abnormal situations under a local controller at their onset. Finally, two ADP strategies for controlling nonlinear processes based on input-output data are suggested, one model-based and one model-free; both conveniently incorporate knowledge of the identification-data distribution into the control calculation, improving performance.
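The snippet below sketches the general ADP pattern on a toy one-dimensional problem: value iteration restricted to sampled states, with a k-nearest-neighbour local averager as the value function approximator. The dynamics, cost, and parameters are invented; the penalty term and the process-control applications from the thesis are not modeled.

```python
# Illustrative only: approximate value iteration over sampled states.
import numpy as np

rng = np.random.default_rng(0)
states = rng.uniform(-2.0, 2.0, size=200)        # states visited by simulations
actions = np.linspace(-1.0, 1.0, 5)
gamma = 0.9

def step(x, u):                                  # toy dynamics and stage cost
    return 0.8 * x + u, x**2 + 0.1 * u**2

def knn_value(x, xs, vs, k=5):                   # local averager approximation
    idx = np.argsort(np.abs(xs - x))[:k]
    return vs[idx].mean()

V = np.zeros_like(states)
for _ in range(30):                              # approximate value iteration
    V_new = np.empty_like(V)
    for i, x in enumerate(states):
        q = []
        for u in actions:
            x_next, cost = step(x, u)
            q.append(cost + gamma * knn_value(x_next, states, V))
        V_new[i] = min(q)
    V = V_new

print("approx. value near the origin:", knn_value(0.0, states, V))
```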
237

Strategic Network Growth with Recruitment Model

Wongthatsanekorn, Wuthichai 10 April 2006 (has links)
In order to achieve stable and sustainable systems for recycling post-consumer goods, it is frequently necessary to concentrate the flows from many collection points to meet the volume requirements of the recycler. This motivates the importance of growing the collection network over time to both meet volume targets and keep costs to a minimum. This research addresses a complex and interconnected set of strategic and tactical decisions that guide the growth of reverse supply chain networks over time. The dissertation has two major components: a tactical recruitment model and a strategic investment model. These capture the two major decision levels of the system: the former for the regional collector, who is responsible for recruiting material sources to the network; the latter for the processor, who needs to allocate scarce resources over time and across regions to make the recruitment effective. The recruitment model is posed as a stochastic dynamic programming problem. An exact method and two heuristics are developed to solve this problem, and a numerical study of the solution approaches is performed. The second component involves a key set of decisions on how to allocate resources effectively to grow the network so that long-term collection targets and collection cost constraints are met. The recruitment problem appears as a sub-problem of the strategic model, which leads to a multi-time-scale Markov decision problem. A heuristic approach that decomposes the strategic problem is proposed to solve realistically sized problems. The heuristic approach is then evaluated numerically on small and realistically sized problems.
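As a rough illustration of a recruitment-style stochastic dynamic program (with invented effort levels, probabilities, and costs, not the dissertation's model), backward induction over the number of recruited sources might look like this:

```python
# Illustrative only: each period the collector chooses a recruitment effort;
# the number of newly recruited sources is random, and the goal is to reach a
# volume target at minimum expected cost.
import numpy as np

T, target = 6, 10                                 # periods, sources needed
efforts = [0, 1, 2, 3]                            # recruitment effort levels
cost_of = {0: 0.0, 1: 1.0, 2: 1.8, 3: 2.5}
# probability of recruiting 0, 1, or 2 sources in a period, per effort level
p = {0: [1.0, 0.0, 0.0], 1: [0.5, 0.5, 0.0],
     2: [0.3, 0.5, 0.2], 3: [0.1, 0.5, 0.4]}
penalty = 5.0                                     # per missing source at the end

V = np.array([penalty * max(target - s, 0) for s in range(target + 1)])
for _ in range(T):                                # backward induction
    V_new = np.empty_like(V)
    for s in range(target + 1):
        best = np.inf
        for e in efforts:
            exp_future = sum(p[e][d] * V[min(s + d, target)] for d in range(3))
            best = min(best, cost_of[e] + exp_future)
        V_new[s] = best
    V = V_new

print("expected cost starting with no sources:", V[0])
```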
238

Efficient Algorithms for the Block Edit Distance and Related Problems

Ann, Hsing-Yen 18 May 2010 (has links)
Computing the similarity of two strings or sequences is one of the most fundamental problems in computer science, and it has been widely studied for several decades. In the last decade it has attracted researchers' attention again because of improvements in hardware computing power and the huge amounts of data arising in biotechnology. In this dissertation, we focus on computing the edit distance between two sequences when block-edit operations are allowed in addition to character-edit operations. Previous research shows that this problem is NP-hard if recursive block moves are allowed. Since we are interested in solving editing problems with polynomial-time optimization algorithms, we consider simplified versions of the edit distance problem. We first focus on the longest common subsequence (LCS) of run-length encoded (RLE) strings, where runs can be seen as a class of simplified blocks. Then we apply constraints to the problem, i.e., we find the constrained LCS (CLCS) of RLE strings. Besides, we show that problems involving block-edit operations can still be solved by polynomial-time optimization algorithms if some restrictions are applied. Let X and Y be two sequences of lengths n and m, respectively. Also, let N and M be the numbers of runs in the corresponding RLE forms of X and Y, respectively. In this dissertation, first, we propose a simple algorithm for computing the LCS of X and Y in O(NM + min{p_1, p_2}) time, where p_1 and p_2 denote the numbers of elements in the bottom and right boundaries of the matched blocks, respectively. This new algorithm improves the previously known time bound O(min{nM, Nm}) and outperforms the time bounds O(NM log NM) or O((N+M+q) log (N+M+q)) for some cases, where q denotes the number of matched blocks. Next, we give an efficient algorithm for solving the CLCS problem, which is to find a common subsequence Z of X and Y such that a given constrained sequence P is a subsequence of Z and the length of Z is maximized. Suppose X, Y, and P are all in RLE format, and the lengths of X, Y, and P are n, m, and r, respectively. Let N, M, and R be the numbers of runs in X, Y, and P, respectively. We show that with RLE, the CLCS problem can be solved in O(NMr + min{q_1 r + q_4, q_2 r + q_5}) time, where q_1 and q_2 denote the numbers of elements in the south and east boundaries of the partially matched blocks on the first layer, respectively, and q_4 and q_5 denote the numbers of elements of the west and north pillars in the bottom boundaries of all fully matched cuboids in the DP lattice, respectively. When the input strings have good compression ratios, our work clearly outperforms the previously known DP algorithms and the Hunt-Szymanski-like algorithms. Finally, we consider variations of the block edit distance problem that involve character insertions, character deletions, block copies, and block deletions for two given sequences X and Y. Three variations are defined with different measuring functions: P(EIS, C), P(EI, L), and P(EI, N). We show that, with some preprocessing, the minimum block edit distances of these three variations can be obtained by dynamic programming in O(nm), O(nm log m), and O(nm^2) time, respectively, where n and m are the lengths of X and Y.
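For reference, the classic O(nm) LCS dynamic program on uncompressed strings is sketched below; it is the baseline that the dissertation's RLE-based O(NM + min{p_1, p_2}) algorithm improves upon, not the RLE algorithm itself.

```python
# Illustrative only: textbook LCS DP over plain (uncompressed) strings.
def lcs_length(x, y):
    n, m = len(x), len(y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1       # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

# RLE inputs would first be expanded here; the dissertation instead works on
# the runs directly, avoiding the expansion.
print(lcs_length("aaabbbccc", "aabcc"))   # -> 5
```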
239

Implementation of Disparity Estimation Using Stereo Matching

Wang, Ying-Chung 08 August 2011 (has links)
General 3D stereo vision is composed of two major phases. In the first phase, an image and its corresponding depth map are generated using stereo matching. In the second phase, depth-image-based rendering (DIBR) is employed to generate images from different view angles. Stereo matching, a computation-intensive operation, generates depth maps from two images captured at two different view positions. In this thesis, we present hardware designs of three different stereo matching methods: pixel-based, window-based, and dynamic programming (DP)-based. The pixel-based and window-based methods belong to the local optimization class of stereo matching methods, while DP, one of the global optimization methods, consists of three main processing steps: matching cost computation, cost aggregation, and back-tracing. Hardware implementation of DP-based stereo matching usually requires a large memory space to store intermediate results, leading to a large area cost. In this thesis, we propose a tile-based DP method that partitions the original image into smaller tiles so that processing each tile requires a smaller memory.
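A minimal window-based (sum of absolute differences) disparity sketch for rectified grayscale pairs is shown below; the window size and disparity range are illustrative, and the DP-based and tile-based hardware methods from the thesis are not represented.

```python
# Illustrative only: software SAD block matching for disparity estimation.
import numpy as np

def disparity_sad(left, right, max_disp=16, win=3):
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            best_cost, best_d = np.inf, 0
            patch_l = left[y - half:y + half + 1, x - half:x + half + 1]
            for d in range(0, min(max_disp, x - half) + 1):
                patch_r = right[y - half:y + half + 1,
                                x - d - half:x - d + half + 1]
                cost = np.abs(patch_l.astype(int) - patch_r.astype(int)).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

left = np.random.randint(0, 255, (32, 48), dtype=np.uint8)
right = np.roll(left, -4, axis=1)                 # synthetic 4-pixel shift
print(disparity_sad(left, right)[16, 24])         # expect a value near 4
```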
240

Dynamic Programming Approach to Price American Options

Yeh, Yun-Hsuan 06 July 2012 (has links)
We propose a dynamic programming (DP) approach for pricing American options over a finite time horizon. We model the uncertainty in the stock price as geometric Brownian motion (GBM) and keep the interest rate and volatility fixed. A procedure based on dynamic programming combined with a piecewise linear interpolation approximation is developed to price the options, and the free boundary problem is introduced into our model. Numerical experiments illustrate the relation between the option value and volatility.
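As a simple stand-in for the DP-plus-interpolation procedure (not the thesis's exact method), a standard binomial-lattice backward induction for an American put under GBM-consistent dynamics looks like this:

```python
# Illustrative only: CRR binomial lattice with early-exercise comparison.
import math

def american_put_binomial(S0, K, r, sigma, T, steps=200):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))           # CRR up factor
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)          # risk-neutral up probability
    disc = math.exp(-r * dt)
    # option values at maturity
    values = [max(K - S0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]
    for n in range(steps - 1, -1, -1):            # backward induction
        for j in range(n + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = max(K - S0 * u**j * d**(n - j), 0.0)
            values[j] = max(cont, exercise)       # early exercise (free boundary)
    return values[0]

print(american_put_binomial(S0=100, K=100, r=0.05, sigma=0.2, T=1.0))
```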
