251 |
An Experimental Study of Distance Sensitivity OraclesWilliams, Vincent Troy 26 October 2010 (has links)
The paper "A Nearly Optimal Oracle for Avoiding Failed Vertices and Edges" by
Aaron Bernstein and David Karger lays out a nearly optimal algorithm for finding the
shortest distances and paths between vertices with any given single failure in constant
time without reconstructing the oracle. Using their paper as a guideline, we have
implemented their algorithm in C++ and recorded each step in this thesis. Each step
has its own pseudo-code and its own analysis to prove that the entire oracle construction
stays within the running time and total space bounds stated by the authors. The
efficiency of the algorithm is compared against the brute-force method's total
running time and total space. Using multiple test cases with an increasing
number of vertices and edges, we have experimentally validated that their algorithm
holds true to their statements of space, running time, and query time.
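The brute-force baseline that the thesis measures against can be sketched as a full shortest-path recomputation per query: rerun Dijkstra on the graph with the failed vertex removed. A minimal Python sketch (the graph layout and weights below are illustrative, not from the thesis):

```python
import heapq

def dijkstra(adj, src, banned=None):
    """Shortest distances from src, skipping an optional banned vertex."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")) or u == banned:
            continue
        for v, w in adj.get(u, []):
            if v == banned:
                continue  # edges into the failed vertex are unusable
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def brute_force_query(adj, s, t, failed_vertex):
    """Distance from s to t avoiding failed_vertex, by full recomputation."""
    return dijkstra(adj, s, banned=failed_vertex).get(t, float("inf"))
```

An oracle answers the same query in constant time after preprocessing, whereas this baseline pays a full Dijkstra run per failure, which is exactly the gap the experiments quantify.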
|
252 |
Evaluation of basis functions for generating approximate linear programming (ALP) average cost solutions and policies for multiclass queueing networksGurfein, Kate Elizabeth 16 August 2012 (has links)
The average cost of operating a queueing network depends on several factors, such as the complexity of the network and the service policy used. Approximate linear programming (ALP) is a method that can be used to compute an accurate lower bound on the optimal average cost as well as to generate policies for operating the network. These average cost solutions and policies depend on the type of basis function used in the ALP. In this paper, the ALP average cost solutions and policies are analyzed for twelve networks with four different types of basis functions (quadratic, linear, pure exponential, and mixed exponential). An approximate bound on the optimality gap between the ALP average cost solution and the optimal average cost solution is computed for each system, and the size of this bound is determined relative to the ALP average cost solution. Using the same set of networks, the performance of ALP-generated policies is compared to that of the heuristic policies first-buffer-first-served (FBFS), last-buffer-first-served (LBFS), highest-queue-first-served (HQFS), and random-queue-first-served (RQFS). In general, ALP-generated average cost solutions are considerably smaller than the simulated average cost under the corresponding policy, and therefore the approximate bounds on the optimality gaps are quite large; this bound increases with the complexity of the queueing network. Some ALP-generated policies are not stabilizing policies for their corresponding networks, especially those produced using pure exponential and mixed exponential basis functions. For almost all systems, at least one of the heuristic policies achieves a mean average cost less than or nearly equal to the smallest mean average cost of all ALP-generated policies in simulation runs. This means that there generally exists a heuristic policy which performs as well as or better than any ALP-generated policy.
In conclusion, a useful bound on the optimality gap between the ALP average cost solution and the optimal average cost solution cannot be computed with this method. Further, heuristic policies, which are more computationally tractable than ALP-generated policies, can generally match or exceed the performance of ALP-generated policies, and thus computing ALP-generated policies is often unnecessary for realizing cost benefits in queueing networks. / text
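Heuristic policies of the kind compared here can be evaluated by straightforward simulation. A minimal sketch pitting FBFS against HQFS on a toy discrete-time two-class, single-server queue; the arrival and service probabilities are invented for illustration and are unrelated to the twelve networks studied:

```python
import random

def simulate(policy, horizon=20_000, seed=0):
    """Mean total queue length (holding cost with unit costs) under a policy.
    Two buffers, one server; Bernoulli arrivals and service completions."""
    rng = random.Random(seed)
    lam = (0.20, 0.20)   # per-slot arrival probability, each class (assumed)
    mu = (0.60, 0.60)    # per-slot service-completion probability (assumed)
    q = [0, 0]
    total = 0
    for _ in range(horizon):
        for i in (0, 1):
            if rng.random() < lam[i]:
                q[i] += 1
        busy = [i for i in (0, 1) if q[i] > 0]
        if busy:
            i = policy(q, busy)          # which nonempty buffer to serve
            if rng.random() < mu[i]:
                q[i] -= 1
        total += q[0] + q[1]
    return total / horizon

fbfs = lambda q, busy: busy[0]                        # first buffer first
hqfs = lambda q, busy: max(busy, key=lambda i: q[i])  # longest queue first
```

Running both policies on the same seed gives a like-for-like cost comparison, which is the same experimental pattern the paper applies at a much larger scale.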
|
253 |
Contract-driven data structure repair : a novel approach for error recoveryNokhbeh Zaeem, Razieh 02 July 2014 (has links)
Software systems are now pervasive throughout our world, and their reliability is an urgent necessity. Much of the research effort on increasing software reliability is dedicated to requirements, architecture, design, implementation, and testing---activities performed before system deployment. While such approaches have become substantially more advanced, software remains buggy and failures remain expensive. We take a radically different approach to reliability: contract-driven data structure repair for runtime error recovery, in which erroneous executions of deployed software are corrected on the fly using rich behavioral contracts. Our key insight is to transform the software contract---a high-level description of the expected behavior---into an efficient implementation that repairs the erroneous data structures in the program state when an error occurs. To improve the efficiency, scalability, and effectiveness of repair, we leverage, in addition to rich behavioral contracts, the current erroneous state, the dynamic behavior of the program, and repair history and abstraction. A core technical problem our approach to repair addresses is the construction of structurally complex data that satisfy desired properties. We present a novel structure generation technique based on dynamic programming---a classic optimization approach---that exploits the recursive nature of the structures. We use our technique for constraint-based testing, where it provides better scalability than previous work; we applied it to test widely used web browsers and found both known and unknown bugs. Our use of dynamic programming in structure generation opens a new direction for tackling the scalability problem of data structure repair. This research advances our ability to develop correct programs. For programs that already have contracts, error recovery using our approach can come at low cost.
The same contracts can be used for systematically testing code before deployment using existing as well as our new techniques. Thus, we enable a novel unification of software verification and error recovery. / text
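The idea of exploiting the recursive nature of structures with dynamic programming can be illustrated on a toy case: memoized generation of all binary search trees over a key range, where each subproblem (a key interval) is solved once and reused. This is a generic sketch of the principle, not the thesis's repair algorithm:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def gen_bsts(lo, hi):
    """All binary search trees over keys lo..hi, as nested tuples
    (key, left, right); None marks an empty subtree. Memoization means
    each key interval is enumerated only once."""
    if lo > hi:
        return (None,)
    trees = []
    for root in range(lo, hi + 1):
        for left in gen_bsts(lo, root - 1):      # reused subproblem
            for right in gen_bsts(root + 1, hi):  # reused subproblem
                trees.append((root, left, right))
    return tuple(trees)
```

The number of generated trees follows the Catalan numbers (for example, 5 trees over 3 keys), and the memo table keeps the recursion from re-deriving shared substructures.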
|
254 |
Algorithms and data structures for cache-efficient computation: theory and experimental evaluationChowdhury, Rezaul Alam 28 August 2008 (has links)
Not available / text
|
255 |
Utilizing Look-Ahead Information to Minimize Fuel Consumption and NOx Emissions in Heavy Duty VehiclesFlorell, Christoffer January 2015 (has links)
Producing more fuel-efficient vehicles and lowering emissions are of high importance among heavy duty vehicle manufacturers. One way of lowering fuel consumption is to use a so-called "look-ahead control strategy", which uses GPS and topography data to determine the optimal future velocity profile. When driving downhill, no fuel is supplied to the engine, which lowers the temperature in the aftertreatment system and thereby reduces its emission reduction capability. This master thesis investigates the possibility of using preheating look-ahead control actions to heat the aftertreatment system before entering a downhill slope, with the purpose of lowering fuel consumption and NOx emissions. A temperature model of a heavy duty aftertreatment system is produced, which is used to analyse the fuel consumption and NOx reduction performance of a Scania truck. A Dynamic Programming algorithm is also developed to define an optimal control trajectory that minimizes fuel consumption and released NOx emissions. It is concluded that the Dynamic Programming optimization initiates preheating control actions that reduce both fuel consumption and NOx emissions. The best case for fuel consumption reduction achieves 0.14% lower fuel consumption and 5.2% lower NOx emissions.
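A toy version of the preheating trade-off can be written as a backward dynamic program over road segments: preheating burns fuel now but keeps the aftertreatment temperature above its light-off level during fuel-cut downhill stretches. All states, costs, and the one-step cooling model below are invented for illustration and are far simpler than the thesis's temperature model:

```python
def lookahead_dp(profile, temps=range(5), alpha=1.0):
    """Minimal look-ahead DP. profile is a string of segments: 'f' flat,
    'd' downhill (engine off, system cools one level). Action: preheat (1)
    or not (0). State: discrete temperature level; below level 2 ("light-off")
    an NOx penalty is charged. Returns minimal cost starting hot."""
    INF = float("inf")
    n = len(profile)
    tmax = max(temps)
    # V[k][T]: minimal cost from segment k onward at temperature level T
    V = [[0.0] * (tmax + 1) for _ in range(n + 1)]
    for k in range(n - 1, -1, -1):
        for T in temps:
            best = INF
            for preheat in (0, 1):
                fuel = 0.5 * preheat            # extra fuel for preheating
                T2 = min(tmax, T + preheat)
                if profile[k] == "d":           # downhill: no fuelling, cooling
                    T2 = max(0, T2 - 1)
                nox = alpha * max(0, 2 - T2)    # penalty below light-off
                best = min(best, fuel + nox + V[k + 1][T2])
            V[k][T] = best
    return V[0][tmax]
```

On a long downhill run the DP preheats just enough to hold the temperature near light-off, which is the qualitative behaviour the thesis reports for the real model.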
|
256 |
The design of feedback channels for wireless networks : an optimization-theoretic viewGanapathy, Harish 23 September 2011 (has links)
The fundamentally fluctuating nature of the strength of a wireless link poses a significant challenge when seeking to achieve reliable communication at high data rates. Common sense, supported by information theory, tells us that one can move closer towards achieving higher data rates if the transmitter is provided with a priori knowledge of the channel. Such channel knowledge is typically provided to the transmitter by a feedback channel that is present between the receiver and the transmitter. The quality of information provided to the transmitter is proportional to the bandwidth of this feedback channel. Thus, the design of feedback channels is a key aspect in enabling high data rates. In the past, these feedback channels have been designed locally, on a link-by-link basis. While such an approach can be globally optimal in some cases, in many other cases, this is not true. In this thesis, we identify various settings in wireless networks, some already a part of existing standards, others under discussion in future standards, where the design of feedback channels is a problem that requires global, network-wide optimization. In general, we propose the treatment of feedback bandwidth as a network-wide resource, as the next step en route to achieving Gigabit wireless.
Not surprisingly, such a global optimization initiative naturally leads us to the important issue of computational efficiency, which is critical from the point of view of a network provider. A variety of optimization techniques are employed in this thesis to solve the large combinatorial problems that arise in the context of feedback allocation, including dynamic programming, submodular function maximization, convex relaxations, and compressed sensing. A naive algorithm for these problems would typically search over an exponential number of possibilities to find the optimal feedback allocation. As a general theme, we identify and exploit special application-specific structure to solve these problems optimally with reduced complexity. Continuing this endeavour, we search for more intricate structure that enables us to propose approximate solutions with significantly reduced complexity. The accompanying analysis of these algorithms studies the inherent trade-offs between accuracy, efficiency, and the required structure of the problem. / text
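One example of structure that tames the exponential search is separable concave per-link utility, for which a greedy marginal-return allocation of the feedback budget is optimal. A hedged sketch; the utility function used below is a placeholder, not the thesis's rate model:

```python
import heapq

def allocate_feedback_bits(budget, n_links, gain):
    """Greedily allocate `budget` feedback bits across `n_links` links.
    gain(link, b) is the utility of giving b bits to link; when each link's
    utility is concave in b, always taking the largest marginal gain is
    optimal for this separable integer allocation."""
    alloc = [0] * n_links
    # max-heap of (negated next marginal gain, link)
    heap = [(-(gain(i, 1) - gain(i, 0)), i) for i in range(n_links)]
    heapq.heapify(heap)
    for _ in range(budget):
        neg, i = heapq.heappop(heap)
        alloc[i] += 1
        delta = gain(i, alloc[i] + 1) - gain(i, alloc[i])
        heapq.heappush(heap, (-delta, i))
    return alloc
```

With a concave placeholder utility such as weight * sqrt(bits), the greedy run costs O(budget log n) instead of enumerating the exponentially many allocations.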
|
257 |
Process speedup problems: algorithms and complexityΦίλος Ράτσικας, Αλέξης 05 February 2015 (has links)
This thesis continues the study of scheduling problems with strict deadlines begun by Amalia Stoumpou in her thesis "Process Speedup Problems in Grid Computing: Algorithms and Complexity". We study problems of scheduling tasks on more than one identical parallel processor. Algorithms are given that solve the problem of minimizing the makespan under strict deadlines, first for 2 and then for m processors; these algorithms have pseudopolynomial complexity. We then study scheduling and speedup problems for tasks with identical execution times in identical parallel processor environments and give polynomial algorithms that solve them. Finally, we briefly mention some problems from the wider scheduling literature that can be approached or solved with the techniques applied to the preceding problems.
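The pseudopolynomial flavour of the two-processor makespan problem can be illustrated with the classic subset-sum dynamic program: enumerate the achievable loads of one machine and pick the split closest to half the total work. This generic sketch omits the deadline constraints treated in the thesis:

```python
def two_machine_makespan(jobs):
    """Minimum makespan on two identical machines via subset-sum DP.
    Runs in O(n * total) time, pseudopolynomial in the processing times."""
    total = sum(jobs)
    reachable = {0}                      # achievable loads for machine 1
    for p in jobs:
        reachable |= {s + p for s in reachable}
    # the other machine carries the rest; minimize the larger of the two
    return min(max(s, total - s) for s in reachable)
```

The state space grows with the numeric size of the processing times rather than just the number of jobs, which is exactly what "pseudopolynomial" means here.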
|
258 |
Cooperative control of autonomous underwater vehicles.Savage, Elizabeth 30 September 2004 (has links)
The proposed project is the simulation of a system to search for air vehicles that
have splashed down in the ocean. The system comprises a group of 10+ autonomous
underwater vehicles, which cooperate in order to locate the aircraft. The search algorithm
used in this system is based on a quadratic Newton method and was developed
at Sandia National Laboratories. The method has already been successfully applied
to several two dimensional problems at Sandia.
The original 2D algorithm was converted to 3D and tested for robustness in the
presence of sensor error, position error and navigational error. Treating the robots as
point masses, the system was found to be robust for all such errors.
Several real-life adaptations were necessary. A round-robin communication strategy
was implemented on the system to properly simulate the dissemination of information
throughout the group. Time to convergence was delayed, but the system still
functioned adequately.
Once simulations for the point masses had been exhausted, the dynamics of the
robots were included. The robot equations of motion were described using Kane's
equations. Path planning was investigated using optimal control methods. The
variational calculus approach was attempted using the Matlab solver "fsolve"
and a genetic algorithm. A dynamic programming technique was also investigated,
using a method recently developed by Sandia National Laboratories. The Dynamic
Programming with Interior Points (DPIP) method was a very efficient method
for path planning and performed well in the presence of system constraints.
Finally all components of the system were integrated. The motion of the robot
exactly matched the motion of the particles, even when subjected to the same robustness
tests carried out on the point masses. This thesis provides exciting developments
for all types of cooperative projects.
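As a rough illustration of dynamic-programming path planning (not the DPIP method itself, which additionally handles vehicle dynamics and constraints via interior points), cost-to-go value iteration on a discretized grid:

```python
def grid_value_iteration(cost, goal):
    """Cost-to-go table for a 4-connected grid: cost[r][c] is the cost of
    entering cell (r, c). Sweeps until no value improves (Bellman fixpoint);
    an optimal path then follows decreasing cost-to-go from any start."""
    R, C = len(cost), len(cost[0])
    INF = float("inf")
    V = [[INF] * C for _ in range(R)]
    V[goal[0]][goal[1]] = 0.0
    changed = True
    while changed:
        changed = False
        for r in range(R):
            for c in range(C):
                if (r, c) == goal:
                    continue
                best = min(
                    (V[nr][nc] + cost[nr][nc]
                     for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                     if 0 <= nr < R and 0 <= nc < C),
                    default=INF)
                if best < V[r][c]:
                    V[r][c] = best
                    changed = True
    return V
```

The same backward-recursion idea underlies path-planning DP in continuous state spaces, just with far richer state (pose, velocity) and constraint handling.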
|
259 |
Simulation Optimization for the Stochastic Economic Lot Scheduling ProblemLöhndorf, Nils, Minner, Stefan 10 April 2013 (has links) (PDF)
We study simulation optimization methods for the stochastic economic lot scheduling problem. In contrast to
prior research, we focus on methods that treat this problem as a black box. Based on a large-scale numerical
study, we compare approximate dynamic programming with a global search for parameters of simple control
policies. We propose two value function approximation schemes based on linear combinations of piecewise-
constant functions as well as control policies that can be described by a small set of parameters. While
approximate value iteration worked well for small problems with three products, it was clearly outperformed
by the global policy search as soon as problem size increased. The most reliable choice in our study was a
globally optimized fixed-cycle policy. An additional analysis of the response surface of model parameters on
optimal average cost revealed that the cost effect of product diversity was negligible. (authors' abstract)
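The globally optimized fixed-cycle idea can be sketched as a brute-force search over cycle lengths on a toy single-item system, simulating each candidate policy and keeping the cheapest. All cost parameters and the demand model below are illustrative assumptions, not the paper's multi-product setting:

```python
import random

def fixed_cycle_cost(cycle, horizon=20_000, seed=1,
                     setup=25.0, hold=1.0, shortage=9.0, mean_demand=1.0):
    """Average cost of a fixed-cycle produce-up-to policy: every `cycle`
    periods, pay a setup and raise inventory to the expected cycle demand;
    in between, pay holding on stock and a penalty on backlog."""
    rng = random.Random(seed)
    inv = 0.0
    total = 0.0
    for t in range(horizon):
        if t % cycle == 0:
            inv = cycle * mean_demand       # produce up to cycle coverage
            total += setup
        inv -= rng.expovariate(1.0 / mean_demand)   # stochastic demand
        total += hold * max(inv, 0.0) + shortage * max(-inv, 0.0)
    return total / horizon

def best_fixed_cycle(max_cycle=8):
    """Global policy search: simulate every cycle length, keep the cheapest."""
    return min(range(1, max_cycle + 1), key=fixed_cycle_cost)
```

Because the policy is described by a single parameter, this global search is trivially exhaustive; the paper's point is that such simple, globally tuned policies can beat far more elaborate approximate-DP machinery.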
|
260 |
Decision Models for Corporate SustainabilityMendoza, Alvaro January 2013 (has links)
This dissertation explores decision problems faced by organizations willing to address or support the incorporation of sustainability aspects into their "business as usual" activities. We study two specific problems. First, we analyze the decision problem of a forest manager who, in addition to selling timber, has the option of selling carbon offsets for the carbon sequestered by the forest. We study both the single-rotation and the multiple-rotation harvesting problems, and develop stochastic dynamic programming models to find the optimal harvesting and offset-selling policy, the expected optimal harvesting time, and the expected optimal reward under different offset-trading schemes. Then, we study the case in which an organization (sustainability buyer) outsources sustainability efforts to another organization (sustainability seller). While buyers cannot directly exert sustainability efforts, they can provide economic or technical support to their sellers in order to incentivize these efforts. We investigate how the effort and support decisions change according to characteristics of stakeholders, buyers, and sellers. Considering that buyers can compete on the sustainability effort exerted by their sellers, we extend our analysis to the case of competing buyers, and we determine conditions under which sharing sellers is preferred by the buyers to having separate sellers for each buyer. / Dissertation
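The harvest-or-wait structure of such a stochastic dynamic program can be sketched with the timber price on a discrete grid and a binary decision each period; carbon offsets are omitted for brevity and all numbers are illustrative:

```python
def harvest_value(prices, horizon, p_up=0.5, discount=0.95):
    """Backward stochastic DP for a toy single-rotation harvest problem.
    Each period the price index moves up or down one grid step (reflecting
    at the ends); the manager harvests (receiving the current price) or
    waits. Returns the value of an unharvested stand at each price level."""
    n = len(prices)
    V = list(prices)                       # at the deadline, harvest regardless
    for _ in range(horizon):
        nxt = V
        V = []
        for i, p in enumerate(prices):
            up, dn = min(i + 1, n - 1), max(i - 1, 0)
            wait = discount * (p_up * nxt[up] + (1 - p_up) * nxt[dn])
            V.append(max(p, wait))         # harvest now vs. option value
    return V
```

The value of waiting is an option value, so the optimal policy is a price threshold: harvest once the price is high enough, otherwise keep the stand. The dissertation's models add carbon-offset revenue streams and multiple rotations on top of this skeleton.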
|