11 |
Estimating reliability with discrete growth models. Chandler, James D., 03 1900
Approved for public release; distribution is unlimited / Determining the reliability of newly designed systems is one of the most important functions of the military acquisition process. Tracking the growth in reliability of a system as it is developed and modified repeatedly is an important part of that process. This thesis extends a previously written reliability growth simulation program. It analyzes the capabilities and limitations of two discrete reliability growth models to determine which is more applicable for estimating system reliability under a variety of growth patterns; negative growth patterns are also considered. The result of this thesis is a FORTRAN simulation which enables a more accurate estimate of system reliability using test data generated during the development phase of an acquisition program. Keywords: Theses; Charts; Mathematical models. (Author) / http://archive.org/details/estimatingreliab00chan / Captain, United States Army
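The thesis's FORTRAN simulation is not reproduced here; as an illustration of the kind of discrete reliability growth model it compares, the following Python sketch fits the Lloyd-Lipow form R_k = R_inf - alpha/k to simulated pass/fail test data. All numbers are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Hypothetical pass/fail test data over development stages: the true
# reliability grows toward 0.95 as design fixes accumulate.
stages = np.arange(1, 11)
trials_per_stage = 50
true_rel = 0.95 - 0.45 / stages
successes = rng.binomial(trials_per_stage, true_rel)
observed = successes / trials_per_stage

# Lloyd-Lipow discrete reliability growth model: R_k = R_inf - alpha / k
def lloyd_lipow(k, r_inf, alpha):
    return r_inf - alpha / k

params, _ = curve_fit(lloyd_lipow, stages, observed, p0=(0.9, 0.5))
r_inf, alpha = params
print(f"estimated limiting reliability: {r_inf:.3f} (growth parameter alpha = {alpha:.3f})")
```

Fitting the stage-by-stage success fractions gives an estimate of the limiting reliability the system is growing toward, which is the kind of quantity the simulation is used to track.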
|
12 |
Multilevel flow modelling as a generalized modelling technique. Bracher, Stephen, 18 August 2016
A project report submitted to the Faculty of Engineering, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering by course work and a project. Johannesburg, 1993 / With the increasing size and complexity of industrial plants there is a growing need for modelling techniques and tools that can be used to aid the operator in the daily running of the plant. This research investigates a modelling technique known as multilevel flow modelling (MFM) and assesses its suitability for fault diagnosis applications. A rule base for measurement validation, alarm analysis and fault diagnosis is developed which is independent of the process structure and of the number of sensors used, so as not to lose the generality of the modelling technique. A case study shows that it is feasible to use MFM for fault diagnosis and that the technique has the flexibility to accommodate alterations.
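As a rough illustration of rule-based reasoning over a flow model (not the report's actual MFM rule base, which also covers measurement validation), the following Python toy marks the most upstream abnormal flow function on a path as the candidate primary fault and treats downstream abnormalities as consequences.

```python
from dataclasses import dataclass

# Hypothetical MFM-style flow path: source -> transport -> storage -> sink.
# Each flow function carries a measured state; a single rule picks the most
# upstream abnormal function as the primary fault candidate.

@dataclass
class FlowFunction:
    name: str
    kind: str      # "source", "transport", "storage", "sink"
    state: str     # "normal", "low", "high"

def diagnose(path):
    """Return (primary_fault, consequences) for a single flow path."""
    abnormal = [f for f in path if f.state != "normal"]
    if not abnormal:
        return None, []
    # Rule: the first abnormal function along the flow direction is the primary
    # candidate; later abnormalities on the same path are treated as consequences.
    return abnormal[0], abnormal[1:]

path = [
    FlowFunction("feed pump", "source", "low"),
    FlowFunction("feed line", "transport", "low"),
    FlowFunction("buffer tank", "storage", "low"),
    FlowFunction("reactor inlet", "sink", "normal"),
]
primary, consequences = diagnose(path)
print("primary fault candidate:", primary.name)
print("consequential alarms:", [f.name for f in consequences])
```

Because the rule refers only to the topology of the flow path and the states attached to it, it stays independent of the particular process and sensor set, which is the generality property the report is after.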
|
13 |
GENERALIZED FRACTIONAL PROGRAMMING. Unknown Date
Consider the nonlinear programming problem that involves the product of two functionals and takes the form: maximize $P(x) = [\phi_1(x)]^{\alpha_1} \cdot [\phi_2(x)]^{\alpha_2}$ subject to $g(x) \le 0$, where $x \in \mathbb{R}^n$, $\alpha_1, \alpha_2 \in \mathbb{R}$, and $\phi_i(\cdot)$, $i = 1, 2$, and each component of $g(\cdot) \in \mathbb{R}^m$ are continuously differentiable scalar functions. / Kuhn-Tucker type necessary conditions for optimality are established following the Dubovitskii-Milyutin formalism, and a duality theory is developed which extends that of the Nonlinear Fractional Programming problem (for which $\alpha_1 = 1$ and $\alpha_2 = -1$) as well as the ordinary Nonlinear Programming problem (with $\alpha_1 = 1$ and $\phi_2(x) \equiv 1$). / As an extension of the finite dimensional case, a new class of Continuous Nonlinear Programming problems is introduced, including in particular the class of Continuous Fractional Programming problems, which finds applications in the study of aerodynamic shapes, particularly that of flat top wings in hypersonic flow. As a representative of this class, we consider the problem / Maximize / (DIAGRAM, TABLE OR GRAPHIC OMITTED...PLEASE SEE DAI) / subject to / (DIAGRAM, TABLE OR GRAPHIC OMITTED...PLEASE SEE DAI) / where $\alpha_1, \alpha_2 \in \mathbb{R}$; $z(t)$ is an n-dimensional vector function with each component p-integrable on $[0, T]$, $T$ finite, $1 < p < \infty$; $f(z(t), t)$ and $g(z(t), t)$ are m- and -dimensional vector functions, respectively; $c(t)$ and $H(t, s)$ are, respectively, $m \times 1$ and $m \times$ time dependent matrices whose entries are p-integrable on $[0, T]$ and $[0, T] \times [0, T]$, respectively; $\phi_i(\cdot, t)$, $i = 1, 2$, and each component of $f$ and $g$ are scalar functions, continuously differentiable in their first argument throughout $[0, T]$. / Following the same approach as in the finite dimensional case, Kuhn-Tucker type necessary conditions and a duality theory are developed, extending those of the ordinary Continuous Programming problem, for which $\alpha_1 = 1$ and $\alpha_2 = 0$. / Source: Dissertation Abstracts International, Volume: 42-06, Section: B, page: 2492. / Thesis (Ph.D.)--The Florida State University, 1981.
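For concreteness, here is a sketch of what Kuhn-Tucker conditions look like for the finite dimensional product-of-powers problem above, stated under standard assumptions ($\phi_1, \phi_2 > 0$ near the optimum and a constraint qualification). This is an illustration, not the dissertation's exact Dubovitskii-Milyutin statement.

```latex
% Sketch only: Kuhn-Tucker conditions for
%   max P(x) = [phi_1(x)]^{a_1} [phi_2(x)]^{a_2}  subject to  g(x) <= 0,
% assuming phi_1, phi_2 > 0 near x* and a constraint qualification.
\begin{align*}
 &\alpha_1\,\phi_1(x^*)^{\alpha_1-1}\phi_2(x^*)^{\alpha_2}\,\nabla\phi_1(x^*)
  + \alpha_2\,\phi_1(x^*)^{\alpha_1}\phi_2(x^*)^{\alpha_2-1}\,\nabla\phi_2(x^*)
  = \sum_{j=1}^{m}\lambda_j\,\nabla g_j(x^*), \\
 &\lambda_j \ge 0, \qquad \lambda_j\, g_j(x^*) = 0, \qquad g_j(x^*) \le 0,
  \qquad j = 1,\dots,m.
\end{align*}
```

Setting $\alpha_1 = 1$ and $\alpha_2 = -1$ recovers the usual fractional programming case, and $\phi_2 \equiv 1$ recovers ordinary nonlinear programming, which is the sense in which the abstract calls the theory an extension of both.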
|
14 |
THE STABILITY OF CONTINUOUS PROGRAMMING. Unknown Date
Source: Dissertation Abstracts International, Volume: 41-01, Section: B, page: 0323. / Thesis (Ph.D.)--The Florida State University, 1980.
|
15 |
SOLUTIONS TO CONTINUOUS TIME PROGRAMMING PROBLEMS. Unknown Date
Consider the linear continuous time programming problem / (DIAGRAM, TABLE OR GRAPHIC OMITTED...PLEASE SEE DAI) / subject to / (DIAGRAM, TABLE OR GRAPHIC OMITTED...PLEASE SEE DAI) / and / $z(t) \ge 0$, $t \in [0, T]$. / The vector function $z(t)$ maps $[0, T]$ into $\mathbb{R}^n$, $a(t)$ is an n-dimensional row vector and $c(t)$ is an m-dimensional column vector. The matrices $B(t)$ and $K(t, s)$ are of dimension $m \times n$. / We show that under certain conditions the optimal solution contains at most m positive elements, for almost all $t \in [0, T]$. This makes it possible to express the solution as an infinite matrix series. One property of the solutions is that the vector function $z(t)$ is not necessarily smooth for all t at the optimum. Rather, different components may be positive over different intervals. The points at which the change occurs are called join points. We show that all join points are natural, that is, they occur when a variable goes to zero either in the original problem or in an associated dual problem. / Our approach leads to a confirmation of a conjecture about the smoothness of solutions between join points. We also prove a conjecture which had been made about the behavior of the solutions as $t \to \infty$. / The previous developments lead to an algorithm for the problem. The algorithm makes use of the simplex method to obtain solutions using a matrix series. The method is incorporated in a computer program and examples are given. / We discuss the nonlinear problem and prove a theorem on the convergence of a penalty function approach on a precompact metric space. / Source: Dissertation Abstracts International, Volume: 42-10, Section: B, page: 4168. / Thesis (Ph.D.)--The Florida State University, 1981.
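The objective and constraints are omitted in the DAI abstract above. Given the stated dimensions, the problem is presumably the classical linear continuous-time program of Tyndall/Levinson type; the following LaTeX block is a hedged reconstruction for orientation, not a quotation of the dissertation.

```latex
% Hedged reconstruction of the omitted problem statement, inferred from the
% stated dimensions (a(t) an n-row vector, c(t) an m-column vector, B(t) and
% K(t,s) of size m x n); not quoted from the dissertation.
\begin{align*}
 \text{maximize}\quad & \int_0^T a(t)\, z(t)\, dt \\
 \text{subject to}\quad & B(t)\, z(t) \;\le\; c(t) + \int_0^t K(t,s)\, z(s)\, ds,
   \qquad t \in [0,T], \\
 & z(t) \ge 0, \qquad t \in [0,T].
\end{align*}
```

In this form the claim that the optimal $z(t)$ has at most m positive components for almost all t mirrors the finite dimensional fact that a basic optimal solution of a linear program with m constraints has at most m positive variables.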
|
16 |
CONTRIBUTIONS TO THE THEORY OF CONTINUOUS TIME PROGRAMMING. Unknown Date
Source: Dissertation Abstracts International, Volume: 36-06, Section: B, page: 2989. / Thesis (Ph.D.)--The Florida State University, 1975.
|
17 |
CONTRIBUTIONS TO DUALITY THEORY IN CONTINUOUS NONLINEAR PROGRAMMING. Unknown Date
Source: Dissertation Abstracts International, Volume: 38-09, Section: B, page: 4420. / Thesis (Ph.D.)--The Florida State University, 1977.
|
18 |
Large scale prediction models and algorithms. Monsch, Matthieu (Matthieu Frederic), January 2013
Thesis (Ph. D.)--Massachusetts Institute of Technology, Operations Research Center, 2013. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 129-132). / Over 90% of the data available across the world has been produced over the last two years, and the trend is increasing. It has therefore become paramount to develop algorithms which are able to scale to very high dimensions. In this thesis we are interested in showing how we can use structural properties of a given problem to come up with models applicable in practice, while keeping most of the value of a large data set. Our first application provides a provably near-optimal pricing strategy under large-scale competition, and our second focuses on capturing the interactions between extreme weather and damage to the power grid from large historical logs. The first part of this thesis is focused on modeling competition in Revenue Management (RM) problems. RM is used extensively across a swathe of industries, ranging from airlines to the hospitality industry to retail, and the internet has, by reducing search costs for customers, potentially added a new challenge to the design and practice of RM strategies: accounting for competition. This work considers a novel approach to dynamic pricing in the face of competition that is intuitive, tractable and leads to asymptotically optimal equilibria. We also provide empirical support for the notion of equilibrium we posit. The second part of this thesis was done in collaboration with a utility company in the northeastern United States. In recent years, there have been a number of powerful storms that led to extensive power outages. We provide a unified framework to help power companies reduce the duration of such outages. We first train a data-driven model to predict the extent and location of damage from weather forecasts. This information is then used in a robust optimization model to optimally dispatch repair crews ahead of time. Finally, we build an algorithm that uses incoming customer calls to compute the likelihood of damage at any point in the electrical network. / by Matthieu Monsch. / Ph.D.
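As a rough illustration of the predict-then-dispatch idea, the following Python sketch trains a regression on synthetic weather features and then spreads a fixed crew budget across areas in proportion to predicted damage. The data, feature names and proportional allocation are all made up; the thesis works with real utility data and uses a robust optimization model for dispatch.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Each historical (storm, area) row: forecast wind gust (mph), rainfall (in),
# tree density index. Target: number of damaged assets (synthetic here).
n_obs, n_areas = 400, 10
X = np.column_stack([
    rng.uniform(20, 90, n_obs),    # wind gust
    rng.uniform(0, 5, n_obs),      # rainfall
    rng.uniform(0, 1, n_obs),      # tree density
])
damage = np.maximum(
    0.5 * X[:, 0] + 3.0 * X[:, 1] + 20.0 * X[:, 2] + rng.normal(0, 5, n_obs) - 20, 0)

model = GradientBoostingRegressor().fit(X, damage)

# Incoming storm: area-level forecasts; pre-position a fixed crew budget in
# proportion to predicted damage (a stand-in for the robust dispatch model).
forecast = np.column_stack([
    rng.uniform(40, 80, n_areas),
    rng.uniform(1, 4, n_areas),
    rng.uniform(0, 1, n_areas),
])
pred = np.clip(model.predict(forecast), 0, None)
crews = 30
allocation = np.floor(crews * pred / pred.sum()).astype(int)
print("predicted damage per area:", np.round(pred, 1))
print("crews pre-positioned per area:", allocation)
```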
|
19 |
Essays in Consumer Choice Driven Assortment Planning. Saure, Denis R., January 2011
Product assortment selection is among the most critical decisions facing retailers: product variety and relevance are fundamental drivers of consumers' purchase decisions and ultimately of a retailer's profitability. In the last couple of decades an increasing number of firms have gained the ability to frequently revisit their assortment decisions during a selling season. In addition, the development and consolidation of online retailing have introduced new levels of operational flexibility and cheap access to detailed transactional information. These new operational features present the retailer with both benefits and challenges. The ability to revisit the assortment decision frequently over time allows the retailer to introduce and test new products during the selling season, adjust on the fly to unexpected changes in consumer preferences, and use customer profile information to customize the online shopping experience in real time. Our main objective in this thesis is to formulate and solve assortment optimization models addressing the challenges present in modern retail environments. We begin by analyzing the role of the assortment decision in balancing information collection and revenue maximization, when consumer preferences are initially unknown. By considering utility-maximizing consumers, we establish fundamental limits on the performance of any assortment policy whose aim is to maximize long-run revenues. In addition, we propose adaptive assortment policies that attain such performance limits. Our results highlight salient features of this dynamic assortment problem that distinguish it from similar problems of sequential decision making under model uncertainty. Next, we extend the analysis to the case when additional consumer profile information is available; our primary motivation here is the emerging area of online advertisement. As in the previous setup, we identify fundamental performance limits and propose adaptive policies attaining these limits. Finally, we focus on the effects of competition and consumers' access to information on assortment strategies. In particular, we study competition among retailers when they have access to common products, i.e., products that are available to the competition, and where consumers have full information about the retailers' offerings. Our results shed light on equilibrium properties in such settings and the effect common products have on this behavior.
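As a rough illustration of the exploration-exploitation trade-off in dynamic assortment planning, the Python sketch below runs a toy epsilon-greedy policy under a multinomial logit (MNL) choice model with made-up prices, preference weights and capacity; the thesis's adaptive policies and performance limits are considerably more refined than this stand-in.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# Toy MNL-based dynamic assortment: preference weights are unknown and are
# estimated from observed purchases, using the MNL property that
# P(buy i) / P(no purchase) = v_i whenever i is offered.
prices = np.array([10.0, 8.0, 6.0, 5.0, 4.0, 3.0])
true_v = np.array([0.3, 0.5, 0.8, 1.0, 1.2, 1.5])   # unknown to the retailer
N, CAP, T, EPS = len(prices), 3, 5000, 0.1

chosen = np.zeros(N)      # times product i was purchased
no_buy = np.zeros(N)      # times nothing was bought while i was on offer

def expected_revenue(S, v):
    denom = 1.0 + v[list(S)].sum()
    return sum(prices[i] * v[i] for i in S) / denom

def best_assortment(v):
    subsets = itertools.chain.from_iterable(
        itertools.combinations(range(N), k) for k in range(1, CAP + 1))
    return max(subsets, key=lambda S: expected_revenue(S, v))

for t in range(T):
    if rng.random() < EPS:   # explore: offer a random assortment
        S = tuple(rng.choice(N, size=CAP, replace=False))
    else:                    # exploit: best assortment under current estimates
        v_hat = chosen / np.maximum(no_buy, 1)
        S = best_assortment(v_hat)
    # Simulate one MNL customer: P(i) = v_i / (1 + sum_{j in S} v_j).
    weights = np.append(true_v[list(S)], 1.0)         # last entry = no purchase
    pick = rng.choice(len(weights), p=weights / weights.sum())
    if pick < len(S):
        chosen[S[pick]] += 1
    else:
        no_buy[list(S)] += 1

v_hat = chosen / np.maximum(no_buy, 1)
print("estimated preference weights:", np.round(v_hat, 2))
print("assortment offered at the end:", best_assortment(v_hat))
```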
|
20 |
Algorithms for Sparse and Low-Rank Optimization: Convergence, Complexity and Applications. Ma, Shiqian, January 2011
Solving optimization problems with sparse or low-rank optimal solutions has been an important topic since the recent emergence of compressed sensing and its matrix extensions such as the matrix rank minimization and robust principal component analysis problems. Compressed sensing enables one to recover a signal or image with fewer observations than the "length" of the signal or image, and thus provides potential breakthroughs in applications where data acquisition is costly. However, the potential impact of compressed sensing cannot be realized without efficient optimization algorithms that can handle extremely large-scale and dense data from real applications. Although the convex relaxations of these problems can be reformulated as either linear programming, second-order cone programming or semidefinite programming problems, the standard methods for solving these relaxations are not applicable because the problems are usually of huge size and contain dense data. In this dissertation, we give efficient algorithms for solving these "sparse" optimization problems and analyze the convergence and iteration complexity properties of these algorithms. Chapter 2 presents algorithms for solving the linearly constrained matrix rank minimization problem. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast and solved as a semidefinite programming problem, such an approach is computationally expensive when the matrices are large. In Chapter 2, we propose fixed-point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems. Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of 10^-5 in about 3 minutes by sampling only 20 percent of the elements. We know of no other method that achieves as good recoverability. Numerical experiments on online recommendation, DNA microarray data set and image inpainting problems demonstrate the effectiveness of our algorithms. In Chapter 3, we study the convergence/recoverability properties of the fixed point continuation algorithm and its variants for matrix rank minimization. Heuristics for determining the rank of the matrix when its true rank is not known are also proposed. Some of these algorithms are closely related to greedy algorithms in compressed sensing. Numerical results for these algorithms for solving linearly constrained matrix rank minimization problems are reported. Chapters 4 and 5 consider alternating direction type methods for solving composite convex optimization problems. We present in Chapter 4 alternating linearization algorithms that are based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions. Our basic methods require at most O(1/ε) iterations to obtain an ε-optimal solution, while our accelerated (i.e., fast) versions require at most O(1/√ε) iterations, with little change in the computational effort required at each iteration. For the more general problem of minimizing the sum of K convex functions, we propose multiple-splitting algorithms, again in basic and accelerated versions, with O(1/ε) and O(1/√ε) iteration complexity bounds for obtaining an ε-optimal solution. To the best of our knowledge, the complexity results presented in these two chapters are the first of this type given for splitting and alternating direction type methods. Numerical results on various applications in sparse and low-rank optimization, including compressed sensing, matrix completion, image deblurring and robust principal component analysis, are reported to demonstrate the efficiency of our methods.
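To illustrate the fixed-point (shrinkage) iteration underlying nuclear norm minimization, the Python sketch below runs a proximal gradient step with singular value soft-thresholding on a synthetic matrix completion instance. It omits the continuation schedule and approximate SVD that make FPCA fast, so it is an illustration of the basic iteration rather than the dissertation's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic matrix completion: recover a low-rank matrix from a subset of
# entries by solving  min  mu*||X||_* + 0.5*||P_Omega(X - M)||_F^2
# with a fixed-point (proximal gradient) iteration.
n, r, sample_rate = 60, 3, 0.4
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # true low-rank matrix
mask = rng.random((n, n)) < sample_rate                  # observed entries

def svd_shrink(Y, thresh):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s = np.maximum(s - thresh, 0.0)                      # soft-threshold singular values
    return (U * s) @ Vt

X = np.zeros((n, n))
tau, mu = 1.0, 0.1                                       # step size and regularization
for _ in range(500):
    grad = mask * (X - M)                                # gradient of the data-fit term
    X = svd_shrink(X - tau * grad, tau * mu)             # shrinkage step

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(f"relative recovery error: {rel_err:.3e}")
```

Each iteration costs one SVD of an n × n matrix; replacing it with an approximate SVD and decreasing mu along a continuation path is what allows this type of method to scale to very large problems.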
|