  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
691

Unmanned aerial vehicle survivability: the impacts of speed, detectability, altitude, and enemy capabilities

McMindes, Kevin L. 09 1900
Warfighters are increasingly relying on Unmanned Aerial Vehicle (UAV) systems at all levels of combat operations. As these systems weave further into the fabric of our tactics and doctrine, their loss will seriously diminish combat effectiveness. This makes the survivability of these systems of utmost importance. Using agent-based modeling and a Nearly Orthogonal Latin Hypercube design of experiments, numerous factors and levels are explored to gain insight into their impact on, and relative importance to, survivability. Factors investigated include UAV speed, stealth, altitude, and sensor range, as well as enemy force sensor ranges, probability of kill, array of forces, and numerical strength. These factors are varied broadly to ensure robust survivability results regardless of the type of threat. The analysis suggests that a speed of at least 135 knots should be required and that increases in survivability remain appreciable up to about 225 knots. The exception to speed's dominance is in the face of extremely high-capability enemy assets. In this case, stealth becomes more important than speed alone. However, the interactions indicate that as both speed and stealth increase, speed yields a faster return on overall survivability, and that speed mitigates increased enemy capabilities.
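The Nearly Orthogonal Latin Hypercube used in the thesis is a specialized construction, but the plain Latin hypercube idea it refines can be sketched in a few lines (a minimal illustration only; the eight-point design and the two factors are hypothetical, not the thesis's actual experiment):

```python
import random

def latin_hypercube(n_samples, n_factors, seed=0):
    """Basic Latin hypercube: each factor's [0, 1) range is cut into
    n_samples strata, and each stratum is sampled exactly once."""
    rng = random.Random(seed)
    design = []
    for _ in range(n_factors):
        # one random point inside each of the n_samples strata
        column = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(column)
        design.append(column)
    # transpose: one row per design point
    return list(zip(*design))

# e.g. 8 design points over two factors such as UAV speed and altitude,
# to be rescaled to each factor's real range before simulation
points = latin_hypercube(8, 2)
```

Each column covers every stratum of its factor once, which is what lets a small number of simulation runs span a large factor space.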
692

Exploring the effectiveness of the Marine Expeditionary Rifle Squad

Sanders, Todd M. 09 1900
This study explores the effectiveness of the Marine Expeditionary Rifle Squad (MERS) in support of Distributed Operations in urban terrain. The Marine Corps is evaluating the Distributed Operations concept as a solution to new threats posed in current operations. In order to employ distributed tactics, a more effective and capable Marine Rifle Squad is needed. The MERS concept seeks to increase the effectiveness of the current rifle squad, enabling smaller, more lethal, and more survivable units. These issues are explored using agent-based modeling and data analysis. The most significant finding is that the MERS must be evaluated as a system; factors cannot be analyzed in isolation. The two factors that most affect effectiveness are survivability and lethality. Maximizing these two factors leads to the lowest friendly casualties, highest enemy casualties, and highest probability of mission success. Agent-based modeling provides the maximum flexibility and responsiveness required for timely insights into small unit combat.
693

Forecasting U.S. Marine Corps reenlistments by military occupational specialty and grade

Conatser, Dean G. 09 1900
Each year, manpower planners at Headquarters Marine Corps must forecast the enlisted force structure in order to properly shape it according to a goal, or target force structure. Currently the First Term Alignment Plan (FTAP) Model and Subsequent Term Alignment Plan (STAP) Model are used to determine the number of required reenlistments by Marine military occupational specialty (MOS) and grade. By request of Headquarters Marine Corps, Manpower and Reserve Affairs, this thesis and another, by Captain J.D. Raymond, begin the effort to create one forecasting model that will eventually perform the functions of both the FTAP and STAP models. This thesis predicts the number of reenlistments for first- and subsequent-term Marines using data from the Marine Corps' Total Force Data Warehouse (TFDW). Demographic and service-related variables from fiscal year 2004 were used to create logistic regression models for the FY2005 first-term and subsequent-term reenlistment populations. Classification trees were grown to assist in variable selection and modification. Logistic regression models were compared based on overall fit of the predictions to the FY2005 data. Combined with other research, this thesis can provide Marine manpower planners a means to forecast future force structure by MOS and grade.
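The logistic-regression approach described above can be illustrated in miniature. The sketch below fits a tiny model by stochastic gradient descent; the two predictors and the toy data are invented for the example and are not the TFDW variables:

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit P(reenlist = 1) with logistic regression via per-sample
    gradient descent on the log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi  # gradient of log-loss w.r.t. z
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, xi):
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1.0 / (1.0 + math.exp(-z))

# toy data: [years_of_service, deployments] -> reenlisted (1) or not (0)
X = [[4, 1], [4, 3], [8, 1], [8, 2], [12, 0], [12, 2]]
y = [0, 0, 1, 1, 1, 1]
w, b = fit_logistic(X, y)
```

Summing the predicted probabilities over a population is one simple way such a model can be turned into a reenlistment count forecast.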
694

Evaluation of fleet ownership versus global allocation of ships in the Combat Logistics Force

Doyle, David E. 09 1900
Military Sealift Command (MSC) introduced its new Dry Cargo and Ammunition Ship (T-AKE) in June 2006 to replace its retiring ammunition and fast combat stores supply ships. MSC seeks new ways to use T-AKEs, fleet replenishment oilers, and fast combat support ships to better support the U.S. Navy. We evaluate two alternate ways to manage these ships: one where each ship operates under a particular "fleet ownership," and another where these ships are "globally allocated," serving any fleet customer as needed worldwide. We introduce an optimization-based scheduling tool and use it to evaluate an expository 181-day peacetime scenario. We track daily inventories of 13 battle groups (carrier strike groups, expeditionary strike groups, surface strike groups, and a littoral combat ship squadron) to gain insight into how best to employ Combat Logistics Force (CLF) ships. We determine that, in our scenario, global allocation provides significantly better service to fleet customers.
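The daily inventory bookkeeping behind such a scenario can be sketched simply (a toy simulation with made-up parameters, not the thesis's optimization-based scheduling tool):

```python
def simulate_inventory(days, start, daily_burn, reorder_point, delivery):
    """Track one battle group's daily commodity level; a CLF ship
    delivers a fixed load whenever stock falls to the reorder point."""
    level, history = start, []
    for _ in range(days):
        level -= daily_burn            # daily consumption
        if level <= reorder_point:
            level += delivery          # underway replenishment
        history.append(level)
    return history

# five days, starting at 10 units, burning 3/day, reordering at 4,
# each replenishment delivering 9 units (all numbers hypothetical)
print(simulate_inventory(5, 10, 3, 4, 9))  # [7, 13, 10, 7, 13]
```

A scheduling optimizer layers on top of this kind of bookkeeping, deciding which shuttle ship serves which customer on which day.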
695

Computational Models for Scheduling in Online Advertising

Arkhipov, Dmitri I. 27 October 2016
Programmatic advertising is an actively developing industry and research area. Some of the research in this area concerns the development of optimal or approximately optimal contracts and policies between publishers, advertisers, and intermediaries such as ad networks and ad exchanges. Both the development of contracts and the construction of policies governing their implementation are difficult challenges, and different models take different features of the problem into account. In programmatic advertising, decisions are made in real time, and time is a scarce resource, particularly for publishers who are concerned with content load times. Policies for advertisement placement must execute very quickly once content is requested; this requires policies either to be pre-computed and accessed as needed, or for policy execution to be very efficient. We formulate a stochastic optimization problem for per-publisher ad sequencing with binding latency constraints. In our context, an ad request lifecycle is modeled as a sequence of one-by-one solicitation (OBOS) subprocesses/lifecycle stages. From the viewpoint of a supply-side platform (SSP), an entity acting in proxy for a collection of publishers, the duration of a given lifecycle stage/subprocess is a stochastic variable. This stochasticity is due both to the stochasticity inherent in Internet delay times and to the lack of information regarding the decision processes of independent entities. In our work we model the problem facing the SSP, namely the problem of optimally or near-optimally choosing the next lifecycle stage of a given ad request lifecycle at any given time. We solve this problem to optimality (subject to the granularity of time) using a classic application of Richard Bellman's dynamic programming approach to the 0/1 knapsack problem. The DP approach does not scale to a large number of lifecycle stages/subprocesses, so a sub-optimal approach is needed.
We use our DP formulation to derive a focused real-time dynamic programming (FRTDP) implementation, a heuristic method with optimality guarantees for solving our problem. We empirically evaluate (through simulation) the performance of our FRTDP implementation relative both to the DP implementation (for tractable instances) and to several alternative heuristics (for intractable instances). Finally, we make the case that our work is usefully applicable to problems outside the domain of online advertising.
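The classic 0/1-knapsack dynamic program that the author adapts can be sketched in its textbook form (this is the generic Bellman recursion, not the ad-sequencing formulation itself):

```python
def knapsack(values, weights, capacity):
    """Bellman DP: best[c] = maximum value achievable with capacity c."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacity downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

The table has O(n * capacity) entries, which is exactly the kind of growth that makes the exact DP intractable as the number of lifecycle stages increases and motivates the FRTDP heuristic.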
696

Task assignment algorithms for teams of UAVs in dynamic environments

Alighanbari, Mehdi, 1976- January 2004
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics; and, (S.M.)--Massachusetts Institute of Technology, Operations Research Center, 2004. / Includes bibliographical references (p. 113-118). / For many vehicles, obstacles, and targets, coordination of a fleet of Unmanned Aerial Vehicles (UAVs) is a very complicated optimization problem, and the computation time typically increases very rapidly with the problem size. Previous research proposed an approach to decompose this large problem into task assignment and trajectory problems, while capturing key features of the coupling between them. This enabled the control architecture to solve an assignment problem first to determine a sequence of waypoints for each vehicle to visit, and then concentrate on designing paths to visit these pre-assigned waypoints. Although this approach greatly simplifies the problem, the task assignment optimization was still too slow for real-time UAV operations. This thesis presents a new approach to the task assignment problem that is much better suited for replanning in a dynamic battlefield. The approach, called the Receding Horizon Task Assignment (RHTA) algorithm, is shown to achieve near-optimal performance with computational times that are feasible for real-time implementation. Further, this thesis extends the RHTA algorithm to account for the risk, noise, and uncertainty typically associated with the UAV environment. This work also provides new insights on the distinction between UAV coordination and cooperation. The benefits of these improvements to the UAV task assignment algorithms are demonstrated in several simulations and on two hardware platforms. / by Mehdi Alighanbari. / S.M.
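The receding-horizon idea, repeatedly solving a short-horizon assignment and committing only to the first decision, can be illustrated with a simple single-vehicle routing sketch (an illustration of the general principle only, not the RHTA algorithm):

```python
from itertools import permutations

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def first_leg(pos, remaining, horizon):
    """Plan over only the `horizon` nearest waypoints, then commit
    to the first leg of the cheapest short plan."""
    cand = sorted(remaining, key=lambda t: dist(pos, t))[:horizon]
    best_cost, best_first = None, None
    for seq in permutations(cand):
        cost, p = 0.0, pos
        for t in seq:
            cost += dist(p, t)
            p = t
        if best_cost is None or cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

def receding_horizon_route(start, tasks, horizon=2):
    """Replan after every committed leg, as new information could
    arrive between legs in a dynamic environment."""
    pos, route, remaining = start, [], list(tasks)
    while remaining:
        nxt = first_leg(pos, remaining, horizon)
        route.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return route

route = receding_horizon_route((0, 0), [(5, 5), (1, 0), (2, 1)])
```

Because only the first leg is ever committed, the plan can absorb changes to the task list between iterations, which is the property that suits such methods to dynamic battlefields.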
697

Multiple part type decomposition method in manufacturing processing line

Jang, Young Jae, 1974- January 2001
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and, (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2001. / "June 2001." / Includes bibliographical references (leaf 67). / by Young Jae Jang. / S.M.
698

Optimization Algorithms for Structured Machine Learning and Image Processing Problems

Qin, Zhiwei January 2013
Optimization algorithms are often the solution engine for machine learning and image processing techniques, but they can also become the bottleneck in applying these techniques if they are unable to cope with the size of the data. With the rapid advancement of modern technology, data of unprecedented size has become more and more available, and there is an increasing demand to process and interpret the data. Traditional optimization methods, such as the interior-point method, can solve a wide array of problems arising from the machine learning domain, but it is also this generality that often prevents them from dealing with large data efficiently. Hence, specialized algorithms that can readily take advantage of the problem structure are highly desirable and of immediate practical interest. This thesis focuses on developing efficient optimization algorithms for machine learning and image processing problems of diverse types, including supervised learning (e.g., the group lasso), unsupervised learning (e.g., robust tensor decompositions), and total-variation image denoising. These algorithms are of wide interest to the optimization, machine learning, and image processing communities. Specifically, (i) we present two algorithms to solve the Group Lasso problem. First, we propose a general version of the Block Coordinate Descent (BCD) algorithm for the Group Lasso that employs an efficient approach for optimizing each subproblem exactly. We show that it exhibits excellent performance when the groups are of moderate size. For groups of large size, we propose an extension of the proximal gradient algorithm based on variable step-lengths that can be viewed as a simplified version of BCD. By combining the two approaches we obtain an implementation that is very competitive and often outperforms other state-of-the-art approaches for this problem. 
We show how these methods fit into the globally convergent general block coordinate gradient descent framework in (Tseng and Yun, 2009). We also show that the proposed approach is more efficient in practice than the one implemented in (Tseng and Yun, 2009). In addition, we apply our algorithms to the Multiple Measurement Vector (MMV) recovery problem, which can be viewed as a special case of the Group Lasso problem, and compare their performance to other methods in this particular instance; (ii) we further investigate sparse linear models with two commonly adopted general sparsity-inducing regularization terms, the overlapping Group Lasso penalty (ℓ1/ℓ2-norm) and the ℓ1/ℓ∞-norm. We propose a unified framework based on the augmented Lagrangian method, under which problems with both types of regularization and their variants can be efficiently solved. As one of the core building-blocks of this framework, we develop new algorithms using a partial-linearization/splitting technique and prove that the accelerated versions of these algorithms require O(1/√ε) iterations to obtain an ε-optimal solution. We compare the performance of these algorithms against that of the alternating direction augmented Lagrangian and FISTA methods on a collection of data sets and apply them to two real-world problems to compare the relative merits of the two norms; (iii) we study the problem of robust low-rank tensor recovery in a convex optimization framework, drawing upon recent advances in robust Principal Component Analysis and tensor completion. We propose tailored optimization algorithms with global convergence guarantees for solving both the constrained and the Lagrangian formulations of the problem. These algorithms are based on the highly efficient alternating direction augmented Lagrangian and accelerated proximal gradient methods. We also propose a nonconvex model that can often improve the recovery results from the convex models.
We investigate the empirical recoverability properties of the convex and nonconvex formulations and compare the computational performance of the algorithms on simulated data. We demonstrate through a number of real applications the practical effectiveness of this convex optimization framework for robust low-rank tensor recovery; (iv) we consider the image denoising problem using total variation regularization. This problem is computationally challenging to solve due to the non-differentiability and non-linearity of the regularization term. We propose a new alternating direction augmented Lagrangian method, involving subproblems that can be solved efficiently and exactly. The global convergence of the new algorithm is established for the anisotropic total variation model. We compare our method with the split Bregman method and demonstrate the superiority of our method in computational performance on a set of standard test images.
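The proximal-gradient machinery discussed in (i) and (ii) is built around the group soft-thresholding operator, the proximal operator of the ℓ2-norm penalty on a group of coefficients, which can be written down directly (a generic version with made-up inputs):

```python
import math

def group_soft_threshold(v, tau):
    """Prox of tau * ||.||_2: shrink the whole group toward zero,
    zeroing it out entirely when its norm is at most tau."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= tau:
        return [0.0] * len(v)
    scale = 1.0 - tau / norm
    return [scale * x for x in v]

print(group_soft_threshold([3.0, 4.0], 1.0))  # scales the norm from 5 down to 4
print(group_soft_threshold([0.3, 0.4], 1.0))  # small group is zeroed out
```

Applying this operator group-by-group after each gradient step is what makes proximal methods drop entire groups of coefficients at once, the defining behavior of the Group Lasso.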
699

Excluding Induced Paths: Graph Structure and Coloring

Maceli, Peter Lawson January 2015
An induced subgraph of a given graph is any graph which can be obtained by successively deleting vertices, possibly none. In this thesis, we present several new structural and algorithmic results on a number of different classes of graphs which are closed under taking induced subgraphs. The first result of this thesis is related to a conjecture of Hayward and Nastos on the structure of graphs with no induced four-edge path or four-edge antipath. They conjectured that every such graph which is both prime and perfect is either a split graph or contains a certain useful arrangement of simplicial and antisimplicial vertices. We give a counterexample to their conjecture, and prove a slightly weaker version. This is joint work with Maria Chudnovsky, and first appeared in Journal of Graph Theory. The second result of this thesis is a decomposition theorem for the class of all graphs with no induced four-edge path or four-edge antipath. We show that every such graph can be obtained from pentagons and split graphs by repeated application of complementation, substitution, and split graph unification. Split graph unification is a new graph operation we introduced, which is a generalization of substitution and involves "gluing" two graphs along a common induced split graph. This is a combination of joint work with Maria Chudnovsky and Irena Penev, together with later work of Louis Esperet, Laetitia Lemoine and Frederic Maffray, and first appeared in. The third result of this thesis is related to the problem of determining the complexity of coloring graphs which do not contain some fixed induced subgraph. We show that three-coloring graphs with no induced six-edge path or triangle can be done in polynomial-time. This is joint work with Maria Chudnovsky and Mingxian Zhong, and first appeared in. Working together with Flavia Bonomo, Oliver Schaudt, and Maya Stein, we have since simplified and extended this result.
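For context, coloring a graph greedily, assigning each vertex the smallest color not used by its neighbors, is straightforward; the sketch below shows that generic baseline (it is not the polynomial-time three-coloring algorithm for path-free graphs developed in the thesis):

```python
def greedy_coloring(adj):
    """Color vertices in dictionary order, giving each the smallest
    color absent from its already-colored neighbors."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# a triangle forces three colors
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(greedy_coloring(triangle))  # {0: 0, 1: 1, 2: 2}
```

Greedy coloring gives no optimality guarantee in general; the point of structural results like those above is that for restricted classes, such as graphs with no induced six-edge path or triangle, an exact coloring question becomes tractable.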
700

Online Algorithms for Dynamic Resource Allocation Problems

Wang, Xinshang January 2017
Dynamic resource allocation problems are everywhere. Airlines reserve flight seats for those who purchase flight tickets. Healthcare facilities reserve appointment slots for patients who request them. Freight carriers such as motor carriers, railroad companies, and shipping companies pack containers with loads from specific origins to destinations. We focus on optimizing such allocation problems where resources need to be assigned to customers in real time. These problems are particularly difficult to solve because they depend on random external information that unfolds gradually over time, and the number of potential solutions is overwhelming to search through by conventional methods. In this dissertation, we propose viable allocation algorithms for industrial use, fully leveraging data and technology to produce gains in efficiency, productivity, and usability of new systems. The first chapter presents a summary of major methodologies used in modeling and algorithm design, and how the methodologies are driven by the size of accessible data. Chapters 2 to 5 present original research results on resource allocation problems, based on Wang and Truong (2017); Wang et al. (2015); Stein et al. (2017); Wang et al. (2016). The algorithms and models cover problems in multiple industries, from a small clinic that aims to better utilize its expensive medical devices, to a technology giant that needs a cost-effective, distributed resource-allocation algorithm in order to maintain the relevance of its advertisements to hundreds of millions of consumers.
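The simplest online allocation rule, accepting each request whenever capacity remains, makes a useful baseline for the problems described above (resource names and capacities here are hypothetical, and this greedy rule is not one of the dissertation's algorithms):

```python
def online_allocate(capacity, requests):
    """Greedy online allocation: requests arrive one at a time and are
    accepted iff their resource still has remaining capacity."""
    remaining = dict(capacity)
    decisions = []
    for resource in requests:
        if remaining.get(resource, 0) > 0:
            remaining[resource] -= 1
            decisions.append(True)
        else:
            decisions.append(False)
    return decisions

# two appointment slots on an MRI machine, three arrivals in sequence
print(online_allocate({"mri": 2}, ["mri", "mri", "mri"]))  # [True, True, False]
```

The research challenge is precisely that greedy acceptance can be far from optimal when later, more valuable requests are still to arrive, motivating policies that reserve capacity based on forecasts of future demand.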
