  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1021

Structural Analysis and Design of Lightweight Composite Mortar Barrel

Unknown Date (has links)
An 81-mm mortar barrel that is at least 50% lighter than the current steel barrel used in the M252 mortar system would prove advantageous for the Army. The desire for weight reduction was based on the Army's vision of future combat systems. The current barrel has a maximum rated pressure of 109 MPa (15,800 psi) and is capable of sustained fire rates of 15 rounds per minute. The concept of sheathing a steel liner with a lightweight material to meet the weight-saving goal while satisfying the performance requirements was investigated. The perceived need for lightweight mortars led to the study of composite materials, which are increasingly used because of their light weight, high strength-to-stiffness ratio, and high durability under severe loading environments. High temperatures, around 550°C (1,022°F), are produced in the mortar barrel during firing, yet few resins available today can withstand working temperatures even as high as 350°C (662°F). A thermal barrier material was therefore introduced between the steel liner and the composite sheath to reduce heat transmission to the sheath and thus the working temperature of the resin. Viable materials for the barrel were investigated and identified: 4340 steel for the liner, Nextel 610/Sialyte composite for the thermal barrier, and IM7/cyanate ester composite for the sheath. The lightweight composite mortar barrel was modeled and analyzed using the finite element analysis software ABAQUS to determine the integrity of the design against the maximum expected pressure and temperature loads. The failure strength analysis determined that the design could withstand the rated loads. The weight of the composite mortar barrel was evaluated to be 5.3 kg (11.68 lb), compared with 12.4 kg (27.4 lb) for the current steel barrel.
The composite mortar barrel design achieved a potential weight reduction of 57% compared to that of the current steel barrel. / A Thesis submitted to the Department of Industrial Engineering in partial fulfillment of the requirements for the degree of Master of Science. / Summer Semester, 2003. / July 11, 2003. / Composite Materials, Combat Systems, Weight Reduction / Includes bibliographical references. / Ben Wang, Professor Directing Thesis; Okenwa Okoli, Committee Member; Zhiyong Liang, Committee Member.
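The reported figures in the abstract above are internally consistent; a quick arithmetic check (values taken directly from the abstract):

```python
# Sanity check of the weight figures quoted in the abstract.
steel_kg = 12.4      # current M252 steel barrel
composite_kg = 5.3   # proposed steel-liner/composite-sheath barrel

reduction = (steel_kg - composite_kg) / steel_kg
print(f"weight reduction: {reduction:.1%}")  # -> weight reduction: 57.3%
```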
1022

Robust optimization for network-based resource allocation problems under uncertainty

Marla, Lavanya January 2007 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; and, (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2007. / Includes bibliographical references (p. 129-131). / We consider large-scale, network-based, resource allocation problems under uncertainty, with specific focus on the class of problems referred to as multi-commodity flow problems with time-windows. These problems are at the core of many network-based resource allocation problems. Inherent data uncertainty in the problem guarantees that deterministic optimal solutions are rarely, if ever, executed. Our work examines methods of proactive planning, that is, robust plan generation to protect against future uncertainty. By modeling uncertainties in data corresponding to service times, resource availability, supplies and demands, we can generate solutions that are more robust operationally, that is, more likely to be executed or easier to repair when disrupted. The challenges are the following: approaches to achieve robustness 1) can be extremely problem-specific and not general; 2) suffer from issues of tractability; or 3) have unrealistic data requirements. We propose in this work a modeling and algorithmic framework that addresses the above challenges. / (cont.) Our modeling framework involves a decomposition scheme that separates problems involving multi-commodity flows with time-windows into routing (that is, a routing master problem) and scheduling modules (that is, a scheduling sub-problem), and uses an iterative scheme to provide feedback between the two modules, both of which are more tractable than the integrated model. The master problem has the structure of a multi-commodity flow problem and the sub-problem is a set of network flow problems. This decomposition allows us to capture uncertainty while maintaining tractability. 
Uncertainty is captured in part by the master problem and in part by the sub-problem. In addition to solving problems under uncertainty, our decomposition scheme can also be used to solve large-scale resource allocation problems without uncertainty. As proof-of-concept, we apply our approach to a vehicle routing and scheduling problem and compare its solutions to those of other robust optimization approaches. Finally, we propose a framework to extend our robust, decomposition approach to the more complex problem of network design. / by Lavanya Marla. / S.M.
1023

Real-time Multi-period truckload routing problems

Limpaitoon, Tanachai January 2008 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; and, (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2008. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (p. 99-102). / In this thesis we consider a multi-period truckload pick-up and delivery problem dealing with real-time requests over a finite time horizon. We introduce the notion of postponement of requests, whereby the company can postpone some requests to the next day in order to improve its operational efficiency. The postponed requests must then be served on the next day. The daily costs of operation include costs associated with the trucks' empty travel distances and costs associated with postponement. The revenues are directly proportional to the length of job requests. We evaluate the profits of various re-optimization policies with the possibility of postponement. Another important aspect of trucking operations is the use of repositioning strategies, which exploit probabilistic knowledge about future demands; a new repositioning strategy is proposed here to support better decisions. For both notions, extensive computational results are provided under a general simulation framework. / by Tanachai Limpaitoon. / S.M.
1024

Discrete Optimization Problems in Popular Matchings and Scheduling

Powers, Vladlena January 2020 (has links)
This thesis focuses on two central classes of problems in discrete optimization: matching and scheduling. Matching problems lie at the intersection of different areas of mathematics, computer science, and economics. In two-sided markets, Gale and Shapley's model has been widely used and generalized to assign, e.g., students to schools and interns to hospitals. The goal is to find a matching that respects a certain concept of fairness called stability. This model has been generalized in many ways. Relaxing the stability condition to popularity makes it possible to overcome one of the main drawbacks of stable matchings: the fact that two individuals (a blocking pair) can prevent the matching from being much larger. The first part of this thesis is devoted to understanding the complexity of various problems around popular matchings. We first investigate maximum weighted popular matching problems. In particular, we show various NP-hardness results, while on the other hand proving that a popular matching of maximum weight (if any exists) can be found in polynomial time if the input graph has bounded treewidth. We also investigate algorithmic questions on the relationship between popular, stable, and Pareto optimal matchings. The last part of the thesis deals with a combinatorial scheduling problem arising in cyber-security. Moving target defense strategies help mitigate cyber attacks. We analyze a strategic game, PLADD, which is an abstract model for these strategies.
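The popularity notion discussed above amounts to a head-to-head election between two matchings: each agent votes for the matching that gives it a preferred partner. A minimal sketch of that comparison, with agents, preference lists, and matchings invented purely for illustration:

```python
# Head-to-head "popularity" comparison of two matchings (illustrative data).
# Lower rank index = more preferred; being unmatched is worse than any listed partner.

def vote(pref, partner_a, partner_b):
    """+1 if this agent prefers matching A, -1 if it prefers B, 0 if indifferent."""
    rank = {p: i for i, p in enumerate(pref)}
    ra = rank.get(partner_a, len(pref))  # unmatched ranks below every listed partner
    rb = rank.get(partner_b, len(pref))
    return (ra < rb) - (ra > rb)

def popularity_margin(prefs, match_a, match_b):
    """Sum of votes over all agents; > 0 means A is more popular than B."""
    return sum(vote(prefs[x], match_a.get(x), match_b.get(x)) for x in prefs)

prefs = {
    "a1": ["b1", "b2"], "a2": ["b1"],
    "b1": ["a2", "a1"], "b2": ["a1"],
}
M1 = {"a1": "b1", "b1": "a1"}                           # leaves a2, b2 unmatched
M2 = {"a1": "b2", "b2": "a1", "a2": "b1", "b1": "a2"}   # larger matching
print(popularity_margin(prefs, M2, M1))  # -> 2, so M2 wins the election
```

Here the larger matching M2 is more popular even though a1 and b1 (a blocking pair in the stability sense) both prefer M1, illustrating how popularity can admit larger matchings than stability.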
1025

Tractable Policies in Dynamic Robust Optimization

El Housni, Omar January 2020 (has links)
In many sequential decision problems, uncertainty is revealed over time and we need to make decisions in the face of uncertainty. This is a fundamental problem arising in many applications such as facility location, resource allocation and capacity planning under demand uncertainty. Robust optimization is an approach to model uncertainty where we optimize over the worst-case realization of parameters within an uncertainty set. While computing an optimal solution in dynamic robust optimization is usually intractable, affine policies (or linear decision rules) are widely used as an approximate solution approach. However, there is a stark contrast between the observed good empirical performance and the bad worst-case theoretical performance bounds. In the first part of this thesis, we address this stark contrast between theory and practice. In particular, we introduce a probabilistic approach in Chapter 2 to analyze the performance of affine policies on randomly generated instances and show they are near-optimal with high probability under reasonable assumptions. In Chapter 3, we study these policies under important models of uncertainty such as budget of uncertainty sets and intersection of budgeted sets and show that affine policies give an optimal approximation matching the hardness of approximation. In the second part of the thesis and based on our analysis of affine policies, we design new tractable policies for dynamic robust optimization. In particular, in Chapter 4, we present a tractable framework to design piecewise affine policies that can be computed efficiently and improve over affine policies for many instances. In Chapter 5, we introduce extended affine policies and threshold policies and show that their performance guarantees are significantly better than previous policies. Finally, in Chapter 6, we study piecewise static policies and their limitations for solving some classes of dynamic robust optimization problems.
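The affine policies (linear decision rules) discussed above restrict later-stage decisions to be affine functions of the uncertainty revealed so far; schematically, in notation of our own choosing rather than the thesis's:

```latex
x_t(\xi) \;=\; x_t^{0} \;+\; \sum_{s < t} W_{t,s}\,\xi_s ,
```

where the intercepts $x_t^{0}$ and sensitivity matrices $W_{t,s}$ are here-and-now decision variables chosen so that the policy is feasible for every realization $\xi$ in the uncertainty set. Optimizing over this restricted class is tractable for many uncertainty sets, which is what makes affine policies a standard approximation despite their worst-case gaps.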
1026

Extending and Simplifying Existing Piecewise-Linear Homotopy Methods for Solving Nonlinear Systems of Equations

Unknown Date (has links)
This dissertation research extends and simplifies existing piecewise-linear homotopy (PL) methods to solve G(x) = 0, with G : ℝⁿ → ℝᵐ. Existing PL methods are designed to solve F(x) = 0, with F : ℝⁿ → ℝⁿ and some related point-to-set mappings. PL methods are a component of what are also known as numerical continuation methods, and they are known for being globally convergent. First, we present a new PL method for computing zeros of functions of the form ƒ : ℝⁿ → ℝ by mimicking classical PL methods for computing zeros of functions of the form ƒ : ℝ → ℝ. Our PL method avoids traversing subdivisions of ℝⁿ × [0, 1] and instead uses an object that we refer to as a triangulation-graph, which is essentially a triangulation of ℝ × [0, 1] with hypercubes of ℝⁿ as its vertices. The hypercubes are generated randomly, and a sojourn time of an associated discrete-time Markov chain is used to show that not too many cubes are generated. Thereafter, our PL method is applied to solving G(x) = 0 for G : ℝⁿ → ℝᵐ under inequality constraints. The resultant method for solving G(x) = 0 translates into a new type of iterative method for solving systems of linear equations. Some computational illustrations are reported. A possible application to optimization problems is also indicated as a direction for further work. / A Dissertation submitted to the Department of Industrial and Manufacturing Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Spring Semester 2017. / March 13, 2017. / discrete-time Markov chain, PL homotopy, sojourn time, solving equations / Includes bibliographical references. / Samuel Awoniyi, Professor Directing Dissertation; Simon Foo, University Representative; Chiwoo Park, Committee Member; Arda Vanli, Committee Member.
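For context, the classical one-dimensional idea the abstract says is being mimicked is sign-change bracketing: track an interval on which ƒ changes sign and shrink it. A minimal bisection sketch of that classical method (not the dissertation's n-dimensional triangulation-graph construction):

```python
# Classical 1-D zero finding by sign-change bracketing (bisection).
# A sketch of the textbook idea only; the dissertation generalizes this to R^n.

def bisect_zero(f, a, b, tol=1e-10):
    """Return x in [a, b] with f(x) ~ 0, given f(a) and f(b) have opposite signs."""
    fa = f(a)
    assert fa * f(b) < 0, "need a sign change on [a, b]"
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:   # zero lies in the left half
            b = m
        else:                # zero lies in the right half
            a, fa = m, f(m)
    return 0.5 * (a + b)

root = bisect_zero(lambda x: x * x - 2.0, 0.0, 2.0)
print(root)  # ~ 1.41421356... (sqrt(2))
```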
1027

Evolutionary Dynamics of Large Systems

Nikhil Nayanar (10702254) 06 May 2021 (has links)
Several socially and economically important real-world systems comprise large numbers of interacting constituent entities. Examples include the World Wide Web and Online Social Networks (OSNs). Developing the capability to forecast the macroscopic behavior of such systems based on the microscopic interactions of the constituent parts is of considerable economic importance.

Previous researchers have investigated phenomenological forecasting models in such contexts as the spread of diseases in the real world and the diffusion of innovations in OSNs. These forecasting models work well in predicting future states of a system that are at or near equilibrium. However, forecasting non-equilibrium states, such as the transient emergence of hotspots in web traffic, remains a challenging problem. In this thesis we investigate a hypothesis, rooted in Ludwig Boltzmann's celebrated H-theorem, that the evolutionary dynamics of a large system, such as the World Wide Web, is driven by the system's innate tendency to evolve towards a state of maximum entropy.

Whereas closed systems may be expected to evolve towards a state of maximum entropy, most real-world systems are not closed. However, the stipulation that if a system is closed then it should asymptotically approach a state of maximum entropy provides a strong constraint on the inverse problem of formulating the microscopic interaction rules that give rise to the observed macroscopic behavior. We make the constraint stronger by insisting that, if closed, a system should evolve monotonically towards a state of maximum entropy, and we formulate microscopic interaction rules consistent with the stronger constraint.

We test the microscopic interaction rules that we formulate by applying them to two real-world phenomena: the flow of web traffic in the gaming forums on Reddit and the spread of the Covid-19 virus. We show that our hypothesis leads to a statistically significant improvement over existing models in predicting the traffic flow in gaming forums on Reddit. Our interaction rules are also able to qualitatively reproduce the heterogeneity in the number of COVID-19 cases across cities around the globe. These experiments provide supporting evidence for our hypothesis, suggesting that our approach is worthy of further investigation.

In addition to the above stochastic model, we also study a deterministic model of attention flow over a network and establish sufficient conditions that, when met, signal imminent parabolic accretion of attention at a node.
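The monotone-entropy constraint described above has a simple discrete analogue that can be illustrated directly: under a doubly stochastic transition matrix (standing in here for closed-system dynamics; the matrix and starting distribution are invented for the demo), the Shannon entropy of the state distribution never decreases and approaches its maximum.

```python
# Discrete analogue of the H-theorem: entropy is monotone under a
# doubly stochastic Markov chain and converges to the maximum ln(n).
import math

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

def step(p, P):
    n = len(p)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.5, 0.3, 0.2],   # doubly stochastic: rows AND columns sum to 1
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]
p = [1.0, 0.0, 0.0]     # start far from equilibrium (zero entropy)

hs = []
for _ in range(20):
    hs.append(entropy(p))
    p = step(p, P)

assert all(h2 >= h1 - 1e-12 for h1, h2 in zip(hs, hs[1:]))  # monotone increase
print(round(hs[-1], 4), round(math.log(3), 4))  # converges to the maximum ln 3
```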
1028

Modern Dynamic Programming Approaches to Sequential Decision Making

Min, Seungki January 2021 (has links)
Dynamic programming (DP) has long been an essential framework for solving sequential decision-making problems. However, when the state space is intractably large or the objective contains a risk term, the conventional DP framework often fails to work. In this dissertation, we investigate such issues, particularly those arising in the context of multi-armed bandit problems and risk-sensitive optimal execution problems, and discuss the use of modern DP techniques, such as information relaxation, policy gradient, and state augmentation, to overcome these challenges. We develop frameworks that formalize and improve existing heuristic algorithms (e.g., Thompson sampling, aggressive-in-the-money trading), while shedding new light on the adopted DP techniques.
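Thompson sampling, one of the heuristics the abstract says the dissertation's frameworks formalize, can be sketched in a few lines for Bernoulli bandits. The arm probabilities and horizon below are invented for the demo; this is the textbook heuristic, not the dissertation's analysis of it.

```python
# Minimal Thompson sampling for Bernoulli bandits (illustrative parameters).
import random

def thompson(true_probs, horizon, rng):
    k = len(true_probs)
    wins = [1] * k    # Beta(1, 1) uniform priors on each arm's success rate
    losses = [1] * k
    pulls = [0] * k
    for _ in range(horizon):
        # Sample a success rate from each arm's posterior; play the best sample.
        arm = max(range(k), key=lambda a: rng.betavariate(wins[a], losses[a]))
        pulls[arm] += 1
        if rng.random() < true_probs[arm]:
            wins[arm] += 1
        else:
            losses[arm] += 1
    return pulls

rng = random.Random(0)  # fixed seed for reproducibility
pulls = thompson([0.2, 0.5, 0.8], horizon=2000, rng=rng)
print(pulls)  # the 0.8 arm should dominate the pull counts
```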
1029

Managing Stochastic Uncertainty in Dynamic Marketplaces

Lu, Jiaqi January 2021 (has links)
Firms' operations management decisions are often complicated by various types of uncertainties, ranging from micro-level customer behavior to macro-level economic conditions. Operating in the presence of uncertainties and volatilities is a challenging task, one that requires careful mathematical analysis and tailored treatment based on the uncertainty's characteristics. In this thesis we provide three distinct studies on managing stochastic uncertainty in dynamic marketplaces. The first study considers agents' dynamic interactions in a large matching market. A pair must inspect their compatibility in order to form a match. We study a type of market failure called 'information deadlock' that may arise when pairs are only willing to inspect their most preferred prevailing partner. Under information deadlock, a large fraction of agents wait in the market for a long time (if not forever) in spite of there being opportunities remaining in their consideration sets. Using advanced tools from statistical physics and random graph theory, we derive how the size of the deadlock is affected by the market's primitives. We also show that information deadlock is prevalent in a wide range of markets. Our second study tackles a service firm's problem of choosing between a safe service mode and a risky service mode when serving a customer who might probabilistically churn. One key behavioral feature of the customer that we consider is recency bias: his happiness with the firm (which crucially determines his churn risk at any given time) depends more heavily on his more recent experiences. We show, by solving a stochastic control problem, that the firm should be risk-averse when the customer is marginally satisfied and risk-seeking when the customer is marginally unsatisfied. The optimal sandwich policy can significantly outperform the naive myopic policy in terms of customer lifetime value. Our third study deals with a dual sourcing problem under fluctuating economic conditions.
We model this via an underlying Markov modulated state-of-the-world which affects the two suppliers’ cost structures, capacity limits and demands. We develop two approaches to show how the optimal combined ordering strategy from the two suppliers, along with a salvaging policy, can be efficiently computed, and characterize the relatively simple structure of the optimal policies. Interestingly, we find that the firm can, by exploiting the dual sourcing options, benefit from increased environmental volatilities that affect the suppliers’ cost structures or capacity limits.
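The recency-biased "happiness" state in the second study can be illustrated with an exponentially weighted average that tilts toward recent experiences. The weighting scheme, the parameter beta, and the experience streams below are our own illustrative assumptions, not the thesis's model.

```python
# Illustrative recency-biased happiness: an exponentially weighted average
# in which later experiences carry geometrically larger weight.

def happiness(experiences, beta=0.5):
    """beta in (0, 1); larger beta = stronger recency bias."""
    h = 0.0
    for x in experiences:
        h = (1 - beta) * h + beta * x  # newest experience gets weight beta
    return h

print(happiness([1, 1, 1, -1]))  # -> -0.0625: one recent bad experience dominates
print(happiness([-1, 1, 1, 1]))  # -> 0.8125: the same bad experience, long ago, barely matters
```

The asymmetry between the two streams, identical experiences in different orders, is exactly the behavioral feature that makes the churn risk depend on *when* the customer was disappointed, not just how often.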
1030

Harmonization of aviation user charges in the North Atlantic airspace

Gaudet, Megan Brett January 2008 (has links)
Thesis (S.M. in Transportation)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; and, (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2008. / Includes bibliographical references (p. 91-93). / The purpose of this thesis is to explore various harmonization scenarios for North Atlantic en route user charges. The current charging system involves eight countries, each with its own method for computing user charges. The scope of the research is limited to revenue-neutral approaches for service providers, meaning each air navigation service provider (ANSP) receives constant total charges in 2006. Therefore, the viability of different scenarios is compared in terms of their impact on airspace users. Two different interpretations of a "harmonized" system are considered. The first explores the harmonization of only the charging methodology, allowing service providers to set and collect their own charges. The second alternative fully harmonizes the North Atlantic user charges, resulting in a single charge per flight. Within each of these alternatives, four different charge scenarios were modeled using 2006 data: a flat charge, a distance-based rate, a combined weight-and-distance charge, and a fixed-plus-variable charge. Utilizing 47,516 North Atlantic flights drawn from a systematic random sampling of days in 2006, the average North Atlantic user charge was determined to be $393, ranging from less than $1 to $3,868. The magnitude of the average North Atlantic user charge is small relative to the total flight costs airlines incur, so all harmonization approaches will have only second-order effects on the airlines' bottom line. Thus, the harmonization of the region's user charges presents a unique opportunity to develop a more rational system of charges without large disruptions to the majority of users.
The thesis explores the impact of the various charge scenarios on user stakeholder groups in terms of aircraft size, North Atlantic distance, and origin-destination regions. / (cont.) The results show a distance-based rate imposed at the ANSP level would result in the smallest disruption to users' charges compared to the baseline system. However, any semi-independent harmonization approach sacrifices the efficiencies which could be realized under a fully harmonized system. Of the fully harmonized methods, the Eurocontrol formula with a service unit rate of $7.28 is the least disruptive to the baseline user charges. / by Megan Brett Gaudet. / S.M. / S.M. in Transportation
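The Eurocontrol-style formula mentioned above combines a unit rate with distance and weight factors; to our understanding it takes the form charge = unit_rate × (distance/100 km) × sqrt(MTOW/50 t), though the thesis should be consulted for the exact variant used. A sketch with invented flight parameters (the $7.28 unit rate is the one quoted in the abstract):

```python
# Hedged sketch of a Eurocontrol-style route charge.
# Formula assumed: unit_rate * (distance_km / 100) * sqrt(MTOW_tonnes / 50);
# the example aircraft and distance are illustrative, not from the 2006 sample.
import math

def route_charge(unit_rate, distance_km, mtow_tonnes):
    distance_factor = distance_km / 100.0
    weight_factor = math.sqrt(mtow_tonnes / 50.0)
    return unit_rate * distance_factor * weight_factor

# e.g. a ~350 t widebody crossing ~3,000 km of charged airspace
charge = route_charge(7.28, 3000.0, 350.0)
print(round(charge, 2))  # -> 577.83
```

The square-root weight factor is what makes such a formula less punishing to heavy aircraft than a linear weight charge, one reason it spreads disruption relatively evenly across aircraft-size stakeholder groups.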
