  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
831

An analysis of building a submarine base in the Arctic

Best, Truman J. 03 1900
Approved for public release; distribution is unlimited / This analysis addresses the value of a submarine base in the Arctic in relation to the growing Soviet threat in that region, and the feasibility of constructing and operating such a base. Location, command and control, force operation, logistic support, and appropriate force size are elements of the analysis. Also included is the cost-effectiveness of the Arctic submarine base in both peacetime and wartime situations. Based upon this limited analysis, such a base appears to be only marginally cost-effective in peacetime but substantially so in wartime. / http://archive.org/details/analysisofbuildi00best / Lieutenant, United States Navy
832

A GAMS-based model of the U.S. Army Wartime Ammunition Distribution System for the Corps level

Cain, Mark J. 03 1900
Approved for public release; distribution is unlimited / The U.S. Army Wartime Ammunition Distribution System (WADS) will experience an unprecedented demand for ammunition under the operational concept of AirLand Battle. To meet demand, proper storage facility location and an efficient flow through the distribution network will be required. Using information from Army Field Manuals, maps, and simulation data for demand, both a mixed integer program (MIP) and a sequential, optimization-based heuristic are developed to model the WADS. The General Algebraic Modeling System (GAMS) is used to implement both models. The sequential heuristic locates ammunition facilities with a binary integer program and then directs ammunition through those facilities utilizing a network flow model with side constraints. The MIP integrates location and flow decisions in the same model. For a general scenario, the sequential heuristic locates a 21 node, 30 arc network with ammunition flows over 30 time periods in 22 CPU seconds on an IBM 3033AP. For the same scenario the MIP obtains a solution for only a 3 time period problem in 87 CPU seconds. Keywords: Ammunition, Integer programming, Heuristic, Networks / http://archive.org/details/gamsbasedmodelof00cain / Captain, United States Army
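The two-stage idea described above — first choose which ammunition facilities to open, then route flows through them — can be illustrated on a toy instance. The sketch below is a hypothetical stand-in, not the thesis's GAMS model: brute-force enumeration replaces the binary integer program, and a cheapest-arc greedy assignment replaces the network flow model with side constraints. All site names, costs, capacities, and demands are invented.

```python
from itertools import combinations

# Hypothetical data: 3 candidate ammunition supply points, 2 demand nodes.
fixed_cost = {"A": 50.0, "B": 40.0, "C": 60.0}   # cost of opening a site
capacity   = {"A": 100, "B": 80, "C": 120}       # tons a site can issue
demand     = {"D1": 90, "D2": 60}                # tons required
haul_cost  = {                                   # per-ton transport cost
    ("A", "D1"): 1.0, ("A", "D2"): 3.0,
    ("B", "D1"): 2.0, ("B", "D2"): 1.5,
    ("C", "D1"): 2.5, ("C", "D2"): 1.0,
}

def route_flow(open_sites):
    """Greedy transportation stand-in for the network-flow stage:
    serve demand over the cheapest open arcs first."""
    cap = {s: capacity[s] for s in open_sites}
    need = dict(demand)
    flow, cost = {}, 0.0
    for (s, d), c in sorted(haul_cost.items(), key=lambda kv: kv[1]):
        if s not in cap:
            continue
        q = min(cap[s], need[d])
        if q > 0:
            flow[(s, d)] = q
            cost += q * c
            cap[s] -= q
            need[d] -= q
    if any(v > 0 for v in need.values()):
        return None, float("inf")  # these sites cannot meet demand
    return flow, cost

# Location stage: enumerate site subsets (binary-IP surrogate on a toy scale).
best = None
for r in range(1, len(fixed_cost) + 1):
    for sites in combinations(fixed_cost, r):
        flow, var_cost = route_flow(sites)
        if flow is None:
            continue
        total = var_cost + sum(fixed_cost[s] for s in sites)
        if best is None or total < best[0]:
            best = (total, sites, flow)

print(best[0], best[1])
```

On this instance the heuristic opens sites A and C at a total cost of 260; a full MIP would make the location and flow decisions jointly rather than in sequence.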
833

Model Characteristics and Properties of Nanorobots in the Bloodstream

Unknown Date
Researchers have many visions of what nanorobots will be like and what they will do. Most foresee nanorobots performing medical functions such as repairing cells, seeking and destroying harmful pathogens, and clearing cholesterol buildup from arteries. Many questions remain about what exactly a nanorobot needs in order to perform these medical functions. This project is not concerned with the design of the nanorobot itself, but focuses on the characteristics and parameters that should be considered for a nanorobot to function specifically in the bloodstream of a human body. To investigate this, a mobile robot was used to traverse a scaled model of the bloodstream. The scale model consisted of clear tubing or piping enclosed in a loop and filled with a liquid matched closely to the viscosity of blood. The liquid contained particles to emulate the obstacles a nanorobot would encounter, such as red blood cells and other molecules. The simulation maintained a continuous flow at the rate and pressure expected in the bloodstream. The pipe size was calculated by geometric similarity: the ratio of the diameter of a particular blood vessel to the assumed diameter of a nanorobot was set equal to the ratio of the pipe diameter (the unknown) to the diameter of the mobile robot, D_BV / D_NR = D_Pipe / D_Sub. This gave a pipe diameter of 6.66 inches; however, above 4 inches, standard pipe sizes come in 2-inch increments, so 6-inch pipe was used. With this diameter fixed, the Reynolds number is the pipe diameter times the fluid velocity divided by the kinematic viscosity of the fluid, Re = (D_Pipe * V) / ν. Setting the Reynolds number of the bloodstream equal to the Reynolds number of the model, the velocity in the pipe could be isolated.
With the velocity known, the flow rate was evaluated by multiplying the velocity by the cross-sectional area of the pipe (the flow rate was 0.2021392 gallons/minute). With all conditions met for an accurate model of the bloodstream, the physical model was designed and constructed, and testing with the mobile robot was then conducted to determine how the robot functions in the simulated environment. The results of the experiment showed that the mobile robot is influenced by the environment. Its travel speed decreases as the viscosity of the fluid increases. The particles in the fluid also affect the speed, along with the flow of the fluid. Mobility and control of the mobile robot were hindered by the increase of viscosity and the presence of particles, and further hindered when traveling against the flow of the fluid. Stability of the craft increased with viscosity but was chaotic when traveling with particles. The performance of the mobile robot was affected by the conditions and parameters of the bloodstream. / A Thesis submitted to the Department of Industrial and Manufacturing Engineering in partial fulfillment of the requirements for the degree of Master of Science. / Degree Awarded: Spring Semester 2005. / Date of Defense: April 4, 2005. / Bloodstream, Nanorobots, Nanorobot / Includes bibliographical references. / Yaw A. Owusu, Professor Directing Thesis; Rodney G. Roberts, Outside Committee Member; Reginald Parker, Committee Member; Chun Zhang, Committee Member.
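The two-step similarity arithmetic in this abstract can be sketched as follows. All input values are placeholder assumptions (the vessel, nanorobot, and robot diameters are chosen only so the geometric step lands near a 6.66-inch pipe; they are not the thesis's data, and the resulting flow rate will not reproduce its 0.2021392 gal/min figure). The point is the structure: geometric similarity fixes the pipe diameter, then Reynolds-number matching fixes the velocity and hence the flow rate.

```python
import math

# Hypothetical inputs; the thesis's exact values are not reproduced here.
d_vessel = 8.0e-6       # m, capillary diameter (assumed)
d_nano   = 2.4e-6       # m, assumed nanorobot diameter
d_robot  = 2.0          # in, mobile robot diameter (assumed)

# Geometric similarity: D_BV / D_NR = D_Pipe / D_Sub.
d_pipe_in = d_robot * (d_vessel / d_nano)          # pipe diameter in inches
d_pipe = d_pipe_in * 0.0254                        # m

# Dynamic similarity: match Reynolds numbers, Re = (D * V) / nu.
nu_blood = 3.0e-6       # m^2/s, kinematic viscosity of blood (approx.)
v_blood  = 1.0e-3       # m/s, capillary flow speed (assumed)
re_blood = d_vessel * v_blood / nu_blood

nu_model = 3.0e-6       # m^2/s, model fluid tuned to blood-like viscosity
v_pipe   = re_blood * nu_model / d_pipe            # m/s in the pipe

# Volumetric flow rate Q = V * A, reported in gallons per minute.
area  = math.pi * (d_pipe / 2) ** 2                # m^2
q_gpm = v_pipe * area * 15850.3                    # 1 m^3/s = 15850.3 gal/min

print(round(d_pipe_in, 2), q_gpm)
```

Note that matching the (very low) capillary Reynolds number at bench scale drives the pipe velocity far below the thesis's reported flow rate, which suggests its model matched other conditions as well; the sketch only illustrates the stated equations.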
834

A profit maximization model in a two-echelon supply chain management : distribution and pricing strategies

Mao, Ye, 1978- January 2003
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; and, (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2003. / Includes bibliographical references (p. 109-111). / Distribution and pricing strategies play a central role in the field of supply chain management. Heuristic approaches to the vehicle routing problem (VRP) are usually used to design optimal delivery routes to serve geographically dispersed, price-elastic customers. There is a rich literature discussing either the manufacturer's distribution strategy or its pricing initiatives. The purpose of this thesis is to develop a profit maximization model that presents an integrated distribution and pricing strategy for any company facing such issues. We first examine a simplified scenario in which all customers are located in the same delivery region and their demand is deterministic. Both truckload (TL) and less-than-truckload (LTL) shipment strategies are analyzed and compared. We later extend our findings to multiple delivery regions and discuss the impact of the manufacturer's pricing flexibility on its profit. Then we relax the assumption of deterministic customer demand and introduce the safety stock cost. Finally, the application to cross-delivery-region situations is shown. Although some of our assumptions simplify our model, we believe that it provides insight into more complex supply chain management problems. / by Ye Mao. / S.M.
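The TL-versus-LTL comparison this thesis analyzes can be caricatured with a simple cost model: truckload pays a fixed cost per (possibly partial) truck dispatched, while less-than-truckload pays a higher per-unit rate with no fixed charge. The parameters below are invented for illustration and are not from the thesis.

```python
import math

def tl_cost(volume, truck_capacity, cost_per_truck):
    """Truckload: pay per truck dispatched, whether or not it is full."""
    return math.ceil(volume / truck_capacity) * cost_per_truck

def ltl_cost(volume, rate_per_unit):
    """Less-than-truckload: pay a higher per-unit rate, no fixed truck cost."""
    return volume * rate_per_unit

# Hypothetical parameters: 30-unit truck at $300, versus $15/unit LTL.
# With these numbers, LTL is cheaper for small shipments (below 20 units
# here) and TL wins once the truck is reasonably full.
for v in (10, 25, 30, 45):
    tl, ltl = tl_cost(v, 30, 300.0), ltl_cost(v, 15.0)
    print(v, tl, ltl, "TL" if tl <= ltl else "LTL")
```

The same structure extends to the thesis's richer setting by layering price-elastic demand and safety-stock cost on top of the shipment-cost choice.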
835

Polypyrrole as a Smart Material for Phosphate Contaminate Detection in Water

Unknown Date
Polypyrrole is a conductive polymer with the potential to be used in many systems where conductivity can be studied. Polypyrrole, when combined with a catalyst such as calcium acetate, may be able to provide a method of detecting phosphate in water systems. The hypothesis of this research explores the concept that a polypyrrole sensor could be manufactured via a casting method to detect phosphates in water. A designed experiment varying three primary factors supported an ANOVA and a comparison of means for three response variables: voltage, resistance, and conductivity (calculated). Careful attention was paid to the values of the response variables across the geometry of the sensor prototypes. The sensor was evaluated for accuracy, sensitivity after multiple uses, and selectivity. After examining all of the data, the information obtained did not disprove the hypothesis; however, it pointed to calcium acetate as the most powerful factor for the polypyrrole sensor in the accuracy test. The sensitivity and selectivity tests had mixed findings. Samples without calcium acetate near the surface did not produce great changes in the response variables. The work presented in this thesis is an analysis of the raw data and materials used for generating the polypyrrole sensor prototype, in order to introduce a new concept for manufacturing sensors using advanced materials, namely smart structures as sensors. / A Thesis submitted to the Department of Industrial Engineering in partial fulfillment of the requirements for the degree of Master of Science. / Degree Awarded: Spring Semester, 2006. / Date of Defense: March 17, 2006. / Polypyrrole, Water Detection, Sensors, Smart Material, Smart Sensor, Phosphate, Calcium Acetate / Includes bibliographical references. / Yaw A. Owusu, Professor Directing Thesis; Reginald Parker, Committee Member; Peter N. Kalu, Committee Member.
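The "conductivity (calculated)" response variable can be illustrated with the standard bulk formula σ = L / (R·A), with resistance obtained from a voltage and current measurement via Ohm's law. The electrode spacing, cross-section, and readings below are hypothetical, not from the experiment.

```python
def resistance(voltage, current):
    """Ohm's law: R = V / I, in ohms."""
    return voltage / current

def conductivity(r_ohms, length_m, area_m2):
    """Bulk conductivity sigma = L / (R * A), in S/m."""
    return length_m / (r_ohms * area_m2)

# Hypothetical sensor strip: 2 cm between electrodes, 1 mm x 5 mm cross-section.
L, A = 0.02, 1e-3 * 5e-3
r = resistance(1.5, 0.003)      # 1.5 V drives 3 mA -> 500 ohm
sigma = conductivity(r, L, A)   # -> 8.0 S/m
print(r, sigma)
```

Repeating this calculation at several points across a cast sensor is one way to quantify the spatial variation in the response variables that the study tracked.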
836

Developing institutional decision support systems: A system study, simulation analysis, and DSS design for taxicab dispatching

Unknown Date
The nature of decision support systems (DSS) when applied to complex operational control problems was examined, with particular attention given to the case of taxicab dispatching. The traditional perspective has been that DSS are primarily relevant to strategic planning problems, so much research concerning the design of DSS has focused on the problems and decision-making needs of upper-level managers. The major premise of this research was that DSS concepts are not restricted to the domain of strategic planning problems (ad hoc DSS), but have significant implications for operational control problems as well (institutional DSS). The case of taxicab dispatching is representative of these types of problems. / The objectives/achievements of this research were fourfold. First, a DSS design was proposed to serve as an effective and practical approach to improving the decision effectiveness of taxicab dispatchers. Two research topics emanating from this design were pursued and constitute the other three objectives of this study. / The first research topic can be viewed as a "sub"-problem emanating from the DSS design and comprised the second and third objectives of the study. First, the taxi dispatching system was modeled using black box and causal loop diagramming techniques to identify and isolate the crucial variable relationships in the system. This was converted into a parametric, experimental computer simulation model using the simulation language SLAM II (second objective), which provided a tool to experiment with a variety of dispatching policies and fleet sizes to assess their impact on the two primary objectives of the system: profit and service quality (third objective). / The second research topic stemming from the DSS design, and fourth objective of the research, was a "super"-problem requiring the development of a general DSS design framework to aid in the development of institutional DSS.
Extrapolating from the taxicab dispatching DSS design and other case examples, and drawing from the theoretical literature, a framework was developed which identified the major design characteristics of institutional DSS. / Source: Dissertation Abstracts International, Volume: 50-06, Section: A, page: 1466. / Major Professor: Thomas D. Clark, Jr. / Thesis (Ph.D.)--The Florida State University, 1989.
837

Optimization of Ultraviolet Lamp Placement for the Curing of Composite Manufactured by the Ridft Process

Unknown Date
The ultraviolet curing technique, when applied to composite manufacturing processes, allows UV rays to irradiate the component, reducing the curing time of composite materials from hours to minutes. This technique has been demonstrated with the Resin Infusion between Double Flexible Tooling (RIDFT) process for flat components. However, the curing of components other than flat ones remains a challenge. Applying UV curing to three-dimensional geometries mostly requires several lamps positioned around the component. Not only is this expensive, but it may allow UV exposure overlap, resulting in excessive curing of some areas of the component. Trial-and-error positioning, large arrays of UV lamps with components moving on a conveyor, and robotically actuated UV lamps have been employed in some quarters. These methods are too expensive, time consuming, and complex, negating the simplicity that composite manufacturing processes tend to offer. To tackle this problem, a general-purpose model was proposed. Solving this model involves two stages: numerical integration using the Gauss quadrature method, and an optimization problem solved with the Davidon-Fletcher-Powell (DFP) algorithm. The model predicts the UV lamps' optimum positions and generates the UV intensity on predefined sections of the composite substrate. Furthermore, three-dimensional composite materials were manufactured using different manufacturing parameters. Mechanical and rheological tests were carried out to determine uniformity of curing; the results of these tests were compared with three-dimensional catalytically cured composite components. The UV-cured composites have mechanical properties comparable with the catalytically cured composites. / A Thesis submitted to the Department of Industrial and Manufacturing Engineering in partial fulfillment of the requirements for the degree of Master of Science. / Degree Awarded: Spring Semester, 2009.
/ Date of Defense: April 10, 2009. / Photoinitiator, forming / Includes bibliographical references. / Okenwa I. Okoli, Professor Directing Thesis; Samuel A. Awoniyi, Committee Member; David A. Jack, Committee Member.
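The two solution stages named in this abstract — Gauss quadrature for the intensity integral and a DFP search for lamp placement — can be sketched in miniature. The snippet below is an assumption-laden illustration, not the thesis's model: a point-source inverse-square intensity with cosine falloff, two-point Gauss-Legendre quadrature per surface section, and a coarse grid search standing in for the DFP quasi-Newton step; the geometry and units are invented.

```python
import math

def intensity(x_lamp, h, x):
    """Irradiance at surface point x from a point lamp at (x_lamp, h):
    inverse-square law with cosine incidence gives h / r^3."""
    r2 = (x - x_lamp) ** 2 + h ** 2
    return h / r2 ** 1.5

def section_dose(x_lamp, h, a, b):
    """Average intensity over section [a, b] by two-point Gauss-Legendre
    quadrature (nodes at mid +/- half/sqrt(3), equal weights)."""
    mid, half = (a + b) / 2, (b - a) / 2
    t = 1 / math.sqrt(3)
    return 0.5 * (intensity(x_lamp, h, mid - half * t)
                  + intensity(x_lamp, h, mid + half * t))

def worst_section(x_lamp, h, sections):
    """Dose received by the worst-lit predefined section."""
    return min(section_dose(x_lamp, h, a, b) for a, b in sections)

# Place one lamp over a 0.6-long substrate split into three sections,
# maximizing the worst section's dose. A grid search stands in for DFP.
sections = [(0.0, 0.2), (0.2, 0.4), (0.4, 0.6)]
h = 0.3
best_x = max((i * 0.01 for i in range(61)),
             key=lambda x: worst_section(x, h, sections))
print(round(best_x, 2))
```

By symmetry the single lamp lands over the substrate's center; with several lamps the same objective discourages the exposure overlap the abstract warns about.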
838

RETROSPECTIVE APPROXIMATION ALGORITHMS FOR MULTI-OBJECTIVE SIMULATION OPTIMIZATION ON INTEGER LATTICES

Kyle Cooper 10 June 2019
We consider multi-objective simulation optimization (MOSO) problems, that is, nonlinear optimization problems in which multiple simultaneous objective functions can only be observed with stochastic error, e.g., as output from a Monte Carlo simulation model. In this context, the solution to a MOSO problem is the efficient set, which is the set of all feasible decision points for which no other feasible decision point is at least as good on all objectives and strictly better on at least one objective. We are concerned primarily with MOSO problems on integer lattices, that is, MOSO problems where the feasible set is a subset of an integer lattice.

In the first study, we propose the Retrospective Partitioned Epsilon-constraint with Relaxed Local Enumeration (R-PεRLE) algorithm to solve the bi-objective simulation optimization problem on integer lattices. R-PεRLE is designed for sampling efficiency. It uses a retrospective approximation (RA) framework to repeatedly call the PεRLE sample-path solver at a sequence of increasing sample sizes, using the solution from the previous RA iteration as a warm start for the current RA iteration. The PεRLE sample-path solver is designed to solve the sample-path problem only to within a tolerance commensurate with the sampling error. It comprises a call to each of the Pε and RLE algorithms, in sequence. First, Pε searches for new points to add to the sample-path local efficient set by solving multiple constrained single-objective optimization problems. Pε places constraints to locate new sample-path local efficient points that are a function of the standard error away, in the objective space, from those already obtained. Then, the set of sample-path local efficient points found by Pε is sent to RLE, which is a local crawling algorithm that ensures the set is a sample-path approximate local efficient set. As the number of RA iterations increases, R-PεRLE provably converges to a local efficient set with probability one under appropriate regularity conditions. We also propose a naive, provably-convergent benchmark algorithm for problems with two or more objectives, called R-MinRLE. R-MinRLE is identical to R-PεRLE except that it replaces the Pε algorithm with an algorithm that updates one local minimum on each objective before invoking RLE. R-PεRLE performs favorably relative to R-MinRLE and the current state of the art, MO-COMPASS, in our numerical experiments. Our work points to a family of RA algorithms for MOSO on integer lattices that employ RLE for certification of a sample-path approximate local efficient set, and for which the convergence guarantees are provided in this study.

In the second study, we present the PyMOSO software package for solving multi-objective simulation optimization problems on integer lattices, and for implementing and testing new simulation optimization (SO) algorithms. First, for solving MOSO problems on integer lattices, PyMOSO implements R-PεRLE and R-MinRLE, which are developed in the first study. Both algorithms employ pseudo-gradients, are designed for sampling efficiency, and return solutions that, under appropriate regularity conditions, provably converge to a local efficient set with probability one as the simulation budget increases. PyMOSO can interface with existing simulation software and can obtain simulation replications in parallel. Second, for implementing and testing new SO algorithms, PyMOSO includes pseudo-random number stream management, implements algorithm testing with independent pseudo-random number streams run in parallel, and computes the performance of algorithms with user-defined metrics. For convenience, we also include an implementation of R-SPLINE for problems with one objective. The PyMOSO source code is available under a permissive open source license.
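The retrospective approximation loop at the heart of R-PεRLE — solve a sample-path problem at a growing sample size, warm-starting each iteration from the last — can be sketched on a one-dimensional, single-objective toy problem. This is not PyMOSO or R-PεRLE itself: the sample-path solver here is a plain integer local search, and it redraws samples on every comparison rather than fixing common random numbers within an iteration, as a faithful RA implementation would.

```python
import random

random.seed(1)

def noisy_obj(x):
    """Stochastic objective: quadratic with true minimum at x = 7,
    observed only through unit-variance Gaussian noise."""
    return (x - 7) ** 2 + random.gauss(0.0, 1.0)

def sample_mean(x, m):
    return sum(noisy_obj(x) for _ in range(m)) / m

def sample_path_solve(x0, m, lo=0, hi=20):
    """Integer local search on the size-m sample-path problem.
    (A real RA solver would fix one sample path per iteration.)"""
    x = x0
    while True:
        vals = {n: sample_mean(n, m)
                for n in (x - 1, x, x + 1) if lo <= n <= hi}
        best = min(vals, key=vals.get)
        if best == x:
            return x
        x = best

# Retrospective approximation: increasing sample sizes, warm starts.
x, m = 0, 4
for _ in range(8):
    x = sample_path_solve(x, m)   # warm start from the previous RA iterate
    m = int(m * 1.7) + 1          # grow the sample size each RA iteration

print(x)
```

Early iterations are cheap and may stall a step or two from the optimum; later iterations, with sampling error shrunk well below the objective gaps, pull the warm-started iterate onto the true minimizer with high probability.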
839

Optimized Reservoir Management for Downstream Environmental Purposes

Adams, Lauren 16 March 2019
In regulated rivers, reservoir operation decisions largely determine downstream river temperature and flow. Computational methods can minimize the risk and uncertainty of making regrettable environmental release decisions and aid operations planning and performance prediction. Mathematical modeling in particular can optimize the timing and magnitude of reservoir release decisions for downstream benefit while accounting for seasonal uncertainty, water storage impact, and competing water demands. This dissertation uses optimization and modeling techniques, modifying traditional optimization modeling to include temporal correlation in outcome variables and incorporating long-term planning and risk management into prescribed reservoir operations. The proposed method is implemented in one case, a) with a state variable that tracks outcome benefits over time (fish population size) and, in another case, b) with a maximin stochastic dynamic program solution algorithm that maximizes net operational benefit and minimizes worst-case outcomes (for cold-water habitat delivery). This method is particularly useful for environmental flow management, when the water quality and quantity of the river and reservoir in one time step affect the quantity and quality in the reservoir and the river in later periods. Better solutions with these methods internalize risk and hedge releases at the beginning of an operating season to maximize downstream benefit and reduce the probability of catastrophe for the season and future years. Maximizing the minimum cold-water habitat area over the months of a season or over multiple years, or maximizing a river indicator variable explicitly, could help maximize, for example, an out-migrating salmon smolt population downstream. The method is demonstrated with a case study optimizing environmental releases from Folsom Dam and another optimizing temperature management from Shasta Dam in northern California. These results inform general rules for environmental flow and temperature management of reservoirs, with specific policy recommendations for both Folsom and Shasta reservoirs. In both cases, the added value from employing hedging rules helps reservoir operations minimize the risk of environmental catastrophe and conserve storage both within an operating season and across years.
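The maximin idea described above — choose releases to maximize the worst single-period outcome across the season — can be sketched as a tiny deterministic maximin dynamic program over discretized storage. The inflows, capacity, and saturating habitat-benefit function below are all invented; the dissertation's actual models add stochastic inflows and temperature state on top of this recursion.

```python
from functools import lru_cache

T = 4                    # periods in the operating season
CAP = 10                 # storage capacity (volume units)
INFLOW = [3, 2, 1, 1]    # assumed deterministic inflows per period

def benefit(release):
    """Assumed habitat benefit: increasing in release, saturating at 4."""
    return min(release, 4)

@lru_cache(maxsize=None)
def value(t, storage):
    """Best achievable worst-period benefit from period t onward."""
    if t == T:
        return float("inf")   # min over an empty tail is unconstrained
    best = float("-inf")
    for release in range(storage + 1):
        nxt = min(storage - release + INFLOW[t], CAP)   # spill above capacity
        best = max(best, min(benefit(release), value(t + 1, nxt)))
    return best

print(value(0, 5))   # -> 2
```

Starting with 5 units and 7 more arriving, no policy can release 3 every period, so the maximin policy hedges at 2 units per period from the outset rather than releasing generously early and suffering a near-zero period later, which is exactly the hedging behavior the dissertation advocates.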
840

Marginal social cost auctions for congested airport facilities

Schorr, Raphael Avram, 1976- January 2002
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; and, (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2002. / "September 2002." / Includes bibliographical references (p. 96-97). / by Raphael Avram Schorr. / S.M. / M.Eng.
