111

Sequential and simultaneous lifting in the node packing polyhedron

Pavelka, Jeffrey William January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / Integer programs (IPs) are a commonly researched class of decision problems. These problems are used in various applications to help companies, governments, or individuals make better decisions by determining optimal resource allocations. While IPs are practical tools, they require an exponential amount of effort to solve, unless P = NP. This fact has led to much research focused on reducing the time required to solve IPs. Cutting planes are a commonly used tool for reducing IP solving time. Lifting, a process of changing the coefficients in an inequality, is often employed to strengthen cutting planes. When lifting, the goal is often to create a facet defining inequality, which is theoretically the strongest cutting plane. This thesis introduces two new lifting procedures for the Node Packing problem. The Node Packing problem seeks to select the maximum number of nodes in a graph such that no two nodes are adjacent. The first lifting method, the Simultaneous Lifting Expansion, takes two inequalities and combines them to make a stronger cut. It works for any two general classes of inequalities, as long as the requisite graph structures are met. The second method, the Cliques On Odd-holes Lifting (COOL) procedure, lifts from an odd-hole inequality to a facet defining inequality. COOL makes use of the Odd Gap Lifting procedure, an efficient method for finding lifting coefficients on odd holes. A computational study shows COOL to be effective in creating cuts in graphs with low edge densities.
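
The sketch below is a minimal illustration of my own (not from the thesis): the node packing problem on a small hypothetical graph, solved by brute force, together with the classical odd-hole inequality that lifting procedures such as COOL start from. The graph, node labels, and inequality shown are assumptions chosen for illustration.

    # Node packing (maximum independent set) on a hypothetical 5-cycle (an odd hole)
    # plus one extra node 5 adjacent to nodes 0 and 2.
    from itertools import combinations

    edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 5), (2, 5)}
    nodes = range(6)

    def is_packing(subset):
        """A node packing contains no two adjacent nodes."""
        return all((u, v) not in edges and (v, u) not in edges
                   for u, v in combinations(subset, 2))

    packings = [s for r in range(len(nodes) + 1)
                for s in combinations(nodes, r) if is_packing(s)]
    print("maximum packing:", max(packings, key=len))

    # Odd-hole inequality for the 5-hole {0,...,4}: every packing uses at most
    # (|H| - 1) / 2 = 2 of its nodes.  Lifting strengthens such an inequality by
    # giving nonzero coefficients to nodes outside the hole.
    hole = {0, 1, 2, 3, 4}
    assert all(len(hole.intersection(p)) <= (len(hole) - 1) // 2 for p in packings)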
112

Modeling and analysis of telemental health systems with Petri nets

Aeschliman, Ryan January 1900 (has links)
Master of Science / Industrial & Manufacturing Systems Engineering / David H. Ben-Arieh / Telemental health systems, a form of telemedicine, use electronic communication media to provide patients in remote locations access to psychological and psychiatric specialists. The structure of telemental health systems has a major impact on their performance. Discrete-event simulations offer useful results concerning capacities and utilization of specific resources. Simulation, however, cannot provide theoretical properties of analyzed systems. Petri net representations of systems can overcome this shortfall, offering a wide range of easily analyzed and useful properties. Their ability to model resource conflict, parallel activities, and failure modes fits nicely with the reality of telemental health systems. Analysis of behavioral properties of Petri nets can provide meaningful information for system analysts. The most useful properties include net boundedness, liveness, and non-reachability of certain undesirable states. The thesis discusses methods to find all these properties. Specifically, it provides property-preserving net reductions to facilitate analysis of boundedness and liveness and describes an integer programming model to solve reachability and coverability problems. Moreover, this thesis outlines a simulation analysis of synchronous and asynchronous telemental health systems. The paper then describes a Petri net model of a generic telemental health delivery system and applies the integer programming model and net reduction to it. The integer programming model indicated that the number of resources in the system remains static, full utilization of resources at a given time is possible, conflict over resources is possible, and improper work prioritization is possible within the model. Net reduction and analysis with open-source software showed that the model is bounded and live. These results can aid telemedicine system architects in diagnosing potential process issues. Additionally, the methods described in the paper provide an excellent tool for further, more granular analysis of telemedicine systems.
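
A minimal illustration of the kind of check an integer programming model for reachability rests on is the Petri net state equation: a marking M can be reached from M0 only if M = M0 + Cx has a nonnegative integer solution x (a necessary, not sufficient, condition). The sketch below is my own toy example, not the thesis's model; the two-place net and target marking are hypothetical, and the search is brute force rather than an IP solver.

    import numpy as np
    from itertools import product

    # Incidence matrix C (rows = places, columns = transitions):
    # t0 moves a token from p0 to p1, t1 moves it back.
    C = np.array([[-1,  1],
                  [ 1, -1]])
    M0 = np.array([1, 0])          # initial marking
    M_target = np.array([0, 1])    # marking whose reachability we test

    # Search nonnegative integer firing-count vectors x with M_target = M0 + C x.
    solutions = [x for x in product(range(4), repeat=2)
                 if np.array_equal(M0 + C @ np.array(x), M_target)]
    print("firing-count solutions:", solutions)   # e.g. (1, 0), (2, 1), ...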
113

Laser welding of biodegradable polyglycolic acid (PGA) based polymer felt scaffolds

Rout, Soumya Sambit January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Shuting Lei / Polyglycolic acid (PGA) is an important polymer in the field of tissue engineering. It has many favorable properties such as biocompatibility, bioabsorbability, high melting point, low solubility in organic solvents, and high tensile strength, and is used in a variety of medical applications. Currently there are various methods, such as felting, stitching, and the use of binders/adhesives, for joining the nonwoven meshes of PGA polymer in order to make suitable three-dimensional scaffolds. The existing methods for joining the nonwoven meshes of PGA polymer are usually time consuming and not very flexible. Thus there is a need for a better technique that would overcome the drawbacks of the existing methods. Laser welding offers potential advantages such as high welding rates, ease of automation, improved seam quality, and single-sided access, allowing welds to be performed under multiple layers of fabric. Therefore, the main objective of this research is to conduct a fundamental study on laser welding of nonwoven PGA scaffold felts. An experimental setup for spot welding is built that would assist in the formation of tubular structures. A factorial design of experiments is used to study the effects of the operating parameters, such as laser power, beam diameter, duration, and pressure, on the weld quality. The weld quality is assessed in terms of weld strength and weld diameter. Based on the parametric study, a regression analysis is carried out to form correlations between weld quality and the operating parameters, which could be used to select the optimal operating conditions. The successful welds obtained by the laser welding process show no discoloration and have strength exceeding the tensile strength of the original nonwoven sheets of PGA biofelt.
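
As one hedged illustration of how such a parametric study can be analyzed, the sketch below fits an ordinary least squares regression of weld strength on coded levels of the four operating parameters. The data and coefficients are synthetic placeholders, not the thesis's measurements or fitted model.

    import numpy as np
    from itertools import product

    levels = np.array(list(product([-1.0, 1.0], repeat=4)))   # 2^4 full factorial, coded levels
    n = len(levels)                                            # 16 runs
    rng = np.random.default_rng(0)
    # columns: laser power, beam diameter, duration, pressure (coded -1/+1)
    strength = (5.0 + 1.2 * levels[:, 0] - 0.4 * levels[:, 1]
                + 0.8 * levels[:, 2] + 0.3 * levels[:, 3]
                + rng.normal(scale=0.2, size=n))               # synthetic weld strength response

    A = np.column_stack([np.ones(n), levels])                  # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, strength, rcond=None)
    print("intercept and main effects:", np.round(coef, 3))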
114

Simultaneously lifting multiple sets in binary knapsack integer programs

Kubik, Lauren Ashley January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / Integer programs (IPs) are mathematical models that can provide organizations with the ability to optimally obtain their goals through appropriate utilization and allocation of available resources. Unfortunately, IPs are NP-complete in the strong sense, and many integer programs cannot be solved in practice. Introduced by Gomory, lifting is a technique that takes a valid inequality and strengthens it. Lifting can result in facet defining inequalities, which are the theoretically strongest inequalities; because of this, lifting techniques are commonly used in commercial IP software to reduce the time required to solve an IP. This thesis introduces two new algorithms for exact simultaneous up lifting of multiple sets into binary knapsack problems and introduces sequential simultaneous lifting. The Dynamic Programming Multiple Lifting Set Algorithm (DPMLSA) is a pseudopolynomial time algorithm bounded by O(nb) effort that can exactly up lift an arbitrary number of sets. The Three Set Simultaneous Lifting Algorithm (TSSLA) is a polynomial time algorithm bounded by O(n²) effort that can exactly up lift three sets simultaneously. The simultaneously lifted inequalities generated by the DPMLSA and the TSSLA can be facet defining, and neither algorithm requires starting with a minimal cover. A brief computational study shows that the DPMLSA is fast, requiring an average of only 0.070 seconds. The computational study also shows these sequential simultaneously lifted inequalities are useful, as the solution time decreased by an overall average of 18.4%. Therefore, implementing the DPMLSA should be beneficial for large IPs.
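
For context, the sketch below works through classical sequential exact up lifting of a single variable into a cover inequality for a binary knapsack. It is background to, not an implementation of, the DPMLSA or TSSLA, and the knapsack coefficients are made up.

    # Exactly up lift x_k into the cover inequality sum_{j in C} x_j <= |C| - 1 for a
    # binary knapsack sum a_j x_j <= b.  The lifting coefficient is (|C| - 1) minus
    # the best cover value achievable once a_k units of capacity are reserved for x_k = 1.
    from itertools import combinations

    def best_cover_value(a_cover, capacity):
        """Max number of cover items that fit into the remaining capacity."""
        for r in range(len(a_cover), 0, -1):
            if any(sum(c) <= capacity for c in combinations(a_cover, r)):
                return r
        return 0

    # Hypothetical knapsack 5x1 + 5x2 + 5x3 + 8x4 <= 12 with minimal cover C = {1, 2, 3}.
    a_cover, b = [5, 5, 5], 12
    rhs = len(a_cover) - 1                       # cover inequality: x1 + x2 + x3 <= 2
    a_k = 8                                      # item to lift (x4)
    alpha_k = rhs - best_cover_value(a_cover, b - a_k)
    print(f"lifted inequality: x1 + x2 + x3 + {alpha_k}*x4 <= {rhs}")   # alpha_k = 2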
115

Optimizing quarantine regions through graph theory and simulation

Carlyle, Kyle R. January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / Epidemics have been modeled mathematically as a way to safely understand them. For many of these mathematical models, the underlying assumptions they make provide excellent mathematical results, but are unrealistic for practical use. This research branches out from previous work by providing a model of the spread of infectious diseases and a model for quarantining the disease, without the limiting assumptions of previous research. One of the main results of this thesis was the development of a core simulation that rapidly simulates the spread of an epidemic on a contact network. This simulation can be easily adapted to any disease through the adjustment of many parameters. This research provides the first definitions of a quarantine cut and an ellipsoidal geographic network. This thesis uses the ellipsoidal geographic network to determine what is, and what is not, a feasible quarantine region. The quarantine cut is a new approach to partitioning quarantined and saved individuals in an optimized way. To achieve an optimal quarantine cut, an integer program was developed. Although this integer program runs in polynomial time, the preparation required to execute this algorithm is unrealistic in a disease outbreak scenario. To provide implementable results, a heuristic and some general theory are provided. In a study, the heuristic performed within 10% of the optimal quarantine cut, which shows that the theory developed in this thesis can be successfully used in a disease outbreak scenario.
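
The sketch below is my own simplified stand-in for the idea of cutting a contact network to separate infected from saved individuals; it is not the thesis's quarantine-cut integer program. It uses networkx's minimum node cut on a small hypothetical network with invented node names.

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([("infected", "a"), ("infected", "b"), ("a", "c"),
                      ("b", "c"), ("c", "d"), ("d", "protected"),
                      ("b", "protected")])

    # Smallest set of individuals whose removal (quarantine) disconnects the
    # infected seed from the individual we want to protect.
    cut = nx.minimum_node_cut(G, s="infected", t="protected")
    print("quarantine these individuals:", cut)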
116

Simulating epidemics in rural areas and optimizing preplanned quarantine areas using a clustering heuristic

Anderson, Joseph Edward January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / With the present threat of bioterrorist attacks and new natural disease strains developing, efficient and effective countermeasures must be in place in case of an epidemic outbreak. The best strategy is to stop the attack or natural phenomenon before it happens, but governments and individual citizens must have measures in place to limit the spread of a biological threat or infectious disease if it is ever introduced into society. The objective of this research is to determine the best quarantine areas before an outbreak occurs. Quarantines force similar individuals together and can be mathematically modeled as clustering people into distinct groups. In order to effectively determine the clustering solution to use as a quarantine plan, this research developed a simulation core that is highly adaptable to different disease types and different contact networks. The inputs needed for the simulation core are the characteristics of the disease and the contact network of the area to be modeled. Clustering is a mathematical problem that groups entities based on their similarities while keeping dissimilar entities in separate groups. Clustering has been widely used by civilian and military researchers to provide quality solutions to numerous problems. This research builds a mathematical model to find clusters from a community’s contact network. These clusters are then the preplanned quarantine areas. To find quality clusters, a Clustering Heuristic using Integer Programming (CHIP) is developed. CHIP is a large-neighborhood, hill-climbing heuristic, and computational results verify that it quickly generates good clustering solutions. CHIP is an effective heuristic for grouping people into clusters to be used as quarantine areas prior to the development of a disease or biological attack. Through a small computational study, CHIP is shown to produce clustering solutions that are about 25% better than the commonly used K-means clustering heuristic. CHIP provides an effective tool to combat the spread of an infectious disease or a biological terrorist attack and serves as a potential deterrent to terrorist attacks because it would limit their destructive power. CHIP leads to the next level of preparation that could save countless lives in the event of an epidemic.
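
To make the clustering idea concrete, the sketch below applies one simple hill-climbing move, repeatedly shifting each person to the group containing most of their contacts, to a hypothetical six-person contact network. It is a simplified stand-in of my own, not CHIP itself; the network and starting partition are invented.

    contacts = {                       # hypothetical contact network (adjacency lists)
        0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
        3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
    }
    k = 2
    group = {0: 0, 1: 0, 2: 1, 3: 0, 4: 1, 5: 1}   # deliberately scrambled start

    changed = True
    while changed:                     # each accepted move strictly increases within-group contacts
        changed = False
        for person, neighbors in contacts.items():
            counts = [sum(group[n] == g for n in neighbors) for g in range(k)]
            best = max(range(k), key=counts.__getitem__)
            if counts[best] > counts[group[person]]:
                group[person] = best
                changed = True

    print(group)   # converges to the two triangles: {0,1,2} in one group, {3,4,5} in the other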
117

Pricing of collateralized debt obligations and credit default swaps using Monte Carlo simulation

Neier, Mark January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Chih-Hang Wu / The recent economic crisis has been partially blamed on the decline in the housing market. This decline in the housing market resulted in an estimated 87% decline in the value of collateralized debt obligations (CDOs) between 2007 and 2008. This drastic decline in home values was sudden and unanticipated, and many investors could not comprehend how it would affect CDOs. This shows that while analytical techniques can be used to price CDOs, these techniques cannot be used to demonstrate the behavior of CDOs under radically different economic circumstances. To better understand the behavior of CDOs under different economic circumstances, numerical techniques such as Monte Carlo simulation can be used instead of analytical techniques to price CDOs. Andersen et al. (2005) proposed a method for calculating the probabilities of default that could then be used in the Monte Carlo simulation to price the collateralized debt obligation. The method proposed by Andersen et al. (2005) demonstrates the process of calculating correlated probabilities of default for a group of obligors. This calculation is based on the correlations between the obligors, modeled using copulas. Using these probabilities of default, the price of a collateralized debt obligation can be evaluated using Monte Carlo simulation. Monte Carlo simulation provides a simpler yet effective approach compared to analytical pricing techniques. Simulation also allows investors to have a better understanding of the behaviors of CDOs compared to analytical pricing techniques. By analyzing the various behaviors under uncertainty, it can be observed how a downturn in the economy could affect CDOs. This thesis extends the use of copulas to simulate the correlation between obligors. Copulas allow a joint distribution to be constructed from a set of individual marginal distributions, thus providing an efficient way of modeling the correlation between obligors. The research contained within this thesis demonstrates how Monte Carlo simulation can be used to effectively price collateralized debt obligations. It also shows how copulas can be used to accurately characterize the correlation between obligor defaults for pricing collateralized debt obligations. Numerical examples for both the obligor defaults and the price of collateralized debt obligations are presented to demonstrate the results using Monte Carlo simulation.
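
A minimal sketch of the standard one-factor Gaussian copula approach to correlated defaults follows, with made-up parameters (default probability, correlation, notionals, and tranche attachment points) rather than anything calibrated in the thesis; it estimates the expected loss of a hypothetical mezzanine tranche by Monte Carlo.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(42)
    n_obligors, n_sims = 100, 50_000
    p_default = 0.05                  # marginal default probability over the horizon
    rho = 0.3                         # common-factor correlation
    notional_per_name = 1.0
    attach, detach = 3.0, 10.0        # hypothetical tranche attachment/detachment points

    threshold = norm.ppf(p_default)   # default if the latent variable falls below this
    Z = rng.standard_normal((n_sims, 1))                   # common factor
    eps = rng.standard_normal((n_sims, n_obligors))        # idiosyncratic factors
    latent = np.sqrt(rho) * Z + np.sqrt(1.0 - rho) * eps   # Gaussian copula latent variables
    defaults = latent < threshold

    portfolio_loss = defaults.sum(axis=1) * notional_per_name
    tranche_loss = np.clip(portfolio_loss - attach, 0.0, detach - attach)
    print("expected tranche loss:", tranche_loss.mean())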
118

The theory of simultaneous lifting: constellations in conflict hypergraphs

Pahwa, Samir January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / Integer programming (IP) is a powerful technique used by many companies and organizations to determine optimal strategies for making decisions and managing resources to achieve their goals. One class of IP problems is the multiple knapsack (MK) problem. However, MK and other IP problems are extremely complicated, since they are NP-hard. Furthermore, there exist numerous instances that cannot be solved in practice. One technique commonly used to reduce the solution time for IP problems is lifting. This method, introduced by Gomory, takes an existing valid inequality and strengthens it. Lifting has the potential to form facet defining inequalities, which are theoretically the strongest inequalities for solving an IP problem. As a result, lifting is frequently used in integer programming applications. This research takes a broad approach to simultaneous lifting and provides its theoretical background. The underlying hypergraphic structure for simultaneous lifting in an MK problem is identified and called a constellation. A constellation contains two hypercliques and multiple hyperstars from various conflict hypergraphs. Theoretical results demonstrate that a constellation induces valid inequalities that could be obtained by simultaneous lifting. Moreover, these constellation inequalities can be facet defining. The primary advancements of this thesis, constellations and the associated valid inequalities, are theoretical in nature. With this theoretical foundation for simultaneous lifting in place, researchers should be able to develop new algorithms that enable simultaneous lifting to be performed faster and over more complex integer programs.
119

Accelerating Successive Approximation Algorithm Via Action Elimination

Jaber, Nasser M. A. Jr. 20 January 2009 (has links)
This research is an effort to improve the performance of the successive approximation algorithm, with the primary aim of solving finite-state, finite-action, infinite-horizon, stationary, discrete, discounted Markov Decision Processes (MDPs). Successive approximation is a simple and commonly used method to solve MDPs. Successive approximation often appears to be intractable for solving large scale MDPs due to its computational complexity. Action elimination, one of the techniques used to accelerate solving MDPs, reduces the problem size through identifying and eliminating sub-optimal actions. In some cases, successive approximation is terminated when all actions but one per state are eliminated. The bounds on value functions are the key element in action elimination. New terms (action gain, action relative gain and action cumulative relative gain) were introduced to construct tighter bounds on the value functions and to propose an improved action elimination algorithm. When the span semi-norm is used, we show numerically that the actual convergence of successive approximation is faster than the known theoretical rate. The absence of easy-to-compute bounds on the actual convergence rate motivated the current research to try a heuristic action elimination algorithm. The heuristic utilizes an estimated convergence rate in the span semi-norm to speed up action elimination. The algorithm demonstrated exceptional performance in terms of solution optimality and savings in computational time. Certain types of structured Markov processes are known to have monotone optimal policies. Two special action elimination algorithms are proposed in this research to accelerate successive approximation for these types of MDPs. The first algorithm uses state space partitioning and prioritizes iterate value updates in a way that maximizes the temporary elimination of sub-optimal actions based on policy monotonicity. The second algorithm is an improved version that includes permanent action elimination to improve the performance of the algorithm. The performance of the proposed algorithms is assessed and compared to that of other algorithms. The proposed algorithms demonstrated outstanding performance in terms of the number of iterations and computational time required to converge.
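
For orientation, the sketch below shows textbook value iteration with a MacQueen-style elimination test, dropping an action once an upper bound on its value falls below a lower bound on the state's optimal value, on a made-up two-state, two-action MDP. It illustrates only the basic bound-based idea, not the improved, heuristic, or monotonicity-based algorithms proposed in the thesis.

    import numpy as np

    gamma = 0.9
    # Hypothetical MDP: 2 states, 2 actions.  r[s, a] and P[s, a, s'].
    r = np.array([[1.0, 0.2],
                  [0.0, 0.8]])
    P = np.array([[[0.9, 0.1], [0.1, 0.9]],
                  [[0.5, 0.5], [0.3, 0.7]]])

    V = np.zeros(2)
    active = np.ones((2, 2), dtype=bool)        # actions not yet eliminated

    for sweep in range(200):
        Q = r + gamma * (P @ V)                 # Q[s, a] = r(s,a) + gamma * sum_s' P(s'|s,a) V(s')
        V_new = np.where(active, Q, -np.inf).max(axis=1)
        diff = V_new - V
        lo, hi = diff.min(), diff.max()         # ingredients of the gamma/(1-gamma) bounds on V*
        upper_Q = Q + gamma / (1 - gamma) * hi  # upper bound on Q*(s, a)
        lower_V = V_new + gamma / (1 - gamma) * lo   # lower bound on V*(s)
        active &= upper_Q >= lower_V[:, None]   # eliminate provably sub-optimal actions
        V = V_new
        if hi - lo < 1e-8:                      # span stopping rule
            break

    print("surviving actions per state:", [list(np.flatnonzero(active[s])) for s in range(2)])
    print("value estimate:", V)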
120

Femtosecond laser micromachining of advanced materials

Bian, Qiumei January 1900 (has links)
Doctor of Philosophy / Department of Industrial and Manufacturing Systems Engineering / Shuting Lei / Femtosecond (fs) laser ablation possesses unique characteristics for micromachining, notably non-thermal interaction with materials, high peak intensity, precision and flexibility. In this dissertation, the potential of fs laser ablation for machining polyurea aerogel and scribing thin film solar cell interconnection grooves is studied. In a preliminary background discussion, some key literature regarding the basic physics and mechanisms that govern ultrafast laser pulse interaction with materials and laser micromachining is summarized. First, fs laser pulses are used to micromachine polyurea aerogel. The experimental results demonstrate that a high quality machined surface can be obtained by tuning the laser fluence and beam scanning speed, which provides insights for micromachining polymers with porous structures. Second, a new fs laser micro-drilling technique is developed to drill micro-holes in stainless steel, in which a hollow core fiber is employed to transmit laser pulses to the target position. The coupling efficiency between the laser and the fiber is investigated and found to be strongly related to pulse energy and pulse duration. Third, the fs laser with various energies, pulse durations, and scanning speeds has been utilized to pattern Indium Tin Oxide (ITO) glass for thin film solar cells. The groove width decreases with increasing pulse duration because the shorter the pulse duration, the more effectively the energy is used for material removal. In order to fully remove the ITO without damaging the glass, the beam scanning speed needs to be precisely controlled. Fourth, the fs laser has been utilized to scribe Molybdenum thin film on a Polyimide (PI) flexible substrate for Copper Indium Gallium Selenide (CIGS) thin film solar cells. The experimental parameters and results, including ablation threshold, single- and multiple-pulse ablation shapes, and ablation efficiency, were discussed in detail. In order to utilize the advantages of fs lasers, the fabrication process has to be optimized for thin film patterning and structuring applications with respect to both efficiency and quality. A predictive 3D Two-Temperature Model (TTM) was proposed to predict ablation characteristics and help understand the mechanisms of fs laser ablation of metals. The 3D temperature field evolution of both the electrons and the lattice was demonstrated. The ablation model provides insight into the physical processes occurring during fs laser excitation of metals. The desired processing fluence and process speed regime can be predicted by calculating the ablation threshold, ablation rate and ablation crater geometry using the developed model.
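
To convey the structure of a two-temperature model, the sketch below integrates a zero-dimensional (no heat diffusion) version of the coupled electron/lattice equations under a Gaussian femtosecond pulse. The material constants and pulse parameters are illustrative placeholders, not the dissertation's calibrated 3D model.

    # Zero-dimensional two-temperature model:
    #   C_e(T_e) dT_e/dt = -G (T_e - T_l) + S(t),    C_l dT_l/dt = G (T_e - T_l)
    import numpy as np

    gamma_e = 70.0        # electron heat capacity coefficient, C_e = gamma_e * T_e  [J m^-3 K^-2]
    C_l = 2.5e6           # lattice heat capacity                                    [J m^-3 K^-1]
    G = 3e17              # electron-phonon coupling                                 [W m^-3 K^-1]
    S0 = 1e21             # peak absorbed power density                              [W m^-3]
    t_pulse, t0 = 100e-15, 300e-15

    dt, steps = 1e-16, 20000
    T_e = T_l = 300.0
    history = []
    for n in range(steps):                                # explicit Euler time stepping
        t = n * dt
        S = S0 * np.exp(-((t - t0) / t_pulse) ** 2)       # Gaussian pulse source term
        dT_e = (-G * (T_e - T_l) + S) / (gamma_e * T_e) * dt
        dT_l = G * (T_e - T_l) / C_l * dt
        T_e, T_l = T_e + dT_e, T_l + dT_l
        history.append((t, T_e, T_l))

    print(f"peak electron temperature: {max(h[1] for h in history):.0f} K")
    print(f"lattice temperature at end of run: {history[-1][2]:.0f} K")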
