181

Estimation and control of part weight and relevant process parameters in injection molding of amorphous thermoplastics

Varela, Alfredo E. (Alfredo Enrique). January 1996.
No description available.
182

Two stochastic control problems in finance: American options and illiquid investments.

Karahan, Cenk Cevat. Date unknown.
Stochastic control problems are ubiquitous in modern finance, yet the explicit solutions and policies of such problems faced by investors receive disproportionately little attention. This dissertation focuses on characterizing and solving the policies for two stochastic control problems that buy-side investors face in the market: exercising American options and optimally redeeming illiquid investments such as hedge funds.

The return an investor realizes from an American- or Bermudan-style derivative depends heavily on the exercise policy employed. Although the exercise policy is as crucial to the option buyer as the price, constructing such policies has received far less attention than the pricing problem, and the existing research on optimal exercise policies is complex, impractical, and pursued only insofar as it yields accurate prices. Here we present a simple and practical new heuristic for developing exercise policies for American- and Bermudan-style derivatives, which are immensely valuable to buy-side entities in the market. Our heuristic relies on a simple look-ahead algorithm, which yields comparatively good exercise policies for Bermudan options with few exercise opportunities. We resort to policy improvement to construct more accurate exercise frontiers for American options with frequent exercise opportunities. This exercise frontier is in turn used to estimate the price of the derivative via simulation. Numerical examples yield prices comparable in accuracy to existing sophisticated simulation methods. Chapter 1 introduces the problem and lays out the valuation framework, Chapter 2 defines and describes our heuristic approach, Chapter 3 provides algorithms for implementation, and Chapter 4 presents numerical examples.

Optimal redemption policies for illiquid investments are studied in Chapter 5, where we consider a risk-averse investor whose investable assets are held in a perfectly liquid asset (a portfolio of cash and liquid assets, or a mutual fund) and in another investment with liquidity restrictions. The illiquidity could be due to restrictions on the investment (as with hedge funds) or to the nature of the asset held (as with real estate). The investor's objective is to maximize the utility derived from his terminal wealth at the end of his investment horizon. Furthermore, the investor wants to hold his liquid wealth above a certain subsistence level, below which he incurs hefty borrowing costs or a shortfall penalty. We characterize the conditions under which the investor should optimally liquidate his illiquid assets. The redemption notification problem for hedge fund investors has a certain affinity with the optimal control methods used in widely studied inventory management problems, and the optimal policy has a similar monotone structure.

Chapter 6 concludes the study and suggests possible extensions.
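The dissertation's own algorithms are not reproduced in this listing, but the flavor of a look-ahead exercise rule is easy to sketch. The following is a minimal, illustrative example, not the thesis's method: a one-step look-ahead heuristic for a Bermudan put under geometric Brownian motion, with all parameter values made up, exercising when intrinsic value beats a Monte Carlo estimate of the one-step continuation value.

```python
import numpy as np

def one_step_continuation(S, K, r, sigma, dt, n_samples=20_000, seed=0):
    """Monte Carlo estimate of the discounted expected payoff at the
    next exercise date: the simplest look-ahead continuation value."""
    z = np.random.default_rng(seed).standard_normal(n_samples)
    S_next = S * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    return np.exp(-r * dt) * np.maximum(K - S_next, 0.0).mean()

def should_exercise(S, K, r, sigma, dt):
    """One-step look-ahead rule: exercise the Bermudan put now if the
    intrinsic value beats the estimated continuation value."""
    return max(K - S, 0.0) > one_step_continuation(S, K, r, sigma, dt)

# Deep in the money, the rule favors immediate exercise.
print(should_exercise(S=60.0, K=100.0, r=0.05, sigma=0.2, dt=0.25))  # True
```

Because the one-step estimate ignores exercise opportunities beyond the next date, it understates continuation value and exercises too eagerly; policy improvement of the kind the abstract mentions is one way such a rule gets refined.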
183

Factors Affecting Forecast Accuracy of Scrap Price in the U.S. Steel Industry.

Hardin, Kristin. Date unknown.
The volatility of steel scrap pricing makes formulating accurate scrap price forecasts difficult, if not impossible. While literature abounds on price forecasts for oil, electricity, and other commodities, no reliable scrap price forecasting models exist. The purpose of this quantitative futures study was (1) to assess factors that influence scrap prices and (2) based on this information, to develop an effective scrap price forecast model to help steel managers make effective purchasing decisions. The theoretical foundation for the study was probability theory, which has traditionally informed futures research: it draws conclusions about a dataset from statistical and logical consequence without attempting to identify physical causation. Secondary data from the American Metal Market were analyzed with time series techniques and auto-regressive moving-average models. The research led to the development of two key indices, the West Texas Index and the Advanced Decliner Index, that should improve the reliability of the scrap price forecast model. The implications of this work for the literature, business, and social change include a unique price forecasting technique for scrap material; a globally more competitive, profitable, and sustainable steel industry in America; and, consequently, increased employment opportunities in an industrial sector vital to the health of the entire American economy and society.
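The study's actual model and the two indices it constructs are not reproduced here, but the time-series machinery it cites is standard. A minimal sketch, using synthetic data in place of the American Metal Market series (all numbers illustrative):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly scrap-price series standing in for the real data.
rng = np.random.default_rng(42)
prices = pd.Series(
    300 + np.cumsum(rng.normal(0, 8, size=120)),  # random-walk-like prices
    index=pd.date_range("2000-01-01", periods=120, freq="MS"),
)

# ARIMA(1,1,1): one autoregressive term, first differencing to absorb
# the trend, one moving-average term -- the ARMA family the study applies.
result = ARIMA(prices, order=(1, 1, 1)).fit()
print(result.forecast(steps=6))  # six-month-ahead price forecast
```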
184

Barriers to green supply chain implementation in the electronics industry.

Khiewnavawongsa, Sorraya. Date unknown.
Environmental issues have gained significant public attention in the last several years, and organizations around the United States have become more aware of green concepts and attempted to be environmentally friendly. Green supply chain management (GSCM), which integrates environmental thinking into supply chain management, is one of many areas influenced by this awareness. In the United States, GSCM has been adopted by various business sectors, including manufacturing, transportation, government, and services. The electronics industry is among the sectors most affected, owing to its distinctive raw materials, processes, and regulations.

The present study identifies barriers to the implementation of green supply chain management in the electronics industry. It explores the factors that prevent companies from implementing green projects and identifies the green implementation types that the U.S. electronics industry has adopted. Data were gathered by survey; interview data and the literature review were analyzed and used as input for generating the survey questions. Participants represented manufacturers and distributors in the U.S. electronics industry. The survey covered 33 barriers to green implementation and readiness for green implementation across six implementation types. Data were analyzed in SPSS using multivariate techniques, and company characteristics were tested for differences in green barriers and implementation readiness. Results showed that financial impact is the major factor in green implementation by electronics companies. Company size did not affect green supply chain barriers; however, small companies were less ready to implement green initiatives for most projects. Distributors were generally more affected by barriers to green implementation than manufacturers. Electronics companies were most ready to implement green manufacturing and packaging projects, while projects related to green suppliers were the least ready for implementation. Furthermore, companies that had been aware of environmental concerns for a longer period were more ready to implement green projects than companies with more recent environmental awareness. The amount of future investment can be predicted from past investment.
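The survey data themselves are not available in this listing; the following is only an illustrative stand-in for the kind of group-difference testing the study ran in SPSS, here simplified to a univariate one-way ANOVA on hypothetical barrier-severity scores by company size:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(7)
# Hypothetical 1-5 Likert barrier-severity scores for three size groups.
small = rng.normal(3.8, 0.6, 40).clip(1, 5)
medium = rng.normal(3.6, 0.6, 40).clip(1, 5)
large = rng.normal(3.5, 0.6, 40).clip(1, 5)

# One-way ANOVA: do mean perceived barriers differ by company size?
stat, p = f_oneway(small, medium, large)
print(f"F = {stat:.2f}, p = {p:.3f}")
```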
185

Decomposition algorithms for stochastic combinatorial optimization: Computational experiments and extensions

Ntaimo, Lewis. January 2004.
Some of the most important and challenging problems in computer science and operations research are stochastic combinatorial optimization (SCO) problems. SCO deals with a class of combinatorial optimization models and algorithms in which some of the data are subject to significant uncertainty and evolve over time, and discrete decisions often must be made before complete future data are observed. Under such circumstances it becomes necessary to develop models and algorithms in which plans are evaluated against possible future scenarios representing alternative outcomes of the data. Consequently, SCO models are characterized by large numbers of scenarios, discrete decision variables, and constraints. This dissertation focuses on the development of practical decomposition algorithms for large-scale SCO. Stochastic mixed-integer programming (SMIP), the branch of optimization concerned with models containing discrete decision variables and random parameters, provides one way of dealing with such decision-making problems under uncertainty. This dissertation studies decomposition algorithms, models, and applications for large-scale two-stage SMIP. The theoretical underpinnings of the method are derived from the disjunctive decomposition (D²) method, and we study this class of methods through applications, computations, and extensions. With regard to applications, we first present a stochastic server location problem (SSLP) that arises in a variety of settings; these models give rise to SMIP problems in which all integer variables are binary. We study the performance of the D² method on these problems, and, to carry out a more comprehensive study of SSLP problems, we also present certain other valid inequalities for SMIP. Following the SSLP study, we discuss the implementation of the D² method and study its performance on problems in which the second stage is mixed-integer (binary); the models for this experimental study have appeared in the literature as stochastic matching problems and stochastic strategic supply chain planning problems. Finally, as an extension of the D² method, we present a new procedure in which the first-stage model is allowed to include continuous variables. We conclude this dissertation with several ideas for future research.
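The D² algorithm itself is beyond the scope of a listing like this, but the class of model it decomposes is easy to exhibit. Below is a tiny, entirely hypothetical stochastic server location instance written as its extensive form (deterministic equivalent) in PuLP; decomposition methods replace exactly this scenario-indexed monolith. All data are made up.

```python
import pulp

# 2 candidate sites, 3 clients, 2 equally likely demand scenarios.
sites, clients, scenarios = range(2), range(3), range(2)
prob_s = [0.5, 0.5]
open_cost = [10.0, 12.0]
serve_cost = [[2, 3, 4], [4, 2, 1]]   # site x client
demand = [[1, 1, 0], [0, 1, 1]]       # scenario x client: 1 if client is active

m = pulp.LpProblem("SSLP_extensive_form", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in sites]  # 1st stage: open site i
y = {(i, j, s): pulp.LpVariable(f"y{i}_{j}_{s}", cat="Binary")
     for i in sites for j in clients for s in scenarios}     # 2nd stage: i serves j in s

# Objective: opening cost plus expected service cost over scenarios.
m += (pulp.lpSum(open_cost[i] * x[i] for i in sites)
      + pulp.lpSum(prob_s[s] * serve_cost[i][j] * y[i, j, s]
                   for i in sites for j in clients for s in scenarios))

for s in scenarios:
    for j in clients:
        # Every active client in scenario s must be served exactly once.
        m += pulp.lpSum(y[i, j, s] for i in sites) == demand[s][j]
        for i in sites:
            m += y[i, j, s] <= x[i]   # can only serve from an open site

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(v.value()) for v in x], pulp.value(m.objective))
```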
186

Active vision inspection: Planning, error analysis, and tolerance design

Yang, Christopher Chuan-Chi, 1968-. January 1997.
Inspection is the process of determining whether a component deviates from a given set of specifications. Industry usually inspects CAD-based models with a coordinate measuring machine (CMM), but inspection using vision sensors has recently drawn more attention because of advances in computer and imaging technologies. In this dissertation, we introduce active vision inspection for CAD-based three-dimensional models. The dissertation has three major components: (i) planning, (ii) error analysis, and (iii) tolerance design. In inspection planning, the inputs are a boundary representation (object-centered) and an aspect graph (viewer-centered) of the inspected component; the output is a sensor arrangement for dimensioning a set of topologic entities. We first use geometric reasoning and object-oriented representation to determine the set of topologic entities (measurable entities) to be dimensioned, based on the manufactured features of the component (such as slots, pockets, and holes) and their spatial relationships. Using the aspect graph, we obtain a set of possible sensor settings and determine an optimized set of sensor settings (a sensor arrangement) for dimensioning the measurable entities. Since quantization errors and displacement errors are inherent in an active vision system, we analyze and model the density functions of these errors based on their characteristics and use them to determine the accuracy of inspection for a given sensor setting. In addition, we utilize hierarchical interval constraint networks for tolerance design, redefining network satisfaction and constraint consistency for this application and developing new forward and backward propagation techniques for tolerance analysis and tolerance synthesis, respectively.
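The abstract does not spell out how the optimized sensor arrangement is chosen; one natural reading is a set-cover-style selection over aspect-graph sensor settings, sketched here with entirely hypothetical camera poses and entities:

```python
def select_sensor_settings(settings, entities):
    """Greedy set cover: repeatedly pick the sensor setting that measures
    the most still-uncovered entities. `settings` maps a setting id to
    the set of measurable entities visible from it (an illustrative
    stand-in for the aspect-graph analysis)."""
    uncovered = set(entities)
    chosen = []
    while uncovered:
        best = max(settings, key=lambda s: len(settings[s] & uncovered))
        if not settings[best] & uncovered:
            raise ValueError("some entities are not measurable from any setting")
        chosen.append(best)
        uncovered -= settings[best]
    return chosen

# Hypothetical example: entities A-E, three candidate camera poses.
settings = {"pose1": {"A", "B"}, "pose2": {"B", "C", "D"}, "pose3": {"D", "E"}}
print(select_sensor_settings(settings, {"A", "B", "C", "D", "E"}))
# -> ['pose2', 'pose1', 'pose3'] (order of equally good picks may vary)
```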
187

Managing business workflows using a database approach: A formal model, a case study and a prototype

Bajaj, Akhilesh. January 1997.
Workflows are an integral part of an organization, and managing them has long been recognized as important. With recent advances in information systems, there has been a great deal of commercial and research interest in developing workflow management systems (WFMS) to help businesses manage their workflows. In the database literature, much of this work has concentrated on developing advanced transaction models that can essentially handle long-lived transactions. Many WFMS tools have been developed in industry, each usually supporting different abstractions. The current process of constructing a WFMS application consists of obtaining user requirements informally, and writing the WFMS code using a WFMS tool. Since WFMS tools are evolving, and an accepted set of abstractions that should be supported by a WFMS tool does not exist, this process is unstructured and sensitive to the WFMS tool used. This dissertation aims at providing structure to the process of developing a workflow application. Borrowing from the established process of developing a database application, we follow a "top-down" approach: use a formally defined conceptual model to capture user requirements, and then map the conceptual model to the implementation model. We first developed and formally defined a conceptual workflow model (SEAM). Since the completeness of a conceptual model in a new domain (such as workflow requirements) is important, we have also developed and tested a methodology to test the completeness of conceptual workflow models. The next step is to show how SEAM can be mapped to an implementation model. We have selected the current abstractions of computationally complete data manipulation languages, triggers, stored procedures and support for embedded data manipulation languages as the target implementation model. SEAM is mapped to this model, and a prototype is implemented as an example. Thus, this dissertation provides sufficient information to construct an automated WFMS, built on currently available abstractions. In addition, the dissertation also provides a methodology that can be used to empirically measure the completeness of conceptual workflow models.
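SEAM's actual mapping is not reproduced in this abstract, but the target implementation abstractions it names (data manipulation languages, triggers, stored procedures) are concrete. A minimal illustration of the trigger piece, using SQLite from Python with a made-up two-table schema:

```python
import sqlite3

# Workflow state kept in a table, with a trigger writing an audit trail
# on every state change. Schema and state names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE task (id INTEGER PRIMARY KEY, state TEXT NOT NULL);
CREATE TABLE task_log (task_id INTEGER, old_state TEXT, new_state TEXT,
                       at TEXT DEFAULT CURRENT_TIMESTAMP);
CREATE TRIGGER task_transition AFTER UPDATE OF state ON task
BEGIN
    INSERT INTO task_log (task_id, old_state, new_state)
    VALUES (OLD.id, OLD.state, NEW.state);
END;
""")

conn.execute("INSERT INTO task (state) VALUES ('submitted')")
conn.execute("UPDATE task SET state = 'approved' WHERE id = 1")
print(conn.execute("SELECT task_id, old_state, new_state FROM task_log").fetchall())
# -> [(1, 'submitted', 'approved')]
```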
188

A neural network approach for the solution of Traveling Salesman and basic vehicle routing problems

Ghamasaee, Rahman, 1953-. January 1997.
Easy to explain and difficult to solve, the traveling salesman problem (TSP) is to find the minimum-distance Hamiltonian circuit on a network of n cities. No polynomial-time algorithm for the problem is known: the number of computational steps needed to find the optimum is believed to grow with n faster than any power of n. Very good combinatorial solution approaches exist, including heuristics with worst-case bounds, and neural network approaches for solving the TSP have been proposed more recently. In the elastic net approach, the algorithm begins with m nodes on a small circle centered on the centroid of the distribution of cities, each node represented by the coordinates of the related point in the plane. By successive recalculation of the node positions, the ring is gradually deformed until it describes a tour around the cities. In another approach, the self-organizing feature map (SOFM), based on Kohonen's winner-take-all idea, fewer than m nodes are updated at each iteration. In this dissertation I have integrated these two ideas to design a hybrid method with faster convergence to a good solution. On each iteration of the original elastic net method, two n × m matrices of connection weights and inter-node-city distances must be calculated; in our hybrid method this is reduced to one row and one column of each matrix, so if the computational complexity of the elastic net is O(n × m) per iteration, that of the hybrid method is O(n + m). The hybrid method is then used to solve the basic vehicle routing problem (VRP), the problem of routing vehicles between customers so that the capacity of each vehicle is not violated. A two-phase approach is used: in the first phase, clusters of customers that satisfy the capacity constraint are formed using an SOFM network; in the second phase, the hybrid algorithm solves the corresponding TSP. Our improved method is much faster than the elastic net method, and statistical comparison of the TSP tours shows no difference between the two methods. Our computational results for the VRP indicate that our heuristic outperforms existing methods by producing a shorter total tour length.
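For orientation, here is a sketch of the underlying elastic-net update (the Durbin-Willshaw scheme the hybrid accelerates); the hybrid's row-and-column restriction is not reproduced, and all parameter values are illustrative:

```python
import numpy as np

def elastic_net_step(cities, ring, K, alpha=0.2, beta=2.0):
    """One elastic-net update: pull each ring node toward the cities
    that 'claim' it, plus a tension term keeping the ring smooth.
    `cities` is (n, 2), `ring` is (m, 2), K is the annealed scale."""
    d2 = ((cities[:, None, :] - ring[None, :, :]) ** 2).sum(-1)  # n x m
    w = np.exp(-d2 / (2 * K * K))
    w /= w.sum(axis=1, keepdims=True)            # each city's pull sums to 1
    attraction = (w[:, :, None] * (cities[:, None, :] - ring[None, :, :])).sum(0)
    tension = np.roll(ring, 1, 0) - 2 * ring + np.roll(ring, -1, 0)
    return ring + alpha * attraction + beta * K * tension

rng = np.random.default_rng(0)
cities = rng.random((30, 2))
# Start from a small ring around the centroid, as the abstract describes.
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
ring = cities.mean(0) + 0.1 * np.c_[np.cos(t), np.sin(t)]
for K in np.geomspace(0.2, 0.01, 100):           # anneal the scale parameter
    ring = elastic_net_step(cities, ring, K)
# `ring`, read in order, now approximates a tour of the cities.
```

Each step above forms the full n × m weight and distance arrays; the dissertation's hybrid updates only an SOFM-style winner and its neighborhood, which is where the O(n × m) to O(n + m) reduction comes from.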
189

Operating policies for manufacturing cells

Iyer, Anand, 1968-. January 1996.
Manufacturing cells, consisting of an empowered team of workers and the resources required to produce a family of related products, have become popular in recent years. Such cells require significant changes in organizational policies for personnel, wage administration, accounting, and scheduling. For example, there are usually fewer workers than machines, so cells are staffed by cross-trained workers. However, little is known about operating these cells, since much of the research in this area has concentrated on the cell formation problem. This thesis addresses the determination of good operating policies for manufacturing cells. An operating policy is a protocol for setting lot sizes, transfer batch sizes, and cell work-in-process limits, together with machine-queue dispatching and worker-assignment rules. Specific components of operating policies have previously been examined in isolation in different contexts; however, cell performance is determined not only by the individual components of a policy but also by the interactions between them, so it is imperative to study policies in an integrated manner in order to determine how best to utilize the limited resources of the cell. The initial part of the thesis presents a general framework developed to parameterize operating policies: specific policies are recovered by assigning values to the parameters of the framework, and a few examples illustrate its use. The remainder of the thesis focuses on the various ways in which the framework representation of policies can be used, including the development of a general-purpose simulator using the object-oriented paradigm and analytical models for some policies. A comparison of various operating strategies using simulation and analytical models is also presented. The thesis concludes with a discussion of the insights gleaned from this work as well as directions for future work.
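The framework's parameters are only listed, not defined, in this abstract; the following is a hypothetical rendering of the parameterization idea, with every field name invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class OperatingPolicy:
    """Fixing these values recovers one concrete cell operating policy,
    in the spirit of the parameterized framework described above."""
    lot_size: int                    # units released per production lot
    transfer_batch: int              # units moved between machines at once
    wip_limit: int                   # cell-wide work-in-process cap
    dispatch: Callable[[Sequence], object]       # picks next job in a queue
    assign_worker: Callable[[Sequence, Sequence], object]  # worker -> machine

# Example instantiation: FIFO dispatching, first-idle-worker assignment.
policy = OperatingPolicy(
    lot_size=50, transfer_batch=10, wip_limit=30,
    dispatch=lambda queue: queue[0],
    assign_worker=lambda workers, machines: (workers[0], machines[0]),
)
```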
190

A software laboratory and comparative study of computational methods for Markov decision processes

Choi, Jongsup, 1956-. January 1996.
Dynamic programming (DP) is one of the most important mathematical programming methods, but a major limitation in its practical application to stochastic decision and control problems has been the explosive computational burden. Significant research has focused on improving the speed of convergence and allowing for larger state and action spaces. The principal methods and algorithms of DP are surveyed in this dissertation. The rank-one correction (ROC) method for value iteration, recently proposed by Bertsekas, was designed to increase the speed of convergence; in this dissertation we extend the ROC method to problems with multiple policies. The method is particularly well suited to systems with substochastic matrices, e.g., those arising in shortest path problems. In order to test, verify, and compare different computational methods, we developed SYSCODE, a user-friendly, interactive FORTRAN software laboratory for stochastic (SYS)tems (CO)ntrol and (DE)cision algorithms for discrete-time, finite Markov decision processes. SYSCODE provides the user with a choice of 39 combinations of DP algorithms for testing and comparison, and has been endowed with sophisticated capabilities for random problem data generation. We present a comprehensive computational comparison of many of the algorithms provided by SYSCODE, using well-known test problems as well as randomly generated problem data.
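SYSCODE itself is FORTRAN and the ROC variant is not reproduced in this abstract; for orientation, here is the plain value-iteration baseline that such corrections accelerate, as a minimal sketch with made-up data:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Plain value iteration for a finite discounted MDP. P[a] is the
    |S| x |S| transition matrix under action a, R[a] the expected
    one-step reward vector. This is the baseline that rank-one
    correction methods speed up."""
    n_actions, n_states = len(P), P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(n_actions)])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)   # values and a greedy policy
        V = V_new

# Two-state, two-action toy problem with illustrative data.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]), np.array([[0.5, 0.5], [0.0, 1.0]])]
R = [np.array([1.0, 0.0]), np.array([0.5, 2.0])]
V, policy = value_iteration(P, R)
print(V, policy)
```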
