  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

The analysis and optimization of the Alcoa Mill Products supply chain for European customers / Analysis and optimization of the AMP supply chain for European customers

Urkiel, Brian A. (Brian Alexander), 1971- January 2000 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; in conjunction with the Leaders for Manufacturing Program, Massachusetts Institute of Technology, 2000. / Also available online at the MIT Theses Online homepage <http://thesis.mit.edu>. / Includes bibliographical references (p. 100-101). / This thesis examines the challenges of managing a global supply chain in a large, well-established organization and outlines techniques that can be used to achieve more effective supply chain management. The research was conducted at Davenport Works, part of the Alcoa Mill Products (AMP) Business Unit, and examined the business unit's global supply chain with its European customers. The presence of inventory can hide many root-cause problems within a supply chain, and the project driver for this work was inventory reduction. However, while excessive inventory is clearly a problem and organizations should strive to eliminate unnecessary inventory, there is an optimal amount of inventory that should be maintained, and that amount is rarely zero. Inventory is held for a variety of reasons and can be used to counteract the primary factors that influence inventory requirements: customer demand, demand variability, production yield, production yield variability, lead time, lead time variability, and desired customer service levels. Alcoa uses inventory as a countermeasure within its supply chain for a variety of reasons. Customers are demanding increasing levels of service, and their demand patterns are variable. Replenishment lead times are long (on the order of months) and variable. Davenport Works is striving to achieve economies of scale, and its production yields are variable and often deviate significantly from the customer's forecasted consumption rate.
Currently, high levels of inventory are being maintained throughout the supply chain, and desired customer service level targets are not being met. AMP has no formal methodology either to characterize the reasons why inventory is being maintained or to determine the inventory requirements needed to satisfy each specific customer program. In addition, AMP is driving cost reductions throughout the entire organization, which forces the organization to justify the inventory it currently holds and puts pressure on it to reduce inventories throughout the supply chain. This thesis has three primary objectives. First, to provide a detailed analysis of the entire AMP supply chain for its European customers and articulate the reasons why AMP maintains inventory; this includes a discussion of supply chains, supply chain management, and the role of inventory in the supply chain. Second, to describe a methodology that can be applied to engineer inventory levels for each product; the base stock model was used for this purpose and is an excellent tool to demonstrate how supply chain variables drive inventory requirements, to target areas for improvement, and to quantify inventory requirements in a systematic manner. Third, to provide recommendations to improve overall supply chain performance and optimize inventories. / by Brian A. Urkiel. / S.M.
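The base stock model cited in the abstract above can be sketched in a few lines. This is an editorial illustration under the standard textbook assumption of approximately normal lead-time demand; the function name and all numbers are invented, not taken from the thesis.

```python
from math import sqrt
from statistics import NormalDist

def base_stock_level(mean_demand, sd_demand, lead_time, service_level):
    """Base-stock (order-up-to) level under approximately normal demand.

    Covers expected demand over the replenishment lead time plus a
    safety stock sized to hit the target cycle-service level.
    """
    z = NormalDist().inv_cdf(service_level)         # safety factor for the target
    pipeline_stock = mean_demand * lead_time        # expected lead-time demand
    safety_stock = z * sd_demand * sqrt(lead_time)  # buffer against variability
    return pipeline_stock + safety_stock

# Illustrative: 100 units/week mean demand, sd 30, 8-week lead time, 95% service.
level = base_stock_level(100, 30, 8, 0.95)
```

The sketch makes the abstract's point concrete: longer or more variable lead times and higher service targets each push the optimal inventory up, and only with zero variability does the safety term vanish.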
142

Redefining manufacturing quality control in the electronics industry

Simington, Maureen Fresquez, 1970- January 2000 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering; in conjunction with the Leaders for Manufacturing Program, Massachusetts Institute of Technology, 2000. / Also available online at the MIT Theses Online homepage <http://thesis.mit.edu>. / Includes bibliographical references (p. 103). / The most time-consuming and capital-intensive portion of the assembly of power electronic devices is the test system. A comprehensive test system including functional and stress-screening technologies can significantly increase assembly times and more than double the capital investment required in a new assembly line. The primary purpose of the test system is to screen components for early-life failures and to verify proper assembly. Key performance characteristics and the resultant test system are determined during the product design phase and are seldom revised after the product has been released to manufacturing. This thesis explores best practices in testing methods and develops new methods to analyze test system performance, both with the aim of optimizing existing test regimes. Upon completion of these analyses, the existing test sequence was reduced by 50%. This was primarily due to a discovery in the Burn-In test cycle indicating that failures correlated strongly with the on/off cycles inherent in the test sequence. A new test cycle was proposed to accommodate this finding, and test results verified the initial hypothesis. Additionally, the summary of best practices identified new forms of product testing, including Highly Accelerated Stress Testing (HAST), which moves additional product testing into the development phase and consequently reduces testing requirements during assembly. / by Maureen Fresquez Simington. / S.M.
143

Seisan! ... Ichi ... Ni ... San! : the kick of Design for Six Sigma in the automotive industry / DFSS in the automotive industry

Sabia, Tracy L., 1976- January 2002 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering; and, (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; in conjunction with the Leaders for Manufacturing Program at the Massachusetts Institute of Technology, 2002. / Includes bibliographical references (leaves 56-57). / In today's aggressive business environment, many manufacturing firms are searching for new strategies or methodologies that will provide some type of competitive advantage. Recently, in order to address that need, the automotive industry has adopted the process of Design for Six Sigma (DFSS). Based upon the philosophies of Six Sigma quality management, Design for Six Sigma focuses on the research and design phases of product development, as its name implies. After the customer requirements are accurately identified, the Design for Six Sigma process insists upon data-driven design decisions consistent with the consumer-defined quality metrics. While the concepts of Design for Six Sigma, and Six Sigma in general, have been very successful for a number of large manufacturing firms such as General Electric and Motorola, it is not clear whether the approach will offer the same benefits for the automotive industry. Using the Parallel Hybrid Truck Program at General Motors Corporation, the largest US automotive manufacturer, as a case study, the implementation of Design for Six Sigma within the automotive industry is explored. Design for Six Sigma clearly presents both advantages and disadvantages. Therefore, in order for Design for Six Sigma to be successful in the automotive industry, the following insights need to be captured and delivered upon. Leadership must be strong and demonstrate a consistent commitment to the process. Both the technical and cultural elements of the process need to be implemented successfully.
Integration of Design for Six Sigma needs to occur with current improvement efforts, and coordination of efforts between various groups in the organization needs to exist. Interestingly, these are classical problems that have faced the automotive industry for many years, and they require a complete paradigm shift from current automotive practices in order to be solved. Furthermore, to better substantiate the impact of Design for Six Sigma, the following improvements to the standard Six Sigma practices and strategy are recommended. A high-level manufacturing position should be created to complement the product engineering representative for the DFSS process. In addition, DFSS projects should be encouraged from the Manufacturing Organization to create buy-in and to leverage their day-to-day understanding of product issues. Finally, like product specifications, Design for Six Sigma specifications should follow a product through the design cycle. / by Tracy L. Sabia. / M.B.A. / S.M.
144

The fabrication and market analysis of lattice mismatched devices

McGrath, John F. (John Francis), 1976- January 2004 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering; and, (S.M.)--Massachusetts Institute of Technology, Sloan School of Management; in conjunction with the Leaders for Manufacturing Program at MIT, 2004. / Includes bibliographical references (p. 80). / Lattice-mismatched semiconductor substrates provide a platform for higher-performance semiconductor devices. Through epitaxial growth on GaAs, the lattice constant of the film can be expanded, resulting in a desired InxGa1-xAs film on which devices can be fabricated. The resulting device exhibits enhanced performance characteristics not achievable on the initial substrate. A self-aligned mesa structure process was developed to fabricate a prototype HBT device utilizing an InxGa1-xAs lattice-mismatched semiconductor substrate. The self-aligned mesa process eliminated the need for complicated metal etch steps by using a lift-off process and deposited contact metal as an etch mask. Selecting etches that exploit the selectivity of the device layers was critical to the success of the process. In addition to developing a process to fabricate a device, a market analysis was performed of the possible application space of the technology and derived products. In assessing the feasibility of the possible products, two main areas were addressed: the markets and the competition within each market. The technology innovation has the ability to attract a variety of markets already served by compound semiconductors. The possible markets, and the competing companies and alternative materials already serving them, are identified and characterized. To determine which markets would be attractive, the full landscape of semiconductor applications is developed. We were then able to list the specifications and customer needs of each application, as well as the materials and companies serving these markets.
The expected performance for the innovation was then benchmarked against what we projected for the current providers in each application space. / (cont.) Through the benchmarking process we were able to highlight markets in which we had a clear performance and cost advantage. / by John Francis McGrath. / S.M.
145

Critical process parameter determination during production start-up

Lindsey, Christine M. (Christine Marie), 1977- January 2004 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering; and, (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; in conjunction with the Leaders for Manufacturing Program at MIT, 2004. / Includes bibliographical references (p. 83-84). / Production start-up data is consistently utilized in a reactive manner during the initial stages of a product's lifecycle. However, if proactive information systems are created before full-scale production starts, ramp-up cycles can be shortened considerably. This project attempts to develop a framework for analyzing process data quickly and efficiently during a new product start-up, in order to provide information for the short-term goals of attaining stable processes as well as guidance on long-term levers for process improvement. First, a summary of previous literature regarding start-up process data, as well as typical stable-process data usage, will be presented. This will provide adequate background for evaluating the gaps typically present during production ramp-up. Then, solutions to these gaps will be discussed in order to develop tools for better data analysis in shorter periods of time. These methods will then be applied to a case study involving the new production of Kodak's DCS Pro 14N digital camera. The Kodak Professional DCS Pro 14N was an amazing leap in technology: a camera with double the resolution for roughly half the price of any product available. Unfortunately, it soon became apparent that demand had been grossly underestimated, straining original resource allocations. Manufacturing struggled from the start and was already a year behind in backorders. With over 1,500 process attributes collected on each camera, the key drivers of quality had yet to be determined. These circumstances made the quick analysis of start-up data vital to effective resource management and yield improvement of the camera. / (cont.)
After using the new process-modeling framework and modified control techniques on the Kodak case, two additional topics will be discussed. First, the many classifications of return on investment in proactive start-up data analysis will be presented. Ranging from waste minimization to higher customer satisfaction, these incentives justify early preparation for start-up data analysis. Finally, future areas of study will be recommended to augment the findings within the thesis. / by Christine M. Lindsey. / M.B.A. / S.M.
146

Predictive capacity planning modeling with tactical and strategic applications

Zeppieri, Michael A. (Michael Anthony), 1975- January 2004 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; and, (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; in conjunction with the Leaders for Manufacturing Program at MIT, 2004. / Includes bibliographical references (p. 78). / The focus of my internship was the development of a predictive capacity planning model to characterize the storage requirements and space utilization for Amazon's Campbellsville (SDF) Fulfillment Center (FC). Amazon currently has a functional model that serves the purpose of capacity planning, but the company was looking for something that would provide more insight into its storage requirements against various demand forecasts and time horizons. With a more comprehensive model in place, Amazon would have the ability to initiate studies aimed at optimizing the key parameters of its capacity flow. Amazon utilizes a fairly robust and complex software solution for allocating items to storage locations as it receives shipments from its network of manufacturers and distributors. Amazon designates its capacity storage areas as Prime, Reserve, or Random Stow. Prime storage locations are bins from which workers pick items to fulfill orders. Reserve storage consists of pallet locations from which workers replenish Prime bins. Random Stow is a special-case form of storage not taken into consideration for the purposes of my internship. The algorithm that determines the capacity allocation for a particular item is driven by two key parameters. The first parameter Amazon refers to as Days of Cover, which serves as a cycle and safety stock control variable. The maximum Days of Cover setting dictates the quantity of a particular item allocated to Prime locations, with any remaining items in the shipment being sent to Reserve. The minimum Days of Cover serves as the trigger for a replenishment move from Reserve to Prime. / (cont.)
The second parameter, designated as Velocity to Bin Type Mapping, associates Prime bin locations with item demand in terms of outbound cubic volume per week. Amazon's Campbellsville facility has three tiers of Prime storage: Library Deep, Case Flow, and Pallet Prime, with each tier representing a larger physical bin size in terms of capacity. The Velocity to Bin Type Mapping parameter essentially sets ranges of demand for each of the bin types, such that items are sent to the bin type matching their demand. Amazon's capacity constraints are seasonal, with the holiday season posing the greatest challenge in terms of having enough Prime locations to fulfill demand. Amazon currently has no means of predicting the effects on its capacity allocation when operators make adjustments to either Days of Cover or Velocity to Bin Type Mapping. While operators do have some degree of insight into the dynamics these parameters have on capacity allocation, they are unable to optimize capacity utilization. The challenge for my internship was to develop a model that would provide insight into the dynamics driving these system parameters. The scope of the internship was to convert a demand prediction based on sales figures into a forecast of capacity storage requirements. The focus of my work centered on providing the logic flow to perform this conversion, implemented in a Microsoft Excel model driven by a series of Visual Basic macros ... / by Michael A. Zeppieri. / M.B.A. / S.M.
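The two parameters the abstract describes, Days of Cover and Velocity to Bin Type Mapping, lend themselves to a compact sketch. The function names, tier bounds, and demand figures below are hypothetical illustrations of the described logic, not Amazon's actual settings or code.

```python
def allocate_shipment(qty_received, daily_demand, max_days_of_cover):
    """Split an inbound shipment between Prime and Reserve storage.

    Prime receives at most `max_days_of_cover` days of forecast demand;
    the remainder of the shipment goes to Reserve pallet locations.
    """
    prime_cap = daily_demand * max_days_of_cover
    to_prime = min(qty_received, prime_cap)
    return to_prime, qty_received - to_prime

def prime_bin_type(weekly_cubic_volume, thresholds):
    """Map an item's outbound cubic volume per week to a Prime bin tier.

    `thresholds` is an ascending list of (upper_bound, bin_type) pairs;
    an unbounded final entry catches the fastest movers.
    """
    for upper, bin_type in thresholds:
        if weekly_cubic_volume <= upper:
            return bin_type
    return thresholds[-1][1]  # fallback if no unbounded final entry

# Hypothetical tiering: slow movers to Library Deep, mid to Case Flow,
# fast movers to Pallet Prime (bounds in cubic feet/week are invented).
TIERS = [(5.0, "Library Deep"), (50.0, "Case Flow"), (float("inf"), "Pallet Prime")]
```

For example, a shipment of 1,000 units of an item selling 40 units/day with a 14-day maximum Days of Cover would place 560 units in Prime and 440 in Reserve under this sketch.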
147

Reducing the cost of quality (COQ) through increased product reliability and reduced process variability / Reducing the COQ through increased product reliability and reduced process variability

Schiveley, Steven C. (Steven Charles), 1974- January 2004 (has links)
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2004. / Includes bibliographical references (p. 61). / Today, Dell, Inc. (Dell) spends millions of dollars each year to prevent product defects from reaching the end customer and to manage those product defects that have escaped to the end customer. The cost of the equipment, labor, and materials to prevent and manage product defects is referred to as Dell's Cost Of Quality (COQ). A large percentage of Dell's COQ is spent by warranty-support and customer service organizations (Services). While the costs of defects most directly affect Dell through the expenditures in such Services organizations, the causes are found predominantly in other organizations, such as Design, Materials, and Manufacturing. Without a causal link that directly ties the costs of quality defects to their causes, it is difficult to justify increased quality-improvement expenditures in product design, product validation, component selection, or manufacturing processes. The Cost Of Quality methodology described here is a practical approach to a) quantifying the financial impact of quality defects, b) shifting the problem-solving focus from "find-and-fix" to prevention, and c) prioritizing quality-improvement investments based on expected financial return. This methodology aligns well with Dell's focus on financial performance and has provided the foundation for a COQ program which has been adopted by Dell's Executive Office as one of five top cost reduction projects for fiscal years 2005-2007.
This paper provides background on the current tools, process, and culture affecting quality at Dell, describes the financial impact of quality on Dell's business, details the evolution of Dell's COQ initiative, and analyzes five possible methods to sustain the COQ effort. / by Steven C. Schiveley. / S.M. / M.B.A.
148

Optimization of in-line semiconductor measurement rates : balancing cost and risk in a high mix, low volume environment

Pandolfo, Christopher R. (Christopher Robert), 1975- January 2004 (has links)
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2004. / Includes bibliographical references (p. 99-101). / Due to a number of market developments over the last decade, semiconductor manufacturing companies, including Intel Corporation, have added significant numbers of primarily high-growth-rate, high-mix, low-volume (HMLV) products to their portfolios. The rapid transition from high-volume manufacturing (HVM) to HMLV manufacturing has caused significant problems. Foremost, the needs of many HMLV customers are different from those of HVM customers and require different operational tradeoffs. Moreover, many of the HVM-focused metrics, tools, systems, and processes have proven ill-suited for managing the added complexities and more varied needs of HMLV customers. This thesis examines many of the problems caused by introducing HMLV products into an HVM wafer fabrication facility (commonly referred to as a fab), and explores potential solutions such as improved cultural and organizational alignment; capacity management and setup elimination; and scheduling and work-in-process management, to name a few. Although the discussion focuses on semiconductor operations, the concepts generalize readily to other companies striving for operational excellence (OpX) in an HMLV environment. In addition to exploring the macroscopic HMLV issues, we also feature an in-depth analysis of one aspect of achieving OpX in the HMLV environment: the optimization of in-line metrology skip rates. Based on a review of the current methods, a new approach is suggested based on a Bayesian economic skip-lot model we call MOST/2. In general, MOST/2 suggests that significant cost savings can be realized, with only modest increases in the material at risk per excursion, if measurement rates are further reduced.
Compared with the other methods analyzed, the data indicate that MOST/2 / (cont.) provides superior cost/risk-balanced results. For the 27 operations analyzed, results include annual cost savings of over $95,000, cycle time savings of over 5.3 hours per lot, operator savings of over 4.2 people per year, and metrology capacity utilization rate reductions of over 65%. Finally, a brief organizational study was conducted to identify political, cultural, and strategic design changes that would bolster long-term operational excellence in the HMLV environment. Suggested changes include better identification of customer needs, improved communication and linking between groups, modification and alignment of factory and performance metrics, and the creation of a stand-alone HMLV organization. / by Christopher R. Pandolfo. / S.M. / M.B.A.
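The cost/risk tradeoff behind skip-lot optimization can be illustrated with a toy model: metrology spend rises linearly with the sampling rate, while expected excursion exposure falls as the rate grows, since an excursion runs undetected for roughly 1/rate lots before a sampled measurement catches it. This is a generic sketch of the idea, not the thesis's MOST/2 model, and every number below is invented.

```python
def annual_cost(rate, lots_per_year, cost_per_meas, excursions_per_year, cost_per_lot):
    """Metrology spend plus expected excursion exposure at sampling rate `rate`.

    Measuring a fraction `rate` of lots costs rate * lots * cost per year.
    An excursion persists for roughly 1/rate lots before detection, so
    expected exposure scales with 1/rate.
    """
    metrology = rate * lots_per_year * cost_per_meas
    exposure = excursions_per_year * cost_per_lot / rate
    return metrology + exposure

# Sweep candidate rates and pick the cheapest (all figures invented).
rates = [i / 100 for i in range(1, 101)]
best = min(rates, key=lambda r: annual_cost(r, 5000, 20.0, 2.0, 1000.0))
```

Under this toy model the cheapest rate balances the two terms, so cutting the measurement rate below 100% saves money until the excursion-exposure term starts to dominate, which is the qualitative behavior the abstract describes.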
149

Applying supply chain methodology to a centralized software licensing strategy

Sheinbein, Rachel Felice, 1975- January 2004 (has links)
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2004. / Includes bibliographical references (p. 76). / Eleven percent of companies spend between $150K and $200K per year per engineer on software development tools, and nine percent spend more than $200K, according to a Silicon Integration Initiative/Gartner/EE Times study from 2002. For Agilent Technologies, these costs amount to tens of millions of dollars each year on software, and for Motorola, more than $100M each year. From current trends in software spending, one can infer that companies will pay even more for software in the future, both because the cost of the software itself is rising and because the technology needed for innovation is increasingly complex. In order to understand whether the total spending on software is appropriate and necessary, Agilent sponsored this project to create a model that analyzes the trade-offs between the cost of software and the cost of software unavailability. The model treats software licenses as supplies to the development of a product, and thus supply chain methodologies such as inventory (cost of licenses), stockouts (cost of unavailability), and service level are applied. The goal of the model is to minimize software costs while maintaining a satisfactory level of service. The thesis explains the model and then shows the results from applying it to four software products that Agilent currently uses. The results show that in the absence of this type of analysis, Agilent spends more than necessary on software licenses. In fact, Agilent can reduce costs by at least 5%. This model can be used by Agilent and other companies to optimize software purchases. / by Rachel Felice Sheinbein. / S.M. / M.B.A.
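Treating licenses as inventory and denied checkouts as stockouts, the service-level calculation the abstract alludes to can be sketched as follows. The normal approximation for concurrent usage and the figures below are assumptions for illustration, not the thesis's actual model.

```python
from math import ceil
from statistics import NormalDist

def licenses_for_service_level(mean_concurrent, sd_concurrent, service_level):
    """Smallest license count such that a checkout request is denied at
    most (1 - service_level) of the time, treating concurrent usage as
    approximately normal.
    """
    z = NormalDist().inv_cdf(service_level)  # safety factor for the target
    return ceil(mean_concurrent + z * sd_concurrent)

# Illustrative: 40 concurrent users on average, sd 8, 95% service target.
n = licenses_for_service_level(40, 8, 0.95)
```

The inventory analogy is visible in the structure: the mean term is the "cycle stock" of licenses in steady use, and the z-scaled term is a safety stock against usage spikes, so raising the service target or the usage variability raises the license count just as it would a stocking level.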
150

Enterprise level value stream mapping and analysis for aircraft carrier components

Frenkel, Yuliya M., 1977- January 2004 (has links)
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2004. / Includes bibliographical references (p. 95-96). / Northrop Grumman Newport News is committed to implementing lean at the enterprise level. This thesis focuses on creating a global, high-level information and material value stream map for a specified pipe assembly. It identifies the largest areas of waste in the value stream and their root causes. The recommendations assist with the reduction and elimination of the major time delays, inventory buildups, rework, excessive processes, and other waste in the system. The pipe assembly chosen as the basis for the enterprise value stream map is part of a system newly developed for the current aircraft carrier. The pipe assembly is representative of other pipe assemblies fabricated in the shipyard, so challenges experienced with the manufacturing and flow of the selected assembly are likely to be seen in many other pipe assemblies in the facility. A large number of assemblies were examined to determine the root causes of delivery problems. The analysis was based on the criticality of the ship need date. The root causes of late assembly delivery were found to be inadequate material inventory levels in the warehouses, lack of fabrication timeline coordination between fabrication shops, late engineering drawing revisions, underestimated fabrication durations, late supplier delivery, late material purchase order placement, and lost material. Suggestions are provided to improve operational efficiency by targeting the elimination of these root causes of delayed assembly fabrication.
These include material-ordering process reorganization, shop-loading variability elimination, fabrication timeline alignment, metric realignment, and rework system prioritization. Recommendations for future work are / (cont.) concentrated on the control of stock material inventory levels, alignment of incentives across the enterprise, and reorganization of the planning processes. / by Yuliya M. Frenkel. / S.M. / M.B.A.
