11

Autonomous system for cylindrical plunge grinding

Xiao, Guoxian 01 January 1996
An autonomous system has been developed for cylindrical plunge grinding to optimize the operating parameters while taking complete sets of part quality and machine related constraints into account. The system is capable of adjusting the operating parameters from part to part to minimize cycle time while satisfying part quality and machine related constraints in response to in-process and post-process measurements which characterize the processing conditions and part quality. Two important constraint models for out-of-roundness and taper have been developed analytically and verified experimentally. Existing process and constraint models have also been improved and verified experimentally. Two optimization strategies have been developed to find closed-form solutions for real-time control to minimize the grinding time and also to minimize the production time which includes optimization of the dressing interval. The system is capable of coping with the quantitative uncertainty of the process by using a newly developed predictive model for the uncertain parameters and employing parameter estimation to update the models from part to part. Modified strategies for accelerated control have also been developed to reduce transient times. Optimization strategies were first evaluated in simulation. Practical implementation and testing of the autonomous system was then performed on an internal grinder, retrofitted with electrical drives and sensors and interfaced to a personal computer for data acquisition, system identification, and machine control. The system has also been successfully applied on a production machine in industry. The results of this investigation provide the scientific and technological basis for commercial development of a new generation of grinding systems and for retrofitting of older grinders.
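The closed-form flavor of such a solution can be illustrated with a toy sketch (the limits and the simple time model below are invented for illustration, not taken from the thesis): the fastest admissible infeed rate is the tightest of the machine- and quality-related upper bounds, and the cycle time follows directly.

```python
# Toy closed-form sketch of constrained cycle-time minimization.
# All numbers and the two-term time model are illustrative assumptions.
def optimal_infeed(v_machine_max, v_roundness_max, v_taper_max):
    """Fastest infeed rate that satisfies every upper-bound constraint."""
    return min(v_machine_max, v_roundness_max, v_taper_max)

def cycle_time(stock_mm, infeed_mm_s, spark_out_s):
    """Rough-grind time plus a fixed spark-out period."""
    return stock_mm / infeed_mm_s + spark_out_s

v_star = optimal_infeed(0.08, 0.05, 0.06)   # mm/s; the roundness limit binds
t_star = cycle_time(0.2, v_star, 2.0)       # 0.2/0.05 + 2.0, about 6.0 s
```

In the thesis's setting the bounds themselves come from identified process and constraint models and are re-estimated part to part, which is what makes the closed-form update usable for real-time control.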
12

Design of plantwide control systems

Mahajanam, Rama Venkata 01 January 2001
Plantwide control refers to the control of an entire plant, consisting of many interconnected unit operations. The design of a plantwide control system is an open-ended problem, complicated by the large number of choices available for control structure, controller type and so on. A systematic method is thus needed to decompose this large-scale problem into more manageable sub-problems. It is the goal of this thesis to develop such methods. We have developed a hierarchical procedure for synthesizing a plantwide control system. The thrust of the procedure is to design a plantwide control system that ensures optimal operation in the presence of unmeasured disturbances and model uncertainty. The procedure decomposes the plantwide control problem into six levels of decisions. As one proceeds down the hierarchy, more modeling, not structural details, are added. Alternatives are generated at each level and evaluated based on profitability. Both steady-state economics and dynamic controllability are important criteria in the selection of an optimal control strategy. Several practical issues arise in the application of the procedure to typical industrial problems. Short-cut methods and new tools have been proposed to deal with these issues more effectively and efficiently.
13

Process selection for assembled products based on an overall evaluation of cost, time and performance

Hu, Weiyi 01 January 1996
Design For Manufacturing (DFM) and Design For Assembly (DFA) have become important design philosophies in product design practice. Component process selection and assembly combination decisions are key issues in DFM and DFA. As philosophies, it is important and imperative to derive some general guidelines in these aspects to support product design at early design stages. Since low product cost, short time-to-market, and high product performance are the three most important goals in the development of consumer products, and since injection molding and stamping are the two most commonly used manufacturing processes in the production of consumer products, in this dissertation, we focus on the process selection of injection molding and stamping based on the comparison of product costs (including tooling cost, processing cost, material cost and assembly cost), production time periods (including tooling time period, processing time period, and assembly time period), and product performance (including strength, stiffness, corrosion resistance, aesthetics, etc.). Some general guidelines and conclusions to determine if a component should be injection molded or stamped, and if an assembly should be combined into a single component or not are generated. A hybrid evaluation method based on the Pahl-Beitz method and the Dominic method is developed for product performance evaluation. A product evaluation method based on an overall evaluation of cost, time, and performance is proposed as well. In addition, several examples with components and assemblies from real products are analyzed and evaluated by using the guidelines and the overall evaluation method to illustrate the concepts and the feasibility of the approaches introduced.
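The volume-dependent flavor of such guidelines can be illustrated with a toy per-part cost comparison (all figures below are invented, not the dissertation's data): amortized tooling dominates at low production volume, per-part costs at high volume.

```python
def total_cost(tooling, processing, material, assembly, volume):
    """Per-part cost: amortized tooling plus the per-part cost terms."""
    return tooling / volume + processing + material + assembly

def select_process(candidates, volume):
    """Pick the cheaper process at a given production volume.
    `candidates` maps a process name to its cost breakdown."""
    return min(candidates,
               key=lambda p: total_cost(volume=volume, **candidates[p]))

# Hypothetical breakdowns ($): molding has costlier tooling, cheaper parts
processes = {
    "injection_molding": dict(tooling=50000.0, processing=0.30,
                              material=0.20, assembly=0.10),
    "stamping":          dict(tooling=20000.0, processing=0.25,
                              material=0.35, assembly=0.10),
}
best_low  = select_process(processes, 10_000)      # stamping wins
best_high = select_process(processes, 1_000_000)   # molding wins
```

With these invented numbers the crossover volume is 300,000 parts (the $30,000 tooling difference divided by the $0.10 per-part difference), which is the kind of break-even guideline the dissertation derives from real cost data.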
14

A knowledge-based manufacturing advisory system for the economical design of metal stampings

Dastidar, Pratip Ghose 01 January 1991
Knowledge about the design of parts for metal stamping was obtained through extensive discussions with expert metal stampers. This information was organized in a form of a relative cost model which provides a systematic approach to the analysis of the manufacturing complexity of metal stampings. The resultant Design for Stamping model permits designers to evaluate part designs in the early concept stage of the design process, and to quantitatively assess the effect of changing part attributes on relative cost. A coding and classification system and a database of cost-related information are key features of the Design for Stamping model. The relative cost model enables one to compare design options in terms of total relative cost, and to determine the relative cost savings associated with design changes.
15

Interpretation of CAD models for an automatic machine programming planner

Kang, Tzong-Shyan 01 January 1991
A generalized approach to fast interpretation of objects and their features has so far eluded researchers. In manufacturing, this interpretation can be approached from a vision point of view or from a CAD data perspective. Presently, CAD systems are widely used in several aspects of manufacturing, and such systems will eventually be data driven. It is therefore more efficient to use CAD data for object reasoning in manufacturing. Components can be modelled on a CAD system using various modeling techniques. The method of representation of their geometric information however differs from one CAD system to another. The advent of Initial Graphics Exchange Specification (IGES) now makes it possible to represent CAD data in a neutral and standard manner. This research concentrates on developing an object interpretation system. This interpretation system contains an automatic form feature extractor and a product modeller which acquired the technological information, such as tolerances (including position and form tolerances) and surface finish through users' interface. This research describes a scheme for recognizing and representing features from any CAD system using a neutral graphics data interface. The scheme developed is based on Graph-based feature representation, where features are represented by a set of faces as well as their topological adjacency. Based on theories related to group and graph theory, the strategies for classifying and representing features and methods of decomposing a complicated feature into several simpler features for recognition purposes has been developed. Finally, this work concentrates on ideas related to developing a product modeller. This modeller assists the interpretation of features by providing non-geometric information obtained through the designer. Such information can not be obtained directly from current CAD systems but are nevertheless important in the final attainment of the integration of design and manufacturing systems.
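As a toy illustration of the graph-based representation (the face names and the slot pattern below are invented, not the thesis's taxonomy), a feature can be stored as a set of faces plus their topological adjacency, and recognition reduces to comparing canonical signatures:

```python
def adjacency_signature(faces, edges):
    """Canonical signature of a feature's face-adjacency graph:
    face count plus the sorted adjacency degree of each face."""
    degree = {f: 0 for f in faces}
    for f1, f2 in edges:
        degree[f1] += 1
        degree[f2] += 1
    return (len(faces), tuple(sorted(degree.values())))

# A through-slot modeled as two walls each adjacent to a bottom face
slot = adjacency_signature(
    ["wall_left", "bottom", "wall_right"],
    [("wall_left", "bottom"), ("bottom", "wall_right")],
)
# Recognizing a candidate feature then becomes a signature lookup
feature_library = {slot: "through_slot"}
```

A degree signature is only a necessary condition for a graph match; a full recognizer would test (sub)graph isomorphism, which is where the thesis's decomposition of complicated features into simpler ones comes in.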
16

Preliminary bidding model using analytic hierarchy process and genetic algorithms

Liu, Xiaoyu 08 July 2016
A bidding process is one of the main activities of construction project management. For general contractors, the chance of winning a project in a bidding process depends directly on selecting the best subcontractors, so an effective and accurate method of choosing subcontractors directly affects the profit earned by the general contractor. Traditionally, a general contractor makes the selection based on its managers' subjective opinions of subcontractors' price quotations, an approach that can result in low efficiency and quality of work. This thesis presents an Analytic Hierarchy Process (AHP) method that weighs each factor when a subcontractor is evaluated, and a Genetic Algorithm built on the AHP results to establish a subcontractor value rating system for subcontractor selection. A real project example is evaluated to demonstrate the feasibility of the method. This thesis is of value to general contractors, providing a tool for the bidding process in the early stage of construction projects.
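A minimal sketch of the AHP step (the criteria and the judgment matrix below are invented for illustration): criterion weights are the normalized principal eigenvector of a pairwise comparison matrix, with Saaty's consistency index as a sanity check on the judgments.

```python
import numpy as np

def ahp_weights(pairwise):
    """Weights = normalized principal eigenvector of the pairwise
    comparison matrix; also returns Saaty's consistency index."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)              # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                             # weights sum to 1
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)     # CI = (lambda_max - n)/(n - 1)
    return w, ci

# Hypothetical judgments: price vs. quality vs. schedule reliability
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
w, ci = ahp_weights(A)   # price weighted highest; CI well under 0.1
```

On top of such weights, a genetic algorithm can search over candidate subcontractor combinations scored by the weighted criteria, which in the thesis's framing yields the value rating used for selection.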
17

A qualitative phenomenological study: Enhanced, risk-based FAA oversight on Part 145 maintenance practices

Sheehan, Bryan G. 22 November 2016
The purpose of this qualitative phenomenological study was to examine the phenomenon of enhanced, risk-based Federal Aviation Administration (FAA) oversight of Part 145 repair stations that performed aircraft maintenance for Part 121 air carriers between 2007 and 2014 in Oklahoma. Specifically, this research explored what operational changes have occurred in the domestic Part 145 repair station industry, such as variations in management or hiring practices, training, recordkeeping and technical data, inventory and aircraft parts supply-chain logistics, equipment, and facilities. After interviews with 12 managers from Part 145 repair stations in Oklahoma, six major theme codes emerged from the data: quality of oversight before 2007, quality of oversight after 2007, advantages of oversight, disadvantages of oversight, status quo of oversight, and process improvement. From those six major theme codes, 17 subthemes emerged that were used to explain the phenomenon of enhanced oversight in the Part 145 repair station industry. Forty-two percent of the participants indicated a weak FAA oversight system that has hindered the continuous process improvement programs in their repair stations, and some were financially burdened after hiring additional full-time quality assurance inspectors specifically to manage enhanced FAA oversight. Notwithstanding, the participants indicated that the FAA must apply its surveillance on a more standardized and consistent basis. They want to see this standardization in how FAA inspectors interpret regulations, and the same quality of oversight practiced for all repair stations, particularly those that are repeat violators and fail to comply with federal aviation regulations. They believed that when the FAA enforces standardization consistently, repair stations can become more efficient and safer in the performance of their scope of work for the U.S. commercial air transportation industry.
18

Properties of Mixing SAC Solder Alloys with Bismuth-Containing Solder Alloys for a Low Reflow Temperature Process

Swanson, Tayler John 03 May 2019
Establishing lower-temperature soldering of electronic assemblies, with process temperatures below 217 °C comparable to the once-common and still-preferred eutectic tin-lead (SnPb) manufacturing processes, has been the subject of extensive research. This research contributes data on mixed solder alloy assemblies that can be formed at lower process temperatures. There are many environmental and economic benefits to avoiding the reliability concerns of assembling electronics at the standard high peak temperatures of 230 °C to 260 °C. To reduce this temperature, bismuth-containing solder pastes are mixed with the standard high-temperature SAC solders for electronic assemblies. The materials evaluated are (in weight percentages) the 96.5Tin/3Silver/0.5Copper (Sn/Ag/Cu) solder ball mixed with each of three solder pastes: the eutectic 58Bismuth/42Tin (58Bi/42Sn), 57Bi/42Sn/1Ag, and a proprietary alloy with a lower bismuth content and various micro-alloys, 40-58Bi/Sn/X (X representing proprietary micro-alloys or doping). In the assembly portion of this research, the solder alloys were exposed to three different peak temperatures: 180 °C, 195 °C, and 205 °C. Another reflow profile attribute of focus was the time above 138 °C, the melting point of the eutectic Sn58Bi alloy. The ball-and-paste assembly portion of this research used times above melting of 120 s and 240 s to represent process extremes and to verify their significance in improving mixing-level results. These times above melting did not consistently improve the mixing levels and are therefore neither recommended nor required for mixed low-temperature solder assemblies. The results of this study suggest that the recommended, optimum reflow profile has a time above the melting point of no more than 90 seconds for mixed solder alloy assemblies in "low" (< 200 °C) peak-temperature reflow oven profiles. This attribute ensures a reflow window similar to that of eutectic SnPb processing. The second leg of this research used a component assembly of a large ball grid array at the same peak temperatures with a single time above 138 °C of 90 s. This "large" (> 20 mm per side) component is a SAC405 solder-balled BGA with dimensions of 42 × 28 × 0.8 mm. With any large component the temperature gradient across the component is a risk factor, and the results show significant differences in mixing from the center of the component to the edge due to an average 2.3 °C temperature difference during convection reflow. The average mixing levels recorded at Tpeak = 180 °C were 47% for the 58Bi paste, 47% for 57Bi, and 44% for 40-58Bi; at Tpeak = 195 °C they were 69% for 58Bi, 77% for 57Bi, and 57% for 40-58Bi. These conclusions match previous work identifying reflow peak temperature as a significant factor in the mixing percentage. This work's goal was to add to the knowledge of the electronics industry and to better understand the microstructure and mixing mechanisms of Bi/Sn/X-SAC solder joints for low-temperature reflow assembly processes.
19

Model Predictive Control for Energy Efficient Buildings

Ma, Yudong 11 October 2013
The building sector consumes about 40% of energy used in the United States and is responsible for nearly 40% of greenhouse gas emissions. Energy reduction in this sector by means of cost-effective and scalable approaches will have an enormous economic, social, and environmental impact. Achieving substantial energy reduction in buildings may require rethinking the entire process of design, construction, and operation of buildings. This thesis focuses on advanced control system design for energy efficient commercial buildings.

Commercial buildings are plants that process air in order to provide comfort for their occupants. The components used are similar to those employed in the process industry: chillers, boilers, heat exchangers, pumps, and fans. The control design complexity resides in adapting to time-varying user loads as well as occupant requirements, and quickly responding to weather changes. Today this is easily achievable by oversizing the building components and using simple control strategies. Building controls design becomes challenging when predictions of weather, occupancy, renewable energy availability, and energy price are used for feedback control. Green buildings are expected to maintain occupant comfort while minimizing energy consumption, being robust to intermittency in renewable energy generation and responsive to signals from the smart grid. Achieving all these features in a systematic and cost-effective way is challenging. The challenge is even greater when conventional systems are replaced by innovative heating and cooling systems that use active storage of thermal energy with critical operational constraints.

Model predictive control (MPC) is the only control methodology that can systematically take future predictions into account during the control design stage while satisfying the system operating constraints. This thesis focuses on the design and implementation of MPC for building cooling and heating systems. The objective is to develop a control methodology that can (1) reduce building energy consumption while maintaining indoor thermal comfort by using predictive knowledge of occupancy loads and weather information, (2) easily and systematically take into account the presence of storage devices, demand response signals from the grid, and occupant feedback, (3) be implemented on existing inexpensive and distributed building control platforms in real time, and (4) handle model uncertainties and prediction errors both at the design and implementation stage.

The thesis is organized into six chapters. Chapter 1 motivates our research and reviews existing control approaches for building cooling and heating systems.

Chapter 2 presents our approach to developing low-complexity control-oriented models learned from historical data. Details on models for building components and space thermal response are provided. The thesis focuses on the dynamics of both energy conversion and storage as well as energy distribution by means of heating, ventilation, and air conditioning (HVAC) systems.

In Chapter 3, deterministic model predictive control problems are formulated for the energy conversion and energy distribution systems to minimize energy consumption while maintaining comfort requirements and operational constraints. Experimental and simulation results demonstrate the effectiveness of the MPC scheme, and reveal significant energy reduction without compromising indoor comfort requirements.

As the size and complexity of buildings grow, the MPC problem quickly becomes computationally intractable to solve in a centralized fashion. This limitation is addressed in Chapter 4. We propose a distributed algorithm to decompose the MPC problem into a set of small problems using dual decomposition and fast gradient projection. Simulation results show good performance and computational tractability of the resulting scheme.

The MPC formulations in Chapters 3 and 4 assume perfect knowledge of the system model, load disturbances, and weather. In practice, however, predictions differ from actual realizations. In order to take prediction uncertainties into account at the control design stage, stochastic MPC (SMPC) is introduced in Chapter 5 to minimize expected costs and satisfy constraints with a given probability. In particular, the proposed novel SMPC method applies feedback linearization to handle system nonlinearity, propagates the state statistics of linear systems subject to finite-support (non-Gaussian) disturbances, and solves the resulting optimization problem using large-scale nonlinear optimization solvers.
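A deterministic MPC step of the kind formulated for a building can be sketched as a small linear program (the one-zone thermal model, its coefficients, and the comfort band below are invented for illustration, not the thesis's identified models): minimize total heating energy over the horizon subject to the indoor temperature staying within a comfort band.

```python
import numpy as np
from scipy.optimize import linprog

a, b = 0.9, 0.5            # thermal retention and heater gain (assumed)
T0, T_out = 18.0, 5.0      # initial indoor and constant outdoor temperature
N = 8                      # prediction horizon (steps)
T_min, T_max = 20.0, 24.0  # comfort band

# Model: T[k+1] = a*T[k] + b*u[k] + (1 - a)*T_out.  Express each future
# temperature as an affine function of the heating inputs u[0..N-1].
Phi = np.zeros((N, N))     # Phi[k, j] = dT[k+1]/du[j]
free = np.zeros(N)         # free response with u = 0
t = T0
for k in range(N):
    for j in range(k):
        Phi[k, j] = a * Phi[k - 1, j]
    Phi[k, k] = b
    t = a * t + (1 - a) * T_out
    free[k] = t

# Minimize total heating energy subject to the comfort band and u >= 0.
res = linprog(
    c=np.ones(N),
    A_ub=np.vstack([-Phi, Phi]),
    b_ub=np.concatenate([free - T_min, T_max - free]),
    bounds=[(0, None)] * N,
)
u_opt = res.x
temps = free + Phi @ u_opt   # predicted trajectory stays inside the band
```

In receding-horizon operation only the first input u_opt[0] is applied, the state is re-measured, and the program is solved again, which is how prediction errors are partially absorbed even in the deterministic formulation.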
20

A validation of the enterprise management engineering approach to knowledge management systems engineering

Mac Garrigle, Ellen F. 18 June 2014
Knowledge management is one of the current buzzwords gaining popularity almost daily within the business world. Much attention has been paid to the theory and justification of knowledge management (KM) as an effective business and organizational practice. Much less attention has been paid to the more specific issues of effective implementation of knowledge management, or to the financial benefit that could result from an effective system implementation. As the concept of KM becomes more generally accepted, knowledge management systems (KMS) are becoming more prevalent. A KMS is often considered simply another information system to be designed, built, and supported by the IT department, and in practice many KM system development efforts are not successful. There is frequently a perception that strict adherence to development processes produces an excessive time lag, rigor, and formality that will disrupt the desired free flow of knowledge. Professor Michael Stankosky of GWU has posited a more flexible variation of the usual systems engineering (SE) approach, tailored specifically to the KM domain and known as Enterprise Management Engineering (EME). This approach takes the four major pillars of KM as identified by GWU research in this area (Leadership, Organization, Technology, and Learning) and adapts eighteen key SE steps to accommodate the more flexible and imprecise nature of "knowledge".

Anecdotal study of successful KMS developments has shown that many of the more formal processes imposed by systems engineering (such as defining strategic objectives before beginning system development) serve a useful purpose. Consequently, an integrated systems engineering process tailored specifically to the KM domain should lead to more successful implementations of KM systems. If this is so, organizations that have followed some or all of the steps in this process will have designed and deployed more "successful" KMS than organizations that have not. To support and refine this approach, a survey was developed to determine usage of the 18 steps identified in EME. The results were then analyzed against an objective financial measurement of organizational KM to determine whether a correlation exists. This study is intended to test the efficacy of the EME approach to KM implementation.

For the financial measurement data, the subject list of organizations for this study used a measure of intangible valuation developed by Professor Baruch Lev of NYU called Knowledge Capital Earnings (KCE). This is the amount of earnings that a company with good "knowledge" has left over once its earnings based on tangible financial and physical assets have been subtracted from overall earnings. KCE can then be used to determine the Knowledge Capital (KC) of an organization. This in turn provides two quantitative measures (one relative, one absolute) that can be used to define a successful knowledge company.

For this study, Lev's research from 2001 was updated using more recent financial data. Several of these organizations completed a survey instrument based on the 18 points of the EME approach. The results for the 18 steps were compared against each other and against each organization's KC scores. The results show that there is a significant correlation between EME and the relative KC measurement, and select EME steps do correlate significantly with a high KC value. Although this study, being the first validation effort, does not show provable causation, it does demonstrate a quantifiable correlation and association between EME and successful KM implementation. This in turn should contribute to the slim body of objective knowledge on the design, deployment, and measurement of KM systems.
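The KCE arithmetic described above can be sketched as follows; the expected-return rates and the capitalization (discount) rate are illustrative assumptions in the spirit of Lev's method, not figures taken from this study.

```python
def knowledge_capital_earnings(normalized_earnings,
                               physical_assets, financial_assets,
                               r_physical=0.07, r_financial=0.045):
    """Residual earnings after subtracting the expected return on
    tangible (physical and financial) assets; rates are assumed."""
    return (normalized_earnings
            - r_physical * physical_assets
            - r_financial * financial_assets)

def knowledge_capital(kce, discount_rate=0.105):
    """Capitalize the KCE stream into a Knowledge Capital stock."""
    return kce / discount_rate

# Hypothetical firm, figures in $ billions
kce = knowledge_capital_earnings(2.0, 10.0, 8.0)   # about 0.94
kc = knowledge_capital(kce)                        # about 8.95
```

KC can then be used both in absolute terms and relative to a firm's size, which mirrors the study's two quantitative measures of a "successful knowledge company".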
