21

The Application of Gage R&R Analysis in a Six Sigma Case of Improving and Optimizing an Automotive Die Casting Product’s Measurement System

Ren, Qizheng 01 October 2015 (has links)
With the rapid development of automation technology in automotive manufacturing, large-scale, efficient production is the current trend, and automotive companies and their suppliers seek measurement systems built on accurate, automated measuring instruments. Measuring instruments with unreliable accuracy and stability, however, produce erroneous measurements and wrong quality decisions that cost manufacturers heavily. Measurement system analysis is an effective method for identifying and eliminating erroneous measurements to ensure adequate reliability. An automotive transmission die-casting parts supplier, company T, was suffering serious profit losses because erroneous measurements from one product's measurement system caused it to deliver nonconforming products to its customers. The researcher conducted a study applying Six Sigma methodology to find the root cause of the erroneous measurements and eliminate them to restore adequate reliability. The DMAIC (Define, Measure, Analyze, Improve, Control) process served as the framework for the study, and a measurement system analysis method, Gage R&R, was used to run several experiments for data collection and analysis. By running the experiments and analyzing the results, the researcher detected the source of variation and found the root cause of the erroneous measurements. Based on these findings, the researcher corrected the erroneous measurements and improved the problematic measurement system's performance.
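The Gage R&R analysis at the heart of this study partitions measurement variation into repeatability (equipment) and reproducibility (appraiser) components, typically via a crossed parts × operators × trials ANOVA. Below is a minimal sketch of that calculation in Python; the synthetic data, study dimensions, and acceptance threshold are illustrative assumptions, not values from the thesis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Crossed study design: p parts x o operators x r repeat trials (synthetic data).
rng = np.random.default_rng(0)
p, o, r = 10, 3, 3
rows = [(i, j, 10 + 0.5 * i + rng.normal(0, 0.1))
        for i in range(p) for j in range(o) for _ in range(r)]
df = pd.DataFrame(rows, columns=["part", "operator", "y"])

model = ols("y ~ C(part) + C(operator) + C(part):C(operator)", data=df).fit()
tab = sm.stats.anova_lm(model, typ=2)
ms = tab["sum_sq"] / tab["df"]  # mean squares

# Standard variance-component estimates (negative estimates clipped to zero).
repeatability = ms["Residual"]
interaction = max((ms["C(part):C(operator)"] - ms["Residual"]) / r, 0)
reproducibility = interaction + max(
    (ms["C(operator)"] - ms["C(part):C(operator)"]) / (p * r), 0)
part_to_part = max((ms["C(part)"] - ms["C(part):C(operator)"]) / (o * r), 0)

grr = repeatability + reproducibility
total = grr + part_to_part
print(f"%GRR = {100 * np.sqrt(grr / total):.1f}%")  # common guideline: <10% acceptable
```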
22

Factors Affecting the Longevity of the Department of Industrial Technology and Education at Utah State University 1985-2005: A Case Study

Cloward, Jerry 01 May 2009 (has links)
A qualitative case study method was used to discover the factors behind the longevity of the Technology Education program at Utah State University. While studies have documented the many Technology Education programs that have closed, none had examined individual programs that remained open. This study also consolidates relevant information on the program. The primary data were obtained from interviews with the professors involved with the program during the study's timeframe; the interview data were evaluated and organized into themes, from which the factors were derived. The many factors presented in this study demonstrate the need for this holistic examination of the problem, and the findings provide a basis for studying other successful Technology Education programs.
23

Strategic Evaluation of University Knowledge and Technology Transfer Effectiveness

Tran, Thien Anh 07 June 2013 (has links)
Academic knowledge and technology transfer has been growing in importance in both research and practice. A critical question in managing this activity is how to evaluate its effectiveness. The literature shows an increasing number of studies addressing this question, but it also reveals important gaps that need more research. One novel approach is to evaluate effectiveness from an organizational point of view: measuring how much a university's knowledge and technology transfer fulfills the institution's mission. This research develops a Hierarchical Decision Model (HDM) to measure the contribution of various knowledge and technology transfer mechanisms to achieving that mission. Performance values obtained from the university under investigation are applied to the model to produce a Knowledge and Technology Transfer Effectiveness Index for that university. The Index helps an academic institution assess the current performance of its knowledge and technology transfer against its mission, and the model helps decision makers discover where the university performs well and where it needs more attention. In addition, a university can benchmark its performance against its peers to set a roadmap for improvement. To the author's knowledge, this is the first index in the literature that evaluates the effectiveness of university knowledge and technology transfer from an organizational perspective, and the first method that incorporates both hard technology transfer data and expert judgments into that evaluation. Practitioners in academic technology transfer can also apply the model to evaluate their institutions quantitatively for strategic decision making.
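An HDM-style index of this kind combines expert-judgment weights down the hierarchy (mission objectives, then transfer mechanisms) with normalized performance values. The sketch below illustrates the arithmetic only; the objective names, weights, mechanisms, and performance values are hypothetical placeholders, not the dissertation's model.

```python
# Expert-judgment weights of mission objectives (sum to 1).
objectives = {"research": 0.40, "education": 0.35, "service": 0.25}

# Contribution of each transfer mechanism to each objective (columns sum to 1).
contribution = {
    "licensing":    {"research": 0.30, "education": 0.10, "service": 0.30},
    "spin-offs":    {"research": 0.25, "education": 0.20, "service": 0.40},
    "publications": {"research": 0.45, "education": 0.70, "service": 0.30},
}

# Normalized performance of the university on each mechanism, in [0, 1].
performance = {"licensing": 0.6, "spin-offs": 0.3, "publications": 0.8}

# Index: performance weighted by each mechanism's mission-level contribution.
index = sum(
    performance[m] * sum(objectives[o] * contribution[m][o] for o in objectives)
    for m in contribution
)
print(f"Knowledge & Technology Transfer Effectiveness Index: {index:.3f}")
```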
24

The Economic Impacts of Technical Change in Carbon Capture

Rasmussen, Peter G. 01 January 2012 (has links) (PDF)
There is a general consensus in the literature that carbon capture and storage (CCS), a technology that controls CO2 emissions from fossil fuel power plants, will be critical for reducing CO2 emissions to the concentration stabilization levels prescribed in the literature. We completed three projects that advance the understanding of how technical change in carbon capture affects both near-future CCS costs and the long-term economy. First, we reviewed the literature on near-future capture cost estimates to establish how expensive carbon capture will be in the near future; the literature indicates that pre-combustion capture is the least expensive capture technology because its combustion process best facilitates capture. Second, we explored the limits of incremental technical change in each near-future capture technology using a performance-cost model; the sensitivity analysis showed that pre-combustion capture could remain the least expensive capture technology after incremental technical change has occurred. Third, we used an integrated assessment model (IAM) to investigate how rapid incremental and breakthrough technical change in carbon capture could affect the electric energy market, total CO2 abatement cost, and CO2 price over time. We modeled breakthrough technical change using data from a paper that provides cost and performance estimates for a radical carbon capture technology still in the early stages of research and development (R&D) (Baker, Chon, & Keisler, 2009). Given a chemical looping breakthrough, CCS dominates electricity market share over time.
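A common way capture-cost comparisons of this kind are normalized in the CCS literature is the cost of CO2 avoided, which relates the levelized-cost premium of a capture plant to its emissions reduction. A minimal sketch follows; this is a standard metric shown with illustrative numbers, not the thesis's performance-cost model itself.

```python
def cost_of_co2_avoided(lcoe_ccs, lcoe_ref, emis_ref, emis_ccs):
    """$/tCO2 avoided, from levelized costs ($/MWh) and emission rates (tCO2/MWh)."""
    return (lcoe_ccs - lcoe_ref) / (emis_ref - emis_ccs)

# Illustrative: a reference plant vs. the same plant with pre-combustion capture.
print(cost_of_co2_avoided(lcoe_ccs=95.0, lcoe_ref=60.0,
                          emis_ref=0.80, emis_ccs=0.10))  # -> 50.0 $/tCO2
```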
25

Service Dependency Analysis via TCP/UDP Port Tracing

Clawson, John K 01 June 2015 (has links) (PDF)
Enterprise networks are traditionally mapped at layers two or three, providing a view of which devices are connected to different parts of the network infrastructure. A method was developed to map connections at layer four, providing a view of interconnected systems and services rather than network infrastructure. These data were graphed and displayed in a web application. The information proved beneficial when troubleshooting enterprise application problems, identifying connections between systems and imbalanced clusters.
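A layer-four collector of this kind can be approximated by periodically sampling a host's established TCP connections and emitting service-to-endpoint edges for central aggregation. The sketch below uses psutil as an illustrative stand-in; the thesis's actual collector, schema, and web application are not reproduced here.

```python
import psutil
from collections import Counter

# Sample established TCP connections once (may require elevated privileges
# to resolve owning PIDs on some systems).
edges = Counter()
for c in psutil.net_connections(kind="tcp"):
    if c.status != psutil.CONN_ESTABLISHED or not c.raddr:
        continue
    proc = psutil.Process(c.pid).name() if c.pid else "unknown"
    # Edge: (local process, local port) -> (remote host, remote port)
    edges[(proc, c.laddr.port, c.raddr.ip, c.raddr.port)] += 1

for (proc, lport, rip, rport), n in edges.most_common():
    print(f"{proc}:{lport} -> {rip}:{rport}  ({n} connections)")
```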
26

Integrating Process Mining with Discrete-Event Simulation Modeling

Liu, Siyao 01 November 2015 (has links) (PDF)
Discrete-event simulation (DES) is an invaluable tool that organizations can use to better understand, diagnose, and optimize their operational processes. Studies have shown that in a typical DES exercise, the greatest amount of time is spent developing an accurate model of the process under study. Process mining, a related field, focuses on using historical data stored in software databases to accurately recreate and analyze business processes. Using process mining techniques to rapidly develop DES models can drastically reduce model-building time, ultimately enabling organizations to identify and correct operational shortcomings more quickly. Although process mining research has advanced significantly, several issues still prevent widespread industry adoption of current methods. One such issue, examined in this study, is the lack of cross-compatibility between process mining tools and other process analysis tools. Specifically, this study develops and characterizes a method for converting mined process models into discrete-event simulation models. The method uses a plugin written for the ProM Framework, an existing collection of process mining tools, that takes a mined process model as input and outputs an Excel workbook presenting the process data in a format more easily read by DES packages. Two event logs mimicking real-world processes were used to develop and validate the plugin. The plugin successfully extracted the critical process data from the mined process model and converted it into a format more easily used by DES packages. Several limitations constrain model accuracy, but the plugin shows that converting process models to basic simulation models is possible; future research can address these limitations to improve accuracy.
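The conversion the plugin performs can be pictured as reducing an event log (case, activity, timestamps) to the inputs a DES package needs: per-activity duration statistics and case inter-arrival times, written to an Excel workbook. The sketch below assumes the log has already been exported to CSV with the column names shown; ProM itself is Java-based, so this Python version is only illustrative.

```python
import pandas as pd

# Event log with columns: case_id, activity, start, end (one row per activity).
log = pd.read_csv("event_log.csv", parse_dates=["start", "end"])

# Per-activity processing-time statistics (minutes) for service-time inputs.
durations = (log.assign(minutes=(log["end"] - log["start"]).dt.total_seconds() / 60)
                .groupby("activity")["minutes"]
                .agg(["count", "mean", "std"]))

# Case inter-arrival times (minutes) for the arrival-process input.
arrivals = log.groupby("case_id")["start"].min().sort_values()
interarrival = arrivals.diff().dt.total_seconds().div(60).dropna()

with pd.ExcelWriter("des_inputs.xlsx") as xl:
    durations.to_excel(xl, sheet_name="activity_durations")
    interarrival.describe().to_frame("minutes").to_excel(xl, sheet_name="interarrival")
```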
27

The Effect of Stretch Wrap Pre-stretch on Unitized Load Containment

Cernokus, Evan A 01 August 2012 (has links) (PDF)
Three main factors affect the stability of a palletized load unitized by a stretch wrapping mechanism: the type of unitized load, the wrapping configuration, and the shipping method. The wrapping configuration is determined by the type of unitized load and the shipping method. In this study, these components were referred to as the package, the product, and the distribution environment, which together make up a stretch wrapping system. The package corresponds to the stretch wrap film packaging the unitized load and pallet; the product corresponds to the goods placed on the pallet; and the distribution environment corresponds to the hazards the packaged product will encounter in transit. The study was designed to observe and understand the interactions among these components. Before a pallet of product is wrapped, the film is elongated, or pre-stretched, and the elastic nature of the stretch wrap forces the film to conform around the palletized load. It was hypothesized that the film force the stretch wrap applies to the palletized load contributes to improved load containment. Hence, the objective of this study was to determine whether percentage pre-stretch correlates with change in film force and with load containment. A range of pre-stretched unitized loads was subjected to ISTA 3E distribution testing while the film force was monitored, and the load dispersion was quantified afterward. The data suggested no correlation between percentage pre-stretch and either change in film force or load containment. The study also compared three methods of calculating pre-stretch: the marking wheel procedure, the tapeless measure, and film cut and weigh. The marking wheel procedure was the most consistent, followed by the cut-and-weigh procedure and then the tapeless measure procedure.
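For reference, all three measurement methods estimate the same quantity: percentage pre-stretch, the elongation of the film relative to its original length. A minimal sketch, with illustrative marking-wheel numbers rather than study data:

```python
def percent_prestretch(stretched_length, original_length):
    """Percent pre-stretch from film lengths in any consistent unit."""
    return (stretched_length - original_length) / original_length * 100

# Illustrative marking-wheel reading: marks 250 mm apart on the unstretched
# film measure 625 mm apart on the wrapped load.
print(percent_prestretch(625, 250))  # -> 150.0 (% pre-stretch)
```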
28

Predicting Rheology Of UV-Curable Nanoparticle Ink Components And Compositions For Inkjet Additive Manufacturing

Lutz, Cameron D 01 June 2024 (has links) (PDF)
Inkjet additive manufacturing is the next step toward ubiquitous manufacturing, enabling multi-material printing with varied mechanical, electronic, and thermal properties. These characteristics are realized through careful formulation of the inks and their functional materials, but many constraints must be satisfied to achieve optimal jetting performance and build quality in an inkjet 3-D printer. Previous research has addressed the rheological characteristics that enable stable drop formation and how metallic nanoparticles affect ink viscosity. Balancing the competing goals of increasing nanoparticle loading to improve material deposition rates while maintaining optimal flow dynamics is a closely held trade secret in formulating these inkjet compositions. We use data from previous experiments and the CRC Handbook of Chemistry and Physics to train machine learning regression models that predict the relevant factors of inkjet printability at a standardized temperature of 25 °C: viscosity, surface tension, and density. These models were used to predict the rheological factors of the main components of a UV-curable inkjet ink formulation: UV-curable monomers and oligomers, photoinitiators, dispersants, and humectants. This paper compares the relative performance of five machine learning algorithms to assess the effectiveness of each approach for chemoinformatics regression tasks.
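As a sketch of the regression task described above, the snippet below cross-validates one representative model (a random forest) on hypothetical descriptor data. The file name, descriptor columns, and target units are assumptions, and the thesis compares five algorithms rather than this single one.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per ink component, with molecular descriptors
# (e.g., molar mass, logP, density) and a measured viscosity at 25 C.
data = pd.read_csv("ink_components.csv")
X = data.drop(columns=["viscosity_mpas"])
y = data["viscosity_mpas"]

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```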
29

Worker Qualification in the Face of Industrial Technologies: A Study at an Automotive Assembly Plant in the State of Paraná

Jean Mari Felizardo 14 July 2009 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This research addresses technological advancement at an automotive assembly plant and its relationship to worker qualification, through a case study of the automotive industrial sector in the Metropolitan Region of Curitiba (RMC), State of Paraná, Brazil. The methodology combined a literature review on the topic with a case study of an assembly plant whose industrial operations employ a high degree of technological advancement. The research provides information that contributes to a better understanding of industrial technologies in the production operations of an automotive assembly plant and their relationship to worker qualification in this productive sector. The results show that the plant combines two production models, the Fordist radial model and the transnational radial model, and that flexible (lean) production is more compatible with group (team) work and places greater value on individual and group performance; nevertheless, work fragmentation and task repetitiveness still occur. This only timidly evidences the relevance of human subjectivity in the workplace, since the plant appropriates workers' knowledge, which is intrinsically incorporated into the function of capital in pursuit of time savings achieved by the workers themselves among their peers, and which facilitates management and control by managers in line with capitalist productive forces. The shift from individualized work to teamwork therefore indicates the plant's effort to change the work environment and to value workforce participation, enabling the management of two contradictory forces: control of work and the team's relative autonomy. Thus, the relationship between industrial technological advancement (automation and robotics) and qualification (specific technical knowledge) for a given job position is strictly tied to the dynamics of management and control by capital, focused on reducing production process time at the assembly plant studied.
30

Continuous Permeability Measurement During Unidirectional Vacuum Infusion Processing

Hoagland, David Wayne 01 July 2017 (has links)
Composite materials have traditionally served two separate markets: high-end aerospace parts and low-end consumer parts. This separation reflects the wide technology gap between pre-preg materials processed in an autoclave and chopped-strand fiberglass blown into an open mold. Liquid composite molding has emerged as a bridge between inexpensive tooling and large, technical parts. Processes such as vacuum infusion make it possible to use complex layups of reinforcement materials in an open-mold setup, creating conditions for composites to penetrate many new markets with rapid innovation. Flow simulation for liquid composite molding is often performed to assist process optimization, and it requires the permeability of the reinforcement to be characterized. For infusion under a flexible membrane, such as vacuum infusion, or for simulation of a part with non-uniform thickness, permeability must be tested at various levels of compaction. This process is time consuming and often relies on interpolation or extrapolation around a few experimental permeability measurements. To accelerate permeability characterization, a small number of methodologies have been presented in the literature in which permeability is tested at multiple fiber volume contents in a single test; some even measure permeability over a continuous range of thicknesses, requiring no later interpolation of permeability values. A novel method is presented here for rapidly measuring permeability over a continuous range of fiber volume content in a single unidirectional vacuum infusion flow experiment. The thickness gradient across the vacuum bag and the fluid pressure at several locations in the mold were measured concurrently to calculate the fabric compressibility. An analytical flow model that accounts for this compressibility is then used: the fitting constant in a permeability model is iterated until the predicted flow-front progression matches the empirical measurements. The method is demonstrated for two reinforcement materials: 1) a fiberglass unbalanced weave and 2) a carbon biaxial non-crimped fabric. The standard deviation of calculated permeabilities across the multiple infusion experiments for each material and flow orientation ranged from 12.8% to 29.7%. These results were validated by comparison with multiple non-continuous permeability measurement methods.
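The fitting loop described above can be illustrated for the simpler rigid-mold case, where Darcy's law gives the flow-front position x_f(t) = sqrt(2·K·ΔP·t / (φ·μ)) and the permeability K is iterated until prediction matches measurement. The thesis's model additionally accounts for compaction under the vacuum bag; all numbers below are illustrative, not experimental data.

```python
import numpy as np
from scipy.optimize import least_squares

mu, phi, dP = 0.1, 0.5, 1.0e5  # resin viscosity (Pa.s), porosity, driving pressure (Pa)
t = np.array([30.0, 60.0, 120.0, 240.0])         # time since infusion start (s)
x_meas = np.array([0.060, 0.085, 0.120, 0.170])  # measured flow-front position (m)

def residual(log10K):
    # Rigid-mold Darcy solution for unidirectional constant-pressure infusion.
    K = 10.0 ** log10K
    x_pred = np.sqrt(2.0 * K * dP * t / (phi * mu))
    return x_pred - x_meas

# Fit in log space so the optimizer handles the tiny magnitudes of K gracefully.
fit = least_squares(residual, x0=[-10.0])
print(f"K ~ {10.0 ** fit.x[0]:.2e} m^2")  # ~3e-11 m^2 for these numbers
```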
