  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Optimization of a Ball-Milled Photocatalyst for Wastewater Treatment Through Use of an Orthogonal-Array Experimental Design

Ridder, Bradley J 31 March 2010 (has links)
The effects of various catalyst synthesis parameters on the photocatalytic degradation kinetics of aqueous methyl orange dye are presented. The four factors investigated were: i) InVO4 concentration, ii) nickel concentration, iii) InVO4 calcination temperature, and iv) ball-milling time. Three levels were used for each factor. Due to the large number of possible experiments in a full factorial experiment, an orthogonal-array experimental design was used. UV-vis spectrophotometry was used to measure the dye concentration. The results show that nickel concentration was a significant parameter, with 90% confidence. The relative ranking of importance of the parameters was nickel concentration > InVO4 concentration > InVO4 calcination temperature > milling time. The results of the orthogonal array testing were used to make samples of the theoretically slowest and fastest catalysts. Curiously, the predicted-slowest catalyst was the fastest overall, though both samples were faster than the previous set. The only difference between the slowest and fastest catalysts was the milling time, with the longer-milled catalyst being more reactive. From this result, we hypothesize that there is an interaction effect between nickel concentration and milling time. The slowest and fastest catalysts were characterized using energy-dispersive spectroscopy (EDS), scanning electron microscopy (SEM), X-ray powder diffractometry (XRD), BET surface area analysis, and diffuse-reflectance spectroscopy (DRS). The characterization results show that the fastest catalyst had a lower band gap than the slowest one, as well as a slightly greater pore volume and average pore diameter. The results indicate that fast kinetics are achieved with low amounts of nickel and a long ball-milling time. Under the levels tested, InVO4 concentration and the calcination temperature of the InVO4 precursor were not significant.
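The four-factor, three-level screening described above corresponds to Taguchi's standard L9 orthogonal array (9 runs instead of the 3^4 = 81 of a full factorial). A minimal sketch of the main-effects ranking step, with hypothetical rate constants in place of the measured dye-degradation data:

```python
# Sketch of an orthogonal-array main-effects analysis. The L9 array is the
# standard Taguchi design for four 3-level factors; the factor names come
# from the abstract, but the rate constants are hypothetical placeholders.

L9 = [  # 9 runs; every pair of columns contains all 9 level combinations
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]
factors = ["InVO4 conc.", "Ni conc.", "calcination T", "milling time"]
rates = [0.11, 0.15, 0.09, 0.21, 0.13, 0.10, 0.12, 0.08, 0.17]  # hypothetical

def main_effects(array, response, n_levels=3):
    """Average response at each level of each factor (the main effects)."""
    effects = []
    for f in range(len(array[0])):
        level_means = []
        for lv in range(n_levels):
            vals = [r for row, r in zip(array, response) if row[f] == lv]
            level_means.append(sum(vals) / len(vals))
        effects.append(level_means)
    return effects

effects = main_effects(L9, rates)
# Rank factors by the range (max - min) of their level means, the usual
# Taguchi measure of a factor's relative importance:
ranking = sorted(zip(factors, effects),
                 key=lambda fe: max(fe[1]) - min(fe[1]), reverse=True)
for name, level_means in ranking:
    print(name, [round(x, 3) for x in level_means])
```

Each pair of L9 columns is balanced, so the level means are unconfounded estimates of the main effects under the usual no-interaction assumption; the interaction hypothesized in the abstract is exactly what such a screening design cannot resolve on its own.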
122

Observing the Main Effects of Automotive Primers when Bonding to Polyvinylchloride

Javorsky, Joseph Frank January 2012 (has links)
No description available.
123

Low-grade Thermal Energy Harvesting and Waste Heat Recovery

Kishore, Ravi Anant 14 December 2018 (has links)
Low-grade heat, whether waste heat or natural heat, is an extremely promising source of renewable energy, and a cost-effective method for recovering it would have a transformative impact on the overall energy scenario. The efficiency of heat engines deteriorates as the hot-side temperature decreases, making low-grade heat recovery complex and economically unviable with current state-of-the-art technologies such as the organic Rankine cycle, the Kalina cycle and the Stirling engine. This thesis systematically investigates two mechanisms, the thermomagnetic effect and the thermoelectric effect, to generate electricity from low-grade heat sources ranging from near-ambient temperature to 200°C. Using the thermomagnetic effect, we demonstrate a novel ultra-low-thermal-gradient energy recovery mechanism, termed PoWER (Power from Waste Energy Recovery), with the ambient air acting as the heat sink. PoWER devices require no external heat sink, bulky fins or thermal-fluid circulation, and generate electricity on the order of hundreds of μW/cm³ from heat sources at temperatures from as low as 24°C (just 2°C above ambient) up to 50°C. For the higher temperature range of 50-200°C, we developed low-fill-fraction thermoelectric generators that outperform commercial modules under realistic operating conditions, such as a constant-heat-flux boundary condition and a thermally resistive environment. These advances in thermal energy harvesting and waste-heat recovery will have a transformative impact on renewable energy generation and on reducing global warming. / PHD / Energy is essential to life. While most living organisms use natural resources directly to meet their energy requirements, humans need electricity.
Unarguably, electricity has made our lives easy; however, it is an expensive form of energy. Every year a tremendous amount of fossil fuel is burnt to meet the ever-growing energy demand. While we worry about escalating energy prices, depleting fossil resources and negative environmental impacts, it is sobering to know that more than half of the usable energy generated from renewable and non-renewable sources is ultimately discarded to the atmosphere as a byproduct, mostly in the form of waste heat. Utilizing waste heat, particularly at low temperature, is usually complex and cost-ineffective, and a cost-effective recovery method would have a transformative impact on the overall energy scenario. In this thesis, a fundamental breakthrough is achieved in developing new and improved thermal energy harvesting methods to generate electricity from low-grade heat.
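The abstract's point that heat-engine efficiency collapses as the hot side approaches ambient can be illustrated with the standard maximum-efficiency expression for a thermoelectric generator. The ZT value and temperatures below are illustrative assumptions, not the thesis's device data:

```python
import math

def teg_max_efficiency(t_hot, t_cold, zt):
    """Maximum thermoelectric generator efficiency (textbook ZT formula):
    eta = (1 - Tc/Th) * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + Tc/Th),
    with ZT taken at the mean temperature. Temperatures in kelvin."""
    carnot = 1.0 - t_cold / t_hot
    m = math.sqrt(1.0 + zt)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# Efficiency shrinks rapidly as the hot side nears ambient (ZT = 1 assumed):
for t_hot_c in (200.0, 50.0, 24.0):
    th, tc = t_hot_c + 273.15, 22.0 + 273.15
    print(f"{t_hot_c:5.0f} °C source -> {100 * teg_max_efficiency(th, tc, 1.0):.2f} % max")
```

The Carnot prefactor (1 - Tc/Th) is the dominant term here, which is why near-ambient sources make every heat engine, not just thermoelectrics, economically difficult.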
124

The optimization of SPICE modeling parameters utilizing the Taguchi methodology

Naber, John F. 07 June 2006 (has links)
A new optimization technique for SPICE modeling parameters has been developed in this dissertation to increase the accuracy of circuit simulation. Accurate circuit simulation models matter because they prevent the very costly redesign of an integrated circuit (IC). This new technique uses the Taguchi method to improve the fit between measured and simulated I-V curves for GaAs MESFETs. The Taguchi method develops a signal-to-noise ratio (SNR) equation that finds the optimum combination of controllable signal levels in a design or process, making it robust, i.e. as insensitive to noise as possible. In this dissertation the control factors are the circuit model's curve-fitting parameters, and the noise is the deviation of the simulated I-V curves from the measured I-V curves. This is the first known application of the Taguchi method to the optimization of IC curve-fitting model parameters. In addition, the method is not technology- or device-dependent and can be applied to silicon devices as well. Improvements reaching 80% in the accuracy of the simulated I-V curve fit have been achieved between DC-test-extracted parameters and the Taguchi-optimized parameters. Moreover, the CPU execution time of the optimization process is 96% less than that of a commercial optimizer using the Levenberg-Marquardt algorithm (optimizing 31 FETs). The technique performs a least-squares fit comparing measured currents with simulated currents for various combinations of SPICE parameters. The mean and standard deviation of this least-squares fit are incorporated in determining the SNR, providing the best combination of parameters within the evaluated range. Furthermore, the optimum parameter values are found without additional simulation by fitting the response curves to a quadratic equation and finding the local maximum.
This technique can easily be implemented with any simulator that uses modeling parameters extracted from measured DC test data. In addition, two methods are evaluated for obtaining worst-case modeling parameters: one looks at the correlation coefficients between modeling parameters, and the other looks at the actual device parameters that define the +/- 3σ limits of the process. Lastly, an example describes the applicability of the Taguchi methodology to the design of a differential amplifier that accounts for the effect of offset voltage. / Ph. D.
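Two steps of the abstract can be sketched compactly: the smaller-the-better S/N ratio applied to measured-versus-simulated current residuals, and the quadratic fit whose stationary point gives the optimum level without further simulation. The residual vectors are hypothetical placeholders, not measured MESFET data:

```python
import math

def snr_smaller_is_better(residuals):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean squared value).
    Here the quality characteristic is the measured-vs-simulated I-V error,
    so a higher SNR means a better curve fit."""
    msd = sum(r * r for r in residuals) / len(residuals)
    return -10.0 * math.log10(msd)

def quadratic_peak(xs, ys):
    """Exact parabola through three (level, SNR) points; returns the
    stationary point -b/(2a), mirroring the quadratic-fit step above."""
    x0, x1, x2 = xs
    y0, y1, y2 = ys
    f01 = (y1 - y0) / (x1 - x0)       # first divided difference
    f12 = (y2 - y1) / (x2 - x1)
    f012 = (f12 - f01) / (x2 - x0)    # second divided difference = a
    a, b = f012, f01 - f012 * (x0 + x1)
    return -b / (2.0 * a)

# Hypothetical residual vectors for two candidate SPICE parameter sets:
fit_a = [0.10, -0.08, 0.12, -0.05]   # larger errors -> lower SNR
fit_b = [0.02, -0.03, 0.01, -0.02]   # smaller errors -> higher SNR
print(round(snr_smaller_is_better(fit_a), 2), round(snr_smaller_is_better(fit_b), 2))

# SNR measured at three levels of one parameter; the peak is the predicted
# optimum setting, found with no extra simulation runs:
print(quadratic_peak((0.0, 1.0, 2.0), (1.0, 3.0, 3.0)))
```

Because the parabola is fitted to SNR values already in hand, locating its maximum costs nothing beyond the matrix experiment itself, which is the source of the large CPU-time saving the abstract reports.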
125

New design comparison criteria in Taguchi's robust parameter design

Savarese, Paul Tenzing 06 June 2008 (has links)
Choice of an experimental design is an important concern for most researchers, and judicious selection is a weighty matter in Robust Parameter Design (RPD). RPD seeks to choose the levels of fixed controllable variables that provide insensitivity (robustness) to the variability of a process induced by uncontrollable noise variables. We use the fact that in the RPD scenario interest lies primarily in a design's ability to estimate the noise and control-by-noise interaction effects in the fitted model. These effects allow effective estimation of the process variance, an understanding of which is necessary to achieve the goals of RPD. Possible designs for use in RPD are quite numerous: standard designs such as crossed-array designs, Plackett-Burman designs, combined-array factorial designs and many second-order designs all vie for a place in the experimenter's tool kit. New criteria, based on classical optimality criteria, are developed for judging designs by their performance in RPD. Several first-order and many second-order designs, such as central composite designs, Box-Behnken designs and hybrid designs, are studied and compared via our criteria. Numerous scenarios involving different models and designs are considered; results and conclusions are presented regarding which designs are preferable for use in RPD. A new design rotatability entity is also introduced, and optimality conditions with respect to our criteria are studied. For designs that are rotatable by the new entity, conditions are given that lead to optimality for a number of the new design-comparison criteria. Finally, a sequential design-augmentation algorithm was developed and programmed; through a unique mechanism, the algorithm implements a Ds-optimal strategy in selecting candidate points.
Ds-optimality is akin to D-optimality on a subset of the model parameters and is naturally suited to the RPD scenario. The algorithm can be used either for sequential design augmentation or for design building. Especially useful when no standard design matches the number of runs available to the researcher, it can generate a design of the requisite size that should perform well in RPD. / Ph. D.
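A minimal sketch of the determinant-driven sequential augmentation idea: starting from a small base design, each added run is the candidate that maximizes det(X'X). This sketch uses plain D-optimality over all model parameters rather than the thesis's Ds criterion, and the two-factor model with a control-by-noise interaction is an illustrative assumption:

```python
def det(m):
    """Determinant via Gaussian elimination with partial pivoting
    (fine for the small information matrices used here)."""
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def xtx(rows):
    """Information matrix X'X of a design whose model rows are given."""
    k = len(rows[0])
    return [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]

def model_row(x1, x2):
    # Model terms: intercept, control x1, noise x2, and the control-by-noise
    # interaction x1*x2 that RPD criteria focus on.
    return [1.0, x1, x2, x1 * x2]

# Base design: the 2^2 factorial, which already makes X'X nonsingular.
design = [model_row(a, b) for a in (-1.0, 1.0) for b in (-1.0, 1.0)]
candidates = [model_row(a, b) for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)]
for _ in range(2):  # augment with two more runs, greedily maximizing det(X'X)
    best = max(candidates, key=lambda c: det(xtx(design + [c])))
    design.append(best)
print(len(design), round(det(xtx(design)), 1))
```

The Ds variant would maximize the determinant of the information matrix restricted to the noise and interaction parameters; the greedy exchange skeleton is the same.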
126

Analysis of the turning process of the VAT 32® superalloy with experimental and commercial cutting tools

Kondo, Marcel Yuzo. January 2019 (has links)
Orientador: Manoel Cleber de Sampaio Alves / Abstract: VAT 32® is a nickel-based superalloy developed to substitute the UNS N07751 alloy in the production of automotive valves for high-performance internal combustion engines. The formation of niobium carbides gives this alloy the high wear resistance desired in automotive valve applications, but also makes the material more difficult to machine. This thesis studied the turning of VAT 32® with four different cutting tools: carbide inserts coated with Ti(C,N)+Al2O3 by chemical vapor deposition (CVD), carbide inserts coated with Ti-Al-Si-N by physical vapor deposition (PVD), cubic boron nitride (cBN) inserts, and experimental Al2O3+MgO ceramic inserts. Using Taguchi's method as the design of experiment (DOE), the optimal combinations and main effects of cutting speed, tool feed, depth of cut and lubrication condition (dry or flood) were found for each response variable: machining power, tool wear, surface quality of the machined pieces, chip form, and the acoustic emission and vibration signals of the process. The Taguchi signal-to-noise (S/N) analysis also identified the cutting parameters for which the process showed the smallest variability in the quality characteristics, i.e. the robust process. Finally, the multi-objective optimization method called Grey Relational Analysis (GRA) was used to find optimal cutting conditions for each tested tool. These optimal conditions were used in a tool l... (Complete abstract click electronic access below) / Doutor
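Grey Relational Analysis, the multi-objective step mentioned above, scores each experimental run by its closeness to an ideal reference sequence. A minimal sketch with hypothetical response data (the real turning measurements are not reproduced here):

```python
def gra_grades(runs, larger_better, zeta=0.5):
    """Grey Relational Analysis sketch. `runs` is a list of response vectors
    (one per experimental run); `larger_better` flags each response column.
    Returns one grey relational grade per run (1.0 = ideal in every response).
    Responses are normalized to [0, 1], so the deviation from the ideal
    sequence is (1 - v) and the global min/max deviations are 0 and 1."""
    cols = list(zip(*runs))
    norm_cols = []
    for col, lb in zip(cols, larger_better):
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0
        norm_cols.append([(v - lo) / span if lb else (hi - v) / span for v in col])
    grades = []
    for row in zip(*norm_cols):
        # Grey relational coefficient with distinguishing coefficient zeta:
        coeffs = [zeta / ((1.0 - v) + zeta) for v in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Hypothetical runs: [machining power (kW), flank wear (mm), roughness Ra (um)],
# all smaller-the-better, as in the turning responses listed above:
runs = [[1.2, 0.15, 0.8], [1.0, 0.22, 1.1], [1.4, 0.10, 0.6]]
print([round(g, 3) for g in gra_grades(runs, [False, False, False])])
```

Picking the run (or predicted factor combination) with the highest grade converts several competing responses into a single ranking, which is the role GRA plays in the abstract.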
127

Optimization of machining by the wire electrical discharge machining process of titanium alloys and titanium-based composites applied to aeronautics

Ezeddini, Sonia 17 December 2018 (has links)
EDM machining removes material by melting, vaporization and erosion, and is reserved for conductive and semiconductor materials. It can be used to machine metals and alloys, hardened steels, ceramic alloys, metal carbides, some ceramics and even harder materials such as polycrystalline diamond. Heating the workpiece lowers and modifies its mechanical properties, which increases its machinability. The work carried out focused on the influence of wire EDM (WEDM) machining on the surface integrity, machinability, productivity and process precision of several materials: pure titanium, the Ti-6Al-4V titanium alloy, a Ti-Al-based intermetallic composite, the Ti17 composite and the Ti6242 composite. In WEDM, and more precisely in finishing, the process is characterized by a material removal rate, kerf width, surface hardening, heat-affected zone and surface condition that vary with several parameters, such as the discharge current, pulse time, ignition voltage, cutting speed, lubricant injection pressure and wire tension. This study empirically modeled and optimized the cutting conditions of the metal-based composite materials and titanium alloys in order to master and improve machined surface integrity, increase productivity and refine process accuracy, and thereby meet the quality and operational-safety requirements of aeronautical parts. Methods such as design of experiments, the Taguchi method and response surface methodology were used to calibrate and control the WEDM process parameters and operating conditions.
128

Errors in the search for robust conditions: methodologies to avoid them.

Pozueta Fernández, Maria Lourdes 10 December 2001 (has links)
The problem of finding conditions that are robust to the effect of uncontrolled factors is of enormous interest to companies, since robustness is a characteristic the market demands. There are basically two methods for studying the problem: the one based on the method proposed by G. Taguchi in the early 1980s, which approximates the variability using product (crossed) arrays and selects the robust conditions by minimizing the response; and the one that starts from a more economical matrix, estimates a model for the response Y as a function of the control and noise factors, and studies the robust conditions through the interactions between the noise factors and the control factors. Although one would at first expect very similar results when analyzing the same problem by the two routes, we have found examples where the conclusions differ widely, and this motivated the present research into the causes of these differences. We began by studying the nature of the surfaces associated with the variability induced by noise factors, proceeding sequentially with an increasing number of noise factors. We have shown that, regardless of whether the chosen metric is s²(Y), s(Y) or log(s(Y)), these surfaces can hardly be approximated by first-order polynomials in the control factors, concluding that some of the strategies experimenters commonly use in practice are unlikely to yield a good understanding of this surface.
For example, it is not appropriate to place a Resolution III 2^(k-p) design in the control factors of a product array; Resolution IV designs with center points are recommended. Next, two sources of noise-induced variation in the response, unknown to the experimenter, were assumed, and the sensitivity of the two methods for capturing these opportunities for variability reduction was studied. The model based on summary metrics proves better prepared to capture all the sources of variation than the model based on non-summary metrics, which is very sensitive to the estimation of the model for Y. Finally, the most common errors made when selecting robust conditions from graphs were investigated.
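The claim that variability surfaces are not first-order in the control factors can be seen in the simplest model with one control factor x and one noise factor z: the variance transmitted by the noise is quadratic in x, driven entirely by the control-by-noise interaction. A sketch with hypothetical coefficients:

```python
def transmitted_variance(x, c1, c12, sigma_z2, sigma_e2):
    """Variance transmitted by noise z in the model
        Y = b0 + b1*x + c1*z + c12*x*z + e:
    Var(Y | x) = (c1 + c12*x)^2 * Var(z) + Var(e).
    The noise slope (c1 + c12*x), and hence the variance, depends on the
    control factor x only through the control-by-noise interaction c12,
    and s^2(Y) is quadratic -- never first-order -- in x."""
    return (c1 + c12 * x) ** 2 * sigma_z2 + sigma_e2

# With hypothetical coefficients c1 = 2 and c12 = -1, the robust setting is
# x = -c1/c12 = 2, where the noise slope cancels and only Var(e) remains:
print([round(transmitted_variance(x, 2.0, -1.0, 1.0, 0.1), 2)
       for x in (0.0, 1.0, 2.0, 3.0)])
```

This is also why a Resolution III array in the control factors is inadequate: estimating the quadratic behavior of s²(Y) requires cleanly estimated interaction terms, which Resolution III designs confound with main effects.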
129

A study on applying data envelopment analysis to reduce the temperature rise of a power converter

Liao, Ho Unknown Date (has links)
From a performance standpoint, quality (the right quality) and cost (the right quantity) are conceptually consistent, so performance management should use quality and cost as the criteria for judging whether its goals are met. This study takes a performance viewpoint to resolve the quality-versus-cost dilemma faced by a company. As electronic products grow more multifunctional, heat generation problems follow; with ever-increasing heat density, thermal design receives more and more attention. The study examines a power converter whose design is complete and has passed UL safety certification, but whose temperature rise and its variation are large; reducing both is therefore an urgent problem, with the aim of finding the combination of external components that is robust to uncontrollable factors, minimizes product variation, and reduces the temperature rise and losses of each component. Experiments were planned and conducted, and data collected, using Taguchi methods and design of experiments. The weighted (multi-response) S/N ratio method was applied, with the weights determined by (1) a control-chart method and (2) the CCR assurance-region model (CCR-AR) of data envelopment analysis, in order to select the control factors and their levels. The Mahalanobis-Taguchi System (MTS) was used on the matrix-experiment data to screen out the more important characteristic variables, whose data were then analyzed by (1) a back-propagation neural network combined with data envelopment analysis and (2) data envelopment analysis combined with principal component analysis, yielding the optimal factor combination for the casing drill-hole shape and silicone pad size. Confirmation experiments after the improvement show that, although the average temperature rise decreased only slightly, the standard deviation of the temperature rise at most measurement points became significantly smaller, so the study clearly reduces the variability of the power converter's temperature rise.
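The weighted multi-response S/N ratio used above combines one smaller-the-better S/N value per response into a single optimization target. A minimal sketch, assuming hypothetical temperature-rise measurements and weights (in the study the weights come from the control-chart or DEA CCR-AR step, not from guesswork):

```python
import math

def sn_smaller_better(values):
    """Smaller-the-better S/N ratio for one response across replicates:
    -10*log10(mean of squared values). Higher is better."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

def weighted_sn(responses, weights):
    """Weighted multi-response S/N ratio: a convex combination of the
    per-response S/N ratios. The weights (summing to 1) are hypothetical
    stand-ins for the control-chart / DEA-derived values."""
    sns = [sn_smaller_better(v) for v in responses]
    return sum(w * s for w, s in zip(weights, sns))

# Hypothetical temperature rises (°C) at two measurement points, three replicates:
point_a = [41.0, 43.0, 40.0]
point_b = [55.0, 54.0, 57.0]
print(round(weighted_sn([point_a, point_b], [0.6, 0.4]), 2))
```

The control factor levels maximizing this single weighted figure are then taken as the robust setting, exactly as a one-response Taguchi analysis would do.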
130

Uncertainty and flexibility in optimization via simulation: application to production systems

Baccouche, Ahlem 16 October 2012 (has links)
Simulation is increasingly used in studies of the design and organization of complex systems. A simulation-optimization study searches for the system parameters that yield the best performance, as estimated by simulation. However, in many complex systems some data are uncertain (for example, the operating conditions of the system or the behavior of the decision makers), so even when the simulation-optimization study is performed with the greatest care, the solutions obtained may prove inadequate. In this context, our goal is to study how to optimize a system, via simulation, so that it remains efficient and robust. The extensive literature review we conducted shows that few simulation-optimization approaches incorporate uncertainty, and that they can be very limited in their ability to provide robust solutions in a reasonable processing time, especially when metaheuristics are used. In addition, most existing approaches deliver a single efficient design solution and are not adapted to collaborative settings (a team of decision makers).
Therefore, we propose a novel approach connecting a search for solutions by evolutionary multimodal optimization with an evaluation of system performance by simulation. Our approach yields several efficient and sufficiently diverse design alternatives, giving decision makers flexibility in the choice of the solution to implement. We further exploit this flexibility to integrate, first, the individual preferences of the members of a decision-making team and, second, the presence of several environments for studying the robustness of solutions in a reasonable processing time compared with other metaheuristic-based approaches. The proposed approaches are illustrated by optimizing one link of a supply chain. With this application, we show that, in addition to providing a choice of efficient solutions for sizing the system, we can propose solutions that are "collectively acceptable" to the decision-making team and identify robust design solutions. These approaches thus provide flexibility in the decision phase and contribute to taking uncertainty into account in the simulation-based optimization of a system.
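Evolutionary multimodal optimization relies on niching so the population retains several good, diverse designs instead of collapsing onto a single optimum. A classic niching device is fitness sharing; the sketch below is a generic illustration of that idea, not the thesis's actual algorithm:

```python
def shared_fitness(pop, fitness, radius=1.0, alpha=1.0):
    """Fitness sharing, a classic niching mechanism for multimodal
    evolutionary search: each candidate's raw fitness is divided by its
    niche count (how crowded its neighborhood is), so selection pressure
    spreads the population over several good designs."""
    shared = []
    for xi, fi in zip(pop, fitness):
        # Niche count: sum of the triangular sharing kernel over the population.
        niche = sum(max(0.0, 1.0 - (abs(xi - xj) / radius) ** alpha)
                    for xj in pop)
        shared.append(fi / niche)
    return shared

# Two equally good designs, at x = 0 and x = 5 (hypothetical 1-D fitness
# values); sharing penalizes the crowded niche around x = 0:
pop = [0.0, 0.1, 0.2, 5.0]
fit = [1.0, 0.9, 0.9, 1.0]
print([round(f, 3) for f in shared_fitness(pop, fit)])
```

In a simulation-optimization setting, the raw fitness values would come from simulation runs, and the surviving niches are exactly the diverse set of efficient designs offered to the decision-making team.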
