381 |
Hit Identification and Hit Expansion in Antituberculosis Drug Discovery: Design and Synthesis of Glutamine Synthetase and 1-Deoxy-D-Xylulose-5-Phosphate Reductoisomerase Inhibitors. Nordqvist, Anneli. January 2011.
Since the discovery of Mycobacterium tuberculosis (Mtb) as the bacterial agent causing tuberculosis, the permanent eradication of this disease has proven challenging. Although a number of drugs exist for the treatment of tuberculosis, 1.7 million people still die every year from this infection. The current treatment regimen involves lengthy combination therapy with four different drugs in an effort to combat the development of resistance. However, multidrug-resistant and extensively drug-resistant strains are emerging in all parts of the world. Therefore, new drugs effective in the treatment of tuberculosis are much needed. The work presented in this thesis focused on the early stages of drug discovery, applying different hit identification and hit expansion strategies in the exploration of two new potential drug targets, glutamine synthetase (GS) and 1-deoxy-D-xylulose-5-phosphate reductoisomerase (DXR). A literature survey was first carried out to identify new Mtb GS inhibitors from compounds known to inhibit GS in other species. Three compounds, structurally unrelated to the typical amino acid derivatives of previously known GS inhibitors, were then discovered by virtual screening and found to be Mtb GS inhibitors, exhibiting activities in the millimolar range. Imidazo[1,2-a]pyridine analogues were also investigated as Mtb GS inhibitors. The chemical functionality, size requirements and position of the substituents in the imidazo[1,2-a]pyridine hit were investigated, and a chemical library was designed based on a focused hierarchical design of experiments approach. The X-ray structure of one of the inhibitors in complex with Mtb GS provided additional insight into the structure–activity relationships of this class of compounds. Finally, new α-arylated fosmidomycin analogues were synthesized as inhibitors of Mtb DXR, exhibiting IC50 values down to 0.8 µM. This work shows that a wide variety of aryl groups are tolerated by the enzyme. Cinnamaldehydes are important synthetic intermediates in the synthesis of fosmidomycin analogues. These were prepared by an oxidative Heck reaction from acrolein and various arylboronic acids. Electron-rich, electron-poor, heterocyclic and sterically hindered boronic acids could be employed, furnishing cinnamaldehydes in 43–92% yield.
|
382 |
Développement d'une méthode d'aide à la décision multicritère pour la conception des bâtiments neufs et la réhabilitation des bâtiments existants à haute efficacité énergétique / Development of a multicriteria optimization method for decision support in designing or retrofitting high energy performance buildings. Romani, Zaid. 12 December 2015.
The building sector is the largest energy consumer in the world. In the Mediterranean region, given the economic crisis and the commitments made to limit climate change, reducing the energy consumption of both new and existing buildings has become imperative. Against this background, seeking optimal technical solutions that account for economic, environmental and societal criteria is a very complex problem because of the large number of parameters involved. To address it, a state of the art of multicriteria optimization methods was first established. Many constraints arise when using these methods, such as long computation times and no guarantee of convergence to the global optimum. The objective of this work is therefore to propose a new method that overcomes these difficulties. The method is based, first, on the development of polynomial models for predicting heating needs, cooling needs, final energy needs and summer thermal comfort. These models were established using the design of experiments method together with dynamic thermal simulations in the TRNSYS software. From these models, a sensitivity analysis was carried out to identify the parameters with the greatest influence on energy needs and summer thermal comfort. A database associating each parameter with its cost and its life-cycle environmental impact was built from the CYPE software and the INIES database. A complete parametric study was then performed using the polynomial functions to determine a set of optimal solutions via the Pareto front approach. This new method was applied to the design of new buildings with high energy efficiency at controlled cost for the six climate zones of Morocco. Validation of the polynomial models against random simulations gave very satisfactory results: with a second-order polynomial model, the maximum error on energy needs and on adaptive summer thermal comfort did not exceed 2 kWh/m²·year and 9%, respectively, in most cases. The models were then used for multicriteria decision support. The results showed that buildings with very low energy needs can be built at reasonable cost, and that further effort should focus on more efficient solutions for summer cooling, especially for Marrakech and Errachidia. Finally, the method was applied to the energy retrofit of an existing building in La Rochelle (France), with environmental criteria also included in the search for optimal solutions. The solution selected according to 14 criteria corresponds to a set of technical measures yielding heating needs of about 15 kWh/m²·year, with a compromise between energy efficiency, occupant comfort, environmental impacts and retrofit cost. The method developed in this work shows strong potential for multicriteria decision support in the design of new buildings and the retrofit of existing ones: it enables very fast operational optimization of the building envelope, contributing to sustainable, comfortable, low-energy buildings at controlled cost.
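For readers unfamiliar with the approach, the sketch below illustrates the core of the method this abstract describes: sweeping a discretized design space through cheap polynomial surrogates and keeping only the non-dominated (Pareto-optimal) designs. The surrogate coefficients, parameter ranges and objective pair are invented placeholders, not values from the thesis.

```python
# Minimal sketch of the Pareto-front step, assuming two objectives
# (annual heating need and investment cost) evaluated by polynomial
# surrogate models. All coefficients and ranges are hypothetical.
import itertools

def heating_need(insulation_cm, window_ratio, infiltration):
    # Hypothetical second-order polynomial surrogate (kWh/m2.year),
    # standing in for the DoE-fitted TRNSYS model.
    return (60.0 - 2.0 * insulation_cm + 0.03 * insulation_cm**2
            + 25.0 * window_ratio + 40.0 * infiltration)

def cost(insulation_cm, window_ratio, infiltration):
    # Hypothetical cost model (EUR/m2) from a parameter-cost database.
    return 80.0 + 6.0 * insulation_cm + 120.0 * window_ratio + 50.0 / (infiltration + 0.1)

# Full parametric sweep over the discretized design space.
candidates = [
    dict(x=(i, w, a), f=(heating_need(i, w, a), cost(i, w, a)))
    for i, w, a in itertools.product(
        range(5, 31, 5),            # insulation thickness, cm
        (0.1, 0.2, 0.3, 0.4),       # window-to-wall ratio
        (0.2, 0.4, 0.6),            # infiltration rate, ach
    )
]

def pareto_front(points):
    """Keep the non-dominated points (both objectives minimized)."""
    front = []
    for p in points:
        dominated = any(all(q["f"][k] <= p["f"][k] for k in range(2))
                        and q["f"] != p["f"] for q in points)
        if not dominated:
            front.append(p)
    return front

for sol in sorted(pareto_front(candidates), key=lambda s: s["f"][0]):
    print(f"design {sol['x']}: needs={sol['f'][0]:.1f} kWh/m2.yr, cost={sol['f'][1]:.0f} EUR/m2")
```

The decision-support step then amounts to picking one point on this front according to the weighting of the remaining criteria.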
|
383 |
Modelo de aplicação de ferramentas de projeto integradas ao longo das fases de desenvolvimento de produto / A model for applying integrated design tools throughout the product development phases. Rodrigues, Leandro Sperandio. January 2008.
There are few examples in the literature of integrating design tools along the product development phases. The main objective of this research is to integrate tools that carry the information flow along the product development phases, specifically the Informational Design, Conceptual Design and Detailed Design phases, such that the output of one tool is the input of the next. The object of study was the improvement of a bracket for mounting vehicular natural gas cylinders. Starting in the Informational Design phase, qualitative and quantitative market research was performed to identify the customers' demands for the product. These demands were the input data of the Quality Function Deployment (QFD) matrix, resulting in the product requirements and their respective target specifications. In the Conceptual Design phase, the product requirements were converted into functions, and different concepts (configurations) were generated through a morphological matrix. Design of Experiments (DoE) was then used to evaluate the estimated price of the possible product configurations, and the Pugh matrix was used to evaluate the concept alternatives and select the best one. In the Detailed Design phase, Failure Mode and Effects Analysis (FMEA), used in an integrated way with QFD, served to identify current and potential failures and their effects on systems and processes. Based on the demands identified, improvements to the product were defined and implemented. The chosen tools proved adequate for integrated application, ensuring a continuous, traceable information flow with a presumably reduced chance of information loss along the product development process.
|
384 |
PP/clay nanocomposites: compounding and thin-wall injection moulding. Fu, Tingrui. January 2017.
This research investigates the formulation, compounding and thin-wall injection moulding of polypropylene/clay nanocomposites (PPCNs) prepared using conventional melt-state processes. An independent study on single-screw extrusion dynamics using Design of Experiments (DoE) was performed first; the optimum PPCN formulation and compounding conditions were then determined using the same strategy. The outcomes of the DoE study were applied to produce PPCN compounds for the subsequent study of thin-wall injection moulding, for which a novel four-cavity injection moulding system was designed using CAD software and a new moulding tool was constructed based upon this design. Subsequently, the effects of moulding conditions, nanoclay concentration and wall thickness on the injection moulded PPCN parts were investigated. Moreover, simulation of the injection moulding process was carried out to compare the predicted performance with that obtained in practice by measurement of real-time data using an in-cavity pressure sensor. For the selected materials, the optimum formulation is 4 wt% organoclay (DK4), 4 wt% compatibiliser (Polybond 3200, PPgMA) and 1.5 wt% co-intercalant (erucamide), as the maximum interlayer spacing of clay can be achieved in the selected experimental range. Furthermore, DoE investigations determined that a screw speed of 159 rpm and a feed rate of 5.4 kg/h are the optimum compounding conditions for the twin-screw extruder used, giving the highest tensile modulus and yield strength of the PPCN compounds. The optimised formulation and compounding conditions were adopted to manufacture PPCN materials for the study of thin-wall injection moulding. In the selected processing window, tensile modulus and yield strength increase significantly with decreasing injection speed, owing to shear-induced orientation effects, exemplified by a significantly increased frozen-layer thickness observed by optical microscopy (OM) and Moldflow® simulation. Furthermore, the TEM images indicate a strong orientation of clay particles in the flow direction: PPCN test pieces cut parallel to the flow direction have 36.4% higher tensile modulus and 13.6% higher yield strength than those cut perpendicular to it, demonstrating the effect of shear-induced orientation on the tensile properties of thin-wall injection moulded PPCN parts. In comparison to injection speed, mould temperature has very limited effects over the range investigated (25–55 °C). The changes in moulding conditions show no distinctive effects on PP crystallinity or on the intercalation behaviour of the clay. Impact toughness of thin-wall injection moulded PPCN parts is not significantly affected by either the moulding conditions or the clay concentration (1–5%). The SEM images show no clear difference between the fracture surfaces of PPCN samples with different clay concentrations. TEM and XRD results suggest that higher intercalation but lower exfoliation is achieved in PPCN parts with higher clay content. The composites in the thin sections (at the end of flow) have 34% higher tensile modulus and 11% higher yield strength than in the thicker sections, although the thin sections show reduced d001 values. This is attributed to the significantly enhanced shear-induced particle/molecular orientation and the more highly oriented frozen layer, according to TEM, OM and process simulation results.
Regarding the reduced d001 values in the thin sections, it is proposed that the extreme shear conditions there stretch the PP chains in the clay galleries to a much greater extent; the clay stacks become more compact because less interlayer spacing is needed to accommodate the stretched chains, while rapid cooling leaves no time for the chains to relax and expand the galleries again. Overall, data obtained from both actual moulding and simulation indicate that injection speed is of utmost importance to the thin-wall injection moulding process, to the development of microstructure, and thus to the resulting properties of the moulded PPCN parts, within the experimental ranges selected in this research.
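The DoE step described above, finding the screw speed and feed rate that maximize tensile properties, follows the usual response-surface pattern. A minimal sketch is given below; the face-centred design points and the responses are invented for illustration, not the thesis measurements.

```python
# Hedged sketch of a two-factor DoE / response-surface fit: model
# tensile modulus vs screw speed and feed rate with a quadratic, then
# pick the in-range optimum. All numbers are hypothetical.
import numpy as np

# Face-centred central composite design (coded levels -1, 0, +1).
design = np.array([[-1,-1],[1,-1],[-1,1],[1,1],[-1,0],[1,0],[0,-1],[0,1],[0,0]], float)
speed = 100 + 60 * design[:, 0]          # screw speed, rpm (40..160)
feed  = 4 + 2 * design[:, 1]             # feed rate, kg/h (2..6)
modulus = np.array([1.52, 1.68, 1.49, 1.63, 1.55, 1.71, 1.60, 1.58, 1.64])  # GPa, invented

# Quadratic model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones_like(speed), speed, feed, speed * feed, speed**2, feed**2])
beta, *_ = np.linalg.lstsq(X, modulus, rcond=None)

# Evaluate the fitted surface on a grid and report the best setting.
s, f = np.meshgrid(np.linspace(40, 160, 121), np.linspace(2, 6, 81))
yhat = (beta[0] + beta[1]*s + beta[2]*f + beta[3]*s*f + beta[4]*s**2 + beta[5]*f**2)
k = np.unravel_index(np.argmax(yhat), yhat.shape)
print(f"predicted optimum: {s[k]:.0f} rpm, {f[k]:.1f} kg/h -> {yhat[k]:.2f} GPa")
```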
|
385 |
Extraction au point de trouble de substances organiques et électrolytes à l'aide de mélangeurs-décanteurs / Cloud point extraction of organic substances and electrolytes using mixer-settlers. Benkhedja, Houaria. 10 March 2015.
Above a certain temperature called the cloud point (Tc), aqueous solutions of most polyethoxylated nonionic surfactants separate into two liquid phases in equilibrium: the dilute phase and the coacervate. Thanks to the micellar solubilization of hydrophobic, amphiphilic or even ionic compounds and their concentration in the small volume of coacervate, an aqueous two-phase extraction (cloud point or coacervate extraction) can be performed and applied to the removal of pollutants from industrial aqueous effluents, or to the concentration or separation of high added-value chemicals. Cloud point extraction (CPE) is a relatively simple and ecologically safe technique for removing toxic materials from the environment, and has proved efficient in treating water for various contaminants, including dissolved or dispersed organic and inorganic species. The first part of this thesis recalls some notions on industrial wastewater, surfactants and liquid-liquid extraction, followed by a description of the reagents, materials and methods used in this work, as a preliminary to the development of a coacervate extraction process. Some surface (adsorption) and association (micellization) thermodynamic properties of two industrial nonionic surfactants (Simulsol NW342 and Tergitol 15-S-7) were determined. The cloud point curves of the water/surfactant binary systems were drawn, and the effect of various additives (salt, organic compounds, ionic surfactants) on the cloud point was studied. The isothermal diagram of the water/surfactant/phenol ternary system was drawn, and the Flory-Huggins-Rupert model was applied to predict the cloud point curves of the nonionic surfactants. Single-contact extraction from model solutions used biodegradable ethoxylated oxo-alcohols (Simulsol NW342 and Tergitol 15-S-7) for dissolved organic pollutants (phenol, 1-phenylethanol and benzyl alcohol), and mixed micelles of nonionic (Simulsol NW342) and ionic (SDS or CTAB) surfactants for soluble metal pollutants (lead(II), molybdenum(VI)). The best compromise was sought between the percentage of solute extracted (E%), the coacervate volume fraction (φc), and the percentages of solute and surfactant remaining in the dilute phase (Xs,d and XTA,d), using a Scheffé-type design of experiments and an empirical curve-fitting procedure. The results are very promising: extraction percentages range from 60 to 95% for the organic solutes and from 40 to 85% for the metal solutes, the best performance being obtained for phenol and lead. Moreover, by adjusting the pH, it is possible to improve the separations and to recycle the surfactant after back-extraction of the solutes. The kinetics of extraction, phase separation and clarification were also investigated for a better understanding of these systems. Finally, the continuous extraction of phenol from a water / 4 wt% Simulsol NW342 / 0.2 wt% phenol mixture was tested on two thermostated devices (a centrifugal extractor and a mixer-settler). In a multi-stage cross-current process on a mixer-settler, the residual phenol concentration in the dilute phase could be reduced to less than 0.3 ppm (the limit concentration under current regulations) after six stages.
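A minimal sketch of a Scheffé-type mixture design of the kind mentioned above follows; the three components and the E% responses are hypothetical stand-ins, used only to show how the simplex-lattice runs and the canonical blending polynomial fit together.

```python
# Sketch of a Scheffé-type mixture design, assuming three mixture
# components whose fractions sum to 1. Responses (E%) are invented.
from itertools import combinations_with_replacement
import numpy as np

def simplex_lattice(q=3, m=2):
    """All q-component compositions with fractions i/m summing to 1."""
    pts = set()
    for combo in combinations_with_replacement(range(q), m):
        x = [0.0] * q
        for i in combo:
            x[i] += 1.0 / m
        pts.add(tuple(round(v, 6) for v in x))
    return sorted(pts)

points = simplex_lattice()                     # 6 runs for q=3, m=2
E = np.array([62., 81., 74., 90., 70., 85.])   # hypothetical E% per run

# Scheffé canonical quadratic: E = sum bi*xi + sum bij*xi*xj (no intercept).
X = np.array([[x1, x2, x3, x1*x2, x1*x3, x2*x3] for x1, x2, x3 in points])
b, *_ = np.linalg.lstsq(X, E, rcond=None)
print("blending coefficients:", np.round(b, 1))
```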
|
386 |
Diferentes métodos de aglutinação para melhoria de processos com múltiplas respostas / Different agglutination methods for improving processes with multiple responses. Gomes, Fabrício Maciel [UNESP]. 15 December 2015.
Companies go to great lengths to improve their processes and products according to different criteria, in order to meet customers' demands and needs and to reach a higher standard of competitiveness than their competitors. In this scenario, it is very common to need operating conditions that improve more than one criterion simultaneously. This work evaluated four search methods, based on the metaheuristics simulated annealing, genetic algorithm, simulated annealing combined with the Nelder-Mead simplex method, and genetic algorithm combined with the Nelder-Mead simplex method, for establishing improved conditions in processes with multiple responses. The proposed methods were evaluated on test problems carefully selected from the literature, so as to cover cases with different numbers of variables, numbers of responses and types of response. The agglutination of the responses was performed by four different methods: desirability, average percentage deviation, compromise programming, and compromise programming normalized by the Euclidean distance. The search methods were evaluated by comparing the results obtained under the same agglutination method, thereby determining the efficiency of each search method. The results suggest applying the genetic algorithm when one seeks parameters that improve processes with multiple responses, particularly when the responses are modeled by equations with cubic terms, regardless of the number of terms they contain, the type of responses and the number of variables.
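To make the agglutination idea concrete, the sketch below implements one of the four methods named above, the desirability approach in its standard Derringer-Suich form, which collapses multiple responses into a single scalar that a metaheuristic such as the genetic algorithm can maximize. The response ranges in the example are invented.

```python
# Minimal sketch of desirability-based agglutination: map each response
# onto [0, 1], then combine by a geometric mean to get one objective.
import math

def d_larger_is_better(y, low, target, r=1.0):
    """Derringer-Suich desirability for a response to be maximized."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** r

def d_smaller_is_better(y, target, high, r=1.0):
    """Desirability for a response to be minimized."""
    if y >= high:
        return 0.0
    if y <= target:
        return 1.0
    return ((high - y) / (high - target)) ** r

def overall_desirability(ds):
    """Geometric mean; zero if any individual desirability is zero."""
    if any(d == 0.0 for d in ds):
        return 0.0
    return math.exp(sum(math.log(d) for d in ds) / len(ds))

# Hypothetical example: yield (maximize, 60..95%) and cost (minimize, 10..25 $/unit).
ds = [d_larger_is_better(82.0, 60.0, 95.0), d_smaller_is_better(14.0, 10.0, 25.0)]
print(f"D = {overall_desirability(ds):.3f}")   # single objective for SA/GA
```

The geometric mean is the usual choice because any fully unacceptable response (d = 0) forces the overall objective to zero, which a weighted sum would not.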
|
388 |
Estudo comparativo das aproximações baseadas no método de decomposição paramétrico para avaliar redes de filas de manufatura utilizando planejamento de experimentos / A comparative study of approximations based on the parametric decomposition method for evaluating manufacturing queueing networks using design of experiments. Camorim, José Eduardo Vieira. 29 February 2008.
This is a study of approximations based on parametric decomposition methods used in open queueing networks that model discrete job-shop manufacturing systems. These approximations play an important role in evaluating the performance of production systems and have proved effective in many situations. They are also relatively easy to apply and require fewer data than other methods, because they use only the mean rate and the SCV (squared coefficient of variation) as parameters to characterize the arrival and service processes of the network. This work analyzes and compares several such approximations, since no thorough comparison is yet available in the literature. Several network configurations were tested in order to identify the most adequate approximation for each situation: first a two-station network, then a five-station network, and finally a real example of a semiconductor plant analyzed by Bitran and Tirupati (1988). To this end, the state of the art of approximation methods for evaluating the performance of open queueing networks (OQN) was studied, and the approximations were compared using design of experiments techniques, which were important both for constructing the network configurations and for analyzing the results. The findings show that these approximations can be highly efficient in evaluating the performance of discrete job-shop manufacturing systems. Across the configurations analyzed, Approximations 3 and 2 in general gave the best results when compared with the values obtained by simulation, while the other approximations tended to overestimate E(Lj) as the number of stations grows. This study aims to contribute to the development of computational systems that support design, planning and control decisions for discrete manufacturing systems, using approximations based on the parametric decomposition method.
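For readers unfamiliar with the technique, the sketch below shows the flavour of two-moment parametric decomposition on a hypothetical two-station line. It uses the classical Kraemer-Langenbach-Belz GI/G/1 approximation and a QNA-style departure linking equation, which are textbook formulas rather than the specific Approximations 1-4 compared in the thesis.

```python
# Hedged sketch of two-moment parametric decomposition for a tandem line:
# each station is evaluated as a GI/G/1 queue from (rate, SCV) only, and
# the departure SCV feeds the next station.
import math

def gg1_expected_L(lam, ES, ca2, cs2):
    """Approximate E(L) (jobs in system) for a GI/G/1 station."""
    rho = lam * ES
    assert rho < 1.0, "station must be stable"
    if ca2 < 1.0:   # Kraemer-Langenbach-Belz correction term
        g = math.exp(-2.0 * (1.0 - rho) * (1.0 - ca2) ** 2
                     / (3.0 * rho * (ca2 + cs2)))
    else:
        g = 1.0
    Wq = (rho / (1.0 - rho)) * ES * ((ca2 + cs2) / 2.0) * g
    return lam * Wq + rho          # Little's law: Lq + jobs in service

def departure_scv(rho, ca2, cs2):
    """QNA-style linking equation for the SCV of the departure process."""
    return rho ** 2 * cs2 + (1.0 - rho ** 2) * ca2

# Two stations in series, hypothetical data: Poisson arrivals (ca2 = 1).
lam, ca2 = 0.8, 1.0
for name, ES, cs2 in [("station 1", 1.0, 0.5), ("station 2", 1.1, 2.0)]:
    L = gg1_expected_L(lam, ES, ca2, cs2)
    print(f"{name}: rho={lam * ES:.2f}, E(L)={L:.2f}")
    ca2 = departure_scv(lam * ES, ca2, cs2)   # arrivals to next station
```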
|
389 |
Etude du comportement dynamique des structures composites réalisées par LRI : application à l'impact et à la fatigue / Dynamic behaviour of composite structures made by LRI: application to impact and fatigue. Garnier, Christian. 29 November 2011.
Aeronautical manufacturers increasingly look for high added-value manufacturing processes that do not require the resin infusion parameters to be re-tuned whenever the woven fabric changes. We therefore applied the liquid resin infusion (LRI) process to thick carbon/epoxy composites (thickness > 4 mm), varying the cure cycles, the constituent materials and the stacking sequences. All fabrics are carbon and the resin is the commercial RTM6 system. In service, aeronautical structures are loaded in various ways and can be accidentally impacted by maintenance equipment, tools, hail or other sources. The problem for manufacturers is to detect the damage created and to understand both the mechanisms at play during impact and their evolution under fatigue cycling. This work pursued that objective with several methods: detection and real-time monitoring of impact damage by infrared thermography, and measurement of the residual indentation by fringe-projection digitization. In parallel, the impact phenomenon was treated statistically through a design of experiments, and an advanced numerical model of impact was developed using cohesive surfaces.
|
390 |
Planejamento de experimentos com várias replicações em paralelo em grades computacionais / Towards distributed simulation design of experiments on computational grids. Pereira Júnior, Lourenço Alves. 07 June 2010.
This master's thesis presents a study of computational grids and of distributed simulation using the MRIP (Multiple Replications In Parallel) approach. From this study, a prototype tool for managing experiments in a grid environment was proposed and implemented: the Grid Experiments Manager (GEM). It is organized in a modular way, can be used as a stand-alone program or integrated into other software, and can be extended to several computational grid middlewares. With this implementation it was also possible to compare the performance of sequential simulations with executions on a cluster and on a computational test-bed grid; a benchmark was built so that the same workload could be repeated on each system under evaluation. The tests showed a large gain in turnaround time: comparing sequential and cluster executions, the efficiency was around 197% for simulations with short execution times and 239% for longer ones; comparing cluster and grid executions, the efficiency was about 98% and 105% for short and long simulations, respectively.
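The MRIP idea underlying the tool can be illustrated with a short sketch: the same stochastic simulation is replicated with different seeds on parallel workers, and the replications are pooled into one confidence interval. The M/M/1 model and all numbers below are illustrative stand-ins, not GEM itself.

```python
# Hedged sketch of MRIP: independent replications of one simulation run
# in parallel, then their results are pooled into a confidence interval.
import random, statistics
from concurrent.futures import ProcessPoolExecutor

def one_replication(seed, n_customers=20000, lam=0.9, mu=1.0):
    """Simulate an M/M/1 queue; return the mean waiting time in queue."""
    rng = random.Random(seed)
    arrival = depart = total_wait = 0.0
    for _ in range(n_customers):
        arrival += rng.expovariate(lam)          # next arrival time
        start = max(arrival, depart)             # service begins
        total_wait += start - arrival            # time spent waiting
        depart = start + rng.expovariate(mu)     # service ends
    return total_wait / n_customers

if __name__ == "__main__":
    seeds = range(8)                     # one replication per worker
    with ProcessPoolExecutor() as pool:  # replications run in parallel
        results = list(pool.map(one_replication, seeds))
    m = statistics.mean(results)
    s = statistics.stdev(results)
    half = 2.365 * s / len(results) ** 0.5   # t(0.975, df=7)
    print(f"mean wait = {m:.3f} +/- {half:.3f} (95% CI, 8 replications)")
```

Because the replications are statistically independent, adding workers shortens the time to reach a target confidence-interval width almost linearly, which is the gain the benchmark above measures.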
|