361
Optimisation robuste multiobjectifs par modèles de substitution / Multiobjective robust optimization via surrogate models
Baudoui, Vincent, 07 March 2012
This PhD thesis deals with the optimization under uncertainty of expensive functions in the context of aeronautical system design. We first develop a multiobjective robust optimization strategy based on surrogate models. Beyond providing a faster representation of the initial functions, these models facilitate the computation of the solutions' robustness with respect to the problem uncertainties. The modeling error is controlled through a new design-of-experiments enrichment approach that improves several models concurrently in the possibly optimal regions of the search space. The strategy is applied to minimizing the pollutant emissions of a turbomachine combustion chamber whose injectors can clog unpredictably. We then present a heuristic method dedicated to multidisciplinary robust optimization. It relies on local management of robustness within the disciplines exposed to uncertain parameters, in order to avoid a full uncertainty propagation through the system. An applicability criterion is proposed to check the validity of this approach a posteriori, using data collected during the optimization. The methodology is applied to an aircraft design case where the area of the vertical tail is not known accurately.
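As an illustration of the surrogate-plus-enrichment idea, the sketch below fits a Gaussian-process model and then adds the candidate point with the highest expected improvement. It is a minimal single-objective sketch with a toy objective, not the multiobjective, robustness-aware criterion developed in the thesis.

```python
# Minimal sketch of surrogate-assisted optimization with design enrichment.
# Toy scalar objective; the thesis handles multiple objectives and
# robustness measures, which are not reproduced here.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_objective(x):           # stand-in for the costly simulation
    return np.sum((x - 0.3) ** 2, axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (10, 2))        # initial design of experiments
y = expensive_objective(X)

for _ in range(20):                   # enrichment loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, (2000, 2))
    mu, sd = gp.predict(cand, return_std=True)
    z = (y.min() - mu) / np.maximum(sd, 1e-12)
    ei = (y.min() - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_new = cand[np.argmax(ei)]       # add the point most likely to improve
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_objective(x_new[None, :]))

print("best point:", X[np.argmin(y)], "value:", y.min())
```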
362
Assemblage de verre sur verre par impulsions laser femtosecondes / Glass on glass welding by femtosecond laser pulses
Gstalter, Marion, 12 October 2018
This PhD thesis deals with glass bonding by femtosecond laser pulses. A femtosecond laser source generating pulses at a high repetition rate was used to weld borosilicate glass plates of high surface quality. The method presented in this work differs from the literature in the focusing system used. The influence of the laser parameters on the performance of the bonded samples was studied through a design of experiments, demonstrating that the mechanical and thermal resistance of the samples improves as the amount of deposited energy increases. The bonded samples reach a high mechanical resistance, above 25 MPa, withstand thermal shocks above 300 °C, and show high transparency, above 90%. By adapting the laser parameters to the distance between the glass plates, glasses that are not in optical contact can also be welded. The method was also successfully applied to bonding glass on silicon.
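A design of experiments of this kind boils down to estimating factor effects on the measured responses. The sketch below shows a two-level full-factorial main-effect computation; the factor names and strength values are hypothetical, not data from this work.

```python
# Hedged sketch of a 2^2 full-factorial analysis of the kind used to rank
# laser parameters; factors and shear-strength values are made up.
import itertools
import numpy as np

levels = [-1, +1]
design = np.array(list(itertools.product(levels, levels)))  # energy, rep. rate
strength = np.array([14.0, 22.0, 18.0, 27.0])               # shear strength, MPa

for name, col in zip(["pulse energy", "repetition rate"], design.T):
    effect = strength[col == +1].mean() - strength[col == -1].mean()
    print(f"main effect of {name}: {effect:+.1f} MPa")
```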
363
Planejamento de experimentos com várias replicações em paralelo em grades computacionais / Towards distributed simulation design of experiments on computational grids
Pereira Júnior, Lourenço Alves, 07 June 2010
This master's thesis presents a study of grid computing and distributed simulation using the MRIP approach. From this study it was possible to design and implement the prototype of a tool for managing experiments in a grid environment, called Grid Experiments Manager (GEM). The tool is organized in a modular way, can be used as a standalone program or integrated with other software, and is extensible to various computational-grid middlewares. Its implementation also made it possible to compare the performance of sequential simulations with executions on a cluster and on a computational-grid testbed; a benchmark was built so that the same workload could be repeated on each system under evaluation. The tests showed a large reduction in execution time: comparing sequential and cluster executions, efficiency was around 197% for short-running simulations and 239% for longer-running ones; comparing cluster and grid executions, efficiency was 98% and 105% for small and large simulations, respectively.
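Efficiencies above 100% indicate superlinear speedup. As a sketch of the underlying arithmetic (the exact metric GEM reports is an assumption here), using the usual definitions speedup = T_ref / T_parallel and efficiency = speedup / p:

```python
# Standard speedup/efficiency arithmetic for replicated simulations (MRIP).
def speedup(t_ref, t_par):
    return t_ref / t_par

def efficiency(t_ref, t_par, p):
    return speedup(t_ref, t_par) / p

# Illustrative (made-up) timings: sequential run vs. 4 parallel replications.
t_seq, t_cluster, workers = 400.0, 50.7, 4
print(f"speedup   : {speedup(t_seq, t_cluster):.2f}x")
print(f"efficiency: {efficiency(t_seq, t_cluster, workers):.0%}")  # >100% = superlinear
```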
364
Desenvolvimento de um método indicativo de estabilidade para ondansetrona / Development of a stability-indicating method for ondansetron
Maranho, Rafael Finocchiaro, 17 October 2017
An analytical method using ultra-performance liquid chromatography with UV and mass-spectrometric detection, for the assay and the limit test of impurities and degradation compounds of the active pharmaceutical ingredient ondansetron and three different pharmaceutical products, was developed using the analytical Quality by Design (aQbD) approach and validated according to USP-NF and ICH requirements. The method separates the twenty compounds detected in the samples involved in this study: the active ingredient ondansetron, seven impurities listed in the main global pharmacopeial compendia (United States Pharmacopeia-National Formulary, European Pharmacopoeia, British Pharmacopoeia and Indian Pharmacopoeia), eleven degradation compounds generated in stress studies, and one excipient. The final method has a 14-minute run at a mobile-phase flow rate of 0.4 mL/min, with UV detection at 220 nm for impurities and degradation compounds and at 305 nm for ondansetron, supported by high-resolution mass spectrometry (QTOF). Compared with the chromatographic methods required by the ondansetron monographs published in the pharmacopeial compendia cited above, the developed method is an efficient and economical alternative for the routine analysis of different forms of the ondansetron raw material (base, hydrochloride, different hydrates) and pharmaceutical products (tablets, orally disintegrating tablets and injectable solution). This demonstrates the importance of modernizing analytical procedures, not only for the quality assurance of pharmaceutical products and the promotion of public health, but also for the economy and sustainability of pharmaceutical analysis and manufacturing.
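Separation quality in such a method is usually reported as the resolution between adjacent peaks, Rs = 2(t2 - t1)/(w1 + w2) with baseline-width peaks. The retention times and widths below are hypothetical, not values from this work:

```python
# Baseline resolution between adjacent chromatographic peaks.
def resolution(t1, t2, w1, w2):
    return 2.0 * (t2 - t1) / (w1 + w2)

rs = resolution(t1=6.10, t2=6.55, w1=0.20, w2=0.22)
print(f"Rs = {rs:.2f}  (Rs >= 1.5 is usually taken as baseline separation)")
```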
365
Optimisation dynamique en temps-réel d'un procédé de polymérisation par greffage / Dynamic real-time optimization of a polymer grafting process
Bousbia-Salah, Ryad, 17 December 2018
Schematically, dynamic process optimization consists of three basic steps: (i) modeling, in which a (phenomenological) model of the process is built; (ii) problem formulation, in which the performance criterion, the constraints and the decision variables are defined; and (iii) resolution, in which the optimal profiles of the decision variables are determined. It is important to stress that these profiles are guaranteed to be optimal only for the mathematical model used. When applied to the process, they are optimal only if the model describes the process behavior perfectly, which is very rarely the case in practice. Indeed, uncertainty in the model parameters, process disturbances and structural model errors mean that the model-based optimal profiles of the decision variables will probably not be optimal for the process, and applying them generally leads to the violation of some constraints and/or to suboptimal performance. Dynamic real-time optimization is an attractive way to address these problems. Its general idea is to combine experimental measurements with the process model to improve the decision-variable profiles so that the optimality conditions are satisfied on the process itself (performance maximization and constraint satisfaction). For a constrained optimization problem, the optimality conditions have two parts, feasibility and sensitivity, which require different types of experimental measurements: the values of the criterion and constraints, and their gradients with respect to the decision variables. The objective of this thesis is to develop a conceptual strategy for using these measurements online so that the process satisfies not only the necessary but also the sufficient optimality conditions. This development builds on recent advances in deterministic process optimization (stochastic methods are not addressed in this work), in particular the estimation of unmeasured state variables with a moving-horizon observer. A dynamic real-time optimization (D-RTO) methodology was developed and applied to a batch reactor in which a grafting polymerization reaction takes place; the objective is to determine the reactor temperature profile that minimizes the batch time while meeting terminal constraints on the overall conversion and the grafting efficiency.
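The scheme described here, correcting a model-based optimization with measured values and gradients, has the same structure as modifier adaptation. The sketch below illustrates that idea on a toy scalar problem and should be read as an analogy, not as the algorithm developed in the thesis; the plant and model functions are stand-ins, not the polymerization model.

```python
# Hedged sketch of a measurement-corrected optimization loop in the spirit
# of D-RTO / modifier adaptation: "plant" measurements of the cost and its
# gradient correct the model-based problem between iterations.
import numpy as np
from scipy.optimize import minimize

plant = lambda u: (u - 1.3) ** 2        # "true" process response
model = lambda u: (u - 1.0) ** 2        # imperfect model

u, eps, lam = 0.0, 0.0, 0.0             # input, bias and gradient modifiers
for k in range(15):
    res = minimize(lambda v: model(v[0]) + eps + lam * (v[0] - u), x0=[u])
    u_new = res.x[0]
    h = 1e-4                            # "measure" plant cost and gradient
    grad_plant = (plant(u_new + h) - plant(u_new - h)) / (2 * h)
    grad_model = (model(u_new + h) - model(u_new - h)) / (2 * h)
    eps = plant(u_new) - model(u_new)   # zeroth-order modifier
    lam = grad_plant - grad_model       # first-order modifier
    u = u_new

print(f"converged input u = {u:.3f} (plant optimum is 1.3)")
```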
366
Plans prédictifs à taille fixe et séquentiels pour le krigeage / Fixed-size and sequential designs for kriging
Abtini, Mona, 30 August 2018
Numerical simulation has become an alternative to physical experimentation for studying complex phenomena. Such studies usually rely on large, sophisticated simulation codes that are very expensive in computing time, so the code is replaced by a global approximation model, often called a metamodel, most commonly a Gaussian process (kriging) model fitted to a design of experiments, i.e., to the code outputs observed on a small number of simulations. Space-filling designs, whose points are spread evenly over the whole input region, are the most widely used. This thesis consists of two parts, both concerned with constructing designs of experiments adapted to kriging, one of the most popular metamodels. Part I considers fixed-size space-filling designs adapted to kriging prediction. It begins by studying the effect of the Latin hypercube constraint (the design type most used in practice with kriging) on maximin-optimal designs. This study shows that, when the design has few points, adding the Latin hypercube constraint is beneficial because it mitigates the main drawback of maximin-optimal configurations, namely that most points lie on the boundary of the input space. A uniformity criterion called radial discrepancy is then proposed to measure the uniformity of the design points according to their distance to the boundary. Next, we show that the minimax-optimal design is the design closest to the IMSE-optimal design (which is adapted to kriging prediction) but is very expensive to compute, and we introduce a proxy for the minimax-optimal design based on maximin-optimal designs. Finally, we present a carefully tuned simulated-annealing procedure for finding maximin-optimal designs, the aim being to minimize the probability of getting trapped in a local optimum. The second part addresses a slightly different problem: if X_N is a space-filling design of N points, there is no guarantee that a sub-design of n points (1 ≤ n ≤ N) is itself space-filling, yet in practice the simulations may be stopped before the full design is run. This part therefore develops sequential-planning methods that build nested space-filling designs X_n, for every n between 1 and N, all adapted to kriging prediction. We propose a method for generating such nested designs based on information criteria, in particular the mutual information criterion, which measures the reduction in prediction uncertainty over the whole domain brought by observing the response at the design points. This approach guarantees the quality of the designs obtained for all n, 1 ≤ n ≤ N. The key difficulty is the cost of evaluating the criterion, especially when generating designs in high dimension; to address it, an efficient implementation based on block partitioning of the covariance matrices is proposed, which computes the required determinants at a much lower cost.
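The block-partitioning trick mentioned at the end can be shown in a few lines: when a design grows by one point, the covariance matrix gains one row and column, and the new log-determinant follows from a Schur complement instead of a full recomputation. The kernel and points below are toy choices, not the thesis's setup.

```python
# det([[A, b], [b.T, c]]) = det(A) * (c - b.T @ inv(A) @ b),
# so the log-determinant can be updated incrementally as points are added.
import numpy as np

def kernel(X, Y):                       # squared-exponential covariance
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (30, 2))          # current design
x_new = rng.uniform(0, 1, (1, 2))       # candidate point

A = kernel(X, X) + 1e-8 * np.eye(len(X))
b = kernel(X, x_new)
c = kernel(x_new, x_new) + 1e-8

schur = c - b.T @ np.linalg.solve(A, b)        # Schur complement, 1x1
logdet_new = np.linalg.slogdet(A)[1] + np.log(schur[0, 0])

K_full = np.block([[A, b], [b.T, c]])          # check against direct evaluation
assert np.isclose(logdet_new, np.linalg.slogdet(K_full)[1])
print("incremental log-det matches:", logdet_new)
```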
368
Modelo de aplicação de ferramentas de projeto integradas ao longo das fases de desenvolvimento de produto / A model for applying integrated design tools along the product development phases
Rodrigues, Leandro Sperandio, January 2008
This work presents a model for applying integrated design tools along the product development phases, of which there are few examples in the literature. The tools are applied to the improvement of a mounting support for vehicular natural gas cylinders. The focus is the integration of tools across the Informational Design, Conceptual Design and Detailed Design phases of the product development process, where integration means choosing tools that carry the information flow through the phases so that the output of one tool is the input of the next. Starting in the Informational Design phase, qualitative and quantitative market research was performed to identify the customers' demands for the product. These demands were the input to the Quality Function Deployment (QFD) quality matrix, yielding the product requirements and their respective target specifications. From the requirements, different product concepts (configurations) were generated in the Conceptual Design phase with the support of a morphological matrix. A Design of Experiments (DOE) study was then used to evaluate price estimates for the possible product configurations, and the Pugh matrix was used to evaluate the concept alternatives and select the best one. In the Detailed Design phase, Failure Mode and Effects Analysis (FMEA), applied in integration with QFD, identified current and potential failures and their effects on systems and processes. Based on the identified demands, product improvements were defined and implemented. The chosen tools proved suitable for integrated application, ensuring a continuous, traceable information flow with a presumably reduced chance of information loss along the process.
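The FMEA step ranks failure modes by a risk priority number; a common convention (assumed here, since the text gives no ratings) is RPN = severity x occurrence x detection, each rated 1-10. A minimal sketch with hypothetical failure modes for a cylinder support:

```python
# Minimal FMEA risk-priority sketch; failure modes and ratings are
# hypothetical, not taken from the cylinder-support study.
failure_modes = [
    ("bracket weld crack", 9, 3, 4),   # (name, severity, occurrence, detection)
    ("clamp bolt loosens", 7, 5, 3),
    ("strap corrosion",    6, 4, 6),
]
ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, s, o, d in ranked:
    print(f"{name:20s} RPN = {s * o * d}")
```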
369
Snap Scholar: The User Experience of Engaging with Academic Research Through a Tappable Stories Medium
Burk, Ieva, 01 January 2019
Despite the shift toward learning and consuming information on our mobile devices, most academic research is still presented only as long-form text. The Stanford Scholar Initiative has explored the creation and consumption of academic research through video. In the past few years, however, social media platforms and media outlets have popularized another way of presenting information: Snapchat and Instagram introduced the concept of tappable "Stories", which have gained wide adoption for content consumption.
To accelerate the creation of these research talks, I propose an alternative to video: a tappable, Snapchat-like interface. This style is achieved using AMP, Google's open-source project for optimizing web experiences on mobile, and in particular the AMP Stories visual format. My research explores how the process and quality of consuming academic papers change when users consume them through Stories on mobile instead of watching videos.
Since this form of content consumption is still largely unresearched in the academic context, I approached this research with a human-centered design process, going through a few iterations to test various prototypes before formulating research questions and designing an experiment. I tested various formats of research consumption through Stories with pilot users, and learned many lessons to iterate from along the way. I created a way to consume research papers in a Stories format, and designed a comparative study to measure the effectiveness of consuming research papers through the Stories medium and the video medium.
The results indicate that Stories are a quicker way to consume the same content and improve the user's pace of comprehension. Further, the Stories medium gives the user a self-paced method, both temporally and in terms of content, for consuming technical research topics, and is deemed less boring than video. While Stories give the learner a chance to participate actively by tapping, the video experience is enjoyed for its reduced effort and its audio component. These findings suggest that the Stories medium may be a promising interface in educational contexts for distributing scientific content and assisting with active learning.
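The thesis does not spell out the statistical analysis here; one plausible way to compare the two media on a per-participant comprehension-speed measure is a Welch t-test, sketched below with fabricated numbers.

```python
# Hedged sketch of comparing two media on comprehension speed; the scores
# are fabricated for illustration and the study's actual measures may differ.
from scipy.stats import ttest_ind

stories = [7.1, 6.8, 7.9, 8.2, 6.5, 7.4]   # e.g., quiz points per minute
video   = [5.9, 6.2, 6.8, 5.5, 6.1, 6.6]
t, p = ttest_ind(stories, video, equal_var=False)   # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```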
370
Influence of geometry and placement configuration on side forces in compression springs
Rahul Deshmukh, 12 November 2019
A leading cause of premature failure and excessive wear in mechanical components that rely on compression springs is the development of unwanted side forces when the spring is compressed. These side forces are usually around 10%-20% of the magnitude of the axial load and point in different directions in the plane perpendicular to the spring axis. The magnitude and direction of their resultant vary very nonlinearly and unpredictably, even though the axial force behavior of the spring is consistent and predictable. Since these side forces must be resisted by the housing components that hold the spring in place, it is difficult to design those components for optimal operation.

The hypothesis of this study is that side forces are highly sensitive to small changes in spring geometry and its placement configuration in the housing. Several experiments were conducted to measure the axial and side forces in barrel springs, and two different types of finite element models were developed and calibrated to model the spring behavior. Spring geometry and placement are parameterized by several control variables, and an approach based on design of experiments is used to identify the critical parameters that control the side-force behavior. The models gave deeper insight into how side forces develop as the spring is progressively loaded and how its contact interactions with the housing change them. Side forces were indeed found to be sensitive to variations in spring geometry and placement; these sensitivities are quantified to give designers and manufacturers of such springs more control over side-force variation between spring specimens.
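Post-processing the side force from transverse reaction components is simple vector arithmetic; the sketch below uses illustrative values, not measurements from this study, and relates the resultant to the 10%-20%-of-axial-load range quoted above.

```python
# Side-force resultant from transverse reactions at the spring seat,
# as one would post-process FE reaction forces; Fx/Fy are illustrative.
import math

def side_force(fx, fy):
    magnitude = math.hypot(fx, fy)                  # resultant in N
    direction_deg = math.degrees(math.atan2(fy, fx))
    return magnitude, direction_deg

f_axial = 1000.0                                    # N, axial load
mag, ang = side_force(fx=120.0, fy=-85.0)
print(f"side force: {mag:.0f} N at {ang:.0f} deg "
      f"({100 * mag / f_axial:.0f}% of axial load)")
```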