  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
351

Multi-Stage Experimental Planning and Analysis for Forward-Inverse Regression Applied to Genetic Network Modeling

Taslim, Cenny 05 September 2008 (has links)
No description available.
352

An Improved Model-Based Methodology for Calibration of an Alternative Fueled Engine

Everett, Ryan Vincent 15 December 2011 (has links)
No description available.
353

Aktive Ausgangsselektion zur modellbasierten Kalibrierung dynamischer Fahrmanöver

Prochaska, Adrian 05 October 2022 (has links)
Die modellbasierte Kalibrierung dynamischer Fahrmanöver an Prüfständen ermöglicht die systematische Optimierung von Steuergerätedaten über den gesamten Betriebsbereich des Fahrzeugs und begegnet somit der steigenden Komplexität in der Antriebsstrangentwicklung. Dabei werden mehrere empirische Black-Box-Modelle zur Abbildung der Zielgrößen für die nachfolgende Optimierung identifiziert. Der Einsatz der statistischen Versuchsplanung ermöglicht eine systematische Abdeckung des gesamten Eingangsbereiches. In jüngerer Vergangenheit werden in der Automobilindustrie vereinzelt Methoden des maschinellen Lernens eingesetzt, um die Anwendung der modellbasierten Kalibrierung zu vereinfachen und die Effizienz zu erhöhen. Insbesondere der Einsatz des aktiven Lernens führt zu vielversprechenden Ergebnissen. Mit diesen Methoden werden Modelle mit einer geringeren Anzahl an Messpunkten identifiziert, während gleichzeitig die erforderliche Expertise für die Versuchsplanerstellung reduziert wird. Eine Herausforderung stellt die simultane Identifikation mehrerer Regressionsmodelle dar, die für die Anwendung des aktiven Lernens auf die Fahrbarkeitskalibrierung erforderlich ist. Hierfür wird im Rahmen dieser Arbeit die aktive Ausgangsselektion (AOS) eingeführt und eingesetzt. Die AOS-Strategie bestimmt dabei das führende Modell im Lernprozess. Erste Veröffentlichungen zeigen das Potenzial der Verwendung von AOS. Statistisch signifikante Ergebnisse über die Effektivität gibt es bislang jedoch nicht, weswegen die weitere intensive Untersuchung von Strategien erforderlich ist. In der vorliegenden Arbeit werden regel- und informationsbasierte AOS-Strategien vorgestellt. Letztere wählen das führende Modell basierend auf allen während des Versuchs verfügbaren Informationen aus. Hier erfolgt erstmals die detaillierte Beschreibung und Untersuchung einer normierten modellgütebasierten Auswahlstrategie. Als Modellart werden Gauß’sche Prozessmodelle verwendet. 
Anhand von Versuchen wird überprüft, ob der Einsatz von AOS gegenüber gängiger statistischer Versuchsplanung sinnvoll ist. Darüber hinaus wird untersucht, ob die Berücksichtigung aller zur Versuchslaufzeit bekannten Informationen zu einer Verbesserung des Lernprozesses beiträgt. Die Strategien werden an Simulationsexperimenten getestet. Diese Simulationsexperimente stellen Grenzfälle echter Versuche dar, die für die Strategien besonders herausfordernd sind. Die Erstellung der Experimente wird anhand von Informationen aus realen Prüfstandsversuchen abgeleitet. Die Strategien werden analysiert und miteinander verglichen. Dazu wird eine anspruchsvolle Referenzstrategie verwendet, die auf den Methoden der klassischen Versuchsplanung basiert. Die Versuche zeigen, dass bereits einfache regelbasierte Strategien bessere Ergebnisse hervorbringen als die Referenzstrategie. Durch Berücksichtigung der momentanen Modellgüte und Abschätzung des Prozessrauschens zur Versuchslaufzeit ist eine weitere Reduktion der Messpunkte um mehr als 50% gegenüber der Referenzstrategie möglich. Da die informationsbasierte Strategie rechenintensiver ist, wird auch ein zeitlicher Vergleich mit unterschiedlich langen Annahmen für die Fahrmanöverdauer am Prüfstand vorgenommen. Bei kurzen Manöverzeiten ist der Vorteil der informationsbasierten Strategie gegenüber der regelbasierten Strategie nur gering ausgeprägt. Mit zunehmender Manöverzeit nähert sich die abgeschätzte zeitliche Ersparnis jedoch der prozentualen Einsparung der Messpunkte an. Die aus den Simulationsexperimenten abgeleiteten Ergebnisse werden anhand eines realen Anwendungsbeispiels validiert. Die Implementierung an einem Antriebsstrangprüfstand wird dazu vorgestellt. Für die Versuche werden insgesamt 1500 Fahrmanöver an diesem Prüfstand durchgeführt. Die Ergebnisse der Versuche bestätigen die aus den Simulationsexperimenten abgeleiteten Ergebnisse. 
Die regelbasierte AOS-Strategie reduziert die Anzahl der Messpunkte im Durchschnitt um 65% im Vergleich zur verwendeten Referenzstrategie. Die informationsbasierte AOS-Strategie verringert die Anzahl der Punkte weiter um 70% gegenüber der Referenzstrategie. Die Modelle der informationsbasierten Strategie sind bereits nach 50% der Punkte besser als die besten Modelle der regelbasierten Strategie. Die Ergebnisse dieser Arbeit legen den ständigen Einsatz der vorgestellten informationsbasierten Strategien für die modellbasierte Kalibrierung nahe. / Model-based calibration of dynamic driving maneuvers on test benches enables the systematic optimization of ECU data over the vehicle’s entire operating range and thus addresses the increasing complexity in powertrain development. Several empirical black-box models are identified to represent the target variables for the subsequent optimization. The use of statistical experimental design enables systematic coverage of the entire input range. Recently, machine learning methods have occasionally been used in the automotive industry to simplify the application of model-based calibration and increase its efficiency. In particular, the use of active learning leads to promising results: it reduces the number of measurement points necessary for model identification while, at the same time, reducing the expertise required for experimental design. The simultaneous identification of multiple regression models, which is required for a broad application of active learning to drivability calibration, is challenging. In this work, active output selection (AOS) is introduced and applied to face this challenge. An AOS strategy determines the leading model in the learning process. Initial publications show the potential of using AOS. However, no statistically significant results about its effectiveness are available to date, which is why these strategies need to be studied in more detail. This work presents rule- and information-based AOS strategies. 
The latter select the leading model based on all current information available during the experiment. For the first time, this publication provides a detailed description and investigation of a normalized model-quality-based selection strategy. Gaussian process models are used as the model type. Experiments are conducted to verify whether the use of AOS is reasonable compared to common designs of experiments. Furthermore, we analyze whether taking into account all information known at the time of the experiment helps to improve the learning process. The strategies are first tested on computer experiments. These computer experiments represent borderline cases of real experiments, which are particularly challenging for the strategies. The experiments are derived using information from real test bench experiments. The strategies are analyzed and compared with each other. For this purpose, a sophisticated reference strategy is used, which is based on the methods of classical designs of experiments. The experiments show that even simple rule-based strategies lead to better results than the reference strategy. By considering the current model quality and estimating the process noise during experiment runtime, a further reduction of the measurement points by more than 50% compared to the reference strategy is possible. Since the information-based strategy is more computationally expensive, we perform a time comparison with different assumptions for the driving maneuver duration at the test bench. For short maneuver times, the advantage of the information-based strategy in comparison to the rule-based strategy is only small. As the maneuver time increases, the estimated time reduction approaches the percentage savings of the measurement points. The results derived from the computer experiments are validated using a real application example. The implementation on a powertrain test bench is presented for this purpose. 
For the experiments, a total of 1500 driving maneuvers are performed on this test bench. The results of the experiments confirm the results of the computer experiments. The rule-based AOS strategy reduces the number of measurement points by 65% on average compared to the reference strategy used. The information-based AOS strategy further reduces the number of points by 70% compared to the reference strategy. The models of the information-based strategy are already better after 50% of the points than the best models of the rule-based strategy. The results of this work suggest the consistent use of the presented information-based strategies for model-based calibration.
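The abstract describes active output selection only at a high level. As a rough illustrative sketch (not the author's implementation: the toy "maneuver" process, the nearest-neighbour surrogate standing in for Gaussian process models, and the uncertainty proxy are all invented here), a normalized model-quality-based AOS loop might look like this:

```python
import math, random

random.seed(0)

# Toy "test bench": one input (e.g. a controller parameter), two noisy outputs.
# Both outputs are measured at every queried point; the leading model only
# decides WHERE to measure next. All names and functions are illustrative.
def measure(x):
    return {"jerk": math.sin(3 * x) + random.gauss(0, 0.02),
            "shock": 0.3 * x ** 2 + random.gauss(0, 0.02)}

def loo_rmse(xs, ys):
    """Leave-one-out RMSE of a 1-nearest-neighbour predictor."""
    err = 0.0
    for i in range(len(xs)):
        rest = [j for j in range(len(xs)) if j != i]
        j = min(rest, key=lambda j: abs(xs[j] - xs[i]))
        err += (ys[j] - ys[i]) ** 2
    return math.sqrt(err / len(xs))

def leading_output(xs, data):
    """Normalised model-quality-based selection: the worst model leads."""
    scores = {}
    for name, ys in data.items():
        span = max(ys) - min(ys) or 1.0
        scores[name] = loo_rmse(xs, ys) / span   # normalised model quality
    return max(scores, key=scores.get), scores

def next_point(xs, ys):
    """Query where the leading model changes fastest (uncertainty proxy)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    k = max(range(len(order) - 1),
            key=lambda k: abs(ys[order[k + 1]] - ys[order[k]]))
    return (xs[order[k]] + xs[order[k + 1]]) / 2

# Seed design, then let the leading model steer the next ten maneuvers.
xs = [0.0, 1.0, 2.0]
data = {k: [] for k in ("jerk", "shock")}
for x in xs:
    m = measure(x)
    for k in data:
        data[k].append(m[k])

for _ in range(10):
    lead, _ = leading_output(xs, data)
    x = next_point(xs, data[lead])
    m = measure(x)
    xs.append(x)
    for k in data:
        data[k].append(m[k])
```

The key design choice mirrored from the abstract is that the selection strategy only picks which output model leads the point placement; every maneuver still yields measurements for all outputs.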
354

INFERENCE FOR ONE-SHOT DEVICE TESTING DATA

Ling, Man Ho 10 1900 (has links)
<p>In this thesis, inferential methods for one-shot device testing data from accelerated life-tests are developed. Due to constraints on time and budget, accelerated life-tests are commonly used to induce more failures within a reasonable amount of test time, thereby obtaining more lifetime information that is especially useful in reliability analysis. One-shot devices, which can be used only once as they are destroyed immediately after testing, yield observations only on their condition and not on their actual lifetimes. So, only binary response data are observed from a one-shot device testing experiment. Since no failure times of units are observed, we use the EM algorithm for determining the maximum likelihood estimates of the model parameters. Inference for the reliability at a mission time and the mean lifetime under normal operating conditions is also developed.</p> <p>The thesis proceeds as follows. Chapter 2 considers the exponential distribution with a single-stress relationship and develops inferential methods for the model parameters, the reliability, and the mean lifetime. The results obtained by the EM algorithm are compared with those obtained from the Bayesian approach. A one-shot device testing data set is analyzed by the proposed method and presented as an illustrative example. Next, in Chapter 3, the exponential distribution with a multiple-stress relationship is considered and corresponding inferential results are developed. The jackknife technique is described for bias reduction in the developed estimates. Interval estimation for the reliability and the mean lifetime is also discussed based on the observed information matrix, the jackknife technique, the parametric bootstrap method, and a transformation technique. Again, we present an example to illustrate all the inferential methods developed in this chapter. 
Chapter 4 considers the point and interval estimation for the one-shot device testing data under the Weibull distribution with multiple-stress relationship and illustrates the application of the proposed methods in a study involving the development of tumors in mice with respect to risk factors such as sex, strain of offspring, and dose effects of benzidine dihydrochloride. A Monte Carlo simulation study is also carried out to evaluate the performance of the EM estimates for different levels of reliability and different sample sizes. Chapter 5 describes a general algorithm for the determination of the optimal design of an accelerated life-test plan for one-shot device testing experiment. It is based on the asymptotic variance of the estimated reliability at a specific mission time. A numerical example is presented to illustrate the application of the algorithm. Finally, Chapter 6 presents some concluding remarks and some additional research problems that would be of interest for further study.</p> / Doctor of Philosophy (PhD)
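The EM idea described above can be sketched in a stripped-down form. The following example is a simplification that drops the stress relationship and assumes one common inspection time for all units (function and variable names are invented): the E-step imputes the unobserved lifetimes from the binary pass/fail outcomes, and the M-step is the complete-data maximum likelihood update.

```python
import math

def em_exponential_oneshot(n_fail, n_survive, tau, lam=1.0, iters=200):
    """EM estimate of an exponential rate from one-shot (binary) data.

    Each of n_fail units had failed by the inspection time tau; each of
    n_survive units was still working. The E-step replaces the unobserved
    lifetimes by their conditional expectations under the current rate;
    the M-step is the complete-data MLE: lambda = n / sum(expected lifetimes).
    """
    n = n_fail + n_survive
    for _ in range(iters):
        # E[T | T <= tau]: mean of the exponential truncated at tau
        e_fail = 1.0 / lam - tau * math.exp(-lam * tau) / (1.0 - math.exp(-lam * tau))
        # E[T | T > tau]: memoryless property of the exponential
        e_surv = tau + 1.0 / lam
        lam = n / (n_fail * e_fail + n_survive * e_surv)
    return lam

# With a single common inspection time the MLE is available in closed form,
# which lets us check the EM fixed point directly.
lam_em = em_exponential_oneshot(n_fail=63, n_survive=37, tau=2.0)
lam_mle = -math.log(37 / 100) / 2.0
```

Here the closed-form MLE is minus the log of the survivor fraction divided by the inspection time, about 0.497 for these counts, and the EM iteration converges to it. The thesis's actual models additionally link the rate to stress covariates, which removes the closed form and makes EM genuinely useful.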
355

Statistical Improvements for Ecological Learning about Spatial Processes

Dupont, Gaetan L 20 October 2021 (has links) (PDF)
Ecological inquiry is rooted fundamentally in understanding population abundance, both to develop theory and improve conservation outcomes. Despite this importance, estimating abundance is difficult due to the imperfect detection of individuals in a sample population. Further, accounting for space can provide more biologically realistic inference, shifting the focus from abundance to density and encouraging the exploration of spatial processes. To address these challenges, Spatial Capture-Recapture (“SCR”) has emerged as the most prominent method for estimating density reliably. The SCR model is conceptually straightforward: it combines a spatial model of detection with a point process model of the spatial distribution of individuals, using data collected on individuals within a spatially referenced sampling design. These data are often coarse in spatial and temporal resolution, though, motivating research into improving the quality of the data available for analysis. Here I explore two related approaches to improve inference from SCR: sampling design and data integration. Chapter 1 describes the context of this thesis in more detail. Chapter 2 presents a framework to improve sampling design for SCR through the development of an algorithmic optimization approach. Compared to pre-existing recommendations, these optimized designs perform just as well but with far more flexibility to account for available resources and challenging sampling scenarios. Chapter 3 presents one of the first methods of integrating an explicit movement model into the SCR model using telemetry data, which provides information at a much finer spatial scale. The integrated model shows significant improvements over the standard model to achieve a specific inferential objective, in this case: the estimation of landscape connectivity. In Chapter 4, I close by providing two broader conclusions about developing statistical methods for ecological inference. 
First, simulation-based evaluation is integral to this process, but the circularity of its use can, unfortunately, be understated. Second, and often underappreciated: statistical solutions should be as intuitive as possible to facilitate their adoption by a diverse pool of potential users. These novel approaches to sampling design and data integration represent essential steps in advancing SCR and offer intuitive opportunities to advance ecological learning about spatial processes.
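The abstract does not spell out the design-optimization algorithm of Chapter 2. As a hedged illustration of what algorithmic SCR design optimization can look like, the sketch below greedily places traps under a half-normal encounter model, which is standard in SCR; the greedy criterion, the grids, and all parameter values are invented stand-ins rather than the thesis's method.

```python
import math

# Half-normal SCR encounter model: probability that an individual with
# activity centre `s` is detected at least once at trap `x` over K occasions.
def p_detect(s, x, g0=0.3, sigma=1.0, K=5):
    d2 = (s[0] - x[0]) ** 2 + (s[1] - x[1]) ** 2
    p_occ = g0 * math.exp(-d2 / (2 * sigma ** 2))
    return 1.0 - (1.0 - p_occ) ** K

# Candidate trap locations and a grid of hypothetical activity centres.
cands = [(i, j) for i in range(6) for j in range(6)]
centres = [(i * 0.5, j * 0.5) for i in range(11) for j in range(11)]

def criterion(traps):
    """Design score: expected number of centres detected at least once."""
    total = 0.0
    for s in centres:
        p_miss = 1.0
        for x in traps:
            p_miss *= 1.0 - p_detect(s, x)
        total += 1.0 - p_miss
    return total

# Greedy optimisation: repeatedly add the trap with the best marginal gain.
design, history = [], []
for _ in range(8):
    best = max((c for c in cands if c not in design),
               key=lambda c: criterion(design + [c]))
    design.append(best)
    history.append(criterion(design))
```

The appeal of a greedy scheme here is flexibility: resource limits or inaccessible sites are handled simply by restricting the candidate set, which echoes the abstract's point about accommodating available resources and challenging sampling scenarios.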
356

Determining temperature and drivers of heat in mechanical face seals

Carlén, Vincent January 2024 (has links)
High heat in mechanical seals is a long-recognized primary cause of failure, disrupting the seal's vital lubricating film and heating temperature-sensitive process media. One way to accommodate the seal's heat generation is to choose materials with high thermal conductivity, such as silicon carbide (SiC). In single-use and short-operating-time applications, SiC may be greatly over-dimensioned, unnecessarily environmentally intensive, and expensive. There is a desire to apply other materials in these cases, but the heat generation of the mechanical seal severely limits the material selection. The purpose of this study is to reduce the operating temperature of mechanical seals to enable the application of cheaper and more sustainable materials. The seals used for testing are presently used in Alfa Laval’s single-use separators, CultureOne, and accompanying this work is the design of a temperature-measuring rig for the specific mechanical seal, to be tested in its applied environment. The following research questions have been formulated to concretize the presented problem:
RQ1: Which parameters have the highest significance for generated heat in the single-use mechanical seal?
RQ2: How can the heat of the mechanical seal’s wear face be measured while operating in the CultureOne machine?
This study is deductive, intending to gather quantitative data through thermal measurement. The temperature-measuring rig was designed with inspiration from previous similar studies and was then validated through repeated testing and comparisons with FEM simulations of the tested case. This revealed a temperature loss of 1°C, which has been accounted for in the conclusions. During validation tests, a baseline seal temperature of 41°C was established for comparison.
Heat-affecting parameters have been gathered and tested through the method of Design of Experiments (DoE), where a Plackett-Burman design was chosen to enable testing 9 parameters with 12 tests. The results of the study indicate that surface roughness and sealing liquid temperature have significant effects on the seal temperature. Roughening the surface of the mechanical seal’s polished, static carbon face, as achieved with a 1000-grit abrasive paper, provided a heat-mitigating effect of -9.3 °C, which is a 23% heat reduction. Similarly, introducing more cooling power to the system by cooling the sealing liquid with external methods such as an ice bath provided a heat-reducing effect of -5.4 °C at the end of a one-hour test, reducing the temperature by 13%. Combining the parameters provided a 36% reduction in heat for a one-hour run with the mechanical seal in the CultureOne Primo machine. Thus, the temperature-reducing strategies discovered in this study can be applied to enable more sustainable and cheaper material selections for mechanical face seals. / Hög temperatur i plantätningar är sedan länge uppmärksammat som en huvudsaklig haveriorsak eftersom det i många fall leder till att den smörjande vätskefilmen störs. En åtgärd för att avlägsna tätningens värme är att välja material med hög värmeledningsförmåga, som exempelvis kiselkarbid. Vid engångsapplikationer och andra tillämpningsområden med kort tid i användning har kiselkarbidtätningar ofta en överdimensionerad livslängd samtidigt som materialet är både miljöpåfrestande och dyrt i jämförelse med andra material. Således finns behovet att byta material hos tätningar i dessa applikationer, men värmegenereringen utgör hinder och begränsar alternativen i materialvalet. Målet med denna studie är att sänka temperaturen hos mekaniska plantätningar för att möjliggöra val av billigare och miljövänligare material. Tätningarna som används för studiens experiment sitter i Alfa Lavals engångsseparator, CultureOne. 
Till denna studie hör framtagandet av en metod och testuppställning för att mäta temperaturen hos dessa tätningar under drift i sin tillämpade miljö. Utifrån problematiken har följande forskningsfrågor formulerats:
FF1: Vilka parametrar har störst signifikans för värmegenerering i engångstätningarna?
FF2: Hur kan temperaturen av plantätningarnas nötningsyta mätas under drift i CultureOne-maskinen?
Studien är av deduktiv natur, med insamling av primärt kvantitativa data genom temperaturmätningar. Metoden för temperaturmätning togs fram med stöd i mätmetoder från andra studier på tätningar, och mätvärdenas riktighet validerades genom kalibreringssteg, tester och jämförelser med FEM-simuleringar av tätningen. Under kalibreringen detekterades en värmeförlust på 1°C mellan uppmätt temperatur och sann temperatur av tätningens nötningsyta, vilket kompenseras för i slutsatserna. Under normala körförhållanden mättes tätningens temperatur till 41°C vilket användes som utgångspunkt att jämföra med senare experiment. Värmegenererande parametrar för tätningar har sammanställts och testats genom en statistisk försöksplanering, där en Plackett-Burman-design implementerades. Resultaten visar att ytfinhet och temperatur av tätningsvätska har signifikanta effekter för tätningens temperatur. Genom att skapa en grövre yta för plantätningens statiska halvas nötningsyta, som uppnåtts med ett sandpapper av finhet K1000, har en värmesänkning på -9.3 °C, motsvarande en 23% sänkning av utgångstemperaturen, uppnåtts för tätningen efter en timmes drift. På liknande sätt resulterade kylning av tätningsvätskan i en värmesänkning på -5.4 °C, vilket är 13% svalare än utgångstemperaturen. Tillsammans bidrar parametrarna till en 36% lägre temperatur hos plantätningen efter en timmes drift i CultureOne Primo-maskinen. Således skulle de testade strategierna för att sänka plantätningars temperatur kunna användas för att möjliggöra valet av miljövänligare och billigare material.
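The 9-factor, 12-run Plackett-Burman design mentioned above can be constructed from the standard generator row. The sketch below builds the design and estimates main effects; the response model is invented, with coefficients chosen to loosely echo the reported -9.3 °C and -5.4 °C effects, and the column-to-parameter assignment is hypothetical.

```python
# Seed row of the standard 12-run Plackett-Burman design (+1/-1 coding).
SEED = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

def plackett_burman_12():
    """11 cyclic shifts of the seed row, plus a closing row of all lows."""
    rows = [SEED[-s:] + SEED[:-s] if s else SEED[:] for s in range(11)]
    rows.append([-1] * 11)
    return rows

design = plackett_burman_12()        # 12 runs x 11 columns; 9 carry factors

def main_effects(design, y, n_factors=9):
    """effect_j = mean(y | x_j = +1) - mean(y | x_j = -1)."""
    n = len(design)
    return [2.0 * sum(r[j] * yi for r, yi in zip(design, y)) / n
            for j in range(n_factors)]

# Invented seal-temperature response: column 0 plays surface roughness,
# column 1 plays sealing-liquid cooling; all other columns are inert.
y = [41.0 - 4.65 * r[0] - 2.7 * r[1] for r in design]
effects = main_effects(design, y)
```

Because the Plackett-Burman columns are mutually orthogonal, the two active effects are recovered exactly here (-9.3 and -5.4) and the seven inert columns estimate to zero; with real, noisy data the inert columns instead provide an error estimate for significance testing.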
357

Contributions to the Use of Statistical Methods for Improving Continuous Production

Capaci, Francesca January 2017 (has links)
Complexity of production processes, high computing capabilities, and massive datasets characterize today’s manufacturing environments, such as those of continuous and batch production industries. Continuous production has spread gradually across different industries, covering a significant part of today’s production. Common consumer goods such as food, drugs, and cosmetics, and industrial goods such as iron, chemicals, oil, and ore come from continuous processes. To stay competitive in today’s market requires constant process improvements in terms of both effectiveness and efficiency. Statistical process control (SPC) and design of experiments (DoE) techniques can play an important role in this improvement strategy. SPC attempts to reduce process variation by eliminating assignable causes, while DoE is used to improve products and processes by systematic experimentation and analysis. However, special issues emerge when applying these methods in continuous process settings. Highly automated and computerized processes provide an exorbitant amount of serially dependent and cross-correlated data, which may be difficult to analyze simultaneously. Time series data, transition times, and closed-loop operation are examples of additional challenges that the analyst faces. The overall objective of this thesis is to contribute to the use of statistical methods, namely SPC and DoE methods, to improve continuous production. Specifically, this research serves two aims: [1] to explore, identify, and outline potential challenges when applying SPC and DoE in continuous processes, and [2] to propose simulation tools and new or adapted methods to overcome the identified challenges. The results are summarized in three appended papers. Through a literature review, Paper A outlines SPC and DoE implementation challenges for managers, researchers, and practitioners. For example, problems due to process transitions, the multivariate nature of data, serial correlation, and the presence of engineering process control (EPC) are discussed. Paper B further explores one of the DoE challenges identified in Paper A. Specifically, Paper B describes issues and potential strategies when designing and analyzing experiments in processes operating under closed-loop control. Two simulated examples in the Tennessee Eastman (TE) process simulator show the benefits of using DoE techniques to improve and optimize such industrial processes. Finally, Paper C provides guidelines, using flow charts, on how to use the continuous process simulator, “The revised TE process simulator,” run with a decentralized control strategy as a test bed for developing SPC and DoE methods in continuous processes. Simulated SPC and DoE examples are also discussed.
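One common remedy for the serial-correlation challenge outlined above (a textbook approach, not necessarily the one proposed in this thesis) is to fit a time-series model to the in-control data and monitor the one-step-ahead residuals, which are approximately independent, instead of the raw autocorrelated observations. A minimal AR(1) residual chart sketch, on simulated data with an injected mean shift:

```python
import random

random.seed(1)

# Simulated autocorrelated process data: AR(1) noise with a large mean
# shift injected from observation 150 onward (all values illustrative).
phi_true, n_obs, shift_at, shift = 0.8, 200, 150, 8.0
x, prev = [], 0.0
for t in range(n_obs):
    prev = phi_true * prev + random.gauss(0, 1)
    x.append(prev + (shift if t >= shift_at else 0.0))

# Step 1: fit AR(1) to the in-control segment (lag-1 least squares).
num = sum(x[t] * x[t - 1] for t in range(1, shift_at))
den = sum(x[t - 1] ** 2 for t in range(1, shift_at))
phi_hat = num / den

# Step 2: chart the residuals with Shewhart-style 3-sigma limits
# estimated from the in-control residuals.
resid = [x[t] - phi_hat * x[t - 1] for t in range(1, n_obs)]
base = resid[:shift_at - 1]
centre = sum(base) / len(base)
sd = (sum((r - centre) ** 2 for r in base) / (len(base) - 1)) ** 0.5
ucl, lcl = centre + 3 * sd, centre - 3 * sd

# Time indices (into x) flagged as out of control.
signals = [t + 1 for t, r in enumerate(resid) if not (lcl <= r <= ucl)]
```

Charting the raw series here would violate the independence assumption behind the 3-sigma limits; the residual chart restores it at the cost of a known weakness, namely that a sustained shift mostly shows up in the first residual after the change.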
358

A comparison of a distributed control system’s graphical interfaces : a DoE approach to evaluate efficiency in automated process plants / En jämförelse av grafiska gränssnitt för ett distribuerat kontrollsystem : en försöksplaneringsstrategi för att utvärdera effektiviteten i fabriker med automatiserade processer

Maanja, Karen January 2024 (has links)
Distributed control systems play a central role for critical processes within a plant that need to be monitored or controlled. They ensure high production availability and output while simultaneously ensuring the safety of the personnel and the environment. However, 5% of global annual production is lost due to unscheduled downtime. 80% of unscheduled shutdowns could have been prevented, and 40% of these are caused by human error. This study is conducted at ABB's Process Automation team in Umeå. The aim is to examine whether different human-machine interfaces affect operators' effectiveness in resolving errors and maintaining a high production level. DoE is the chosen approach for this study, which includes planning and conducting an experiment where the two dependent variables are Effect and Time. The independent variables examined are Scenario, Graphic, and Operator, which are used as factors in a factorial design, each having two levels. Experiments showed that the design of the human-machine interface has no impact on either response, i.e. it has no statistically significant effect on production in terms of operator effectiveness or production efficiency. Instead, the level of experience of the operators seems to be the main contributor to variance in production in the models used. / Distribuerade styrsystem spelar en central roll för kritiska processer inom en anläggning som måste övervakas eller kontrolleras. De säkerställer hög produktionstillgänglighet och produktion samtidigt som säkerheten för personalen och miljön säkerställs. Det har visats att 5% av den globala årsproduktionen går förlorad på grund av oplanerade driftstopp. 80% av de oplanerade avbrotten kunde ha förhindrats och 40% av dessa orsakas av den mänskliga faktorn. Denna studie genomförs hos ABB:s Process Automation team i Umeå. 
Målet är att undersöka om olika gränssnitt för styrsystemen är en viktig faktor för operatörens effektivitet i att åtgärda fel och att upprätthålla en hög produktionsnivå. Försöksplanering är det valda tillvägagångssättet för denna studie, som inkluderar planering och genomförande av experimentet där de två beroende variablerna är Effekt och Tid. De oberoende variabler som undersöks är Scenario, Grafik och Operatör, och används som faktorer i en faktoriell design, där faktorerna har två nivåer vardera. Experimentet visade att utformningen av den grafiska designen för gränssnittet inte har någon inverkan på något av svaren, d.v.s. den har ingen statistiskt signifikant effekt på produktionen i form av operatörseffektivitet eller produktionseffektivitet. Istället tycks operatörernas erfarenhetsnivå vara den främsta orsaken till variationen i produktionen i de modeller som används.
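A two-level factorial analysis like the one described can be sketched as follows. The response values below are invented so that the Operator (experience) effect dominates, mirroring the study's conclusion; they are not the study's data, and the coefficients are arbitrary.

```python
from itertools import product

factors = ["Scenario", "Graphic", "Operator"]
runs = list(product([-1, +1], repeat=3))      # 2^3 full factorial, coded units

# Invented response (time to resolve an error, seconds): a large Operator
# effect, a small Scenario effect, and no Graphic effect at all.
def time_to_resolve(scenario, graphic, operator):
    return 120 - 25 * operator + 5 * scenario + 0 * graphic

y = [time_to_resolve(*run) for run in runs]

def effect(j):
    """Main effect via the usual contrast: (2/N) * sum(x_ij * y_i)."""
    n = len(runs)
    return 2.0 * sum(run[j] * yi for run, yi in zip(runs, y)) / n

effects = {name: effect(j) for j, name in enumerate(factors)}
```

In coded units each main effect is twice the model coefficient, so Operator comes out at -50 seconds, Scenario at +10, and Graphic at exactly zero; in a real analysis one would additionally test these contrasts against replicate error before declaring significance.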
359

The Role of Interface in Crystal Growth, Energy Harvesting and Storage Applications

Ramesh, Dinesh 12 1900 (has links)
A flexible nanofibrous PVDF-BaTiO3 composite material is prepared for impact sensing and biomechanical energy harvesting applications. Dielectric polyvinylidene fluoride (PVDF) and barium titanate (BaTiO3)-PVDF nanofibrous composites were made using the electrospinning process based on a design of experiments approach. The ultrasonication process was optimized using a 2^k factorial DoE approach to disperse BaTiO3 particles in a PVDF solution in DMF. Scanning electron microscopy was used to characterize the microstructure of the fabricated mesh. FT-IR and Raman analyses were carried out to investigate the crystal structure of the prepared mesh. The contribution of surface morphology to the adhesive property of the composite was explained through contact angle measurements. The capacitance of the prepared PVDF-BaTiO3 nanofibrous mesh was more than 40% higher than that of the pure PVDF nanofibers. A comparative study of the dielectric relaxation, thermodynamic properties, and impact analysis of electrospun polyvinylidene fluoride (PVDF) and 3% BaTiO3-PVDF nanofibrous composites is presented. The frequency-dependent dielectric properties revealed microstructural features of the composite material. The dielectric relaxation behavior is further supported by complex impedance analysis and Nyquist plots. The temperature dependence of the electric modulus shows Arrhenius-type behavior. The observed non-Debye dielectric relaxation in the electric loss modulus follows a thermally activated process, which can be attributed to a small polaron hopping effect. The particle-induced crystallization is supported by thermodynamic properties from differential scanning calorimetry (DSC) measurements. The observed increase in piezoelectric response by impact analysis was attributed to the interfacial interaction between PVDF and BaTiO3. The interfacial polarization between PVDF and BaTiO3 was studied using density functional theory calculations and atomic charge density analysis. 
The results obtained indicate that electrospinning offers a potential way to produce nanofibers with a desired crystalline nature that was not observed in molded samples. In addition, BaTiO3 can be used to increase the capacitance and achieve desired surface characteristics of the PVDF nanofibers, which can find potential application as a flexible piezoelectric sensor mimicking biological skin for use in impact sensing and energy harvesting.
360

A design of experiments based on the Normal-boundary-intersection method to identify optimum machine settings in manufacturing processes

Gellerich, Anton, Majschak, Jens-Peter 10 January 2025 (has links)
Finding the appropriate machine settings for a given manufacturing process is an important issue in industrial production. A set of minimum and maximum machine settings corresponds to the lower and upper quality limits that are specified for the produced product, and thereby defines the boundaries of all appropriate machine settings. This paper shows that these boundaries are the solution of a multi-objective optimisation problem, which is called the optimum machine settings problem. However, for most processes there is no mathematical model of the manufacturing process available that maps the setting parameters onto the quality key figures in a way that allows the optimisation problem to be computed. In this case, experiments may provide the required empirical data while the optimisation procedure is executed. Using a case study on heat sealing in industrial packaging, the paper shows how to develop a design of experiments based on the Normal-boundary-intersection method (NBI), and how to generate the Pareto frontier by executing tests according to this test plan. It addresses the specific limitations inherent in solving an optimisation problem by experiments. The behaviour of the method towards discrete and binary objectives and constraints is discussed.
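As a rough illustration of the NBI idea with simulated rather than experimental responses (the quadratic objectives, the grid, and the tolerances here are all invented, and a brute-force search stands in for both the designed experiments and the NBI subproblem solver), the following sketch traces the Pareto frontier of two competing quality figures of one setting parameter:

```python
import math

# Two competing quality key figures of one setting parameter x; invented
# stand-ins for experimentally measured responses.
def f(x):
    return (x ** 2, (x - 2.0) ** 2)

grid = [i / 1000.0 for i in range(2001)]      # candidate machine settings

# Anchor points: the individual minimum of each objective.
x1 = min(grid, key=lambda x: f(x)[0]); F1 = f(x1)
x2 = min(grid, key=lambda x: f(x)[1]); F2 = f(x2)

# Unit normal to the line between the anchors (the CHIM of the NBI method),
# oriented toward smaller objective values.
dx, dy = F2[0] - F1[0], F2[1] - F1[1]
norm = math.hypot(dx, dy)
nx, ny = dy / norm, -dx / norm
if nx + ny > 0:
    nx, ny = -nx, -ny

# NBI subproblems: from each point on the anchor line, push as far as
# possible along the normal while staying (nearly) on it. Here this is a
# grid search; in the paper's setting each evaluation of f is a test run.
pareto = []
for w in [0.1 * k for k in range(1, 10)]:
    bx = w * F1[0] + (1 - w) * F2[0]
    by = w * F1[1] + (1 - w) * F2[1]
    best_t, best_x = None, None
    for x in grid:
        vx, vy = f(x)[0] - bx, f(x)[1] - by
        t = vx * nx + vy * ny                       # progress along normal
        perp = abs(vx - t * nx) + abs(vy - t * ny)  # deviation off normal
        if perp < 0.02 and (best_t is None or t > best_t):
            best_t, best_x = t, x
    pareto.append(best_x)
```

For this toy problem the Pareto-optimal settings are x in [0, 2], and each NBI subproblem lands near x = 2 - 2w, giving nine evenly spread frontier points; the evenness of that spread is the main argument for NBI over weighted-sum scans.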
