  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Modelovanje "cross-flow" mikrofiltracije suspenzija kvasca primenom koncepta neuronskih mreža i postupka odzivne površine / Cross-flow microfiltration modelling of yeast suspension by neural networks and response surface methodology

Jokić Aleksandar 09 July 2010 (has links)
The aim of this work was to investigate the possibilities of applying neural networks and response surface methodology for modelling cross-flow microfiltration of yeast suspensions. Another aim was to investigate the improvement of the process when a Kenics static mixer is used as a turbulence promoter. Experimental work was performed on 200, 450 and 800 nm tubular ceramic membranes. The use of the static mixer was also examined from an energetic point of view, not only for its influence on permeate flux. All experiments were done in recirculation and concentration mode.

The results clearly show that the performance of cross-flow microfiltration of yeast suspensions can be improved with a static mixer without any additional equipment. In the experimental work, the flux increase ranged from 89.32% to 258.86% for recirculation of the feed suspension, depending on the values of the selected variables, while in concentration mode the improvement ranged from 100% to 540% over the same range of experimental variables.

Neural networks had excellent predictive capabilities for this kind of process. Besides examining the predictive capabilities of neural networks, the influence of each variable was assessed by applying Garson's equation and the connection weights method. The results of this analysis were in fairly good agreement with regression analysis.

For a more detailed analysis of the influence of the variables on the selected responses, response surface methodology was applied. The first step was to investigate the influence of membrane pore size on the microfiltration process. The results suggested that the best way to conduct microfiltration of yeast suspensions is with the membrane with a mean pore size of 200 nm, because a larger mean pore size can lead to more prominent internal fouling that causes smaller flux values. Further investigations examined the influences of the individual variables as well as their interactions for the 200 nm membrane. The regression results for this membrane were considerably better than those obtained when modelling all three membranes together. From the energetic point of view, it was concluded that moderate feed flows are optimal for achieving the best results with the static mixer. As the final goal of the response surface methodology, the process variables were optimized by applying the desirability function approach. The optimal values for recirculation of the feed suspension were a transmembrane pressure of 0.2 bar, a feed concentration of 7.54 g/l and a feed flow of 108.52 l/h for maximal specific energy-consumption reduction. For concentration of the feed suspension, the corresponding values were 1 bar, 7.50 g/l and 176 l/h.
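Garson's equation, used above to rank the influence of each experimental variable, distributes the absolute values of a trained network's weights to score each input. A minimal pure-Python sketch for a one-hidden-layer, single-output network; the weight values are purely illustrative, not taken from the thesis:

```python
def garson_importance(w_ih, w_ho):
    """Relative importance of each input in a one-hidden-layer network.

    w_ih[i][j]: weight from input i to hidden neuron j
    w_ho[j]:    weight from hidden neuron j to the single output
    Returns importances that sum to 1 (Garson's algorithm).
    """
    n_in, n_hid = len(w_ih), len(w_ho)
    # absolute contribution of input i routed through hidden neuron j
    contrib = [[abs(w_ih[i][j]) * abs(w_ho[j]) for j in range(n_hid)]
               for i in range(n_in)]
    # normalize contributions within each hidden neuron, then sum per input
    col_sum = [sum(contrib[i][j] for i in range(n_in)) for j in range(n_hid)]
    raw = [sum(contrib[i][j] / col_sum[j] for j in range(n_hid))
           for i in range(n_in)]
    total = sum(raw)
    return [r / total for r in raw]

# two inputs, two hidden neurons; illustrative weights only
imp = garson_importance([[0.8, -0.2], [0.4, 0.6]], [0.5, -1.0])
```

The connection weights method mentioned alongside it differs mainly in keeping the signs of the products instead of taking absolute values.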
82

Improving Storm Surge Hazard Characterization Using "Pseudo-surge" to Augment Hydrodynamic Simulation Outputs

Matthew P. Shisler (5930855) 15 May 2019 (has links)
Joint probability methods for assessing storm surge flood risk use a collection of hydrodynamic storm simulations to fit a response surface model describing the functional relationship between storm surge and storm parameters such as central pressure deficit and the radius of maximum wind speed. However, in areas with a sufficiently low probability of flooding, few storms in the simulated storm suite may produce surge, with most storms leaving the location dry. Analysts could treat these zero-depth, “non-wetting” storms as either truncated or censored data. If non-wetting storms are excluded from the training set used to fit the storm surge response surface, the resulting suite of wetting storms may have too few observations to produce a good fit; in the worst case, the model may no longer be identifiable. If non-wetting storms are censored using a constant value, the response surface fit may be skewed. The problem is that non-wetting storms are indistinguishable from one another, even though some storms may have come closer to wetting than others at a given location. To address these issues, this thesis proposes the concept of a negative surge, or “pseudo-surge”, value intended to describe how close a storm came to causing surge at a location. Optimal pseudo-surge values are determined by their ability to improve the predictive performance of the response surface via minimization of a modified least squares error function. We compare flood depth exceedance estimates generated with and without pseudo-surge to determine the value of perfect information. Though they do not uniformly reduce bias in flood depth exceedance estimates, pseudo-surge values do improve estimates for some regions where fewer than 40% of simulated storms produced wetting.
Furthermore, pseudo-surge values show potential to replace a post-processing heuristic implemented in the state-of-the-art response surface methodology that corrects flood depth exceedance estimates for locations where very few storms cause wetting.
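The modified least-squares idea can be illustrated with a deliberately simplified one-parameter version: wetting storms contribute an ordinary squared residual, while non-wetting storms are penalized only when the fitted surface predicts positive surge at their location. The thesis's actual objective, storm parameterization, and optimizer differ; everything below is an illustrative stand-in with synthetic data:

```python
def fit_surge_surface(x, y, wet, lr=0.01, steps=20000):
    """Fit surge ~ a + b*x by gradient descent, where dry (non-wetting)
    storms only penalize the surface when it predicts positive surge."""
    a, b = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        ga = gb = 0.0
        for xi, yi, w in zip(x, y, wet):
            pred = a + b * xi
            if w:
                r = pred - yi          # ordinary least-squares residual
            elif pred > 0:
                r = pred               # penalize spurious predicted wetting
            else:
                continue               # dry storm, dry prediction: no loss
            ga += 2 * r / n
            gb += 2 * r * xi / n
        a -= lr * ga
        b -= lr * gb
    return a, b

# synthetic storm suite: surge = x - 1 where x > 1, dry otherwise
x = [0.0, 0.5, 1.5, 2.0, 3.0]
wet = [xi > 1 for xi in x]
y = [max(0.0, xi - 1.0) for xi in x]
a, b = fit_surge_surface(x, y, wet)
```

In this toy setup the fitted line recovers the wetting relationship while predicting non-positive surge at the dry locations, which is the behavior pseudo-surge values are meant to encourage.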
83

Material parameter identification of a thermoplastic using full-field calibration

Prabhu, Nikhil January 2020 (has links)
Finite element simulation of thermoplastic components is gaining importance as companies aim to avoid overdesigning components. The cost of a component can be minimized by using an adequate amount of material for its application, and the life of the component in a particular application can be predicted as early as the design phase with the help of computer simulations. Reliable simulation results require an accurate material model that can predict the material behaviour. Most material models consist of a number of material parameters that need to be supplied, and these parameters can be identified from physical tests. The accuracy of the data extracted from the physical tests, however, remains the basis for this process. The report deals with the implementation of an optical measurement technique, Digital Image Correlation (DIC), in contrast to conventional extensometers. A tensile test is conducted on a glass fibre reinforced thermoplastic specimen, according to ISO 527-2/1A, to extract the experimental data with the help of the DIC technique. The material behaviour is reproduced within the finite element analysis software package LS-DYNA, combining the elastoplastic model *MAT_024 with the stress-state-dependent damage and failure model GISSMO. The tensile test is performed under quasi-static conditions to rule out the strain rate dependency of the thermoplastic material. The mesh sensitivity of the damage model is taken into account with element size regularization. The thesis concerns setting up a routine for material parameter identification of thermoplastics using a full-field calibration (FFC) approach, and compares the strain field in the specimen obtained through the newly set up routine against the regular non-FFC (extensometer) measurement routine. The major objective is, through these comparisons, a qualitative assessment of the two routines in terms of calibration time versus gain in simulation accuracy. Material models obtained through both routines are implemented in three-point and four-point bending simulations, and the predicted material behaviours are evaluated against experimental tests.
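Setting aside the DIC and LS-DYNA specifics, the core identification loop — simulate a candidate material model, compare it with measured data, and pick the best-fitting parameters — can be sketched with a bilinear elastoplastic law and a grid search. All material values below are made up for illustration and are not from the thesis:

```python
def simulate_stress(strains, E, sigma_y, H):
    """Bilinear elastoplastic law: linear elastic up to yield, then linear
    hardening with modulus H (a toy stand-in for *MAT_024 + GISSMO)."""
    eps_y = sigma_y / E
    return [E * e if e <= eps_y else sigma_y + H * (e - eps_y) for e in strains]

def identify(strains, measured, E_grid, sy_grid, H_grid):
    """Grid-search least squares: pick the parameter set whose simulated
    stress-strain curve best matches the measured curve."""
    best_err, best_params = float("inf"), None
    for E in E_grid:
        for sy in sy_grid:
            for H in H_grid:
                sim = simulate_stress(strains, E, sy, H)
                err = sum((m - s) ** 2 for m, s in zip(measured, sim))
                if err < best_err:
                    best_err, best_params = err, (E, sy, H)
    return best_params

# synthetic "measured" curve generated from known ground-truth parameters
strains = [0.005 * i for i in range(1, 11)]
measured = simulate_stress(strains, 2000, 50, 100)
params = identify(strains, measured, [1500, 2000, 2500], [40, 50, 60], [50, 100, 150])
```

A full-field approach replaces the scalar stress-strain comparison with a comparison of complete strain fields, which is what makes the calibration richer but also more expensive.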
84

Reagent-Free Immobilization of Industrial Lipases to Develop Lipolytic Membranes with Self-Cleaning Surfaces

Schmidt, Martin, Prager, Andrea, Schönherr, Nadja, Gläser, Roger, Schulze, Agnes 20 October 2023 (has links)
Biocatalytic membrane reactors combine the highly efficient biotransformation capability of enzymes with the selective filtration performance of membrane filters. Common strategies to immobilize enzymes on polymeric membranes are based on chemical coupling reactions, but they are associated with drawbacks such as long reaction times, high costs, and the use of potentially toxic or hazardous reagents. In this study, a reagent-free immobilization method based on electron beam irradiation was investigated, which allows much faster, cleaner, and cheaper fabrication of enzyme membrane reactors. Two industrial lipase enzymes were coupled onto a polyvinylidene fluoride (PVDF) flat sheet membrane to create self-cleaning surfaces. The response surface methodology (RSM) in the design-of-experiments approach was applied to investigate the effects of three numerical factors on enzyme activity, yielding a maximum activity of 823 ± 118 U m⁻² (enzyme concentration: 8.4 g L⁻¹, impregnation time: 5 min, irradiation dose: 80 kGy). The lipolytic membranes were used in fouling tests with olive oil (1 g L⁻¹ in 2 mM sodium dodecyl sulfate), resulting in 100% regeneration of filtration performance after 3 h of self-cleaning in an aqueous buffer (pH 8, 37 °C). Reusability tests over three consecutive cycles demonstrated 95% regeneration. Comprehensive membrane characterization was performed by determining enzyme kinetic parameters, permeance monitoring, X-ray photoelectron spectroscopy, FTIR spectroscopy, scanning electron microscopy, and zeta potential, as well as water contact angle measurements.
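The RSM step — fitting a second-order model to designed experiments and locating its stationary point — can be sketched for a single factor (irradiation dose). The dose/activity numbers below are fabricated so the optimum sits at 80 kGy; they are not the study's data:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via the normal
    equations, solved with Gaussian elimination (partial pivoting)."""
    n = len(xs)
    cols = [[1.0] * n, [float(x) for x in xs], [float(x) ** 2 for x in xs]]
    A = [[sum(cols[i][k] * cols[j][k] for k in range(n)) for j in range(3)]
         for i in range(3)]
    b = [sum(cols[i][k] * ys[k] for k in range(n)) for i in range(3)]
    for p in range(3):                       # forward elimination
        piv = max(range(p, 3), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, 3):
            f = A[r][p] / A[p][p]
            for c in range(p, 3):
                A[r][c] -= f * A[p][c]
            b[r] -= f * b[p]
    coef = [0.0, 0.0, 0.0]                   # back substitution
    for p in (2, 1, 0):
        coef[p] = (b[p] - sum(A[p][c] * coef[c] for c in range(p + 1, 3))) / A[p][p]
    return coef

# made-up dose/activity data with a peak at 80 kGy (illustrative only)
doses = [40, 60, 80, 100, 120]
activity = [800 - 0.05 * (d - 80) ** 2 for d in doses]
b0, b1, b2 = fit_quadratic(doses, activity)
best_dose = -b1 / (2 * b2)   # stationary point of the fitted surface
```

With three factors, the same idea extends to a full quadratic surface with interaction terms, and the stationary point comes from solving the gradient system instead of a single division.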
85

Analyzing the performance of an order accumulation and sortation system using simulation: A design of experiments approach

Habibulla, Murtuza January 2001 (has links)
No description available.
86

An approach to integrating numerical and response surface models for robust design of production systems

Kini, Satish D. 30 March 2004 (has links)
No description available.
87

Statistical Experimental Design Framework for Cognitive Radio

Amanna, Ashwin Earl 30 April 2012 (has links)
This dissertation presents an empirical approach to identifying decisions for adapting cognitive radio parameters with no a priori knowledge of the environment. Cognitively inspired radios attempt to combine observed metrics of system performance with artificial-intelligence decision-making algorithms. Current architectures trend towards hybrid combinations of heuristics, such as genetic algorithms (GA), and experiential methods, such as case-based reasoning (CBR). A weakness of the GA is its reliance on limited mathematical models for estimating bit error rate, packet error rate, throughput, and signal-to-noise ratio. The CBR approach is similarly limited by its dependency on past experiences. Both methods can suffer in environments not previously encountered. In contrast, the statistical methods identify performance estimation models by exercising defined experimental designs. This represents an experiential decision-making process formed in the present rather than the past. There are three core contributions of this empirical framework: 1) it enables a new approach to decision making based on empirical estimation models of system performance, 2) it provides a systematic method for initializing cognitive engine configuration parameters, and 3) it facilitates deeper understanding of system behavior by quantifying parameter significance and interaction effects. Ultimately, this understanding enables simplification of system models by identifying insignificant parameters. This dissertation defines an abstract framework that enables application of statistical approaches to cognitive radio systems regardless of platform or application space. Specifically, it applies factorial design of experiments and response surface methodology (RSM) to an over-the-air wireless radio link. Results are compared to a benchmark GA cognitive engine. The framework is then used for identifying software-defined radio initialization settings.
Taguchi designs, a related statistical method, are implemented to identify initialization settings of a GA. / Ph. D.
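A two-level full factorial design of the kind assessed here enumerates every combination of coded factor levels and estimates each factor's main effect as the difference between its high-level and low-level response means. A small sketch with synthetic responses; in a real run the factors would be radio parameters (e.g. transmit power, modulation order) and the response a measured link metric:

```python
from itertools import product

def full_factorial(k):
    """All runs of a 2^k two-level factorial design in coded units (-1/+1)."""
    return list(product([-1, 1], repeat=k))

def main_effects(design, y):
    """Main effect of each factor: mean response at +1 minus mean at -1."""
    k = len(design[0])
    effects = []
    for f in range(k):
        hi = [yi for run, yi in zip(design, y) if run[f] == 1]
        lo = [yi for run, yi in zip(design, y) if run[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# synthetic responses: factor 1 matters a lot, factor 2 not at all
design = full_factorial(3)
y = [3 * x1 + 0 * x2 + 1 * x3 for (x1, x2, x3) in design]
effects = main_effects(design, y)
```

Factors with negligible effects are exactly the ones the dissertation proposes pruning to simplify the system model; RSM then fits a higher-order surface over the factors that remain.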
88

Fractional Catalytic Pyrolysis Technology for the Production of Upgraded Bio-oil using FCC Catalyst

Mante, Nii Ofei Daku 06 January 2012 (has links)
Catalytic pyrolysis technology is one of the thermochemical platforms used to produce high quality bio-oil and chemicals from biomass feedstocks. In the catalytic pyrolysis process, the biomass is rapidly heated under inert atmosphere in the presence of an acid catalyst or zeolite to promote deoxygenation and cracking of the primary vapors into hydrocarbons and small oxygenates. This dissertation examines the utilization of conventional fluid catalytic cracking (FCC) catalyst in the fractional catalytic pyrolysis of hybrid poplar wood. The influence of Y-zeolite content, steam treatment, addition of ZSM-5 additive, process conditions (temperature, weight hourly space velocity (WHSV) and vapor residence time) and recycling of the non-condensable gases (NCG) on the product distribution and the quality of the bio-oil was investigated. The first part of the study demonstrates the influence of the catalytic properties of the FCC catalyst on the product distribution and quality of the bio-oil. It was found that FCC catalyst with higher Y-zeolite content produces higher coke yield and lower organic liquid fraction (OLF). Conversely, FCC catalyst with lower Y-zeolite content results in lower coke yield and higher OLF. The results showed that higher Y-zeolite content extensively cracks dehydrated products from cellulose decomposition and demethoxylates phenolic compounds from lignin degradation. The Y-zeolite promoted both deoxygenation and coke forming reactions due to its high catalytic activity and large pore size. Higher Y-zeolite content increased the quality of the bio-oil with respect to higher heating value (HHV), pH, density, and viscosity. The steam treatment at 732 °C and 788 °C decreased the total BET surface area of the FCC catalyst. The findings suggest that steam treatment reduces the coking tendency of the FCC catalyst and enhances the yield of the OLF. Analysis of the bio-oils showed that the steamed FCC catalyst produces bio-oil with lower viscosity and density.
Gas chromatography and ¹³C-NMR spectrometry suggested that steam treatment affects the catalyst's selectivity in the formation of CO, CO2, H2, CH4, C2-C5 hydrocarbons and aromatic hydrocarbons. The addition of ZSM-5 additive to the FCC catalyst was found to alter the characteristics and functionality of the catalytic medium. The product slate showed a decrease in coke yield and an increase in OLF with increasing ZSM-5 additive. The FCC/ZSM-5 additive hybrid catalysts produced bio-oils with relatively lower viscosity and higher pH. The formation of CO2, CH4, and H2 decreased whilst C5 and aromatic hydrocarbons increased with increasing ZSM-5 additive level. The second part of the work assesses the effect of operating conditions on the catalytic pyrolysis process. The response surface methodology study showed reaction temperature to be the most influential statistically significant independent variable for char/coke yield, concentration of non-condensable gases, carbon content, oxygen content, pH and viscosity of the bio-oils. The WHSV was the most important statistically significant independent variable affecting the yields of organic liquid and water. Adequate and statistically significant models were generated for the prediction of the responses, with the exception of viscosity. Recycling the NCG in the process was found to potentially increase the liquid yield and decrease the char/coke yield. The experiments with the model fluidizing gases showed that CO/N2, CO2/N2, CO/CO2/N2 and H2/N2 increase the liquid yield and CO2/N2 decreases the char/coke yield. The results showed that recycling the NCG increases the higher heating value and the pH of the bio-oil as well as decreasing its viscosity and density. The concept of recycling the NCG in the catalytic cracking of biomass vapors with FCC catalyst improved the overall process. The evaluation of the reactivity of conventional FCC catalyst towards bio-based molecules provides essential direction for FCC catalyst formulation and design for the production of high-quality bio-oils from catalytic pyrolysis of biomass. / Ph. D.
89

以機器學習方法估計電腦實驗之目標區域 / Estimation of Target Regions in Computer Experiments: A Machine Learning Approach

林家立, Lin, Chia Li Unknown Date (has links)
Computer experiments have become an important tool for exploring the relationships between input factors and output responses. Their defining feature is that conducting a single experiment is usually time consuming and computationally expensive. In general, researchers are more interested in finding an adequate model for the response surface and in the related output optimization problems over the entire input space. Motivated by a real-life parallel and distributed system, here we focus on finding a localized “target region” of the computer experiment. The experiment has an important characteristic: the response surface is not continuous over the target region of interest, so the traditional response surface methodology (RSM) cannot be directly applied. In this thesis, a novel and efficient methodology for estimating this type of target region is proposed. The method incorporates the concept of sequential uniform design (UD) and the development of classification techniques based on support vector machines (SVM). Computer simulation shows that the proposed method can efficiently and precisely estimate target regions of different shapes.
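The estimate-the-region idea can be sketched end to end: sample the input space, label each design point as inside or outside the target region, and train a classifier to recover the boundary. In the sketch below a plain grid stands in for the sequential uniform design and a simple perceptron stands in for the SVM; the target region itself is a made-up half-plane, so this is only a schematic of the workflow, not the thesis's method:

```python
def perceptron(points, labels, epochs=200, lr=0.1):
    """Train a linear classifier (a lightweight stand-in for the SVM)."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in zip(points, labels):
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else -1
            if pred != t:                    # mistake-driven update
                w1 += lr * t * x1
                w2 += lr * t * x2
                b += lr * t
    return w1, w2, b

# design points on a plain grid (the thesis uses a sequential uniform design)
pts = [(i / 9, j / 9) for i in range(10) for j in range(10)]
labels = [1 if x1 > 0.5 else -1 for x1, _ in pts]   # made-up target region
w1, w2, b = perceptron(pts, labels)

# fraction of design points whose region membership the classifier recovers
acc = sum((1 if w1 * x1 + w2 * x2 + b > 0 else -1) == t
          for (x1, x2), t in zip(pts, labels)) / len(pts)
```

The sequential aspect of the real method adds new design points near the current boundary estimate and retrains, concentrating expensive simulation runs where the region edge is still uncertain.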
90

Modélisation statistique pour données fonctionnelles : approches non-asymptotiques et méthodes adaptatives / Statistical modeling for functional data : non-asymptotic approaches and adaptive methods

Roche, Angelina 07 July 2014 (has links)
The main purpose of this thesis is to develop adaptive estimators for functional data. In the first part, we focus on the functional linear model and propose a dimension selection device for projection estimators defined on both fixed and data-driven bases. The prediction error of the resulting estimators satisfies an oracle-type inequality and reaches the minimax rate of convergence. For the estimator defined on a data-driven approximation space, tools of perturbation theory are used to solve the problems related to the random nature of the collection of models. From a numerical point of view, this method of dimension selection is faster and more stable than the usual methods of cross-validation. In the second part, we consider the problem of bandwidth selection for kernel estimators of the conditional cumulative distribution function when the covariate is functional. The method is inspired by the work of Goldenshluger and Lepski. The risk of the estimator is non-asymptotically upper-bounded. We also prove lower bounds and establish that our estimator reaches the minimax convergence rate, up to an extra logarithmic term. In the last part, we propose an extension to a functional context of the response surface methodology, widely used in industry. This work is motivated by an application to nuclear safety.
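The object being estimated in the second part — a conditional cumulative distribution function — can be illustrated in the simpler scalar-covariate case with a Nadaraya-Watson-type estimator. The bandwidth h below is fixed by hand, whereas the thesis selects it adaptively in the spirit of Goldenshluger and Lepski; the data are a toy deterministic example:

```python
import math

def cond_cdf(x0, y0, xs, ys, h):
    """Nadaraya-Watson-type estimator of F(y0 | x0) with a Gaussian kernel:
    a weighted fraction of observations with y <= y0, weights centered at x0."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    num = sum(wi for wi, y in zip(w, ys) if y <= y0)
    return num / sum(w)

# toy data: y is a deterministic, increasing function of x
xs = [i / 100 for i in range(101)]
ys = xs
hi = cond_cdf(0.5, 0.6, xs, ys, 0.05)   # nearly all local mass below 0.6
lo = cond_cdf(0.5, 0.4, xs, ys, 0.05)   # almost no local mass below 0.4
```

A bandwidth selection rule of Goldenshluger-Lepski type would compare such estimators across a grid of h values, trading an estimated bias proxy against a variance bound, rather than fixing h a priori as done here.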
