1. Electrowet Coalescence Of Water Drops In Water-ULSD Dispersion
Bandekar, Ashish, January 2017 (has links)
No description available.
2. Modelling the efficiency of an automated sensor-based sorter
Udoudo, Ofonime, January 2010 (has links)
For the future development of automated sensor-based sorting in the mining industry, an improvement in the separation efficiency of the equipment is desirable. This could be achieved through a better understanding of the identification and separation stages of the automated sorter. For automated sorters that separate using compressed-air jets, poor separation efficiency has been linked with co-deflection losses. Co-deflection losses occur when particles meant to pass on to the ‘accept’ bin are co-deflected along with the particles meant to go to the ‘reject’ bin. To study co-deflection losses and suggest means of improving automated sorter separation efficiency, this research investigates the effects on separation efficiency of particle size, shape, throughput, and the proportion of particles (out of the total test batch) required to be deflected. The effect of the air valve configuration on separation efficiency was also studied. A mathematical model which can be used to predict automated sorter separation efficiency is also presented. All separation efficiency investigations were undertaken using a TiTech Combisense© (BSM 063) automated sorter. Samples of granite were sized into -20+15mm, -15+10mm and -10+6mm size fractions and grouped into cubic and flaky shape fractions. These fractions were then divided into two, with one portion painted for colour separation efficiency investigations. The separation efficiency results confirmed earlier research indicating that particle size and the fraction requiring deflection affect separation efficiency, with separation efficiency decreasing with decreasing particle size and increasing throughput. It was observed that co-deflection loss occurs when correctly identified ‘accept’ particles are co-deflected because of their close proximity to ‘reject’ particles that are to be deflected.
Observations from the tests indicate that an increase in the proportion of particles requiring deflection increases the probability of finding ‘accept’ particles in close proximity to ‘reject’ particles, leading to co-deflections. Monte Carlo simulations were used to produce a random distribution of particles on the conveyor belt, as would be obtained in actual investigations. From these simulations, particle proximity relationships and particle co-deflections were studied. Results indicate that the Monte Carlo simulations under-predict particle proximity associations. The effect of shape on co-deflection was also investigated, with results indicating that flaky particles produce a higher number of co-deflections than cubic particles. It was also observed that the valve sensitivity, determined from valve opening and closing times, is important to the selectivity (precision) of the separating air jets. A mathematical separation efficiency model is presented which contains two variables: the belt loading (calculated from particle size, shape and throughput) and the fraction of the total test batch that is to be deflected (% deflection). The separation efficiency can be calculated once these two variables are determined.
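The proximity mechanism behind co-deflection can be illustrated with a toy Monte Carlo model (a hypothetical sketch, not the thesis' actual simulation; the belt dimensions, particle count and blast radius are all assumed values):

```python
import random

def simulate_codeflection(n_particles=2000, reject_fraction=0.2,
                          belt_length=10.0, belt_width=0.6,
                          blast_radius=0.02, seed=42):
    """Toy Monte Carlo: scatter particles randomly on the belt, mark a
    fraction as 'reject', and count 'accept' particles lost because they
    lie within the air-jet blast radius of a deflected particle."""
    rng = random.Random(seed)
    particles = [(rng.uniform(0, belt_length), rng.uniform(0, belt_width),
                  rng.random() < reject_fraction)
                 for _ in range(n_particles)]
    rejects = [(x, y) for x, y, r in particles if r]
    accepts = [(x, y) for x, y, r in particles if not r]
    co_deflected = sum(
        1 for ax, ay in accepts
        if any((ax - rx) ** 2 + (ay - ry) ** 2 <= blast_radius ** 2
               for rx, ry in rejects))
    # Efficiency of the 'accept' stream: fraction of accept particles
    # that actually reach the accept bin.
    return 1.0 - co_deflected / len(accepts)

eff_low = simulate_codeflection(reject_fraction=0.1)
eff_high = simulate_codeflection(reject_fraction=0.4)
print(eff_low, eff_high)
```

Raising the deflected fraction puts more accept particles within a blast radius of a reject particle, so the computed efficiency drops, matching the trend the abstract reports.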
3. Investigation of Joule Heat Induced in Micro CE Chips Using Advanced Optical Microscopy and the Methods for Separation Performance Improvement
Wang, Jing-Hui, 30 July 2008 (has links)
This research presents a detection scheme for analyzing the temperature distribution produced by the Joule heating effect near the channel wall of a microfluidic chip, utilizing a temperature-dependent fluorescent dye. An advanced optical microscope system, the total internal reflection fluorescence microscope (TIRFM), is used to measure the temperature distribution on the inner channel wall at the site of electroosmotic flow in an electrokinetically driven microfluidic chip. To meet the short working distance of the objective-type TIRFM, microscope cover glasses are used to fabricate the microfluidic chips. The short fluorescence excitation depth of a TIRFM means that the intensity information obtained is not sensitive to the channel-depth variation which usually biases results measured with a conventional epi-fluorescence microscope (Epi-FM). A TIRFM can therefore precisely describe the temperature profile within hundreds of nanometers of the channel wall, the region comprising the Stern layer and the diffuse layer of an electrokinetic microfluidic system. To investigate the temperature distribution produced by the Joule heating effect in electrokinetically driven microchips, this study measures the temperature both on the microchannel wall, with the proposed TIRFM, and inside the microchannel, with an Epi-FM. In addition, this research presents a method to reduce the Joule heating effect and enhance the separation efficiency of DNA biosamples in a chip-based capillary electrophoresis (CE) system utilizing pulsed DC electric fields. Since the pulsed electric field reduces the average power consumption, the Joule heating effect can be significantly reduced. Results indicate that the proposed TIRFM method provides higher measurement sensitivity than the Epi-FM method. A significant temperature difference along the channel depth, measured by TIRFM and Epi-FM, is experimentally observed.
In addition, the measured wall-temperature distributions can serve as boundary conditions for numerical investigation of the Joule heating effect. The proposed method gives a precise temperature profile of microfluidic channels and has a substantial impact on developing a simulation model for precisely predicting the Joule heating effect in microfluidic chips. Moreover, in the study of reducing the Joule heating effect and enhancing the separation efficiency in a chip-based CE system utilizing pulsed electric fields, the experimental and numerical investigations commence by separating a mixed sample comprising two fluoresceins with virtually identical physical properties. The separation level is approximately 2.1 times higher than that achieved using a conventional DC electric field. The performance of the proposed method is further evaluated by separating a DNA sample of a Hae III-digested ΦX-174 ladder. Results indicate that the separation level of the two neighboring peaks 5a (271 bp) and 5b (281 bp) in the DNA ladder is as high as 120%, which is difficult to achieve using a conventional CE scheme. The improved separation performance is attributed to a lower Joule heating effect as a result of a lower average power input and the opportunity for heat dissipation during the zero-voltage stage of the pulse cycle. Overall, the results demonstrate a simple and low-cost technique for achieving high separation performance in CE microchips.
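The reasoning behind the pulsed-field approach, that the average Joule power scales with the duty cycle because nothing is dissipated during the zero-voltage stage, can be sketched with a simple resistive-channel model (the voltage and channel resistance below are assumed illustrative values, not measurements from the thesis):

```python
def average_joule_power(voltage_v, resistance_ohm, duty_cycle=1.0):
    """Average Joule heating power dissipated in a resistive channel
    driven by a pulsed DC field: no power is dissipated during the
    zero-voltage part of each cycle, so P_avg = duty_cycle * V^2 / R."""
    return duty_cycle * voltage_v ** 2 / resistance_ohm

dc_power = average_joule_power(1000.0, 2.0e6)           # conventional DC drive
pulsed_power = average_joule_power(1000.0, 2.0e6, 0.5)  # 50% duty pulsed field
print(dc_power, pulsed_power)
```

Halving the duty cycle halves the average heat load for the same field strength, which is the mechanism the abstract credits for the reduced Joule heating.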
4. Matematické modelování odlehčovacích komor na stokových sítích / Mathematical modelling of CSO chambers
Studnička, Tomáš, Unknown Date (has links)
The thesis is concerned with the use of 3D mathematical modelling to simulate the flow and separation efficiency in single side weir CSO chambers. An analysis of the effect of the turbulence model and the computational grid on simulation results was carried out in order to maximize the efficiency of the numerical simulations. The goal of the thesis is to examine the effect of scum board position on the separation efficiency of a single side weir CSO chamber.
5. Numerical Methods for Simulating Separation in a Vacuum Cleaner Cyclone
Lans, Patrik, January 2016 (has links)
This thesis includes a numerical comparison of different turbulence models and particle models in terms of convergence time and physical accuracy. A cyclone is used as the computational domain. Cyclones are common devices for separating two or more substances. The work is divided into an experimental part and a numerical part. In the experiments, characteristics of the cyclone were measured; this data is then used to evaluate different numerical modeling approaches. The numerical part consists of two parts, single-phase flow and multiphase flow, in which different modeling aspects are examined and presented. Furthermore, important parameters that characterize a cyclone, such as pressure drop and separation efficiency, are calculated. The separation efficiency, i.e. how much of the dust actually ends up in the dust bin, is calculated for two different types of dust. The software used for the numerical simulations was Star-CCM+.
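The separation efficiency figure of merit mentioned above can be illustrated with a minimal calculation (a hedged sketch: the size classes and masses below are made-up values, not measurements from the thesis):

```python
def separation_efficiency(mass_fed_g, mass_in_bin_g):
    """Overall separation efficiency: fraction of the fed dust that ends
    up in the dust bin rather than escaping with the outlet air."""
    return mass_in_bin_g / mass_fed_g

def grade_efficiency(fed_by_size, captured_by_size):
    """Grade efficiency curve: capture fraction per particle-size class."""
    return {d: captured_by_size[d] / fed_by_size[d] for d in fed_by_size}

fed = {5: 10.0, 20: 10.0, 50: 10.0}   # hypothetical: microns -> grams fed
caught = {5: 4.5, 20: 8.9, 50: 9.9}   # hypothetical: grams found in the bin
print(separation_efficiency(30.0, 23.3))
print(grade_efficiency(fed, caught))
```

The grade efficiency rising with particle size reflects the usual cyclone behaviour: large particles are flung to the wall and captured, while the finest dust tends to follow the gas out.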
6. Vergleich von konstanter und periodischer Trenngrenze bei der Thrombozytapherese mit dem Blutzellseparator Fresenius AS 104 / Comparison of constant and periodic interface position in plateletpheresis with the Fresenius AS 104 blood cell separator
Balke, Bettina, 28 October 2002 (has links)
Comparison of two plateletpheresis protocols for the continuously operating cell separator FRESENIUS AS 104, with the aim of consistently reducing the leukocyte contamination (WBC contamination) of plateletpheresis concentrates below the critical threshold for alloimmunisation of 1 x 10^6 leukocytes per transfusion unit (the "critical immunological load of leukocytes", cill). Determining the leukocyte count in this low concentration range is a known problem, so three methods for doing so were compared with one another. In 50 plateletpheresis procedures each on healthy donors, the standard program with a constant interface position or the program modification with a periodic interface position was used. The two interface settings were compared on the basis of the WBC contamination measured in the platelet concentrates and of the calculated quality parameters platelet yield and separation efficiency. To determine the WBC contamination, in addition to microscopic counting of the cells with a modified Nageotte chamber, a new automated blood cell counter (Abbott CD 3500) and a flow cytometric method (FACS) were used. The effectiveness parameters of the two plateletpheresis procedures differed significantly: the mean platelet yield with the constant interface position, 3.6 x 10^11 per unit, was significantly higher than with the periodic interface position, 3.3 x 10^11 per unit (t-test: p = 0.017). Accordingly, with comparable mean values for separation time (63 min vs. 64 min) and separation volume (3588 ml vs. 3737 ml), both the separation rate (5.9 x 10^9/min vs. 5.2 x 10^9/min; t-test: p = 0.018) and the separation efficiency (48.0% vs. 43.3%; t-test: p = 0.005) were significantly lower with the periodic interface position.
The WBC contamination was significantly lower with the periodic interface position: depending on the counting method, the mean WBC contamination was 4.9 x 10^6 per unit with the constant interface position versus 2.1 or 1.2 x 10^6 per unit with the periodic interface position (U-test: p < 0.0001). With FACS counting, the WBC contamination was below the target value of 1 x 10^6 per unit in 46 of 50 plateletpheresis procedures with the periodic interface position (vs. 9 of 50 with the constant interface position). In the comparison between the Nageotte chamber count as the gold standard and the automated counter, the Abbott CD 3500 showed insufficient accuracy at leukocyte concentrations below 50 to 80 per microliter (r = 0.546), whereas the Nageotte chamber count and the FACS method correlated well (r = 0.930). The modification of the platelet separation procedure with periodic shifting of the interface position brings a significant reduction of the WBC contamination. Platelet yield and separation efficiency are reduced by about 10% compared with the standard procedure with a constant interface position. This platelet loss is, however, of no practical consequence, since the filtration of the standard plateletpheresis concentrate required to reach the legally prescribed cill value also leads to a platelet loss of the same order of magnitude. / Comparison of two different separation protocols for plateletpheresis with the Fresenius Blood Cell Separator AS104. The aim was to achieve a low white blood cell (WBC) contamination of the resulting platelet concentrate, below the critical immunological load of leukocytes responsible for alloimmunisation, and furthermore to compare three different methods for determining the exact WBC count in this low range of WBC concentration.
50 healthy donors each underwent platelet apheresis with the Fresenius Blood Cell Separator AS104 using either the periodically alternating interface position (PAIP) or a standard interface position (SIP). To evaluate the influence, the WBC contamination of the platelet concentrate and the separation efficiency (SE) were investigated. WBC were counted either microscopically (modified Nageotte chamber), on a whole blood counter (Abbott CD 3500), or on a FACScan flow cytometer. SE with PAIP is lower than with SIP (43.3% vs. 48%; t-test: p = 0.005). WBC contamination with PAIP is lower than with SIP: depending on the method of counting, 2.1 resp. 1.2 x 10^6/TE for PAIP vs. 4.9 x 10^6/TE for SIP (U-test: p < 0.0001). Correlation between counting the WBC with the modified Nageotte chamber as the 'gold standard' method and with the Abbott CD 3500 was poor (r = 0.546), whereas counting with the modified Nageotte chamber and with the FACScan showed a good correlation (r = 0.93). Modification of the separation protocol for platelet apheresis with the Fresenius Blood Cell Separator AS104 by PAIP results in a lower WBC contamination of platelet concentrates. The lower SE and total number of platelets in the concentrates with the PAIP method compared to SIP make no difference in the end, because the concentrates produced with SIP additionally have to be filtered in order to achieve a WBC contamination below the cill, and thereby undergo a platelet loss of about the same magnitude.
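A rough illustration of how the separation efficiency (SE) quality parameter can be computed from the yield and processed volume reported above (a simplified sketch: the donor platelet pre-count of 250 x 10^3 per microliter, and the assumption that it stays constant during the procedure, are hypothetical, not values from the study):

```python
def separation_efficiency(platelet_yield, precount_per_ul, processed_volume_ml):
    """SE: fraction of the platelets in the processed blood volume that
    end up in the concentrate (simplified; assumes a constant donor
    platelet count during apheresis)."""
    platelets_processed = precount_per_ul * 1000.0 * processed_volume_ml
    return platelet_yield / platelets_processed

# Yield (3.6 x 10^11) and processed volume (3588 ml) from the abstract's
# SIP figures; the 250 x 10^3/microliter pre-count is an assumed typical value.
se = separation_efficiency(3.6e11, 250e3, 3588.0)
print(round(se, 3))
```

With these assumed inputs the sketch lands in the same range as the reported SE of roughly 40-48%, showing that SE is simply collected platelets divided by platelets processed.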
7. A Statistical Analysis of Hydrocyclone Parameters
Hsiang, Thomas C. H., 12 1900 (has links)
Both Part I and Part II are included. / The separation of a mixture of glass spheres in water using 2-inch hydrocyclones was studied.

Three operating parameters were investigated: feed concentration, volume split and feed flow rate. In addition, the three design parameters were cone angle, inlet diameter, and vortex finder length. The performance criteria were the efficiency with which the solids were separated from the liquid, and the energy required per unit mass flowing through the hydrocyclone.

First, the experimental data were analyzed by three different statistical methods and the results compared in an attempt to determine which statistical method was most suitable for this two-criteria system. The three methods were principal component analysis, canonical correlation analysis and multiple regression analysis. The theory behind these methods is briefly outlined. Our conclusion is that using all three methods gives much more insight than could be obtained from any individual method.

Second, an analysis of the above eight hydrocyclone parameters for hydrocyclones with cylindrical sections indicated that, for the range of parameters covered in this work, feed flow rate and inlet diameter influenced the energy loss most, while volume split influenced the separation efficiency the most. Energy loss and separation efficiency are quite independent; this means that it is possible to design and run a hydrocyclone with high separation efficiency and low energy loss. The dilute concentrations used in this work indicate that a hydrocyclone of conventional design can be used in waste water treatment. When the parameters were correlated, a power model gave a more consistent interpretation than a linear model.

Third, the effect of the three operating parameters on hydrocyclones with three different body shapes suggested that the most efficient cyclone was one with a straight cone and no cylindrical section. The body shape dictated which parameters would significantly affect performance. / Thesis / Master of Engineering (ME)
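The abstract's observation that a power model correlated the parameters more consistently than a linear model can be illustrated by fitting a power law via least squares in log-log space (a generic sketch with synthetic data, not the thesis' measurements):

```python
import math

def fit_power_law(x, y):
    """Fit y = k * x**n by linear least squares on (log x, log y)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    k = math.exp(my - slope * mx)
    return k, slope

# Synthetic data: energy loss growing roughly with the square of flow rate,
# as a pressure-drop-like quantity might.
flow = [10, 20, 40, 80]
loss = [5.0, 20.1, 79.8, 321.0]
k, n = fit_power_law(flow, loss)
print(k, n)
```

The fitted exponent comes out close to 2, and a single (k, n) pair describes the whole range, which is the kind of consistent interpretation a straight linear model cannot give for data of this shape.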
8. Efeito do campo elétrico sobre a eficiência de separação de cargas foto-geradas em isolantes / Electric field effect on the separation efficiency of photo-charges generated inside insulators
Evora, Antonio Vieira de Miranda, 31 May 1989 (has links)
This work studies the effect of the electric field on the separation efficiency of holes and electrons generated, by a light pulse, in a layer close to the surface of an insulator. The fraction of carriers that is extracted from the surface layer by the applied electric field and enters the interior of the sample is obtained. It is assumed that the excess charge is produced instantaneously by a strongly absorbed light pulse, and that the electrodes are blocking, preventing secondary injection. The exact analytical solution is obtained for the case in which electrons and holes are created in pairs and are subject to bimolecular recombination. In a second case, an extrinsic photogeneration mechanism is considered, in which free holes can recombine indirectly with trapped electrons and free electrons can be captured by deep traps. Imposing the condition that the holes are much more mobile than the electrons, a numerical solution is obtained and presented graphically. / The effect of the electric field on the efficiency of separation of electrons and holes generated by a strongly absorbed light pulse is studied in two cases. The fraction of carriers that enters the electrode and the fraction that goes into the sample are calculated assuming instantaneous generation. In the first case, carriers are created in pairs and bimolecular recombination prevails. In the second, an extrinsic photo-generation mechanism is assumed, in which generated free holes recombine with trapped electrons present at the surface and electrons excited out of traps may be captured again. In this last case, the mobility of the holes is assumed to be much larger than that of the electrons. The numerical solution was found and conveniently plotted.
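The first case, pairwise generation with bimolecular recombination, can be written in a generic one-dimensional drift-plus-recombination form (an assumed textbook form, not necessarily the exact equations of the thesis):

```latex
\frac{\partial p}{\partial t} + \mu_p E \frac{\partial p}{\partial x} = -\gamma\, n p,
\qquad
\frac{\partial n}{\partial t} - \mu_n E \frac{\partial n}{\partial x} = -\gamma\, n p,
```

where \(p\) and \(n\) are the hole and electron densities, \(\mu_p\) and \(\mu_n\) their mobilities, \(E\) the applied field and \(\gamma\) the bimolecular recombination coefficient; the separation efficiency is then the fraction of carriers swept out of the generation layer before they recombine.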
9. Soap separation efficiency at Gruvön mill: An evaluation of the process before and after a modification
Tran, Tony, January 2011 (has links)
Wood consists not only of cellulose, lignin and hemicellulose but also of so-called extractives, which include fats and acids; in the mill these components are separated from the black liquor. These extractives are known in the mill as tall oil soap. Tall oil has a large field of applications, such as chemicals and fuel, and it can replace fossil oil and thus reduce oil consumption. Tall oil soap is separated from the black liquor in a skimmer, and the focus of this thesis was to examine the effect of air injection and of the soap layer thickness on the soap separation efficiency in a skimmer. The work focused on analyzing the soap content of the inlet and outlet black liquor flows of the skimmer and on detecting whether an enhancement had been achieved with the two mentioned methods. The reasons for the pulp mill to improve the soap separation efficiency were to decrease the risk of foaming and fouling in the evaporator and also to be able to increase the production of tall oil. The air injection gave a 41% improvement of the soap separation efficiency, and further improvements are probably possible to achieve. The air injection flow was about 7 l air/m3 liquor in the black liquor feed. The airflow lowers the density of the soap, creating a greater difference in density between soap and black liquor, which improves the separation efficiency. A thicker soap layer could increase the likelihood of soap drops rising and reaching the soap-liquor interface, because the soap drops tend to bind to each other and are then separated from the liquor instead of following the skimmed liquor outlet (fig. i.2). However, this study shows no indication of improvement at soap layer thicknesses in the range 0.75 to 3.5 m, which also endanger the skimmer by causing overflow or creating a short circuit between the inlet and outlet black liquor flows.
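The efficiency figure reported above can be illustrated with a toy calculation (a hypothetical sketch: the inlet and outlet soap contents are made-up values, and roughly equal liquor flow rates into and out of the skimmer are assumed):

```python
def soap_separation_efficiency(inlet_soap_gpl, outlet_soap_gpl):
    """Fraction of incoming soap removed by the skimmer, computed from the
    soap content (g/l) of the inlet liquor and of the skimmed-liquor
    outlet. Assumes roughly equal liquor flow in and out."""
    return (inlet_soap_gpl - outlet_soap_gpl) / inlet_soap_gpl

base = soap_separation_efficiency(8.0, 4.4)  # hypothetical contents, no air injection
improved = base * 1.41                       # the 41% relative improvement reported
print(base, improved)
```

Comparing inlet against outlet soap content is exactly the measurement strategy the abstract describes for detecting whether air injection or a thicker soap layer improved the skimmer.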
10. Thermodynamic Insight for the Design and Optimization of Extractive Distillation of 1.0-1a Class Separation / Approche thermodynamique pour la conception et l'optimisation de la distillation extractive de mélanges à température de bulle minimale (1.0-1a)
You, Xinqiang, 07 September 2015 (has links)
We study the continuous extractive distillation of minimum-boiling azeotropic mixtures with a heavy entrainer (class 1.0-1a), with the acetone-methanol mixture with water and the DIPE-IPA mixture with 2-methoxyethanol as examples. The process includes the extractive distillation and entrainer regeneration columns, in open-loop and closed-loop flowsheets. A first optimization strategy minimizes the objective function OF by seeking the optimal values of the entrainer flow rate FE, the entrainer and mixture feed positions NFE, NFAB, NFReg, the reflux ratios R1, R2 and the distillate flow rates of each column D1, D2. OF describes the energy demand per quantity of distillate and takes into account the price differences between hot and cold utilities and between the two products. The second strategy is a multi-objective optimization that minimizes OF and the total annualized cost (TAC) and maximizes two new thermodynamic indicators of extractive separation efficiency, total Eext and per tray eext. They describe the ability of the extractive section to separate the product between the top and the bottom of the extractive section. Thermodynamic analysis of the ternary residue curve maps (RCM) and of the isovolatility curves shows the benefit of reducing the operating pressure in the extractive column for separations of 1.0-1a mixtures. A reduced pressure decreases the minimum amount of entrainer and increases the relative volatility of the azeotropic binary mixture in the operating region of the extractive column. This allows the use of a lower reflux ratio and decreases the energy demand. The first optimization strategy is carried out under product purity constraints with the SQP algorithms of the Aspen Plus or Prosim Plus simulators in open loop. The optimized continuous variables are R1, R2 and FE (step 1).
A sensitivity analysis then finds the values of D1, D2 (step 2) and NFE, NFAB, NFReg (step 3), with step 1 repeated for each set of discrete variables. Finally, the process is re-simulated in closed loop and TAC, Eext and eext are computed (step 4). Mass balances explain the interdependence of the distillate flow rates and the product purities. This optimization yields designs with savings close to 20% in energy and cost. The new designs show improved Eext and eext indicators. To assess the influence of Eext and eext on the optimal solution, the second, multi-objective optimization is carried out. The genetic algorithm is not very sensitive to initialization, can optimize the discrete variables N1, N2 and directly uses the closed-loop flowsheet. Analysis of the Pareto front of solutions highlights the effect of FE/F and R1 on TAC and Eext. There is a maximum Eext (resp. minimum R1) for a given R1 (resp. Eext). There is also an optimal indicator Eext,opt for the optimal design with the lowest TAC. Eext,opt cannot be used as the sole optimization objective function, but it complements the other functions OF and TAC. Analysis of the extractive composition profile maps explains the boundary of the Pareto front and why Eext increases as FE decreases and R1 increases, all in relation to the number of trays. Aiming to further reduce TAC and the energy demand, we study processes with double-effect heat integration (TEHI) or with heat pumps (MHP). In TEHI, a new flowsheet with partial heat integration (PHI) reduces the energy demand the most. In MHP, partial vapor recompression (VRC) and partial bottom flash (BF) improve performance by 60% and 40% respectively. In the end, the PHI process is the least expensive, while full vapor recompression is the least energy-consuming.
/ We study the continuous extractive distillation of minimum-boiling azeotropic mixtures with a heavy entrainer (class 1.0-1a) for the acetone-methanol with water and DIPE-IPA with 2-methoxyethanol systems. The process includes both the extractive and the regeneration columns, in an open-loop flowsheet and in a closed-loop flowsheet where the solvent is recycled to the first column. The first optimization strategy minimizes OF and seeks suitable values of the entrainer flowrate FE, entrainer and azeotrope feed locations NFE, NFAB, NFReg, reflux ratios R1, R2 and both distillates D1, D2. OF describes the energy demand at the reboiler and condenser in both columns per product flow rate. It accounts for the price differences in heating and cooling energy and in product sales. The second strategy relies upon the use of a multi-objective genetic algorithm that minimizes OF and the total annualized cost (TAC) and maximizes two novel extractive thermodynamic efficiency indicators: total Eext and per tray eext. They describe the ability of the extractive section to discriminate the product between the top and the bottom of the extractive section. Thermodynamic insight from the analysis of the ternary RCM and isovolatility curves shows the benefit of lowering the operating pressure of the extractive column for 1.0-1a class separations. A lower pressure reduces the minimal amount of entrainer and increases the relative volatility of the original azeotropic mixture for compositions in the distillation region where the extractive column operates, leading to a decrease of the minimal reflux ratio and energy consumption. The first optimization strategy is conducted in four steps under distillation purity specifications: the SQP method built into the Aspen Plus or Prosim Plus simulator is used for the optimization of the continuous variables R1, R2 and FE by minimizing OF in the open-loop flowsheet (step 1).
Then, a sensitivity analysis is performed to find optimal values of D1, D2 (step 2) and NFE, NFAB, NFReg (step 3), while step 1 is repeated for each set of discrete variables. Finally, the design is simulated in the closed-loop flowsheet, and we calculate TAC, Eext and eext (step 4). We also derive from the mass balance the non-linear relationships between the two distillates and show how they relate product purities and recoveries. The results show that double-digit savings can be achieved over designs published in the literature thanks to the improvement of Eext and eext. Then, we study the influence of Eext and eext on the optimal solution and run the second, multi-objective optimization strategy. The genetic algorithm is usually not sensitive to initialization. It allows finding the optimal total tray numbers N1, N2 and is used directly with the closed-loop flowsheet. Within the Pareto front, the effects of the main variables FE/F and R1 on TAC and Eext are shown. There is a maximum Eext (resp. minimum R1) for a given R1 (resp. Eext). There exists an optimal efficiency indicator Eext,opt which corresponds to the optimal design with the lowest TAC. Eext,opt can be used as a complementary criterion for the evaluation of different designs. Through the analysis of the extractive profile map, we explain why Eext increases as FE decreases and R1 increases, and we relate them to the tray numbers. For the sake of further TAC savings and increased environmental performance, double-effect heat integration (TEHI) and mechanical heat pump (MHP) techniques are studied. In TEHI, we propose a novel optimal partial HI process aiming at the largest energy saving. In MHP, we propose the partial VRC and partial BF heat pump processes, for which the coefficients of performance increase by 60% and 40%. Overall, the optimal partial HI process is preferred from the economic viewpoint, while full VRC is the choice from the environmental perspective.
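A hypothetical form of the objective function OF described above, priced energy demand per priced distillate flow (the cold-utility weight and product prices below are assumptions for illustration; the thesis' actual weighting is not reproduced here):

```python
def objective_function(q_reb, q_cond, distillates, w_cold=0.036, prices=(1.0, 1.0)):
    """Hypothetical OF: priced energy demand per priced product flow.
    q_reb, q_cond: reboiler and condenser duties of the two columns (kW);
    distillates: distillate flow rates D1, D2 (kmol/h). The cold-utility
    weight w_cold and the product prices are assumed values."""
    energy = sum(q_reb) + w_cold * sum(q_cond)
    product = sum(p * d for p, d in zip(prices, distillates))
    return energy / product

# Illustrative duties and distillate rates for the two columns.
of = objective_function(q_reb=(2000.0, 500.0), q_cond=(1900.0, 450.0),
                        distillates=(50.0, 45.0))
print(round(of, 2))
```

Weighting the condenser duties much less than the reboiler duties reflects the price gap between heating and cooling utilities that the abstract says OF accounts for; a design change only improves OF if the priced energy saved outweighs any lost distillate.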