641 |
The effects of early experience on cognitive functioning in the rat. Wilson, Lynn Allison, 1953-. January 1989.
Forty-eight rat pups were handled and isolated from postnatal days 3 through 13 in order to determine whether this manipulation would alter the postnatal development of the hippocampus. Half of these animals were then reared in enriched environments from weaning until maturity to determine whether enrichment would ameliorate the expected deficits in learning ability. Beginning at 90 days of age, all animals were tested on a T-maze, a rotating bar, and both place and cued versions of a water maze task. The study failed to find gross deficits in learning as a result of the handling/isolation procedure, although emotional differences between groups were evident, as were sex differences. More questions have been raised than answered by this study, and possible directions for future research are discussed.
642 |
Theoretical and Experimental Analysis of Operational Wave Energy Converters. Lejerskog, Erik. January 2016.
This thesis studies wave energy converters developed at Uppsala University. The wave energy converters are of the point-absorbing type with direct-driven linear generators. The aim has been to study generator design with closed stator slots as well as to carry out offshore experimental studies. By closing the stator slots, the harmonic content in the magnetic flux density is reduced and, as a result, the cogging forces in the generator are reduced as well. By reducing these forces, the noise and vibrations from the generator can be lowered. The studies have shown a significant reduction in the cogging forces in the generator. However, by closing the slots, the magnetic flux finds a short-cut through the closed slots, which lowers the magnetic flux linking the windings. The experimental studies have focused on the motion of the translator. The weight of the translator has a significant impact on the power absorption, especially in the downward motion. Two experiments have been studied with two different translator weights. The results show that with a higher translator weight the power absorption is more evenly divided between the upward and downward motion, as was expected from the simulation models. Furthermore, studies on the influence of a changing active area have been conducted, which show some benefits of a changing active area during the downward motion. The experimental results also indicate snatch loads for the wave energy converter with the lower translator weight. The thesis also presents results from a comparative study of two WECs with almost identical properties: the generators' electrical properties and the buoy volumes are the same, but the buoy heights and diameters differ. Moreover, experimental studies including the conversion from AC to DC have been carried out. The work in this thesis is part of a larger wave power project at Uppsala University, which studies the whole process from energy absorption from the waves to connection to the electrical grid. The project has a test site on the west coast of Sweden near the town of Lysekil, where wave energy systems have been studied since 2004.
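The translator-weight effect described above can be illustrated with a toy calculation. The sketch below is a minimal quasi-static view, not the thesis's simulation model: it assumes a purely linear generator damping force and a connection line that can only pull, with all numbers invented, and shows how a heavier translator evens out the power absorbed on the up- and downstroke.

```python
# Minimal sketch (not the thesis model): why a heavier translator evens out
# power absorption between up- and downstroke in a direct-driven point
# absorber. Assumes a purely linear generator damping force F = gamma*v and
# a connection line that can only pull; all numbers are invented.
g = 9.81  # m/s^2

def stroke_powers(m_translator, gamma, v_buoy=1.0):
    """Crude quasi-static estimate of power on up- and downstroke."""
    # Upstroke: the line is taut and the buoy drags the translator at v_buoy.
    p_up = gamma * v_buoy**2
    # Downstroke: the line goes slack, so gravity alone drives the translator.
    # Its speed saturates where weight balances damping (v = m*g/gamma), and
    # it cannot exceed the buoy speed without slack/snatch behaviour.
    v_down = min(m_translator * g / gamma, v_buoy)
    p_down = gamma * v_down**2
    return p_up, p_down

for m in (1000.0, 3000.0):  # light vs heavy translator, kg
    up, down = stroke_powers(m, gamma=20e3)
    print(f"m={m:>6.0f} kg  P_up={up/1e3:.1f} kW  P_down={down/1e3:.1f} kW  "
          f"ratio={down/up:.2f}")
```

With these invented numbers, the light translator absorbs only a fraction of the upstroke power on the way down, while the heavy one produces nearly symmetrically, mirroring the trend reported above.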
643 |
Pharmacokinetic-Pharmacodynamic modeling and prediction of antibiotic effects. Khan, David D. January 2016.
Emerging antibiotic resistance is becoming a serious threat worldwide, while at the same time interest in developing new antimicrobials has declined. There is consequently a need for efficient methods to develop new treatments that minimize the risk of resistance development and that are effective against infections caused by resistant strains. Based on in silico mathematical models describing the time course of exposure (pharmacokinetics, PK) and effect (pharmacodynamics, PD) of a drug, information can be collected and the outcome of various exposures predicted. A general model structure that characterizes the most important features of the system has advantages, as it can be used in different situations. The aim of this thesis was to develop pharmacokinetic-pharmacodynamic (PKPD) models describing bacterial growth and killing after mono- and combination exposures to antibiotics, and to explore the predictive ability of PKPD models across preclinical experimental systems. Models were evaluated on data from other experimental settings, including prediction into animals. A PKPD model was developed that characterizes the growth and killing of a range of E. coli strains with different MICs, as well as the emergence of resistance. The PKPD model was able to predict results from different experimental conditions, including high start inoculum experiments, a range of laboratory and clinical strains, and experiments where wild-type and mutant bacteria compete at different drug concentrations. A PKPD model developed on in vitro data was also shown to replicate the data from an in vivo study. This thesis illustrates the potential of PKPD models to characterize in vitro data and their use for predictions of different types of experiments. It supports the use of PKPD models to facilitate the development of new drugs and to improve the use of existing antibiotics.
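As a rough illustration of what such a PKPD model looks like, the sketch below couples logistic bacterial growth to an Emax-type kill term driven by an exponentially eliminated drug concentration. This is a generic textbook structure with invented parameter values, not the thesis's fitted model.

```python
# Generic PKPD sketch (illustrative, not the thesis model): logistic growth
# of a bacterial population, an Emax (Hill) kill rate driven by the drug
# concentration, and first-order drug elimination. Parameters are invented.
import numpy as np
from scipy.integrate import solve_ivp

k_growth, B_max = 1.0, 1e9          # net growth rate (1/h), capacity (CFU/mL)
E_max, EC50, hill = 3.0, 1.0, 1.5   # max kill (1/h), potency (mg/L), Hill factor
k_e, C0 = 0.3, 4.0                  # elimination rate (1/h), initial conc. (mg/L)

def rhs(t, y):
    B = y[0]
    C = C0 * np.exp(-k_e * t)                        # PK: first-order decay
    kill = E_max * C**hill / (EC50**hill + C**hill)  # PD: Emax kill term
    dB = (k_growth * (1 - B / B_max) - kill) * B     # growth minus drug kill
    return [dB]

sol = solve_ivp(rhs, (0, 24), [1e6], t_eval=np.linspace(0, 24, 7))
for t, B in zip(sol.t, sol.y[0]):
    print(f"t={t:5.1f} h  log10 CFU/mL = {np.log10(max(B, 1.0)):.2f}")
```

Killing dominates while the concentration stays above the potency parameter; as the drug is eliminated, regrowth takes over, which is the basic time-kill behaviour such models are fitted to.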
644 |
Development of a machine-tooling-process integrated approach for abrasive flow machining (AFM) of difficult-to-machine materials with application to oil and gas exploration components. Howard, Mitchell James. January 2014.
Abrasive flow machining (AFM) is a non-traditional manufacturing technology used to expose a substrate to a pressurised multiphase slurry, comprised of superabrasive grit suspended in a viscous, typically polymeric carrier. Extended exposure to the slurry causes material removal, where the quantity of removal is subject to complex interactions among over 40 variables. Flow is contained within boundary walls, complex in form, causing physical phenomena to alter the behaviour of the media. In setting factors and levels prior to this research, engineers had two options: embark upon a wasteful, inefficient and poor-capability trial-and-error process, or attempt to relate findings obtained in simple geometry to complex geometry through a series of transformations, providing information that could be applied over and over. By condensing process variables into appropriate study groups, it becomes possible to quantify output while manipulating only a handful of variables; those that remain un-manipulated are integral to the factors identified. Through factorial and response surface methodology experiment designs, data are obtained and interrogated, before feeding into a simulated replica of a simple system. Correlation with physical phenomena is sought, to identify flow conditions that drive material removal location and magnitude. This correlation is then applied to complex geometry with relative success. It is found that prediction of viscosity through computational fluid dynamics can be used to estimate as much as 94% of the edge-rounding effect on final complex geometry. Surface finish prediction is lower (~75%), but provides a strong enough relationship to warrant further investigation. Original contributions made in this doctoral thesis include: 1) a method of utilising computational fluid dynamics (CFD) to derive a suitable process model for the productive and reproducible control of the AFM process, including identification of the core physical phenomena responsible for driving erosion; 2) comprehensive understanding of the effects of B4C-loaded polydimethylsiloxane variants used to process Ti6Al4V in the AFM process, including prediction equations containing numerically-verified second-order interactions (factors for grit size, grain fraction and modifier concentration); 3) equivalent understanding of the machine factors providing energy input, studying velocity, temperature and quantity, with verified predictions made from data collected in Ti6Al4V substrate material using response surface methodology; 4) a holistic method for translating process data in control geometry to an arbitrary geometry for industrial gain, extending to a framework for collecting new data and integrating them into current knowledge; and 5) application of the methodology using research-derived CFD, applied to complex geometry and proven by measured process output. As a result of this project, four publications have been made to date: two peer-reviewed journal papers and two peer-reviewed international conference papers. Further publications will be made from June 2014 onwards.
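The prediction equations mentioned in contribution 2 are second-order response surface fits. The sketch below shows the generic form of such a fit, with hypothetical coded factors standing in for grit size, grain fraction and modifier concentration and an invented response; it is not the thesis's actual prediction equation.

```python
# Sketch of a generic second-order response surface fit: intercept, main
# effects, two-way interactions and squared terms. Factors (coded -1..+1)
# stand in for grit size, grain fraction and modifier concentration; the
# response data are invented for illustration only.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(20, 3))          # 20 hypothetical runs, 3 factors
y = 5 + 2*X[:, 0] - X[:, 1] + 0.5*X[:, 0]*X[:, 2] + rng.normal(0, 0.1, 20)

def design_matrix(X):
    """Full quadratic model: intercept, mains, two-way interactions, squares."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
    cols += [X[:, i]**2 for i in range(3)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
names = ["1", "grit", "fraction", "modifier",
         "grit*fraction", "grit*modifier", "fraction*modifier",
         "grit^2", "fraction^2", "modifier^2"]
for n, b in zip(names, beta):
    print(f"{n:>18s}: {b:+.3f}")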
645 |
Going Lean and Green on Your Mobile Machine : A Quantitative Marketing Placebo Effect Study on Eco-Labelled Technology. Bojanowicz, Weronika; Mattsson, Lina; Nilsson, Heidi. January 2016.
Environmental concern has become a widely discussed topic in today's society, and as a result, awareness of the impact human behaviour has on the environment is continuously increasing. Companies take advantage of this concern in their marketing, for instance by promoting their products or services as eco-labelled. Eco-labelled products have further been shown to attract strong consumer opinions, and are thus commonly studied in relation to consumer attitudes. Theory also shows that eco-labelled goods are idealised in favour of conventional ones, an effect referred to as a marketing placebo effect. In connection to this, companies have started to direct interest at eco-labelled technology, a recent phenomenon attracting attention. Nonetheless, existing theory regarding this phenomenon has mainly been applied to specific areas, such as the food industry. The purpose of this study was therefore to explain the marketing placebo effect for eco-labelled technology. 162 experiments were conducted, using one experiment group and one control group, in order to detect a possible marketing placebo effect when implementing an eco-label, with attitudes as an influencing factor. Based on the results, it was revealed that attitudes are crucial to take into consideration when applying an eco-label in a technology context, as it was concluded that attitudes act as a trigger evoking a marketing placebo effect. The findings from this study contradict current theories on how different factors cooperate in the process of a marketing placebo effect, and advances have thus been made in understanding how the marketing placebo effect works when applied in a technology context.
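A hedged sketch of the kind of experiment/control comparison that underlies such a placebo-effect test is shown below. The rating data, scale, and even split of the 162 experiments into two groups of 81 are invented assumptions, not the study's actual instrument or analysis.

```python
# Minimal sketch of an experiment-vs-control comparison for a marketing
# placebo effect. Ratings and group sizes are invented; the study's actual
# instrument and analysis may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical perceived-performance ratings (1-7 scale) of the same device,
# shown with an eco-label (experiment group) or without one (control group).
eco_label = np.clip(rng.normal(5.4, 1.0, 81), 1, 7)
control = np.clip(rng.normal(4.9, 1.0, 81), 1, 7)

t, p = stats.ttest_ind(eco_label, control, equal_var=False)  # Welch's t-test
print(f"mean diff = {eco_label.mean() - control.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.4f}")
# A significant positive difference would indicate that the label alone
# shifts perceived performance, i.e. a marketing placebo effect.
```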
646 |
Experiment och inlärning : Experiment som metod för inlärningsstudier / Experiments and learning : Experiments as a method for studying learning. Andersson, Elisabet. January 2016.
Humans are curious beings. We investigate and explore. We experiment and learn from the results. But that process of learning is not easy to study. Each person learns in different ways, and the verbal part of learning is just one piece of the puzzle. The process of learning happens in many other ways, which makes it hard to study (especially in the past). The aim of this thesis is to examine whether experiments could be a tool to use in that research. It also aims to see whether cultural transmission theory could be a theoretical basis for studying learning processes. The thesis describes experiments as a method, the relations between theoretical and practical memory, and how culture is usually transmitted. It also studies two examples of experiments that were carried out in order to study learning. The thesis discusses the results of the experiments separately and in connection to cultural transmission theory. It discusses the possibilities of experiments as a method and their relation to the process of learning. It also discusses the relevance of modern novices. / Humans are curious beings. We examine things, try things out and learn from our experiments. One thing that contributes to our dominance of the planet is that we do not keep knowledge to ourselves but share it. This includes everything from social rules to tool-related skills and other things we need to get by. Learning is a process that takes place on more than the verbal level, which makes it a difficult subject to study; it becomes even harder for prehistory. The thesis aims to examine whether experiments could be a tool for studying this learning process, and to see whether cultural transmission theory could form a theoretical basis for such studies. Two experiments focused on learning are used to broaden the discussion. The thesis proceeds from the following research questions: How can experiments contribute to understanding learning processes? Can cultural transmission theory be a theoretical basis for understanding learning processes? The thesis begins by presenting cultural transmission theory. The theory, which revolves around the transfer of cultural information to the next generation, is explained: its basic features, its core components, and what can go wrong during transmission. The thesis also takes up the more evolutionarily oriented strand of craft theory, usually referred to as Darwinism. It then focuses on how memory works, on both a theoretical and a bodily level, and describes factors in the learning situation that can affect the outcome, including how the teaching is carried out and what the situation requires in order to work. The thesis also explains experimental archaeology and how it can be applied to the case of learning; the Chaîne Opératoire is mentioned as well. An overall description is then given of the two examples the thesis draws on. One of them comprises more than one experimental session, with different teaching methods; its novice experiment is described in detail with regard to execution, results and the teaching methods the experimenter uses. The thesis then discusses the results of the experimental studies and what can be observed in the physical material the experiments produced. The physical outcome, that is, the objects the experiments resulted in, is related to teaching method, know-how and the degree of error that arose. The results indicate that an actively engaged teacher succeeds best in transferring information. The relevance of a modern novice is then discussed; questions of experience and deeper understanding of the find material are raised, as is the question of exact reproductions. The thesis then discusses the Chaîne Opératoire, learning and cultural transmission in relation to the experimental results. The Chaîne Opératoire is proposed as a possible way to structure and identify the different steps of the production process an experiment passes through, and is also discussed as a theoretical basis for studying the development of a student's know-how, which presupposes material from an experienced individual. Cultural transmission theory is then discussed as a further theoretical basis in relation to learning, and how variations among its core components could be used in studies of learning. Finally, the thesis considers a combination of experiments, the Chaîne Opératoire and cultural transmission theory as a possible theoretical basis for a framework that includes bodily as well as theoretical aspects, and as an opportunity, by exploring variations of these, to pursue a physical result comparable to an archaeological material.
647 |
Statistical adjustment, calibration, and uncertainty quantification of complex computer models. Yan, Huan. 27 August 2014.
This thesis consists of three chapters on the statistical adjustment, calibration, and uncertainty quantification of complex computer models, with applications in engineering. The first chapter systematically develops an engineering-driven statistical adjustment and calibration framework, the second chapter deals with the calibration of a potassium current model in a cardiac cell, and the third chapter develops an emulator-based approach for propagating input parameter uncertainty in a solid end milling process.
Engineering model development involves several simplifying assumptions for the purpose of mathematical tractability which are often not realistic in practice. This leads to discrepancies in the model predictions. A commonly used statistical approach to overcome this problem is to build a statistical model for the discrepancies between the engineering model and observed data. In contrast, an engineering approach would be to find the causes of discrepancy and fix the engineering model using first principles. However, the engineering approach is time consuming, whereas the statistical approach is fast. The drawback of the statistical approach is that it treats the engineering model as a black box and therefore, the statistically adjusted models lack physical interpretability. In the first chapter, we propose a new framework for model calibration and statistical adjustment. It tries to open up the black box using simple main effects analysis and graphical plots and introduces statistical models inside the engineering model. This approach leads to simpler adjustment models that are physically more interpretable. The approach is illustrated using a model for predicting the cutting forces in a laser-assisted mechanical micromachining process and a model for predicting the temperature of outlet air in a fluidized-bed process.
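The discrepancy-modelling approach described here is commonly written in the following form (a standard formulation; the notation is generic, not necessarily the thesis's):

```latex
% Field observation = engineering/computer model at calibration parameters
% \theta, plus a systematic discrepancy term, plus measurement noise.
y(x) = \eta(x, \theta) + \delta(x) + \epsilon,
\qquad \epsilon \sim \mathcal{N}(0, \sigma^2)
```

The statistical route estimates the discrepancy term as a black box, which is what the proposed framework instead tries to open up and attribute to interpretable pieces inside the engineering model.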
The second chapter studies the calibration of a computer model of potassium currents in a cardiac cell. The computer model is expensive to evaluate and contains twenty-four unknown parameters, which makes the calibration challenging for the traditional methods using kriging. Another difficulty with this problem is the presence of large cell-to-cell variation, which is modeled through random effects. We propose physics-driven strategies for the approximation of the computer model and an efficient method for the identification and estimation of parameters in this high-dimensional nonlinear mixed-effects statistical model.
Traditional sampling-based approaches to uncertainty quantification can be slow if the computer model is computationally expensive. In such cases, an easy-to-evaluate emulator can be used to replace the computer model to improve computational efficiency. However, the traditional technique using kriging is found to perform poorly for the solid end milling process. In chapter three, we develop a new emulator, in which a base function is used to capture the general trend of the output, and propose optimal experimental design strategies for fitting it. We call our proposed emulator the local base emulator. Using the solid end milling example, we show that the local base emulator is an efficient and accurate technique for uncertainty quantification and has advantages over other traditional tools.
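The general idea of an emulator built around a base function can be sketched in a few lines: fit a cheap trend that captures the bulk of the simulator output, then interpolate the residuals with a stationary Gaussian process. The toy example below uses an invented one-dimensional "simulator" and a plain linear base; the thesis's local base emulator and its optimal designs are more sophisticated.

```python
# Sketch of the "base function + kriging residual" emulator idea: a cheap
# trend captures the broad shape of the simulator output and a Gaussian
# process interpolates the smooth residual. Toy 1-D problem with an invented
# simulator; not the thesis's local base emulator.
import numpy as np

def simulator(x):                      # stand-in for an expensive computer model
    return np.sin(3 * x) + 2 * x

X = np.linspace(0, 2, 8)               # small design: expensive runs are scarce
y = simulator(X)

coef = np.polyfit(X, y, deg=1)         # base function: linear trend
resid = y - np.polyval(coef, X)

def rbf(a, b, ls=0.4):                 # squared-exponential kernel
    return np.exp(-((a[:, None] - b[None, :]) / ls) ** 2)

K = rbf(X, X) + 1e-8 * np.eye(len(X))  # jitter for numerical stability
alpha = np.linalg.solve(K, resid)

def emulator(x_new):
    """Base trend plus GP interpolation of the residuals."""
    x_new = np.asarray(x_new)
    return np.polyval(coef, x_new) + rbf(x_new, X) @ alpha

x_test = np.array([0.5, 1.1, 1.7])
print("emulator:", np.round(emulator(x_test), 3))
print("truth:   ", np.round(simulator(x_test), 3))
```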
648 |
Supporting Learner-Controlled Problem Selection in Intelligent Tutoring Systems. Long, Yanjin. 01 September 2015.
Many online learning technologies grant students great autonomy and control, which places high demands on self-regulated learning (SRL) skills. With the fast development of online learning technologies, helping students acquire SRL skills becomes critical to student learning. Theories of SRL emphasize that making problem selection decisions is a critical SRL skill. Research has shown that appropriate problem selections that fit with students' knowledge level lead to effective and efficient learning. However, it has also been found that students are not good at making problem selection decisions, especially young learners. It is therefore critical to help students become skilled in selecting appropriate problems in the different learning technologies that offer learner control. I studied this question using, as a platform, Intelligent Tutoring Systems (ITSs), a type of advanced learning technology that has proven effective in supporting students' domain-level learning. ITSs have also been used to help students learn SRL skills such as help-seeking and self-assessment. However, it is an open question whether an ITS can be designed to support students' learning of problem selection skills in a way that has lasting effects on their problem selection decisions and future learning when the tutor support is no longer in effect. ITSs are good at adaptively selecting problems for students based on algorithms like Cognitive Mastery. It is likely, but unproven, that ITS problem selection algorithms could be used to provide tutoring on students' problem selection skills through features like explicit instructions and instant feedback. Furthermore, theories of SRL emphasize the important role of motivation in facilitating effective SRL processes, but little prior work in ITSs has integrated designs that could foster the motivation (i.e., motivational design) to stimulate and sustain effective problem selection behaviors. Lastly, although students generally appreciate having learner control, prior research has found mixed results concerning the effects of learner control on students' domain-level learning outcomes and motivation. There is a need to investigate how learner control over problem selection can be designed in learning technologies to enhance students' learning and motivation. My dissertation work consists of two parts. The first part focuses on creating and scaffolding shared student/system control over problem selection in ITSs by redesigning an Open Learner Model (OLM, a visualization of learning analytics that shows students' learning progress) and integrating gamification features to enhance students' domain-level learning and enjoyment. I conducted three classroom experiments with a total of 566 7th- and 8th-grade students to investigate the effectiveness of these new designs. The results show that an OLM can be designed to support students' self-assessment and problem selection, resulting in greater learning gains in an ITS when shared control over problem selection is enabled. The experiments also showed that a combination of gamification features (rewards plus allowing re-practice of completed problems, a common game design pattern) integrated with shared control was detrimental to student learning.
In the second part of my dissertation, I apply motivational design and user-centered design techniques to extend an ITS with shared control over problem selection so that it helps students learn problem selection skills, with a lasting effect on their problem selection decisions and future learning. I designed a set of tutor features that aim at fostering a mastery-approach orientation and the learning of a specific problem selection rule, the Mastery Rule (I will refer to these features as the mastery-oriented features). I conducted a fourth classroom experiment with 200 6th- to 8th-grade students to investigate the effectiveness of shared control with mastery-oriented features on students' domain-level learning outcomes, problem selection skills and enjoyment. This experiment also measured whether there were lasting effects of the mastery-oriented shared control on students' problem selection decisions and learning in new tutor units. The results show that shared control over problem selection accompanied by the mastery-oriented features leads to significantly better learning outcomes than full system-controlled problem selection in the ITS. Furthermore, the mastery-oriented shared control has lasting effects on students' declarative knowledge of problem selection skills. Nevertheless, there was no effect on future problem selection and future learning, possibly because the tutor greatly facilitated problem selection (through its OLM and badges). My dissertation contributes to the literature on the effects of learner control on students' domain-level learning outcomes in learning technologies. Specifically, I have shown that a form of learner control (shared control over problem selection, with mastery-oriented features) can lead to better learning outcomes than system-controlled problem selection, whereas most prior work has found results in favor of system control. I have also demonstrated that Open Learner Models can be designed to enhance student learning when shared control over problem selection is provided. Further, I have identified a specific combination of gamification features integrated with shared control that may be detrimental to student learning. A second line of contributions concerns research on supporting SRL in ITSs. My work demonstrates that supporting SRL processes in ITSs can lead to improved domain-level learning outcomes, and that shared control with mastery-oriented features has lasting effects on improving students' declarative knowledge of problem selection skills. Regarding the use of ITSs to help students learn problem selection skills, the user-centered motivational design identifies mastery-approach orientation as an important design focus, together with tutor features that can support problem selection in a mastery-oriented way. Lastly, the dissertation contributes to human-computer interaction by generating design recommendations for how to design learner control over problem selection in learning technologies so as to support students' domain-level learning, motivation and SRL.
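Cognitive Mastery, mentioned above, is typically implemented on top of Bayesian Knowledge Tracing: the tutor keeps a probability that each skill is known, updates it after every attempt, and selects problems for unmastered skills. The sketch below is a generic illustration with invented parameters and skill names, not the dissertation tutor's actual code.

```python
# Generic illustration of mastery-based problem selection of the kind ITSs
# use (Bayesian Knowledge Tracing + a mastery threshold). Parameters and
# skills are invented; this is not the dissertation tutor's actual code.
P_INIT, P_LEARN, P_SLIP, P_GUESS, MASTERY = 0.3, 0.2, 0.1, 0.2, 0.95

def bkt_update(p_known, correct):
    """Posterior that the skill is known after one observed attempt."""
    if correct:
        obs = p_known * (1 - P_SLIP) / (
            p_known * (1 - P_SLIP) + (1 - p_known) * P_GUESS)
    else:
        obs = p_known * P_SLIP / (
            p_known * P_SLIP + (1 - p_known) * (1 - P_GUESS))
    return obs + (1 - obs) * P_LEARN   # chance of learning from the attempt

def select_problem(skill_state):
    """Mastery rule: practise the weakest not-yet-mastered skill, else stop."""
    open_skills = {s: p for s, p in skill_state.items() if p < MASTERY}
    if not open_skills:
        return None                    # all skills mastered: move on
    return min(open_skills, key=open_skills.get)

skills = {"area": P_INIT, "perimeter": P_INIT}
for outcome in (True, True, False, True, True, True):
    s = select_problem(skills)
    if s is None:
        break
    skills[s] = bkt_update(skills[s], outcome)
    print(f"practised {s}: P(known) -> {skills[s]:.2f}")
```

The select_problem function mirrors the spirit of the Mastery Rule discussed above: keep practising a skill until the mastery bar is reached, then move on.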
649 |
Cross sections and analysing power energy-sharing distributions of valence (p,2p)-knockout from 208Pb with a projectile of 200 MeV. Bezuidenhout, Jacques. 12 1900.
Thesis (MSc)--Stellenbosch University, 2000. / ENGLISH ABSTRACT: The aim of this work was to study the 208Pb(p,2p)207Tl quasi-free knockout process. The experimental data were measured at the National Accelerator Centre using incident polarised protons of 200 MeV. The two scattered particles from the knockout reaction were detected in coincidence and their energies were determined using a magnetic spectrometer and a solid state detector telescope.

Cross section and analysing power energy distributions were extracted from the experimental measurements and compared with theoretical values for the Distorted Wave Impulse Approximation. The theoretical cross-section calculations predict the experimental cross-section distribution well for all combinations of distorting potentials and bound states that were investigated, with regard to both shape and absolute magnitude. However, the theoretical analysing power distributions did not agree with the experimental quantities. It is therefore not clear whether the analysing power is a useful tool for extracting information on the specifics of the quasi-free reaction mechanism. The spectroscopic factors were found to be consistent with the results obtained in previous studies, thereby inspiring confidence that the problem with the analysing power distribution is not ascribable to a possible deficiency in the experimental techniques used in this work.
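For reference, the analysing power measured with a polarised beam is conventionally defined as the polarisation-normalised asymmetry between the cross sections for the two beam spin orientations:

```latex
% Standard definition of the analysing power A_y for a polarised-beam
% measurement: sigma(up/down) are the cross sections for the two beam spin
% orientations and P is the beam polarisation.
A_y = \frac{1}{P}\,
      \frac{\sigma^{\uparrow} - \sigma^{\downarrow}}
           {\sigma^{\uparrow} + \sigma^{\downarrow}}
```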
/ AFRIKAANSE OPSOMMING: The aim of the study was to investigate the quasi-free 208Pb(p,2p)207Tl scattering process. The experimental data were collected at the National Accelerator Centre using a 200 MeV polarised proton beam. The two scattered particles were measured in coincidence, using a magnetic spectrometer and a solid state detector telescope.

The cross-section and analysing power energy distributions were extracted from the experimental data and compared with calculations of the Distorted Wave Impulse Approximation. The theoretical cross-section calculations predicted the experimental data well for the various parametrisations of the potentials and bound states, agreeing in both shape and absolute magnitude. The calculated analysing power, however, did not agree well with the experimental data. It is therefore not clear whether the analysing power is a useful instrument for obtaining information on the quasi-free reaction mechanism involved. The spectroscopic factors were consistent with results obtained in previous studies, which strengthens confidence that the problem with the analysing power cannot be ascribed to the experimental technique used.
650 |
Modeling & optimisation of coarse multi-vesiculated particles. Clarke, Stephen Armour. 03 1900.
Thesis (MScEng)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: Multi-vesiculated particles (MVP) are synthetic insoluble polymeric particles containing a multitude of vesicles (micro-voids). The particles are generally produced and used as a suspension in an aqueous fluid and are therefore readily incorporated in latex paints as opacifiers. The coarse or suede MVP have a large volume-mean diameter (VMD), generally in the range of 35-60μm, which makes them suitable for textured-effect paints.

The general principle behind the MVP technology is that, as the particles dry, the vesicles drain of liquid and fill with air. The large refractive index difference between the polymer shell and air results in the scattering of incident light, which gives the MVP their white opaque appearance and makes them suitable as an opacifier for the partial replacement of TiO2 in coating systems.

Whilst the coarse MVP have been successfully commercialised, insufficient understanding of the influence of the MVP system parameters on the final MVP product characteristics, coupled with the MVP's sensitivity towards the unsaturated polyester resin (UPR), resulted in a product with significant quality variation. On the other hand, these uncertainties provided the opportunity to model and optimise the MVP system: to develop a better understanding of the influence of the MVP system parameters on the MVP product characteristics, to develop a model that mathematically describes these relationships, and to optimise the MVP system to achieve the product specifications whilst simultaneously minimising the variation observed in the product characteristics.

The primary MVP characteristics for this study were the particle size distribution (quantified by the volume-mean diameter (VMD)) and the reactor buildup.

The approach taken was to analyse the system, determining all possible system factors that may affect it, and then to reduce the total number of system factors by selecting those which have a significant influence on the characteristics of interest. A model was then developed to mathematically describe the relationship between these significant factors and the characteristics of interest. This was done utilising a set of statistical methods known as design of experiments (DoE).

A screening DoE was conducted on the identified system factors, reducing them to a subset of factors which had a significant effect on the VMD and buildup. The UPR was characterised by its acid value and viscosity, and in combination with the identified significant factors a response surface model (RSM) was developed for the chosen design space, mathematically describing their relationship with the MVP characteristics. Utilising a DoE method known as robust parameter design (specifically propagation of error), an optimised MVP system was numerically determined which brought the MVP product within specification and simultaneously reduced the MVP's sensitivity to the UPR.

The validation of the response surface model indicated that the average error in the VMD prediction was 2.16μm (5.16%), which compared well to the 1.96μm standard deviation of replication batches. The high Pred-R2 value of 0.839 and the low validation error indicate that the model is well suited for predicting the VMD characteristic of the MVP system. The application of propagation of error to the model during optimisation resulted in an MVP process and formulation which brought the VMD response from the standard's average of 44.56μm to the optimised system's average of 47.84μm, significantly closer to the desired optimum of 47.5μm. The most notable value added to the system by the propagation of error technique was the reduction in the variation around the mean of the VMD, due to the UPR, by over 30% from the standard to the optimised MVP system.

In addition to the statistical model, dimensional analysis (specifically the Buckingham-Π method) was applied to the MVP system to develop a semi-empirical dimensionless model for the VMD. The model parameters were regressed from the experimental data obtained from the DoE and the model was compared to several models cited in the literature. The dimensionless model was not ideal for predicting the VMD, as indicated by the R2 value of 0.59 and the high average error of 21.25%. However, it described the VMD better than any of the models cited in the literature, many of which had negative R2 values and were therefore not suitable for modelling the MVP system.
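The propagation-of-error step can be sketched as follows: push the variance of a noisy factor (here a stand-in for a UPR property) through a fitted response surface to first order, and penalise settings where the response is sensitive to that factor while steering the mean towards the 47.5μm target. The surface coefficients below are invented for illustration; the thesis's actual RSM differs.

```python
# Sketch of robust parameter design via propagation of error: given a fitted
# response surface y(x, z) where z is a noisy resin property, choose the
# controllable setting x that hits the VMD target while minimising the
# variance transmitted from z. Coefficients are invented for illustration.
from scipy.optimize import minimize_scalar

TARGET, SD_Z = 47.5, 1.0          # target VMD (um), std dev of resin property

def vmd(x, z):                    # hypothetical fitted response surface
    return 40.0 + 4.0 * x + 1.5 * z - 1.2 * x * z

def d_vmd_dz(x):                  # sensitivity of VMD to the noisy factor
    return 1.5 - 1.2 * x

def objective(x):
    bias = (vmd(x, 0.0) - TARGET) ** 2      # missing the target is penalised
    var = (d_vmd_dz(x) * SD_Z) ** 2         # first-order transmitted variance
    return bias + var

res = minimize_scalar(objective, bounds=(0, 2), method="bounded")
x_opt = res.x
print(f"x* = {x_opt:.3f}, VMD = {vmd(x_opt, 0):.2f} um, "
      f"transmitted sd = {abs(d_vmd_dz(x_opt)) * SD_Z:.2f} um")
```

The optimiser trades a small bias from the target against a flatter response in the noisy direction, which is exactly the mechanism by which the resin-induced VMD variation was reduced.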
/ AFRIKAANSE OPSOMMING: Synthetic polymer particles that house and enclose a multitude of air vesicles are better known as MVP (short for the English term "multi-vesiculated particles"). Typically these particles are prepared and stabilised in an aqueous suspension, which makes them miscible with conventional emulsion systems and thus enables them to function as an opacifier in paint. By manipulating the volume-mean diameter (VMD) to between 35 and 60μm, the coarse particles become suitable for use in textured paints, for example finishes with a suede-type texture.

The opacity of MVP arises as the particles dry and the water in the polymer particle is replaced by air. Owing to the large difference in refractive index between the polymer shell and the air vesicles, light is scattered in all directions, which makes the particles appear white. The product can therefore be used to partially replace inorganic pigments such as TiO2 in paint.

Although coarse MVP have been successfully commercialised, only limited knowledge exists about the influence of system variables on the characteristic properties of the final product. This follows, among other things, from observations that the quality of the coarse MVP is easily affected by unknown variations in the reactive polyester resin used to make the particles. This, however, created the opportunity to model and optimise the variables thoroughly so as to gain a better understanding of how the properties are affected. A scientific model was set up to illustrate the relationships and to optimise the system so that product specifications are met while product variation remains minimal.

The overriding aim of this study was to focus on particle size and distribution (determined by means of the VMD) as the primary characteristic, as well as on the degree of buildup on the reactor wall during production.

From first principles, all possible variables were analysed, after which their number was reduced to only those with the greatest influence on the characteristics of interest. Using design of experiments, a scientific model was developed that statistically captures the effect of these factors.

A screening experimental design was carried out to separate insignificant variables from the more meaningful ones. The resin was characterised by a number indicating the acid groups per molecule, as well as by its viscosity. These two properties, together with the other significant factors, were used to develop a response surface model describing their influence on the particle VMD and on reactor buildup. Using a robust design, better described as a propagation-of-error model, the MVP system was numerically optimised. As a result, the MVP stays within specification and the sensitivity of the VMD to variation in the resin was reduced.

Validation tests on the response surface model showed that the average error in the VMD was 2.16μm (5.16%), which agrees well with the 1.96μm standard deviation between replicate runs. The high Pred-R2 value (0.839) and low validation error showed that the proposed model describes the VMD characteristic very well. Application of the propagation-of-error model during optimisation shifted the VMD from the standard's average of 44.56μm to the optimised average of 47.84μm, considerably closer to the desired optimum of 47.5μm. The greatest value added by this study is that the variation around the mean VMD attributable to the resin properties was reduced by over 30% (from the standard to the optimised system).

Further dimensional analysis of the system, specifically using the Buckingham-Π method, led to the development of a semi-empirical dimensionless VMD model. Regressions on experimental data from the designed experiments were compared with several models described in other literature sources. This dimensionless model was not ideal for describing the VMD, since the R2 value was 0.59 and the average error of 21.25% was relatively high. Nevertheless, the model describes the VMD better than any other model proposed in the literature; in many cases negative R2 values were obtained, making those literature models entirely unsuitable for application to the MVP system.