211

Essays on Corruption and Preferences

Viceisza, Angelino Casio 13 January 2008
This dissertation comprises three essays. The theme that unifies them is "experiments on corruption and preferences." The first essay (chapter 2) reports theory-testing experiments on the effect of yardstick competition (a form of government competition) on corruption. The second essay (chapter 3) reports theory-testing experiments on the effect of efficiency and transparency on corruption. Furthermore, this essay revisits the yardstick competition question by implementing an alternative experimental design and protocol. Finally, the third essay (chapter 4) reports a theory-testing randomized field experiment that identifies the causes and consequences of corruption. The first essay finds the following. Theoretically, the paper derives a main proposition suggesting that institutions with more noise give rise to an increase in corrupt behavior and a decrease in voter welfare. Empirically, the paper finds a few key results. First, there is an initially nontrivial proportion of good incumbents in the population. This proportion declines as the experimental session progresses. Second, a large proportion of bad incumbents make theoretically inconsistent choices given the assumptions of the model. Third, overall evidence of yardstick competition is mild. Yardstick competition has little effect as a corruption-taming mechanism when the proportion of good incumbents is low. Namely, an institution characterized by a small number of good incumbents has little room for yardstick competition, since bad incumbents are likely to be replaced by equally bad incumbents. Thus, incumbents have less of an incentive to build a reputation. This is also the case in which (1) yardstick competition leads to non-increasing voter welfare and (2) voters are more likely to re-elect bad domestic incumbents. Finally, a partitioning of the data by gender suggests that males and females exhibit different degrees of learning depending on the payoffs they face. Furthermore, male voter behavior exhibits mild evidence of yardstick competition when voters face the pooling equilibrium payoff. The second essay finds the following. First, efficiency is an important determinant of corruption. A decrease in efficiency makes it more costly for incumbents to "do the right thing." This drives them to divert maximum rents. While voters retaliate slightly, they tend to be worse off. Second, a reduction in a particular form of transparency (defined as an increase in risk in the distribution of the unit cost) leaves corrupt incumbent behavior unchanged. In particular, if the draw of the unit cost is unfavorable, incumbents tend to be less corrupt. Third, there is strong evidence of yardstick competition. On the incumbent's side, yardstick competition acts as a corruption-taming mechanism if the incumbent is female. On the voter's side, voters are less likely to re-elect the incumbent in the presence of yardstick competition. Specifically, voters pay attention to the difference between the tax signal in their own jurisdiction and that in another. As this difference increases, voters re-elect less. This gives true meaning to the concept of "benchmarking." Finally, the analysis sheds light on the role of history and beliefs in behavior. Beliefs are an important determinant of incumbents' choices. If an incumbent perceives a tax signal to be associated with a higher likelihood of re-election, he is more likely to choose it. On the voter's side, history tends to be important. 
In particular, voters are more likely to vote out incumbents as time progresses. This suggests that incumbents care about tax signals because they provide access to re-election, while voters use the history of taxes and re-elections, in addition to current taxes, to formulate their re-election decisions. Finally, the third essay finds the following. First, 19.08% of mail is lost. Second, mail containing money is more likely to be lost, at a rate of 20.90%, and this difference is significant at the 10% level. This finding suggests that loss of mail is systematic (non-random), which implies that this type of corruption is due to strategic behavior as opposed to plain shirking on the part of mail handlers. Third, we find that loss of mail is non-random across other observables. In particular, middle-income neighborhoods are more likely to experience lost (money) mail. Also, female heads of household in low-income neighborhoods are more likely to experience lost mail, while female heads of household in high-income neighborhoods are much less likely to experience lost (money) mail. Finally, this form of corruption is costly to different stakeholders. The sender of mail bears a direct and an indirect cost. The direct cost is the value of the mail. The indirect cost is the cost of having to switch carriers once mail has been lost. Corruption is also costly to the intended mail recipient, as discussed above. Finally, corruption is costly to the mail company (SERPOST) in terms of lost revenue and to society in terms of loss of trust. Overall, the findings suggest that public-private partnerships need not increase efficiency by reducing corruption, particularly when the institution remains a monopoly. Increased efficiency in mail delivery is likely to require (1) privatization and (2) competition; otherwise, the monopolist has no incentive to provide better service and loss of mail is likely to persist.
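The claim that loss is systematic rests on comparing loss rates for money and non-money mail. A minimal sketch of such a two-proportion comparison is given below; the counts are hypothetical placeholders rather than the study's data, and the dissertation's actual estimation strategy may differ.

```python
from scipy.stats import norm

def two_prop_ztest(x1, n1, x2, n2):
    """One-sided z-test: is group 1's loss rate higher than group 2's?"""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                       # pooled proportion under H0
    se = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    return z, norm.sf(z)                                 # upper-tail p-value

# Hypothetical counts (lost / sent) for money mail vs. ordinary mail.
z, p = two_prop_ztest(x1=125, n1=600, x2=104, n2=600)
print(f"z = {z:.2f}, one-sided p = {p:.3f}")             # systematic loss is indicated at the 10% level if p < 0.10
```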
212

Economic action and reference points: an experimental analysis

Solà Belda, Carles 12 March 2001
This thesis analyzes several aspects of the motivations that drive individuals and their implications in economic processes. In particular, I analyze in detail normative criteria that individuals apply, such as fairness and reciprocity. In the Introduction I define how I use the concepts of reciprocity, fairness, menu dependence and reference points, as they recur throughout the different chapters. 
The methodology developed in this thesis employs theoretical models of the behavior of individuals in strategic interactions, using elements of Game Theory and Experimental Economics. In the second chapter, "On Rabin's Concept of Fairness and the Private Provision of Public Goods", I analyze in detail Rabin's (1993) theory of individual behavior and its implications. This model introduces into the utility function, apart from the economic payoffs that the individual obtains in a strategic interaction, psychological phenomena, mainly a sense of fairness in the relation with other agents. In this chapter I analyze the implications of an extended version of this theory in a field where there exists a vast amount of experimental evidence contradicting the behavior predicted by standard game-theoretic models. I show that Rabin's theory is consistent with one piece of evidence repeatedly found in experiments, the so-called "splitting", but inconsistent with another piece of evidence in the field, the "MPCR effect". The third chapter, "Reference Points and Negative Reciprocity in Simple Sequential Games", analyzes the influence that certain payoff vectors, the "reference points", not attainable at the time of decision, may have on the preference for other payoff vectors. This is connected with the attribution of certain intentions to the other players when they select particular courses of action. Using experiments, I obtain results that confirm the importance of these reference points in the reciprocity considerations that individuals apply. Chapter four, "Distributional Concerns and Reference Points", analyzes some aspects that may interact with the reference points in the attribution of intentions. These aspects are the payoff the agent could receive at the reference point, his/her payoff relative to the other agent, and the joint payoff the two agents could obtain at the reference point. The experimental results show that none of these elements can explain the results by itself. Finally, the fifth chapter, "The Sequential Prisoner's Dilemma Game: Reciprocity and Group Size Effects", analyzes how individual motivations interact with social aspects. In particular, it studies how the reactions of individuals to other players' decisions change with the size of the group. The experimental results show that in the prisoner's dilemma game (two-person and three-person games) the behavior of subjects may be consistent with reciprocity considerations and with inequality-aversion considerations.
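For orientation, the Rabin (1993) model analyzed in chapter two is usually stated as follows (a standard textbook formulation, not taken from the thesis itself; notation may differ): player i's expected utility adds to the material payoff a reciprocity term built from a kindness function and a belief about the other player's kindness.

```latex
U_i(a_i, b_j, c_i) = \pi_i(a_i, b_j) + \tilde{f}_j(b_j, c_i)\,\bigl[1 + f_i(a_i, b_j)\bigr],
\qquad
f_i(a_i, b_j) = \frac{\pi_j(b_j, a_i) - \pi_j^{e}(b_j)}{\pi_j^{h}(b_j) - \pi_j^{\min}(b_j)},
```

where a_i is i's strategy, b_j is i's belief about j's strategy, c_i is i's belief about j's belief about i's strategy, \pi^{e} is the equitable payoff, and \pi^{h}, \pi^{\min} are the highest and lowest payoffs the other player can be held to. The reciprocity term rewards matching kindness with kindness and unkindness with unkindness.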
213

Three Essays on Experimental Economics

Pintér, Ágnes 18 September 2006
There was a time when the conventional wisdom was that, because economics is a science concerned with complex, naturally occurring systems, laboratory experiments had little to offer economists. But experimental economics has now become a well-established tool that plays an important role in helping game theory bridge the gap between the study of ideally rational behavior modeled in theory and the study of actual "real-world" behavior of agents. Although it has older antecedents, experimental economics is a fairly new line of work, having originated more or less contemporaneously with game theory. As economists focused on microeconomic models that depend on the preferences of agents, the fact that these preferences are difficult to observe in natural environments made it increasingly attractive to look to the laboratory to see, in a controlled environment, whether the assumptions made about individuals were descriptive of their behavior. Game theory, moreover, is the part of economic theory that focuses not only on the strategic behavior of individuals in economic environments, but also on other issues critical to the design of economic institutions, such as how information is distributed, the influence of agents' expectations and beliefs, and the tension between equilibrium and efficiency. Game theory has already achieved important insights into issues such as the design of contracts and allocation mechanisms that take into account the sometimes counterintuitive ways in which individual incentives operate in environments with decision makers who have different information and objectives. This thesis is divided into three chapters that present self-contained studies of economic situations where experiments may help game theory to explain field observations. In deriving the results, rigorous statistical and econometric methods are used alongside the game theory literature.
214

Integration in Computer Experiments and Bayesian Analysis

Karuri, Stella January 2005
Mathematical models are commonly used in science and industry to simulate complex physical processes. These models are implemented by computer codes which are often complex. For this reason, the codes are also expensive in terms of computation time, and this limits the number of simulations in an experiment. The codes are also deterministic, which means that output from a code has no measurement error.

One modelling approach in dealing with deterministic output from computer experiments is to assume that the output is composed of a drift component and systematic errors, which are stationary Gaussian stochastic processes. A Bayesian approach is desirable as it takes into account all sources of model uncertainty. Apart from prior specification, one of the main challenges in a complete Bayesian model is integration. We take a Bayesian approach with a Jeffreys prior on the model parameters. To integrate over the posterior, we use two approximation techniques on the log-scaled posterior of the correlation parameters. First, we approximate the Jeffreys prior on the untransformed parameters; this enables us to specify a uniform prior on the transformed parameters, which makes Markov Chain Monte Carlo (MCMC) simulations run faster. For the second approach, we approximate the posterior with a Normal density.

A large part of the thesis is focused on the problem of integration. Integration is often a goal in computer experiments and, as previously mentioned, it is necessary for inference in Bayesian analysis. Sampling strategies are more challenging in computer experiments, particularly when dealing with computationally expensive functions. We focus on the problem of integration by using a sampling approach which we refer to as "GaSP integration". This approach assumes that the integrand over some domain is a Gaussian random variable. It follows that the integral itself is a Gaussian random variable and the Best Linear Unbiased Predictor (BLUP) can be used as an estimator of the integral. We show that the integration estimates from GaSP integration have lower absolute errors. We also develop the Adaptive Sub-region Sampling Integration Algorithm (ASSIA) to improve GaSP integration estimates. The algorithm recursively partitions the integration domain into sub-regions in which GaSP integration can be applied more effectively. As a result of the adaptive partitioning of the integration domain, the adaptive algorithm varies sampling to suit the variation of the integrand. This "strategic sampling" can be used to explore the structure of functions in computer experiments.
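As a rough illustration of the "GaSP integration" idea described above: if the integrand is modelled as a Gaussian process, the posterior mean of its integral is a linear combination of the observed function values, which is the BLUP-type estimator referred to in the abstract. The sketch below is a simplified version (zero mean, fixed squared-exponential correlation, unit interval) that omits the drift term and the prior treatment of the correlation parameters discussed in the thesis; names and numbers are illustrative.

```python
import numpy as np
from scipy.special import erf

def gasp_integral_estimate(x, y, ell=0.2, jitter=1e-10):
    """BLUP-style estimate of the integral of f over [0, 1] from noise-free evaluations
    y = f(x), treating f as a zero-mean GP with a squared-exponential kernel (simplified)."""
    X = np.asarray(x, float).reshape(-1, 1)
    K = np.exp(-0.5 * ((X - X.T) / ell) ** 2)            # Gram matrix of the kernel
    K += jitter * np.eye(len(x))                         # numerical stabilisation
    # kbar_i = integral over [0, 1] of k(t, x_i) dt for the squared-exponential kernel
    kbar = ell * np.sqrt(np.pi / 2) * (erf((1 - X[:, 0]) / (np.sqrt(2) * ell))
                                       + erf(X[:, 0] / (np.sqrt(2) * ell)))
    return kbar @ np.linalg.solve(K, np.asarray(y, float))

# Toy check against a known integral: integral of sin(pi x) over [0, 1] = 2/pi ~ 0.6366
xs = np.linspace(0, 1, 9)
print(gasp_integral_estimate(xs, np.sin(np.pi * xs)))
```

A posterior variance for the integral is available in the same framework, and that is the kind of quantity an adaptive sub-region scheme can exploit when deciding where to sample next.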
215

Bayesian Experimental Design Framework Applied to Complex Polymerization Processes

Nabifar, Afsaneh 26 June 2012
The Bayesian design approach is an experimental design technique which has the same objectives as standard experimental (full or fractional factorial) designs but with significant practical benefits over standard design methods. The most important advantage of the Bayesian design approach is that it incorporates prior knowledge about the process into the design to suggest a set of future experiments in an optimal, sequential and iterative fashion. Since for many complex polymerizations prior information is available, either in the form of experimental data or mathematical models, use of a Bayesian design methodology could be highly beneficial. Hence, exploiting this technique could hopefully lead to optimal performance in fewer trials, thus saving time and money. In this thesis, the basic steps and capabilities/benefits of the Bayesian design approach will be illustrated. To demonstrate the significant benefits of the Bayesian design approach and its superiority to the currently practised (standard) design of experiments, case studies drawn from representative complex polymerization processes, covering both batch and continuous processes, are presented. These include examples from nitroxide-mediated radical polymerization of styrene (bulk homopolymerization in the batch mode), continuous production of nitrile rubber in a train of CSTRs (emulsion copolymerization in the continuous mode), and cross-linking nitroxide-mediated radical copolymerization of styrene and divinyl benzene (bulk copolymerization in the batch mode, with cross-linking). All these case studies address important, yet practical, issues in not only the study of polymerization kinetics but also, in general, in process engineering and improvement. Since the Bayesian design technique is perfectly general, it can be potentially applied to other polymerization variants or any other chemical engineering process in general. Some of the advantages of the Bayesian methodology highlighted through its application to complex polymerization scenarios are: improvements with respect to information content retrieved from process data, relative ease in changing factor levels mid-way through the experimentation, flexibility with factor ranges, overall “cost”-effectiveness (time and effort/resources) with respect to the number of experiments, and flexibility with respect to source and quality of prior knowledge (screening experiments versus models and/or combinations). The most important novelty of the Bayesian approach is the simplicity and the natural way with which it follows the logic of the sequential model building paradigm, taking full advantage of the researcher’s expertise and information (knowledge about the process or product) prior to the design, and invoking enhanced information content measures (the Fisher Information matrix is maximized, which corresponds to minimizing the variances and reducing the 95% joint confidence regions, hence improving the precision of the parameter estimates). In addition, the Bayesian analysis is amenable to a series of statistical diagnostic tests that one can carry out in parallel. These diagnostic tests serve to quantify the relative importance of the parameters (intimately related to the significance of the estimated factor effects) and their interactions, as well as the quality of prior knowledge (in other words, the adequacy of the model or the expert’s opinions used to generate the prior information, as the case might be). 
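A highly simplified sketch of the sequential, information-based selection step described above: for a model that is linear in its parameters, prior knowledge enters as an information (precision) matrix, and the next run is the candidate that maximizes the determinant of the accumulated Fisher information, which shrinks the joint confidence region of the parameter estimates. The linear-model form, function names and numbers are illustrative assumptions, not the framework's actual implementation.

```python
import numpy as np

def next_run_d_optimal(X_done, candidates, prior_precision):
    """Choose the candidate run maximizing det(Fisher information) for a linear model
    y = X beta + noise, with prior knowledge entering as a precision (information) matrix."""
    info_so_far = prior_precision + X_done.T @ X_done        # prior + data information
    best, best_det = None, -np.inf
    for x in candidates:
        x = x.reshape(-1, 1)
        d = np.linalg.det(info_so_far + x @ x.T)             # information after adding this run
        if d > best_det:
            best, best_det = x.ravel(), d
    return best

# Two-factor example with an intercept: columns are (1, factor A, factor B).
prior = 0.1 * np.eye(3)                                      # weak prior information
done = np.array([[1, -1, -1], [1,  1, -1]], float)           # runs already performed
cands = np.array([[1, -1, 1], [1, 1, 1], [1, 0, 0]], float)  # possible next settings
print(next_run_d_optimal(done, cands, prior))
```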
In all the case studies described in this thesis, the general benefits of the Bayesian design were as described above. More specifically, with respect to the most complex of the examples, namely the cross-linking nitroxide-mediated radical polymerization (NMRP) of styrene and divinyl benzene, the investigations after designing experiments through the Bayesian approach led to even more interesting detailed kinetic and polymer characterization studies, which cover the second part of this thesis. This detailed synthesis, characterization and modeling effort, triggered by the Bayesian approach, set out to investigate whether the cross-linked polymer network synthesized under controlled radical polymerization (CRP) conditions had a more homogeneous structure than the network produced by regular free radical polymerization (FRP). In preparation for the identification of network homogeneity indicators based on polymer properties, the cross-linking kinetics of nitroxide-mediated radical polymerization of styrene (STY) in the presence of a small amount of divinyl benzene (DVB, as the cross-linker) and N-tert-butyl-N-(2-methyl-1-phenylpropyl)-O-(1-phenylethyl)hydroxylamine (I-TIPNO, as the unimolecular initiator) was investigated in detail, and the results were contrasted with regular FRP of STY/DVB and homopolymerization of STY in the presence of I-TIPNO as reference systems. The effects of [DVB], [I-TIPNO] and [DVB]/[I-TIPNO] on rate, molecular weights, gel content and swelling index were investigated. In parallel to the experimental investigations, a detailed mathematical model was developed and validated with the respective experimental data. Model predictions not only followed the general experimental trends very well but were also in good agreement with experimental observations. Pursuing a more reliable indicator of network homogeneity, the corresponding branched and cross-linked polymers were characterized. Thermo-mechanical analysis was used to investigate the difference between polymer networks synthesized through FRP and NMRP. Results from both Differential Scanning Calorimetry (DSC) and Dynamic Mechanical Analysis (DMA) showed that, at the same cross-link density and conversion level, polymer networks produced by FRP and NMRP indeed exhibit comparable structures. Overall, it is notable that a wealth of process information was generated by such a practical experimental design technique, with minimal experimental effort compared to previous (undesigned) efforts, and the associated, often not well founded, claims, in the literature.
216

Optimization of the polishing procedure using a robot assisted polishing equipment

Gagnolet, Marielle January 2009
Today, manual polishing is the most common method to improve the surface finish of moulds and dies used in, for example, plastic injection moulding, although it is a cumbersome and time-consuming process. Therefore, automated robots are being developed in order to speed up and secure the final result of this important finishing process. The purpose of this thesis is to gain insight into the influence of different parameters on the polishing of a steel grade called Mirrax ESR (Uddeholm Tooling AB) using a Design of Experiments. The report starts with a brief description of mechanical polishing (the techniques and polishing mechanisms) and ends with the optimization of the polishing procedure on a polishing machine, the Strecon RAP-200 made by Strecon A/S. Even though not all the runs of the Design of Experiments could be carried out, the surfaces studied revealed information about the importance of the preceding process (turning marks not removed) and about the link between the appearance of the surfaces and the roughness parameters.
217

Thermal Optimization of Veo+ Projectors (thesis work at Optea AB) : Trying to reduce noise of the Veo+ projector by DOE (Design of Experiments) tests to find an optimal solution for the fan algorithm while considering the thermal specifics of the unit

Hizli, Cem January 2010
The Veo+ projector uses a cooling system consisting of a fan and blowers. This system cools the electronic components of the device and the lamp of the projector, but it generates considerable noise. To lower this noise, the rotational speeds (rpm) of the fan and blowers must be decreased. However, lowering the speeds results in higher temperatures throughout the system (inside the device). While lowering the speeds, the resulting higher temperatures must be kept within the thermal design specifications of the electronic components. The purpose of this thesis work is to find an optimal solution with lower rpm speeds of the fan and blowers while keeping the temperatures of the various components of the device (touch temperature of the enclosure and electronic components) within the temperature design limits. Before testing the device to find the optimum state, its design limits are determined. Then, using design of experiments methods such as Taguchi, the optimum state within the design specifications is obtained. Finally, additional tests are run around the optimum state to demonstrate a fan algorithm as the final solution. Throughout the experiments, thermocouples are used to measure the component temperatures.
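As a rough illustration of the Taguchi-style screening mentioned above (the factor levels, noise readings and thermal limit below are invented placeholders, not the thesis's measurements): run a small two-level design over fan and blower speeds, discard settings that violate the thermal limit, and rank the remainder by a "smaller-is-better" signal-to-noise ratio computed from the acoustic noise readings.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi 'smaller-is-better' signal-to-noise ratio (dB) for replicated responses y."""
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical two-level design over two factors (fan rpm, blower rpm), with two replicated
# acoustic-noise readings (dB) and a peak component temperature (deg C) per run.
runs = [
    {"fan": 2000, "blower": 2500, "noise": [38.1, 38.4], "t_max": 68.0},
    {"fan": 2000, "blower": 3200, "noise": [40.3, 40.0], "t_max": 63.5},
    {"fan": 2600, "blower": 2500, "noise": [41.2, 41.5], "t_max": 61.0},
    {"fan": 2600, "blower": 3200, "noise": [43.0, 42.7], "t_max": 58.5},
]
T_LIMIT = 70.0  # assumed thermal design limit

feasible = [r for r in runs if r["t_max"] <= T_LIMIT]                  # respect the design limits
best = max(feasible, key=lambda r: sn_smaller_is_better(r["noise"]))   # quietest feasible setting
print("Best feasible setting:", best["fan"], "rpm fan,", best["blower"], "rpm blower")
```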
220

Modelling Recreation Demand Using Choice Experiments : Using Swedish Snowmobilers' Demand for Groomed Trails

John, Paul January 2010
This paper is concerned with the use of the choice experiment method for modeling the demand for snowmobiling. The choice experiment includes five attributes: standard, composition, length, price of a day card, and experience along the trail. The paper estimates snowmobile owners' preferences and the most preferred attributes, including their willingness to pay (WTP) for a day trip on a groomed snowmobile trail. The data consist of the answers from 479 registered snowmobile owners, who each answered two hypothetical choice questions. Estimating with a multinomial logit model, it is found that snowmobilers on average are willing to pay 22.5 SEK for one day of snowmobiling on a trail with quality described as skidded every 14th day. Furthermore, it is found that the WTP increases with the quality of trail grooming. The result of this paper can be used as a yardstick for snowmobile clubs wanting to develop their trail network, organizations and companies developing snowmobiling as a recreational activity, and marketers interested in marketing snowmobiling as a recreational activity.
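A minimal sketch of the kind of estimation reported above: a conditional (multinomial) logit fitted to stated choices, with willingness to pay for an attribute computed as the negative ratio of its coefficient to the price coefficient. The simulated data, attribute set and coefficient values are invented for illustration; the paper's actual specification and data differ.

```python
import numpy as np
from scipy.optimize import minimize

# Each choice task offers two alternatives described by (grooming quality, trail length, price).
# X has shape (n_tasks, 2, 3); chosen[i] is the alternative picked (all data are hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2, 3))
true_beta = np.array([1.0, 0.3, -0.05])                  # price coefficient is negative
u = X @ true_beta + rng.gumbel(size=(500, 2))            # random-utility model
chosen = u.argmax(axis=1)

def neg_loglik(beta):
    v = X @ beta                                         # deterministic utilities
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(chosen)), chosen].sum()

beta_hat = minimize(neg_loglik, np.zeros(3)).x
wtp_grooming = -beta_hat[0] / beta_hat[2]                # WTP = -(attribute coeff) / (price coeff)
print(beta_hat, wtp_grooming)
```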
