51

Certified Compilation and Worst-Case Execution Time Estimation

Oliveira Maroneze, André 17 June 2014 (has links) (PDF)
Safety-critical systems - such as electronic flight control systems and nuclear reactor controls - must satisfy strict safety requirements. We are interested here in the application of formal methods - built upon solid mathematical bases - to verify the behavior of safety-critical systems. More specifically, we formally specify our algorithms and then prove them correct using the Coq proof assistant - a program capable of mechanically checking the correctness of our proofs, providing a very high degree of confidence. In this thesis, we apply formal methods to obtain safe Worst-Case Execution Time (WCET) estimations for C programs. The WCET is an important property related to the safety of critical systems, but its estimation requires sophisticated techniques. To guarantee the absence of errors during WCET estimation, we have formally verified a WCET estimation technique based on the combination of two main methods: a loop bound estimation and WCET estimation via the Implicit Path Enumeration Technique (IPET). The loop bound estimation is itself decomposed into three steps: program slicing, a value analysis based on abstract interpretation, and a loop bound calculation stage. Each stage has a chapter dedicated to its formal verification. The entire development has been integrated into the formally verified C compiler CompCert. We prove that the final estimation is correct and we evaluate its performance on a set of reference benchmarks. The contributions of this thesis include (a) the formalization of the techniques used to estimate the WCET, (b) the estimation tool itself (obtained from the formalization), and (c) the experimental evaluation. We conclude that our formally verified development obtains interesting results in terms of precision, but it requires special precautions to ensure the proof effort remains manageable. The parallel development of specifications and proofs is essential to this end. Future work includes the formalization of hardware cost models, as well as the development of more sophisticated analyses to improve the precision of the estimated WCET.
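For readers unfamiliar with IPET, the following is a generic textbook formulation (the notation is ours, not necessarily the thesis's): the WCET bound is the optimum of an integer linear program over basic-block execution counts, where each block i has a cost bound c_i.

```latex
\[
\mathrm{WCET} \;\le\; \max \sum_{i \in B} c_i \, x_i
\quad\text{subject to}\quad
\begin{cases}
x_i = \sum_{e \in \mathrm{in}(i)} y_e = \sum_{e \in \mathrm{out}(i)} y_e & \text{(flow conservation)}\\[2pt]
x_{\mathrm{entry}} = 1 & \text{(single program entry)}\\[2pt]
x_h \le n_h \cdot x_{\mathrm{pre}(h)} & \text{(bound } n_h \text{ for each loop header } h\text{)}
\end{cases}
\]
```

Here the y_e are edge execution counts and the n_h are exactly the loop bounds produced by the slicing/value-analysis/calculation pipeline the abstract describes, which is why the loop bound estimation must be verified before IPET can be.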
52

Conflict management in consumer behaviour : examining the effect of preferred conflict management style on propensity to bargain

Daly, Timothy Michael January 2009 (has links)
This thesis focuses on two under-researched areas of consumer behaviour: conflict handling styles and consumer bargaining. As illustrated in this thesis, consumer bargaining is a substantial and important behaviour that has rarely been studied from a consumer perspective. Further, conflict handling, which is considered an important and well-researched phenomenon in an organisational context, has rarely been applied to consumer behaviour, despite the potential for conflict in many areas. The aims of this thesis were to a) examine consumer bargaining behaviour across a variety of culturally diverse nations; b) develop and validate a new instrument to measure conflict handling styles; and c) examine the relationships between the likelihood of consumer bargaining, preferred conflict handling styles, and personal values. Consumer bargaining was found to be common in both developed and developing nations. Respondents from Australia and Germany reported bargaining for a broad range of products that vary in their prices, including cars, electronics, appliances, clothing, and computers. Bargaining in South Korea was even more common, extending to everyday purchases such as clothing, food, and drink. Finally, bargaining in Brazil was almost as common as in South Korea, and also included expensive consumer durable purchases, such as electronic products and cars, in addition to everyday purchases, such as clothing, food, and drink. The conflict handling style instrument developed in this project had convergent validity with existing ratings scales, reproduced the theorised structure of the dual-concerns model of conflict handling, and had predictive validity in a service recovery context. The benefits of the new scale over existing ratings scales include: a) capturing relative preference for the conflict handling styles; b) reduction of sources of common method variance; c) reduction of ratings scale response biases; and d) reduction of numerical effect biases, such as different perceived distances between response categories. The newly developed scale was also used to assess the hypothesised relationships between personal values, conflict handling styles, and consumer bargaining intensity in a developed Western country (Germany). As expected, the dominate conflict handling style was positively related, and the avoid conflict handling style negatively related, to consumer bargaining intensity. Although no relationship was found between personal values and consumer bargaining intensity, personal values were found to be an antecedent of conflict handling styles. Specifically, the power value type was found to be a positive predictor of the dominate conflict handling style, while benevolence and social universalism were found to be positive predictors of the integrate conflict handling style.
53

Optimization Techniques for Performance and Power Dissipation in Test and Validation

Jayaraman, Dheepakkumaran 01 May 2012 (has links)
The high cost of chip testing makes testability an important aspect of any chip design. Two important testability considerations are addressed, namely power consumption and test quality. Power consumption during shift is reduced by efficiently adding control logic to the design. Test quality is studied by determining the sensitization characteristics of a path to be tested; path delay fault models are used for the purpose of studying this problem. Another important aspect of chip design is performance validation, which is increasingly perceived as the major bottleneck in integrated circuit design. Given synthesizable HDL code, the proposed technique efficiently identifies infeasible paths and subsequently determines the worst-case execution time (WCET) of the HDL code.
54

Worst-case delay analysis of real-time switched Ethernet networks with flow local synchronization

Li, Xiaoting 19 September 2013 (has links)
Full-duplex switched Ethernet is a promising candidate for interconnecting real-time industrial applications. But due to the indeterminism of IEEE 802.1d switches, the worst-case delay analysis of critical flows supported by such a network is still an open problem. Several methods have been proposed for upper-bounding communication delays on a real-time switched Ethernet network, assuming that the incoming traffic can be upper-bounded. The main remaining problem is to assess the tightness, i.e. the pessimism, of the method calculating this upper bound on the communication delay. These methods consider that all flows transmitted over the network are independent. This is true for flows emitted by different source nodes since, in general, there is no global clock synchronizing them. But flows emitted by the same source node are locally synchronized. Such an assumption helps to build a more precise flow model that eliminates some impossible communication scenarios which lead to pessimistic delay upper bounds. The core of this thesis is to study how local periodic flows synchronized with offsets can be handled when computing delay upper bounds on a real-time switched Ethernet network. In a first step, the impact of these offsets on the delay upper-bound computation is illustrated. Then, the integration of offsets into the Network Calculus and Trajectory approaches is introduced. A modified Network Calculus approach and a modified Trajectory approach are developed, and their performances are compared on an Avionics Full-Duplex switched Ethernet (AFDX) industrial configuration with one thousand flows. It is shown that, in the context of this AFDX configuration, the Trajectory approach leads to slightly tighter end-to-end delay upper bounds than those of the Network Calculus approach. But offsets of local flows have to be chosen, so different offset assignment algorithms are investigated on the AFDX industrial configuration, and a near-optimal assignment can be exhibited. Next, a pessimism analysis of the computed upper bounds is proposed. This analysis is based on the Trajectory approach (made optimistic), which computes an under-estimation of the worst-case delay. The difference between the upper bound (computed by a given method) and the under-estimation of the worst-case delay gives an upper bound on the pessimism of the method. This analysis gives interesting comparison results on the pessimism of the Network Calculus and Trajectory approaches. The last part of the thesis deals with a real-time heterogeneous network architecture where CAN buses are interconnected through a switched Ethernet backbone using dedicated bridges. Two approaches, a component-based approach and a Trajectory approach, are developed to conduct a worst-case delay analysis for such a network. The ability to compute end-to-end delay upper bounds in the context of a heterogeneous network architecture is promising for industrial domains.
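As background for the two competing analyses, the classical Network Calculus delay bound (a standard textbook result, not the thesis's modified offset-aware variant) can be stated for a flow constrained by an affine arrival curve crossing a rate-latency server; the delay bound is the maximal horizontal deviation between the two curves.

```latex
\[
\alpha(t) = \sigma + \rho\,t, \qquad \beta(t) = R\,(t - T)^{+}, \qquad
\rho \le R \;\Longrightarrow\; D \;\le\; h(\alpha, \beta) \;=\; T + \frac{\sigma}{R}
\]
```

Intuitively, the offset-aware variants studied in the thesis tighten the burst term σ by ruling out the impossible scenario in which all flows of one source node release frames simultaneously.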
55

Exact worst-case communication delay analysis of the AFDX network

Adnan, Muhammad 21 November 2013 (has links)
The main objective of this thesis is to provide methodologies for finding the exact worst-case end-to-end communication delays of an AFDX network. Presently, only pessimistic upper bounds on these delays can be calculated, using the Network Calculus or Trajectory approaches. To achieve this goal, different existing tools and approaches were analyzed in the context of this thesis; based on this analysis, it was deemed necessary to develop new approaches and algorithms. First, model checking with well-established real-time model-checking tools was explored, using timed automata. Then, an exhaustive simulation technique was used, with newly developed sequence-reduction methods and their software implementation, to find the exact worst-case communication delays of the AFDX network. This research work was applied to a real-life implementation of the AFDX network, allowing us to validate it on an industrial-scale configuration such as the one used on board Airbus A380 aircraft.
56

A comparison of different improvement strategies aimed at reducing lead time

Utiyama, Marcel Heimar Ribeiro 18 February 2016 (has links)
A focus on time is the cornerstone of modern manufacturing management approaches, among them Time-Based Competition (TBC) and Quick Response Manufacturing (QRM). Lead time reduction brings significant gains, which are obtained by means of improvements in shop-floor variables. To make improvements in these variables, managers need to choose the best way to invest their limited financial resources. This work seeks the best strategy for improving time to repair, time between failures, and setup time: improving the mean, reducing the variability, or eliminating the worst cases. The main focus of this work is the worst-case strategy, that is, identifying the situations in which this strategy produces an effect on lead time that is superior or comparable to the mean and variability strategies. To perform this comparison, a modeling/simulation study was conducted, with the three variables modeled using normal and lognormal probability distributions. Results show that for situations with moderate and high variability, the worst-case strategy is the best improvement option for all three variables investigated. We believe this strategy can bring significant benefits, and it is probably less costly and easier to implement. For low-variability situations, the mean strategy is the best option. In addition, for two of the variables (time to repair and setup time), the worst-case strategy is a good alternative in situations where it is not possible to reduce the mean of these variables; for the time between failures, the only good option remains the mean strategy.
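To make the comparison concrete, here is a toy sketch (ours, not the thesis's simulation model) contrasting the three strategies at a single station using the well-known Kingman (VUT) queueing approximation. The lognormal parameters, Poisson-arrival assumption, and improvement magnitudes are all hypothetical.

```python
import numpy as np

def kingman_wait(ta, te, ca2, ce2):
    """Kingman (VUT) approximation of mean queue wait at one station.

    ta: mean inter-arrival time; te: mean effective process time;
    ca2, ce2: squared coefficients of variation of arrivals and process times.
    """
    u = te / ta  # utilization, must stay below 1
    return ((ca2 + ce2) / 2) * (u / (1 - u)) * te

def lead_time(samples, ta=3.0):
    """Station lead time (wait + process) for a sample of process times."""
    te = samples.mean()
    ce2 = samples.var() / te**2
    return kingman_wait(ta, te, ca2=1.0, ce2=ce2) + te  # ca2=1: Poisson arrivals

rng = np.random.default_rng(0)
base = rng.lognormal(mean=0.0, sigma=0.8, size=100_000)  # hypothetical setup times

mean_cut = lead_time(base * 0.9)                                 # 10% mean reduction
var_cut = lead_time(base.mean() + 0.7 * (base - base.mean()))    # shrink spread only
worst_cut = lead_time(np.minimum(base, np.quantile(base, 0.95))) # cap worst 5% of cases
print(f"mean: {mean_cut:.2f}  variability: {var_cut:.2f}  worst-case: {worst_cut:.2f}")
```

Capping the tail reduces both the mean and the variance term of the VUT equation at once, which is one way to read the thesis's finding that the worst-case strategy dominates under moderate-to-high variability.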
57

Worst-case study for cleaning validation of equipment in the radiopharmaceutical production of lyophilized reagents. Method validation of total organic carbon

Luciana Valeria Ferrari Machado Porto 18 December 2015 (has links)
Radiopharmaceuticals are defined as pharmaceutical preparations containing a radionuclide in their composition; most are administered intravenously, and therefore compliance with the principles of Good Manufacturing Practices (GMP) is essential and indispensable. 
Cleaning validation is a GMP requirement and consists of documented evidence demonstrating that cleaning procedures remove residues to pre-determined acceptance levels, ensuring that no cross-contamination occurs. A simplification of cleaning-process validation is accepted and consists in choosing one product, called the "worst case", to represent the cleaning processes of all equipment in the same production area. One of the steps of cleaning validation is the establishment and validation of the analytical method used to quantify the residue. The aim of this study was to establish the worst case for cleaning validation of equipment in the radiopharmaceutical production of lyophilized reagents (LR) for labeling with 99mTc, to evaluate the use of Total Organic Carbon (TOC) content as an indicator of the cleanliness of the equipment used in LR manufacture, to validate the method for Non-Purgeable Organic Carbon (NPOC) determination, and to perform recovery tests with the product chosen as the worst case. The worst-case product choice was based on the calculation of an index called the "Worst Case Index" (WCI), using information on drug solubility, difficulty of cleaning the equipment, and occupancy rate of the products in the production line. The product indicated as the worst case was the LR MIBI-TEC. The method validation assays were performed using a carbon analyzer model TOC-Vwp coupled to an autosampler model ASI-V, both from Shimadzu®, controlled by TOC Control-V software. The direct method was used for NPOC quantification. The parameters evaluated in the method validation were system suitability, robustness, linearity, detection limit (DL) and quantification limit (QL), precision (repeatability and intermediate precision), and accuracy (recovery). The conditions were defined as follows: 4% acidifying reagent, 2.5 mL oxidizing reagent, 4.5-minute integration curve time, 3-minute sparge time, and linearity in the 40-1000 μg L-1 range, with correlation coefficient (r) and residual sum of least squares (r²) greater than 0.99. DL and QL for NPOC were 14.25 ppb and 47.52 ppb, respectively; repeatability was between 0.11 and 4.47%; intermediate precision between 0.59 and 3.80%; and accuracy between 97.05 and 102.90%. The analytical curve for Mibi was linear in the 100-800 μg L-1 range, with r and r² greater than 0.99, presenting parameters similar to those of the NPOC analytical curves. The results obtained in this study demonstrate that the worst-case approach to cleaning validation is a simple and effective way to reduce the complexity and slowness of the validation process, and it provides a reduction in the costs involved in these activities. All results obtained in the NPOC method validation assays met the requirements and specifications of ANVISA Resolution RE 899/2003, allowing the method to be considered validated.
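The abstract names the WCI's inputs but not its formula; the sketch below is a purely hypothetical weighted-score reading of it, where the weights, the [0, 1] normalization, and the second product are invented for illustration only.

```python
def worst_case_index(insolubility, cleaning_difficulty, occupancy,
                     weights=(0.4, 0.4, 0.2)):
    """Hypothetical Worst Case Index as a weighted sum of risk factors.

    Each input is a score normalized to [0, 1], where higher means riskier
    (e.g., lower drug solubility, harder-to-clean equipment, more line time).
    The actual WCI formula is not given in the abstract.
    """
    wi, wc, wo = weights
    return wi * insolubility + wc * cleaning_difficulty + wo * occupancy

# hypothetical scores for two lyophilized reagents
products = {
    "MIBI-TEC": (0.9, 0.8, 0.3),
    "DTPA-TEC": (0.3, 0.4, 0.5),
}
worst = max(products, key=lambda p: worst_case_index(*products[p]))
print(worst)  # the product chosen to represent cleaning validation for the line
```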
58

Comparing best-worst and ordered logit approaches for user satisfaction in transit services

Echaniz, Eneko, Ho, Chinh Q., Rodriguez, Andres, dell'Olio, Luigi 21 December 2020 (has links)
Customers' overall satisfaction with a public transport system depends mainly on two factors: how satisfied they are with the different aspects that make up the service, and how important each service aspect is to the customer. Traditionally, researchers use revealed preference surveys and ordered probit/logit models to estimate the contribution of each service attribute to overall satisfaction. This paper aims to verify the possibility of replacing the traditional method with the more cost-effective best-worst case 1 method, using a customer survey recently conducted in Santander, Spain. The results show that the satisfaction levels obtained from these alternative methods are remarkably similar. The relative importance of each attribute delivered by the two methods differs, with the Best-Worst approach showing results that are more intuitive and more consistent with the literature on public transport customer satisfaction. A regression method is developed to derive customer satisfaction with each service attribute from the Best-Worst modelling results.
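For context, best-worst case 1 data are often summarized by a simple count-based score before any logit modelling. The sketch below is our illustration of that descriptive step, with made-up transit attribute names; it is not the paper's regression method.

```python
from collections import Counter

def bws_scores(picks, n_respondents, n_appearances):
    """Standardized best-worst (case 1) scores: (best - worst counts) / exposures.

    picks: iterable of (best_attribute, worst_attribute), one pair per choice task.
    n_appearances: times each attribute appears across one respondent's tasks.
    """
    best = Counter(b for b, _ in picks)
    worst = Counter(w for _, w in picks)
    denom = n_respondents * n_appearances
    items = set(best) | set(worst)
    return {i: (best[i] - worst[i]) / denom for i in sorted(items)}

# hypothetical picks pooled from three respondents, two appearances per attribute
picks = [("punctuality", "cleanliness"), ("frequency", "fare"),
         ("punctuality", "fare"), ("comfort", "cleanliness")]
print(bws_scores(picks, n_respondents=3, n_appearances=2))
```

A positive score marks an attribute chosen as best more often than as worst; the paper's contribution is then to map such relative importances back onto per-attribute satisfaction via regression.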
59

EVALUATING DATA QUALITY IN DISCRETE CHOICE EXPERIMENTS

Courtney L Bir (8068292) 03 December 2019 (has links)
Although data collection through discrete choice experiments conducted using surveys is commonly used in research, aiming to improve data quality is still serviceable and necessary. Three distinct experiments were conducted with the objectives of improving data quality by better tailoring experiments to market conditions as well as decreasing complexity and fatigue. First, consumer willingness-to-pay (WTP) for yogurt attributes was estimated using a survey targeted to be nationally representative of the US. A novel approach was used to allow for self-selection into the choice experiment for commonly purchased types of yogurt. On average, respondents were willing to pay a positive amount for requiring pasture access and for not permitting dehorning/disbudding, for both traditional and Greek yogurt. Respondents had positive WTP for Greek yogurt labeled free of high fructose corn syrup, and were willing to pay more for low-fat yogurt when compared to nonfat for both yogurt types.

Second, a new WTP data collection method, employing component discrete choice experiments in place of traditional larger experimental designs, was proposed and compared to the traditional method of eliciting yogurt consumers' WTP for yogurt attributes. The new WTP data collection method was designed with the objective of decreasing complexity by having respondents participate in fewer choice scenarios. Incidences of attribute non-attendance (ANA), a potential simplifying heuristic that results from complexity, occurred less frequently for all attributes but one in the new WTP data collection method. Exhibiting ANA for any attribute was negatively correlated with the time respondents took to complete the choice experiment.

Finally, through the use of a new best-worst scaling (BWS) data collection method, consumer preferences for fluid dairy milk attributes were elicited, and the results as well as measures of data quality were compared to the traditional BWS method. Nine attributes of fluid milk were included in this study: container material, rbST-free, price, container size, fat content, humane handling of cattle, brand, required pasture access for cattle, and cattle fed an organic diet. The top (price) and bottom (container material) attributes in terms of relative ranking did not change between the new BWS data collection method and the traditional BWS method. The new BWS data collection method resulted in fewer incidences of ANA for all attributes except one. There was no statistical difference in the number of transitivity (an axiom of consumer theory) violators between the new and traditional BWS methods.
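As an illustration of the transitivity check mentioned above: a BWS task implies a partial preference order (the best item beats everything shown, and everything shown beats the worst), and a violator is a respondent whose implied relation contains a cycle. The sketch below is our own reading of that test, not the dissertation's code, and it checks only 2- and 3-cycles for brevity.

```python
def implied_pairs(tasks):
    """Pairwise preferences implied by one respondent's BWS case 1 picks.

    tasks: iterable of (shown_items, best, worst). The best item beats every
    other item shown; every item shown beats the worst.
    """
    pairs = set()
    for shown, best, worst in tasks:
        for item in shown:
            if item != best:
                pairs.add((best, item))
            if item != worst:
                pairs.add((item, worst))
    return pairs

def violates_transitivity(tasks):
    """Flag a respondent whose implied preferences contain a 2- or 3-cycle.

    A full check would run cycle detection on the preference graph; short
    cycles suffice to illustrate the idea.
    """
    pairs = implied_pairs(tasks)
    if any((b, a) in pairs for (a, b) in pairs):  # A > B and B > A
        return True
    items = {x for p in pairs for x in p}
    return any((a, b) in pairs and (b, c) in pairs and (c, a) in pairs
               for a in items for b in items for c in items)

# toy respondent over hypothetical milk attributes
tasks = [({"price", "brand", "fat"}, "price", "brand"),
         ({"brand", "size", "fat"}, "brand", "fat"),
         ({"price", "size", "fat"}, "fat", "price")]  # fat > price contradicts task 1
print(violates_transitivity(tasks))  # True
```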
60

Predicting Residential Heating Energy Consumption and Savings Using Neural Network Approach

Al Tarhuni, Badr 30 May 2019 (has links)
No description available.
