601 |
Avaliação do desempenho de quatro métodos de escalonamento em testes sensoriais de aceitação utilizando modelos normais aditivos de análise da variância e mapas internos de preferência / Assessing the performance of four scaling methods in sensory acceptance tests using additive normal analysis-of-variance models and internal preference maps
Montes Villanueva, Nilda Doris, 31 July 2003 (has links)
Advisors: Maria Aparecida A. Pereira da Silva, Ademir Jose Petenate / Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia de Alimentos / Made available in DSpace on 2018-08-03T16:42:14Z (GMT). No. of bitstreams: 1
MontesVillanueva_NildaDoris_D.pdf: 7571939 bytes, checksum: 5b97da35754c8719f94ee3b21e0cf955 (MD5)
Previous issue date: 2003 / Resumo: In sensory tests, data analysis is usually carried out through some ANOVA model. These models assume that the experimental responses are: i) independent, ii) normally distributed, iii) homoscedastic (equal variances) and, iv) drawn from the same scale of measurement (additivity). The main problems in analyzing sensory data through ANOVA models concern the last two assumptions. Homogeneity of variances cannot be guaranteed, since there are at least two potential sources of variability in the data: assessors and treatments. Additivity, in turn, may be violated when an assessor consistently uses wider (or narrower) ranges of the scale to express his or her impression of the product. The personal way in which each assessor uses the scale to evaluate the products is called differential expansiveness between assessors. Both the lack of homogeneity of variances and the non-additivity of the model have serious consequences for obtaining the true significance level of the treatment effect; they may adversely affect comparisons between treatment means and seriously compromise both the interpretation of the experimental results and the validity of the ANOVA model. In consumer tests, traditional scales such as the 9-point hedonic scale frequently present the following problems: i) they generate data that often fail to satisfy the statistical assumptions of normality, additivity and homoscedasticity required by ANOVA models, ii) they give consumers little freedom to express their sensory perceptions, owing to the limited number of categories, iii) they induce numerical and contextual effects in the assessors' judgments and, iv) the numerical values associated with their categories, although equally spaced numerically, do not reflect equal differences in perception. Of the methodologies used in sensory tests with consumers, the 9-point hedonic scale is undoubtedly the most widely used. However, given the problems mentioned above, there is a need to investigate alternative scales that perform better than the traditional hedonic scale, both when the data are analyzed through ANOVA models and when they are analyzed through multivariate methods such as the Internal Preference Map (MDPREF). In general terms, the objective of the present work was to investigate the performance of two alternative scales in consumer studies, namely the self-adjusting scale and the hybrid hedonic scale, comparing them with traditional affective methods such as the ranking scale and the 9-point hedonic scale. To this end, three experiments were carried out, as described below. The first experiment evaluated, under real consumer-test conditions, the performance of the self-adjusting scale relative to the 9-point hedonic scale and the ranking scale, using the following criteria: i) differences in expansiveness between assessors, ii) discriminating power and, iii) compliance of the data collected by each scale with the assumptions of the ANOVA model. Three commercial brands of candy were evaluated by 288 consumers. The results obtained with the 9-point hedonic and self-adjusting scales were analyzed by ANOVA, and those of the ranking scale by Friedman's test. The pFsample, pFassessor and residual mean square values provided by the ANOVA of each scale were used, respectively, to evaluate discriminating power, assessor expansiveness and the residual variability of the data. Tukey's test was also applied to analyze the discriminating power of each scale. Data normality was checked through skewness and kurtosis coefficients, normal probability plots and the Kolmogorov-Smirnov test. Homoscedasticity was evaluated through scatter plots and Levene's test. The results showed that the self-adjusting scale dealt effectively with differential assessor expansiveness and with unequal variances; the residuals, however, showed moderate deviations from normality. The 9-point hedonic scale showed heteroscedasticity problems. The self-adjusting and ranking scales showed the lowest and the highest discriminating power, respectively. Despite the problems detected, the three scales showed the same preference trends for the products evaluated. The second experiment evaluated the performance of the hybrid hedonic scale in consumer studies, comparing it with the 9-point hedonic, self-adjusting and ranking scales through the following criteria: i) variability of the sensory responses, ii) discriminating power, iii) compliance of the data with the assumptions of ANOVA models and, iv) ease of use by consumers. Five brands of orange juice were evaluated by 80 consumers, divided into four groups of 20 individuals each. All individuals evaluated all samples with all scales in 4 different tasting sessions. A 4x4 Latin square design was used to control the effect of the order of presentation of the scales and to evaluate their ease of use without bias.
For each scale, the order of presentation of the samples and carry-over effects were balanced. The results obtained with the traditional hedonic, hybrid and self-adjusting scales were evaluated through ANOVA. Data normality was checked with the Shapiro-Wilk test, homoscedasticity with the Brown-Forsythe test and additivity with Tukey's one-degree-of-freedom test. The pFsample, pFassessor and residual mean square values provided by the ANOVA of each scale were used, respectively, to evaluate discriminating power, assessor expansiveness and the residual variability of the data. The REGWQ test was also applied to analyze the discriminating power of each scale. The results obtained with the ranking scale were evaluated by Friedman's test, and the ease of use of the scales by Cochran-Mantel-Haenszel tests. The results suggested the superiority of the hybrid hedonic scale over the structured hedonic and self-adjusting scales, both in discriminating power and in the compliance of the data with the assumptions of normality and homoscedasticity. Although the self-adjusting scale produced data with greater variability and serious deviations from normality of the residuals, its discriminating power was slightly higher than that of the structured hedonic scale. The ranking scale showed the lowest discriminating power of all. The structured and hybrid hedonic scales were considered significantly (p≤0.01) easier to use than the self-adjusting scale, with no difference (p≤0.05) between the first two. Finally, the objective of the third experiment was to evaluate the performance of the structured hedonic, hybrid hedonic and self-adjusting scales in the construction of Internal Preference Maps (MDPREF).
In this study, the overall acceptance of 8 commercial brands of red wine, most of them Cabernet Sauvignon varietals, was evaluated by 112 consumers. Balanced experimental designs were used for the order of presentation of the scales, the order of presentation of the samples and carry-over effects. The data were analyzed through ANOVA and MDPREF. The criterion for evaluating the performance of each scale was based on the number of consumers significantly fitted (p≤0.05) and on the degree of segmentation of products and consumers produced by the MDPREF. The results suggested the superiority of the hybrid scale over the traditional hedonic and self-adjusting scales. The MDPREF generated from the hybrid-scale data produced a larger number of significant preference dimensions (p≤0.05), with 79.5% of the consumers significantly fitted (p≤0.05), while the self-adjusting scale fitted 54.5% of the consumers and the hedonic scale 51.8%. In general, the 9-point hedonic scale performed worse than the other scales. The results of the present study strongly suggest that the hybrid hedonic scale is a valid and efficient tool for collecting data in consumer studies, both when the data are analyzed through normal models for the analysis of variance and through the Internal Preference Mapping methodology / Abstract: In sensory tests, the basic statistical tool for analyzing data is almost invariably some sort of analysis of variance model. These models presuppose that the experimental responses are: i) independent, ii) normally distributed, iii) homoscedastic (have equal variances) and, iv) scores are on the same scale of measurement (additivity). The main problems arising from the analysis of sensory data using ANOVA models are related to the last two assumptions.
Homogeneity of error variance is not assured, especially as there are at least two potential sources of heterogeneity: treatments and assessors. On the other hand, additivity could be violated if one assessor used a consistently larger (or smaller) portion of the scale range, scoring more (or less) expansively than other assessors to express his opinion of the product. The individual way in which each panelist uses the scale to evaluate the products is known as the differential expansiveness of scoring between assessors. Both the lack of homogeneity of the variances and the non-additivity of the model result in serious consequences in obtaining a true level of significance for the effect of the treatments and may adversely affect the comparison of treatment means. The non-additivity can seriously affect and possibly invalidate the analysis of variance and the interpretation of the results that it provides. In consumer tests, traditional scales such as the nine-point hedonic scale frequently present the following problems: i) they do not satisfy the statistical assumptions of independence, normality and homoscedasticity required by ANOVA models; ii) they give little freedom to the individuals to express their perceptions, due to the limited number of categories; iii) they induce numerical and contextual effects in the judgments by the panelists and, iv) the differences between the numerical values associated with the categories do not reflect equivalent differences in perception. Of the methodologies used in sensory tests with consumers, the 9-point hedonic scale is undoubtedly the most widely used. However, considering the previously mentioned problems, there is a need to investigate alternative scales providing better performance than the traditional hedonic scale, both when the data are analyzed by ANOVA models and by multivariate methods such as the Internal Preference Map (MDPREF).
In general, the objective of this research was to investigate the performance of two alternative scales in consumer studies, these being the self-adjusting scale and the hybrid hedonic scale, comparing them with traditional affective methods such as the ranking scale and the 9-point hedonic scale. With this objective, three experiments were carried out as follows. The first experiment was carried out with the objective of evaluating the performance of the self-adjusting scale as compared to the 9-point hedonic scale and the ranking scale under real consumer test conditions, using the following criteria: i) differential expansiveness between assessors, ii) discriminating power and, iii) compliance of the data collected by each scale with the ANOVA assumptions. Three commercial brands of candy were evaluated by 288 consumers. The results obtained from the 9-point hedonic and self-adjusting scales were analyzed by ANOVA and those of the ranking test by Friedman's test. The values for pFsample, pFassessor and QMerror provided by ANOVA for each scale were used respectively to evaluate the discriminating power, the expansiveness of scoring between assessors and the data variability. Tukey's test was also applied to analyze the discriminating power of each scale. Data normality was checked using normal probability plots, the Kolmogorov-Smirnov test and the coefficients of skewness and kurtosis. Homoscedasticity was evaluated by scatter plots and the Levene test. The results showed that the self-adjusting scale was effective in dealing with differential assessor expansiveness and produced homogeneous variances; however, the residuals showed moderate deviations from normality. The 9-point hedonic scale showed problems with heteroscedasticity. The ranking and self-adjusting scales showed the highest and the lowest discriminating powers, respectively. Despite the problems detected, the three scales presented the same tendencies for preference amongst the products tested.
The second experiment was carried out with the objective of evaluating the performance of the hybrid hedonic scale in consumer studies, comparing it with the 9-point hedonic, the self-adjusting and the ranking scales, using the following criteria: i) variability of sensory response, ii) discriminative power, iii) data adequacy to the assumptions of ANOVA models and, iv) ease of use. Eighty consumers, divided into four groups of 20 individuals each, evaluated five brands of orange juice. All the individuals evaluated all the samples using all the scales, in 4 distinct tasting sessions. A 4 x 4 Latin square design was used to control the effect of the order of presentation of the scales and evaluate their ease of use without biases. For each scale the presentation order and carry-over were balanced. The results obtained using the traditional hedonic, hybrid hedonic and self-adjusting scales were evaluated using ANOVA. Data normality was evaluated using the Shapiro-Wilk test, homoscedasticity by the Brown-Forsythe test and additivity by Tukey's one-degree-of-freedom test for non-additivity. The values for pFsample, pFassessor and QMerror, provided by ANOVA for each scale, were used respectively to evaluate discriminating power, expansiveness between assessors and data variability. The REGWQ test was also applied to analyze the discriminative power of each scale. The results obtained from the ranking test were evaluated by Friedman's test and the ease of use of the scales by the Cochran-Mantel-Haenszel tests. The results indicated the superiority of the hybrid hedonic scale as compared to the structured hedonic and self-adjusting scales, both with respect to discriminative power and to data adequacy to the assumptions of normality and homoscedasticity. The self-adjusting scale presented a slightly greater discriminative power than the structured hedonic scale, despite the former having presented data with greater variability and lack of normality of the residuals.
Of all the methods, the ranking test presented the least discriminative power. The structured and hybrid hedonic scales were considered to be significantly (p≤0.01) easier to use than the self-adjusting scale, there being no difference (p≤0.05) between these first two scales. Finally, the objective of the third experiment was to evaluate the performance of the nine-point hedonic, hybrid hedonic and self-adjusting scales in the segmentation of samples and consumers using Internal Preference Mapping methodology. One hundred and twelve consumers evaluated the overall acceptability of 8 commercial brands of red wine, the majority being Cabernet Sauvignon. The effects of presentation order (scales and samples) and carry-over effects were balanced. The data were analyzed by ANOVA and MDPREF. Scale performance was evaluated using as criteria: the number of significant dimensions in the MDPREF (p≤0.05), the number of consumers significantly fitted (p≤0.05) and the degree of segmentation of the products and consumers. The results suggested a superiority of the hybrid scale over the traditional hedonic and self-adjusting scales. The MDPREF generated by the hybrid scale data produced the greatest number of significant dimensions (p≤0.05), yielding 79.5% of the consumers significantly fitted (p≤0.05), while the MDPREF generated by the self-adjusting scale fitted 54.5% of the consumers and that of the hedonic scale, 51.8%. Overall, the 9-point hedonic scale showed the worst performance in relation to the other scales examined. The results of this study strongly suggest that the hybrid hedonic scale is a valid and efficient tool for use in data collection associated with consumer studies, both when the data are analyzed by normal models for the analysis of variance and by Internal Preference Mapping methodology / Doctorate / Doctor in Food and Nutrition
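The ranking-scale data in these experiments were analyzed with Friedman's test, whose statistic depends only on the within-assessor rank sums. As a hedged illustration (the rank matrix below is invented, not the thesis data, and a real analysis must also handle ties and compare against the chi-square critical value), the computation can be sketched as:

```python
def friedman_statistic(ranks):
    """Friedman chi-square for an n x k matrix of within-assessor ranks 1..k."""
    n = len(ranks)     # number of assessors
    k = len(ranks[0])  # number of products ranked by each assessor
    rank_sums = [sum(row[j] for row in ranks) for j in range(k)]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

# Six assessors ranking three products (1 = most preferred); invented data.
ranks = [
    [1, 2, 3],
    [1, 3, 2],
    [2, 1, 3],
    [1, 2, 3],
    [1, 3, 2],
    [2, 1, 3],
]
chi2 = friedman_statistic(ranks)  # compared against a chi-square with k-1 df
```

Under the null hypothesis of no preference differences among the products, the statistic is approximately chi-square distributed with k-1 degrees of freedom.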
|
602 |
Análise estatística de polimorfismo molecular em sequências de DNA utilizando informações filogenéticas / Statistical analysis of molecular polymorphism in DNA sequences using phylogenetic information
Kiihl, Samara Flamini, 1980-. 25 February 2005 (has links)
Advisor: Hildete Prisco Pinheiro / Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Made available in DSpace on 2018-08-04T03:26:12Z (GMT). No. of bitstreams: 1
Kiihl_SamaraFlamini_M.pdf: 1063133 bytes, checksum: 4c4cce349b3bd30501a820e973a44fb2 (MD5)
Previous issue date: 2005 / Resumo: Genetic variation at the nucleotide level is a powerful source of information for studying the evolution of a population. Important aspects of the evolution of natural populations have been investigated using nucleotide sequences. The quantity θ = 4Nμ, where N is the effective population size and μ is the mutation rate per sequence (gene, locus) per generation, is an essential parameter because it determines the degree of polymorphism at a locus. The success of inference about the evolution of a population is measured by the accuracy of the estimation of this parameter. This master's dissertation presents several methods for estimating the parameter θ, as well as a comparison between them through simulations and applications to real data. Using phylogenetic information from samples of DNA sequences, a linear model is constructed in which the coefficient of the independent variable is the estimate of the parameter θ. It was verified that using the phylogenetic information in the data yields considerably more efficient estimators / Abstract: Genetic variation at the nucleotide level is a powerful source of information for studying the evolution of a population. Important aspects of the evolution of a population have been investigated by using nucleotide sequences. The quantity θ = 4Nμ, where N is the effective size of the population and μ is the mutation rate per sequence (gene, locus) per generation, is an essential parameter because it determines the degree of polymorphism at the locus. The degree of success in our inference about the evolution of a population is measured to some extent by the accuracy of estimation of this essential parameter. This work presents some methods of estimation of this parameter, comparisons between the different methods through computational simulations and applications to real data.
The evolution of a species can be seen through a phylogenetic tree, and a linear model can be constructed by using the phylogenetic information to estimate θ. It has been verified that the use of such information leads us to more accurate estimators of θ / Master's / Statistics / Master in Statistics
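The parameter θ = 4Nμ discussed above is classically estimated from the number of segregating sites. As a point of comparison for the estimators surveyed in the dissertation, here is a minimal sketch of Watterson's estimator; the toy alignment is invented, and this is not the phylogeny-based linear-model estimator proposed in the work:

```python
def segregating_sites(seqs):
    """Count alignment columns in which not all sequences agree."""
    return sum(1 for column in zip(*seqs) if len(set(column)) > 1)

def watterson_theta(seqs):
    """Watterson's estimator: theta_W = S / a_n, with a_n = sum_{i=1}^{n-1} 1/i."""
    n = len(seqs)
    a_n = sum(1.0 / i for i in range(1, n))
    return segregating_sites(seqs) / a_n

# Toy alignment of four sequences; exactly two columns are polymorphic.
seqs = [
    "ACGTACGTAC",
    "ACGTACGAAC",
    "ACGAACGTAC",
    "ACGTACGTAC",
]
theta = watterson_theta(seqs)  # 2 segregating sites / (1 + 1/2 + 1/3)
```

Dividing S by a_n corrects for the fact that larger samples reveal more polymorphic sites even when θ is unchanged.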
|
603 |
[en] FORECASTING HOURLY ELECTRICITY LOAD FOR LIGHT / [pt] MODELO DE PREVISÃO HORÁRIA DE CARGA ELÉTRICA PARA LIGHT
ANA PAULA BARBOSA SOBRAL, 09 November 2005 (has links)
[pt] This dissertation develops a short-term forecasting model for hourly loads using weather information. The model is built for the electricity company LIGHT. The proposed model combines different methodologies, namely: Neural Networks, Statistical Methods and Fuzzy Logic. First, the Kohonen Self-Organizing Map is used to identify the typical load curves, which are then included in a statistical forecasting model. To improve the performance of the model in terms of forecasting error, the effect of temperature on the load is added by means of Fuzzy Logic. Finally, a procedure based on some concepts of Fuzzy Logic is set up to identify the type of load curve of the day to be forecast. / [en] In this dissertation a new model for short-term forecasting of hourly loads using weather information is developed. The model was built for the electricity distribution utility LIGHT and combines different methodologies, namely: Neural Networks, Statistical Methods and Fuzzy Logic. First, the Kohonen Self-Organizing Map identifies the typical load curve profiles, which are then included in the statistical model. In order to improve the performance of the model in terms of forecasting error, the effect of temperature on the load is inserted by means of Fuzzy Logic. Finally, a procedure based on some concepts of Fuzzy Logic identifies the type of load curve of the day to be forecast.
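The first modelling step above, grouping days into typical load-curve profiles with a Kohonen Self-Organizing Map, can be sketched as follows. This is a hedged, minimal 1-D SOM run on invented 24-hour curves; the learning-rate and neighbourhood schedules are illustrative choices, not those of the dissertation:

```python
import math
import random

random.seed(0)

def train_som(curves, n_units=2, epochs=200, lr0=0.5):
    """Fit a 1-D SOM: each unit becomes a prototype (typical) curve."""
    dim = len(curves[0])
    units = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                          # decaying learning rate
        sigma = max(1e-3, n_units / 2 * (1 - t / epochs))    # shrinking neighbourhood
        for x in curves:
            # best-matching unit = nearest prototype in squared Euclidean distance
            bmu = min(range(n_units),
                      key=lambda j: sum((u - v) ** 2 for u, v in zip(units[j], x)))
            for j in range(n_units):
                h = math.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
                units[j] = [u + lr * h * (v - u) for u, v in zip(units[j], x)]
    return units

# Invented daily curves: two flat "off-peak" days and one evening-peaked day
# (24 hourly values each, on a normalized load scale).
flat = [[0.3] * 24, [0.32] * 24]
peaked = [[0.3 + 0.6 * math.exp(-(h - 18) ** 2 / 4.0) for h in range(24)]]
units = train_som(flat + peaked, n_units=2)
```

After training, each day can be assigned to its best-matching unit, and the unit prototypes play the role of the typical load curves fed to the statistical model.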
|
604 |
Anomaly detection in trajectory data for surveillance applications
Laxhammar, Rikard, January 2011 (has links)
Abnormal behaviour may indicate important objects and events in a wide variety of domains. One such domain is intelligence and surveillance, where there is a clear trend towards more and more advanced sensor systems producing huge amounts of trajectory data from moving objects, such as people, vehicles, vessels and aircraft. In the maritime domain, for example, abnormal vessel behaviour, such as unexpected stops, deviations from standard routes, speeding, traffic direction violations etc., may indicate threats and dangers related to smuggling, sea drunkenness, collisions, grounding, hijacking, piracy etc. Timely detection of these relatively infrequent events, which is critical for enabling proactive measures, requires constant analysis of all trajectories; this is typically a great challenge to human analysts due to information overload, fatigue and inattention. In the Baltic Sea, for example, there are typically 3000–4000 commercial vessels present that are monitored by only a few human analysts. Thus, there is a need for automated detection of abnormal trajectory patterns. In this thesis, we investigate algorithms appropriate for automated detection of anomalous trajectories in surveillance applications. We identify and discuss some key theoretical properties of such algorithms, which have not been fully addressed in previous work: sequential anomaly detection in incomplete trajectories, continuous learning based on new data requiring no or limited human feedback, a minimum of parameters and a low and well-calibrated false alarm rate. A number of algorithms based on statistical methods and nearest neighbour methods are proposed that address some or all of these key properties. In particular, a novel algorithm known as the Similarity-based Nearest Neighbour Conformal Anomaly Detector (SNN-CAD) is proposed. This algorithm is based on the theory of Conformal prediction and is unique in the sense that it addresses all of the key properties above. 
The proposed algorithms are evaluated on real world trajectory data sets, including vessel traffic data, which have been complemented with simulated anomalous data. The experiments demonstrate the type of anomalous behaviour that can be detected at a low overall alarm rate. Quantitative results for learning and classification performance of the algorithms are compared. In particular, results from reproduced experiments on public data sets show that SNN-CAD, combined with Hausdorff distance for measuring dissimilarity between trajectories, achieves excellent classification performance without any parameter tuning. It is concluded that SNN-CAD, due to its general and parameter-light design, is applicable in virtually any anomaly detection application. Directions for future work include investigating sensitivity to noisy data, and investigating long-term learning strategies, which address issues related to changing behaviour patterns and increasing size and complexity of training data.
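The nearest-neighbour conformal idea behind SNN-CAD can be sketched in a few lines. The following is a simplified illustration, not the thesis algorithm: it uses a single nearest neighbour as the nonconformity measure, the symmetric Hausdorff distance, and invented 2-D trajectories.

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two sequences of 2-D points."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(u, v):
        return max(min(d(p, q) for q in v) for p in u)
    return max(directed(a, b), directed(b, a))

def nn_score(traj, others):
    """Nonconformity score: distance to the nearest other trajectory."""
    return min(hausdorff(traj, o) for o in others)

def conformal_p_value(test_traj, training):
    scores = [nn_score(t, [u for u in training if u is not t]) for t in training]
    test_score = nn_score(test_traj, training)
    # fraction of (training + test) scores at least as nonconforming as the test one
    return (1 + sum(s >= test_score for s in scores)) / (len(scores) + 1)

# Normal traffic: four nearly identical straight eastbound tracks.
normal = [[(x, y) for x in range(5)] for y in (0.0, 0.1, 0.2, 0.3)]
# Test trajectory deviating sharply from the normal routes.
anomaly = [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8)]
p = conformal_p_value(anomaly, normal)  # low p-value = anomalous
```

One property worth noting: with n training trajectories the smallest attainable conformal p-value is 1/(n+1), which is one reason a well-calibrated false alarm rate requires enough training data.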
|
605 |
Évaluation des méthodes d'analyse de la fiabilité des matrices de phototransistors bipolaires en silicium pour des applications spatiales / Reliability investigations of bipolar silicon phototransistor arrays for space applications
Spezzigu, Piero, 03 December 2010 (has links)
[FR] This thesis is set in the context of a reliability evaluation of bipolar silicon phototransistor arrays for optical angular encoding applications in the space environment. After a state of the art on phototransistor technologies and a review of their physical operation, the specific environmental conditions of the space domain are described. The characterization of the electro-optical parameters of the phototransistors, combined with a preliminary metrology phase, was carried out on dedicated test benches. The study of the sensitivity of technologies from different foundries to mobile charges, usually trapped at the interfaces and identified as a mechanism that strongly penalizes operational lifetime, made it possible to optimize a new European source and improve its reliability. An original methodology based on the concept of "D-optimal" designs of experiments was implemented and validated. The objective is to estimate the degradation rate of one or more key parameters of the component as a function of the environmental conditions imposed by the satellite's orbit, from a limited number of experiments performed on the ground. / [EN] The research activities presented in this thesis are related to the specific context of the qualification tests, for space missions, of new sources of silicon phototransistor arrays for optical angular encoders. Our studies on a first source revealed the fragility of that technology under active storage and ionizing radiation because of its sensitivity to oxide-trapped charges. Then, a study on a second set of components was performed in order to analyze the reliability of phototransistors subjected to several constraints in terms of both ionizing and displacement doses. The methodology of "Design of Experiments" was for the first time implemented and validated in this context.
Thanks to this methodology, it is possible to obtain an estimate of the degradation of one or more key parameters of the component under environmental conditions for a given mission profile with a limited number of experiments. / [IT] This thesis is set in the particular context of evaluating the reliability of phototransistor arrays in bipolar silicon technology for optical angular encoding applications in the space environment. After a state of the art on phototransistor technologies, a brief description of the application for which the devices under study are intended, and finally a review of their physical operation, the specific environmental conditions of the space domain are described. The characterization of the electro-optical parameters of the phototransistors, combined with a preliminary metrology phase, was carried out on dedicated test benches. The study of the sensitivity of different phototransistor designs to the effects of mobile charges trapped at the oxide-silicon interfaces, usually identified as a mechanism that strongly penalizes the operational lifetime of the device, made it possible to optimize and increase the reliability of phototransistors from a new European manufacturer. An original methodology based on the concept of "D-optimal" designs of experiments was implemented and validated. The objective is to obtain the degradation rate of one or more key device parameters as a function of the environmental conditions imposed by the orbit in which the satellite will operate, from a limited number of experiments performed on the ground.
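The D-optimality criterion mentioned above selects the test conditions that maximize det(X'X) for an assumed regression model. A hedged toy sketch follows; the first-order model in dose and temperature and the candidate grid are invented for illustration, not the thesis's model of phototransistor degradation:

```python
from itertools import combinations

def xtx_det(rows):
    """det(X'X) for the model y = b0 + b1*dose + b2*temp over the given runs."""
    m = [[0.0] * 3 for _ in range(3)]
    for dose, temp in rows:
        x = (1.0, dose, temp)
        for i in range(3):
            for j in range(3):
                m[i][j] += x[i] * x[j]
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    # 3x3 determinant by cofactor expansion along the first row
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Candidate (ionizing dose, temperature) settings, coded to [-1, 1].
candidates = [(-1, -1), (-1, 1), (1, -1), (1, 1), (0, 0)]
# D-optimal 4-run design = the subset of runs maximizing det(X'X).
best = max(combinations(candidates, 4), key=xtx_det)
```

For a first-order model on this grid the four factorial corners beat any subset containing the centre point, which matches the classical result that two-level factorials are D-optimal for first-order models.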
|
606 |
Epidémiologie des traumatismes: quelles contributions des (méthodes) statistiques aux approches descriptive et analytique? / Injury epidemiology: what do statistics (statistical methods) contribute to the descriptive and analytical approaches?
Senterre, Christelle, 28 November 2014 (has links)
Field epidemiology can be defined as a set of methods for collecting and processing information that combines, in succession, the approaches of descriptive epidemiology and those of analytical epidemiology. The aim of the descriptive phase is to describe and quantify the occurrence of the phenomenon under study in a given population, thereby allowing hypotheses to be formulated ahead of the analytical phase, which focuses on the "associations" between "risk factors" and the occurrence of the phenomenon under study. In answering the questions raised in these two phases, statistical methods are indispensable tools. For the results produced by these analyses to be not only useful but also valid and usable, correct identification and adequate application of the analysis methods are paramount.

Alongside this methodological observation, in the field of injury there is, both in Belgium and in developing countries, an almost complete absence of relevant and rigorous information documenting the importance of this problem for health. Yet, according to the World Health Organization, more than 5 million people die from injuries every year, with 90% of these deaths occurring in low- or middle-income countries. In Europe, the data show that one person dies of an injury every two minutes and that, for every European citizen who dies of one, 25 people are admitted to hospital, 145 are treated as outpatients and even more seek care elsewhere.
In view of this twofold observation, namely that statistical methods are not always exploited correctly and that there is a lack of appropriate and rigorous information documenting the scale of the injury problem, the major objective of this thesis is to show the value of applying, in a relevant, adequate and complete manner, statistical methods (univariate, multivariable and multivariate) suited to the different available data sources, in order to document the importance of injuries, and of the factors associated with them, both in industrialized countries (the example of Belgium) and in developing countries (the example of Cameroon).

The part conventionally called "results" corresponds in this work to two distinct chapters. The first summarizes what the literature review established in terms of the data sources exploited and the statistical analysis methods used. The second corresponds to the analysis of four databases: a "general, population-based" one (First Health of Young People Survey, Cameroon), a "general, hospital-based" one (the Belgian Résumé Hospitalier Minimum), a "specific, population-based" one (data from Belgian insurance companies), and a "specific, hospital-based" one (the SOS Enfants unit of the CHU St Pierre, Belgium).

The major conclusions of this work are that the panel of "classical" statistical methods contains what is needed to answer "routine" surveillance questions in terms of occurrence and associated factors. The emphasis should be placed on a better (justified, correct and complete) use of these methods and on a better, more complete presentation of the results.
Proper use depends on better training in statistical methodology for practitioners, but also on the full integration of statisticians into research teams. As for the data sources used, the potential for information exists. Each source has its advantages and drawbacks, but used together they provide a more global picture of the injury burden. The emphasis should be placed on improving the availability and pooling of data, as well as the quality of the data that would be made available. Consequently, with a view to feeding into an "Injury Surveillance System", thought should be given to the overall use (whether linked or not) of these different data sources.

In Belgium, a great deal of data containing information on injuries is collected, routinely through hospital data and occasionally through survey data. At present these data, whose quality remains debatable for some sources, are under-used in the field of interest here. In the future, "rather than knowing nothing", it is important to continue exploiting what exists in order to produce and disseminate information, but this exploitation and dissemination must be accompanied not only by reflection but also by action on data quality. Regarding the use of statistical methods, we advocate a twofold approach: integration and training.
By integration, we mean that the statistician should be regarded both as a professional with advanced technical expertise in the methods, made available to guarantee the proper conduct of data collection and analysis, and as a researcher capable of taking a specific interest in public-health problems, such as the problem of injuries. By training, we mean that it is essential to build up and/or refine not only the knowledge of future (public) health professionals still in training, but also that of practitioners already active in the field, who are therefore the first actors in collecting information and using it for decision-making, priority-setting and evaluation.

The main objective of this thesis was to show the value of applying, in a relevant, appropriate and complete manner, statistical methods suited to the various available data sources, in order to document the burden of injuries and the factors associated with them. By discussing the existence of several potential data sources in Belgium and by applying a series of univariate, multivariable and multivariate statistical methods to some of them, we have shown that it is possible to document the injury burden through results that are useful, but also valid and usable, in a public-health approach. / Doctorat en Sciences de la santé publique / info:eu-repo/semantics/nonPublished
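The abstract names univariate, multivariable and multivariate methods without detailing them. The workhorse univariate tool in field epidemiology for documenting "associated factors" is the odds ratio from a 2x2 table; a minimal sketch with Woolf's confidence interval, using illustrative counts that are not data from the thesis:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-normal) confidence interval for a 2x2 table:
                  injured   not injured
        exposed      a          b
        unexposed    c          d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, e.g. an exposure versus injury occurrence
print(odds_ratio_ci(30, 70, 10, 90))
```

In practice, a multivariable model such as logistic regression would then adjust such a crude ratio for confounders.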
|
607 |
Spatial autocorrelation and the analysis of patterns resulting from crime occurrence
Ward, Gary J January 1978 (has links)
From Introduction: In geography during the 1950s there was a definite move away from the study of unique phenomena to the study of generalized phenomena or pattern (Mather and Openshaw, 1974). At the same time, interrelationships between phenomena distributed in space and time became the topic of much interest among geographers, as well as members of other disciplines. The changing emphasis initiated acceptance of certain scientific principles (Cole, 1973), and mathematical techniques became the recognized and respected means through which objective analysis of pattern, structure, and interrelationships between areally distributed phenomena could be achieved (Ackerman, 1972; Burton, 1972; Gould, 1973). Geographers, like members of other disciplines, frequently borrow mathematical techniques developed for problems encountered in the pure sciences and apply them to what are felt to be analogous situations in geography.
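The abstract gives no formulas, but the standard borrowed statistic for testing spatial autocorrelation in areal data such as crime counts is Moran's I; a minimal sketch, with a hypothetical four-district adjacency structure:

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I for a variable observed over n areal units.

    values  : length-n array (e.g. crime counts per district)
    weights : n x n spatial weight matrix, w[i, j] > 0 when units i
              and j are neighbours, with a zero diagonal
    """
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = x - x.mean()  # deviations from the mean
    return (x.size * (z @ w @ z)) / (w.sum() * (z @ z))

# Hypothetical: four districts in a row, rook (shared-edge) adjacency
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i([1, 2, 3, 4], w))  # similar neighbours: positive autocorrelation
print(morans_i([1, 4, 1, 4], w))  # alternating values: negative autocorrelation
```

Values of I above the expectation of -1/(n-1) indicate clustering of similar values; values below it indicate a dispersed, checkerboard-like pattern.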
|
608 |
Introduction de critères ergonomiques dans un système de génération automatique d’interfaces de supervision / Introduction of ergonomic criteria in an automatic generation system of supervision interfaces
Rechard, Julien 06 November 2015 (has links)
Ecological interface design comprises two steps: an analysis of the work domain and a transcription of the domain information into ecological representations (Naikar, 2010). This design approach has proved effective for the supervision of complex systems (Burns, 2008). However, Vicente (2002) pointed out two shortcomings: the very long design time and the difficulty of transcribing a work domain into ecological representations in a formalised way. Likewise, no formal tool exists for validating a work-domain model. In this manuscript, we propose several answers to the question: how can the design of an ecological interface be formalised so as to reduce the time and effort involved? The first proposal is a tool for verifying work-domain models based on simulation with the TMTA method (Morineau, 2010). The second, through a second version of the Anaxagore flow (Bignon, 2012), integrates the work of Liu et al. (2002) with the principle of a library of ecological widgets associated with a high-level input schema. Based on the work domain of a sanitary fresh-water system aboard a ship, an ecological interface was implemented and validated experimentally. This interface was compared with a conventional interface also generated by the Anaxagore flow. The results show that ecological interfaces promote a greater number of coherent paths through a work domain, and better diagnostic accuracy for the operators who use them.
|
609 |
The correction of skewness of a task performance measure
Ewinyu, Ayado 18 July 2013 (has links)
M.Comm. (Industrial Psychology) / Orientation: In theory, work-based identities predict employee performance at work; the rationale is that individuals apply their identities as they work. Little research is available on the exact nature of the relationship between work-based identity and task performance. Research purpose: The aim of this study is to investigate the relationship between work-based identity and task performance before and after correction of the negatively skewed task performance measure. Motivation for the study: This study sheds light on how to statistically correct negatively skewed task performance ratings. Limited literature currently exists on how to correct this skewness with the aim of understanding the correlation between work-based identity and task performance. Research design: The study used a secondary data analysis (SDA) approach within the quantitative research paradigm. It was performed on cross-sectional survey data (n = 2,429) collected from middle management and the management levels below it in a large South African Information and Communication Technologies (ICT) sector company (N = 23,134). The scales used were the Work-based Identity (WI-28) and Task Performance scales. Results: The results confirm a relationship between work-based identity and task performance both before and after logarithmic transformation of the negatively skewed task performance ratings, and indicate that the relationship remains unchanged after the transformation. Practical/Managerial implications: Employee behaviours affect general organisational outcomes. Managers should strive to design interventions that draw on employee strengths, such as work-based identity and skills, leading to improved work experiences.
Contribution/Value-Add: The study described in this article builds on the work-based identity literature by showing that this construct can be used to predict task performance. The study also provides evidence of how to statistically correct a negatively skewed task performance measure.
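A logarithmic transformation only compresses a right (positive) tail, so a negatively skewed measure is typically reflected about the top of its scale before taking logs. A minimal sketch of that reflect-and-log correction (the exact transformation used in the thesis is not specified here, and the ratings below are illustrative):

```python
import numpy as np

def skewness(x):
    """Fisher-Pearson coefficient of skewness (population form)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return np.mean(z ** 3) / x.std() ** 3

def reflect_log_transform(ratings, max_score):
    """Correct negative skew: reflect the scale so the long tail points
    right, take logs to compress it, then negate so that higher raw
    ratings still map to higher transformed scores."""
    reflected = (max_score + 1) - np.asarray(ratings, dtype=float)
    return -np.log(reflected)

# Illustrative negatively skewed ratings on a 1-5 scale (not thesis data)
ratings = np.array([5, 5, 5, 4, 5, 4, 5, 3, 5, 4, 5, 2, 5, 4, 5])
transformed = reflect_log_transform(ratings, max_score=5)
print(skewness(ratings), skewness(transformed))  # magnitude of skew is reduced
```

Because the transformation is strictly monotone, the rank order of the ratings, and hence rank-based correlations with other variables, is preserved.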
|
610 |
Keratometric variation during pregnancy and postpartum
Klaassen, Donald Gregory Istvan 27 August 2012 (has links)
M.Phil. / Keratometric readings were taken on three subjects both during pregnancy and postpartum. One subject was visually non-compensated and did not require refractive correction, one was a contact lens wearer, and one had undergone radial keratotomy. Twenty readings were taken with an automatic keratometer on each eye, morning and afternoon, every fortnight. The recent matrix method of optometric statistical analysis was employed, and the results were graphically compared and analysed. Findings indicate diurnal variation in corneal curvature, and variation in its variance, through the course of normal pregnancy. Most evident were an increase in keratometric variation in all three subjects at the time of birth and a substantial decrease in corneal refractive power in the subject who had previously undergone radial keratotomy. This result may have far-reaching implications for the long-term prognosis of refractive surgery, especially for females of child-bearing age. Outliers representing transient increases in curvature were most common in the vertical meridian (indicating possible lid interaction), while the presence of bimodal distributions suggests that the automatic keratometer is sensitive to changes in head posture.
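The "matrix method of optometric statistical analysis" refers to representing sphero-cylindrical powers as symmetric 2x2 dioptric power matrices, which, unlike sphere/cylinder/axis triples, can be legitimately averaged and subjected to variance analysis. A minimal sketch assuming the standard matrix form, with illustrative readings not taken from the thesis:

```python
import numpy as np

def power_matrix(sphere, cyl, axis_deg):
    """Sphero-cylindrical power (S, C, axis) as a symmetric 2x2 dioptric
    power matrix (Long's formulation); in this form, repeated readings
    can be averaged component by component."""
    a = np.radians(axis_deg)
    off = -cyl * np.sin(a) * np.cos(a)  # shared off-diagonal term
    return np.array([[sphere + cyl * np.sin(a) ** 2, off],
                     [off, sphere + cyl * np.cos(a) ** 2]])

# Repeated keratometric readings as (sphere, cylinder, axis) in dioptres;
# hypothetical values, not data from the thesis
readings = [(43.00, 1.00, 90), (43.25, 0.75, 85), (42.75, 1.25, 95)]
mean_matrix = np.mean([power_matrix(*r) for r in readings], axis=0)

# Principal powers of the averaged reading are the eigenvalues
principal_powers = np.linalg.eigvalsh(mean_matrix)
print(mean_matrix)
print(principal_powers)
```

The spread of the individual matrices around `mean_matrix` is what a keratometric-variation analysis would then quantify.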
|