  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Towards the improvement of food flavor analysis through the modelling of olfactometry data and expert knowledge integration / Innover dans l'analyse de la flaveur des aliments : développer une approche de modélisation associant résultats d’analyse et dires d’experts

Roche, Alice 25 October 2018 (has links)
Among the sensory dimensions involved in food flavor, the odor component is critical because it often determines the identity and the typicality of the food. Chemical flavor analysis provides a list of the odorants contained in a food product but is not sufficient to predict the odor resulting from their mixture. Indeed, odor perception relies on the processing by the olfactory system of many odorants embedded in complex mixtures, and several perceptual interactions can occur. Thus, predicting the perceptual outcome of a complex odor mixture remains challenging, and two main approaches emerge from the literature review. On the one hand, predictive approaches based on the molecular structure of odorants have been proposed but have been limited to single odorants only. On the other hand, methodologies relying on recombination strategies after the chemical analysis of flavor have been successfully applied to identify the odorants that are key to the food odor; however, the choice of odorants to be recombined is mostly empirical. Thus, two questions arise: How can we predict the odor quality of a mixture on the basis of the molecular structure of its odorants? How can we improve food flavor analysis in order to predict the odor of a food containing several tens of odorants? These two questions are at the basis of this thesis, whose manuscript is divided into two main axes. The first axis describes the development of a model based on the concept of angle distances computed from the molecular structure of odorants, used to predict the odor similarity between mixtures. The results highlight the importance of taking the odor intensity dimension into account to reach a good prediction level, and several perspectives are proposed to extend the model beyond the similarity dimension and predict more qualitative dimensions of odors. The second axis presents an innovative strategy for integrating experts' knowledge into the flavor analysis procedure. Three types of heterogeneous data are embedded in a mathematical model: chemical data, sensory data and knowledge from expert flavorists. The experts' knowledge is integrated through the development of an ontology, which is then used to define fuzzy rules optimized by evolutionary algorithms. The final output of the model is the prediction of the odor profiles of sixteen red wines on the basis of their odorant composition. Overall, this thesis brings original results allowing a better understanding of how food odors are constructed and gives insights into the underlying relationships within the odor perceptual space for complex mixtures.
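The angle-distance idea can be sketched as follows: each odorant is represented by a vector of physicochemical descriptors, a mixture is summarized by an (optionally intensity-weighted) sum of its odorant vectors, and the angle between two mixture vectors serves as a proxy for perceptual dissimilarity. The descriptor values, weights and function names below are illustrative assumptions, not the thesis's actual model:

```python
import math

def mixture_vector(odorants, weights=None):
    """Combine per-odorant descriptor vectors into one mixture vector.
    Weighting by perceived intensity is the refinement the thesis found
    important for prediction quality (the weights here are made up)."""
    n = len(odorants[0])
    weights = weights or [1.0] * len(odorants)
    return [sum(w * od[i] for od, w in zip(odorants, weights)) for i in range(n)]

def angle_distance(u, v):
    """Angle (radians) between two mixture vectors; a smaller angle is
    taken to predict higher perceptual similarity."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

# Two toy mixtures, each described by three physicochemical descriptors
mix_a = mixture_vector([[1.0, 0.2, 0.0], [0.3, 1.0, 0.5]], weights=[2.0, 1.0])
mix_b = mixture_vector([[0.9, 0.3, 0.1], [0.2, 1.1, 0.4]])
print(round(angle_distance(mix_a, mix_b), 3))
```

Identical compositions give an angle of zero; orthogonal descriptor profiles give π/2, the maximal predicted dissimilarity.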
22

Le politique du développement : les usages politiques des savoirs experts et de la participation des populations indiennes au Mexique / Development politics : the political uses of expert knowledge and participation of indigenous peoples in Mexico

Parizet, Raphaëlle 06 December 2013 (has links)
A buzzword spread by international agencies, the concept of "development with identity" refers to a willingness to take into account the local and cultural specificities of indigenous peoples. It entails promoting a development approach presented as both universal and apolitical. This thesis proposes to explore this contradiction. Focusing on the Mexican case, it aims to understand how development apparatuses function as instruments of knowledge, but also as performative instruments through their prescriptive inductions and the social uses made of them. Ultimately, "development with identity" refers to an "art of governing" populations labeled as socially disqualified. It relies on two key components: the elaboration of specific knowledge about these populations and the participation of indigenous individuals and groups in development apparatuses. This thesis offers a sociological contribution to the analysis of development and to work on indigenous issues. To study the circulation of development discourses, instruments and practices, it is based on a political ethnography of three spaces in which the apparatuses of indigenous development in Mexico are elaborated, formulated and put into practice: the Office of the United Nations Development Programme in Mexico, the national authority in charge of public development policy for indigenous peoples, and finally social groups in the Chiapas region in which development apparatuses are developed and implemented.
23

Melhor isso do que nada! Participação e responsabilização na gestão dos riscos no Pólo Petroquímico de Camaçari-BA / Better this than nothing! Participation and accountability in risk management at the Camaçari Petrochemical Complex, Bahia

Silva, Ana Licks Almeida January 2006 (has links)
This work analyzes the model adopted by the Responsible Care Program in the creation of Community Advisory Councils (CACs). The empirical focus is the Community Advisory Council of Camaçari, Bahia, the first adopted in the country and a reference for the implementation of others. CACs have been promoted by the chemical industry as a democratic, consensual and transparent tool whose objectives are to foster closeness and dialogue between industrial complexes and neighboring communities. They also allow an interaction between community perceptions and the actions of the chemical and petrochemical industries installed in Camaçari, seeking continuous improvement in the safety, health and environmental conditions associated with those industries' activities. Seventeen interviews, meeting records and participant observation were the main data sources, and their analysis points to three main characteristics of this instrument: the lack of autonomy of the community representatives, an emphasis on consensus, and the hegemony of technical-scientific discourse. The Council constitutes a sophisticated mechanism of domestication, docilization and responsibilization through the dissemination of a hegemonic organizational ideology and of neoliberal modes of governance. At the root of this process is power, shielded from the masses and concentrated in dominant hands, which precludes the participation and empowerment of popular segments. Consensus, regarded as a sign of civility, appears more as a rhetorical device than as a practice. Although discussions usually take place face to face, there is a permanent risk that their meanings will be distorted or concealed, since there is no explicit commitment to the autonomy of the members. The technical-scientific information on environmental health and on workers' health and safety comes from the companies; council members have no other sources of information beyond common sense. It is therefore very difficult to counter socially legitimized knowledge, suggesting that the celebrated consensus is built on omission and on the perpetuation of concentrated power.
24

Developing A Dialogue Based Knowledge Acquisition Method For Automatically Acquiring Expert Knowledge To Diagnose Mechanical Assemblies

Madhusudanan, N 12 1900 (has links) (PDF)
Mechanical assembly is an important step during product realization: an integrative process that brings together the parts of the assembly, the people performing the assembly and the various technologies involved. Assembly planning involves deciding on the assembly sequence, the tooling and the processes to be used, and should enable the actual assembly process to be as effective as possible. Assembly plans may have to be revised due to issues arising during assembly. Many of these revisions can be avoided at the planning stage if assembly planners have prior knowledge of these issues and how to resolve them. General guidelines to make assembly easier (e.g. Design for Assembly) are usually suited to mass-manufactured assemblies and are applied where similar issues are faced regularly. However, for very specific issues that are unique to particular domains, such as aircraft assembly, only expert knowledge in that domain can identify and resolve the issues. Assembly experts are the sources of knowledge for identifying and resolving these issues. If assembly planners could receive assembly experts' advice about the potential issues and resolutions likely to occur in a given assembly situation, they could use this advice to revise the assembly plan in order to avoid these issues. This link between assembly experts and planners can be provided using knowledge-based systems, which contain a knowledge base to store experts' knowledge and an inference engine that derives conclusions using this knowledge. However, knowledge acquisition for such systems is a difficult process with substantial resistance to being automated. Methods reported in the literature propose various ways of addressing the problem of automating knowledge acquisition, but they have many limitations, which motivated the research work reported in this thesis.
This thesis proposes a dialogue-like method of questioning an expert to automatically acquire knowledge from assembly experts. The questions are asked in the context of an assembly situation shown to the expert. During the interviews, the knowledge required for diagnosing potential issues and resolutions is identified: the experts were shown a situation and asked to identify issues and suggest solutions. This knowledge is translated into rules for a knowledge-based system, which can then be used to advise assembly planners about potential issues and solutions in an assembly situation. After a manual verification, the questioning procedure was implemented on computer as a software tool named EXpert Knowledge Acquisition and Validation (ExKAV). A preliminary evaluation of ExKAV has been carried out, in which assembly experts interacted with the tool using the researcher as an intermediary. The results of these sessions are discussed in the thesis and assessed against the original research objectives. The current limitations of the procedure and its implementation are highlighted, and potential directions for improving the knowledge acquisition process are discussed.
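The advisory loop described above (rules acquired from experts, then fired against a new assembly situation) can be sketched as a minimal forward-chaining knowledge base. The rules, attribute names and thresholds here are invented placeholders, not content of ExKAV's actual knowledge base:

```python
# Each rule maps conditions on an assembly situation to a potential issue
# and a suggested resolution. These rules are fabricated examples of the
# kind of diagnostic knowledge an expert interview might yield.
RULES = [
    ({"clearance_mm": lambda v: v < 0.5,
      "tool_access": lambda v: v == "restricted"},
     ("fastener hard to reach", "use an offset tool or re-sequence the assembly")),
    ({"part_weight_kg": lambda v: v > 20},
     ("manual lifting risk", "plan a hoist or a two-person lift")),
]

def advise(situation):
    """Forward-chaining pass: fire every rule whose conditions all hold
    for the given situation (a dict of attribute -> value)."""
    advice = []
    for conditions, outcome in RULES:
        if all(k in situation and pred(situation[k]) for k, pred in conditions.items()):
            advice.append(outcome)
    return advice

print(advise({"clearance_mm": 0.3, "tool_access": "restricted", "part_weight_kg": 25}))
```

A planner would query such a base with the attributes of a planned assembly step and revise the plan for every issue returned.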
25

Analyse du risque de mildiou de la vigne dans le Bordelais à partir de données régionales et d’informations locales collectées en cours de saison / Grape downy mildew risk analysis in Bordeaux vineyards based on regional survey data and local expert knowledge analysis

Chen, Mathilde 12 December 2019 (has links)
Pesticides reduce yield losses but have negative environmental consequences. It is important to provide precise information on the epidemic risks posed by harmful organisms in order to reason the use of pesticides, in particular for grape downy mildew, which accounts on average for 43% of the pesticide treatments used in Bordeaux vineyards. The objective of this work is to estimate the benefit of using the downy mildew onset date to avoid unjustified sprays in the control of this disease. Based on regional observations and local expertise, we show that in Bordeaux the first treatments are applied on average four weeks before the first symptoms appear. We show that postponing the date of the first downy mildew spray to disease onset reduces fungicide use by an average of 56% compared with current practices in this region. For operators, our results show that combining this strategy with personal protective equipment reduces exposure by more than 70%. Using machine learning methods, we also show that the precocity and severity of downy mildew epidemics are strongly linked. Our model predictions can be used to trigger disease treatments only in high-risk cases, resulting in a reduction of more than 50% in anti-mildew treatments compared with current practices. Our results and methods are discussed and compared with other ways of reducing pesticide use in viticulture.
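The risk-triggered spraying strategy can be illustrated with a toy decision rule: a model outputs a weekly epidemic risk score, and a spray is applied only when the score crosses a threshold. The scores and threshold below are made up for illustration; they are not the thesis's fitted model:

```python
def spray_decisions(risk_scores, threshold=0.6):
    """Trigger an anti-mildew spray only when the predicted epidemic risk
    reaches the threshold (the threshold value is illustrative)."""
    return [r >= threshold for r in risk_scores]

# Weekly risk scores for a toy season: a calendar-based practice would
# spray every week; risk-based triggering sprays only on high-risk weeks.
season = [0.1, 0.2, 0.7, 0.9, 0.4, 0.8, 0.3, 0.2]
decisions = spray_decisions(season)
reduction = 1 - sum(decisions) / len(decisions)
print(f"sprays avoided: {reduction:.0%}")
```

Here three of eight weekly sprays are kept, a reduction of the same order as the 50%+ figures reported above, though the real gain depends entirely on the quality of the risk model.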
26

Immobilienbewertung in Märkten mit geringen Transaktionen – Möglichkeiten statistischer Auswertungen / Real estate valuation in markets with few transactions – possibilities of statistical analysis

Soot, Matthias 28 July 2021 (has links)
In Germany, market transparency is provided by the experts' committees, through the publication of market reports and market values, and by various private players in the real estate market. In sub-markets with few transactions, market transparency is a challenge because not enough data is available to analyse the respective markets. These markets therefore require a more in-depth investigation to achieve sufficient transparency, and the diversity of sub-markets with low transaction numbers must be considered in a differentiated way. This work first examines differences in the characteristics of markets with few transactions. A theory for the systematisation of these markets is formed using a qualitative investigation of guideline interviews and the literature on the topic; differentiated for individual markets, a suitable evaluation strategy can then be developed using the proposed structuring. Subsequently, data already used in real estate valuation is analysed to investigate its usability for regions with few transactions. Purchase cases that are recorded incompletely are currently excluded from evaluations entirely (case-wise exclusion), although often only one or two pieces of information needed for a multivariate analysis are missing per case. It is examined whether, and with which methods, these data gaps can be filled suitably. Besides case-wise exclusion (the current default), mean-value imputation as well as the filling of data gaps using expectation-maximization and random-forest regression are investigated. Furthermore, expert knowledge, which can be expressed in different forms (surveys, offer prices, expert reports), is examined. First, expert knowledge in general is studied within the framework of a quantitative survey to uncover patterns of action and differences between experts from different groups. Subsequently, intersubjective expert and layman surveys are evaluated in the context of real estate valuation, and offer prices, marketed with or without real estate agents, are compared with the realised purchase prices. Since additional data such as offer data or expert surveys is not available in some sub-markets or can only be generated at great expense, alternative approaches are necessary. For this purpose, two methods are tested for their suitability for spatially aggregated data, in comparison with the multiple linear regression analysis established in practice: geographically weighted regression, which represents local markets more accurately, and artificial neural networks, which are better suited to representing non-linearities.
The results show that a systematisation of markets with few transactions is possible: a sensible structuring is based on the population of the respective functional/spatial sub-market, and a differentiation between rural and urban areas is also possible. With imputation methods, the results of regression analyses can be improved significantly. Even with many data gaps across different parameters, an evaluation can still provide results of a quality comparable to an analysis of complete purchase cases, and even the simple method of mean-value imputation achieves good results. Experts in the field of real estate valuation have a wide variety of professional backgrounds, but no substantial systematic differences can be identified in their working methods; differing behaviour shows up only in the use of different data sources. Expert surveys generally show a high degree of dispersion, which is reduced when the surveys are restricted, for example by a given scale or by suggested values; further investigations are necessary here. The discounts between offer prices and purchase prices, as well as the adjustment of offer prices during the marketing period, also show a high degree of dispersion, and no significant difference between marketing with or without an agent can be proven in the examined sample. Both geographically weighted regression (GWR) and artificial neural networks (ANN) offer an advantage when evaluating spatially aggregated data in cross-validation, which suggests that the markets are both spatially inhomogeneous and non-linear. A combination of the geographic component with non-parametric approaches such as the ANN learning procedure appears promising.
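Mean-value imputation, the simplest of the gap-filling methods compared above, can be sketched as follows. The purchase-case attributes and values are fabricated examples, not data from the thesis:

```python
def mean_impute(records):
    """Fill missing (None) attribute values with the column mean,
    so incomplete purchase cases need not be excluded case-wise."""
    keys = records[0].keys()
    means = {}
    for k in keys:
        observed = [r[k] for r in records if r[k] is not None]
        means[k] = sum(observed) / len(observed)
    return [{k: (r[k] if r[k] is not None else means[k]) for k in keys}
            for r in records]

# Toy purchase cases: case-wise exclusion would drop two of the four
# cases; imputation keeps all four usable for a multivariate analysis.
cases = [
    {"area_m2": 120, "year": 1990, "price": 250_000},
    {"area_m2": None, "year": 2005, "price": 310_000},
    {"area_m2": 95, "year": None, "price": 220_000},
    {"area_m2": 140, "year": 2010, "price": 380_000},
]
complete = mean_impute(cases)
print(complete[1]["area_m2"])
```

Mean imputation ignores correlations between attributes; the expectation-maximization and random-forest variants examined in the thesis exploit those correlations, which is why they can outperform this baseline.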
27

Sur le pronostic des systèmes stochastiques / On the prognosis of stochastic systems

Ouladsine, Radouane 09 December 2013 (has links)
This thesis focuses on the problem of system prognosis. More precisely, it is dedicated to stochastic systems, and two main contributions are proposed. The first concerns prognosis based on expert knowledge; the proposed approach consists in assessing system availability during a mission. The mission models the user profile, which expresses the environment in which the system will evolve, and this profile is assumed to be given through partial knowledge provided by an expert: because of the complexity of the systems under consideration, the expert can provide only incomplete information. The aim of this contribution is to estimate the system's damage trajectory and analyse the success of the mission. A three-step methodology is proposed: the first step consists of estimating the probability distribution of the environment, using a probabilistic method based on the principle of Maximum Relative Entropy (MRE); the second step constructs the damage trajectory through stochastic accumulation, performed with a Markov chain Monte Carlo (MCMC) simulation; finally, the success of the mission is predicted. Note that, in order to be more realistic, the models describing the damage behaviour are assumed to be stochastic. The second contribution concerns model-based prognosis, more precisely the use of Bayesian filtering for the prognosis problem. Here, only the structure of the degradation function is assumed known, and this structure depends on the dynamics of an unknown parameter. The aim is to identify the damage parameter using an Ensemble Kalman Filter (EnKF) and then to estimate the remaining useful life (RUL) from the damage propagation. The strategy combines two Kalman filters: the first estimates the unknown parameter, and, using the estimated value, the second ensures convergence of the degradation estimate. A series of examples is treated to illustrate our contributions.
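A much-simplified sketch of the combined-filter idea, assuming a scalar degradation model x_{k+1} = x_k + theta + noise with unknown rate theta: one scalar Kalman filter estimates theta from the observed increments, and a second tracks the damage level using the current rate estimate. The noise levels and measurement construction are illustrative assumptions, not the thesis's formulation:

```python
import random

random.seed(1)

# Toy degradation model: damage grows by an unknown rate theta each step
# and is observed with noise. Two coupled scalar Kalman filters are run:
# one for the rate, one for the damage level.
true_theta = 0.05
meas_std, proc_std = 0.01, 0.001

def kf_update(x, p, z, r):
    """Scalar Kalman update of estimate x (variance p) with measurement z
    of variance r."""
    k = p / (p + r)
    return x + k * (z - x), (1 - k) * p

theta_hat, p_theta = 0.0, 1.0   # filter 1: unknown degradation rate
x_hat, p_x = 0.0, 1.0           # filter 2: damage level
damage, y_prev = 0.0, 0.0
for _ in range(300):
    damage += true_theta + random.gauss(0, proc_std)
    y = damage + random.gauss(0, meas_std)
    # filter 1: the step-to-step increment is a noisy measurement of theta
    theta_hat, p_theta = kf_update(theta_hat, p_theta + 1e-6,
                                   y - y_prev, 2 * meas_std ** 2)
    # filter 2: predict with the estimated rate, then correct with y
    x_hat, p_x = kf_update(x_hat + theta_hat, p_x + proc_std ** 2,
                           y, meas_std ** 2)
    y_prev = y
print(round(theta_hat, 3), round(x_hat, 1))
```

Once theta is identified, the RUL estimate follows by propagating the degradation model forward until a failure threshold is reached.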
28

Integration of Auxiliary Data Knowledge in Prototype Based Vector Quantization and Classification Models

Kaden, Marika 23 May 2016 (has links)
This thesis deals with the integration of auxiliary data knowledge into machine learning methods, especially prototype-based classification models. The problem of classification is diverse, and evaluating the result by accuracy alone is not adequate in many applications. Therefore, the classification tasks are analysed more deeply, and possibilities to extend prototype-based methods to integrate extra knowledge about the data or the classification goal are presented, in order to obtain problem-adequate models. One of the proposed extensions is a Generalized Learning Vector Quantization for the direct optimization of statistical measures besides the classification accuracy. Modifying the metric adaptation of Generalized Learning Vector Quantization for functional data, i.e. data with lateral dependencies in the features, is also considered.

Contents:
Symbols and Abbreviations
1 Introduction
  1.1 Motivation and Problem Description
  1.2 Utilized Data Sets
2 Prototype Based Methods
  2.1 Unsupervised Vector Quantization
    2.1.1 C-means
    2.1.2 Self-Organizing Map
    2.1.3 Neural Gas
    2.1.4 Common Generalizations
  2.2 Supervised Vector Quantization
    2.2.1 The Family of Learning Vector Quantizers - LVQ
    2.2.2 Generalized Learning Vector Quantization
  2.3 Semi-Supervised Vector Quantization
    2.3.1 Learning Associations by Self-Organization
    2.3.2 Fuzzy Labeled Self-Organizing Map
    2.3.3 Fuzzy Labeled Neural Gas
  2.4 Dissimilarity Measures
    2.4.1 Differentiable Kernels in Generalized LVQ
    2.4.2 Dissimilarity Adaptation for Performance Improvement
3 Deeper Insights into Classification Problems - From the Perspective of Generalized LVQ
  3.1 Classification Models
  3.2 The Classification Task
  3.3 Evaluation of Classification Results
  3.4 The Classification Task as an Ill-Posed Problem
4 Auxiliary Structure Information and Appropriate Dissimilarity Adaptation in Prototype Based Methods
  4.1 Supervised Vector Quantization for Functional Data
    4.1.1 Functional Relevance/Matrix LVQ
    4.1.2 Enhancement Generalized Relevance/Matrix LVQ
  4.2 Fuzzy Information About the Labels
    4.2.1 Fuzzy Semi-Supervised Self-Organizing Maps
    4.2.2 Fuzzy Semi-Supervised Neural Gas
5 Variants of Classification Costs and Class Sensitive Learning
  5.1 Border Sensitive Learning in Generalized LVQ
    5.1.1 Border Sensitivity by Additive Penalty Function
    5.1.2 Border Sensitivity by Parameterized Transfer Function
  5.2 Optimizing Different Validation Measures by the Generalized LVQ
    5.2.1 Attention Based Learning Strategy
    5.2.2 Optimizing Statistical Validation Measurements for Binary Class Problems in the GLVQ
  5.3 Integration of Structural Knowledge about the Labeling in Fuzzy Supervised Neural Gas
6 Conclusion and Future Work
My Publications
A Appendix
  A.1 Stochastic Gradient Descent (SGD)
  A.2 Support Vector Machine
  A.3 Fuzzy Supervised Neural Gas Algorithm Solved by SGD
Bibliography
Acknowledgements
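Generalized Learning Vector Quantization, the model the thesis extends, classifies by nearest prototype and trains by attracting the closest correct prototype and repelling the closest incorrect one according to the cost mu = (d+ - d-)/(d+ + d-). The following is a minimal sketch of plain GLVQ on toy data, assuming squared Euclidean distance and no transfer function or metric adaptation; it illustrates the base algorithm only, not the extensions proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def glvq_fit(X, y, n_protos_per_class=1, lr=0.05, epochs=30):
    """Minimal Generalized LVQ (GLVQ) with squared Euclidean distance.

    Prototypes start at class means plus noise; each update attracts
    the closest correct prototype and repels the closest incorrect one,
    following the GLVQ cost mu = (d+ - d-)/(d+ + d-).
    """
    classes = np.unique(y)
    protos, labels = [], []
    for c in classes:
        for _ in range(n_protos_per_class):
            protos.append(X[y == c].mean(axis=0)
                          + 0.01 * rng.standard_normal(X.shape[1]))
            labels.append(c)
    W, wl = np.array(protos), np.array(labels)

    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = ((W - X[i]) ** 2).sum(axis=1)
            same, diff = wl == y[i], wl != y[i]
            jp = np.flatnonzero(same)[d[same].argmin()]  # closest correct
            jm = np.flatnonzero(diff)[d[diff].argmin()]  # closest incorrect
            dp, dm = d[jp], d[jm]
            denom = (dp + dm) ** 2 + 1e-12
            W[jp] += lr * (dm / denom) * (X[i] - W[jp])  # attract
            W[jm] -= lr * (dp / denom) * (X[i] - W[jm])  # repel
    return W, wl

def glvq_predict(W, wl, X):
    return wl[((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)]

# Toy two-class data: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W, wl = glvq_fit(X, y)
acc = (glvq_predict(W, wl, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The thesis's extensions modify this core loop: optimising statistical validation measures instead of plain accuracy, and replacing the Euclidean distance with adapted dissimilarities for functional data.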
29

Současná legitimizační ekologie ve vzdělávání: pozice, vědění a kritika / Contemporary Educational Legitimation Ecology: positions, knowledge, and critique

Wirthová, Jitka January 2021 (has links)
This dissertation focuses on the current Czech space of legitimation practices in education as a variable sphere of justification and critique of educational goals, rooted in global transformations of educational institutions, the autonomies of nation states and transnational comparative data. Since the debate on educational reform (2004), the Czech legitimation ecology in education has been diversified by different types of knowledge and actors (state, non-profit, private sector). In this work, I argue that legitimation as a critical action is today, through various processes (knowledge regimes, patterns of actorship), detached from traditional jurisdictions (state and professional structures) and moving to more flexible structures, which I call topologies. In the jurisdictions, mostly passive audiences remain. New legitimation topologies connect values and data and in many ways replace dysfunctional state structures, using specific disconnections, but they also question the public nature of negotiating educational goals. Based on relational ontology and sociological topological studies, and through a qualitative relational analysis of legitimation practices in three fields (published normative documents, public debates and semi-structured interviews with state and non-state actors), I show in the period...
30

Approche bayésienne de l'évaluation de l'incertitude de mesure : application aux comparaisons interlaboratoires / Bayesian approach to the evaluation of measurement uncertainty: application to interlaboratory comparisons

Demeyer, Séverine 04 March 2011 (has links)
Structural equation modelling is widely used in a great variety of domains, and we apply it here for the first time in metrology, to the treatment of interlaboratory comparison data. Structural equation models with latent variables are multivariate models used to model causal relationships between observed variables (the data). The model applies when the data can be grouped into disjoint blocks, where each block defines a concept modelled by a latent variable; the correlation structure of the observed variables is thus summarised in the correlation structure of the latent variables. We propose a Bayesian approach to structural equation models centred on the analysis of the correlation matrix of the latent variables. We apply a parameter expansion to this correlation matrix in order to overcome the scale indeterminacy of the latent variables and to improve the convergence of the Gibbs algorithm used. The power of the structural approach allows us to propose a rich and flexible modelling of measurement biases, which enriches the computation of the consensus value and of its associated uncertainty in a fully Bayesian framework. Under certain hypotheses, the approach makes it possible, in a novel way, to compute the contributions of the bias variables to the laboratories' biases. More generally, we propose a Bayesian framework for improving the quality of measurements. We illustrate and demonstrate the value of a structural modelling of measurement biases on interlaboratory comparisons in the environmental field. / Structural equation modelling is a widespread approach in a variety of domains and is applied here for the first time to interlaboratory comparisons in metrology.
Structural equation models with latent variables (SEM) are multivariate models used to model causal relationships between observed variables (the data). It is assumed that the data can be grouped into separate blocks, each describing a latent concept modelled by a latent variable; the correlation structure of the observed variables is transferred into the correlation structure of the latent variables. A Bayesian approach to SEM is proposed, based on the analysis of the correlation matrix of the latent variables and using parameter expansion to overcome identifiability issues and improve the convergence of the Gibbs sampler. SEM is used as a powerful and flexible tool to model measurement bias, with the aim of improving the reliability of the consensus value and its associated uncertainty in a fully Bayesian framework. Under additional hypotheses, the approach also allows the contributions of the observed variables to the laboratories' biases to be computed. More generally, a global Bayesian framework is proposed to improve the quality of measurements. The approach is illustrated on the structural equation modelling of measurement bias in interlaboratory comparisons in the environmental field.
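At the heart of this kind of analysis is the Bayesian estimation of a consensus value and its uncertainty from laboratory results with individual biases. The sketch below shows a Gibbs sampler for a deliberately reduced random-effects stand-in (x_i = mu + b_i + e_i with known lab uncertainties u_i), assuming a flat prior on mu and an inverse-gamma prior on the between-lab variance; it is not the latent-variable SEM of the thesis, and all data and priors are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def consensus_gibbs(x, u, n_iter=5000, burn=1000, a0=1.0, b0=0.01):
    """Gibbs sampler for a simple Bayesian random-effects model of an
    interlaboratory comparison: x_i = mu + b_i + e_i, with known
    laboratory uncertainties u_i, lab biases b_i ~ N(0, tau^2), a flat
    prior on the consensus value mu and an inverse-gamma(a0, b0) prior
    on tau^2.
    """
    n = len(x)
    mu, tau2, b = x.mean(), x.var() + 1e-6, np.zeros(n)
    mus = []
    for it in range(n_iter):
        # Lab biases given mu and tau^2 (conjugate normal update).
        prec = 1.0 / u**2 + 1.0 / tau2
        b = rng.normal((x - mu) / u**2 / prec, 1.0 / np.sqrt(prec))
        # Consensus value given the biases (flat prior -> weighted mean).
        w = 1.0 / u**2
        mu = rng.normal(np.sum(w * (x - b)) / w.sum(), 1.0 / np.sqrt(w.sum()))
        # Between-lab variance given the biases (conjugate inverse-gamma).
        tau2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * np.sum(b**2)))
        if it >= burn:
            mus.append(mu)
    mus = np.array(mus)
    return mus.mean(), mus.std()

# Hypothetical comparison: five laboratories measuring the same quantity.
x = np.array([10.1, 9.8, 10.3, 10.0, 9.9])
u = np.array([0.1, 0.2, 0.15, 0.1, 0.1])
mu_hat, mu_unc = consensus_gibbs(x, u)
print(f"consensus value: {mu_hat:.3f} +/- {mu_unc:.3f}")
```

The SEM approach of the thesis generalises this by letting the bias terms be driven by latent variables with their own correlation structure, which is what allows the bias contributions to be decomposed.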
